
Native Compilation

Compile ARO applications to native binaries via LLVM IR. Deploy standalone executables with fast startup, low memory footprint, and full access to runtime services.

Two Modes: Development and Production

Use aro run for rapid development with the interpreter. Use aro build for production-ready native binaries.

Compilation Pipeline

ARO compiles through LLVM for maximum performance and portability:

.aro files → Parser → AST → LLVM IR (.ll) → Object (.o) → Native Binary
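
aro build drives every stage of this pipeline. As a rough illustration, once the compiler has emitted LLVM IR the remaining steps correspond to standard LLVM tooling; the commands below are only a sketch, and the file names mirror the output layout described later on this page.

# Illustration only -- aro build performs the equivalent of these steps internally
llc MyApp.ll -filetype=obj -o MyApp.o      # lower LLVM IR to an object file
clang MyApp.o libAROCRuntime.a -o MyApp    # link with the runtime bridge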

Basic Usage

$ aro build ./MyApp
Compiling main.aro...
Compiling users.aro...
Generating LLVM IR...
Linking with runtime...
Build complete: ./MyApp/MyApp

The result is a standalone executable that can be deployed without the ARO interpreter.

Build Options

# Basic build
aro build ./MyApp

# Enable optimizations
aro build ./MyApp --optimize

# Emit LLVM IR for inspection
aro build ./MyApp --emit-llvm

# Verbose output
aro build ./MyApp --verbose

# Keep intermediate files (.ll, .o)
aro build ./MyApp --keep-intermediate

Benefits

Fast Startup

No interpreter initialization. Your app starts immediately.

Lower Memory

No interpreter runtime in memory. Just your code and the runtime bridge.

Single Binary

Deploy one executable file. No dependencies to manage.

Cross-Platform

LLVM targets macOS (arm64, x86_64) and Linux (x86_64, arm64).

How It Works

1. LLVM Code Generation

Each feature set becomes an LLVM function. ARO statements become calls to runtime bridge functions:

; Feature Set: Application-Start
define ptr @aro_fs_application_start(ptr %ctx) {
entry:
  ; <Create> the <greeting> with "Hello, World!"
  call void @aro_variable_bind_string(ptr %ctx, ...)
  %result = call ptr @aro_action_create(ptr %ctx, ...)

  ; <Log> the <greeting> for the <console>
  call ptr @aro_action_log(ptr %ctx, ...)

  ; <Return> an <OK: status>
  call ptr @aro_action_return(ptr %ctx, ...)

  ret ptr %result
}

2. Runtime Bridge

The compiled binary links against libAROCRuntime.a, a Swift static library that provides all runtime services.
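
As a sketch, the C-level interface the library exposes might look like the declarations below. The function names are taken from the generated IR shown above; the parameter lists beyond the runtime and context handles are assumptions made for illustration.

/* Sketch of the bridge interface, inferred from the generated LLVM IR.
   Parameter lists beyond the runtime/context handles are assumptions. */

void *aro_runtime_init(void);
void  aro_runtime_shutdown(void *runtime);
void *aro_context_create_named(void *runtime, const char *name);

void  aro_variable_bind_string(void *ctx, const char *name, const char *value);
void *aro_action_create(void *ctx, const char *variable);
void *aro_action_log(void *ctx, const char *variable);
void *aro_action_return(void *ctx, const char *status);

/* One function is generated per feature set, for example: */
void *aro_fs_application_start(void *ctx);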

3. Entry Point Generation

A main() entry point is generated automatically for the Application-Start feature set:

define i32 @main(i32 %argc, ptr %argv) {
entry:
  %runtime = call ptr @aro_runtime_init()
  %ctx = call ptr @aro_context_create_named(%runtime, ...)
  %result = call ptr @aro_fs_application_start(%ctx)
  call void @aro_runtime_shutdown(%runtime)
  ret i32 0
}
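
In C terms, the generated entry point behaves roughly like the snippet below. The bridge signatures and the "Application-Start" context name are illustrative assumptions; the IR above elides the actual arguments.

/* Rough C equivalent of the generated main() above. The context name
   is a hypothetical stand-in for the arguments elided in the IR. */

void *aro_runtime_init(void);
void  aro_runtime_shutdown(void *runtime);
void *aro_context_create_named(void *runtime, const char *name);
void *aro_fs_application_start(void *ctx);

int main(int argc, char **argv) {
    (void)argc; (void)argv;                 /* unused, as in the IR */
    void *runtime = aro_runtime_init();
    void *ctx = aro_context_create_named(runtime, "Application-Start");
    void *result = aro_fs_application_start(ctx);
    (void)result;                           /* the IR also ignores %result */
    aro_runtime_shutdown(runtime);
    return 0;
}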

Output Structure

MyApp/
├── main.aro           # Source files
├── users.aro
├── openapi.yaml
├── MyApp              # Final executable
└── .build/
    ├── MyApp.ll       # LLVM IR (if --keep-intermediate)
    └── MyApp.o        # Object file (if --keep-intermediate)

Platform Support

Native binaries can be built for macOS (arm64, x86_64) and Linux (x86_64, arm64), the LLVM targets listed under Benefits above.

Requirements

Development vs Production

# Development: Use the interpreter for fast iteration
aro run ./MyApp

# Production: Compile to native binary
aro build ./MyApp --optimize
./MyApp/MyApp

Both modes produce identical behavior. The interpreter is perfect for development, while native binaries are ideal for deployment.

Learn More

See the full native compilation specification in ARO-0026: Native Compilation.