
What is a21e?

a21e turns natural-language instructions into expert AI outputs. Instead of writing prompts, you describe what you want — a21e selects the best techniques, compiles an optimized prompt, routes it to the right model, and returns production-quality results.
"Write a database migration to add a soft-delete column to the users table"
Behind the scenes, a21e:
  1. Parses your intent — understands what you’re asking for, the constraints, and the success criteria
  2. Selects techniques — matches your intent to curated prompt strategies from the prompt library
  3. Compiles a prompt — assembles a structured prompt with your context, preferences, and memory
  4. Routes to the best model — picks the optimal LLM for the task (or deliberates across multiple models)
  5. Returns results — delivers the output with quality scoring and feedback capture
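
The five stages above can be sketched as a simple chain of functions. This is an illustrative sketch only; all function names, return shapes, and values here are hypothetical and do not mirror a21e's actual internals:

```python
# Hypothetical sketch of the a21e pipeline stages described above.
# Every name and value here is illustrative, not a21e's real implementation.

def parse_intent(text: str) -> dict:
    # Stage 1: extract the task, constraints, and success criteria.
    return {"task": text, "constraints": [], "success_criteria": []}

def select_techniques(intent: dict) -> list:
    # Stage 2: match the intent to curated prompt strategies.
    return ["step-by-step", "output-format-spec"]

def compile_prompt(intent: dict, techniques: list) -> str:
    # Stage 3: assemble a structured prompt from intent + techniques.
    return f"Techniques: {', '.join(techniques)}\nTask: {intent['task']}"

def route_model(prompt: str) -> str:
    # Stage 4: pick the model best suited to the compiled prompt.
    return "model-a"

def execute(prompt: str, model: str) -> dict:
    # Stage 5: run the prompt and return output with quality metadata.
    return {"output": f"[{model}] result", "model": model, "quality": 0.9}

def run(text: str) -> dict:
    intent = parse_intent(text)
    techniques = select_techniques(intent)
    prompt = compile_prompt(intent, techniques)
    model = route_model(prompt)
    return execute(prompt, model)
```

The point of the sketch is the ordering: intent parsing feeds technique selection, which feeds prompt compilation, which feeds model routing and execution.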

How it works

1. Get an API key

Sign up at a21e.com and generate an API key from the dashboard.

2. Submit an intent

Send a natural-language instruction to the API. a21e handles the rest.
curl -X POST https://api.a21e.com/v1/rpc \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "method": "intent.submit",
    "params": {
      "input": "Write unit tests for a React login form component",
      "auto_execute": true
    }
  }'
3. Get results

a21e returns the executed output along with quality metadata, credits used, and the model selected.
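
The curl call in step 2 can be mirrored in Python. The payload shape below follows the curl example exactly, but the response fields shown at the end (`model`, `credits_used`) are assumptions based on the description above, not a documented schema:

```python
import json

# Build the same JSON-RPC-style payload as the curl example in step 2.
payload = {
    "method": "intent.submit",
    "params": {
        "input": "Write unit tests for a React login form component",
        "auto_execute": True,
    },
}
body = json.dumps(payload)

# To actually send it (requires the third-party `requests` package
# and a real API key):
# resp = requests.post(
#     "https://api.a21e.com/v1/rpc",
#     headers={
#         "Authorization": "Bearer YOUR_API_KEY",
#         "Content-Type": "application/json",
#     },
#     data=body,
# )
# result = resp.json()

# Hypothetical response handling -- field names are assumed,
# not taken from a published schema:
result = {"output": "...", "model": "example-model", "credits_used": 1}
print(result["model"], result["credits_used"])
```

The network call is left commented out so the sketch stands alone; substitute your own key and inspect the real response to confirm the actual field names.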

Pricing

a21e uses a credit-based model. 1 credit = 1 enhancement (one prompt compilation and execution).
  • Managed mode — a21e handles LLM calls. Credits cover both prompt engineering and model inference.
  • BYOK mode — Bring your own API keys for Anthropic, OpenAI, Google, or xAI. Credits only cover prompt engineering.
See Credits & billing for details.
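
The two billing modes can be summarized in a small lookup. This is just a restatement of the rules above as code, not an a21e API:

```python
# Sketch of what a credit covers in each mode, per the pricing rules above.
# The dictionary and function are illustrative, not part of any a21e SDK.

CREDIT_COVERS = {
    "managed": {"prompt_engineering", "model_inference"},
    "byok": {"prompt_engineering"},  # inference is billed to your own provider key
}

def credits_for(enhancements: int) -> int:
    # 1 credit = 1 enhancement, regardless of mode.
    return enhancements
```

In either mode the credit count is the same; the modes differ only in whether model inference is included or billed directly by your LLM provider.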