Achonye Multi-LLM Orchestration
Achonye is GovernLayer's hierarchical multi-LLM orchestrator. It routes governance tasks to the optimal model based on complexity, cost, and capability — from free local models for trivial tasks to multi-model consensus for critical decisions.
Architecture
The Achonye hierarchy:
- Leader (Claude Opus) — Strategic decisions, complex analysis
- Board (Sonnet, Gemini, GPT-4o) — Deliberation on important decisions
- Validator — Consensus engine ensuring agreement across models
- Operators (14 models) — Task execution across local and cloud models
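The priority-based routing described above can be sketched as a simple priority-to-tier mapping. This is an illustrative sketch only: the tier names and the mapping are assumptions, not GovernLayer's actual routing implementation.

```python
# Illustrative sketch of priority-based routing in an Achonye-style
# hierarchy. Tier names and the mapping are assumed, not official.
TIER_BY_PRIORITY = {
    "trivial": "local",       # free local models (e.g. llama3:8b)
    "simple": "standard",     # mid-tier cloud models
    "complex": "premium",     # e.g. claude-opus
    "critical": "consensus",  # Board deliberation + Validator agreement
}

def route(priority: str, force_consensus: bool = False) -> str:
    """Pick an execution tier for a task given its priority."""
    if priority not in TIER_BY_PRIORITY:
        raise ValueError(f"unknown priority: {priority!r}")
    if force_consensus:
        return "consensus"  # mirrors the force_consensus flag below
    return TIER_BY_PRIORITY[priority]
```

The key design point is that callers declare *priority*, not a model: the orchestrator owns the cost/capability trade-off.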
POST /v1/achonye/process
Process with Achonye
Route a governance task through the Achonye orchestrator. The system automatically selects the optimal model(s) based on task complexity.
Required attributes
- `task` (string): The governance task to process.
- `priority` (string): Task priority: `trivial`, `simple`, `complex`, or `critical`.
Optional attributes
- `force_consensus` (boolean): Force multi-model consensus regardless of priority (default: `false`).
- `preferred_model` (string): Force routing to a specific model, bypassing automatic selection.
Request
```bash
curl -X POST https://api.governlayer.ai/v1/achonye/process \
  -H "X-API-Key: gl_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "task": "Evaluate whether agent-47 pricing decisions comply with EU consumer protection law",
    "priority": "complex"
  }'
```
Response
```json
{
  "result": "Agent-47 pricing decisions present two compliance risks under EU Directive 2005/29/EC...",
  "model_used": "claude-opus",
  "routing_reason": "Complex legal analysis requires premium model capabilities",
  "tokens_used": 1847,
  "estimated_cost": "$0.042",
  "latency_ms": 2340
}
```
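From Python, the same request can be assembled with the standard library. The endpoint, headers, and body fields come from the example above; the helper function name and structure are ours, not part of an official SDK.

```python
import json
import urllib.request

API_URL = "https://api.governlayer.ai/v1/achonye/process"

def build_request(task: str, priority: str, api_key: str, **opts) -> urllib.request.Request:
    """Build the POST request for /v1/achonye/process.

    opts may carry the optional attributes, e.g. force_consensus=True
    or preferred_model="claude-opus".
    """
    body = {"task": task, "priority": priority, **opts}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Send with: urllib.request.urlopen(build_request(...))
```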
GET /v1/achonye/ecosystem
View ecosystem status
View the current status of all models in the Achonye ecosystem, including availability, latency, and cost metrics.
Request
```bash
curl https://api.governlayer.ai/v1/achonye/ecosystem \
  -H "X-API-Key: gl_your_api_key_here"
```
Response
```json
{
  "total_models": 14,
  "local_models": 5,
  "cloud_models": 9,
  "models": [
    {
      "id": "llama3:8b",
      "provider": "ollama",
      "tier": "local",
      "status": "available",
      "avg_latency_ms": 450,
      "cost_per_1k_tokens": 0
    },
    {
      "id": "claude-opus",
      "provider": "openrouter",
      "tier": "premium",
      "status": "available",
      "avg_latency_ms": 2100,
      "cost_per_1k_tokens": 0.015
    }
  ]
}
```
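A client can use this response to choose a model itself and pass it as `preferred_model`. The sketch below works over the response shape shown above; the selection policy (cheapest available, latency as tiebreaker) is hypothetical, not something the API prescribes.

```python
# Pick a model from a /v1/achonye/ecosystem response.
# Policy (illustrative): cheapest available model, lowest latency on ties.
def cheapest_available(ecosystem: dict) -> str:
    candidates = [m for m in ecosystem["models"] if m["status"] == "available"]
    if not candidates:
        raise RuntimeError("no models available")
    candidates.sort(key=lambda m: (m["cost_per_1k_tokens"], m["avg_latency_ms"]))
    return candidates[0]["id"]
```

For the example response above this selects `llama3:8b`, since local models cost nothing per token.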