Multi-AI Orchestration CLI for the Omni-Performative Engine Project
A command-line tool that coordinates research, validation, and synthesis across 5 AI services (Perplexity, Gemini, ChatGPT, Copilot, Grok) in a structured pipeline with gate validation.
```
PHASE 1: RESEARCH VALIDATION (Perplexity)
├─ Precedent verification
├─ Funding landscape mapping
└─ [GATE 1: Human review]

PHASE 2: SPECIFICATION HARDENING (Gemini)
├─ Edge-case matrix (5×5)
├─ Latency/constraint validation
└─ [GATE 2: Human review]

PHASE 3: MESSAGING SYNTHESIS (ChatGPT)
├─ NSF grant narrative
├─ NEH grant narrative
├─ Ars Electronica narrative
├─ Artist statement
└─ [GATE 3: Human review]

PHASE 4: IMPLEMENTATION PLANNING (Copilot)
├─ Code architecture review
├─ Budget allocation
└─ [GATE 4: Human review]

PHASE 5: VULNERABILITY AUDIT (Grok)
├─ Assumption critique
├─ Failure scenario modeling
└─ [GATE 5: Final synthesis]
```
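The phase/gate flow above can be sketched in Python. The phase names match the CLI's `--phase` values; `run_phase` and `review_gate` here are hypothetical callbacks for illustration, not the shipped `orchestrator.py` API:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    number: int
    name: str     # matches the CLI's --phase value
    service: str

PIPELINE = [
    Phase(1, "research-validation", "perplexity"),
    Phase(2, "spec-hardening", "gemini"),
    Phase(3, "messaging-synthesis", "chatgpt"),
    Phase(4, "implementation-planning", "copilot"),
    Phase(5, "vulnerability-audit", "grok"),
]

def run_pipeline(run_phase, review_gate):
    """Run phases in order; stop if a human gate rejects a phase's output.

    run_phase and review_gate are caller-supplied callbacks (hypothetical),
    so downstream phases never consume output a reviewer has rejected.
    """
    results = {}
    for phase in PIPELINE:
        results[phase.name] = run_phase(phase)
        if not review_gate(phase, results[phase.name]):
            break  # gate rejected: halt the pipeline here
    return results
```

The key property is that each gate sits *between* phases: a rejection at Gate 2 means Phases 3–5 never run.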
```shell
# Clone or copy this directory
cd omni-orchestrate

# Create virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows

# Install dependencies
pip install -r requirements.txt
```
```shell
cp config.yaml.template config.yaml
```
Either edit config.yaml directly:
```yaml
api_keys:
  perplexity: "pplx-xxxxxxxxxxxx"
  gemini: "AIzaSyxxxxxxxxxx"
  chatgpt: "sk-xxxxxxxxxxxx"
  grok: "xai-xxxxxxxxxxxx"
```
Or use environment variables:
```shell
export PERPLEXITY_API_KEY="pplx-xxxxxxxxxxxx"
export GEMINI_API_KEY="AIzaSyxxxxxxxxxx"
export OPENAI_API_KEY="sk-xxxxxxxxxxxx"
export GROK_API_KEY="xai-xxxxxxxxxxxx"
```
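A common resolution order for this kind of dual configuration is environment variable first, `config.yaml` second. The sketch below assumes that precedence (the source doesn't state it) and uses the environment variable names listed above; note that ChatGPT maps to `OPENAI_API_KEY`:

```python
import os

# Env-var names from the README; ChatGPT uses OPENAI_API_KEY.
ENV_NAMES = {
    "perplexity": "PERPLEXITY_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "chatgpt": "OPENAI_API_KEY",
    "grok": "GROK_API_KEY",
}

def resolve_key(service, config_keys):
    """Return the API key for a service.

    Assumed precedence (not confirmed by the source): an environment
    variable overrides the config.yaml entry.
    """
    env_var = ENV_NAMES.get(service, f"{service.upper()}_API_KEY")
    return os.environ.get(env_var) or config_keys.get(service)
```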
```shell
python src/orchestrator.py status --services all
```
Expected output:
```
Service Status:
------------------------------
✓ perplexity
✓ gemini
✓ chatgpt
✓ copilot
✓ grok
```
```shell
# Run all phases with gate validation
python src/orchestrator.py run --phase all --gates --output-dir ./results

# Run with human review pauses at each gate
python src/orchestrator.py run --phase all --pause-at-gate

# Run only Phase 1 (Research Validation)
python src/orchestrator.py run --phase research-validation

# Run Phase 2 with pause at gate
python src/orchestrator.py run --phase spec-hardening --pause-at-gate
```
| Phase | Name | Service |
|---|---|---|
| 1 | research-validation | Perplexity |
| 2 | spec-hardening | Gemini |
| 3 | messaging-synthesis | ChatGPT |
| 4 | implementation-planning | Copilot |
| 5 | vulnerability-audit | Grok |
```shell
# Check service connectivity
python src/orchestrator.py status --services all

# Estimate token usage and cost before running
python src/orchestrator.py estimate --phases all

# Generate a synthesis report from completed phase outputs
python src/orchestrator.py synthesis --input-dir ./results --format markdown
```
```
results/
├── phase1_research_validation/
│   ├── precedent_verification.json
│   ├── precedent_verification.md
│   ├── funding_landscape.json
│   ├── funding_landscape.md
│   └── gate_result.json
├── phase2_spec_hardening/
│   ├── edge_case_matrix.json
│   ├── edge_case_matrix.md
│   ├── latency_constraints.json
│   └── gate_result.json
├── phase3_messaging_synthesis/
│   ├── grant_narrative_nsf.json
│   ├── grant_narrative_nsf.md
│   ├── grant_narrative_neh.json
│   ├── grant_narrative_ars.json
│   ├── artist_statement.json
│   └── gate_result.json
├── phase4_implementation_planning/
│   ├── code_architecture.json
│   ├── budget_allocation.json
│   └── gate_result.json
├── phase5_vulnerability_audit/
│   ├── assumption_critique.json
│   └── failure_scenarios.json
├── aggregated_results.json
└── EXECUTIVE_REPORT.md
```
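One way `aggregated_results.json` could be built from the per-phase artifacts above is a simple walk-and-merge. This is a hypothetical sketch of that aggregation, not the shipped `synthesis` command:

```python
import json
from pathlib import Path

def aggregate_results(results_dir):
    """Merge every phaseN_*/ JSON artifact into aggregated_results.json.

    Hypothetical sketch: keys are phase directory names, values map each
    artifact's filename stem (e.g. "gate_result") to its parsed JSON.
    """
    aggregated = {}
    for phase_dir in sorted(Path(results_dir).glob("phase*_*")):
        aggregated[phase_dir.name] = {
            f.stem: json.loads(f.read_text())
            for f in sorted(phase_dir.glob("*.json"))
        }
    out = Path(results_dir) / "aggregated_results.json"
    out.write_text(json.dumps(aggregated, indent=2))
    return aggregated
```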
All prompt templates are in the prompts/ directory:
```
prompts/
├── phase1_research/
│   ├── precedent_verification.txt
│   └── funding_landscape.txt
├── phase2_specification/
│   ├── edge_case_matrix.txt
│   └── latency_constraints.txt
├── phase3_messaging/
│   ├── grant_narrative_nsf.txt
│   ├── grant_narrative_neh.txt
│   ├── grant_narrative_ars.txt
│   └── artist_statement.txt
├── phase4_implementation/
│   ├── code_architecture.txt
│   └── budget_allocation.txt
├── phase5_vulnerability/
│   ├── assumption_critique.txt
│   └── failure_scenarios.txt
└── gates/
    └── all_gates.txt
```
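A loader for this layout is straightforward. The extension fallback order below (`.txt`, `.md`, `.prompt`) matches the extensions the troubleshooting notes mention, but the function itself is a hypothetical sketch, not the tool's actual loader:

```python
from pathlib import Path

def load_prompt(prompts_dir, phase_dir, task):
    """Return the template text for a task, trying each supported extension.

    Hypothetical helper: looks for prompts/<phase_dir>/<task>.txt, then
    .md, then .prompt, and raises if none exists.
    """
    for ext in (".txt", ".md", ".prompt"):
        path = Path(prompts_dir) / phase_dir / f"{task}{ext}"
        if path.exists():
            return path.read_text()
    raise FileNotFoundError(f"No template for {phase_dir}/{task}")
```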
Edit the .txt files in prompts/ to customize for your project. Use {placeholder} syntax for dynamic substitution:
```
You are validating research for {project_name}.

CLAIMS TO VERIFY:
{precedent_claims}
```
Pass context when running:
```python
orchestrator.run_phase(phase, context={
    "project_name": "My Project",
    "precedent_claims": "...",
})
```
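The `{placeholder}` syntax is plain Python format-string style, so the substitution step can be reproduced with `str.format_map`. Whether `orchestrator.py` actually implements it this way is an assumption:

```python
# Sketch of placeholder substitution (assumed mechanism, not confirmed):
# context keys fill the {placeholder} slots in the template text.
template = (
    "You are validating research for {project_name}.\n"
    "CLAIMS TO VERIFY:\n"
    "{precedent_claims}"
)
context = {
    "project_name": "My Project",
    "precedent_claims": "...",
}
prompt = template.format_map(context)
```

With `format_map`, a missing context key raises `KeyError` immediately, which surfaces template/context mismatches before any API call is made.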
Each phase (except Phase 5, which ends in final synthesis) has a gate validation step; each gate's outcome is recorded in that phase's gate_result.json in the results directory.
| Service | Tasks | Est. Tokens | Est. Cost |
|---|---|---|---|
| Perplexity | 2 | 6,000 | $0.01 |
| Gemini | 2 | 16,000 | $0.02 |
| ChatGPT | 4 | 16,000 | $0.16 |
| Copilot | 2 | 6,000 | $0.06 |
| Grok | 2 | 6,000 | $0.03 |
| Total | 12 | 50,000 | ~$0.30 |
Actual costs depend on response length and may vary.
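The table's arithmetic can be checked directly. The per-1K-token rates below are back-solved from the table rows, not quoted from any provider's price list; note the row costs sum to $0.28, which the table rounds to ~$0.30:

```python
# service: (tasks, est_tokens, usd_per_1k_tokens)
# Rates are back-solved from the cost table (cost / tokens * 1000),
# not taken from provider pricing pages.
ESTIMATES = {
    "perplexity": (2, 6_000, 0.01 / 6),
    "gemini":     (2, 16_000, 0.02 / 16),
    "chatgpt":    (4, 16_000, 0.16 / 16),
    "copilot":    (2, 6_000, 0.06 / 6),
    "grok":       (2, 6_000, 0.03 / 6),
}

total_tasks = sum(n for n, _, _ in ESTIMATES.values())
total_tokens = sum(t for _, t, _ in ESTIMATES.values())
total_cost = sum(t / 1000 * r for _, t, r in ESTIMATES.values())
```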
```shell
# Run the test suite
pytest tests/ -v

# Format code
black src/

# Type-check
mypy src/
```
- API keys are read from config.yaml or the environment; run `python src/orchestrator.py status` to verify connectivity.
- Prompt templates load from `prompts/{phase_name}/` (supported extensions: `.txt`, `.md`, or `.prompt`).
- Increase `timeout_seconds` in config.yaml if requests time out.
- Lower `parallel_limit` to avoid rate limits.
- Gate outcomes are recorded in `results/phaseN_*/gate_result.json`.

MIT License - See LICENSE file for details.
Built for the Omni-Performative Engine project