CLI Reference
Complete guide to Diverga CLI commands and skills
"Just say what you need. Diverga understands research context."
CLI Overview
Diverga CLI provides two ways to interact: direct commands and auto-detected triggers.
Direct Commands
Invoke with /skill-name (e.g., /a1, /memory)
Auto-Detection
Keywords trigger agents automatically
Context-Aware
Agents understand research context from conversation
Core Commands
Essential commands for research orchestration and memory management
/diverga or /diverga-help: Show main help and available agents
Example: /diverga
/diverga-research-orchestrator: Full research orchestrator with 24 agents
Example: /diverga-research-orchestrator
/memory: Open Memory HUD (session state, checkpoints)
Example: /memory
/note: Save to Working Memory (auto-pruned after 7 days)
Example: /note "Research question needs refinement"
/remember: Save to Persistent Memory (never auto-pruned)
Example: /remember "User prefers PRISMA 2020 over PRISMA-P"
/checkpoint: Manual checkpoint (save decision point)
Example: /checkpoint
Agent Invocation Commands
Direct agent calls for specific research tasks
Foundation
/a1: Research Question Refiner
Trigger: research question
/a2: Theoretical Framework Architect
Trigger: theoretical framework
/a3: Devil's Advocate
Trigger: criticism, weakness
/a4: Research Ethics Advisor
Trigger: ethics, IRB
/a5: Paradigm & Worldview Advisor
Trigger: paradigm, ontology
Evidence
/b1: Systematic Literature Scout
Trigger: literature review
/b2: Evidence Quality Appraiser
Trigger: quality appraisal
/b3: Effect Size Extractor
Trigger: effect size, Cohen's d
/b4: Research Radar
Trigger: latest research
/b5: Parallel Document Processor
Trigger: batch PDF, multiple documents
Design & Meta-Analysis
/c1: Quantitative Design Consultant
Trigger: RCT, experimental design
/c2: Qualitative Design Consultant
Trigger: phenomenology, grounded theory
/c3: Mixed Methods Design Consultant
Trigger: mixed methods
/c5: Meta-Analysis Master
Trigger: meta-analysis
/c6: Data Integrity Guard
Trigger: data extraction, validation
/c7: Error Prevention Engine
Trigger: error prevention
Data Collection
/d1: Sampling Strategy Advisor
Trigger: sampling, sample size
Analysis
/e2: Qualitative Coding Specialist
Trigger: coding, themes
/e3: Mixed Methods Integration
Trigger: integration, joint display
/e4: Analysis Code Generator
Trigger: R code, Python code
/e5: Sensitivity Analysis Designer
Trigger: sensitivity analysis
Quality
/f2: Checklist Manager
Trigger: checklist, PRISMA, CONSORT
/f3: Reproducibility Auditor
Trigger: reproducibility, OSF
/f4: Bias & Trustworthiness Detector
Trigger: bias, p-hacking
/f5: Humanization Verifier
Trigger: verify humanization
Communication
/g1: Journal Matcher
Trigger: journal, submission
/g5: Academic Style Auditor
Trigger: AI patterns, audit
/g6: Academic Style Humanizer
Trigger: humanize, transform
Specialized
/h2: Action Research Facilitator
Trigger: action research
Systematic Review
/i0: Pipeline Orchestrator
Trigger: systematic review, PRISMA
/i1: Paper Retrieval Agent
Trigger: fetch papers, database search
/i2: Screening Assistant
Trigger: screening, inclusion criteria
/i3: RAG Builder
Trigger: build RAG, vector database
Humanization Pipeline
Transform AI-generated academic text into natural scholarly prose
/g5: Academic Style Auditor
Analyze AI patterns in text (modal verbs, hedging, transition density)
Example: /g5 "Analyze this draft for AI patterns"
/g6: Academic Style Humanizer
Transform text while preserving scholarly integrity
Example: /g6 "Humanize this abstract"
/humanize: Full Pipeline
G5 audit → G6 transformation → F5 verification
Example: /humanize "Full humanization pipeline"
Systematic Review Commands
PRISMA 2020 systematic literature review automation
/scholarag: Pipeline Help
Show pipeline stages and usage
Example: /scholarag
/i0: Pipeline Orchestrator
Coordinate 7-stage PRISMA pipeline
Example: /i0 "Start systematic review on AI in education"
/i1: Paper Retrieval
Fetch from Semantic Scholar, OpenAlex, arXiv
Example: /i1 "Retrieve papers on chatbots AND language learning"
/i2: Screening Assistant
AI-powered PRISMA screening with Groq/Claude
Example: /i2 "Screen 500 papers with 90% threshold"
/i3: RAG Builder
Build vector database from screened PDFs
Example: /i3 "Build RAG from 150 included papers"
Auto-Detection Triggers
Keywords that automatically invoke agents
Foundation
research question; theoretical framework; criticism, weakness; ethics, IRB
Evidence & Analysis
literature review; meta-analysis; effect size, Cohen's d; bias, p-hacking
Systematic Review
systematic review, PRISMA; fetch papers; screening, inclusion criteria
Humanization
humanize, AI patterns; transform, make natural
Usage Examples
Common workflows and command patterns
Start a Systematic Review
1. Say: "I want to do a systematic review on AI in education"
2. I0 auto-triggers → asks about research question, databases
3. Or: /i0 "Start systematic review pipeline"
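The screening threshold mentioned under /i2 (e.g., "90% threshold") can be thought of as a relevance cutoff over model-scored papers. A minimal sketch; the field names and scoring scale are assumptions, not Diverga's actual output format:

```python
# Hypothetical sketch of threshold-based screening, as in
# /i2 "Screen 500 papers with 90% threshold". Field names are assumed.
def screen(papers: list[dict], threshold: float = 0.90) -> dict:
    """Split papers into include/exclude lists by relevance score."""
    include = [p for p in papers if p["relevance"] >= threshold]
    exclude = [p for p in papers if p["relevance"] < threshold]
    return {"include": include, "exclude": exclude}

papers = [
    {"title": "Chatbots in L2 learning", "relevance": 0.95},
    {"title": "Unrelated hardware paper", "relevance": 0.10},
]
result = screen(papers)  # one paper included, one excluded
```

Raising or lowering the threshold trades recall for screening workload, which is why I2 asks you to set it explicitly.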
Refine Research Question
1. Say: "Help me refine my research question"
2. A1 auto-triggers → FINER criteria analysis
3. Or: /a1 "Refine: Does AI improve learning?"
Humanize AI Text
1. /g5 "Audit this abstract for AI patterns"
2. Review audit results
3. /g6 "Transform while preserving citations"
4. /f5 "Verify transformation quality"
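The audit step in this workflow boils down to counting stylistic markers in the text. A toy sketch in that spirit; the marker list is hand-picked for illustration and is not G5's real pattern set:

```python
# Toy audit in the spirit of G5: count common AI-style markers.
# The marker list is illustrative, not G5's actual pattern set.
MARKERS = ["moreover", "furthermore", "delve", "it is important to note"]

def audit(text: str) -> dict[str, int]:
    """Count each marker's occurrences (case-insensitive substring match)."""
    lowered = text.lower()
    return {marker: lowered.count(marker) for marker in MARKERS}

report = audit("Moreover, it is important to note that we delve deeper.")
# report: moreover=1, furthermore=0, delve=1, "it is important to note"=1
```

A real audit also measures densities (hedging per sentence, transition frequency) rather than raw counts, but the count report is the raw material either way.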
Meta-Analysis Setup
1. Say: "I need to do a meta-analysis on effect sizes"
2. C5 auto-triggers → asks about effect size type, model
3. /c6 "Validate extracted data"
4. /c7 "Check for errors and anomalies"
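For the effect sizes B3 extracts and C5 pools, Cohen's d for two independent groups is the mean difference divided by the pooled standard deviation. A self-contained reference implementation of that standard formula:

```python
import math

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Cohen's d for independent groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (ddof=1), pooled with degrees-of-freedom weights.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

cohens_d([5, 6, 7], [3, 4, 5])  # 2.0: a very large effect
```

Checking a hand-computed d against the values an agent reports is exactly the kind of validation C6 and C7 are meant to automate.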
Usage Tips
- You don't need to memorize commands - just describe your research task naturally
- Agents detect context: saying "literature review" triggers B1 automatically
- Use /memory to check session state and checkpoint history
- Parallel execution: independent agents run simultaneously (e.g., A1+A2+A5)
- Checkpoints enforce human decisions - agents pause for approval at key points
- /note for temporary notes, /remember for permanent project context
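The /note vs /remember distinction above can be modeled as one store with a seven-day time-to-live and one that is never pruned. A sketch under assumed structure; Diverga's actual storage format is not documented here:

```python
from datetime import datetime, timedelta

PRUNE_AFTER = timedelta(days=7)  # Working Memory TTL from this guide

def prune_working_memory(notes: list[dict], now: datetime) -> list[dict]:
    """Keep /note entries younger than 7 days; /remember entries
    (persistent=True) are never pruned. Field names are assumptions."""
    return [
        note for note in notes
        if note["persistent"] or now - note["created"] <= PRUNE_AFTER
    ]

now = datetime(2025, 6, 15)
notes = [
    {"text": "Refine RQ", "created": datetime(2025, 6, 1), "persistent": False},
    {"text": "Prefers PRISMA 2020", "created": datetime(2025, 6, 1), "persistent": True},
]
remaining = prune_working_memory(notes, now)  # only the /remember entry survives
```

The practical takeaway: anything you will still need next week belongs in /remember, not /note.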
Ready to Use CLI?
Explore agents, checkpoints, and VS methodology