
CLI Reference

Complete guide to Diverga CLI commands and skills

"Just say what you need. Diverga understands research context."

CLI Overview

Diverga CLI provides two ways to interact: direct commands and auto-detected triggers.

Direct Commands: invoke a skill with /skill-name (e.g., /a1, /memory)

Auto-Detection: keywords in your message trigger agents automatically

Context-Aware: agents understand research context from the conversation

Core Commands

Essential commands for research orchestration and memory management

/diverga or /diverga-help
Show main help and available agents
Example: /diverga

/diverga-research-orchestrator
Full research orchestrator with 24 agents
Example: /diverga-research-orchestrator

/memory
Open Memory HUD (session state, checkpoints)
Example: /memory

/note
Save to Working Memory (auto-pruned after 7 days)
Example: /note "Research question needs refinement"

/remember
Save to Persistent Memory (never auto-pruned)
Example: /remember "User prefers PRISMA 2020 over PRISMA-P"

/checkpoint
Manual checkpoint (save decision point)
Example: /checkpoint
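The difference between /note and /remember comes down to a retention rule: working-memory entries expire after 7 days, persistent entries never do. A minimal sketch of that rule, assuming hypothetical entry fields (`text`, `created`, `persistent`) that are illustrative rather than Diverga's actual storage format:

```python
from datetime import datetime, timedelta

# Working-memory entries (/note) expire after 7 days; persistent entries
# (/remember) are never auto-pruned. Field names here are hypothetical.
PRUNE_AFTER = timedelta(days=7)

def prune_working_memory(entries, now=None):
    """Keep persistent entries plus working-memory entries under 7 days old."""
    now = now or datetime.now()
    return [
        e for e in entries
        if e["persistent"] or now - e["created"] < PRUNE_AFTER
    ]

entries = [
    {"text": "note from today", "created": datetime.now(), "persistent": False},
    {"text": "old note", "created": datetime.now() - timedelta(days=10), "persistent": False},
    {"text": "PRISMA 2020 preference", "created": datetime.now() - timedelta(days=30), "persistent": True},
]
kept = prune_working_memory(entries)  # the 10-day-old /note entry is dropped
```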

Agent Invocation Commands

Direct agent calls for specific research tasks

A. Foundation

/a1 - Research Question Refiner (trigger: research question)
/a2 - Theoretical Framework Architect (trigger: theoretical framework)
/a3 - Devil's Advocate (trigger: criticism, weakness)
/a4 - Research Ethics Advisor (trigger: ethics, IRB)
/a5 - Paradigm & Worldview Advisor (trigger: paradigm, ontology)

B. Evidence

/b1 - Systematic Literature Scout (trigger: literature review)
/b2 - Evidence Quality Appraiser (trigger: quality appraisal)
/b3 - Effect Size Extractor (trigger: effect size, Cohen's d)
/b4 - Research Radar (trigger: latest research)
/b5 - Parallel Document Processor (trigger: batch PDF, multiple documents)
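B3 extracts standardized effect sizes such as Cohen's d, which is the difference between two group means divided by the pooled standard deviation. A minimal sketch of that computation (illustrative only, not B3's implementation):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d for independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

treatment = [5.1, 4.9, 5.3, 5.5, 4.8]  # made-up scores for illustration
control = [4.2, 4.0, 4.4, 4.1, 4.3]
d = cohens_d(treatment, control)
```

The sign of d follows the order of the groups: swapping them flips positive to negative.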

C. Design & Meta-Analysis

/c1 - Quantitative Design Consultant (trigger: RCT, experimental design)
/c2 - Qualitative Design Consultant (trigger: phenomenology, grounded theory)
/c3 - Mixed Methods Design Consultant (trigger: mixed methods)
/c5 - Meta-Analysis Master (trigger: meta-analysis)
/c6 - Data Integrity Guard (trigger: data extraction, validation)
/c7 - Error Prevention Engine (trigger: error prevention)
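C5 pools study-level effects into a single estimate; the core of a fixed-effect meta-analysis is inverse-variance weighting, where more precise studies count for more. A minimal sketch with made-up numbers (not C5's actual code):

```python
def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

effects = [0.30, 0.45, 0.25]    # per-study effect sizes (e.g. Cohen's d)
variances = [0.04, 0.09, 0.02]  # per-study sampling variances
pooled, pooled_var = pool_fixed_effect(effects, variances)
# The pooled variance is smaller than any single study's variance.
```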

D. Data Collection

/d1 - Sampling Strategy Advisor (trigger: sampling, sample size)

E. Analysis

/e2 - Qualitative Coding Specialist (trigger: coding, themes)
/e3 - Mixed Methods Integration (trigger: integration, joint display)
/e4 - Analysis Code Generator (trigger: R code, Python code)
/e5 - Sensitivity Analysis Designer (trigger: sensitivity analysis)

F. Quality

/f2 - Checklist Manager (trigger: checklist, PRISMA, CONSORT)
/f3 - Reproducibility Auditor (trigger: reproducibility, OSF)
/f4 - Bias & Trustworthiness Detector (trigger: bias, p-hacking)
/f5 - Humanization Verifier (trigger: verify humanization)

G. Communication

/g1 - Journal Matcher (trigger: journal, submission)
/g5 - Academic Style Auditor (trigger: AI patterns, audit)
/g6 - Academic Style Humanizer (trigger: humanize, transform)

H. Specialized

/h2 - Action Research Facilitator (trigger: action research)

I. Systematic Review

/i0 - Pipeline Orchestrator (trigger: systematic review, PRISMA)
/i1 - Paper Retrieval Agent (trigger: fetch papers, database search)
/i2 - Screening Assistant (trigger: screening, inclusion criteria)
/i3 - RAG Builder (trigger: build RAG, vector database)

Humanization Pipeline

Transform AI-generated academic text into natural scholarly prose

/g5 (Academic Style Auditor)
Analyze AI patterns in text (modal verbs, hedging, transition density)
Example: /g5 "Analyze this draft for AI patterns"

/g6 (Academic Style Humanizer)
Transform text while preserving scholarly integrity
Example: /g6 "Humanize this abstract"

/humanize (Full Pipeline)
G5 audit → G6 transformation → F5 verification
Example: /humanize "Full humanization pipeline"
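The /humanize pipeline chains three stages: G5 flags AI-style patterns, G6 rewrites the flagged passages, and F5 verifies the result. A toy sketch of that chaining with stand-in stage functions (the names and signatures are illustrative, not Diverga's internals):

```python
def run_humanization_pipeline(text, audit, transform, verify):
    """Chain audit (G5) -> transform (G6) -> verify (F5)."""
    findings = audit(text)               # G5: flag AI-style patterns
    revised = transform(text, findings)  # G6: rewrite flagged passages
    return revised, verify(revised)      # F5: confirm the patterns are gone

# Toy stand-in stages for demonstration:
def audit(text):
    return ["delves into"] if "delves into" in text else []

def transform(text, findings):
    return text.replace("delves into", "examines") if findings else text

def verify(text):
    return "delves into" not in text

revised, ok = run_humanization_pipeline("This paper delves into bias.", audit, transform, verify)
```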

Systematic Review Commands

PRISMA 2020 systematic literature review automation

/scholarag (Pipeline Help)
Show pipeline stages and usage
Example: /scholarag

/i0 (Pipeline Orchestrator)
Coordinate 7-stage PRISMA pipeline
Example: /i0 "Start systematic review on AI in education"

/i1 (Paper Retrieval)
Fetch from Semantic Scholar, OpenAlex, arXiv
Example: /i1 "Retrieve papers on chatbots AND language learning"

/i2 (Screening Assistant)
AI-powered PRISMA screening with Groq/Claude
Example: /i2 "Screen 500 papers with 90% threshold"

/i3 (RAG Builder)
Build vector database from screened PDFs
Example: /i3 "Build RAG from 150 included papers"
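The "90% threshold" in the /i2 example refers to a relevance cutoff: each paper receives a model-assigned relevance score, and only papers at or above the threshold are included. A minimal sketch, assuming a hypothetical `score` field rather than /i2's actual data format:

```python
def screen(papers, threshold=0.90):
    """Split papers into included/excluded by relevance score."""
    included, excluded = [], []
    for paper in papers:
        (included if paper["score"] >= threshold else excluded).append(paper)
    return included, excluded

papers = [
    {"title": "Chatbots in L2 learning", "score": 0.95},
    {"title": "Factory robotics survey", "score": 0.12},
    {"title": "AI tutoring meta-review", "score": 0.91},
]
included, excluded = screen(papers, threshold=0.90)
```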

Auto-Detection Triggers

Keywords that automatically invoke agents

Foundation

research question → A1-ResearchQuestionRefiner
theoretical framework → A2-TheoreticalFrameworkArchitect
criticism, weakness → A3-DevilsAdvocate
ethics, IRB → A4-ResearchEthicsAdvisor

Evidence & Analysis

literature review → B1-SystematicLiteratureScout
meta-analysis → C5-MetaAnalysisMaster
effect size, Cohen's d → B3-EffectSizeExtractor
bias, p-hacking → F4-BiasTrustworthinessDetector

Systematic Review

systematic review, PRISMA → I0-ScholarAgentOrchestrator
fetch papers → I1-PaperRetrievalAgent
screening, inclusion criteria → I2-ScreeningAssistant

Humanization

humanize, AI patterns → G5/G6-HumanizationPipeline
transform, make natural → G6-AcademicStyleHumanizer
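Auto-detection maps trigger keywords in your message to agents. A minimal sketch of that routing, assuming a simple substring match (the real matcher is likely more sophisticated, and the table here is abbreviated):

```python
# Abbreviated trigger table; substring matching is an assumption.
TRIGGERS = {
    "research question": "A1-ResearchQuestionRefiner",
    "literature review": "B1-SystematicLiteratureScout",
    "meta-analysis": "C5-MetaAnalysisMaster",
    "systematic review": "I0-ScholarAgentOrchestrator",
    "humanize": "G6-AcademicStyleHumanizer",
}

def detect_agents(message):
    """Return every agent whose trigger keyword appears in the message."""
    text = message.lower()
    return [agent for keyword, agent in TRIGGERS.items() if keyword in text]

agents = detect_agents("Help me refine my research question for a systematic review")
```

A single message can trigger several agents at once, as in the example above.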

Usage Examples

Common workflows and command patterns

Start a Systematic Review

  1. Say: "I want to do a systematic review on AI in education"
  2. I0 auto-triggers → asks about research question, databases
  3. Or: /i0 "Start systematic review pipeline"

Refine Research Question

  1. Say: "Help me refine my research question"
  2. A1 auto-triggers → FINER criteria analysis
  3. Or: /a1 "Refine: Does AI improve learning?"

Humanize AI Text

  1. /g5 "Audit this abstract for AI patterns"
  2. Review audit results
  3. /g6 "Transform while preserving citations"
  4. /f5 "Verify transformation quality"

Meta-Analysis Setup

  1. Say: "I need to do a meta-analysis on effect sizes"
  2. C5 auto-triggers → asks about effect size type, model
  3. /c6 "Validate extracted data"
  4. /c7 "Check for errors and anomalies"

Usage Tips

  • You don't need to memorize commands - just describe your research task naturally
  • Agents detect context: saying "literature review" triggers B1 automatically
  • Use /memory to check session state and checkpoint history
  • Parallel execution: independent agents run simultaneously (e.g., A1+A2+A5)
  • Checkpoints enforce human decisions - agents pause for approval at key points
  • /note for temporary notes, /remember for permanent project context
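The parallel-execution tip (A1+A2+A5) can be sketched with a thread pool: independent agent calls are submitted together and their results collected as they finish. The agent functions below are stand-ins, not real Diverga calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(agent_calls):
    """Run independent agent calls concurrently and collect their results."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in agent_calls.items()}
        return {name: f.result() for name, f in futures.items()}

results = run_parallel({
    "A1": lambda: "refined question",   # stand-in for the real agent call
    "A2": lambda: "framework options",
    "A5": lambda: "paradigm mapping",
})
```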

Ready to use the CLI?

Explore agents, checkpoints, and VS methodology