
Category D: Data Collection Agents

Comprehensive data collection strategy and instrument development

Data Collection agents provide structured guidance for interviews, observations, and measurement. They adapt protocols to your research paradigm while maintaining methodological rigor. Sampling strategy is now handled directly by C1 (Research Design Strategist).

Core Principle

Structured but adaptive protocols across quantitative, qualitative, and mixed paradigms

Data Collection Specialist

D2

Develop interview protocols, focus group guides, observation protocols, field notes, and transcription guidance

Sonnet · MEDIUM · Light VS (Modal awareness)

Trigger Keywords

interview, focus group, interview protocol, semi-structured, probing, observation, field notes, participant observation, video analysis, ethnography

Capabilities

  • Interview protocol development (structured, semi-structured, unstructured)
  • Focus group moderation guides
  • Probing and follow-up question strategies
  • Transcription protocols (verbatim, intelligent verbatim)
  • Member checking procedures
  • Structured observation protocols with coding schemes
  • Field note templates (descriptive, reflective, analytic)
  • Video analysis frameworks (interaction analysis, conversation analysis)
  • Observer training procedures
  • Inter-rater reliability protocols
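The inter-rater reliability protocols listed above typically report an agreement statistic. A minimal, self-contained sketch of Cohen's kappa for two raters applying the same coding scheme (illustrative only; the agent's actual procedure is not specified here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who coded the same items with the same category set."""
    assert len(rater_a) == len(rater_b), "raters must code the same items"
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Values above roughly 0.6 are conventionally read as substantial agreement, though the threshold should be justified for the specific coding task.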

VS Process

Suggests protocol variations based on paradigm (phenomenology, grounded theory, etc.) and observation context

Example

Input: "Interview protocol for teacher AI experiences"
Output: Opening: "Tell me about your first encounter with AI tools" | Main: "Describe a moment when AI changed your teaching practice" | Probing: "What did that feel like?" "What happened next?" | Closing: "What haven't I asked that you think is important?"

Measurement Instrument Developer

D4

Construct scales, validate instruments, and provide reliability/validity evidence

Opus · HIGH · 🔴 CP_METHODOLOGY_APPROVAL · Enhanced VS 3-Phase

Trigger Keywords

instrument, scale development, measurement, validity, reliability, Likert scale

Capabilities

  • Scale construction (item generation, response formats)
  • Content validity (expert review, CVI calculation)
  • Construct validity (EFA, CFA, known-groups)
  • Reliability testing (Cronbach's α, test-retest, inter-rater)
  • Measurement invariance testing
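The reliability testing listed above usually starts with internal consistency. A minimal sketch of Cronbach's α from raw item scores, using only the standard library (an illustration of the statistic, not the agent's implementation):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of scores per item, all covering the same
    respondents in the same order."""
    k = len(items)
    # Sum of per-item variances
    item_vars = sum(pvariance(scores) for scores in items)
    # Variance of each respondent's total score across items
    totals = [sum(respondent) for respondent in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Population variance is used for both numerator and denominator; because α is a ratio, the choice of population vs. sample variance cancels as long as it is consistent.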

VS Process

Stage 1: Identify modal scales | Stage 2: Present adaptation vs. new scale options | Stage 3: Human decision

Example

Input: "Measure AI self-efficacy in teachers"
Output: 🔴 CHECKPOINT: CP_METHODOLOGY_APPROVAL | Option A: Adapt Computer Self-Efficacy Scale | Option B: New AI-Teaching Self-Efficacy Scale (5 dimensions: Technical, Pedagogical, Ethical, Assessment, Professional) | Validation: Content (10 experts), Construct (EFA→CFA), Reliability (α, test-retest)
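The content-validity step in the example (expert review with CVI) can be sketched as follows. I-CVI is the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, and S-CVI/Ave is the mean of the item-level values; the function name and data shape are illustrative:

```python
def content_validity(ratings):
    """ratings: dict mapping item -> list of expert relevance
    ratings on a 1-4 scale. Returns (I-CVI per item, S-CVI/Ave)."""
    i_cvi = {item: sum(r >= 3 for r in rs) / len(rs)
             for item, rs in ratings.items()}
    s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)
    return i_cvi, s_cvi_ave
```

With the 10-expert panel from the example, an I-CVI of at least 0.78 per item is a common retention criterion.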

Paradigm Coverage

Quantitative (D4), Qualitative (D2), Mixed (both agents adapt)

Integration with Other Categories

  • Category C (Design): C1 handles sampling strategy directly as part of research design
  • Category E (Analysis): D4 validity evidence feeds E1 statistical analysis
  • Category F (Quality): D2 protocols reviewed for trustworthiness
  • Category A (Foundation): D4 instrument alignment with A2 theoretical framework

Checkpoint Information

D4 (Measurement Instrument Developer) requires CP_METHODOLOGY_APPROVAL (🔴 REQUIRED) before scale construction to ensure alignment with research design and theoretical framework.

Best Practices

  • Sample size justification: Always provide power analysis (quant) or saturation rationale (qual) — coordinate with C1
  • Protocol pilot testing: Test interview/observation protocols with 2-3 participants before full data collection
  • Instrument validation: Minimum evidence = content validity + internal consistency
  • Ethical considerations: All protocols reviewed by D2/D4 must address informed consent, privacy, and data security
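The power-analysis practice above can be sketched with the standard normal approximation for a two-sample t-test. This is a generic textbook formula, not C1's procedure; round up and add one or two participants per group to cover the t-distribution correction:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample t-test,
    given a standardized effect size (Cohen's d).
    Normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

For a medium effect (d = 0.5) at α = 0.05 and 80% power this gives 63 per group, close to the exact t-based answer of 64.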

Auto-Trigger Examples

User Input: "I need to interview 20 teachers about AI adoption"
Detected: Keywords: "interview", "20 teachers" → Triggers D2 (Data Collection Specialist)
Execution: D2 develops semi-structured interview protocol with observation components if needed
User Input: "Create a scale to measure student motivation in AI-assisted learning"
Detected: Keywords: "scale", "measure" → Triggers D4 (Instrument Developer)
Execution: 🔴 CP_METHODOLOGY_APPROVAL → D4 presents: Adapt existing (AMS) vs. New scale → Human decision
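The auto-trigger behavior above can be sketched as simple keyword routing. This is a hypothetical illustration of the matching logic, not the system's actual dispatcher; the keyword lists and checkpoint mapping are copied from this page:

```python
# Keyword lists mirror the "Trigger Keywords" sections above.
TRIGGERS = {
    "D2": ["interview", "focus group", "semi-structured", "probing",
           "observation", "field notes", "participant observation",
           "video analysis", "ethnography"],
    "D4": ["instrument", "scale development", "measurement", "validity",
           "reliability", "likert scale", "scale", "measure"],
}

# D4 requires human approval before scale construction (🔴 checkpoint).
CHECKPOINTS = {"D4": "CP_METHODOLOGY_APPROVAL"}

def route(user_input):
    """Return (agent_id, required_checkpoint) for the first agent
    whose keywords match, or (None, None) if nothing triggers."""
    text = user_input.lower()
    for agent, keywords in TRIGGERS.items():
        if any(kw in text for kw in keywords):
            return agent, CHECKPOINTS.get(agent)
    return None, None
```

Applied to the two example inputs, this routes the interview request to D2 with no checkpoint and the scale request to D4 behind CP_METHODOLOGY_APPROVAL.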