1. The Shift to Deterministic Output
Problem: "Chat" interfaces encourage vague querying, leading to non-actionable "vibes" based responses.
Solution: Treat the LLM as a Junior Analyst. Do not ask for opinions; assign specific deliverables with defined schemas.
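The "defined schemas" idea can be made concrete with a small sketch. Everything below is illustrative, not tied to any specific LLM API: the schema keys (`claims`, `text`, `weakness`, `rewrite`) and the helper `validate_deliverable` are hypothetical names chosen for this example.

```python
import json

# Hypothetical schema-constrained instruction: the model must return JSON,
# not free-form commentary.
SCHEMA_PROMPT = """Return ONLY valid JSON matching:
{"claims": [{"text": str, "weakness": str, "rewrite": str}]}"""

def validate_deliverable(raw: str) -> dict:
    """Reject 'vibes' responses: parse the reply and check required keys."""
    data = json.loads(raw)  # json.JSONDecodeError (a ValueError) on prose chatter
    for claim in data["claims"]:
        for key in ("text", "weakness", "rewrite"):
            if key not in claim:
                raise KeyError(f"missing field: {key}")
    return data

# A conforming response passes; a conversational reply fails fast.
ok = validate_deliverable(
    '{"claims": [{"text": "Fast site", '
    '"weakness": "no metric", "rewrite": "Cut load time 40%"}]}'
)
```

The point is that "deliverable" here means machine-checkable: a response that fails the parse is rejected before anyone reads it.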
2. The Handshake Protocol
Every task assignment must satisfy the 4-part Handshake before execution begins.
```mermaid
sequenceDiagram
    participant User as Architect
    participant AI as Operator
    User->>AI: 1. Objective (Definition of Done)
    User->>AI: 2. Context (Constraints/Tone)
    User->>AI: 3. Output Schema (JSON/Table)
    AI->>User: 4. Confirm Understanding / Ask Clarification
    User->>AI: Execute
    AI->>User: Deliverable
```
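The handshake above can be sketched as a pre-execution gate. This is a minimal illustration, not a real API: the class `TaskBrief` and the function `ready_to_execute` are hypothetical names, with fields mirroring the four parts in the diagram.

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    objective: str           # 1. Definition of Done
    context: str             # 2. Constraints / tone
    output_schema: str       # 3. JSON / table spec
    confirmed: bool = False  # 4. Operator has echoed understanding

def ready_to_execute(brief: TaskBrief) -> bool:
    """Execution begins only when all four handshake parts are satisfied."""
    return all([brief.objective, brief.context, brief.output_schema, brief.confirmed])

brief = TaskBrief(
    objective="Identify 3 weakest claims",
    context="Tech recruiters; terse, quantitative",
    output_schema="| Original | Critique | Proposed Rewrite | Metric |",
)
# Still blocked: the Operator has not confirmed understanding.
blocked = ready_to_execute(brief)
brief.confirmed = True
unblocked = ready_to_execute(brief)
```

The design choice worth noting is that confirmation is a field the Operator sets, not the Architect: skipping the echo-back step is what usually produces re-rolls.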
3. The Task Template (STD-TASK-BRIEF)
Use this schema for all complex requests. It forces constraint definition.
📄 TEMPLATE: STD-TASK-BRIEF
1. OBJECTIVE:
- [ ] Review Portfolio Copy
- [ ] Identify 3 weakest claims
2. CONTEXT:
- Target Audience: Tech Recruiters
- Tone: Confident, terse, quantitative
3. STEPS:
- READ input file
- EXTRACT claims
- CRITIQUE against "So What?" test
- REWRITE
4. OUTPUT_FORMAT:
| Original | Critique | Proposed Rewrite | Metric |
|----------|----------|------------------|--------|
5. QUALITY_BAR:
- No buzzwords ("passionate", "innovative")
- Every rewrite must contain a number.
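The template above is easy to keep as data and render on demand, which is what makes it reusable. The dict layout and the `render` helper below are a hypothetical sketch of that idea, not part of the STD-TASK-BRIEF spec itself.

```python
# Hypothetical machine-readable form of STD-TASK-BRIEF.
BRIEF = {
    "OBJECTIVE": ["Review portfolio copy", "Identify 3 weakest claims"],
    "CONTEXT": {"Target Audience": "Tech Recruiters",
                "Tone": "Confident, terse, quantitative"},
    "STEPS": ["READ input file", "EXTRACT claims",
              'CRITIQUE against "So What?" test', "REWRITE"],
    "OUTPUT_FORMAT": "| Original | Critique | Proposed Rewrite | Metric |",
    "QUALITY_BAR": ["No buzzwords", "Every rewrite must contain a number"],
}

def render(brief: dict) -> str:
    """Flatten the brief into the numbered-section text pasted into the chat."""
    lines = []
    for i, (section, body) in enumerate(brief.items(), start=1):
        lines.append(f"{i}. {section}:")
        if isinstance(body, list):
            lines += [f"- {item}" for item in body]
        elif isinstance(body, dict):
            lines += [f"- {k}: {v}" for k, v in body.items()]
        else:
            lines.append(body)
    return "\n".join(lines)

prompt_text = render(BRIEF)
```

Editing the dict and re-rendering is where the template-reuse time savings in the next section come from: the structure is written once and only the contents change per task.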
4. Efficiency Metrics
Adopting this protocol resulted in:
- Prompting Time: reduced from 8 minutes to 3 minutes per task (template reuse).
- Re-roll Rate: reduced by 60% (clearer initial constraints).