The Correct Way to Use Chain-of-Thought Prompting: Avoiding Common Pitfalls


I recently attended an AI in Finance conference and was surprised to discover that many researchers are using chain-of-thought (CoT) prompting incorrectly. This powerful technique can significantly improve reasoning in LLMs—but only when implemented properly. Let’s clarify the right approach, especially in zero-shot settings.

What is Zero-Shot Chain-of-Thought?

Zero-shot CoT involves two distinct rounds of prompting without using any task-specific examples. First, you prompt the model to generate step-by-step reasoning. Then, in a second round, you explicitly ask for the final answer based on that reasoning. This differs from few-shot CoT, which includes labeled examples.

Example Question

Consider the question:
“A company just announced a 20% dividend increase while simultaneously reporting declining revenues. Is this news good or bad?”

Incorrect Single-Stage Approach (Common Mistake)

# WRONG IMPLEMENTATION: a single call that mixes reasoning and answer
response = llm.generate(
    prompt=(
        "Let's think step by step: A company just announced a 20% dividend "
        "increase while simultaneously reporting declining revenues. "
        "Is this news good or bad?"
    )
)
# The output contains both the reasoning AND the final answer in one
# response, so there is no clean way to separate or audit the two.

Correct Zero-Shot CoT Approach

# STEP 1: Trigger step-by-step reasoning (no answer requested yet)
reasoning_prompt = (
    "Q: A company just announced a 20% dividend increase while "
    "simultaneously reporting declining revenues. Is this news good or bad?\n"
    "A: Let's think step by step."
)
intermediate_response = llm.generate(reasoning_prompt)

# STEP 2: Ask for the final answer, conditioned on the reasoning from step 1
answer_prompt = f"""
Based on this analysis: '{intermediate_response}'
Is the news good or bad? Answer ONLY 'good' or 'bad'."""
final_answer = llm.generate(answer_prompt)
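
To make the two-round structure reusable, here is a minimal sketch that wraps both calls in a single function. It assumes the same hypothetical llm.generate interface used in the snippets above, and it keeps the answer instruction tied to the running good/bad example:

def zero_shot_cot(llm, question: str) -> tuple[str, str]:
    # Round 1: elicit step-by-step reasoning only; no answer is requested yet.
    reasoning = llm.generate(f"Q: {question}\nA: Let's think step by step.")

    # Round 2: condition on that reasoning and ask for the final answer alone.
    answer = llm.generate(
        f"Based on this analysis: '{reasoning}'\n"
        "Is the news good or bad? Answer ONLY 'good' or 'bad'."
    )
    return reasoning, answer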

Why This Matters

  1. Prevents answer bleeding: separating the reasoning step from the final answer keeps a premature conclusion from biasing the model’s explanation
  2. Improves transparency: You get a clean, auditable chain of logic
  3. Reduces hallucination: Clear separation minimizes speculative or fabricated conclusions

Explainable Prompting vs. Chain-of-Thought Prompting

Although both aim to improve interpretability, they differ in structure and use case:

| Feature | Explainable Prompting | Chain-of-Thought Prompting |
|---|---|---|
| Goal | Provide human-readable justification | Encourage structured reasoning |
| Output structure | Single response with embedded rationale | Two-step: reasoning, then answer |
| Best for | Summaries, end-user reports | Complex logic, analysis |
| Process | “Explain why…” style prompts | Sequential prompt-response flow |
| Example | “Explain why this is a strong argument” | “Solve this logic puzzle step-by-step” |

Key Differences:

  • Explainable prompting focuses on self-contained narratives.
  • CoT emphasizes separating the reasoning process from the final decision.
  • Use Explainable Prompting for clear summaries, and CoT when correctness and traceability are essential.
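
To make the contrast concrete, here is a small sketch, again assuming a hypothetical llm.generate callable, that shows the two styles side by side on the dividend example:

# Explainable prompting: one self-contained response with an embedded rationale
explanation = llm.generate(
    "Explain why a 20% dividend increase alongside declining revenues "
    "could be good or bad news for investors."
)

# Chain-of-thought prompting: reasoning first, decision second
reasoning = llm.generate(
    "Q: A company announced a 20% dividend increase but declining revenues. "
    "Is this news good or bad?\nA: Let's think step by step."
)
decision = llm.generate(
    f"Based on this analysis: '{reasoning}'\nAnswer ONLY 'good' or 'bad'."
)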

Key Implementation Rules

  1. Do not include “so the answer is…” in the initial reasoning prompt
  2. Always split into two prompts/responses
  3. Validate or sanitize the intermediate reasoning and the final answer if needed (see the sketch below)
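
As a rough illustration of rules 1 and 3, here is a minimal sketch of such validation. The helper names are hypothetical; the first strips any conclusion that leaked into the intermediate reasoning, and the second normalizes the free-form second response onto the two allowed labels:

import re

def sanitize_reasoning(reasoning: str) -> str:
    # Drop sentences where the model jumped ahead to a conclusion,
    # e.g. "So the answer is good." -- only the reasoning steps are kept.
    sentences = re.split(r"(?<=[.!?])\s+", reasoning)
    kept = [s for s in sentences if not re.search(r"\banswer is\b", s, re.IGNORECASE)]
    return " ".join(kept).strip()

def normalize_answer(raw: str) -> str:
    # Map the second-round response onto 'good' or 'bad'.
    text = raw.strip().lower()
    if "good" in text and "bad" not in text:
        return "good"
    if "bad" in text and "good" not in text:
        return "bad"
    return "unclear"  # flag ambiguous outputs for manual review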
