Which prompt tweaks reduce hallucinations on ChatGPT and Perplexity

Quick Answer

The most effective prompt tweaks to reduce hallucinations differ for Perplexity and ChatGPT, but both follow similar foundational principles: referencing credible sources, adding constraints, and using verification steps.

Perplexity AI Hallucination Reduction

Source-Constrained Prompting: Explicitly instruct Perplexity to use specific sources (e.g., “Cite only peer-reviewed articles or SEC filings”).
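A minimal sketch of this pattern through Perplexity’s OpenAI-compatible API; the “sonar” model name and the prompt wording are illustrative assumptions, so check the current API docs:

```python
# Source-constrained prompting via Perplexity's OpenAI-compatible API.
# The "sonar" model name is an assumption; verify against current docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder key
    base_url="https://api.perplexity.ai",  # Perplexity's API endpoint
)

response = client.chat.completions.create(
    model="sonar",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Cite only peer-reviewed articles or SEC filings. "
                    "Omit any claim you cannot support from such a source."},
        {"role": "user",
         "content": "Summarize the key risk factors in Apple's latest 10-K."},
    ],
)
print(response.choices[0].message.content)
```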

Evidence-based Requests: Use phrases like “According to…,” “Based on recent studies…,” or “Cite all facts with a reference” to force source-backed answers.

Search Operator Usage: Use advanced search operators to restrict retrieved data to reliable domains (e.g., site:.gov, site:.edu).
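If you build queries programmatically, a tiny helper (hypothetical, not a Perplexity API feature) can append the operators for you:

```python
# Hypothetical helper: append site: operators to restrict a query
# to trusted top-level domains before sending it to Perplexity.
def restrict_to_domains(query: str, domains: list[str]) -> str:
    ops = " OR ".join(f"site:{d}" for d in domains)
    return f"{query} ({ops})"

print(restrict_to_domains("student loan forgiveness eligibility",
                          [".gov", ".edu"]))
# -> student loan forgiveness eligibility (site:.gov OR site:.edu)
```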

Format Demands: Request answers in tables with dedicated source columns, or enforce inline citations for every claim.
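One way to phrase such a demand (prompt text only; the wording is illustrative):

```python
# Illustrative format-demand prompt; works as a user message on either platform.
format_prompt = (
    "Answer as a two-column table: Claim | Source (inline link). "
    "Every row must have a source; drop any claim you cannot source."
)
```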

Deep Research Mode: Activate research tools or specialized spaces for more rigorous fact-checking and in-depth source cross-checking.

ChatGPT Hallucination Reduction

Grounded Answer Prompts: Direct ChatGPT to use only the sources provided or accessible in its context. Example: “Use only the sources below to answer. If the sources don’t contain the answer, say, ‘I don’t have enough evidence.’”
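A minimal sketch of a grounded prompt via the OpenAI Python SDK; the model name and the toy sources are placeholder assumptions:

```python
# Grounded answering: the model may only use caller-supplied sources.
# Model name and the toy sources are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sources = """[1] Example Corp 10-K (2023): revenue was $4.2B.
[2] Example Corp press release (Jan 2024): headcount was 1,200."""

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Use only the sources below to answer. If the sources "
                    "don't contain the answer, say 'I don't have enough "
                    "evidence.'"
                    f"\n\nSOURCES:\n{sources}"},
        {"role": "user", "content": "What was Example Corp's 2023 revenue?"},
    ],
)
print(resp.choices[0].message.content)
```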

Chain-of-Verification: Force the model to self-check: have it draft an answer, list critical verification questions, answer those questions from the sources, and then edit the initial draft to remove unsupported claims.
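A sketch of that loop as four API calls (OpenAI Python SDK; the model name, prompt wording, and `ask` helper are illustrative, not a fixed recipe):

```python
# Chain-of-verification: draft -> verification questions -> checked answers
# -> revised final answer. Prompt wording and model name are assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

sources = "..."   # paste source material here
question = "..."  # the user's question

# 1. Draft an answer from the sources only.
draft = ask(f"Using only these sources:\n{sources}\n\nAnswer: {question}")

# 2. List critical verification questions about the draft's factual claims.
checks = ask(f"List the key factual claims in this answer as questions:\n{draft}")

# 3. Answer each verification question strictly from the sources.
checked = ask(f"Answer each question using only these sources:\n{sources}\n\n{checks}")

# 4. Edit the draft, removing anything the checks did not support.
final = ask("Revise this draft so it keeps only claims the checks support.\n"
            f"Draft:\n{draft}\n\nChecks:\n{checked}")
print(final)
```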

Atomic Facts Checklist: Break the answer into bullet-pointed atomic facts, requiring a source for each and marking any fact without support for removal.
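As a follow-up turn, that can be as simple as (illustrative wording):

```python
# Illustrative atomic-facts follow-up prompt, sent after the initial answer.
followup = (
    "Break your previous answer into bullet-pointed atomic facts. "
    "Give one source per fact in [brackets]. Label any fact without a "
    "source as [NO SOURCE] and remove it from the final answer."
)
```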

Strict Citation Prompts: Specify “Cite a source for every statistic, quote, or named entity. If a claim has no source, do not include it.”

Post-generation Self-rating: Instruct the model to rate the support for every sentence after generation as Supported / Partially Supported / No Support, revising the output as needed.
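An illustrative phrasing of the self-rating pass:

```python
# Illustrative self-rating prompt, sent after the answer is generated.
rating_prompt = (
    "For every sentence in your previous answer, repeat the sentence and "
    "rate it Supported / Partially Supported / No Support against the "
    "sources above. Then output a revised answer keeping only Supported "
    "sentences."
)
```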

Lower Model Temperature: Lower the sampling temperature or, where the API exposes it, the top_p/top-k setting; this selects more probable, less speculative tokens, reducing error rates.
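This is an API-level control (ChatGPT’s web UI does not expose it). A minimal sketch with the OpenAI Python SDK; the model name is an assumption:

```python
# Low-temperature sampling via the OpenAI Python SDK.
# Model name is an assumption; usually tune temperature OR top_p, not both.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    temperature=0.2,      # low temperature -> more deterministic, less speculative
    messages=[{"role": "user",
               "content": "List the planets of the Solar System in order."}],
)
print(resp.choices[0].message.content)
```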

Platform Differences

Perplexity: Source constraints, evidence-first answers, search operators, and enforced citations. Strengths: auto-inlines sources, excels at research, and can query the web in real time.

ChatGPT: Source-limited input, chain-of-verification, citation requirements, and the atomic-facts checklist. Strengths: creative controls, strong self-correction, and high adaptability.

Prompting each platform for evidence, source constraints, and verification before output is the most reliable way to reduce hallucinated and unsupported information.
