You’ve uploaded your sources. You ask “summarize this.” NotebookLM gives you a summary. It’s fine. But it’s not what you actually needed: the specific argument in section 3, the data point that contradicts the author’s conclusion, or every claim that isn’t backed by a citation. The difference between a mediocre and a genuinely useful NotebookLM session is prompt specificity. Vague questions produce vague answers. This guide gives you 30 prompts organized by use case — research, learning, Audio Overview, content creation, and business strategy.
Key Takeaways
- NotebookLM reached 31.5 million monthly visits by late 2024 because its citation-grounded AI is genuinely useful, but most users only scratch the surface with basic summary requests
- Prompts that specify format (table, list, quote), perspective (skeptic, executive, peer reviewer), and scope (this source vs. all sources) produce dramatically better output
- The prompts below cover eight use cases: source understanding, synthesis, output preparation, quality checking, Audio Overview, learning, content creation, and business strategy
- Kortex’s Prompt Library lets you save any prompt and insert it with one click, eliminating the need to retype the same prompts every session
Why do most NotebookLM prompts produce weak results?
NotebookLM reached 31.5 million monthly visits by late 2024, making it one of the fastest-adopted AI research tools ever released. Most users arrive expecting a smarter search engine. They ask the questions they’d type into Google: short, general, and open-ended.
The problem is that NotebookLM is not a search engine. It’s a reasoning system over a specific document set. The quality of its output scales directly with the specificity of your instruction. “Summarize this paper” gives you a summary. “List every major claim in this paper, and for each, tell me whether it’s backed by data, by citation, or by the author’s opinion alone” gives you a structured audit you can act on.
Three things make prompts more effective with NotebookLM:
Specify a format. Asking for a table, a numbered list, a timeline, or a set of quotes forces the model to organize information rather than prose-dump it.
Specify a perspective. “What would a skeptical peer reviewer say?” and “What would a time-pressed executive need to know?” pull out completely different layers of the same source set.
Specify scope. “Across all my sources” versus “in this specific document” produce different levels of synthesis. Use both, at different stages.
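The three levers can be treated as slots in a template. A minimal illustrative sketch — the `build_prompt` helper and its parameters are hypothetical conveniences for composing prompt text, not any NotebookLM API:

```python
def build_prompt(task, fmt=None, perspective=None, scope=None):
    """Assemble a NotebookLM prompt from the three specificity levers.
    Any lever left as None is simply omitted from the prompt."""
    parts = []
    if scope:
        parts.append(f"{scope},")                  # e.g. "Across all my sources"
    if perspective:
        parts.append(f"acting as {perspective},")  # e.g. "a skeptical peer reviewer"
    parts.append(task)                             # the core instruction
    if fmt:
        parts.append(f"Format the answer as {fmt}.")
    return " ".join(parts)

print(build_prompt(
    task="list every major claim and whether it is backed by data, citation, or opinion.",
    fmt="a three-column table",
    perspective="a skeptical peer reviewer",
    scope="Across all my sources",
))
```

The point is not the code itself but the habit: every prompt you write should fill at least two of the three slots.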
The prompts below apply all three principles. The complete student workflow shows how these prompts fit into a week-by-week research sequence for academic work specifically.
What are the best prompts for understanding new sources?
These five prompts are designed for the first 30 minutes of any research session — before reading a single source in detail. They extract the field’s structure, key disagreements, and evidence quality across your entire source set and produce a working map that makes every subsequent hour of reading significantly more efficient.
Use these when you’ve just uploaded documents and need a map of what you’re working with. Run them before you read anything in detail.
Prompt 1: The contradiction finder
“What claims in these sources directly contradict each other? List each contradiction with quotes from both sides.”
What makes it work: Forces NotebookLM to identify disagreement rather than consensus. Most summaries blend conflicting views into false agreement. This prompt surfaces where your sources actually fight each other, which is where the interesting research questions live.
When to use it: Any time you have three or more sources on the same topic. Especially useful before writing a literature review or preparing a debate-style presentation.
Prompt 2: The evidence auditor
“List every major claim in these sources. For each, tell me: is it backed by data, by citation, or by the author’s opinion only?”
What makes it work: The three categories (data, citation, opinion) force a structured output and reveal how much of a source is asserted versus supported. You’ll often find that arguments you found persuasive were almost entirely opinion-based.
When to use it: Before relying heavily on any source for a high-stakes output. Also useful for journalism and fact-checking workflows.
Prompt 3: The gap finder
“What questions do these sources raise that they don’t answer? What’s conspicuously absent from this literature?”
What makes it work: NotebookLM can identify what’s missing because it understands the shape of the argument. This prompt surfaces the research gaps that your own paper or project might address.
When to use it: When developing your research question or identifying the contribution your work will make.
Prompt 4: The assumption surfacer
“What assumptions does this source make that it never explicitly states? What would need to be true for its conclusions to hold?”
What makes it work: Every argument rests on unstated premises. This prompt makes those premises visible. It’s the difference between accepting a conclusion and understanding its foundations.
When to use it: When reading persuasive sources that you want to engage with critically. Essential for policy research, philosophy, and any field where framing shapes conclusions.
Prompt 5: The methodology critique
“If this were submitted for peer review, what methodological objections would a skeptical reviewer raise? What are the weakest links in the research design?”
What makes it work: Adopting the peer reviewer’s stance shifts the evaluation from “what does this argue?” to “how solid is this argument?” The skeptical framing prevents the model from just listing strengths.
When to use it: Before citing a source in high-stakes work. Especially valuable for quantitative studies where methodology can quietly undermine findings.
What are the best prompts for synthesizing multiple sources?
These five synthesis prompts require at least three sources to be meaningful and work best after you’ve completed the initial source map. They surface cross-source patterns — consensus views, active contradictions, chronological development, stakeholder motivations — that reading sources individually never reveals. The consensus map in particular needs five or more sources to produce a useful table.
Use these when you have a full set of sources loaded and need to find patterns across them.
Prompt 6: The consensus map
“Across all my sources, what do all authors agree on? What does only one author claim? Format as a table with three columns: widely agreed, contested, unique claim.”
What makes it work: The table format makes the output immediately scannable. The three-column structure distinguishes solid ground (agreed) from interesting territory (contested) from outliers (unique claims).
When to use it: When writing a literature review section or trying to identify the dominant view in a field before taking a position.
Prompt 7: The timeline builder
“Extract all dates and events mentioned across my sources. Build a chronological timeline with source attribution for each entry.”
What makes it work: Source attribution in the timeline means you can trace any event back to its original document. Useful for historical research, case study construction, and tracking how a situation evolved.
When to use it: For any research involving narrative sequence: legal cases, policy history, organizational change, medical case studies.
Prompt 8: The stakeholder mapper
“Who are the key stakeholders mentioned across these sources? For each, describe what they want, what they fear, and what they stand to gain or lose.”
What makes it work: The want/fear/gain/lose framework forces NotebookLM to infer motivation, not just role. “The pharmaceutical company” becomes much more analytically useful when you know what it’s optimizing for.
When to use it: Policy analysis, business case research, negotiation prep, or any situation involving competing interests.
Prompt 9: The definition reconciler
“The term ‘[X]’ appears in multiple sources. How does each source define it differently? Which definition is most precise and why?”
What makes it work: Terminological inconsistency is one of the most common hidden problems in research. This prompt makes definitional conflicts explicit before they contaminate your own writing.
When to use it: Whenever a key term appears frequently across sources in a field where definitions are contested. Especially valuable in social science, law, and medicine.
Prompt 10: The surprising connection
“What’s the most non-obvious connection you can draw between sources that appear unrelated on the surface? Explain the connection and which sources support it.”
What makes it work: Forces synthesis rather than summary. The “non-obvious” constraint prevents the model from stating things explicitly addressed in the sources and pushes toward genuine analytical cross-pollination.
When to use it: When you’re looking for a fresh angle or trying to connect disparate literature into a coherent argument.
What are the best prompts for preparing research outputs?
These five prompts convert raw research into formats your audience can act on: tightly scoped executive briefs, steelmanned arguments, devil’s advocate cases, beginner-friendly FAQs, and structured decision frameworks. Specifying a role (executive, skeptic, advocate) consistently produces more useful output than specifying a format alone — the role forces the model to adopt a perspective, not just reorganize.
Use these when you’re ready to turn your research into a report, article, presentation, or decision. For moving outputs into a long-term knowledge base, the NotebookLM vs Notion comparison covers when each tool fits which stage of the workflow.
Prompt 11: The executive brief
“Summarize the 5 most important things a time-pressed executive needs to know from these sources. No more than 3 sentences per point. Lead each point with the business implication, not the academic finding.”
What makes it work: The “business implication first” constraint forces translation from academic language to decision-relevant language. It prevents the summary from leading with methodology or context.
When to use it: Any time you need to present research to an audience that didn’t read the underlying sources and needs to make decisions, not deepen their understanding.
Prompt 12: The steelman builder
“Give me the strongest possible version of the argument that [X]. Use only evidence from my sources. Make it as compelling as possible. Don’t hedge.”
What makes it work: The “don’t hedge” instruction is critical. Most AI summaries are so balanced they become useless for advocacy. This prompt builds the best case for a specific position.
When to use it: Before a debate, negotiation, or persuasive presentation. Also useful for stress-testing your own position by making the opposing case as strong as possible.
Prompt 13: The devil’s advocate
“Now argue the opposite position with equal conviction. What’s the strongest case against [X]? Use evidence from my sources.”
What makes it work: Pair this with Prompt 12. Running both gives you a complete view of the argumentative landscape. The “equal conviction” instruction prevents a weak counterargument that only exists to confirm your original position.
When to use it: Always run this after Prompt 12. If the devil’s advocate case is stronger than your steelman, you should probably revise your position.
Prompt 14: The FAQ generator
“Generate 10 questions someone completely unfamiliar with this topic would ask when first encountering it. Answer each question using only my sources. Keep answers under 100 words each.”
What makes it work: Builds a natural “beginner-friendly” knowledge structure that’s much easier to work with than a dense literature summary. The 100-word constraint prevents answer-bloat.
When to use it: Before explaining the topic to a non-expert audience, teaching a concept, or writing a “what is X?” explainer article.
Prompt 15: The decision framework
“I need to decide whether to [decision]. Extract all relevant evidence from my sources and organize it as: evidence for, evidence against, unknowns that would change the decision.”
What makes it work: The “unknowns that would change the decision” column is the most valuable part. It tells you what additional research would actually move the needle, rather than just confirming what you already believe.
When to use it: Before any significant decision informed by research — investment, strategy, hiring, policy choices.
What are the best prompts for quality checking before publishing?
These two prompts catch the errors self-review consistently misses: claims that felt supported while writing but aren’t, and statistics likely outdated since your sources were published. Running both before finalizing any research output takes under five minutes and reliably surfaces at least one claim worth revising before it reaches a critical audience.
Use these before finalizing any research output. They catch problems your own reading won’t catch because you’re too close to the material.
Prompt 16: The citation checker
“I’ve written the following: [paste your draft text]. For every factual claim in this text, tell me: does my source material support it, contradict it, or is there no relevant source? Flag any unsupported claims.”
What makes it work: Catches the gap between what you wrote and what your sources actually support. Common failure mode: you understood something correctly but your paraphrase introduced a subtle distortion.
When to use it: Before any research output that will be read by a critical audience. Non-negotiable for academic submission, journalism, and legal documents.
Prompt 17: The recency auditor
“Which facts and figures in my sources are most likely to be outdated? What would I need to verify before publishing? Flag any statistics, policies, or trends that may have changed since publication.”
What makes it work: NotebookLM can identify data that’s time-sensitive (market statistics, policy details, epidemiological figures) and flag them for verification. It doesn’t know what’s changed, but it knows what’s likely to change.
When to use it: Any time you’re working with sources more than 12-18 months old. Source publication dates are visible in NotebookLM, so the model can identify which sources may be stale.
What are the best prompts for NotebookLM’s Audio Overview?
Type these into the “Customize” box before generating. They change the entire character of what gets produced — not just the topic, but the depth, pace, and audience assumption.
Prompt 18: The tight brief
“Generate a 3-minute brief Audio Overview focusing only on [SPECIFIC TOPIC]. Skip the introduction and go straight to the main points. Prioritize practical examples over theory. Speak as if the listener has already read a summary of these documents.”
What makes it work: Forces short, focused output instead of broad rambling. The “already read a summary” instruction cuts all the context-setting that makes default overviews feel slow.
When to use it: When you know what you want and don’t need the full conversation. Commute listening, quick refreshers on a single paper or section.
Prompt 19: The expert-level deep dive
“Generate an Audio Overview that speaks to someone who has already read the material and does not need definitions explained. Assume expert-level familiarity with [FIELD]. Focus on: second-order implications of the key findings, where sources disagree and why it matters, what questions the material leaves unanswered. Skip all introductory framing. Start mid-conversation.”
What makes it work: The default Audio Overview assumes a general audience. Specifying expert familiarity stops the hosts from explaining basic concepts and pushes the conversation into analytical territory.
When to use it: When you’ve absorbed the basic content and want to push the discussion deeper. Works especially well with papers you’ve already read once.
Prompt 20: The expert-beginner format
“Generate a structured podcast interview. One host should act as the expert clearly defining key terms and concepts, while the other plays the curious beginner asking clarifying questions. Ensure the pace is steady for easy note-taking. Give complex concepts at least one analogy each. The beginner host should challenge assumptions, not just ask for definitions. End with 3 actionable takeaways.”
What makes it work: Structures hosts as teacher/student. The role assignment forces alternation between depth and accessibility. The “challenge assumptions” instruction stops the beginner from just being a passive question machine.
When to use it: When learning a new field or preparing to explain something to a non-expert. Pairs well with Kortex’s Podcast Pipeline for automatically routing new overviews to a personal RSS feed.
What are the best prompts for studying and learning?
These prompts produce flashcards, quizzes, and structured learning frameworks directly from whatever you’ve uploaded. No separate study tool required.
Prompt 21: The Feynman explainer
“Explain the uploaded material as if you were teaching a curious 7th-grade student. Use simple language and short sentences. Do not assume prior knowledge. Use analogies for every complex term. Ground abstract concepts in a real-world physical example. Focus on WHY this matters, not just WHAT it is. If any section is dense or technical, convert it into a short True/False quiz to check understanding.”
What makes it work: Named after physicist Richard Feynman’s learning technique: if you can’t explain something simply, you don’t fully understand it. NotebookLM applying this constraint reveals which concepts in your sources are genuinely clear versus which are technically described but conceptually murky.
When to use it: When starting with unfamiliar material, preparing to teach a concept, or testing your own understanding before an exam or presentation.
Prompt 22: The flashcard builder
“Generate 15 flashcards from this document. Format each as: Q: [question] / A: [answer]. Focus on: key definitions, core concepts, cause-effect relationships. Keep each answer under two sentences. Include the page number or section name next to each card. Weight towards concepts the author emphasizes most.”
What makes it work: The source reference in each card means the output drops directly into Anki or any spaced-repetition system. The “author emphasizes most” instruction prevents cards drawn only from the introduction and conclusion rather than the substance.
When to use it: Exam prep, spaced repetition, building a reference card set when entering a new field.
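If you route those cards into Anki, the `Q: … / A: …` format converts cleanly to a tab-separated file Anki can import. A minimal sketch, assuming NotebookLM returned one Q/A pair per line — the exact output shape varies between sessions, so treat the regex as a starting point:

```python
import csv
import io
import re

def cards_to_tsv(raw: str) -> str:
    """Convert 'Q: ... / A: ...' lines into Anki-importable TSV (front<TAB>back)."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    for line in raw.splitlines():
        # Non-greedy match on the question so the first " / A:" splits the card
        m = re.match(r"Q:\s*(.+?)\s*/\s*A:\s*(.+)", line.strip())
        if m:
            writer.writerow([m.group(1), m.group(2)])
    return out.getvalue()

sample = "Q: What is photosynthesis? / A: Conversion of light into chemical energy. (p. 12)"
print(cards_to_tsv(sample))
```

In Anki, choose File → Import and select tab as the field separator; column one becomes the front of the card, column two the back.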
Prompt 23: The skill gap audit
“Based on these sources, identify the core competencies someone needs to master [SKILL/FIELD]. For each competency: rate it as foundational, intermediate, or advanced. Quote the source that establishes its importance. Suggest one specific exercise or practice from the sources to develop it. Present as a self-assessment table I can use to track my progress.”
What makes it work: Converts passive source material into a personal learning roadmap. The three-tier rating distinguishes what you need before anything else from what’s worth investing in later.
When to use it: Professional development, upskilling in a new domain, structuring a self-directed curriculum.
What are the best prompts for content creation?
These prompts convert source material into publication-ready drafts. Every claim stays grounded in your uploads — which means less fact-checking after writing.
Prompt 24: The blog post outline
“Using all uploaded sources, create a detailed blog post outline for the topic: [TITLE]. Target audience: [AUDIENCE]. Structure: Hook — the most surprising stat or claim from the sources (cite it). H2 sections — 4–5 sections, each grounded in at least one source. For each section: key argument + supporting quote + one counterargument. CTA — what should the reader do next based on the sources? Flag any section where the sources are thin and I need to research more.”
What makes it work: The “flag thin sections” instruction is the key differentiator. It identifies where you need additional research before you start writing, rather than discovering it mid-draft when stopping is more disruptive.
When to use it: Blog posts, articles, and research reports where every section needs to be sourceable.
Prompt 25: The newsletter issue builder
“Using these sources, draft a newsletter issue for [AUDIENCE] on [TOPIC]. Format: SUBJECT LINE — 3 options ranked by predicted open rate. HOOK (50 words) — most compelling insight from the sources. SECTION 1 — What happened: key development, cite the source. SECTION 2 — Why it matters: implications for the reader specifically. SECTION 3 — What to do: 1–3 concrete actions. CLOSING — one thought-provoking question. Tone: [casual / professional / opinionated].”
What makes it work: The structured sections stop NotebookLM from producing a prose summary and force the editorial shape of an actual newsletter. The three subject line options — with a ranking rationale — are immediately usable.
When to use it: Newsletter creators turning research or news coverage into a regular publication for a specific audience.
Prompt 26: The thought leadership post
“Using these sources, write a thought leadership post for [LinkedIn / X / Substack] on [TOPIC]. Open with the most surprising or counterintuitive insight from the sources — no ‘I’ve been thinking about…’ opener. Take a clear, specific stance — not both sides. Back each key claim with a stat or quote from the sources. End with one question that invites debate. Word count: 150 for X, 300 for LinkedIn, 600 for Substack.”
What makes it work: The platform-specific word count and the ban on the “I’ve been thinking about…” opener address the two most common failure modes in AI-drafted social posts: wrong length and a weak hook.
When to use it: Founders, executives, and content creators building an audience through consistent publication on a specific topic.
What are the best prompts for business and strategy work?
These prompts work on strategy decks, customer research, contracts, and planning documents. Each demands evidence, not inference — making outputs directly usable in business contexts.
Prompt 27: The product decision memo
“Act as a Lead Product Manager reviewing internal documentation. Ruthlessly scan for actionable insights, ignoring fluff. Synthesize into a Decision Memo: USER EVIDENCE — direct quotes indicating user problems. FEASIBILITY CHECKS — technical constraints mentioned in the sources. BLIND SPOTS — what’s missing from the source text that a PM would need before deciding. DECISION — based only on these sources, what should we build or not build next? Use bullet points. No summaries of what the documents say — only what they mean for product decisions.”
What makes it work: The BLIND SPOTS section is what separates this from a standard summary. It identifies what the documents don’t cover — the gaps a product manager would need to fill before making a real decision rather than an informed-sounding guess.
When to use it: Product reviews, roadmap planning, customer research synthesis. Also useful for investors reviewing a startup’s internal documentation.
Prompt 28: The risk and assumption audit
“Review all uploaded strategy or planning documents. Identify: STATED RISKS — risks the documents explicitly acknowledge. UNSTATED ASSUMPTIONS — things the plan assumes are true but never validates. SINGLE POINTS OF FAILURE — where does the whole plan break if one thing goes wrong? RED FLAGS — claims not supported by evidence in the sources. Rate each item as High, Medium, or Low severity. Cite the specific document and section for each finding.”
What makes it work: Unstated assumptions are the most dangerous element in any plan because they’re invisible by design. This prompt makes them visible by treating absence of evidence as its own signal.
When to use it: Before any major launch, investment, strategic pivot, or board presentation. Essential for due diligence on any document set where you need to know what the authors assumed rather than proved.
Prompt 29: The second-order implications scanner
“Based on these sources, identify the second-order implications of [TREND/DECISION/FINDING]. First-order effects: what the sources directly state will happen. Second-order effects: what will happen as a result of those effects — infer from the sources and flag when inferring. Third-order effects: longer-term consequences — clearly mark as speculative. For each implication: who is most affected, what’s the timeframe, and what would have to be true for this to NOT happen?”
What makes it work: First-order effects are already stated in your sources. Second and third-order effects are where the real strategic insight lives. The “flag when inferring” instruction keeps speculation clearly separated from sourced analysis.
When to use it: Strategy sessions, policy analysis, investment research, or any situation where the interesting question is not “what does this say” but “what does this mean three steps out.”
What is the single best prompt for deep research?
If you use one complex prompt per session, use this one. Run it after loading a full source set for a structured research-to-insight report in a single pass.
Prompt 30: The ultimate synthesis
“You are an expert research mentor specializing in [DISCIPLINE]. Using all uploaded sources: STEP 1 — MAP: Identify the 5–10 most recurring themes. Build a concept map showing how they relate. STEP 2 — GAP: Identify at least 3 categories of research gaps — (a) methodological, (b) theoretical, (c) empirical. For each, write a description, an example from the sources, and a suggested research question. STEP 3 — SYNTHESIZE: Write a 3-paragraph synthesis representing the state of knowledge in this field based only on these sources. STEP 4 — ACT: What should a practitioner, founder, or researcher do first based on this synthesis? Present as a structured report with citations throughout.”
What makes it work: The four-step structure (Map → Gap → Synthesize → Act) forces a complete research workflow rather than a response to a single question. Most prompts optimize for one output type; this one builds a full analysis pipeline. STEP 4 is often the most valuable — it converts knowledge into action, which is why you were reading the sources in the first place.
When to use it: Research project kickoffs, strategy sessions, deep reading of an unfamiliar field. Allow 60–90 seconds for generation — the output is long.
How do you save and reuse NotebookLM prompts?
Retyping these prompts every session is the biggest practical friction point. Kortex’s Prompt Library solves this: save any prompt with a name, organize prompts by category, and insert them into the NotebookLM chat with a single click.
A practical folder structure for the prompts above: Research (prompts 1–5 for understanding, 6–10 for synthesis), Output (11–15), QC (16–17), Audio (18–20), Learning (21–23), Writing (24–26), Business (27–29), and a Capstone folder for prompt 30.
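If you keep the library outside a dedicated tool, the same folder structure works as a plain dictionary in any scripting language. A minimal sketch — the category names mirror the folders above, and the prompt text is abbreviated to illustrate the shape:

```python
# Prompts organized by the same folders described above; text abbreviated.
PROMPT_LIBRARY = {
    "Research": {
        "contradiction finder": "What claims in these sources directly contradict each other? ...",
        "consensus map": "Across all my sources, what do all authors agree on? ...",
    },
    "QC": {
        "citation checker": "For every factual claim in this text, tell me: ...",
    },
}

def get_prompt(category: str, name: str) -> str:
    """Look up a saved prompt by folder and name; raises KeyError if missing."""
    return PROMPT_LIBRARY[category][name]

print(get_prompt("Research", "consensus map"))
```

Whether the library lives in Kortex or a text file, the payoff is the same: the prompt is one lookup away instead of one rewrite away.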
The first session you spend saving your best prompts pays dividends across every subsequent project. Pairing a saved prompt library with a consistent notebook organization system multiplies the value of both — the guide to organizing 50+ notebooks covers a tag system where notebook categories map directly to prompt categories. If you’re already using Kortex for export and organization, the getting started guide covers Prompt Library setup on the same page as export configuration.
Frequently asked questions
Do these prompts work on all document types?
Yes, with one caveat: prompts requesting methodology critiques or evidence audits work best on research papers and analytical reports. For narrative sources (books, journalism, transcripts), the gap finder and stakeholder mapper tend to be more productive than the evidence auditor.
How many sources should I upload before running these prompts?
For understanding prompts (1–5), even one or two sources are enough. For synthesis prompts (6–10), you need at least three — the consensus map needs at least five to produce a useful table. Business, content, and advanced prompts (18–30) generally work better with three or more sources, though the flashcard builder and Feynman explainer work fine on a single document.
Can I combine multiple prompts in one message?
You can, but generally don’t. NotebookLM performs better on focused, single-purpose requests. Running prompts 6 and 7 in sequence (separate messages) produces better output than asking for both simultaneously.
How specific should I be when referencing a paper in a prompt?
Use the title or author name. NotebookLM can identify sources within your notebook by title, author, or topic. “In the Smith 2024 paper” is clear. “In that recent paper” is ambiguous.
How do I handle it when NotebookLM says “I don’t have information about that”?
It means the concept isn’t in your source set. Either the term is absent from your documents, or it’s present but described differently. Try the definition reconciler prompt first: it often surfaces terminology differences that explain why direct searches fail.
Save your best prompts in Kortex and run them with one click on any notebook. The free tier includes a Prompt Library with 10 saved slots. Install Kortex →