How to write AI prompts that actually work — a beginner’s guide

Most prompt advice is either too abstract to use (“be specific”) or too magical to trust (“act as an expert”). Here are the patterns that actually pay off — with concrete examples, the order to learn them, and the one mistake most beginners make in their first week.

Prompt engineering is not really engineering. It is closer to writing — a small craft of describing what you want clearly enough that someone (or something) else can deliver it. Most of the advice on the internet is either too abstract to act on or too magical to trust. The patterns that actually work are simpler than they sound, and learnable in a few hours.

This is a beginner’s guide to writing AI prompts that produce useful, reliable output. The patterns below are in the order they pay off — learn the first three this week and you will be in the top quartile of users by Friday.

1. Tell the model who it is talking to

Before any other technique, give the model context about you and the situation. Most underwhelming AI outputs are not the model’s fault; they are the result of asking a generic question and getting a generic answer.

Weak prompt:

“Write an email to a customer who is unhappy with a refund.”

Better prompt:

“I run a small online shop selling handmade ceramics. A customer is unhappy that her order took six weeks because of a kiln issue. I want to refund her, apologize, and keep the relationship — she has bought from us before. Draft the email I should send. Warm, plain English, under 120 words.”

The second version is not asking the model to do more work. It is removing the guesswork. The output will be specific in ways the first version cannot be.

2. Give the model the constraint, not just the goal

The second most common reason AI output disappoints is that the writer told the model what they want but not what they don’t want.

If the output keeps coming back too long, say “under 150 words” up front. If it keeps coming back too formal, say “casual, like you are texting a colleague.” If it keeps including a section you don’t need, say “skip any introduction or summary; go straight to the steps.”

The model is not a mind-reader. The first draft will reflect a generic interpretation of your goal. Constraints are how you narrow it.

3. Show, don’t tell (few-shot)

For any task with a particular voice, format, or style, giving the model one or two examples is faster than describing what you want.

Example prompt:

“Rewrite the following product descriptions in the same voice as these two examples. Examples first:
[example 1]
[example 2]
Now rewrite these:
[product 1]
[product 2]”

This is called “few-shot prompting” in the technical literature. It works because the model is good at pattern-matching. One or two examples of what good output looks like is worth two paragraphs of abstract description.
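If you assemble prompts in code rather than typing them into a chat window, the same pattern is easy to script. Here is a minimal sketch — the function name and the placeholder text are illustrative, not from any particular library:

```python
def build_few_shot_prompt(instruction, examples, items):
    """Assemble a few-shot prompt: instruction, then examples, then new items."""
    parts = [instruction, "", "Examples:"]
    for i, example in enumerate(examples, 1):
        parts.append(f"{i}. {example}")
    parts.append("")
    parts.append("Now rewrite these:")
    for i, item in enumerate(items, 1):
        parts.append(f"{i}. {item}")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Rewrite the following product descriptions in the same voice as these examples.",
    [
        "Hand-thrown mug, glazed in sea-green. Holds your coffee and your attention.",
        "A bowl with a quiet wobble to it, because a person made it, not a machine.",
    ],
    ["Ceramic mug, 350ml, green.", "Handmade bowl, medium size."],
)
print(prompt)
```

The structure matters more than the wording: instruction first, examples clearly labeled, then the new items — so the model can see exactly which pattern it is meant to continue.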

4. The “ask me first” pattern

For any non-trivial task, the best first move is to have the model ask you questions before it produces any output. One line is enough, and it improves nearly every prompt:

“Before you start, ask me up to five clarifying questions if anything important is unclear.”

This produces a brief Q&A round, after which the model has the context it needed. The output is consistently better, and you have not wasted a draft.

For shorter tasks this is overkill. For anything you actually care about — a presentation, a strategy memo, a careful email — it is worth the extra two minutes.

5. The “review and revise” loop

The single biggest beginner mistake is treating the first output as the final output. Almost every AI draft is improved by one round of careful revision.

The pattern:

  • Get the first draft.
  • Read it carefully. Note three specific things you want to change.
  • Ask the model to revise, naming each of your three notes: “Tighten the second paragraph, remove the cliché in the third line, and make the closing less salesy.”
  • Repeat once if needed.

One full revision round usually moves an output from “fine” to “actually good.” Beginners skip this step because it feels like extra work. The opposite is true — the second draft is faster than the first, and it is the one worth using.
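If you are driving a model through an API rather than a chat window, the revision loop is just a growing message history. A sketch in the role/content message format that most chat APIs share — the model call itself is omitted, and `first_draft` stands in for whatever the model returned:

```python
# The revision loop as a message history. The actual model call is
# omitted; first_draft is a stand-in for the model's reply.
messages = [
    {"role": "user", "content": "Draft a short apology email to a customer whose order was six weeks late."},
]

first_draft = "Dear customer, we sincerely apologize for the delay..."  # model's reply
messages.append({"role": "assistant", "content": first_draft})

# One revision round: three specific notes, named explicitly.
messages.append({
    "role": "user",
    "content": (
        "Revise with these three changes: "
        "1) tighten the second paragraph, "
        "2) remove the cliche in the third line, "
        "3) make the closing less salesy."
    ),
})
```

The point of keeping the history is that the revision request lands with the draft still in context, so the model edits rather than starting over.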

6. The one mistake to avoid

If you are new to prompting and you do nothing else, do this: stop asking the model to lie about its limits.

Prompts that say things like “you are a world-class expert who never makes mistakes” or “you know everything about X” do not make the model more accurate. They make it more confident. That is the opposite of what you want.

The model is sometimes going to be wrong. The best prompts make space for that. End your important prompts with a single line:

“Flag any claim you are unsure about, and tell me what you would need to verify.”

This produces output that is honest about its limits — which makes it much more useful. The confidence you wanted is something you should bring; the verification is something you and the model should do together.

A short library of prompts to keep

A handful of prompts worth saving and reusing:

  • The summary prompt. “Summarize this in one paragraph, then in three bullets, then in one sentence. Be specific; don’t generalize.”
  • The reverse outline. “Read this draft and tell me what the actual structure is — section by section. Don’t suggest edits yet.”
  • The pre-call brief. “I have a 30-minute first call with [name] at [company]. Tell me their background, what their company does, two specific things I should ask, and one question they might be wrestling with.”
  • The decision frame. “I am deciding between [option A] and [option B]. Without telling me which to pick, lay out the three strongest arguments for each, and one question I should be able to answer before deciding.”
  • The voice-match. “Rewrite this email in the voice of the two examples I will paste. Keep the underlying message exactly the same.”
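If you keep prompts like these in a notes file, it is a short step to making them fill-in-the-blank templates. A minimal sketch in Python — the `PROMPTS` dict and its keys are illustrative, not part of any library:

```python
# A small reusable prompt library as string templates (names illustrative).
PROMPTS = {
    "summary": (
        "Summarize this in one paragraph, then in three bullets, "
        "then in one sentence. Be specific; don't generalize.\n\n{text}"
    ),
    "decision_frame": (
        "I am deciding between {option_a} and {option_b}. "
        "Without telling me which to pick, lay out the three strongest "
        "arguments for each, and one question I should be able to "
        "answer before deciding."
    ),
}

prompt = PROMPTS["decision_frame"].format(
    option_a="hiring a contractor", option_b="building in-house"
)
```

The payoff is consistency: the prompt you refined last month is the one you reuse today, with only the blanks changed.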

Where to go next

The patterns above will take you most of the way. The rest of the craft is domain-specific: what works for coding is different from what works for copywriting, which is different from what works for research. The best way to learn it is to keep a running notes file of the prompts that worked for you, and the small adjustments that improved them. Six months in, you will have a library more useful to you than any blog post.
