Prompt Engineering Isn’t the Skill You Think It Is

How-to guides · Nov 20, 2025

Every few weeks there’s a new post doing the rounds about “mastering prompt engineering”.

The latest one I saw claimed the author had spent 1,000 hours developing the perfect prompt framework. A long thread, a catchy acronym, multiple reposts by “AI influencers” on LinkedIn and other social media, the whole nine yards.

The top reply to the Reddit post? Someone who’d built their own framework that performed just as well – without the 1,000 hours of development.

That pretty much sums up the “prompt engineering” hype for me.

Most of these frameworks aren’t some kind of LLM magic; they’re just basic communication wrapped in an acronym. If you can already explain yourself clearly, you’re doing the important parts already – with or without a fancy label.

Prompt frameworks are just checklists for clarity

Take any of the popular frameworks – KERNEL, PRISM, whatever’s trendy this week. Strip away the branding and they all say the same thing:

  • Give context
  • State the task
  • Add any constraints
  • Describe the output format
  • Maybe give an example

That’s it. That’s how you should be asking anyone for help – human or machine.
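The checklist above can be sketched in a few lines of code. This is just an illustration – a hypothetical `build_prompt` helper, not part of any framework’s actual tooling – showing that the whole “framework” is string assembly from five plain-language parts:

```python
def build_prompt(context, task, constraints=None, output_format=None, example=None):
    """Assemble a prompt from the five checklist items.

    Only context and task are required; constraints, output format and
    example are optional, mirroring the "maybe give an example" step.
    """
    parts = [f"Context: {context}", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if output_format:
        parts.append(f"Output format: {output_format}")
    if example:
        parts.append(f"Example:\n{example}")
    return "\n\n".join(parts)


# Example brief (invented for illustration):
prompt = build_prompt(
    context="Quarterly maintenance report for pump assets on Site A.",
    task="Summarise the top three recurring faults.",
    constraints=["Max 150 words", "Plain English, no jargon"],
    output_format="Bulleted list",
)
print(prompt)
```

That’s the entire trick: if you’d naturally write those five parts in an email to a colleague, the helper adds nothing you weren’t already doing.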

When I use AI language tools, I just talk the way I brief a colleague. I give background, explain what I’m trying to do, note any constraints, and say what I want back.

To test things out, I fed the KERNEL framework into Microsoft Copilot’s “prompt coach” along with one of my regular LLM prompts and asked it to improve the prompt based on KERNEL. It told me I’d already hit every part of the framework except one letter – L, for “logical structure”. So, going by the framework, Copilot’s big contribution was to… reformat my words under headings.

“Build me a digital twin”

If I walked into a project room and said:

“Can you build me a digital twin?”

…everyone would stare at me.

You’d immediately ask:

  • For which assets?
  • For which phase?
  • For which use cases?
  • What’s in scope, what’s out?
  • What’s the budget and timeline?
  • Who’s actually going to use this thing?

But people type the AI equivalent of “build me a digital twin” every day and expect a perfect answer.

The model isn’t the problem. The prompt is, and the prompt is just a reflection of how clearly the person can think and communicate.

What actually matters when you talk to AI

Ignore the noise for a second. For real work, three things matter far more than any prompt acronym:

  1. Clarity of intent
    You need to know what “good” looks like. If you don’t know what you want, the model definitely won’t.
  2. Relevant context
    Models are pattern machines. The more signal you give them – project background, data samples, code snippets, audience, constraints – the narrower the space they have to guess in.
  3. Useful constraints
    Tell it what not to do. Length limits. Tone. Format. Example structures. Validation rules. This is where domain knowledge shows up.

None of that requires a framework. It just requires you to think the problem through and then explain it properly.

Prompt “engineering” has exposed our weak communication skills

One thing AI has done is make our communication habits painfully visible. People who were always clear, structured and detailed now get great results from AI tools.

People who sent vague emails, half-finished briefs and one-line support tickets now send vague prompts, get vague answers – and then blame the AI for being a “slop generator”. To those people, prompting frameworks feel like a shortcut, or even a cheat code: “If I follow this recipe, I don’t need to think too hard about what I’m asking.”

Where frameworks can help

To be fair, frameworks aren’t completely useless. They can help people who:

  • don’t normally write structured briefs
  • are new to the tools and need a checklist
  • panic when presented with a blank text box

As training wheels, that’s fine. The problem is pretending the training wheels are now a profession.

The real work isn’t the prompt

The other reason I dislike “prompt engineering” being pushed as a job title – or prompt packs being sold by AI grifters – is that the hard problems sit around the model, not in the wording:

  • gaining years of domain expertise
  • connecting internal data
  • building retrieval and context pipelines
  • designing workflows
  • integrating with existing tools
  • evaluating outputs
  • handling security and governance
  • deciding when not to use AI at all

That’s actual engineering and design work. The sentence you type into the box is the visible bit, but it’s the easy part.

So what should we actually be teaching people?

Instead of selling prompt frameworks like magic spells, we should probably focus on:

  • How to describe a problem clearly
  • How to give enough context without dumping everything
  • How to define success (“What do you want to do with this output?”)
  • How to check whether the answer is any good

Those skills are useful with AI, with colleagues, with clients, with support desks – everywhere. If more people could write a decent support ticket, project brief or email, most of the “prompt engineering” industry would evaporate overnight.

AI hasn’t created a brand-new superpower called prompt engineering; it’s just revealed an old truth:

People who communicate clearly get better results.

If you already give solid context, explain your intent, and set expectations, you’re doing fine. You don’t need a seven-step framework and a certificate to talk to a text box.

If you don’t, a framework might help you remember the steps, but it won’t think for you. Instead of spending 1,000 hours “perfecting” prompts, maybe spend a bit of time learning to brief someone you’d actually work with.

The AI, and everyone else around you, will thank you for it.
