Serge Billiouw

Bad GenAI output? It’s not always the prompt’s fault.

Most people reach for prompt engineering when GenAI misbehaves.

  • Tweaking wording.
  • Changing tone.
  • Trying “act as if” for the 10th time.

Sometimes it works.
But more often?
You’re fixing the wrong thing.

The real secret:
🧠 Feed it more context.

We ran a project where GenAI had to classify customer service calls.
Early results were meh.
Accuracy was inconsistent.
Until we changed one thing:

We started injecting customer context into the prompt (quick sketch after this list):
✅ Products they use
✅ Previous support calls
✅ Interaction history
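
Here's a minimal sketch of what that injection can look like. Every name here (the function, the fields, the transcript) is illustrative, not our actual schema:

```python
from textwrap import dedent

def build_classification_prompt(transcript: str, customer: dict) -> str:
    """Assemble a call-classification prompt enriched with customer context."""
    # Inject what we know about the customer before the transcript,
    # so the model classifies the call against that background.
    context = dedent(f"""\
        Customer context:
        - Products in use: {', '.join(customer['products'])}
        - Previous support calls: {customer['previous_calls']}
        - Recent interactions: {'; '.join(customer['interaction_history'])}
    """)
    return (
        "Classify the following customer service call into one category.\n\n"
        f"{context}\n"
        f"Call transcript:\n{transcript}"
    )

# Illustrative usage with made-up customer data.
prompt = build_classification_prompt(
    "My router keeps dropping the connection since the last update...",
    {
        "products": ["FiberHome 500", "MeshPod"],
        "previous_calls": 3,
        "interaction_history": [
            "2024-05-01 firmware complaint",
            "2024-05-10 refund request",
        ],
    },
)
print(prompt)
```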

Boom 💥
Our classification success rate more than doubled.

Lesson:
LLMs are not mind readers.
They’re pattern matchers.
Give them richer context, and they give you better answers.