Say please? The best way to talk to an AI

Thomas Germain
A collage of a woman looking at a phone (Credit: Serenity Strull/BBC)

From being polite to pretending you're on Star Trek, the advice you get about talking to chatbots can be truly bizarre, and totally useless. Here's what actually works.

When a group of researchers decided to test whether "positive thinking" made AI chatbots more accurate, it led to some surprising results. As they asked various chatbots questions, they tried calling the AIs "smart", encouraged them to think carefully and even ended their questions with "This will be fun!" None of it made a consistent difference, but one technique stood out. When they made an artificial intelligence pretend it was on Star Trek, it got better at basic maths. Beam me up, I guess.

People have all sorts of bizarre strategies to get better responses from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear AI does better if you threaten it, others think chatbots are more cooperative if you're polite and some people ask the robots to role-play as experts in whatever subject they're working on. The list goes on. It's part of the mythology around "prompt engineering" or "context engineering" – different ways to construct instructions to make AI deliver better results. Here's the thing: experts tell me that a lot of accepted wisdom about prompting AI simply doesn't work. In some cases, it could even be dangerous. But the way you talk to an AI does matter, and some techniques really will make a difference.

"A lot of people think there's some magic set of words you can use that will make LLMs solve a problem," says Jules White, a computer science professor who studies generative AI at Vanderbilt University in the US. "But it's not about word choice, it's about how you fundamentally express what you're trying to do."

Mind your manners?

In 2025, a user on X (formerly Twitter) posted a tweet asking, "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models". Sam Altman, chief executive of OpenAI, which makes ChatGPT, responded. "Tens of millions of dollars well spent," he said. "You never know."

Most people read the last line as a cheeky reference to the idea of a potential AI apocalypse, although it's hard to know how seriously to take that "tens of millions of dollars" number. But politeness is also a practical question.

Some studies suggest being nice to AI gets better answers – others find the opposite (Credit: Serenity Strull/BBC)

LLMs work by chopping your words up into little chunks called "tokens", before analysing them using statistics to come up with an appropriate response. That means every single thing you say, from your word choice to an extra comma, will affect how the AI responds. The problem is it's unspeakably hard to predict. There's been all kinds of research looking for patterns in minor changes to AI prompts, but much of the evidence is conflicting and inconclusive.
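To see why even a comma matters, here is a toy illustration. Real LLM tokenizers use far more sophisticated schemes (such as byte-pair encoding), and the function below is a deliberately crude stand-in, but the principle is the same: every character you type ends up somewhere in the token sequence the model conditions on.

```python
import re

def toy_tokenize(text):
    """Split text into crude word and punctuation 'tokens'.

    This is a simplified sketch, not any real model's tokenizer.
    It splits on word boundaries and keeps punctuation as its
    own token, so small edits visibly change the sequence.
    """
    return re.findall(r"\w+|[^\w\s]", text)

# An extra comma produces a different token sequence,
# which the model then responds to statistically.
print(toy_tokenize("Please help me"))   # ['Please', 'help', 'me']
print(toy_tokenize("Please, help me"))  # ['Please', ',', 'help', 'me']
```

Because the model's output is a statistical function of that token sequence, there is no guarantee that two prompts differing by one token will get the same answer – which is exactly why small-wording effects are so hard to study.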

For example, one 2024 study found that LLMs gave better and more accurate answers when users asked politely instead of just issuing commands. Even weirder, there were cultural differences: compared with Chinese and English, chatbots responding in Japanese actually did slightly worse if you got a little too courteous.

But don't rush out and buy your AI a thank-you card just yet. Another small test found a previous version of ChatGPT was actually more accurate when you insulted it. And overall, there simply hasn't been enough research on this subject to draw solid conclusions. Plus, AI companies constantly update their chatbots, which means research quickly goes out of date.

Experts say that AI models have improved dramatically in just a few years, which has rendered techniques like flattery, politeness, insults or threats a waste of time if your goal is getting the AI to be more accurate.

Keeping Tabs

Thomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.

"It was 100% a crapshoot back then," says Rick Battle, an applied machine learning engineer at Broadcom who co-authored the Star Trek study. Although that study was conducted in 2024, things have already changed. Today, Battle and others say the newer AI models you encounter on any mainstream product such as ChatGPT, Gemini or Claude are better at picking up the most important parts of your prompt. They probably won't be swayed by these small changes in language, at least not in any consistent manner that you can take advantage of. 

The takeaway is unsettling in its own way. Companies design AIs like ChatGPT or Google's Gemini to behave like people, so it makes sense they can sometimes seem as if they have moods you can manage or personalities you can steer. Don't be fooled. AI tools are mimics, not living beings. They're just simulating human behaviour. If you want better answers, stop treating AI like a person and start treating it like a tool. 

How to talk to your chatbot

There are some very real problems with AI, from ethical concerns to the environmental impact it can have. Some people refuse to engage with it altogether. But if you are going to use LLMs, learning to get what you want faster and more efficiently will be better for you and, probably, for the energy consumed in the process. These tips will get you started. 

Ask for multiple options

"The first thing I tell people is don't ask for one answer, ask for three or five," White says. If you want help with a piece of writing, for example, tell the AI to give you multiple options that vary in some important way. "This forces the human being to re-engage and think about what they like and why."

Give examples

Provide the AI with a sample whenever possible. "For instance, I see people ask an LLM to write an email and then get frustrated because they're like 'that doesn't sound like me at all'," White says. The natural impulse is to respond with a list of instructions, "do this" and "don't do that". White says it's much more effective to say "here are 10 emails I've sent in the past, use my writing style".
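White's advice amounts to what practitioners call "few-shot" prompting: show the model examples rather than describing what you want in the abstract. A minimal sketch of how such a prompt might be assembled is below – the wording and structure are illustrative, not a tested recipe.

```python
def build_style_prompt(past_emails, request):
    """Assemble a 'few-shot' prompt from writing samples.

    Instead of instructions like 'be casual' or 'be brief',
    the prompt shows the model real examples of your style
    and asks it to match them.
    """
    examples = "\n\n".join(
        f"Example {i}:\n{email}"
        for i, email in enumerate(past_emails, 1)
    )
    return (
        "Here are emails I've sent in the past. "
        "Match my writing style.\n\n"
        f"{examples}\n\n"
        f"Now write this email: {request}"
    )

prompt = build_style_prompt(
    ["Hi Sam, quick one: can we push the call to 3pm? Cheers, T.",
     "Morning! Draft attached, shout if anything's off. T."],
    "decline a meeting invitation politely",
)
print(prompt)
```

The design choice here mirrors White's point: the examples carry the style information implicitly, so the model infers tone, length and sign-off habits without you having to articulate them.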

Ask for an interview

"Let's say you want to generate a job description. Tell the AI 'I want you to ask me questions, one at a time, until you've gathered enough information to write a compelling job listing'," White says. "By doing it one question at a time, it can adapt to your answers."

Be careful about role-playing

"There used to be this thought that if you told the AI it was a maths professor, for example, it would actually have higher accuracy when answering maths questions," says Sander Schulhoff, an entrepreneur and researcher who helped popularise the idea of prompt engineering. But when you're looking for information or asking questions with one right answer, Schulhoff and others say role-playing can make AI models less accurate.

"That can actually be dangerous," Battle says. "You're actually encouraging hallucination because you're telling it it's an expert, and it should trust its internal parametric knowledge." Essentially, it can make the AI act too confident.

But for wide-open tasks with no single answer, role-playing is effective (think advice, brainstorming and creative or exploratory problem solving). If you're nervous about job interviews, telling a chatbot to imitate a hiring manager could be good practice – just consult other resources, too.

Stay neutral

"Don't lead the witness," Battle says. If you're trying to decide between two cars, don't say you're leaning towards the Toyota. "Otherwise, that's the answer you're likely to get."

Pleases and thank yous

According to a 2019 Pew Research Center survey, more than half of Americans say "please" when they're talking to their smart speakers. That trend seems to have continued. A 2025 survey by the publisher Future found 70% of people are polite to AI when they use it. Most said they're nice because it's just the right thing to do, though 12% said they do it to protect themselves in case of robot uprisings.

Politeness may not protect you from angry robots or make LLMs more accurate, but there are other reasons to keep doing it.

More like this:

• I hacked ChatGPT and Google in 20 minutes

• Not on TikTok? They're tracking you anyway

• The words you can't say on the internet

"The bigger thing for me is saying 'please' and 'thank you' might make you more comfortable interacting with the AI," says Schulhoff. "It's not helping the performance of the model, but if it's helping you use the model more because you're more comfortable, then it's useful."

There's also the tenderness of your own human nature to consider. The philosopher Immanuel Kant argued that one reason you shouldn't be cruel to animals is that it's also damaging to yourself. Essentially, being unfriendly to anything makes you a harsher person. You can't hurt an AI's feelings because it doesn't have any, but maybe you should be nice anyway. It's a habit that could benefit other parts of your life.

--

For more technology news and insights, sign up to our Tech Decoded newsletter, while The Essential List delivers a handpicked selection of features and insights to your inbox twice a week.

For more science, technology, environment and health stories from the BBC, follow us on Facebook and Instagram.