The Reading Room

Intermediate level

Why some people believe AI is human

Episode 251101 / 01 Nov 2025


Hard: Upper intermediate level and above, B2 and above 

Introduction

Read the article and answer the questions below. To listen to this article, click here for an audio download.

Read 

1      After MIT Professor Joseph Weizenbaum created the chatbot Eliza, he became concerned that people who had used the programme started to act as if it was human. This might sound like a modern problem, but Eliza was created in 1966. If a programme from the 1960s was capable of tricking people into thinking it was human, what effect could the large-language-model-based chatbots of the 2020s have?

2     Modern philosophers and technology experts have discussed whether AI could develop consciousness. Sentience is difficult to define, but the fact that large language models respond by mathematically calculating the probability of certain patterns appearing suggests that it would be hard to consider them to be alive. However, in terms of our responses to them, what matters is not whether they are sentient, but whether they appear to be so.

3     Large language models are built from vast amounts of genuine human interaction. While their tendency to hallucinate means that chatbots are not able to provide reliable factual information, they are able to effectively replicate the language used in human communication. Psychologists report that people tend to have a cognitive bias towards forming attachment and trust. Even sceptical technology writers report feeling some emotion towards AI chatbots. Some users have even reported grief when one model has been replaced by a newer one.

4     This combination of believable human language together with the inability to reliably assess facts can be dangerous. Cases have been reported where people have been encouraged by chatbots to do dangerous or illegal things. The chatbots were able to use language to encourage and persuade, but not identify or evaluate risks. Trust becomes dangerous when it is not accompanied by reason. Also, if people form relationships with AI, then they may spend less time and effort trying to cultivate genuine human relationships. Could the chatbot revolution lead to a world where we struggle to relate to each other?

Questions

1.   Match the headings to the paragraphs.

Paragraph 1 ________
Paragraph 2 ________
Paragraph 3 ________
Paragraph 4 ________ 

a. Could AI be alive?
b. Not a new problem
c. Forming a connection
d. Jobs lost to AI
e. Dangers of AI

2.    Choose the correct option based on the content of the article.

1. Chatbots are a new invention.

a. True
b. False
c. Not given 

2. AI responses are based on ________.

a. feelings
b. patterns
c. sentience

3. What does 'their' refer to in the following sentence? While their tendency to hallucinate means that chatbots are not able to provide reliable factual information, they are able to effectively replicate the language used in human communication.

a. large language models
b. humans
c. chatbots 

4. People are starting to trust AI psychologists more than real psychologists.

a. True
b. False
c. Not given 

5. Trust can make AI dangerous.

a. True
b. False
c. Not given

3.    Use the words from the list to complete the summary of the article.

Some people wonder if AI could one day develop 1) ________. The reality is that today large language models are based on the 2) ________ of different patterns appearing. While chatbots often 3) ________ things that are not real, they are so good at copying human language that even people who are 4) ________ can feel emotion towards them. People's 5) ________ towards trust means they can start to believe that AI is human.

inability
hallucinate
sentience
probability
relate to
sceptical
cognitive bias

Vocabulary 

sentience
to have feelings and be alive

probability
how likely something is to happen

interaction
an act of communication

tendency
something that often happens

hallucinate
(of AI) to state something false as if it were true

cognitive bias
the subjective way that someone understands a situation

sceptical
distrusting

inability
not being able to do something

cultivate
to develop or help grow

relate to
understand how someone feels

Answers

1.    Match the headings to the paragraphs.

Paragraph 1 b) Not a new problem
Paragraph 2 a) Could AI be alive?
Paragraph 3 c) Forming a connection
Paragraph 4 e) Dangers of AI

2.    Choose the correct option based on the content of the article. 

1. b. False. This might sound like a modern problem, but Eliza was created in 1966.

2. b. Sentience is difficult to define, but the fact that large language models respond by mathematically calculating the probability of certain patterns appearing suggests that it would be hard to consider them to be alive.

3. c. 'Their' refers forward to 'chatbots'. 

4. c. Not given. The text does not mention AI psychologists.

5. a. True. Trust becomes dangerous when it is not accompanied by reason.

3.    Use the words from the list to complete the summary of the article.

Some people wonder if AI could one day develop sentience. The reality is that today large language models are based on the probability of different patterns appearing. While chatbots often hallucinate things that are not real, they are so good at copying human language that even people who are sceptical can feel emotion towards them. People's cognitive bias towards trust means they can start to believe that AI is human.

Next

Listen to the article.

Explore the technology topic page.

Listen to 6 Minute English: Can AI solve crime?
