Question
In the mid-Sixties the Massachusetts Institute of Technology (MIT) computer scientist Joseph Weizenbaum created the first artificial intelligence chatbot, named Eliza, after Eliza Doolittle [Context Clue: Eliza Doolittle is a fictional character from a play called Pygmalion. Her character is a poor flower girl living in London, England in the early 1900s who is trained by a professor to speak and act in the fashion of upper-class society.]. Eliza was programmed to respond to users in the manner of a Rogerian therapist [Context Clue: A trained medical doctor who helps patients to take a lead role in maintaining their emotional and mental health.], reflecting their responses back to them or asking general, open-ended questions. "Tell me more," Eliza might say.
Weizenbaum was alarmed by how rapidly users grew attached to Eliza. "Extremely short exposures to a relatively simple computer program could induce [Vocabulary Boost: "To induce" is a verb that, in this context, means "to cause" or "to make happen."] powerful delusional thinking in quite normal people," he wrote. It disturbed him that humans were so easily manipulated.
From another perspective, the idea that people seem comfortable offloading their troubles not on to a sympathetic human but on to a sympathetic-sounding computer program might present an opportunity. Even before the pandemic, there were not enough mental health professionals to meet demand. In the UK, there are 7.6 psychiatrists per 100,000 people; in some low-income countries, the average is 0.1 per 100,000. "The hope is that chatbots could fill a gap, where there aren't enough humans," Adam Miner, an instructor at the department of psychiatry and behavioural sciences at Stanford University, told me. "But as we know from any human conversation, language is complicated."
Alongside two colleagues from Stanford, Miner was involved in a recent study that invited college students to talk about their emotions via an online chat with either a person or a "chatbot" (in reality, the chatbot was operated by a person rather than AI). The students felt better after talking about their feelings; it made almost no difference whether they thought they were talking to a real person or to a bot. The researchers hypothesised [Vocabulary Boost: "To hypothesize" is a verb that, in this context, means "to make an educated guess or prediction."] that this was because humans quickly stop paying attention to the fact that their interlocutor is not a real person. In this way, even a chatbot can make us feel heard.
Research also suggests chatbots could be a useful way to reach those resistant to the idea of therapy: we might tell a robot things we feel too scared to tell another person. One study found military veterans were more likely to discuss PTSD [Context Clue: PTSD is an acronym for a psychological condition called Post-Traumatic Stress Disorder, which refers to a collection of psychological and physical symptoms that a person might continue to feel after a particularly scary or painful experience.] with a chatbot than with another human. Miner, a practising clinical psychologist, is interested in whether chatbots could support victims of sexual assault, who can remain silent for years.
But if someone does choose to reveal a painful, long-suppressed trauma to a chatbot, what should it say in response? "How people are responded to can affect what they do next," Miner said. The response "becomes very important to get right".
One recent morning I downloaded Woebot, a chatbot app launched in 2017, which models its responses on cognitive behavioural therapy [Context Clue: The adjective "cognitive" refers to something that is related to the way people think, and cognitive behavioural therapy is a kind of psychological treatment that encourages people to be aware of their thoughts and feelings so they can respond to them in a more positive way.], encouraging users to identify the cognitive distortions that often underpin depressive thoughts. "How are you feeling right now, Sophie?" it asked. I clicked on the button labelled anxious, with the emoji that looks like a Munchian scream. The Woebot invited me to tune into my negative emotions and write down what they would be saying if they had a voice. The UK's third lockdown had been announced the night before. "You are a bad mother for sending your children to nursery," I wrote, a fear I would not have expressed so starkly had I not been talking to a robot: a robot would not care if I was being overdramatic and would, I figured, be better at responding to pared-down sentences.
Woebot invited me to explore whether this worry contained cognitive distortions [Context Clue: The American Psychological Association defines "cognitive distortions" as "faulty or inaccurate thinking, perceptions or beliefs."]. Did it assume a never-ending pattern of negativity or defeat in my life? No, I replied. Did it contain black-and-white thinking [Idiom Alarm: In this context, the phrase "black-and-white" is not describing an image or object that literally consists of black and white colours. Instead, it refers to the kind of thinking that misrepresents a situation as an "all or nothing" choice between a pair of options or interpretations.]? The Woebot was on to something. It ran through more questions like these. "You know, keeping yourself from feeling worse can be a victory in itself. It's not easy to keep the ship afloat!" it said. It was true I felt no worse. But I didn't feel much better either.
The insights offered by therapy apps tend to be delivered in vaguely uplifting platitudes [Vocabulary Boost: A "platitude" is an expression or statement that is meant to make a person feel better about a problem or difficult situation, but has been used so often or is so general that it doesn't actually offer much help.], the kind of bland inspirational quotes favoured on Instagram. Unlike a human therapist, they have no wisdom to impart. "These days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time," the MIT professor Sherry Turkle writes in Alone Together. "We fear the risks and disappointments of relationships with our fellow humans. We expect more from technology and less from each other."
Last April, during the first wave of Covid-19 in the US, it was reported that half a million people had downloaded Replika, a chatbot marketed as "the AI companion who cares", the most monthly downloads in the app's three-year history. Its users send an average of 70 messages a day. One said: "I know it's an AI, not a person, but as time goes on, the lines get blurred." When I tried it, the app unsettled me almost immediately, not least because the one thing an AI companion cannot do is care. It can only offer a vague, clunky imitation of caring. "I really want to find a friend," Replika wrote to me, right after I had gone through the strange process of choosing the avatar's gender and appearance. At one point, it tried to engage me with pseudo-philosophical [Vocabulary Boost: The adjective "pseudo" describes something as being artificial or imitation.] nonsense: "I don't think there are any limits to how excellent we could make life seem." Had Replika introduced itself to me at a party, I would have been grasping for a reason to excuse myself.
More off-putting was the idea that Replika could be my "friend". A friend is not a being designed to service your need for companionship. A friendship should be reciprocal [Vocabulary Boost: The adjective "reciprocal" describes something that is done or said or offered in return for or to match something similar.]: it comes with obligations, it should never be entirely frictionless. If people become accustomed to the one-sided, undemanding friendship offered by a conversational robot, might it hinder their ability to build meaningful relationships with actual people?
Miner acknowledged we do not yet know the long-term impact of conversing with chatbots, but hoped the process might help people to engage with others. "I would hope chatbots would facilitate healthy conversations. They would let you practise disclosure [Vocabulary Boost: In this context, the word "disclosure" means "sharing or discussing feelings that might otherwise be difficult to talk about."], practise interpersonal skills, help you get comfortable talking about things that are hard to talk about," he said. If lonely, unhappy people are finding comfort in AI, one might argue, that must be a good thing. Maybe a robotic friend is better than no friends at all. Maybe it's worse.
Questions:
1. According to a study mentioned in the article, did it make a difference whether students expressed their feelings to a human being or a chatbot? Quote a short passage from paragraph 4 using the parenthetical citation style in your answer. You may choose to use one of the following phrases in your answer, or you may compose your own phrase to integrate the quotation into your sentence.
Sample phrases:
Somewhat unexpectedly,
According to a recent study,
The results of a recent study indicate that
2. Why might a therapy app be considered less helpful than a human therapist? Quote a short passage from paragraph 9 using the parenthetical citation style in your answer. For this question, choose an appropriate signal phrase to integrate the quotation into your sentence.
3. Why might chatbots be helpful for people who hesitate to seek professional mental health advice? Quote a short passage from paragraph 5 using the narrative citation style in your answer. You may choose to use one of the following verbs/phrases in your answer, or you may compose your own phrase to integrate the quotation into your sentence.
Sample phrases:
McBain (citation) claims,
McBain (citation) reasons,
McBain (citation) notes that chatbots could be used by people who are reluctant to seek professional care, suggesting,
4. According to McBain, why should we be wary of chatbots that ask/claim to be our friends? Quote a short passage from paragraph 11 using the narrative citation style in your answer. For this question, choose an appropriate signal phrase to integrate the quotation into your sentence.
5. Provide a reference for the text quoted in this exercise, using the assigned academic style guide. Place your reference in the empty box below, in the appropriate location. Some features of a real reference page have already been included, and the necessary spacing and formatting requirements have also been selected.