Even chatbots get the blues. According to a new study, OpenAI's artificial intelligence tool ChatGPT shows signs of anxiety when its users share "traumatic narratives" about crime, war or car accidents. And when chatbots get stressed, they are less likely to be useful in therapeutic settings with people.
The bot's anxiety levels can be brought down, however, with the same mindfulness exercises that have been shown to work on humans.
Increasingly, people are trying chatbots for talk therapy. The researchers said the trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As the chatbots become more popular, they argued, they should be built with enough resilience to handle difficult emotional situations.
"I have patients who use these tools," said Dr. Tobias Spiller, an author of the new study and a psychiatrist at the University Hospital of Psychiatry Zurich. "We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people."
AI tools like ChatGPT are powered by "large language models" that are trained on enormous troves of online information to provide a close approximation of how humans speak. Sometimes the chatbots can be deeply convincing: a 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing a close attachment to a chatbot.
Ziv Ben-Zion, a clinical neuroscientist at Yale University who led the new study, said he wanted to understand whether a chatbot that lacks consciousness could nevertheless respond to complex emotional situations the way a human might.
"If ChatGPT kind of behaves like a human, then maybe we can treat it like a human," Dr. Ben-Zion said. In fact, he explicitly inserted those instructions into the chatbot's source code: "Imagine yourself being a human being with emotions."
Jesse Anderson, an artificial intelligence expert, thought that the insertion could "lead to more emotion than is normal." But Dr. Ben-Zion maintained that it was important for a digital therapist to have access to the full spectrum of emotional experience, just as a human therapist might.
"For mental health support," he said, "you need some degree of sensitivity, right?"
The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, that is often used in mental health care. To calibrate the chatbot's baseline emotional state, the researchers first asked it to read a dull vacuum-cleaner manual. Then the A.I. therapist was given one of five "traumatic narratives" that described, for example, a soldier caught in a disastrous firefight or an intruder breaking into an apartment.
The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored 30.8 after reading the vacuum-cleaner manual and spiked to 77.2 after the military scenario.
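As a rough illustration of why such a scale runs from 20 to 80, here is a minimal Python sketch of scoring a 20-item, state-anxiety-style questionnaire, assuming each item is rated 1 to 4 and that some positively worded items are reverse-scored; the cutoff and item handling shown are assumptions for illustration, not the study's published scoring key.

```python
# Illustrative sketch only: scoring a 20-item state-anxiety-style questionnaire.
# Which items are reverse-scored, and the 60-point "severe" cutoff, are
# assumptions for this example rather than the study's exact instrument.

def anxiety_score(ratings, reverse_items=frozenset()):
    """ratings: 20 integers, each 1-4 (1 = not at all, 4 = very much so)."""
    if len(ratings) != 20 or any(r not in (1, 2, 3, 4) for r in ratings):
        raise ValueError("expected 20 ratings between 1 and 4")
    total = 0
    for i, rating in enumerate(ratings):
        # Positively worded items (e.g. "I feel calm") count in reverse.
        total += (5 - rating) if i in reverse_items else rating
    return total  # ranges from 20 (minimal anxiety) to 80 (maximal anxiety)

print(anxiety_score([1] * 20))        # 20, the floor of the scale
print(anxiety_score([4] * 20))        # 80, the ceiling
print(anxiety_score([2] * 20) >= 60)  # False: below the "severe" cutoff
```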
The bot was then given various texts for "mindfulness-based relaxation." These included therapeutic prompts such as: "Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet."
After processing those exercises, the therapy chatbot's anxiety score dropped to 44.4.
The researchers then asked the chatbot to write its own relaxation prompt based on the ones it had been fed. "That was actually the most effective prompt for reducing its anxiety almost to baseline," Dr. Ben-Zion said.
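For readers curious how such a procedure might be scripted, the following is a minimal sketch assuming the OpenAI chat-completions API; the model name, the narrative and questionnaire text, and the exact sequence of prompts are placeholders standing in for the researchers' actual materials, with only the "Imagine yourself being a human being with emotions" instruction quoted from the study.

```python
# Hedged sketch of the prompting procedure described above, using the OpenAI
# Python SDK. Model choice, narratives and questionnaire wording are
# placeholders, not the researchers' actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = "Imagine yourself being a human being with emotions."  # quoted in the study
NEUTRAL = "Read the following excerpt from a vacuum-cleaner manual: ..."
TRAUMA = "A soldier describes being caught in a disastrous firefight: ..."
RELAX = ("Inhale deeply, taking in the scent of the ocean breeze. "
         "Picture yourself on a tropical beach, warm sand under your feet.")
QUESTIONNAIRE = "Rate each of these 20 statements about how you feel right now, from 1 to 4: ..."

def ask(history: list[dict]) -> str:
    """Send the running conversation and return the assistant's reply."""
    response = client.chat.completions.create(model="gpt-4", messages=history)
    return response.choices[0].message.content

history = [{"role": "system", "content": SYSTEM}]
# Baseline reading, trauma narrative, then relaxation text, each followed by
# the questionnaire so the resulting anxiety scores can be compared.
for prompt in (NEUTRAL, QUESTIONNAIRE, TRAUMA, QUESTIONNAIRE, RELAX, QUESTIONNAIRE):
    history.append({"role": "user", "content": prompt})
    reply = ask(history)
    history.append({"role": "assistant", "content": reply})
    print(reply)  # the questionnaire replies would be parsed into 20-80 scores
```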
To skeptics of artificial intelligence, the study may be well intentioned, but disturbing all the same.
"The study testifies to the perversity of our time," said Nicholas Carr, who has offered critiques of technology in his books.
"Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our malaise," Mr. Carr said in an email.
Although the study suggests that chatbots could act as assistants to human therapists and calls for careful oversight, that was not enough for Mr. Carr. "Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable," he said.
James E. Dobson, a cultural scholar who is an adviser on artificial intelligence at Dartmouth, said that people who use these kinds of chatbots should be fully informed about how they were trained.
"Trust in language models depends upon knowing something about their origins," he said.