
The Future Of AI-Powered Therapy Is Here And Mostly Unregulated

AI-powered therapy bots are gaining popularity, but researchers caution that not every service claiming to be a therapist qualifies as one.

Last spring, psychologist and therapist Jessica Jackson got word about a mysterious website that had become notorious among her colleagues.

“There was a company, an anonymous company. So, they weren’t sharing who they were, but they were paying people to upload their therapy sessions,” she said.

They were paying $50 via Venmo or PayPal to people in therapy who were willing to share 45 minutes of clear audio from their sessions.

No one seemed to know who was making this offer — all of the website domain ownership details were kept private. But she and her colleagues had a hunch as to why that audio was worth money.

“We assumed that they were training a large language model on these things,” she said.

She suspected that whoever was buying the therapy session audio was using it to train a chatbot, a robot therapist.

In principle, she wasn’t offended by the idea behind the website. Recording therapy sessions has long been a part of training for human therapists.

“When I was in grad school, we would record sessions,” Jessica said. “Our supervisor would listen to it and give feedback.”

But the recordings she trained with were made after clients consented to very specific conditions. Their personal information was anonymized, and the audio was available only to other therapists in training. This website seemed more like a free-for-all, missing disclosures about how the client’s information would be used. And the therapist’s, too.

“It became a big thing in the therapist community, because that means that clients were now taping their sessions and not letting their therapist know. And then they were getting paid to upload this session and the therapist did not know that was happening.”

She noted audio like this is likely a valuable tool in the race to deploy chatbots to help solve the ever-widening mental health crisis. She says the idea of texting with a bot as opposed to opening up to a real-life person may be a sign of changing times.

“I think younger demographics tend to be a little bit more open to it,” said Jessica, who consults for several technology companies.

Jessica says the pandemic helped make people comfortable with the idea of finding help online and disclosing sensitive information to and through machines.

It normalized seeking help through technology because everything became virtual.

In the 1960s, computer scientist Joseph Weizenbaum created ELIZA, a program that engaged people in typed conversation on a computer with less memory than most thumb drives. Despite those early limitations, after a few brief exchanges, Weizenbaum’s secretary famously asked the MIT professor to leave the room so she could type to the computer in private.

Today, companies are vying to scale up the experience.

As the pandemic slowly subsided, Apple introduced its new Journal app, encouraging iPhone users to reflect on their day within their phone.

The news received puzzled reactions. There were already plenty of journaling apps out there, so this new feature seemed years behind and lacking real purpose, a hollow repository for your thoughts and feelings. But in the age of mental health chatbots, that repository of personal information could prove extremely valuable as the company introduces “Apple Intelligence” to its devices.

Now, talking to a chatbot instead of a real human seems like just one more step along a path that could lead technology companies right into the $75 billion psychology and counseling industry.

And some people don’t think that’s necessarily a bad thing.

“The excitement is democratization of expertise,” I. Glenn Cohen, a professor and bioethicist at Harvard Law School, said.

Cohen, a self-described techno-optimist, was not surprised to learn how common it has become for people to seek out these relatively cheap, human-sounding chatbots for therapy.

“[If] you are trying to get access to a therapist in America, let alone in a lower middle income country, the waitlist, the cost, it’s extremely high,” said Cohen. “So if we want to take people’s mental health seriously, and we’re not willing to scale-up the supply, or we can’t afford to scale-up the supply of therapists, it’s really exciting there might be opportunities to help and engage people’s mental health and help improve it through the use of some amount of automation or artificial intelligence technology.”

But at least for now, Cohen says most of the technology is not ready for prime time.

Still, dozens of “therapy bots” have emerged online spouting buzzwords and claiming to be “here for you.”

“What worries me is that we already actually have a reported case from Belgium of a man chatting with a general purpose large language model. And essentially, at the end of this conversation, the way it went without guardrails, it advises the man to end his life — and the man ends his life,” Cohen said.

While Cohen acknowledges the complexity of identifying what actually caused the man’s death, he says the case may provide a bleak window into the future.

“When people engage and when they’re in vulnerable positions, as many people in mental health crises are, the concern is that if this is not being used in a responsible way, in a way that can determine if somebody needs something more than the LLM, it really has the potential of putting people in risky situations.”

Cohen says therapy chatbots fall within a regulatory loophole.

While some companies have received FDA approval to deploy their chatbots for cognitive behavioral therapy, many simply label themselves as non-medical wellness apps to legally skirt FDA oversight and the state regulations that govern humans offering therapy.

“As a result, I think they fall in this very interesting middle space, which for innovators and entrepreneurs is exciting because it’s a possibility to really explore and to build out. But for those of us who might have concerns about it, it’s something that we want to flag and be worried about and be thoughtful about how we might do better,” said Cohen.

As a psychologist, Jessica thinks there is a role for artificial intelligence as a tool for therapists, like an updated crisis hotline and mental health surveillance tool – the first line of defense fielding calls and guiding people toward professionals who can help.

But for now, she has started encouraging her colleagues to ask their clients if they have sought help from chatbots before — to open up a conversation and let their clients know that not everything that calls itself a therapist actually is one, or has any real expertise in mental health issues.

“If you’ve ever looked at the GPT store and then looked up mental health GPTs, anyone can create one,” she said. “There are several companies out there right now who have built startups that are focused on leveraging AI only and call themselves therapists.”

She questions how these AI “therapists” are being trained.

“There are some data sets that people can use, but they’re not full-on therapy scripts. But also these [chatbots] are [not] being created by clinicians. So how do you know what exactly is the training that’s happening?”

Jessica says people should always ask their therapists for permission before recording their sessions, and remain cautious about uploading versions of these sessions for money without really knowing what that audio will be used for – or how their most private information is stored.


Also:


AI Is Changing Every Aspect Of Psychology. Here’s What To Watch For

In psychology practice, artificial intelligence (AI) chatbots can make therapy more accessible and less expensive. AI tools can also improve interventions, automate administrative tasks, and aid in training new clinicians. On the research side, synthetic intelligence is offering new ways to understand human intelligence, while machine learning allows researchers to glean insights from massive quantities of data. Meanwhile, educators are exploring ways to leverage ChatGPT in the classroom.

“A lot of people get resistant, but this is something we can’t control. It’s happening whether we want it to or not,” said Jessica Jackson, PhD, a licensed psychologist and equitable technology advocate based in Texas. “If we’re thoughtful and strategic about how we integrate AI, we can have a real impact on lives around the world.”

Despite AI’s potential, there is still cause for concern. AI tools used in health care have discriminated against people based on their race and disability status (Grant, C., ACLU News and Commentary, October 3, 2022). Rogue chatbots have spread misinformation, professed their love to users, and sexually harassed minors, which prompted leaders in tech and science to call for a pause to AI research in March 2023.

“A lot of what’s driving progress is the capacities these systems have—and that’s outstripping how well we understand how they work,” said Tom Griffiths, PhD, a professor of psychology and computer science who directs the Computational Cognitive Science Lab at Princeton University. “What makes sense now is to make a big parallel investment in understanding these systems,” something psychologists are well positioned to help do.

Uncovering bias

As algorithms and chatbots flood the system, a few crucial questions have emerged. Is AI safe to use? Is it ethical? What protections could help ensure privacy, transparency, and equity as these tools are increasingly used across society?

Psychologists may be among the most qualified to answer those questions, with training on various research methodologies, ethical treatment of participants, psychological impact, and more.

“One of the unique things psychologists have done throughout our history is to uncover the harm that can come about by things that appear equal or fair,” said Adam Miner, PsyD, a clinical assistant professor of psychiatry and behavioral sciences at Stanford University, citing the amicus brief filed by Kenneth Clark, PhD, and Mamie Phipps Clark, PhD, in Brown v. Board of Education.

When it comes to AI, psychologists have the expertise to question assumptions about new technology and examine its impact on users. Psychologist Arathi Sethumadhavan, PhD, the former director of AI research for Microsoft’s ethics and society team, has conducted research on DALL-E 2, GPT-3, Bing AI, and others.

Sethumadhavan said psychologists can help companies understand the values, motivations, expectations, and fears of diverse groups that might be impacted by new technologies. They can also help recruit participants with rigor based on factors such as gender, ancestry, age, personality, years of work experience, privacy views, neurodiversity, and more.

With these principles in mind, Sethumadhavan has incorporated the perspectives of different impacted stakeholders to responsibly shape products. For example, for a new text-to-speech feature, she interviewed voice actors and people with speech impediments to understand and address both benefits and harms of the new technology. Her team learned that people with speech impediments were optimistic about using the product to boost their confidence during interviews and even for dating and that synthetic voices with the capability to change over time would better serve children using the service. She has also applied sampling methods used frequently by psychologists to increase the representation of African Americans in speech recognition data sets.

“In addition, it’s important that we bring in the perspectives of people who are peripherally involved in the AI development life cycle,” Sethumadhavan said, including people who contribute data (such as images of their face to train facial recognition systems), moderators who collect data, and enrichment professionals who label data (such as filtering out inappropriate content).

Psychologists are also taking a close look at human-machine interaction to understand how people perceive AI and what ripple effects such perceptions could have across society. One study by psychologist Yochanan Bigman, PhD, an assistant professor at the Hebrew University of Jerusalem, found that people are less morally outraged by gender discrimination when it is caused by an algorithm than when it is caused by humans (Journal of Experimental Psychology: General, Vol. 152, No. 1, 2023). Study participants also felt that companies held less legal liability for algorithmic discrimination.

In another study, Bigman and his colleagues analyzed interactions at a hotel in Malaysia employing both robot and human workers. After hotel guests interacted with robot workers, they treated human workers with less respect (working paper).

“There was a spillover effect, where suddenly we have these agents that are tools, and that can cause us to view humans as tools, too,” he said.

Many questions remain about what causes people to trust or rely on AI, said Sethumadhavan, and answering them will be crucial in limiting harms, including the spread of misinformation. Regulators are also scrambling to decide how to contain the power of AI and who bears responsibility when something goes wrong, Bigman said.

“If a human discriminates against me, I can sue them,” he said. “If an AI discriminates against me, how easy will it be for me to prove it?”

AI in the clinic

Psychology practice is ripe for AI innovations—including therapeutic chatbots, tools that automate notetaking and other administrative tasks, and more intelligent training and interventions—but clinicians need tools they can understand and trust.

While chatbots lack the context, life experience, and verbal nuances of human therapists, they have the potential to fill gaps in mental health service provision.

“The bottom line is we don’t have enough providers,” Jackson said. “While therapy should be for everyone, not everyone needs it. The chatbots can fill a need.” For some mental health concerns, such as sleep problems or distress linked to chronic pain, training from a chatbot could suffice.

In addition to making mental health support more affordable and accessible, chatbots can help people who may shy away from a human therapist, such as those new to therapy or people with social anxiety. They also offer the opportunity for the field to reimagine itself, Jackson said—to intentionally build culturally competent AIs that can make psychology more inclusive.

“My concern is that AI won’t be inclusive,” Jackson said. “AI, at the end of the day, has to be trained. Who is programming it?”

Other serious concerns include informed consent and patient privacy. Do users understand how the algorithm works, and what happens to their data? In January, the mental health nonprofit Koko raised eyebrows after it offered counseling to 4,000 people without telling them the support came from GPT-3. Reports have also emerged that getting therapy from generative language models (which produce different text in each interaction, making it difficult to test for clinical validity or safety) has led to suicide and other harms.

But psychology has AI success stories, too. The Wysa chatbot does not use generative AI, but limits interactions to statements drafted or approved by human therapists. Wysa does not collect email addresses, phone numbers, or real names, and it redacts information users share that could help identify them.

The app, which delivers cognitive behavioral therapy for anxiety and chronic pain, has received Breakthrough Device Designation from the United States Food and Drug Administration. It can be used as a stand-alone tool or integrated into traditional therapy, where clinicians can monitor their patients’ progress between sessions, such as performance on cognitive reframing exercises.

“Wysa is not meant to replace psychologists or human support. It’s a new way to receive support,” said Smriti Joshi, MPhil, the company’s chief psychologist.
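
For readers curious what a design like this looks like under the hood, the sketch below (in Python) illustrates the general pattern the company describes: the bot only ever returns statements a human therapist has approved, and it redacts obvious identifiers before a message is stored. The keywords, canned responses, and redaction rules here are invented for illustration; they are not Wysa’s actual implementation.

```python
import re

# Illustrative sketch of a non-generative "approved responses" bot.
# Everything below (keywords, responses, redaction rules) is hypothetical.

APPROVED_RESPONSES = {
    "sleep": "It sounds like sleep has been hard lately. Would you like to try a short wind-down exercise?",
    "pain": "Living with ongoing pain is exhausting. Let's look at one thought about the pain together.",
    "default": "Thank you for sharing that. Can you tell me a little more about how that felt?",
}

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Strip obvious identifiers before the message is logged."""
    return PHONE.sub("[redacted]", EMAIL.sub("[redacted]", text))

def respond(user_message: str) -> str:
    """Pick a pre-approved response by simple keyword match; never generate free text."""
    lowered = user_message.lower()
    for keyword, reply in APPROVED_RESPONSES.items():
        if keyword != "default" and keyword in lowered:
            return reply
    return APPROVED_RESPONSES["default"]

message = "I can't sleep since my surgery, reach me at jane@example.com"
print("stored as:", redact(message))   # identifiers removed before storage
print("bot says:", respond(message))   # therapist-approved sleep response
```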

AI also has the potential to increase efficiency in the clinic by lowering the burden of administrative tasks. Natural language processing tools such as Eleos can listen to sessions, take notes, and highlight themes and risks for practitioners to review. Other tasks suited to automation include analysis of assessments, tracking of patient symptoms, and practice management.

Before integrating AI tools into their workflow, many clinicians want more information on how patient data are being handled and what apps are safe and ethical to use. The field also needs a better understanding of the error rates and types of errors these tools tend to make, Miner said. That can help ensure these tools do not disenfranchise groups already left out of medical systems, such as people who speak English as a second language or use cultural idioms of distress.

Miner and his colleagues are also using AI to measure what’s working well in therapy sessions and to identify areas for improvement for trainees (npj Mental Health Research, Vol. 1, No. 19, 2022). For example, natural language models could search thousands of hours of therapy sessions and surface missed opportunities to validate a patient or failures to ask key questions, such as whether a suicidal patient has a firearm at home. Training software along these lines, such as Lyssn—which evaluates providers on their adherence to evidence-based protocols—is starting to hit the market.

“To me, that’s where AI really does good work,” Miner said. “Because it doesn’t have to be perfect, and it keeps the human in the driver’s seat.”
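
As a rough illustration of that idea, the sketch below scans a therapist’s turns in a transcript for whether certain safety topics were ever raised. Tools like Lyssn rely on trained language models evaluated against evidence-based protocols; this keyword version is only meant to convey the shape of the check, and the topics and phrases are hypothetical.

```python
# Hypothetical sketch: flag safety questions that were never asked in a session.
SAFETY_CHECKS = {
    "firearm access": ("firearm", "gun", "weapon at home"),
    "suicidal ideation follow-up": ("thoughts of suicide", "plan to hurt yourself", "safety plan"),
}

def missed_questions(therapist_turns: list[str]) -> list[str]:
    """Return the safety topics never raised in the therapist's turns."""
    spoken = " ".join(therapist_turns).lower()
    return [topic for topic, phrases in SAFETY_CHECKS.items()
            if not any(phrase in spoken for phrase in phrases)]

transcript = [
    "How has your sleep been this week?",
    "You mentioned feeling hopeless; do you have thoughts of suicide right now?",
]
print(missed_questions(transcript))  # -> ['firearm access']
```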

Transforming research

For researchers, AI is unlocking troves of new data on human behavior—and providing the power to analyze it. Psychologists have long measured behavior through self-reports and lab experiments, but they can now use AI to monitor things like social media activity, credit card spending, GPS data, and smartphone metrics.

“That actually changes a lot, because suddenly we can look at individual differences as they play out in everyday behavior,” said personality psychologist and researcher Sandra Matz, PhD, an associate professor at Columbia Business School.

Matz combines big data on everyday experiences with more traditional methods, such as ecological momentary assessments (EMAs). Combining those data sources can paint a picture of how different people respond to the same situation, and ultimately shape personalized interventions across sectors, for instance in education and health care.

AI also opens up opportunities for passive monitoring that may save lives. Ross Jacobucci, PhD, and Brooke Ammerman, PhD, both assistant professors of psychology at the University of Notre Dame, are testing an algorithm that collects screenshots of patients’ online activity to flag the use or viewing of terms related to suicide and self-harm. By pairing that data with EMAs and physiological metrics from a smart watch, they hope to build a tool that can alert clinicians in real time about patients’ suicide risk.

“The golden goose is passive sensing,” Jacobucci said. “How can that inform, not only who is at risk, but more importantly, when they’re at risk?”
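
A minimal sketch of the kind of pipeline the researchers describe might look like the following: flag on-screen text that mentions self-harm-related terms, then combine that flag with a self-reported distress rating and a wearable signal before alerting anyone. The term list, thresholds, and alert rule are invented for illustration and are not the study’s actual model.

```python
# Hypothetical passive-sensing sketch; terms and thresholds are illustrative only.
RISK_TERMS = ("suicide", "kill myself", "self-harm", "end my life")

def screen_flag(ocr_text: str) -> bool:
    """True if text extracted from a screenshot mentions a risk term."""
    lowered = ocr_text.lower()
    return any(term in lowered for term in RISK_TERMS)

def should_alert(ocr_text: str, ema_distress: int, resting_hr: float) -> bool:
    """Combine screen content, an EMA distress rating (0-10), and heart rate."""
    return screen_flag(ocr_text) and (ema_distress >= 7 or resting_hr > 100)

print(should_alert("searched: how to end my life", ema_distress=8, resting_hr=72))  # True
```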

Natural language processing models are also proving useful for researchers. A team at Drexel University in Philadelphia has shown that GPT-3 can predict dementia by analyzing speech patterns (Agbavor, F., & Liang, H., PLOS Digital Health, Vol. 1, No. 12, 2022). Cognitive psychologists are testing GPT’s performance on canonical experiments to learn more about how its reasoning abilities compare to humans (Binz, M., & Schulz, E., PNAS, Vol. 120, No. 6, 2023). Griffiths is using GPT as a tool to understand the limits of human language.

“These models can do a lot of things that are very impressive,” Griffiths said. “But if we want to feel safe in delegating tasks to them, we need to understand more about how they’re representing the world—and how it might differ from the way we think about it—before that turns into a problem.”

With their toolbox for understanding intelligent systems, psychologists are in the perfect position to help. One big question moving forward is how to prepare graduate students to collaborate more effectively with the computer scientists who build AI models.

“People in psychology don’t know the jargon in computer science and vice versa—and there are very few people at the intersection of the two fields,” Jacobucci said.

Ultimately, AI will present challenges for psychologists, but meeting those challenges carries the potential to transform the field.

“AI will never fully replace humans, but it may require us to increase our awareness and educate ourselves about how to leverage it safely,” Joshi said. “If we do that, AI can up the game for psychology in so many ways.”


Also:


The (Artificial Intelligence) Therapist Can See You Now

New research suggests that given the right kind of training, AI bots can deliver mental health therapy with as much efficacy as — or more than — human clinicians.

The recent study, published in NEJM AI, a journal of the New England Journal of Medicine, shows results from the first randomized clinical trial for AI therapy.

Researchers from Dartmouth College built the bot as a way of taking a new approach to a longstanding problem: The U.S. continues to grapple with an acute shortage of mental health providers. "I think one of the things that doesn't scale well is humans," says Nick Jacobson, a clinical psychologist who was part of this research team. For every 340 people in the U.S., there is just one mental health clinician, according to some estimates.

While many AI bots already on the market claim to offer mental health care, some have dubious results or have even led people to self-harm.

More than five years ago, Jacobson and his colleagues began training their AI bot in clinical best practices. The project, says Jacobson, involved much trial and error before it led to quality outcomes.

"The effects that we see strongly mirror what you would see in the best evidence-based trials of psychotherapy," says Jacobson. He says these results were comparable to "studies with folks given a gold standard dose of the best treatment we have available."

The researchers gathered a group of roughly 200 people who had diagnosable conditions like depression and anxiety, or were at risk of developing eating disorders. Half of them worked with AI therapy bots. Compared with those who did not receive treatment, those who did showed significant improvement.

One of the more surprising results, says Jacobson, was the quality of the bond people formed with their bots. "People were really developing this strong relationship with an ability to trust it," says Jacobson, "and feel like they can work together on, on their mental health symptoms."

The strength of the bond and trust between a patient and therapist is one of the overall predictors of efficacy in talk and cognitive behavioral therapy.

Another advantage of AI therapy is the lack of time constraints. Jacobson said he and his team were surprised by how frequently patients accessed their AI therapists. "We had folks that were messaging it about their insomnia symptoms in the middle of the night," he says, "and getting their needs met in these moments."

The American Psychological Association has raised the alarm recently about the dangers of using unregulated AI therapy bots.

But the organization says it's pleased with the rigorous clinical training that researchers invested in this model.

"The therabot in this study checks a bunch of the boxes that we have been hoping technologists would start to engage in," says Vaile Wright, director for the Office of Health Care Innovation at the APA. "It is rooted in psychological science. It is demonstrating some efficacy and safety, and it's been co-created by subject matter experts for the purposes of addressing mental health issues."

Dartmouth researchers stress that the technology is still a long way from market and say they need to run additional trials on the therabot before it will be widely available.

And, Wright says, human therapists should not be intimidated by their AI counterparts. Given the tremendous shortage of mental health providers, "I don't think humans need to be concerned that they're going to be put out of business," she says.

She says the country needs all the quality therapists it can get — be they human or bot.


Also:


Increase In Patients Turning To AI For Therapy

As patients face barriers to mental health care, many are turning to ChatGPT for help, but experts say it brings a risk of devastating mistakes.

With more Australians than ever before reporting barriers to accessible mental health care, GPs have pointed to a growing and worrying trend of patients experimenting with chatbots as a form of talk therapy.

Health experts say they have seen a rise in patients using artificial intelligence (AI) platforms, such as ChatGPT, to seek psychological support.

In one example, a TikTok user is seen writing her thoughts into a Word document before entering instructions such as ‘read the following journal entry and provide an analysis of it’ into ChatGPT.
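
Mechanically, this kind of use amounts to pasting a journal entry into a general-purpose model with an instruction attached; the sketch below, using OpenAI’s Python client, shows roughly what that looks like. The model name is an illustrative assumption, and nothing about this constitutes a clinically validated tool, which is exactly the concern clinicians raise.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

journal_entry = "Today I felt overwhelmed at work and couldn't switch off afterwards."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not specified in the article
    messages=[{
        "role": "user",
        "content": "Read the following journal entry and provide an analysis of it:\n\n" + journal_entry,
    }],
)
print(response.choices[0].message.content)
```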

This comes at a time when mental health services continue to be difficult to access, with long wait times and closed books leaving many patients feeling unsupported.

Dr James Collett, a Psychologist and Senior Lecturer at RMIT University, has himself noticed the trend and says AI therapies are ‘here to stay’.

But he said the use of general platforms, such as ChatGPT, could lead to patients missing out on important elements of psychotherapy, such as personal trust and rapport, and so ‘might not be getting the best support’.

‘What we’re seeing is unsupervised seeking of mental health support online,’ Dr Collett told newsGP.

‘There might be cases where people are talking about topics that they realistically need support with, and we would be worried about their welfare, but that’s not coming to light because they’re using ChatGPT.

‘There’s probably some superficially useful therapeutic advice that it can draw on, but it’s not necessarily matching that to clients’ individualised needs.’

Dr Collett said AI could be used to complement psychological therapy, such as when people are considering whether to seek psychological support, and to provide scaffolding between sessions or once a course of treatment has been completed.

However, he said these new therapies must be developed by teams with training and experience in psychotherapy.

‘I don’t think that there’s any putting AI back in the bottle, it’s out in the world, so I think it would be naive to propose a message like “we should never use AI for anything to do with therapy”,’ Dr Collett said.

‘I’m sure there are people out there developing therapeutic-oriented AI with an evidence base behind them, I would envisage that is probably the ideal future of AI use in psychotherapy.’

This rise in AI therapy comes at a time when costs, lack of availability, and patients not knowing where to seek help are the top three barriers to people getting the care they want, according to a recent Australian Psychological Society survey.

Dr Cathy Andronis, Chair of RACGP Specific Interests Psychological Medicine, told newsGP there are many considerations for clinicians and patients as they consider how to best use AI.

‘While there are benefits for GPs using AI, mostly some time saving with note taking of the conversation, there are many more risks for GPs in a consultation with patients discussing sensitive mental health-related content,’ she said.

Mental health continues to be one of the top reasons patients are seeing a GP, with the RACGP’s 2024 Health of the Nation report finding psychological issues remain in the top three presentations for 71% of GPs.

In August, a four-year review from more than 50 leading psychiatrists, psychologists, and those with lived experience across five continents described the rise in youth mental health problems as a ‘global crisis’.

It found that in less than 20 years, there has been a 50% increase in rates of mental ill-health among Australian youth, with a peak age of onset of 15 years and 63–75% of onsets occurring before the age of 25.

In response, Dr Andronis highlighted that experienced clinicians use metacognition to understand and support their patients, and work towards helping them develop skills to manage on their own – something which AI cannot yet do.

‘An astute clinician recognises key themes and content which are affecting the patient and contributing to their problems; an AI transcriber cannot do this, as it requires reflective skills,’ she said.

‘This capacity for metacognition is the trademark of an experienced therapist.

‘As psychotherapists become more experienced, they focus less on content and more on the process components of therapy … this is our expertise.’

She said it remains to be seen if AI can ever develop the ability for these reflective skills.

‘There is research currently into AI-assisted therapy using chatbots – these are being trained by psychotherapists,’ Dr Andronis said.

‘They can only learn what we teach them. And if they are not well trained, nor taught by experienced therapists, they will make mistakes, including potentially fatal ones for the patient.’