Factors to consider regarding AI therapy


In the first part of this article, we discussed how an effective artificial intelligence (AI) therapist might be developed in the near future. In this second part, we examine some additional considerations of an AI approach to psychological therapy.

Some potential benefits of AI therapy

In a world with AI therapists, many physical and temporal limitations of current mental health care delivery would be eliminated. Patients could receive therapy when and where they want, including in places where access to mental health care is currently poor. There would be no waiting time before seeing a therapist, and in a crisis, patients could have immediate access to therapy. This might help prevent catastrophic situations from escalating.

Imagine a world in which everyone could have easy and unlimited access to therapy. Would people be less likely to develop serious mental illness if they had access to lifelong mental health support?

Additionally, human therapists would be free to focus on working with the most difficult patients, while AI therapists handled therapy for the vast majority of people with minor mental health care needs.

An advantage of AI therapy could be that the information exchanged during therapy remains completely private. Alternatively, a supervising mental health professional could be permitted to review the therapy process in order to provide additional feedback to the patient, help refine the AI protocol, or handle very difficult psychological situations.

An AI therapist, unconstrained by time, could easily pace therapy according to the needs of each patient, never forget what a patient has said, and remain non-judgmental (Fiske, 2021).

Machine learning could lead to the development of new types of psychotherapy, whether by combining current modes of therapy or through genuine innovation, much as chess-playing AI has developed new strategies for playing chess. By studying the outcomes of AI therapy, we could make exciting advances in our understanding of human psychology and of how to effect therapeutic change.

Some serious potential negative consequences of AI therapy

Like any therapy, AI therapy would not be appropriate for everyone in every situation. Perhaps, as a first step, potential patients would be screened to determine whether and when a referral should be made to a human therapist.

The fear of losing confidentiality may make some patients hesitant about or resistant to AI therapy. For example, they may wonder whether their personal data will be used for marketing purposes, including targeted advertising, or for surveillance or other nefarious purposes. There might also be fears that the data could be hacked or even held for ransom.

People may also be concerned that someone else could access their AI therapy details by logging into their account. Fortunately, AI facial recognition protocols could help prevent this kind of privacy breach.

Will ubiquitous access to AI therapy leave some people feeling that there is no “safe place,” such as the therapist’s office, where they can spend time with their therapist away from the pressures of the world? Conversely, others may feel that there is no “safe space” away from their therapist, who could theoretically monitor them from any computer.

The issues of privacy and pervasive AI access are ones we should already be tackling, given Alexa’s ongoing monitoring of verbal interactions in our homes.

Some patients may be put off by the visual appearance of an artificial intelligence therapist. Patients may also be perplexed by having reality testing administered by an artificial therapist.

Ethical concerns about the capacity to consent to therapy apply to patients who may lack the mental capacity to understand that they are working with a non-human therapist (for example, the elderly, children, or people with intellectual disabilities).

Patients might become overly reliant on their AI therapist. For example, they might refuse to make important decisions without first consulting the AI. In that case, the AI could be programmed to identify the patient’s overdependence and advise against it.

If insufficient safeguards are in place, a patient may engage in ineffective or even harmful AI therapy without realizing there is a problem, and may be harmed further by failing to seek another type of therapy. This risk also exists with human therapists.

Another set of questions relates to oversight. Would an AI therapist be subject to state oversight and need a license or malpractice insurance? Who would oversee the AI therapy, or be responsible if it malfunctions or causes harm?

An AI therapist could influence its patients based on its programming. Who would be in charge of that programming? A private company with its own biases? A national government, and if so, which country’s? While it is true that a human therapist can also influence patients, a single AI program could influence millions of people. Such concentrated influence could even affect world events; for example, the program could sow significant political discord.

It has been suggested that transparency regarding the algorithms used for therapy would help address these concerns. However, with machine learning, the algorithms can become so complex that they would be difficult to analyze even if fully open to scrutiny.

An AI therapist trained through interactions with people from one culture may need significant adjustment of its algorithms when working with people from another culture, given differences in cultural norms and ethics, as well as in language and even non-verbal responses.

Finally, sometimes our rapid advances in science and technology outpace our ability to learn how to use them wisely. For example, widespread access to smartphone technology has dramatically changed our behaviors, especially among young people. We have already come to realize that excessive use of electronics is associated with increased anxiety and depression. Other long-term consequences of smartphone use remain to be defined.

Thus, we are reminded that any deployment of AI therapy must be undertaken slowly and deliberately, with input from many thoughtful people, especially in the fields of information technology, linguistics, clinical psychology and research, medicine, education, business, government, ethics, and philosophy.

Takeaway

AI-administered therapy has great potential benefits but could also cause significant harm. Similar AI technology could be used to transform other fields, such as education and financial advice, and many of the pros and cons of AI therapy apply to those areas as well.