This article is part of a limited series about the potential of artificial intelligence to solve everyday problems.
Imagine a test as quick and easy as taking your temperature or your blood pressure that could reliably identify an anxiety disorder or predict an impending depressive relapse.
Health care providers have many tools at their disposal to assess a patient’s physical condition, but they have no reliable biomarkers, objective indicators of medical states observable from outside the patient, for assessing mental health.
But some AI researchers now believe that the sound of your voice may be the key to understanding your mental state, and that AI is perfectly suited to detecting changes that are difficult, if not impossible, to perceive otherwise. The result is a set of apps and online tools designed to track your mental state, as well as programs that deliver real-time mental health assessments to health care providers and call centers.
Psychologists have long known that certain mental health problems can be detected by listening not only to what a person says but to how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine.
In patients with depression, Dr. Espinola said, “their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can also change the way their voice sounds, she said. “They tend to speak faster. They have more difficulty breathing.”
Today, these types of vocal features are being used by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. Deep-learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that might not be evident even to trained experts.
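As a purely illustrative sketch, and not any company’s actual pipeline, the kinds of acoustic features the researchers describe (loudness, frequency of pauses, pitch variability) can be computed from a raw waveform with a few lines of NumPy. All the function and feature names below are hypothetical:

```python
import numpy as np

def voice_features(signal, sr=16000, frame_ms=25):
    """Compute a few simple acoustic features of the sort cited in
    depression research: loudness (RMS energy), pause fraction, and a
    rough proxy for pitch variability. Illustrative only, not clinical."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Frame-level loudness: root-mean-square energy per 25 ms frame
    rms = np.sqrt(np.mean(frames ** 2, axis=1))

    # "Pauses": frames whose energy falls below 10% of the median frame
    pause_fraction = float(np.mean(rms < 0.1 * np.median(rms)))

    # Zero-crossing rate as a crude stand-in for pitch/brightness
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

    return {
        "mean_loudness": float(rms.mean()),
        "loudness_range": float(rms.max() - rms.min()),
        "pause_fraction": pause_fraction,
        "pitch_proxy_variability": float(zcr.std()),
    }

# Synthetic example: a two-second 440 Hz tone with a half-second silence
sr = 16000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)
signal[sr // 2 : sr] = 0.0  # the "pause"

features = voice_features(signal, sr)
print(features)
```

A real system would feed features like these (or the raw audio itself) into a trained model; the point here is only that a flat, quiet, pause-heavy voice leaves measurable numerical traces.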
“The technology that we’re using now can extract features that can be meaningful that even the human ear can’t pick up on,” said Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital.
“There’s a lot of excitement around finding biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, like clinician-administered interviews or self-report measures,” she said. Other clues researchers are tracking include changes in activity levels, sleep patterns and social media data.
These technological advances come at a time when the need for mental health care is especially acute: according to a report by the National Alliance on Mental Illness, one in five adults in the United States experienced mental illness in 2020. And the numbers continue to climb.
Although AI technology cannot fix the shortage of qualified mental health care providers (there are not nearly enough to meet the country’s needs, Dr. Bentley said), there is hope that it can lower the barriers to receiving a correct diagnosis, help clinicians identify patients who may be hesitant to seek care and facilitate self-monitoring between visits.
“A lot can happen between appointments, and technology can really offer us the potential to improve monitoring and assessment in a more continuous way,” Dr. Bentley said.
To test this new technology, I started by downloading the Mental Fitness app from Sonde Health, a health technology company, to see whether my feelings of malaise were a sign of something serious or whether I was just exhausted. Described as a “voice-powered mental fitness tracking and journaling product,” the free app invited me to record my first check-in, a 30-second spoken journal entry that would rate my mental health on a scale of 1 to 100.
A minute later, I had my score: a not-so-great 52. “Pay attention,” the app warned.
The app flagged that the level of liveliness detected in my voice was notably low. Did I sound monotone simply because I had been trying to speak quietly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or decluttering my space? (That first question may point to one of the app’s possible shortcomings: as a user, it can be difficult to know why your voice levels fluctuate.)
Later, feeling jittery between interviews, I tested another voice-analysis program, this one focused on detecting anxiety levels. The StressWaves Test is a free online tool from Cigna, the health care and insurance conglomerate, developed in collaboration with the AI specialist Ellipsis Health to evaluate stress levels using 60-second samples of recorded speech.
“What keeps you up at night?” the website prompted. After I spent a minute recounting my persistent worries, the program scored my recording and sent me an email with the verdict: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-improvement tips.
Other technologies add a potentially helpful layer of human interaction, like Kintsugi, a company based in Berkeley, Calif., that raised $20 million in Series A funding earlier this month. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.
Founded by Grace Chang and Rima Seyilova-Olson, who bonded over their shared past experiences of struggling to access mental health care, Kintsugi develops technology for telehealth and call-center providers that can help them identify patients who might benefit from additional support.
Kintsugi’s voice-analysis program could, for example, prompt a nurse to take an extra minute to ask a frazzled parent of a colicky baby about their own well-being.
One concern about the development of these types of machine learning technologies is the issue of bias, or ensuring that the programs work equitably for all patients, regardless of age, gender, ethnicity, nationality and other demographic factors.
“For machine learning models to work well, you really need to have a very large and diverse and robust set of data,” Ms. Chang said, noting that Kintsugi uses voice recordings from around the world, in many different languages, to guard against this problem in particular.
Another major concern in this nascent field is privacy, particularly since voice data can be used to identify individuals, Dr. Bentley said.
And even when patients agree to be recorded, the question of consent is sometimes twofold. In addition to assessing a patient’s mental health, some voice-analysis programs use the recordings to develop and refine their own algorithms.
Another hurdle, Dr. Bentley said, is consumers’ potential mistrust of machine learning and so-called black box algorithms, which work in ways that even the developers themselves cannot fully explain, particularly which features they use to make predictions.
“There’s the creation of the algorithm, and then there’s the understanding of the algorithm,” said Dr. Alexander C. Young, the interim director of the Semel Institute for Neuroscience and Human Behavior and the Department of Psychiatry at the University of California, Los Angeles, echoing the concerns many have about AI and machine learning in general: that little, if any, human oversight is present during the program’s training phase.
For now, Dr. Young remains cautiously optimistic about the potential of voice analysis technology, especially as a tool for monitoring patients.
“I do believe we can model people’s mental health status, or approximate their mental health status in a general sense,” he said. “People like being able to self-monitor their status, particularly with chronic illnesses.”
But before automated voice analysis technologies go into widespread use, some are calling for rigorous research into their accuracy.
“There really needs to be more validation, not only of voice technology but of AI and machine learning models built on other data streams,” Dr. Bentley said. “And we need to get that validation from large-scale, well-designed, representative studies.”
Until then, AI-driven voice-analysis technology remains a promising but unproven tool, one that may eventually become an everyday method of taking the temperature of our mental well-being.