
Foundation models for generalist medical artificial intelligence




The Use Of Artificial Intelligence And Natural Language Processing For Mental Health Interventions

In a recent article published in Translational Psychiatry, researchers performed a systematic review and meta-analysis of scientific papers using an artificial intelligence (AI)-based tool that uses Natural Language Processing (NLP) to examine mental health interventions (MHI).

Study: Natural language processing for mental health interventions: a systematic review and research framework. Image Credit: MMD Creative/Shutterstock.com

Background

Globally, neuropsychiatric disorders, such as depression and anxiety, pose a significant economic burden on healthcare systems. The economic burden of mental health conditions is estimated to reach six trillion US dollars annually by 2030.

Numerous MHIs, including behavioral, psychosocial, pharmacological, and telemedicine interventions, appear effective in promoting the well-being of affected individuals. However, systemic issues in how they are delivered limit their effectiveness and their ability to meet increasing demand.

Moreover, the clinical workforce is scarce and requires extensive training in mental health assessment, the quality of available treatment varies, and current quality assurance practices cannot address the reduced effect sizes seen as MHIs are deployed more widely.

Given the low quality of MHIs, especially in developing countries, there is a need for more research on developing tools, especially machine learning (ML)-based tools, that facilitate mental health diagnosis and treatment.

NLP enables the quantitative study of conversation transcripts and medical records from thousands of patients at scale. It renders words into numeric and graphical representations, a task previously considered impractical. More importantly, it can examine the characteristics of providers and patients to detect meaningful trends in large datasets.
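To make the idea of numeric text representations concrete, here is a minimal sketch in Python that turns a few invented transcript snippets into TF-IDF vectors with scikit-learn; the snippets, and the choice of TF-IDF rather than any particular method from the reviewed studies, are illustrative assumptions only.

# Minimal illustration: converting short, invented transcript snippets into
# numeric TF-IDF vectors. Not taken from any of the reviewed studies.
from sklearn.feature_extraction.text import TfidfVectorizer

snippets = [
    "I have been feeling anxious before every session",
    "sleep has improved since we changed the plan",
    "I still feel low most mornings",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(snippets)   # one row of term weights per snippet

print(X.shape)                           # (3, number_of_distinct_terms)
print(vectorizer.get_feature_names_out())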

Digital health platforms have made MHI data more readily available, making it possible for NLP tools to support analyses of treatment fidelity, patient outcomes, treatment components, therapeutic alliance, and suicide risk.

Lastly, NLP could analyze social media data and electronic health records (EHRs) in mental health-relevant contexts.

While NLP has shown research potential, the current separation between clinical and computer science researchers has limited its impact on clinical practice.

Thus, even though the use of machine learning in the mental health domain has increased, clinicians have largely not drawn on peer-reviewed manuscripts from AI conferences that report advances in NLP.

About the study

In the present study, researchers classified NLP methods deployed to study MHI, identified clinical domains, and used them to aggregate NLP findings.

They examined the main features of the NLP pipeline in each manuscript, including linguistic representations, software packages, classification, and validation methods. Likewise, they evaluated each study's clinical setting, goals, transcript origin, clinical measures, ground truths, and raters.

Moreover, the researchers evaluated NLP-MHI studies to identify common areas, biases, and knowledge gaps in applying NLP to MHI to propose a research framework that could aid computer and clinical researchers in improving the clinical utility of these tools.

They screened articles in the PubMed, PsycINFO, and Scopus databases to identify studies focused solely on NLP for human-to-human MHIs for assessing mental health (e.g., psychotherapy, patient assessment, psychiatric treatment, and crisis counseling).

Further, the researchers searched peer-reviewed AI conferences (e.g., the Association for Computational Linguistics) through arXiv and Google Scholar.

They compiled articles that met five criteria:

i) were original empirical studies;

ii) were published in English;

iii) were peer-reviewed;

iv) were MHI-focused; and

v) analyzed MHI-retrieved textual data (e.g., transcripts).

Results

The final sample set comprised 102 studies, primarily involving face-to-face randomized controlled trials (RCTs), conventional treatments, and collected therapy corpora.

Nearly 54% of these studies were published between 2020 and 2022, suggesting a surge in NLP-based methods for MHI applications.

Six clinical categories emerged from the review: two relating to patients, two to providers, and two to patient-provider interactions.

These were clinical presentation and intervention response (for patients), intervention monitoring and provider characteristics (for clinicians), and relational dynamics and conversational topics (for interactions). All six operated simultaneously as factors in treatment outcomes.

While clinicians provided ground truth ratings for 31 studies, patients did so through self-report measures of symptom feedback and treatment alliance ratings for 22 studies. The most prevalent source of provider/patient information was Motivational Interviewing Skill Code (MISC) annotations.

Multiple NLP approaches emerged, reflecting the temporal development of NLP tools and shifts in how patient-provider conversations are represented linguistically. Word embeddings were the most common language representation, used in 48% of studies.

The two most prevalent NLP model features were lexicons and sentiment analysis, used in 43 and 32 studies, respectively. The latter generated feature scores for emotions (e.g., joy) derived from lexicon-based methods.
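As a rough illustration of the lexicon-based feature scoring described above, the sketch below counts matches against a tiny hand-made emotion lexicon; real studies use much larger validated lexicons, and the word lists here are invented for demonstration.

# Illustrative lexicon-based emotion scoring with a tiny, invented lexicon.
import re
from collections import Counter

LEXICON = {
    "joy": {"glad", "happy", "relieved", "hopeful"},
    "sadness": {"low", "down", "hopeless", "tired"},
}

def emotion_scores(text):
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    return {emotion: sum(tokens[word] for word in words)
            for emotion, words in LEXICON.items()}

print(emotion_scores("I felt hopeful and relieved, though still tired."))
# -> {'joy': 2, 'sadness': 1}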

Eventually, context-sensitive deep neural networks replaced word count and frequency-based lexicon methods in NLP models. A total of 16 studies also used topic modeling to identify common themes across clinical transcripts.
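Topic modeling of the kind mentioned above can be sketched with latent Dirichlet allocation in scikit-learn; the four toy documents and the choice of two topics are placeholders, not settings reported in the review.

# Illustrative topic-modeling sketch (LDA) over invented documents.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "trouble sleeping and waking up tired every day",
    "worry and panic before work meetings",
    "sleep schedule improving with the new routine",
    "racing thoughts and worry late at night",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {top_terms}")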

After linguistic content, acoustic characteristics emerged as a promising source of treatment data, with 16 studies examining acoustic features of patient and provider speech.

The authors noted that research in this area has shown considerable progress in mental health diagnosis and treatment specification, and has also proven useful in assessing the quality of therapy delivered to patients.

Accordingly, they proposed integrating these distinctive contributions into one framework (NLPxMHI) that helps computational and clinical researchers collaborate and outlines novel NLP applications for innovation in mental health services.

Only 40 studies reported demographic information for the datasets used. The authors therefore recommended that NLPxMHI researchers document the demographic data of all individuals represented in their models' training and evaluation.

In addition, they emphasized over-sampling underrepresented groups to help address biases and improve the representativeness of NLP models.
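One simple way to act on that recommendation is random over-sampling of the underrepresented group before training, sketched below with pandas; the column names and group labels are hypothetical.

# Illustrative random over-sampling so each group is equally represented
# in the training data. Column names and group labels are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "transcript_id": range(10),
    "group": ["majority"] * 8 + ["minority"] * 2,
})

largest = df["group"].value_counts().max()
balanced = pd.concat(
    [g.sample(largest, replace=True, random_state=0)
     for _, g in df.groupby("group")],
    ignore_index=True,
)

print(balanced["group"].value_counts())   # both groups now have 8 rows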

Further, they recommended representing treatment as a sequence of actions to improve the accuracy of intervention studies, emphasizing the importance of timing and context to beneficial effects. Integrating the identified clinical categories into a unified model could also help investigators enrich treatment recommendations.

Few of the reviewed studies implemented techniques to enhance interpretability, which likely hindered investigators from interpreting the overall behavior of NLP models across inputs.

Nonetheless, ongoing collaboration between the clinical and computational domains will gradually narrow the gap between interpretability and accuracy through clinical review, model tuning, and generalizability testing. In the future, this might help outline valid treatment decision rules and fulfill the promise of precision medicine.

Conclusions

Overall, NLP methods have the potential to operationalize MHIs, and proof-of-concept applications have shown promise in addressing systemic challenges.

However, as the NLPxMHI framework bridges research designs and disciplines, it will also require the support of large, secure datasets, a common language, and equity checks for continued progress.

The authors anticipate that this could revolutionize the assessment and treatment of mental health conditions.


Advancing Data Literacy And Democratization With AI And NLP: Q&A With Qlik's Sean Stauth

Sep 20, 2023

The boom in generative AI interest serves as a visible tipping point in the years-long journey of the enterprise embracing the power of data interaction through natural language processing (NLP). In recent years, NLP has undergone significant changes that have made it increasingly easy for users at all skill levels to handle and explore data without being data scientists.

The adoption of generative AI approaches is the latest example of NLP's increasing potential to advance data literacy and democratization across the enterprise as well as drive performance for every employee.

Sean Stauth, global director of AI and machine learning at Qlik, discussed how NLP and generative AI can advance data literacy and democratization.

As director of AutoML solutions, Stauth helps Qlik's global customer base develop and gain success with their AI and machine learning initiatives. A believer in the power of AI and predictive analytics to help companies with their strategic needs, Stauth has spent his career helping companies build AI- and data-driven products.

How can NLP and generative AI advance data literacy and democratization across the enterprise? Can you elaborate on this?

The largest barrier to widespread adoption of analytics within organizations is data literacy and the requisite skills. Not everyone is analytical or cares to spend time evaluating data for patterns and insights. Executives just want results, and managers often can't afford the time needed to crunch numbers and make data-driven decisions. NLP and generative AI change the game.

There are numerous examples of natural language interfaces being used by many people every single day. And in more recent years, NLP has undergone some significant changes thanks to advancements in machine learning and deep learning techniques.

[Take] a service chatbot, searching for a new item on Amazon, basically any Google search—we've all been using natural language to get answers and find items for years. Bringing that type of experience to enterprise analytics just makes sense, since it enables nontechnical staff to feel comfortable asking questions and trusting the answers they get back.

We know from experience that the more someone uses any service or technology, the more comfortable they become. That's part of what's driving this incredible interest in generative AI. The interface is so simple to use, and the results are easily understood, that there's really no skill gap to overcome.

Because of this, as organizations start to bring generative AI into more workflows, staff will naturally start to use those services more regularly, exposing them to more data and using data more regularly, creating a positive cycle that will only build on itself.

Why is this so important?

Despite continued investments over many years, according to Gartner, 85% of data analytics projects fail. There are various reasons for this, but one key ingredient is the lack of data skills across the business. We often see an enterprise deploy analytics to different parts of the organization, without coupling that with skills training. Inevitably, they don't see the adoption they were hoping for. The result is that despite the investment, staff are making decisions without key data, which leads to decisions that aren't as strategic or impactful.

This also increases the risk of business units being left behind, creating an increasingly stark divide as business opportunities are lost because of it. Generative AI changes the playing field. With simple, intuitive interfaces, adoption can move beyond technical departments.

Raising the level of data use and comfort with data through generative AI interfaces will mean more people actually using data in everyday decisions.

What is Qlik doing to help companies achieve data literacy and democratization?

We've long been a champion of data literacy as a founding member of the world's first data literacy project, with leading organizations such as Accenture, Cognizant, and Experian. We've also provided a wide range of data literacy training courses for free to both professionals and academic institutions to help anyone who wants to become more skilled to do so. This is reinforced in our products as well. We've had natural language interactions, search, and AI-powered insights integrated directly into our solutions for years to make it easier for any Qlik user to find answers, explore their data, and discover hidden insights. And a core focus of our R&D efforts is simplifying the adoption of technologies such as machine learning. Our AutoML capability is purpose-designed for business analysts and doesn't require previous expertise in data science or machine learning.

How would you address some of the concerns regarding NLP and generative AI?

The biggest issues we see right now with generative AI are driven by data quality and governance. Trust is key. But any organization that tries to shut down use of generative AI due to risk is kidding themselves—people are going to use it no matter what, given how easy and powerful it is. Organizations need to be proactive in identifying the areas where generative AI can bring value. At the same time, they need to audit their data framework and set up the right data quality and governance processes. Governance ensures core enterprise data is not being used outside the four walls. Data quality keeps you from feeding incomplete or biased data to the algorithm, which is crucial in reducing the hallucinations everyone is hearing about. Simply put, there is no generative AI without data—it's all about the data, but it has to be the right data. So, getting your "data house" in order is where it all begins.

How do NLP and generative AI assist data engineers and analysts? Are there other job roles that will benefit?

We are seeing many instances where NLP and generative AI are helping developers augment their efforts with code generation, taking out hours of manual time that they can then apply to other tasks. It can massively accelerate previously mundane tasks like data discovery and preparation. Similarly, analysts can more quickly explore data for what-if scenarios, especially when using NLP or generative AI as a layer on top of an AutoML solution for predictive analytics efforts.

What does the future hold in this area?

We're only in the early days of seeing what new applications will come from generative AI and NLP. We see the market quickly heading to enterprises deploying private instances of these technologies with large and small language models within their own networks, instead of using open public platforms to gain control and avoid risk. We also see an increase in adoption of data quality and governance—since these are crucial in making sure the algorithms are being fueled by trusted, relevant, and unbiased data so users can trust the outcomes and make more confident and certain business decisions.


AI: Is The Intelligence Artificial Or Amplified?

Mark Heymann, Managing Partner. Mark Heymann & Assoc. HFTP Hall of Fame; BA Economics Brown Univ, MS Business, Columbia Univ.


In today's environment, there's barely a day that goes by when there isn't some discussion or article written about the latest in artificial intelligence. It's a very exciting time as we look at what computers can accomplish with or without human intervention.

To take half a step back and level the playing field, ensuring clarity in the discussion that follows, I will highlight four key areas of what is called artificial intelligence.

• Machine Learning: A process by which a system gains more information that enables it to parse data and, based on all of this historical information, make predictions about what is going to happen in the future (a minimal illustration follows this list).

• Deep Learning: This refers to a machine learning approach that utilizes artificial neural networks, employing multiple layers of processing to progressively extract more advanced features from data.

• Natural Language Processing: Natural language processing (NLP) employs machine learning techniques to unveil the underlying structure and significance within textual content. Through NLP applications, businesses can analyze text data and gain insights about individuals, locations and events, enabling a deeper comprehension of social media sentiment and customer interactions.

• Cognitive Computing: Cognitive computing pertains to technology frameworks that, in a general sense, draw from the scientific domains of artificial intelligence and signal processing. These frameworks encompass a range of technologies, including machine learning, logical reasoning, natural language processing, speech recognition, visual object recognition, human-computer interaction, as well as dialog and narrative generation, among other capabilities. There is currently no agreed-on definition for cognitive computing in the industry or academia.
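As a concrete, minimal illustration of the machine learning bullet above, the sketch below fits a model to historical observations and uses it to predict a future value; the booking and guest numbers are invented for demonstration.

# Minimal machine learning illustration: learn from historical data, then
# predict a future value. All numbers are invented.
from sklearn.linear_model import LinearRegression

# Historical data: advance bookings vs. actual guests served.
bookings = [[42], [55], [38], [61], [47], [70]]
guests = [95, 118, 88, 131, 104, 150]

model = LinearRegression().fit(bookings, guests)

# Predict the guest count for a day with 58 advance bookings.
print(round(model.predict([[58]])[0]))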

Computers And Decision Making

My intent here is not to rehash a group of definitions, but with this as a baseline, I want to specifically turn to decision making and how much involvement computers should have in this process.

I think one of the keys to where the final decision lies depends upon not just the impact of a decision on the business but also the risk profile of the decision's outcome. Further, when that decision is assessed and reviewed, who will be held accountable for the result? This does not seem to be an area that discussions of artificial intelligence focus on very much.

Years ago—literally over 40 years ago—we developed some initial technology to help hotels predict revenue center activity. These centers not only accounted for daily room occupancy but also factored in the anticipated number of guests to other facilities, such as restaurants and bars. This process resembled the familiar task of forecasting widget production to align with demand while avoiding any significant inventory excesses.

The approach at that time was what we now commonly call machine learning. Over time, these technologies and algorithms have evolved to now fall more into the category of deep learning. But at the end of the day, regardless of any computer-generated predictions, it was still up to the manager of the specific revenue center or production environment to make the final decision on projected volume.

Once that decision was made, one of the key areas influenced by these projections was staffing levels. This pertained not only to daily staffing but, in the service industry, often extended to staffing levels in half-hour increments as needed.
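To illustrate the kind of translation described above, the sketch below converts a half-hour demand forecast into staffing levels using a hypothetical productivity standard; all numbers are invented and not drawn from the systems discussed here.

# Illustrative conversion of forecast covers per half-hour into staffing,
# assuming a hypothetical standard of 12 covers per server per half-hour.
import math

forecast_covers = {"07:00": 18, "07:30": 30, "08:00": 44, "08:30": 26}
COVERS_PER_SERVER = 12

staffing = {
    slot: max(1, math.ceil(covers / COVERS_PER_SERVER))
    for slot, covers in forecast_covers.items()
}

print(staffing)   # {'07:00': 2, '07:30': 3, '08:00': 4, '08:30': 3}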

As systems have advanced and the scope of data analysis has expanded, the accuracy of predictions has consistently improved. However, it remains a rarity for the manager overseeing this specific aspect of the operation to be fully removed from the final predictions, which encompass staffing and cost levels that will be incurred.

Where Human Intervention Is Needed

Turning now to the broader economy and taking a look at where AI is being tried, we see examples where the systems that are being used have no human intervention whatsoever. At times, it is clear that human intervention is absolutely needed.

Consider, for example, trading systems within the stock market. In such systems, human intervention has proven critical in preventing excessively wide market fluctuations. This is just one area, but I'm sure if you take a moment to sit back and think about other areas where computers are making decisions based on some level of AI, you'll find many more examples of where human intervention is still crucial.

The Business Impact Of Decisions

As we look at the application of what is broadly called artificial intelligence, it becomes more and more important to understand the risk impact of specific decisions on business results. Simply put, the larger the impact of a decision on an operation, the more important it is to ensure that the decision is not left completely to the computer.

If the decision to be made has a very low risk of business failure and/or the cost of failure is very low, then it's easy to turn to the computer for determination.

We all remember when Deep Blue played chess and, at first, suffered defeat. However, as it continued to learn, it won chess matches, sparking our excitement about the computer's capabilities. Nevertheless, it's important to recognize that winning a chess game, which holds little real-world consequence, is quite different from the task of making decisions such as estimating the demand for breakfast service or predicting the number of travelers heading to Chicago.

The cost of getting that number wrong or the impact on other revenue centers can be significant, counting both direct and indirect impacts.

Therefore, I believe it benefits us to understand the consequences of the decisions being made, as well as the associated costs and risks of potential failures. This understanding can guide us in determining the appropriate level of management involvement in making the final decision. Final accountability for decision making in key areas needs to remain with management, especially when the cost of failure is high.

Over time, computer information and interpretation will become more important and enlightening. But as we look for accountability in management decisions, we may want to think more about AI being defined as "amplified" intelligence as compared to purely "artificial."








