
HHCN FUTURE: The Role of Data, AI and Emerging Technologies in Home Care

This article is sponsored by AlayaCare. It is based on a Home Health Care News discussion with Naomi Goldapple, SVP of Data and Intelligence at AlayaCare, which took place on August 30, 2023, during the HHCN FUTURE Conference. The article below has been edited for length and clarity.

Home Health Care News: I think there's a lot of exciting technology out there, and we're going to talk through a little bit of what's real, what's not, what's available right now, and what's coming in the future.

A lot of what we'll discuss relates to workforce shortages and payment challenges, which are driving technology innovation. Naomi, I want to ask you at a very high level: what do you see as the two or three game-changing types of technology in home-based care right now?

Naomi Goldapple: We can't ignore LLMs, the large language models that power ChatGPT. I don't mean ChatGPT itself, the tool everybody uses to cheat in school, write recommendation letters or translate, but the underlying models that you can use for all kinds of different applications. It's really quite exciting and revolutionary right now. I can tell you that there are things my team had been building for the last year and a half using natural language processing [NLP] that we literally had to trash because these new models are just so much better. There's so much you can do with them.

It's very exciting. I also think the world of wearables is becoming more and more of a reality, so it's less gimmicky than it was before. Think of little sensors in the home for fall detection, or something you strap onto your grandmother to detect changes. Those used to be a bit invasive and, I'd say, gimmicky, and now they're becoming a lot less invasive.

It's going to be a game changer when we don't have enough caregivers to be physically in the home. The last one, which I've always been a big proponent of, is voice: being able to use the Alexas, the Google Minis and all of that, but paired with large language models. Instead of writing your prompts, these can be spoken questions and it works just as well. There's a lot you can do for remote patient monitoring and for drug adherence, and all kinds of other things you can do with this technology.

These are areas that have practical uses right now. What about long term? What’s coming next?

What I think is coming next is these things becoming mainstream, becoming part of everyday processes and regular technology that's actually embedded in workflows. Right now it's a little bit gimmicky. You might be using ChatGPT to help a little here and there, but how do you actually leverage it to squeeze costs out of your processes and really make things more efficient? I think we're going to see real differences in efficiency in the next two to three years because, honestly, there's no choice. Everyone's got to cut costs.

The long term might not be as sexy, but it's really about taking these sexy things and making them a reality.

We're going to talk a little bit more extensively about AI now. How could predictive AI in particular revolutionize data utilization for home-based care agencies, allowing them to make better decisions than they currently do?

That's an easy one, because with predictive algorithms there are so many different things you can predict. There are companies making real gains with, for example, claims, or with questions like: should I take on this referral? Is it going to be profitable? Do I have the staff to cover it? Those types of things are really important for keeping profitability up.

There's also predicting risk, the risk of so many things. We're in a caregiver shortage, so who's at risk of leaving? That's pretty important, and you need to know early who's going to leave so that you can actually mitigate it and get those early warning signs. Something I've been working on a lot is the risk of hospitalization, the risk of any of these adverse events.

It's being able to automatically consume all of the data points collected at every single visit, put people into categories such as high, medium and low risk, and then decide what to do about it, so that I can meaningfully move the needle on the measurements.
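To make that concrete, here is a minimal sketch of what that kind of risk scoring could look like in code. It is illustrative only: the features, labels and library choices are assumptions for the example, not AlayaCare's actual model.

```python
# Hypothetical sketch: score clients for hospitalization risk from per-visit
# data. Feature names and values are invented, not a real clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed training data: one row per client, with features pulled from visit
# documentation (missed meds in last 30 days, falls, abnormal-vitals flags),
# labeled 1 if a hospitalization followed within 60 days.
X = np.array([[2, 1, 0], [0, 0, 0], [3, 2, 1], [1, 0, 0], [0, 1, 0], [4, 1, 1]])
y = np.array([1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)

# Score every active client and surface the highest-risk ones for review.
clients = {"client_a": [2, 1, 1], "client_b": [0, 0, 0], "client_c": [1, 1, 0]}
scores = {cid: model.predict_proba([f])[0, 1] for cid, f in clients.items()}
for cid, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cid}: hospitalization risk {p:.2f}")  # decision support, not a diagnosis
```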

From my perspective, and I'm not an expert when it comes to AI, it seems like within the past six months there's been so much hype about ChatGPT. How much of the power of AI is overblown? What can it really do? Is it really game-changing, like we said earlier?

Yes, I think it is, but not in the way everybody got super excited about three or four months ago, when all of a sudden everybody could just go on the internet, ask their own questions and marvel at this agent spewing back amazing stuff. It really quickly democratized AI. It wasn't just a bunch of researchers who could see the power of these models, it was everybody. That got everyone really excited, but it has died down now because, aside from writing a letter for you, doing some translations, maybe writing a blog post, how do I actually weave this into my processes?

That's where it's going to change. There are some things people are using right now, for example, GitHub Copilot, which is to programmers what ChatGPT is to text. It's pretty amazing. You can say, this is what I want to do, I want to build an application that does X, and once you start it off it can literally write all your code for you. Right now, people who are building applications are seeing 30%, 50%, 60% productivity gains. Those things are pretty amazing. Where we can use large language models in home care is things like summarization: going through the nurse's notes, picking out what's important, and summarizing it back.
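For illustration, a summarization call like the one she describes might look something like this, using the OpenAI Python client as a stand-in. The model name and prompt are placeholders, and as she notes later in the conversation, notes would need to be de-identified before leaving your systems.

```python
# Illustrative only: summarize visit notes with a hosted LLM. The model name
# and prompt are placeholders; real deployments would de-identify notes first.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

visit_notes = [
    "Client reported dizziness after breakfast; BP 150/95; reminded re: meds.",
    "Ambulated with walker; mild swelling in left ankle; appetite good.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Summarize these home-care visit notes for the next caregiver. "
                    "List changes in condition, medications, and any risks."},
        {"role": "user", "content": "\n".join(visit_notes)},
    ],
)
print(response.choices[0].message.content)
```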

Being able to do question and answer: tell me about this patient, tell me about this client. Really being able to query any of your datasets with natural language is a game changer. Think of even accounting: I want to know if this particular visit is going to be profitable, and it can go into your dataset and give you back a point of view. But again, we always have to be careful that there's a human involved, because it sounds smart but it's not really smart. It seems intelligent, but it's really still mathematical. Everything is just a prediction: it's predicting the next word, or the next thing, based on what it's been taught.

It's not really using outside intelligence to make that call. You need a human working alongside it.

You mentioned using AI as a means to predict caregiver turnover. Can you talk a little bit more about that? How’s that being done exactly?

Sure. We collect information about our caregivers all the time. We know what their schedules are, we know what their skills are, we know what their availability is, we know what their behaviors are. Do they usually clock in and clock out with accuracy? How long do they have to travel during the day? We have all kinds of information and we can start to see patterns in that information.

Through the research we've done on my team, we can see a few things. One is utilization: the delta between the hours they want to work and the hours we're actually giving them is almost the number one driver of happiness or satisfaction for caregivers. If they have a schedule that allows them to earn a fair wage and get the hours they need, they're going to be pretty happy.

That's a very important metric to keep an eye on, to make sure that delta doesn't get too big. We've also seen things like the delay between hire and the actual first visit. If that's too long, they get disenchanted and leave right away. You have to make sure they get the white-glove treatment, pay attention to what's happening in the first 30, 60 and 90 days, and make sure they feel part of the family. That works well.
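As a rough sketch, those two early-warning signals could be computed like this; the field names and thresholds are invented for illustration, not AlayaCare's actual rules.

```python
# Illustrative early-warning metrics for caregiver retention. Field names
# and thresholds are hypothetical.
from datetime import date

def utilization_delta(desired_hours_per_week, scheduled_hours_per_week):
    """Gap between the hours a caregiver wants and the hours they're given."""
    return desired_hours_per_week - scheduled_hours_per_week

def days_to_first_visit(hire_date, first_visit_date):
    return (first_visit_date - hire_date).days

caregiver = {
    "desired_hours": 35, "scheduled_hours": 22,
    "hired": date(2023, 7, 1), "first_visit": date(2023, 8, 4),
}

flags = []
if utilization_delta(caregiver["desired_hours"], caregiver["scheduled_hours"]) > 10:
    flags.append("under-utilized: offer more hours")
if days_to_first_visit(caregiver["hired"], caregiver["first_visit"]) > 21:
    flags.append("slow onboarding: check in during the first 30/60/90 days")
print(flags)
```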

You can have that on a dashboard, with metrics that let you pinpoint when something is going off, and you can literally pick up the phone and say, "Hey, what's going on? Do you need some more hours?" You can really mitigate this. We've also been trying to understand groups, clusters of caregivers, by their behaviors. We can see there are certain types who clock in and clock out with a lot of accuracy.

Some of them do their documentation right away. When we do this, we get clusters, and we call one of them the hard workers; we can see hard workers have these types of characteristics. Then we had some that came out as sloppy workers, with different characteristics: the highest numbers of clients who block them, they try to clock in when they're not physically there yet, or a bunch of other things. You can start to identify behaviors and say, "This one smells like that type, I want them to be more like this type, how can I nudge their behaviors?" Be really data-driven about it.
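A simplified version of that behavioral clustering might look like the following; the features are invented, and labels such as "hard worker" would come from a human reviewing each cluster rather than from the algorithm.

```python
# Hypothetical sketch: cluster caregivers by behavioral features such as
# clock-in accuracy, documentation lag, and client blocks.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Rows: caregivers. Columns: avg clock-in error (min), hours until notes
# are filed, clients who blocked them, early clock-in attempts per month.
features = np.array([
    [1, 0.5, 0, 0],
    [2, 1.0, 0, 1],
    [15, 30.0, 3, 6],
    [12, 24.0, 2, 5],
    [3, 2.0, 0, 0],
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g., [0 0 1 1 0]: two behavioral groups for a human to review and coach
```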

You've already mentioned some really amazing real-world use cases of AI, and there are probably several more you could pull from, but do you have two or three real-world examples of providers doing something cool with AI that you haven't talked about yet?

Well, we definitely see a lot in terms of claims processing, reducing the error rates in claims. That's where we can use something called anomaly detection: we learn what a clean claim looks like, and then anomalous claims can be picked out before they're submitted so they're not rejected.

There have definitely been impressive reductions in the number of rejected claims by using anomaly-detection algorithms. I'm also seeing more and more of what I was talking about with caregiver churn. And because the industry is forced to be more metrics-driven, with reimbursement based on outcomes, we've seen a lot of providers using these algorithms to get better total performance scores and to try to reduce hospitalizations and falls, so that they can improve their overall metrics and really protect their reimbursements.
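For illustration, a claims check along those lines could use an off-the-shelf anomaly detector such as scikit-learn's IsolationForest; the claim features here are invented for the example.

```python
# Illustrative anomaly detection on claims before submission. Features
# (billed units, rate, visit duration, days to file) are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assume these rows come from historically clean, paid claims.
clean_claims = np.array([
    [4, 32.0, 60, 2],
    [6, 32.0, 90, 3],
    [4, 30.0, 60, 1],
    [8, 35.0, 120, 2],
    [5, 31.0, 75, 4],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(clean_claims)

new_claims = np.array([
    [5, 32.0, 75, 2],    # looks routine
    [40, 32.0, 60, 30],  # 40 units and filed a month late: review before submitting
])
print(detector.predict(new_claims))  # 1 = looks normal, -1 = flag for review
```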

What about future uses? Are there any use cases for AI moving forward that you find really exciting but we’re just not quite there yet?

I alluded to things in the home, and I think this is really where we're going. There aren't enough people to be in the home all the time. People want to stay in the home, and there's technology that's becoming more and more accessible to help monitor in the home, and even interact with the loved one at home, to make sure: are they taking their medications?

Is there anomalous behavior today? Usually they wake up around this time, then they make it to the kitchen around that time. The sensors can notice that they don't seem to be getting up within the same timeframe today. Those alerts can go to a caregiver or to other people to say, "You know what, they might be at risk of a fall, something might have happened," and you can go in and mitigate.
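A toy version of that routine-deviation check might look like this; the baseline, tolerance and alert wording are all assumptions made for the sketch.

```python
# Illustrative routine-deviation check from in-home sensor events. The
# learned baseline, tolerance and alert text are hypothetical.
from datetime import datetime, time, timedelta
from typing import Optional

# Assumed baseline: this client usually triggers the bedroom motion sensor
# (gets up) around 7:15 am, give or take 45 minutes.
USUAL_WAKE = time(7, 15)
TOLERANCE = timedelta(minutes=45)

def check_wake_event(first_motion: Optional[datetime], now: datetime) -> Optional[str]:
    """Return an alert if no morning movement was seen in the usual window."""
    latest_expected = datetime.combine(now.date(), USUAL_WAKE) + TOLERANCE
    if first_motion is None and now > latest_expected:
        return "No morning movement detected; possible fall or change in condition."
    return None

alert = check_wake_event(first_motion=None, now=datetime(2023, 8, 30, 9, 0))
if alert:
    print(alert)  # in practice this would be routed to a caregiver or family member
```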

I think the ability to do remote patient monitoring is getting a lot more sophisticated and can even be interactive. Even things like loneliness, where they can start to talk to these agents, who can talk back: I'd like to hear this song, what's the weather today? There's a storm coming, and they're afraid of storms. There can be more interactivity that can really be leveraged so the caregiver doesn't have to be there 24/7, and that can really help. I'm pretty excited about that.

In one of your previous responses, you mentioned how important it is for there to be a person behind the AI tool. What are some of the other dangers of leaning too heavily into AI?

I don't know if you've heard the term "hallucinations." When all this came out three or four months ago and everybody was playing with ChatGPT, Microsoft's Bing chatbot came out around the same time. People were having conversations with Bing, and all of a sudden it went off on a strange tangent and told one guy he should leave his wife.

I was like, what is going on here? If you don't put guardrails on these models, they can hallucinate. They start to grab information and contextualize it in ways that go off from what you want them to be doing. You have to make sure they're designed properly, and we always have to make sure that what we're building is just decision support.

It's not prescriptive, and it's not a replacement, because it really is the professional who will make that call. There are some funny examples. Let's say you're a caregiver and, before going to a visit, you just want to ask, "Hey, can you tell me what has changed since the last time I visited this client?" Maybe that was the week before.

How nice it would be if it could just summarize for you: "Since the last time you were there, they changed this medication, they fell once, this and that." That would be so great instead of hunting and pecking in your application, or even on paper, trying to read what the last person wrote. When we were playing with this, the first thing that comes back is the basic demographics: this person is a 90-year-old woman with these comorbidities, and so on, so you get a little summarization.

Then there's what's changed since the last visit. In one of them, it told us this person, let's say Mr. X, is a newborn. We were like, a newborn? Why would it say newborn? We realized the birth date field was blank, so it defaulted to today's date, and the large language model just assumed, well, they were born today, therefore it's a newborn. We were like, how do you have a newborn with all those comorbidities? You really need a human to take a look at that, correct those types of things and make sure you train it properly.

One of the dangers is definitely the data. You have to make sure the data is correct and accurate. You also have to put on those guardrails, because you want to make sure you're not sending a bunch of personal health information to OpenAI, which you could do very easily: when you're playing with ChatGPT, they're using your questions and your data to make the model better.

You don't want to be doing that with information in your database. You need to put up guardrails. The other thing is privacy. You have to make sure the data you want to protect is being protected; you can share cleansed, anonymized data and still get the information out.
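As a sketch of those guardrails, a thin layer like the one below could redact obvious identifiers and refuse to prompt on incomplete records (the blank-birth-date problem above). The patterns and field names are illustrative; real deployments would rely on proper de-identification tooling and contractually covered endpoints.

```python
# Toy guardrail: redact obvious identifiers and validate required fields
# before any text is sent to an external LLM. Regexes and fields are
# illustrative only, not a complete de-identification solution.
import re

def redact(text: str) -> str:
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)          # SSN-like
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)  # phone-like
    text = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", text)         # ISO dates
    return text

def build_prompt(record: dict) -> str:
    required = ["date_of_birth", "notes"]
    missing = [f for f in required if not record.get(f)]
    if missing:
        # Don't let the model guess (e.g., a blank birth date becoming "newborn").
        raise ValueError(f"Missing fields, fix the record first: {missing}")
    return redact(f"Summarize changes for this client: {record['notes']}")

print(build_prompt({"date_of_birth": "1933-02-11",
                    "notes": "Fell on 2023-08-12; call 514-555-0199 if dizziness recurs."}))
```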

We spent a good chunk of this conversation so far looking at AI specifically. I want to shift gears and talk about data and data strategies and mistakes that providers typically make when it comes to their data strategies. What are some of the common challenges that home-based care agencies face when it comes to effectively using data?

I'm sure nobody would disagree with me: data capture, data input and consistency. It's getting everybody in your organization to input data in a timely and accurate fashion. Of course, it helps when you have fields that validate, but everything starts with how you're capturing the data. Garbage in, garbage out, whether it's feeding a sophisticated algorithm or just regular reporting. That's pretty important.

Over the past three years, I've gone from talking to maybe one data person at a provider to talking to data teams. Providers are getting more sophisticated, really starting to leverage their data more and understand where it's coming from. One thing we do notice is things like schema changes.

If you're relying on certain data to amalgamate for downstream processes, say you're taking data from one vendor system and data from another vendor system, building a report, and sending that on, maybe it's the utilization report people depend on, then you have to make sure that if one of those vendors, or anybody, changes anything in the database, you're aware of it and can update all your downstream processes so nothing breaks.

Everything is becoming very amalgamated, because you want all the aggregated data together so you can get the fullest picture. You have to make sure the data contracts with wherever the data is coming from are set in place.
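A minimal data-contract check along those lines might look like this; the expected columns and types are invented for the example.

```python
# Illustrative data-contract check: verify a vendor extract still matches the
# schema your downstream reports expect before loading it.
import pandas as pd

EXPECTED_SCHEMA = {
    "visit_id": "int64",
    "caregiver_id": "int64",
    "visit_start": "datetime64[ns]",
    "billed_minutes": "int64",
}

def validate_contract(df: pd.DataFrame) -> list:
    problems = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            problems.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return problems

extract = pd.DataFrame({
    "visit_id": [1, 2],
    "caregiver_id": [10, 11],
    "visit_start": pd.to_datetime(["2023-08-29 09:00", "2023-08-29 13:00"]),
    "billed_minutes": ["60", "45"],  # vendor silently switched this to strings
})
print(validate_contract(extract))  # flags the billed_minutes type change
```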

You just mentioned a few really good ones, but are there any other best practices providers should keep in mind as part of their data strategies to make sure that they’re collecting data that they could then actually act on?

One thing that I always talk about is being very hypothesis-driven. Why are you collecting this data? Why are you putting together this report?

These things are really important: what are you going to do with it? I'm sure you've created, or looked at, dashboards that were interesting at the beginning and then you stopped looking at them, or you look and think, oh yes, but what do you actually do with those results? Especially in AI, you have to think about how people are going to consume the predictions, because in AI everything comes down to a number.

It's a prediction. It's something like 0.67, and you have to convert that into something that's really actionable. Maybe 0.67 means a medium level of risk, but it's rising. If I'm telling a caregiver or a clinical supervisor that this particular patient was stable and now something is changing, what do they do about that? Again, we don't want to be too prescriptive. We don't want to say, "You should do this," because we don't want to be responsible for that.

You still need the professional to make that call, but they can come up with their mitigation strategies: when somebody is medium risk, at risk of a fall and not taking their medications, go to our guides, and this is what you do. You have to make sure everything is actionable. If you see a caregiver who is at risk of churning, what do you do about it?

Do you just say, "Huh, it's too bad, they're probably going to quit next week"? You need to make sure you finish the workflow, think these things all the way through, and actually pull out the proper data that's going to answer those questions.
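To illustrate, turning a raw score like 0.67 into a band, a trend and a suggested, non-prescriptive next step could be as simple as the sketch below; the cutoffs and actions are placeholders a provider would define with its clinical team.

```python
# Illustrative mapping from a raw model score (e.g., 0.67) to a risk band and
# a suggested next step. Cutoffs and actions are placeholders only.
def to_action(score: float, previous_score: float) -> dict:
    if score >= 0.8:
        band = "high"
    elif score >= 0.5:
        band = "medium"
    else:
        band = "low"
    return {
        "band": band,
        "trend": "rising" if score > previous_score else "stable/falling",
        "suggested_step": {
            "high": "clinical supervisor reviews within 24 hours",
            "medium": "check medication adherence and the fall-prevention guide",
            "low": "continue routine monitoring",
        }[band],
    }

print(to_action(0.67, previous_score=0.41))
# e.g., {'band': 'medium', 'trend': 'rising', 'suggested_step': '...'}
```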

During a lot of these conversations, I love going back to real-world examples to paint a picture of the things we're talking about. When it comes to successful and effective data strategies, could you maybe share a real-world example or two of providers doing data-driven decision-making effectively that has had an actual positive impact on their business?

One of the things my team has been working on for the past few years is schedule optimization: using optimization algorithms to make sure we're not leaving big holes in schedules, that the right person is in the right place at the right time with the right skills, and that there's continuity of care. You make all of that configurable. What we've seen now is that a scheduler comes in in the morning and asks, "What are all the vacant visits I have to fill?" They basically press a button, say optimize, and boom, it gives them all the proper matches because it's been configured properly.
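As a toy illustration of that matching step, an assignment solver can pair caregivers with vacant visits by minimizing a configurable cost; the weights for skills, continuity and travel below are invented, and a production optimizer would handle far more constraints.

```python
# Toy schedule matching: assign caregivers to vacant visits by minimizing a
# cost that penalizes missing skills, broken continuity, and travel time.
import numpy as np
from scipy.optimize import linear_sum_assignment

caregivers = ["cg_1", "cg_2", "cg_3"]
visits = ["visit_a", "visit_b", "visit_c"]

# cost[i][j]: cost of sending caregiver i to visit j.
# Here: travel minutes, plus 100 if a required skill is missing,
# minus 20 if they've seen this client before (continuity of care).
cost = np.array([
    [15 - 20, 40, 25 + 100],
    [30, 10 - 20, 35],
    [50, 20, 12],
])

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"{caregivers[i]} -> {visits[j]} (cost {cost[i, j]})")
```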

We've seen providers say this is something that used to take them the entire morning, or even into the afternoon, trying to find the right person. With the press of a button, everything is done in about 10 minutes. Sometimes you put it straight onto the schedule; sometimes you have to do a shift offer, depending on how you've organized things with the care workers.

This has eliminated tons and tons of repetitive tasks. Now, it only works if the availability is up to date, if we actually know when people are available. It only works if the skills are up to date, and if the care worker's and the client's home addresses are up to date. If any of those are wrong, it's going to give erroneous answers, and then the scheduler is not going to trust it. You get very, very little margin before the user stops trusting these algorithms.

They're already mistrustful; they think these are black boxes. You want to make them as explainable as possible: these are the reasons it came up with this match, and these are the actions. You have to design all of that workflow. This is something we've seen change dramatically, taking something that would take a morning or an entire day down to a few minutes, which is pretty exciting.

There are other parts of healthcare that seem further along in their data journeys, so to speak. When we think about home care, how sophisticated is the industry at this point in your view?

Let's compare it to the hospital system and the rest of healthcare. Back in 2016, Geoffrey Hinton, one of the godfathers of deep learning, said that nobody should study radiology, that we weren't going to need any radiologists in five years, that the entire profession was going to be wiped out because AI could read X-rays much better and much more accurately than any human.

We're now in 2023, more than five years have gone by, and radiologists are still needed. Today we're seeing hospitals start to use this technology not necessarily to reduce the number of radiologists, but to reduce their workload. Instead of going through 500 scans, they get: here are the 10 that are potential tumors.

These are starting to be not just point solutions, but systemic within the healthcare system. I don't think we're there yet in home care. In home care, it's still a lot of playing around and point solutions. I think over the next few years these applications are going to become systemic, part of the regular workflow, because we need to get more efficient.

I want to get to some of the other emerging technologies you're really excited about, but first, to tie a bow on data and AI specifically: what have been some of the macro trends advancing the use of AI and data strategies in home care? For me, for example, value-based care seems like something you can't do well if you don't have a strong data strategy.

That's one of the first things, because if you're all of a sudden being measured on very specific criteria, you're asking, "How do I even know which levers to pull? How do I know what's going to move the needle on which metrics?" That's definitely moving the needle. CMS with the TPS, the Total Performance Scores, has very specific scores for very specific things. What do I have to do to get a better score here and a better score there so I can rank better and hopefully get better reimbursements? That has been a real forcing function to get more data-driven. I would also say there's always been a problem with churn, but with the labor shortage, necessity is the mother of invention.

Because we have this labor shortage, that has also been a forcing function to be more efficient and more optimized, to use these precious resources in a more optimal fashion and to make sure they're happy. I think that has really pushed the need to improve the technology.

I know we’ve talked in the past too, just about the shrinking margins that a lot of agencies are facing. You need the data component, the AI component, potentially the automation component, just to do more with less.

You have to eliminate as many repetitive tasks as possible. I joined this industry about four and a half years ago, when I was meeting a lot of people who were still delighted to have gone from a paper-based to a digital environment, and now we're going from digital to truly data-driven. It's a short history of diving into this, and starting to use AI models on top of that is quite an accelerated path.

Now I feel like I'm talking to data teams. Providers are hiring data scientists and data engineers and building sophisticated data stacks, because they're saying, "We've got to be more efficient, and we've got to find those efficiencies." I'm really seeing a big change in the industry toward being more data-driven and leveraging technology at every step of the way.

At the start of this conversation, you mentioned how you’re excited about sensors and wearables too, because they’re less gimmicky. What’d you mean by that?

I remember pre-COVID, when people used to come to your office, knock on the door and show you stuff. People would come with all kinds of crazy stuff: here's a laser you can point at your grandmother's forehead and get all of the vitals; here's another thing you can stick under her shoe and get all kinds of data. Some of them work some of the time, but if a sensor has to be in the shoe to tell whether someone is at risk of falling, somebody's got to be there when the shoe doesn't fit properly or the sensor slips out. It's not as practical.

I find the technology has gotten more sophisticated and less invasive: sensors in the home as opposed to cameras in every room, which you can't do from a privacy perspective. You don't want cameras running all the time. The other thing is that we're all getting used to it.

You start to integrate that with how it can also care for the people in the home: here's a reminder, did you take this medicine? With computer vision, you can see whether they took the right medicine: I don't think that's the right one to take at this time. You can imagine all these scenarios, and all of that technology is quite mature. It's ready to be used. It just has to be designed in a way that's seamless with how somebody lives.

I think that's pretty exciting: the voice, the computer vision, the sensors. And the next older generation is more used to technology; they know how to use Facebook and all that. It's not going to be as crazy a leap for them to start trusting and living with that technology as it is for the generation receiving care right now.

Looking ahead, what tips and strategies could you provide to help home-based care agencies prepare for the future using a crawl, walk, run approach based on their current level of data utilization?

Start at the beginning: where are you today? You need an inventory of your data stack and the collection processes you have right now. Really identify the main questions. What do I want the data to tell me? Come up with those hypotheses. What are the problem areas? Maybe we've seen a reduction in profitability in this particular area, or we have too many people, too many schedulers. What do we do about that? How do we reduce it?

You have to really pinpoint everything. Where is it very inefficient? Let's see if we can pinpoint how to solve that. Find out what the problems are first, and what data is going to answer them. Then you can start getting sophisticated about putting processes in place and leveraging this sophisticated technology.

To learn more about how AlayaCare can help your organization ensure operations are consistent across multiple locations with real-time information updates for key stakeholders, visit https://www.alayacare.com/.
