April 12, 2024

Q&A: LG on developing its consumer health data collection offering

It’s not often that consumers think of their refrigerator or television as a tool that provides personalized health data to their healthcare providers, but LG NOVA is considering the possibility of consumer electronics becoming data collectors for preventive care measures.

Atul Singh, managing director of digital health at LG NOVA, sat down with MobiHealthNews to discuss how LG Electronics’ North American Innovation Center is working to improve the experience of healthcare providers and patients in the clinical setting, and how it can evolve its consumer electronics products to improve health outcomes.

MobiHealthNews: How does LG work in digital healthcare?

Atul Singh: LG has been active in healthcare for decades, but mainly in the field of displays, TV monitors and radiology equipment in hospitals. So basically we sell hardware to hospitals.

What we’re doing differently now is we’re helping hospitals maximize their investments in these devices that they’ve purchased over the years to get even more value out of them.

The services we offer are basically virtual healthcare-related services. These are telehealth services. Imagine virtual nursing, where a remote nurse can collaborate with a bedside nurse or floor nurse to help them with a variety of tasks. And these tasks can be as simple as signing off on medications, which may require a double signature, certain discharge elements, or even nursing training. A senior nurse can remotely train junior nurses who are at the bedside for various tasks.

The other use cases are patient monitoring. So, [in the Smart Cam Pro] device, there is a camera, a number of sensors and an infrared camera. With this device, a nurse can remotely monitor multiple patient rooms. Today they can monitor up to sixteen rooms, but that number could easily grow. So they can monitor sixteen patients from an external location and, in principle, talk to them if necessary. Otherwise they just passively monitor for activity.

It’s a two-way street in the sense that we have AI capabilities built into the device. So the device does the monitoring, because, as you can imagine, having a remote nurse watching 16 patients at the same time, 24/7, is very tiring; it causes screen fatigue, and they might not be paying attention.

Typically, they can set the parameters they want to monitor for each patient, and the system then monitors for those.
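The setup Singh describes, where a nurse configures per-patient parameters and the system watches incoming readings against them, could be sketched roughly as follows. All names, vitals and thresholds here are illustrative assumptions, not LG's actual system or API:

```python
# Hypothetical sketch of per-patient monitoring parameters.
# Names and thresholds are illustrative, not LG's actual system.

def check_vitals(params: dict, reading: dict) -> list:
    """Return alerts for any readings outside the nurse-set ranges."""
    alerts = []
    for vital, (low, high) in params.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# A nurse configures what to watch for one patient...
patient_params = {"heart_rate": (50, 110), "spo2": (92, 100)}

# ...and the system evaluates each incoming reading against it.
print(check_vitals(patient_params, {"heart_rate": 118, "spo2": 96}))
```

In a real deployment the readings would stream from the room's sensors and the alerts would route to the remote nurse's console rather than print to a terminal.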

MHN: Is there a way to generate notes for a doctor?

Singh: We are now introducing that option: ambient listening. The device has four microphones on top, so it is listening to the conversation that is actively going on, whether it is between the nurse and the patient or between the doctor and the patient. What we do is catalog the entire conversation and then summarize its key outcomes so that they can be included in the patient record.

We haven’t deployed it yet. We are testing it carefully, because it is a clinical conversation, so some of the words the doctor or nurse uses may be clinical in nature or medical terminology. We don’t want the AI engine to misrepresent things. So a lot of testing has to take place in that space.

This is where we start, but our ultimate vision is to follow the patient home. So at home, the customer or consumer knows us through their interaction with our devices and appliances: the TV, the refrigerator, the washer and dryer, and so on.

Then we want to extend care beyond the hospital once patients are discharged home, and we want to enable the devices they have already invested in to provide healthcare services.

We currently have approximately 500 to 700 devices on the consumer market, and a large majority of them already have built-in intelligent sensors capable of collecting and analyzing information about user behavior.

So: how often they use the device, when they use it, basically general usage patterns, as well as the device itself monitoring its own lifespan, so that if something goes wrong we can alert the customer and proactively engage before the device breaks down.

We have a lot more data on how the individual uses the device too: what time of day, how often and so on.

For example, how often do you walk in front of your refrigerator? The refrigerator can see that, and if it has identified a pattern of movement in front of it between 6:00 and 8:00 every day, a few times, then that is normal behavior. Then, if we notice that there has been no movement, or that the movement now starts at nine o’clock and lasts only ten minutes, over time we can start combining that data with other data sets to see whether something medical poses a challenge for this individual: instead of six to eight, they shifted their window.

Or they stopped walking in front of the refrigerator altogether. Has the location of the refrigerator changed, or is there a medical issue that now prevents them from coming to the kitchen to perform their regular duties? But that’s a very loose data point. We cannot draw any conclusions from that.
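The kind of pattern shift Singh sketches, a usual 6:00-to-8:00 activity window drifting later or disappearing, could be detected from motion-event timestamps with something as simple as comparing average hours of day. This is only an illustrative sketch under assumed names and thresholds, not LG's actual analytics:

```python
# Hypothetical sketch: flag a shift in someone's usual kitchen-activity
# window from motion-sensor event hours. Threshold is illustrative only.
from statistics import mean

def activity_shift(baseline_hours, recent_hours, tolerance=1.5):
    """Compare the average hour-of-day of recent motion events against a
    learned baseline; return True if the window moved or activity stopped."""
    if not recent_hours:  # no movement at all is itself a signal
        return True
    return abs(mean(recent_hours) - mean(baseline_hours)) > tolerance

# Baseline: movement in front of the fridge around 6:00-8:00 each morning.
baseline = [6.5, 7.0, 7.5, 6.0, 8.0]
# Recent days: activity now starts around nine o'clock.
recent = [9.0, 9.25, 9.5]

print(activity_shift(baseline, recent))  # the window has shifted
```

As the article stresses, such a flag on its own is a loose data point; it would only become meaningful when correlated with other device signals and clinical data.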

But if we combine that with other data sets, such as how often the washing machine, the air purifier or the TV is used? And we generally know the location of these devices from the customer’s location, the zip code.

Then we’ll look at social determinants of health data and ultimately connect it to their providers’ clinical data to see: is there a change in the pattern? And if so, can we do something with these devices, with the smart TVs that they have, to alert the patient that you might want to do this, or that your doctor might want you to try something else. Or just a simple warning: your medication will run out in three days. Do you want to refill?

So there are a lot of simple data points that we have now, but in total they can add intelligence to the interaction with the individual.

MHN: How can these consumer electronics evolve into health-related services?

Singh: Ultimately, you can imagine, in 10 to 15 years or whatever the time horizon, being able to do predictive analytics. So if you see decreased use of certain things, or a different time frame, or whatever, predictions can be made from that: a medical episode may occur, and can it be stopped or addressed in advance? But that’s far off. Right now, in the hospital, we are learning, adapting and improving the quality of care there, and then moving to post-acute care, to long-term care and finally home.

Technology has some catching up to do. The regulatory framework needs to catch up. Payment models need to catch up, but everyone is moving in that direction.
