Natural Language Processing (NLP) in Healthcare Tech

“I think that this new AI era we’re in is a little bit like the [commercial expansion of the] Internet, and it takes some time to get the technology stack built that leverages this new technology, but once it’s built, it really does change everything.”

Kwindla Hultman Kramer, CEO and Co-founder of Daily

To say that NLP has been a hot topic of 2023 would be an understatement. When OpenAI released ChatGPT—the first generative AI chatbot to reach a mass audience—it launched a paradigm shift in the tech world. Companies like Google, Amazon, and Meta were sent scrambling to the drawing board to conjure up their own versions.

Natural language processing (NLP) is the core technology behind generative AI. It uses machine learning to analyze text and speech data, enabling computers to process, understand, and generate human language.

NLP is embedded in many of the digital tools we use daily. Examples include virtual assistants like Siri and Alexa, translation tools, the autocorrect feature in iOS, and search engines like Google, which predict what you want to look up before you finish typing your query.

Now, software developers and health-tech companies are introducing their own applications of NLP technology to solve age-old problems in healthcare.

According to a 2022 report, the global market for NLP in healthcare and life sciences is projected to reach $7.2 billion by the end of 2027, representing a compound annual growth rate of 27 percent.

“There’s a lot of interest in [large language models] because they’re just massively more capable than previous NLP and AI tools were at taking big chunks of language … and doing something useful and predictable with it in a very open-ended and broad range of use cases,” says Kwindla Hultman Kramer, subject matter expert in AI, machine learning, and large language models and co-founder and CEO of Daily.

Many potential benefits stand to revolutionize the healthcare sector as we know it. Still, industry leaders and government agencies must contend with a few key obstacles before these changes take full effect.

Meet the Expert

Kwindla Hultman Kramer

Kwindla Hultman Kramer is the CEO and co-founder of Daily, a video infrastructure software company. Kwin’s background lies in programming, hardware design, and developing large-scale systems architectures.

Before Daily, he served as the founding CTO of AllAfrica.com, one of the web’s largest content aggregators and the only independent, comprehensive pan-African news source.

Ways that NLP Can Impact Healthcare

When it comes to healthcare, timing is everything—and unfortunately, time is also one of the most limited and crucial resources. No matter how advanced our understanding of medicine becomes, its potential is limited by our healthcare system’s capacity to serve people quickly.

For example, cancer treatment delay is a major problem in health systems worldwide. One 2020 study found that even a four-week delay of cancer treatment is associated with increased mortality across surgical, systemic treatment, and radiotherapy indications for seven cancers.

With people living longer, population growth, and a shortage of healthcare workers, time will only become more limited in the coming decades. According to the World Health Organization, 55 countries are experiencing healthcare worker shortages as of 2023.

The most significant way that generative AI benefits healthcare operations is by freeing up healthcare workers’ time. Here are some of the major ways NLP is poised to accomplish this goal.

Optimizing Intake

We’ve all gone through the patient intake process during initial appointments at new clinics, doctors’ offices, or hospitals. It used to be done with paper forms that gathered baseline data, such as your age, sex, medical conditions, and history.

In recent years, most providers have digitized this process. However, it’s still a time suck for clinic administrators, patients, and even doctors themselves, who often have to waste limited time at the beginning of appointments asking patients basic questions.

NLP’s ability to understand language and create responses can help here. Various software companies, such as Kahun, Talkdesk, and Phreesia, offer AI chatbots that take note of patients’ symptoms, provide basic information, and suggest next steps, such as scheduling doctor’s appointments. They can also send reminders to patients, reducing the time spent by administrative staff.

The idea is not to replace physician-patient interactions with AI, but rather to complete the basic data collection process outside of the clinic, saving the scarce time that doctors have with their patients for more meaningful conversations.

Studies have shown that better physician-patient communication is linked empirically to care outcomes, including patients’ satisfaction, health status, recall of information, and adherence.

For years, chatbots have been unpopular due to their limitations. However, NLP-enabled chatbots represent a marked improvement over previous technology.

“In the past, basic chatbots were powered by a variety of NLP or keyword-matching technologies. The ability of these chatbots to respond flexibly was limited,” says Hultman Kramer.

“The new technology that changes this is the large language model [LLM]. This is what powers ChatGPT. LLMs make it possible for chatbots to respond flexibly, to perform a variety of customer service tasks, and to have long-running conversations with users.”

NLP chatbots’ ability to learn from user interactions also means that the user experience will continuously improve.
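To make that difference concrete, here is a minimal sketch of an LLM-backed intake chatbot loop in Python. It is illustrative only: the system prompt, the call_llm function, and its canned reply are hypothetical placeholders rather than any vendor’s actual product or API. The key change from older keyword-matching bots is that the model sees the full running conversation on every turn, which is what lets it respond flexibly and sustain a long-running exchange.

```python
# A minimal sketch of an LLM-backed intake chatbot loop (hypothetical; not any
# vendor's product). `call_llm` stands in for whichever chat-completion API a
# clinic's stack actually uses.

SYSTEM_PROMPT = (
    "You are a patient-intake assistant. Ask one question at a time to collect "
    "symptoms, their duration, current medications, and allergies. Do not give "
    "medical advice; suggest scheduling an appointment when intake is complete."
)

def call_llm(messages: list[dict]) -> str:
    """Placeholder for a hosted LLM call. Returns a canned reply so the sketch
    runs without credentials; a real version would send `messages` to a model."""
    return "Thanks. How long have you been experiencing these symptoms?"

def intake_session() -> list[dict]:
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user_text = input("Patient (type 'done' to finish): ").strip()
        if user_text.lower() == "done":
            break
        messages.append({"role": "user", "content": user_text})
        reply = call_llm(messages)  # the model sees the full running history
        messages.append({"role": "assistant", "content": reply})
        print(f"Assistant: {reply}")
    return messages  # the transcript can then be handed off to clinic staff

if __name__ == "__main__":
    intake_session()
```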

Automating Notetaking

NLP software is already being used in contexts like sales to generate summaries of client calls and keep track of interactions. So, health-tech startups figured, why not apply the same concept to healthcare?

With the increasing popularity of telehealth, propelled by the pandemic, generative AI startups seized the opportunity to leverage NLP’s ability to understand dialogue and rapidly summarize large amounts of text.

With NLP-based telehealth software, healthcare organizations can get quick transcriptions of full patient conversations.

“All of the conversations between a patient and a provider are already being captured by microphone in a telehealth context,” Hultman Kramer explains.

Generative AI takes transcription a step further by formatting the raw text. Widely used notetaking frameworks like SOAP (Subjective, Objective, Assessment, Plan) and DAP (Data, Assessment, Plan) help providers keep track of specific tasks and organize information for clinical reasoning.
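As a rough illustration of that formatting step (not Daily’s implementation), the Python sketch below prompts a language model to turn a raw visit transcript into a SOAP-structured draft. The call_llm function is a placeholder that returns a canned note so the example runs without credentials; a real deployment would go through a HIPAA-compliant, BAA-covered endpoint and keep a clinician in the loop to review the draft.

```python
# A minimal sketch of drafting a SOAP note from a telehealth transcript.
# Illustrative only; `call_llm` is a placeholder for a real LLM API.

SOAP_PROMPT = """You are a clinical documentation assistant.
Summarize the visit transcript below into a SOAP note with four sections:
Subjective, Objective, Assessment, Plan. Flag anything you are unsure about.

Transcript:
{transcript}
"""

def call_llm(prompt: str) -> str:
    """Placeholder; returns a canned note so the sketch runs end to end."""
    return (
        "Subjective: Patient reports three days of sore throat and fatigue.\n"
        "Objective: Afebrile per home thermometer; no exam performed (telehealth).\n"
        "Assessment: Likely viral pharyngitis.\n"
        "Plan: Supportive care; follow up if symptoms persist beyond a week."
    )

def draft_soap_note(transcript: str) -> str:
    return call_llm(SOAP_PROMPT.format(transcript=transcript))

if __name__ == "__main__":
    demo = (
        "Doctor: What brings you in today?\n"
        "Patient: My throat has been sore for three days and I'm exhausted."
    )
    # The output is a draft; a clinician reviews and edits it before signing.
    print(draft_soap_note(demo))
```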

Products like Daily’s, which is HIPAA-compliant, can summarize these transcriptions into neatly formatted reports, saving doctors, nurses, and administrators the time and trouble of doing this task manually.

“It has the potential to really save a huge amount of documentation workload,” Hultman Kramer says.

And according to Hultman Kramer, many companies are already prioritizing the development of a telehealth technology stack, creating an opportunity “to test and deploy these new technologies efficiently as part of ongoing work that’s already being prioritized.”

Electronic Health Records

An electronic health record (EHR) is a digital version of a patient’s paper chart, which stores individuals’ progress notes, problems, medications, vital signs, past medical history, immunizations, radiology reports, etc. EHRs are also often integrated with billing, as insurance companies must translate medical information into codes.

While EHRs are a huge upgrade over old-school paper-based systems, the tedious nature of EHR tasks adds up to a significant chunk of clinicians’ time.

The average time physicians spend using an EHR is 16 minutes per patient, with 11 percent of that time occurring after hours. Daily estimates that physicians spend about 20 hours per week working in EHRs.

EHR tasks contribute to the mental load on healthcare workers who are already often overworked. A 2017 study that surveyed primary care residents and teaching physicians found that 85 percent of respondents said the time they spent working in an EHR affected their work-life balance. Respondents who spent more than six hours weekly after hours in EHR work were three times as likely to report burnout and four times as likely to attribute burnout to the EHR.

This is detrimental not only to healthcare workers themselves but also to patients. Studies exploring whether burnout undermines the quality of care and contributes to medical error have found a concerning connection.

NLP software can improve this bottleneck in healthcare operations, too. Rather than having a human perform data entry and coding, generative AI can plug data from transcripts into EHR systems.

For instance, Daily partners with a company called ScienceIO, which builds customized large language models specifically for healthcare tasks like coding.

“It’s really amazing how they’re able to extract the medical terminology and the categorization from just a transcript of a doctor and a patient talking for half an hour,” Hultman Kramer says. “And those categorizations and those keywords are incredibly accurate.”

With this integration, providers only have to review the work done by NLP software, vastly reducing the time spent working in EHRs and opening up more time for providers to engage in their primary duties.
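As a rough sketch of what this kind of structured extraction can look like (not ScienceIO’s or Daily’s actual pipeline), the Python example below asks a model for a small JSON payload and flags it for clinician review before anything is written to the EHR. The prompt, the call_llm placeholder, and the example ICD-10 code are all illustrative assumptions.

```python
# A minimal sketch of extracting structured fields from a visit transcript for
# EHR entry. Illustrative only; `call_llm` is a placeholder, and the canned
# billing code is an example, not coding advice.

import json

EXTRACTION_PROMPT = """Extract the following from the visit transcript as JSON:
{{"symptoms": [...], "medications": [...], "suggested_icd10_codes": [...]}}
Only include items explicitly mentioned.

Transcript:
{transcript}
"""

def call_llm(prompt: str) -> str:
    """Placeholder; returns canned JSON so the sketch runs without a model."""
    return json.dumps({
        "symptoms": ["sore throat", "fatigue"],
        "medications": ["ibuprofen"],
        "suggested_icd10_codes": ["J02.9"],  # example code; a human coder verifies
    })

def extract_for_ehr(transcript: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(transcript=transcript))
    fields = json.loads(raw)  # validate and parse before any EHR write
    fields["needs_clinician_review"] = True  # nothing is filed without sign-off
    return fields

if __name__ == "__main__":
    print(extract_for_ehr("Patient: sore throat for three days, taking ibuprofen."))
```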

The Challenges of NLP

While incorporating NLP into healthcare processes helps minimize time spent on tedious, automatable tasks, there are some major challenges with integrating NLP into current systems.

The In-Person Element

To benefit from NLP’s ability to understand language and generate useful output, it first needs input. In telehealth, that input is provided via microphones and video cameras. Without these hardware elements, a generative AI model has nothing to work with.

While telehealth is a boon to healthcare access, in-person care is irreplaceable. This means that health-tech companies need to address hardware concerns in order for promising applications, such as transcription, automated documentation, and EHR tools, to become a fully integrated part of clinical operations.

“In an in-person situation, you have to first figure out how to get a microphone into the room, how to capture the audio and how to do that in a data private and compliant way, which is definitely possible, but there’s some friction in utilizing new technologies in in-person situations that aren’t already set up for it,” says Hultman Kramer.

As Hultman Kramer suggests, putting microphones in exam rooms not only represents a substantial investment on the part of providers but also raises an even trickier issue: data privacy.

Data Privacy

One of the biggest challenges with using generative AI in healthcare is maintaining data privacy. When you ask an LLM like ChatGPT a question, it draws its answer from a wide range of sources, including blog posts, tweets, articles, topic-specific datasets, and other public digital material.

In healthcare, NLP models work similarly but are designed for a specific purpose, such as generating standardized medical documents or helping to identify a patient’s undiagnosed medical condition.

To do these tasks well, the model needs access to real examples—and a lot of them. The accuracy and privacy of the data are both crucial.

“You want to use real-world data to train these models … Once you’re using the models, you can often create synthetic data, but to the extent that you can use real-world data, it’s helpful,” says Hultman Kramer.

In the context of healthcare, these datasets would ideally be derived from patients, bringing up some big questions about data privacy and government compliance for health-tech companies.

“Where does that data come from? How do you protect the privacy in the original sources of the data? Who owns it or has the right to use it?” Hultman Kramer asks.

As Hultman Kramer mentions, HIPAA safeguards patients’ information and sets boundaries on the use and release of health records, “but it’s not 100 percent clear [how it would apply to generative AI], so we’re going to need some clarifying regulations around training the models.”
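One partial, commonly used safeguard is to strip obvious identifiers from records before they are used for model development. The Python sketch below is a simplified illustration with a few assumed regex rules; it is not a complete HIPAA de-identification method, which in practice relies on vetted tooling, broader identifier coverage, and expert review.

```python
# A simplified sketch of redacting a few obvious identifiers from a clinical
# note before it is used for model development. Illustrative only; real
# de-identification covers many more identifier types (including names).

import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = "Seen on 04/12/2023. Call 503-555-0142 or email pt@example.com."
    print(redact(note))  # -> "Seen on [DATE]. Call [PHONE] or email [EMAIL]."
```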

Public Trust

Fear and skepticism about AI are not new. Writers have been imagining the potential drawbacks of intelligent machines for decades (e.g., Jack Williamson’s novel The Humanoids, 2001: A Space Odyssey, The Matrix, and countless other examples).

While the real-life manifestations of AI haven’t turned out to be as dramatic as these fictional works imagine, the fear is still very real and enduring. You might have felt a twinge of fear skimming through this article.

Public trust in AI is generally low, and it has fallen even further in the last year alone. According to a 2023 poll conducted by MITRE, only 39 percent of U.S. adults believe AI is safe and secure, and 78 percent worry that it can be used for malicious intent.

The top concerns are that AI will replace people in the workforce and that it will enable cyberattacks, identity theft, and deepfake photos and videos.

However, every major innovation comes with public unease, as Australian roboticist and former director of the MIT Computer Science and Artificial Intelligence Laboratory Rodney Brooks explained in a 2017 article for the MIT Technology Review.

According to Brooks, a similar pattern of public distrust of new technologies has been observable over the last 30 years: “A big promise up front, disappointment, and then slowly growing confidence in results that exceed the original expectations,” he wrote. “This is true of computation, genome sequencing, solar power, wind power, and even home delivery of groceries.”

Brooks warns against “hysteria about the future of artificial intelligence and robotics,” but while hysteria may be unproductive and inhibiting, dismissing a healthy hesitancy about AI altogether is a mistake—at least according to senior federal officials in the U.S. and Europe.

In July of 2023, the Biden-Harris Administration met with leading AI companies at the White House—among them Amazon, Google, Meta, Microsoft, and OpenAI—and secured voluntary commitments to help move toward safe, secure, and transparent development of AI technology.

In April 2023, Italy temporarily banned ChatGPT to buy time to explore ethical and regulatory concerns, and the European Data Protection Board launched a dedicated ChatGPT task force the same month.

While HIPAA provides some guidance for health tech industry players, new regulations will likely emerge related to NLP specifically. As with other innovations, new legislation usually follows after the technology is introduced.

For example, when EHRs became popular in the early 2000s, their spread eventually prompted the information-blocking provisions of the 21st Century Cures Act.

Addressing public hesitancy starts with creating regulations and guard rails, which industry leaders like Hultman Kramer welcome.

“I think of healthcare as conservative in some very good ways, [such as] around technology. We really do test technology heavily and check all the boxes with data privacy before we deploy new technology in healthcare, and that’s all good. So that breeds a certain kind of step-by-step approach to technology and healthcare.”

At present, the frontier of NLP is still wide open. There are more questions than answers about how the technology will be integrated and what the process will look like. But one thing is clear: it’s not going anywhere.

“I think that this new AI era we’re in is a little bit like the [commercial expansion of the] Internet, and it takes some time to get the technology stack built that leverages this new technology, but once it’s built, it really does change everything,” says Hultman Kramer.

“It’s kind of hard to imagine anything in healthcare today without the internet helping to move data around and ease communication. I think these new AI tools are going to feel the same way over time. It’ll be hard to imagine a healthcare visit without lots of different kinds of small but really helpful AI supporting functions.”

Nina Chamlou, Writer

Nina Chamlou is an avid freelance writer from Portland, OR. She writes about economic trends, business, technology, digitization, supply chain, healthcare, education, aviation, and travel. You can find her floating around the Pacific Northwest in diners and coffee shops, or traveling abroad, studying the locale from behind her MacBook. Visit her website at www.ninachamlou.com.