What Does Generative AI Mean For Health Care? We Asked The Experts

Healthcare organizations are rushing to integrate generative AI tools into their IT systems and product plans as the technology proves it can do many things faster, cheaper, and sometimes better than humans.

But the rush to harness the power of so-called large language models, trained on massive amounts of data, is outpacing the effort to evaluate them. AI experts are still trying to understand and explain how and why these models work better than previous systems, and what blind spots might undermine their usefulness in medicine.

For example, it is unclear how well these models will work, and what ethical and privacy issues will arise, when they are exposed to new types of data such as genetic sequences, CT scans, and electronic health records. Even how much information should be fed into a model for optimal performance on a given task remains an open question.

"We don't have a satisfactory mathematical or theoretical explanation for why these patterns are so large," said Zachary Lipton, a professor of computer science at Carnegie Mellon University. "Why does it get better as we scale from millions of parameters to half a trillion parameters? These are very open technical questions.

STAT reporters put these questions to AI experts to help explain the history and logic behind large language models and other forms of generative AI. The accuracy of these systems' answers depends to a large extent on the data used to train them, so STAT asked the experts to clear up a number of misconceptions swirling around these systems as healthcare organizations try to put them to work on new missions. Here's what you need to know before betting a patient's health, or a bottom line, on a first impression of ChatGPT.

What generative AI models actually do when they generate answers

In short, they do math.

In particular, they perform the same kind of autocomplete that has been built into our email and machine translation tools for years.

"AI recognizes and repeats patterns," University of Michigan computer scientists Gina Wiens and Trenton Chang said in response to a STAT question. Many generative text models are essentially based on predicting the next time each word is likely to appear, using probabilities to determine how "reasonable" an answer is.

"It's like playing Mad Libs or crossword puzzles: you look up certain words, and the trick is to statistically select the words you want," Heather Lane, senior data science team engineer at Athena Health, tells STAT. can be matched with But it doesn't fit." A true understanding of "what he does." AI models draw on large volumes of data (including Wikipedia, Reddit, books, and the rest of the web) to figure out what's "statistically likely" and learn what "looks good" from a set of human comments on your answer.

That is far removed from the way humans think, and it is less efficient and more limited than the reasoning our brains use to process information and solve problems. If you think large language models are approaching artificial general intelligence, you are misinformed.

Why they are so much better than previous versions of generative AI

Partly it is because they are trained on more data than previous versions of generative AI, but several factors have combined in recent years to create today's powerful models.

"When you're talking about starting a fire, you need oxygen, fuel and heat," Elliott Bolton, a research engineer who works on generative artificial intelligence at Stanford University, told STAT in an email. Similarly, in recent years, the development of a technology called "Transformer" (the "T" in GPT), combined with massive computing power and massive models trained on large volumes of data, has led to the impressive results we see today. .

"People forget that just 12 years ago, if someone taught [artificial intelligence] on Wikipedia, it was a big study," Lipton said. "But now when people train a language model, they train it on all the text on the web."

Because models like OpenAI's GPT-4 and Google's PaLM 2 are trained on such large amounts of data, they can more easily recognize and reproduce patterns. Even so, their fluency in producing complex outputs such as songs, snippets of computer code, and essays on late-nineteenth-century Impressionism has surprised AI researchers, who did not expect such a leap.

"It turns out that these larger models with lots of computing resources, trained with lots and lots of data, have these enormous capabilities," Lipton said. Forms can be updated with new or different data models and integrated into existing products such as Microsoft's Bing search engine.

They may look smart, but they are far from it.

Lane said that while language models pick up language somewhat the way a young child does, they require far more training data than a child ever sees. They are also poor at spatial reasoning and mathematical tasks, because their language skills are not rooted in an understanding of the world or of cause and effect.

"It's very easy to make models look stupid," adds Lipton Carnegie Mellon. After all, they are word processing engines. They don't know that there is a world where text makes sense.

But as more people start using them, he said, there are many unknowns about how they will affect human intelligence, especially as people get used to relying on them for tasks they once did themselves, such as writing or summarizing information.

"My biggest fear," he said, "is that they will somehow push us back so that we stop being as creative as we are."

There are ways to fix ChatGPT's tendency to make things up

Because these generative AI models only predict plausible-sounding text, they have no basis for understanding what is true and what is false.

"He doesn't know he's lying to you because he doesn't know the difference between the truth and a lie," Lin said. "It's not unlike being in a relationship with a guy who seems very attractive and convincing, but his words have nothing to do with reality."

That's why it's important to ask a few simple questions before using a model for a specific job: Who made it? Was it trained on data that is relevant and reliable for its intended use? What bias or misinformation might creep in if it draws on questionable sources?

This is an especially important practice in healthcare, where misinformation can have serious consequences. "I don't want my doctor to be an intern trained on Reddit," said Nigam Shah, a professor of biomedical informatics at Stanford.

That does not mean it is impossible to improve the accuracy of models whose training data may contain biased or false information. Developers of generative AI systems can use a technique known as reinforcement learning from human feedback, in which human reviewers rate the model's answers so that it learns to prefer the responses experts judge most accurate and useful.
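The feedback loop described above can be sketched in miniature. The toy code below is an assumption-laden illustration, not any vendor's actual training pipeline: it fits a tiny linear "reward model" so that answers human reviewers preferred score higher than the ones they rejected, which is the kind of signal a reinforcement-learning step can then optimize against. The feature vectors and their meanings are invented for the example.

```python
# Minimal sketch of learning from human preference feedback (toy setup,
# not a real system). Each candidate answer is reduced to a small feature
# vector; reviewers say which of two answers they preferred; a linear
# reward model is fit so preferred answers score higher.
import numpy as np

# Hypothetical features per answer: [cites a source, stays on topic, contains a made-up fact]
preferred = np.array([[1.0, 1.0, 0.0],
                      [1.0, 0.8, 0.1],
                      [0.9, 1.0, 0.0]])
rejected  = np.array([[0.0, 0.6, 1.0],
                      [0.2, 0.5, 0.9],
                      [0.1, 0.4, 1.0]])

w = np.zeros(3)   # reward model weights
lr = 0.5          # learning rate

for _ in range(200):
    # Bradley-Terry style objective: P(preferred beats rejected)
    # = sigmoid(score_preferred - score_rejected).
    margin = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    # Gradient ascent on the log-likelihood of the human choices.
    grad = ((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w += lr * grad

def reward(features):
    """Score an answer; higher means 'more like what reviewers preferred'."""
    return float(np.asarray(features) @ w)

print(reward([1.0, 1.0, 0.0]))  # sourced, on-topic answer scores high
print(reward([0.0, 0.5, 1.0]))  # fabricated answer scores low
```

In full-scale systems the reward model is itself a large neural network, and its scores are used to fine-tune the language model, but the underlying idea is the same: human judgment defines what counts as a good answer.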

This technique was used to build GPT-4, though the model's developer, OpenAI, has not disclosed what data was used to create it. Google has developed a large language model called Med-PaLM 2 that is tuned on medical data to make it more suitable for healthcare uses.

"Hallucinations may decrease as generative AI models advance," said Ron Kim, Merck's senior vice president of computer engineering.

There probably won't be an apocalypse, but guardrails are needed

The hype surrounding ChatGPT has sparked new concerns about AI stealing people's jobs or making humans somehow obsolete.

But many researchers in the field take a more optimistic tone about the technology and its potential in healthcare. Doomsday scenarios are, broadly speaking, "extremely unlikely," they argue, and speculative fears are no reason to rule out the possibility of AI democratizing access to high-quality medical care, helping develop better drugs, and easing the pressure on overstretched doctors.

"In healthcare today, patients are dying not because of AI, but because of a lack of AI.

Although there are many examples of algorithms being used inappropriately in healthcare, experts hope that GPT-style tools can be used responsibly with proper safeguards in place. There are no established rules for generative AI yet, but there is a growing push to create them.

"At least at this point, we need to list the use cases ... where it would be reasonable and lower-risk to use generative AI for a particular purpose," said John Halamka, president of Mayo Clinic Platform. He helps lead the Coalition for Health AI, which is debating what safeguards might be appropriate. While GPT-based tools could help non-English speakers write a letter about an insurance denial or scan a scientific paper, other use cases should be off-limits, he said.

"Things like asking [generative AI] to do a clinical summary or provide diagnostic support to a physician are not uses we would choose today," he said.

But as the technology improves and becomes capable of such tasks, humans will have to decide whether overreliance on AI will erode their ability to think through problems and write their own answers.

"If it turns out, what we really have to worry about is what smart people are saying," Lipton said. "And don't let GPT-4 complete something that someone has effectively said in the past?"

This story is part of a series examining the use of artificial intelligence in healthcare and practices for exchanging and analyzing patient data. It is supported with funding from the Gordon and Betty Moore Foundation.

