The next time you go in for a check-up with your doctor, there may be someone else in the room. Sort of. More doctors are turning to artificial intelligence to record conversations with patients and take notes. That's causing concern among some patients and experts about privacy and accuracy. Reporter Michelle Crouch joins Marshall Terry to talk more about her new three-part series on the increased use of AI in health care. It was published by NC Health News and the Charlotte Ledger Business newsletter.
Marshall Terry: Let's start with why doctors want to use AI to record patient conversations.
Michelle Crouch: Well, for doctors this can be a big time saver. You've probably noticed, Marshall, that your doctor spends a lot of time typing into the computer when you go in for appointments these days. But this tool can change that for them. Here's how it works. The doctor comes in. They have an app on their phone. They should ask you for permission to record the appointment. The AI takes that recording and it can generate a detailed note that the doctor can review later and then upload it to your medical record. And so the doctors I talked to, they say this really frees them up to make eye contact with patients and to engage with the patients in a more direct way than they were able to when they had to worry about keeping track of everything that they were saying.
Terry: And I know some doctors who say the paperwork they have to fill out takes more time than they get to spend with patients. Is this aimed at helping with that?
Crouch: Yes. So, you know, national research shows that doctors, on average, spend one to two hours a day just charting or filling out paperwork for patients. It's a huge factor contributing to burnout among health care providers, and it's a reason why a lot of them are leaving the profession.
One of the doctors I interviewed said she's spending almost an hour less a night filling out paperwork compared to before she started using the tool, and she says that's partly because it's just a lot easier when the note is already created to edit it and update it versus starting with a blank screen and having to create something from scratch.
Terry: Now, as I mentioned, this is sparking some concerns over privacy and accuracy. Should we be worried about Siri eavesdropping on our medical problems? And are they accurate? I mean, I'm thinking my voice-to-text gets my words messed up all the time.
Crouch: So this isn't really Siri. These are tools that are specially designed for health care providers, and they do have a lot of security features built in, like biometrics and password access. Hospitals also say that the recordings disappear after the doctor approves the notes.
But as with any technology, there is always the risk that information can be hacked. You asked about accuracy and I think maybe that's an even bigger concern right now. Research shows that these virtual scribes are not as accurate for Black patients, for people who speak English as a second language, people who have speech disabilities. And even more concerning, sometimes they actually make up things. These are called hallucinations in the health care industry.
So sometimes if something is garbled or there is some quiet moment in a recording, researchers have found that AI will actually include some completely fabricated content. One researcher found, for example, that AI invented something called hyperactivated antibiotics that was never mentioned at all during an appointment.
I think the fact that there are these biases and these occasional mistakes really just highlights how important it is to make sure that doctors are reviewing these notes and ensuring that they're accurate before they get submitted into a patient's chart.
Terry: AI is not just limited to taking notes. You also looked at other ways doctors are using it. Tell me about some of them.
Crouch: One of the big ones is to spot danger on patient X-rays and scans quickly, perhaps faster than a radiologist can. At Novant, for example, there's an AI tool in the ER that scans patient images as soon as they get uploaded into the system. It can alert doctors right away if it spots something like a broken neck, a brain bleed or a stroke, flagging that this is a patient who needs to be treated immediately.
It's a huge help and can really improve patient outcomes when they have these serious conditions. There are other ways that are kind of interesting, too. At Ortho Carolina, they use an app called Medical Brain. What this app does is check on a patient after they've had a knee replacement or a hip replacement. So traditionally you might have somebody from the office calling to check in on you after your surgery to see if you're having any side effects or if you have any concerns. This app replaces that live phone call. It checks in with patients, asks them about their side effects and can answer their questions. The practice says it's reduced the volume of patient calls by about 70%.
Terry: Now, with all of this change coming to health care, right behind it is oversight and regulation. That's what the final part of the series examines. What kind of rules are in place in North Carolina, and how does that compare to other states and also at the federal level?
Crouch: Well, right now, North Carolina doesn't really have any legislation addressing AI in health care. And there's very limited regulation at the federal level as well, but that's starting to change. I talked to Sen. Jim Burgin here in North Carolina. He heads our Senate health care committee, and he's very interested in talking about possible regulation in this area.
Probably his biggest concern is around liability. He's reviewed a lot of the contracts that these AI technology platforms have with health care systems and found that in most cases, the contracts relieve them of any responsibility for errors in the platform. He really thinks that these systems and their developers need to be held accountable for errors and problems. You know, we could also see legislation banning the use of discriminatory algorithms. A few states have already done so.
And then there's another concern around notification. Should there be a law that requires health care systems to let patients know when and how they're using AI, and gives patients an option to opt out? A few states have already gone in that direction and passed that type of law.