Opinion | Stop Worrying, and Let A.I. Help Save Your Life

These days, I’m getting far more curbsides, but they’re not with colleagues. They’re with A.I. Sometimes I consult ChatGPT; other times I turn to OpenEvidence, a specialized tool for physicians. I find A.I.’s input virtually always useful. These tools provide immediate, comprehensive answers to complex questions far more effectively than a traditional textbook or a Google search. And they are available 24/7.

To be clear, A.I. isn’t perfect. For my curbside consults, the answers are not as nuanced as the ones I’d hear from my favorite hematologist or nephrologist. On rare occasions, they’re just plain wrong, which is why I review them carefully before acting.

Some people argue that A.I.’s imperfections mean we shouldn’t use the technology in high-stakes fields like medicine, or that it should be tightly regulated before we do. But the biggest mistake now would be to restrict A.I. tools that could improve care by setting an impossibly high bar, one far higher than the bar we set for ourselves as doctors. A.I. doesn’t have to be perfect. It just has to be better than we are.