The Medical Futurist | 5 min | 2 March 2023
You have probably heard that ChatGPT has recently passed business, law and medical exams, qualified as a level-3 coding engineer at Google (with a $180K starting salary!), outperformed most students in microbiology and scored a passable grade on a 12th-grade AP literature test.
Of course, it's better to take these results with a pinch of salt. Although the algorithm indeed did reasonably well on the USMLE test, it was not a complete assessment, as all questions requiring visual interpretation were removed.
Pinch of salt or not, however, it is obvious how the capabilities of these large language models have been expanding and how far they have travelled. So what’s next? What will the new reality with smart AI at our fingertips look like?
1. AI tools will not be credited as authors in scientific publications
As of now, large language models will not become co-authors of scientific publications, as they cannot take responsibility for their work – such is the ruling by Springer Nature, the world’s largest academic publisher.
This doesn’t mean that such tools will not be used. Rather, we should move towards a practice in which authors declare which AI tools were used, and exactly how, in the methods/acknowledgements sections of publications.
2. Medical alternatives to ChatGPT will arrive
As large language models develop, we will soon meet ones that were specifically designed for medical use. Google’s Med-PaLM is an early example, and I expect that not only will they make Bard available for public use, but they will also develop a medical version of it. Similarly, a medical ChatGPT is also on the horizon.
These will be game-changers. Trained on verified, accurate medical data, such algorithms will offer enormous assistance to healthcare workers in a number of ways, from checking suspicious symptoms to crafting easy-to-understand information material on the most common problems in any given practice.
3. Doctors will see mayhem as patients start arriving with info from ChatGPT
Physicians have had their share of trouble with patients using Google in recent years – after all, not every headache is caused by a brain tumour, even if the first few pages of search results suggest otherwise.
Now they will face a new difficulty: patients arriving with information provided by large language models, which may or may not be correct or relevant. This will require a new kind of empathy, and a new skill set to help patients make sense of such chatbots and understand their limitations.
4. Where are your sources and references, ChatGPT?
In medicine, information might as well be non-existent unless there is a source we can verify. This principle is now being challenged, as there is no way such large language models – which build their answers on billions of pages of diverse information – can list their sources. In some cases, each word in a sentence comes from somewhere else.
Thus medical professionals will increasingly become gatekeepers of reliable information, and they will need to level up their skills to vet and verify the answers provided by such algorithms.
5. Using chatbots will become part of the (medical) curriculum
This is an immediate necessity; we don’t have years to figure out whether to include these algorithms in the study material, as most students are using them daily by now. At Hungary’s Semmelweis University, it is already part of the training, and I’m sure this is (or will very soon be) the case at most universities all over the world. It is extremely important to react to this phenomenon and teach students to use AI in a smart, critical and responsible manner.
A connected issue: written tests and assignments will also need to change so that they can’t be easily completed by these algorithms alone. Take-home essays are probably a thing of the past.
6. We will see the first healthcare company implementing it in their practice this year
I bet the first to break ground will be a reasonably fancy healthcare practice that aims to exploit all the marketing benefits of using such an advanced tool. I would not be at all surprised to hear about it this year, and the most likely use cases will be community outreach or crafting patient-facing messages with ChatGPT.
Everyone is looking for an angle now
Such AI tools present brand new challenges for everyone, from medical associations to regulatory bodies, from universities to companies. No wonder there is quite a bit of confusion around these solutions, their potential use cases, and the legal/ethical/technical implications of implementing them. We will definitely keep a close eye on this field as it matures.