You're whizzing along a dark, outback highway when the symptoms start.
Your mouth is so dry it's getting hard to swallow or talk, your grip on the steering wheel is weakening and that centre white line just turned into two.
You pull in at a small town hospital where the tired junior doctor tells you it's probably a virus, and then hands your case over to night staff.
But the night doc comes with a difference.
She retakes your history wearing Google Glass, and the web-enabled headgear uses voice recognition to input your symptoms into a massive database.
It's botulism: very rare, often fatal, and you're just in time to get the antitoxin.
If it all seems a bit far-fetched, the technology is, in fact, on our doorstep.
Google Glass is alive and kicking; San Francisco health start-up Augmedix is refining the internet-browsing eyeglasses to give doctors real-time access to patients' electronic health records and the web.
And Google Glass is compatible with apps, such as Isabel, that can compute the likely top diagnoses from a patient's symptoms and, according to a 2016 review, even improve on the accuracy of clinicians.
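The core idea behind apps like Isabel — rank candidate conditions by how well their characteristic symptoms match a patient's presentation — can be sketched in a few lines. This is a toy illustration only, not Isabel's actual algorithm; the mini knowledge base and the overlap score are invented for the example.

```python
# Toy diagnosis ranker -- NOT Isabel's real method or data.
# Each condition lists characteristic symptoms; candidates are
# scored by the fraction of their symptoms the patient reports.
CONDITIONS = {
    "botulism": {"dry mouth", "muscle weakness",
                 "double vision", "difficulty swallowing"},
    "viral infection": {"fever", "fatigue", "sore throat"},
    "dehydration": {"dry mouth", "fatigue", "dizziness"},
}

def rank_diagnoses(symptoms: set) -> list:
    """Return condition names, best match first."""
    scores = {name: len(symptoms & features) / len(features)
              for name, features in CONDITIONS.items()}
    return sorted(scores, key=scores.get, reverse=True)

# The driver from the opening scene: dry mouth, weak grip, double vision.
patient = {"dry mouth", "muscle weakness", "double vision"}
print(rank_diagnoses(patient)[0])  # -> botulism
```

Real systems weight symptoms by prevalence and specificity rather than simple overlap, but the ranking principle is the same.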
In a world where the volume of healthcare data, including patient notes, lab tests, medications, imaging, and research articles, will soon be counted in yottabytes – that's 10²⁴ bytes and enough, according to IBM, to fill a stack of DVDs stretching from Earth to Mars – it's understandable that doctors could use a little help.
But the march of technology is causing frissons of nervousness in medical circles, not just about how to incorporate it into everyday practice but, ultimately, whether jobs now done by doctors could one day be taken by machines.
"There is the universe of what we know, and then there is what I know," says Herbert Chase, a physician and professor of medicine at Columbia University.
"Medical practitioners can't be expected to master the opus required to recognise all diseases," Chase says.
"In terms of knowledge, diagnosis, optimal treatment, guideline-based care, I'm pretty sure that machines are already, in some ways, much better than we are."
Chase is referring to a branch of AI that promises a tectonic shift in how medicine is practised: it's called machine learning.
Machine learning is a way of training computers to tell things apart that leaves the "learning" to the computer itself; its artificial neural networks forge "knowledge" much as our own brains do.
Take the question of whether a shadow on a chest X-ray is a cancer or something less sinister.
A typical machine-learning approach would feed the computer a massive database of chest X-rays with shadows that had been proven cancerous or benign.
The computer would then come to its own conclusions about what features of the X-rays robustly predicted cancer.
What's revolutionary is that, because the computer "sees" differently to a radiologist – it objectively applies statistics to millions of pixels – it could, theoretically, discover features in an X-ray not previously thought to flag cancer.
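The X-ray example above can be shrunk to a runnable sketch. Here a simple logistic-regression learner (real systems use deep neural networks, and the "images" below are synthetic four-pixel stand-ins, purely for illustration) is fed labelled examples and left to work out for itself which pixel predicts the label — the "let the data speak" step.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for labelled X-rays: each "image" is 4 pixel
# intensities, and pixel 2 secretly determines the "cancer" label.
# The learner is never told this rule -- it must discover it.
def make_example():
    pixels = [random.random() for _ in range(4)]
    label = 1 if pixels[2] > 0.5 else 0
    return pixels, label

data = [make_example() for _ in range(2000)]

# Logistic regression trained by stochastic gradient descent:
# the simplest "learn from labelled examples" algorithm.
weights = [0.0] * 4
bias = 0.0
lr = 0.5
for _ in range(200):
    for pixels, label in data:
        z = sum(w * p for w, p in zip(weights, pixels)) + bias
        pred = 1 / (1 + math.exp(-z))       # predicted probability
        err = pred - label
        weights = [w - lr * err * p for w, p in zip(weights, pixels)]
        bias -= lr * err

# The largest learned weight lands on pixel 2: the model has
# "discovered" the predictive feature from the data alone.
strongest = max(range(4), key=lambda i: abs(weights[i]))
print(strongest)  # -> 2
```

Swap the four pixels for millions, and the hand-rolled learner for a deep network, and you have the shape of the radiology systems the article describes.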
With machine learning, according to a September editorial in The New England Journal of Medicine, "we let the data speak for themselves".
Letting the data speak for themselves has, this year alone, delivered lung cancer prognoses with greater accuracy than pathologists and, in a study lead-authored by Google scientists published in JAMA, predicted diabetic eye disease better than a panel of ophthalmologists.
And it may have saved its first life.
In August, Japanese doctors reported using IBM's supercomputer Watson to crunch through a patient's myriad genetic mutations to diagnose a rare leukaemia.
The task would have taken a person two weeks; Watson took 10 minutes.
And the stakes couldn't be higher.
A May report in the British Medical Journal concluded that, after heart disease and cancer, medical error is the third-highest cause of death in the US, claiming a staggering 251,000 lives a year.