Artificial Intelligence is Naturally Stupid

Over the past two years, there has been an explosion in the amount of artificial intelligence (AI) software available, not just to healthcare professionals like myself, but to the general public. In many ways, AI has been quite helpful. I myself have been using AI scribe software in my office for close to a year now. The software listens to the conversation I have with my patient, and automatically generates a clinical note.

The AI scribe has been an enormous benefit to me. My medical notes are much better (and somewhat more detailed), and I save an hour of admin time a day (!). As an aside, this is actually a reason why the government should fund AI scribes for physicians. Under the new FHO+ model, we are paid an hourly rate for administrative work. Surely, saving five hours of a physician's time a week is worth the government purchasing a scribe for physicians.

There are also some significant benefits for patient care. Another piece of AI software I use (that’s restricted to health care professionals) helps me with challenging cases. I am able to put the symptoms and test results into the software and it generates a list of potential diagnoses, and suggestions for next steps. It can also recommend treatments for rare conditions.

The general public can also benefit from AI. I recently had a little bit of trouble with my trusty 13-year-old SUV. I put the make and model of the SUV into a commercially available AI, entered the symptoms, and it generated a list of potential causes based on known issues with my SUV.

To be abundantly clear, I would never attempt to fix a car myself. Just as, with all due respect, patients should never, ever attempt to implement a treatment plan for themselves. What AI did do is give me the ability to have an intelligent conversation with the auto mechanic about the situation. And, dare I say it, allowed me to ensure that the mechanic was not trying to pull the wool over my eyes. (My vehicle is now fixed and running very smoothly.)


But along with the many benefits of AI software, there is, of course, potential for harm. This can range from ludicrous to dangerous.

The phenomenon of AI scribe hallucination is well known to physicians like myself. I have seen it in my own software, and it is the reason why I always read the note before I paste it into the patient's chart. Admittedly, some of it is laughable:

Hopefully this is an AI hallucination of my skills, as opposed to the software’s judgement!

Additionally, the reality is that AI scribes often can't capture a patient's lived experience (which is so important to building a relationship with a patient) in a note. My colleague Keith Thompson had a superb post on LinkedIn talking about how the AI scribe failed to recognize his personal interactions with an Indigenous patient, particularly with respect to understanding generational trauma.

Sadly, there have been cases where actual harm has been caused by AI. Grok is currently being investigated for generating sexualized images without consent, including those of minors. This causes severe emotional distress and real harm to the victims. There have also been concerns that AI chatbots are helping or suggesting people harm themselves. No one wants any of this stuff to happen, including the people who write AI software. But it has happened.

All of which reminds me of something that my computer science teacher in high school was fond of saying. (Note to my younger readers, and particularly my sons if they ever read my blog: Yes, there actually were computers when I was a teenager. I am not that prehistoric!)

How I’m viewed by my younger colleagues and my children!

The redoubtable Mr. Williams always implored:

“Do not forget, computers and software are actually very very stupid. They can do some things very fast, but they can only do what they are told.”

It’s a piece of wisdom that still holds true today.

With processing speeds almost infinitely faster than when I took computer science, computers can do many calculations very, very fast. My desktop computer, which is a few generations old, can run 11 trillion operations a second. Heck, my phone, which is itself 4 years old, could probably run a fleet of 1980s Space Shuttles. Speed is not the problem now.

The fleet of US Space Shuttles

The problem is that these computers and software still don't actually have the ability to "think" outside of their parameters. They only do what they are programmed to do. If, for example, they are programmed to answer questions asked by a user, but are not given specific rules to avoid illegal answers, well, they will answer the questions directly. If the programming contains an inadvertent error (someone entered a "0" in the code instead of a "1"), the software will NOT be able to realize that was a mistake, and will carry out calculations based on the wrong code.

It is true that software is increasingly being taught to "look" for errors. But again, the software can only see the errors it is programmed to look for. It can't find inadvertent errors and it can't "think outside the box." It is, for lack of a better word, too stupid to do so.
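Mr. Williams's point can be sketched in a few lines of Python. This is a purely hypothetical example (the function and constant names are mine, not from any real clinical software): the program executes exactly what it is told, a one-character typo silently changes every answer, and the built-in error check catches only the mistake it was told to look for.

```python
def validated_dose(weight_kg: float) -> float:
    """Intended rule (hypothetical): 1.0 mg of drug per kg of body weight."""
    # The error the programmer anticipated IS caught:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # The inadvertent typo -- a "0" where a "1" belongs -- is NOT caught.
    # The computer has no idea this constant is wrong; it just computes.
    MG_PER_KG = 0.0  # should be 1.0
    return weight_kg * MG_PER_KG

print(validated_dose(70))  # runs without complaint and prints 0.0
```

The check on the first line dutifully rejects a negative weight, because someone told it to; the far more dangerous wrong constant sails straight through, because no one did.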

All of which is my fancy and longish way of saying that while these new tools are great, at the end of the day they simply cannot replace the human experience. Just as the software couldn't recognize the generational trauma of an Indigenous patient, there is a lack of "gut instinct." That feeling you have when you are missing something, and you know a patient is sicker than they may seem. It's a trait seen in our best clinicians, and one that no programming can replace.

Using an AI tool is just fine. But for my part, I’m going to agree with Mr. Spock:

Corporatization of Medicine Continues Unabated

Last week, a story came across my feed that seems to have been almost completely ignored by most who are in, or follow, medicine and health systems. WELL Health Technologies announced that it has purchased 100% of CognisantMD, the developer of the Ocean platform. For those who don't know, Ocean is a platform that links to various EMRs and allows for securely emailing patients, eReferrals, filling out forms online, and a number of other features.

Full disclosure: my practice uses Ocean as well (for now). Personally, I find it somewhat clunky and not as smooth as advertised, but it does have some positive features.

What’s the problem then? It’s a friendly corporate takeover. Happens all the time in the business world.

To understand the concerns, let’s look at what WELL Health does. According to their own website, WELL Health offers a wide array of digital health care solutions. But they also state they are “Canada’s largest outpatient medical clinic owner-operator and leading multi-disciplinary telehealth service provider”. In essence, they run the clinics, and physicians work for them.

A further dive into their strategy, under the "Reinvest" tab, states:

“Acquisition of cash generating companies leads to increased cash flows which are re-invested to make additional new cash generating acquisitions.”

Pure and simple, WELL Health is a private, for-profit corporation. There is, of course, nothing wrong with private corporations. Most people who follow my Twitter feed know that I am generally pro-business, and on most issues land on the right side of the political spectrum. I firmly believe we need more, not fewer, businesses in this country and we need to make it easier for businesses to function.

BUT acquisitions like these, and the continued takeover of clinics by corporations, should make us ask legitimate questions about the protection of individual health care data. It is no secret that one reason companies like Google and Facebook have become so successful is that they found a way to monetize personal data. In much the same way, personal health care data has enormous economic value to companies. Whoever finds a way to properly monetize it will be the next Jeff Bezos or Mark Zuckerberg, so it's no wonder that companies are extremely interested in getting into this field.

As I mentioned in a previous blog, Shoppers Drug Mart recently acquired a stake in Maple, a leading virtual-care-only provider, for $75 million. They continue to advertise on their website (as of Dec 6, 2021) the ability to diagnose strep throat virtually (which I personally find questionable) and then to send antibiotics to a pharmacy near you (I'm guessing there will be a Shoppers Drug Mart near you).

Screen shot as of Dec 6, 2021

In a circumstance where a patient contacts Maple, the doctor or NP gets paid to virtually assess the patient, and Maple gets a percentage of the fee to cover overhead, which presumably will be reflected in shareholder value for Shoppers. If a prescription gets sent to a Shoppers, well, they make a profit there too. Neat business model.

But it's not just companies that already have an interest in providing health care related services that are trying to get involved in this field. Amazon is jumping into health care with a telemedicine initiative. Google has long planned to get into health care, and while not terribly successful yet, I doubt they will stop trying. Heck, even Uber (!) wants to get involved in health care.

It’s easy to see why everyone wants in. There is a lot of money and potential profit in health care. And while I am all for companies making a profit, that doesn’t mean that we can’t ask some hard questions about the protection of personal health care data such as:

  • How secure is the data being held on servers owned by these corporations?
  • How do we ensure personal health data doesn't go where it's not authorized? (e.g., suppose the parent company owned a family practice clinic AND a disability insurance company)
  • How do we ensure personal health data is not used to monetize other aspects of a business? (e.g., suppose a walk-in clinic was owned by a pharmacy. A patient attends for a renewal of cholesterol medications, and then gets ads offering, say, flax seed oil capsules that are helpfully sold by that same pharmacy)
  • How do we ensure aggregate health data housed on those servers is only used to help the community at large? (e.g., finding communities that may need extra resources for, say, opioid addiction)
  • If a physician stops working at a clinic owned by MegaCorp Inc. for whatever reason, how does that physician access their charts after the fact? (I'm aware of a number of cases where access to patient records was cut off immediately upon the physician leaving such a clinic)

I've just posited a few questions; I'm sure there are many more. I believe that most Canadians strongly value health care privacy. As more and more businesses attempt to get involved in health care delivery, it is vital that we have a framework for oversight that ensures patients have the absolute right to protect their personal health information. Sadly, I don't see any organization or government agency out there asking these important questions.