Artificial preservatives. Artificial colors. Artificial flavors. Artificial additives.
To most, these are “red-flag” words that lead people to avoid the products containing them. I am starting to wonder if we should hold the same apprehension toward the term “artificial intelligence,” or “AI.” I get that in most cases, the aforementioned “artificial” terms are almost always bad, at least in terms of ingesting them, and that in most cases, AI is good. But does that mean AI can never be bad? Of course not. And after a recent project I was called in on at work, my concern has grown exponentially.
AI has immersed itself in our world at an unprecedented rate. It can be found at the gas pump, the grocery store checkout line, our phones, our browsers, shipping and logistics systems, prosthetics, and even implantable health monitoring devices. Heck, it’s even in our cars now and, actually, has been for a while. For the most part, it serves a good purpose, and that is its intent. But its integration into health care has raised my concern even more.
I was called in on the project because my job involves analytical, data-driven algorithms that drive medical decision-making. Of course, doctors and other medical professionals are concerned about the accuracy and unintended biases that find their way into these systems, and rightly so. This has led, in my opinion, to a recent rebranding of AI: in health care, it is now being referred to as “augmented intelligence,” a combination of artificial intelligence and human intelligence. The data is never to be left to the interpretation of the analytical algorithms alone, but instead used to enhance human decision-making.
Sounds safe, right? I’m not so sure. As this AI becomes more and more commonplace, I fear stakeholders will become complacent in their discernment. Everyone from the designers of the algos to the medical providers who use them will ultimately forget some of them are even there. Sure, there are safeguards in place, like “check protocols” for doctors and regular monitoring inspections of the algos and data for the scientists. But even if they were to catch every bias or inaccuracy, which is fundamentally impossible, who is to say this private information cannot be used unlawfully or even immorally? And this is my concern.
I love seeing the technological advancements being made daily in our world, and I am grateful for the researchers devoting countless hours to their creations and discoveries. But I also know that some actors do not share the morality of the people whose data this is. These could be governments, or even self-righteous radicals who think their manipulation of this power serves the common good, whether or not it aligns with the wishes of those affected. So my point here is not to stave off any use of this technology, but instead to always embrace it with full-fledged skepticism. Not just of the tech itself, but even more so of the individuals charged with its use and implementation.