Dr. HAL Will See You Now
Artificial Intelligence is all the rage now. One AI was reportedly able to pass a medical board licensing exam. ChatGPT and Bing AI are free to use, so I’ve been playing around with them. Instead of discussing their dangers myself, I thought I would let Bing AI write the article. Here it is:
AI is artificial intelligence, which is the ability of machines or software to perform tasks that normally require human intelligence, such as diagnosis, treatment, and decision making. AI has many potential benefits for medicine, such as improving accuracy, efficiency, and accessibility of health care services. However, AI also poses many risks and challenges that need to be addressed carefully and ethically.
Some of the dangers of AI practicing medicine are:
• Misdiagnosis: AI systems may sometimes be wrong or inaccurate in diagnosing a patient’s condition or recommending a treatment plan. This may result from faulty or manipulated training data, lack of human supervision, or overlooking social variables that affect patient needs. Misdiagnosis can lead to patient harm, worsening outcomes, or unnecessary costs.
• Lack of responsibility: AI systems may not have clear accountability or liability for their actions or errors. It may be difficult to determine who is responsible for an AI system’s performance, maintenance, or oversight, and who should be sued in case of malpractice. Lack of responsibility can undermine trust, transparency, and quality of healthcare services.
• Malicious activity: AI systems may be vulnerable to security risks such as hacking, tampering, or theft of sensitive data. Malicious actors may exploit AI systems for personal gain, political agenda, or criminal purposes. Malicious activity can compromise patient privacy, safety, and well-being.
These are some of the dangers of AI practicing medicine that need to be considered and addressed by healthcare professionals, regulators, developers, and users. AI has great potential to improve health care delivery and outcomes but also poses significant risks and challenges that require careful evaluation and regulation.
One more thing, perhaps the most ominous. The more regulated we become, the smaller our scope of decisions becomes. Remember, AI is simply regulations for an idiot called a central processor. The more closely humans resemble central processors handling regulations, the less human we become. Providers are hired on the myth that the more input you feed the machine, the better the healthcare becomes. Sorry, but sequencing a genome does nothing for diagnosing a disease.
Ooh fun! Now do it for scope of practice laws…. 🤣
I’m giving up on the medical profession. This is ridiculous. What can we trust? My goodness, the doctor that took over for Dr. Farrago is following the heart association and diabetic association food guidelines. It’s deplorable. I am under the impression that no matter where Americans turn, the government is going to dictate how we take care of ourselves. Nannies everywhere, and the hands of doctors who actually practice real common sense are tied.
Kathy, talk to Dr. Dan. He himself does intermittent fasting and a paleo, low-carb diet, and lost 30+ pounds.