
AI’s Impact on Cybersecurity in the Healthcare Industry

The U.S. Department of Health and Human Services (HHS), through its Health Sector Cybersecurity Coordination Center (HC3), has issued a warning about the cybersecurity implications of artificial intelligence (AI) in the healthcare sector. The alert aims to give hospitals, health systems, and other healthcare organizations the insight and tools to strengthen their defenses against increasingly AI-enhanced cyberattacks.

HC3’s latest threat briefing, released on July 13, identifies the risks associated with OpenAI’s generative pre-trained transformer model, ChatGPT, and similar large language models (LLMs). These risks include phishing attacks, rapid exploitation of vulnerabilities, automated attacks, sophisticated malware, and evasive ransomware.

An example provided by HC3 shows how a phishing email generated by ChatGPT can appear legitimate, with correct grammar and sentence structure, enticing the recipient to open an attachment. Attackers could go further by attaching malicious files and customizing the message to make it more believable.

Furthermore, HC3 highlights proof-of-concept research by HYAS showing how an attacker could use ChatGPT to create a Python 3 program that captures and exports keystrokes. The briefing also notes how Microsoft Teams can be used for data exfiltration and how developer tools can be abused to infiltrate networks with malware code.

HC3 recommends various cybersecurity measures to mitigate these risks, including penetration testing, automated threat detection, continuous monitoring, cyber threat analysis, incident handling, and AI training for cybersecurity personnel to enhance their ability to detect AI-enhanced phishing attempts.
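
To make the automated-detection and continuous-monitoring recommendations a little more concrete, here is a minimal sketch of the kind of check a security team might automate: counting failed logins per source address in an authentication log and flagging sources that exceed a threshold. The log path, log format, and threshold are illustrative assumptions, not part of HC3's guidance.

import re
from collections import Counter

# Illustrative assumptions: an OpenSSH-style auth log and a simple burst threshold.
LOG_PATH = "/var/log/auth.log"          # hypothetical log location
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10                          # flag a source once it exceeds this count

def scan_log(path: str = LOG_PATH) -> list[str]:
    """Return source IPs whose failed-login count exceeds the threshold."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="ignore") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n > THRESHOLD]

if __name__ == "__main__":
    for suspicious_ip in scan_log():
        print(f"ALERT: possible brute-force activity from {suspicious_ip}")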

Despite the security challenges AI poses, ChatGPT and other LLMs can also strengthen cybersecurity, for example by improving email scanning to prevent cyberattacks and automating routine tasks for security teams. Bespoke AI models in particular give an organization its own AI defender that can adapt to its specific needs and behaviors.
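
As a rough illustration of that defensive use, the sketch below sends an email's subject and body to an LLM with a narrow classification prompt and returns the model's phishing verdict. It is a minimal sketch rather than a recommended architecture: it assumes the OpenAI Python SDK (v1+), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name, and a real pipeline would add logging, rate limiting, and human review of the model's output.

from openai import OpenAI  # assumes the OpenAI Python SDK v1+ is installed

# The client reads its API key from the OPENAI_API_KEY environment variable.
client = OpenAI()

PROMPT = (
    "You are an email security assistant. Classify the email below as "
    "'phishing' or 'benign' and give a one-sentence reason."
)

def score_email(subject: str, body: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a phishing verdict on a single email (illustrative model name)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
        temperature=0,  # keep the classification output as stable as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    verdict = score_email(
        "Urgent: password reset required",
        "Dear user, your mailbox will be closed today unless you confirm "
        "your credentials at the link below...",
    )
    print(verdict)  # e.g. "phishing - urgency plus a credential-harvesting link"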

To address the heightened risk of AI-enabled exploits, HC3 points to MITRE's ATLAS framework and the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework. These resources can help healthcare organizations effectively manage the security of their AI and machine learning models.

Ram Shankar Siva Kumar, principal program manager for AI security at Microsoft, emphasized the importance of safeguarding AI and machine learning models as organizations increasingly rely on them. He noted that steps must be taken to secure these models so organizations can empower their workforce and make the best use of time, budget, and resources.

HC3 emphasizes the need for vigilance in the face of AI’s evolving capabilities, as they are not only enhancing offensive cyber efforts but also driving advancements in defensive measures. Keeping up with the latest capabilities of AI technology is crucial for maintaining robust cybersecurity.

In conclusion, AI’s impact on cybersecurity in the healthcare industry is both a challenge and an opportunity. By implementing the recommended cybersecurity measures and leveraging AI for defense purposes, healthcare organizations can enhance their security posture and mitigate the risks posed by AI-enhanced cyber threats.
