AI helps hackers steal data. Healthcare providers must get ready now

2 October 2023
AI
News
While AI algorithms have long been used to breach IT systems, for several months hackers have had a new weapon at their fingertips: generative artificial intelligence. For medical facilities already lagging in cybersecurity, this is terrible news. The number of attacks on healthcare facilities has been rising for years, and hospitals find themselves trapped: they lack the financial resources to invest in cybersecurity, and it is increasingly difficult to find IT and data-security experts willing to work for less money than in other industries, and with far more responsibility, since patients' lives and health are often at stake. As a result, current data protection schemes are utterly unprepared for AI in the hands of hackers.

AI-powered virus changes like a chameleon

Generative AI, like ChatGPT, enables hackers to individualize and automate attacks. It can generate millions of emails in different languages, tailored to the cultural profile of a targeted organization, and it can produce compelling messages personalized to the style of the person or organization the hacker is trying to impersonate. An increasingly common form of attack, for example, is uploading AI-generated videos on popular topics to YouTube or social media, which link to an infected website. But the biggest threat is a new generation of phishing attacks. AI systems today can clone a voice from a sample just a few seconds long so convincingly that it is indistinguishable from the real one, and then seamlessly carry on a phone conversation. In this way, hackers can fabricate a phone call from an insurer asking for data verification, or from the director of a facility.

Here are four groups of cyberattacks carried out using AI:

APT (Advanced Persistent Threat). A sophisticated, prolonged, multi-stage cyberattack targeting a specific organization or individual. The hacker sneaks into the network and stays hidden for a long time to steal sensitive data. Here, an artificial intelligence system is used to mask the cybercriminal's presence. High-powered AI systems can also attack IT resources and crack passwords faster and more effectively than humans.

Phishing. Hackers use AI systems that process natural language to create perfectly personalized emails designed to convince people to reveal confidential information. The email may also ask the victim to clarify the matter over the phone; when the victim calls the given number, an AI system handles the conversation.

Deepfake attacks. Artificial intelligence generates synthetic videos or audio recordings, allowing hackers to impersonate trusted individuals, such as network administrators or executives. Since the advent of generative AI systems, preparing deepfake attacks has become trivially easy.

Malware. The weakest point of classic viruses is that once cybersecurity experts spot them, a quick update of antivirus programs can effectively limit their destructive power, because such a program works the same way on each of the millions of computers it infects. Viruses equipped with AI elements, however, learn to camouflage themselves and adapt their modus operandi to a specific situation, computer, or user's behavior.

Anyone can be a hacker

But the most significant security threat may be another trend: thanks to AI, any person with bad intentions can generate and personalize malware or create deepfake videos in seconds using free software. The days are gone when a hacker had to be an exceptionally skilled computer scientist able to break into the most secure systems. Today, a virus can be generated by a layperson who knows how to access the Darknet (a hidden part of the Internet containing illegal resources) or how to join a group of hackers on the largely unmoderated messaging app Telegram. The new generation of hackers has no scruples: it doesn't matter whether the system under attack belongs to a bank or a hospital, because quick profit is what counts. On top of that comes a new geopolitical situation: behind the war in Ukraine, there is a war on the net. The number of attacks on healthcare by Russian or pro-Russian hacker groups - such as Killnet or Crop - has increased severalfold since February 2022, when the Russian invasion began.

AI threatens and protects

With the help of AI, hackers are refining their methods of attack. But AI is also helping to better protect data assets from those attacks. Developers use AI to detect potential security vulnerabilities. Spam-filter algorithms can identify even the most sophisticated phishing attempts. Security professionals and IT experts are turning to AI-based cybersecurity systems that crawl network and IT resources to detect weak elements of the security infrastructure, predict attacks, and identify attempted attacks and learn from them. Such systems can continuously test information-system vulnerabilities and improve defense methods.

Still, the promise of AI-enhanced security can be deceptive. Even big tech companies like OpenAI - the developer of ChatGPT - fall prey to cybercriminals despite heavy investment in cybersecurity. For healthcare providers, this means they must start updating internal data-security procedures now. It is becoming necessary to intensify employee training so that everyone knows how to defend against attacks by hackers using AI - especially phishing attacks, which continue to pose the greatest cyber threat.

The number of cybersecurity threats is snowballing. According to Check Point Research, in 2022 an average of 1,463 cyberattacks on healthcare organizations were registered per week, up 74% from 2021. Early projections suggest that in 2023 the increase could be 60% over 2022.
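To make the spam-filtering idea above concrete, here is a minimal sketch of the kind of statistical text classification such filters are built on: a toy naive Bayes classifier with Laplace smoothing, trained on a handful of invented example messages (the training texts, labels, and class names are illustrative assumptions, not real data or any vendor's actual filter).

```python
import math
import re
from collections import Counter, defaultdict

# Invented toy training set: (message text, label).
TRAINING = [
    ("urgent verify your account password immediately click this link", "phish"),
    ("your insurance requires data verification call this number now", "phish"),
    ("confirm patient billing details by replying with your credentials", "phish"),
    ("team meeting moved to 3pm in conference room b", "legit"),
    ("attached is the quarterly radiology maintenance schedule", "legit"),
    ("reminder flu vaccination clinic for staff opens monday", "legit"),
]

def tokenize(text):
    """Lowercase and split a message into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Multinomial naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, samples):
        self.word_counts = defaultdict(Counter)  # label -> per-word counts
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()
        for text, label in samples:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        words = tokenize(text)
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior probability of the class.
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for word in words:
                count = self.word_counts[label][word]
                # Add-one smoothing so unseen words never zero out a class.
                score += math.log((count + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

model = NaiveBayes().fit(TRAINING)
print(model.predict("urgent please verify your password by clicking the link"))  # → phish
```

Real AI-based filters use far richer features and models (and, increasingly, language models themselves), but the core design choice is the same: score a message against what each class of past messages looked like, rather than matching fixed signatures.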