Executive Summary
It comes as no surprise that as artificial intelligence (AI) rapidly evolves and becomes more widely adopted across business sectors, it has also unveiled a darker side: cybercriminals are harnessing the power of AI to unleash a new wave of sophisticated cyber-attacks that pose a significant threat to individuals, businesses and governments.
Here are some jaw-dropping statistics about AI, according to Darktrace [1]:
- 88% of security leaders think offensive AI is inevitable
- 77% of respondents expect weaponized AI to lead to an increase in the scale and speed of attacks, while 66% felt that it would lead to novel attacks that no human could envision
- 75% of respondents cited system/business disruption as their top concern about weaponized AI
- Over 80% of cybersecurity decision-makers agree that organisations require advanced cybersecurity defences to combat offensive AI
In this blog, we will unravel the dark side of AI, examining the attack vectors that leverage AI techniques to enhance malicious activity and how we can combat these emerging threats.
The Modus Operandi
AI-related attack vectors are the methods by which cybercriminals use AI to exploit a target's networks, infrastructure, devices or data. They represent the dark side of AI because they turn its capabilities towards harmful ends rather than beneficial ones. The following are the current attack vectors that utilise AI, with more sure to come.
AI-powered phishing campaigns: Cybercriminals are getting hands-on with various tools and chatbots, most infamously ChatGPT, to generate more convincing and personalised phishing emails, making it difficult for recipients to recognise them as fraudulent. According to Blessing et al. (2022) [2], malicious actors are capable of weaponising machine learning algorithms to improve phishing attacks and make them invisible to cybersecurity detection systems. One such tool, DeepPhish, leverages AI and deep learning techniques to create highly convincing and personalised phishing emails and messages.
DeepPhish ingests large data sets about potential victims and learns patterns from the data supplied by the cybercriminal; it then uses deep learning to automatically generate phishing emails and messages that replicate legitimate ones. DeepPhish is also dynamic, in the sense that it can adapt its phishing content and delivery in real time based on the target's responses and behaviour.
AI-powered Malware: AI-powered malware is on the rise, with new techniques being introduced that let malware adapt and evolve within its environment, making it difficult for security solutions to detect. Cybercriminals are using fuzzy models to develop next-generation malware capable of learning from the environment it is in, which allows it to update itself with new variants and infect vulnerable and sensitive computer infrastructure without being noticed. Some of the AI techniques that can be used in malware include:
- Hiding malware code from detection
- Traffic detection evasion
- Attacking defensive AI
- Stealing authentication factors used on mobile devices
Although AI-powered malware is still at an early stage, a proof-of-concept called 'BlackMamba' utilises AI to dynamically modify benign code at runtime without any command-and-control (C2) infrastructure, allowing it to slip past automated security systems that are attuned to look for exactly this kind of behaviour.
According to Jeff Sims (2023) [3], BlackMamba uses a benign executable that reaches out to a high-reputation API (OpenAI) at runtime to return synthesised malicious code needed to steal an infected user's keystrokes. It then executes the dynamically generated code within the context of the benign program using Python's exec() function, with the malicious polymorphic portion remaining entirely in memory. Every time BlackMamba executes, it re-synthesises its keylogging capability, making the malicious component truly polymorphic. BlackMamba was tested many times against an industry-leading EDR, which will remain nameless, resulting in zero alerts or detections.
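The core mechanism described above, receiving code as text at runtime and running it entirely in memory, is ordinary Python. A harmless sketch (the "payload" here is a benign string; in BlackMamba it would be code returned by a remote API) shows why nothing malicious ever touches disk for a file scanner to inspect:

```python
# Benign illustration of in-memory dynamic code execution.
# The payload is a harmless string standing in for code that, in
# BlackMamba's case, is fetched from a remote API at runtime.
payload_source = """
def greet(name):
    return f"Hello, {name}"
result = greet("analyst")
"""

namespace = {}
exec(payload_source, namespace)  # compiled and executed entirely in memory
print(namespace["result"])       # -> Hello, analyst
```

Because the code exists only as a string and a compiled in-memory object, defences that rely on scanning files on disk never see it; behavioural and memory-based detection is needed instead.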
Deepfakes: AI-generated deepfakes involve creating realistic-looking but fake audio, video or image content using AI algorithms. These can be used for nefarious purposes such as spreading misinformation, impersonation, fraud, blackmail or social engineering attacks. As deepfakes grow in popularity, news articles regularly report on realistically altered images of celebrities and other famous figures. This is very concerning, as cybercriminals can use the same technology for their own nefarious gains.
In a real-world example, revealed by the insurance company that covered the cost of the incident, the scam started when the CEO of a UK energy firm got a call from what sounded like his boss, the head of the firm's German parent company. The voice the UK-based CEO heard was not his legitimate boss but a threat actor using deepfake audio, who asked the CEO to transfer $243,000 into the account of a Hungarian supplier [4].
The intentions behind the attacks mentioned above can be numerous: financial gain, political motivation, disrupting competitors or data theft.
Why does this Matter?
It is highly important that everyone take precautionary action against AI-generated cyber threats, as they can have a huge impact. Whether you are an individual or a business, you must understand the risks and the preventative methods that stop you becoming a victim. This has only just started, and AI-related cyber threats will continue to develop and improve.
To stay ahead of these threats, keep up to date with current technologies and the threats emerging in the landscape, seek professional advice from experts in the field, and equip yourself with security defence mechanisms. Lastly, it is always good to share information with others and raise awareness of these threats.
How to Protect Yourself
Whether you are an individual or an organisation, it is important to take steps to protect your assets from these emerging AI-related attack vectors by adopting AI-related detection systems, zero-trust architectures, cloud services, backups and user awareness training. Additionally, organisations should work closely with security vendors, government agencies and industry groups to share information and stay abreast of the latest threats.
Whilst social media platforms are working hard to prevent these issues, they will not always be successful. Social media users should be wary of the adverts they click on platforms such as Facebook, Instagram and Twitter. Employers should also educate their employees about the risks of using social media on work devices: even if a popular page posts an ad, that does not guarantee it is safe. Users should also make use of antivirus software that regularly detects malicious files, and of ad blockers to stop accidental clicks on malicious sites.
AI-related detection systems: Utilising AI-related detection systems is one of the most efficient ways to defend against AI-related threat vectors. These systems employ machine learning algorithms to spot and alert users to potentially dangerous behaviour, such as adversarial examples or deepfakes. Organisations can keep up with attackers by using AI to identify AI-related threats. Many cybersecurity companies offer these services through a SOC (Security Operations Centre), where analysts detect and respond to malicious activity. Many modern SOCs use a SIEM (Security Information and Event Management) platform with AI and machine learning capabilities to enhance threat detection and response [5].
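As a toy illustration of the kind of statistical baselining a SIEM's machine learning layer performs (a deliberately simplified sketch, not any vendor's actual algorithm), consider flagging hours whose failed-login counts deviate sharply from the historical norm:

```python
import statistics

def flag_anomalies(hourly_counts, threshold=3.0):
    """Return indices whose value sits more than `threshold` standard
    deviations above the mean -- a crude stand-in for the behavioural
    baselining an AI-enabled SIEM performs over event streams."""
    mean = statistics.mean(hourly_counts)
    stdev = statistics.pstdev(hourly_counts) or 1.0  # guard against zero spread
    return [i for i, c in enumerate(hourly_counts)
            if (c - mean) / stdev > threshold]

# 24 hours of failed-login counts; hour 13 shows a sudden spike
counts = [4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 6, 5, 4, 90,
          5, 4, 3, 5, 4, 6, 5, 4, 3, 5]
print(flag_anomalies(counts))  # -> [13]
```

Production SIEMs build far richer baselines (per user, per device, per time of day), but the principle is the same: alert on deviation from learned normal behaviour rather than on known-bad signatures.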
Zero-trust architectures: Adopting a zero-trust architecture is another significant action an organisation can take. Under the zero-trust security model, no user, device or network, whether inside or outside the corporate perimeter, is trusted by default; every request must be authenticated and authorised. By implementing zero trust, organisations can shrink their attack surface and make it harder for attackers to reach sensitive data.
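The decision logic behind zero trust can be sketched in a few lines. This is a minimal illustration with hypothetical field names, not any product's policy engine; the point is that network location never appears as a trust signal:

```python
# Minimal zero-trust access decision: identity, second factor, device
# posture, and explicit authorisation are all required on every request.
# The request fields here are illustrative, not from a specific product.
def allow_request(request: dict) -> bool:
    return (
        request.get("user_authenticated", False)    # strong identity proof
        and request.get("mfa_passed", False)        # second factor verified
        and request.get("device_compliant", False)  # managed, patched device
        and request.get("resource") in request.get("authorised_resources", [])
    )

# A caller on the internal network that has not completed MFA is denied:
internal_but_unverified = {"user_authenticated": True, "mfa_passed": False,
                           "device_compliant": True, "resource": "hr-db",
                           "authorised_resources": ["hr-db"]}
print(allow_request(internal_but_unverified))  # -> False
```

Being "inside the network" contributes nothing to the decision, which is precisely what distinguishes zero trust from perimeter-based security.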
Secure email gateways: Use secure email gateways with advanced filtering and scanning capabilities to block phishing emails and malicious attachments before they reach users' inboxes. Many products offer these capabilities, such as Microsoft Defender, Mimecast and Darktrace.
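To make the filtering concrete, here is a deliberately crude sketch of the kind of signal a gateway scores on (display-name/domain mismatch, urgency phrases, raw-IP links). Real gateways combine hundreds of such features with trained ML models; the names and weights below are invented for illustration:

```python
import re

SUSPICIOUS_PHRASES = {"verify your account", "urgent wire transfer", "password expired"}

def phishing_score(sender_display, sender_addr, subject, body):
    """Toy heuristic score; higher means more suspicious."""
    score = 0
    # Display name claims a brand the sending domain does not match
    domain = sender_addr.split("@")[-1].lower()
    if "microsoft" in sender_display.lower() and "microsoft.com" not in domain:
        score += 2
    # Urgency / credential-harvesting language
    text = (subject + " " + body).lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links pointing at a raw IP address instead of a named host
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score

print(phishing_score("Microsoft Support", "help@m1crosoft-login.xyz",
                     "Password expired",
                     "Verify your account at http://203.0.113.7/login"))  # -> 6
```

A gateway would quarantine or tag messages above a tuned threshold; the value of AI-generated phishing to attackers is precisely that it produces text that trips fewer of these surface signals, which is why modern gateways lean on ML rather than fixed rules.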
Cloud Usage: Another important aspect of defending against AI-related threat vectors is cloud usage. By leveraging the scalability, flexibility and security of cloud services, organisations can better defend themselves against sophisticated threats and respond to security incidents.
Backups: Backups are another essential component of defence against AI-related attack vectors. By routinely creating, maintaining and testing backups, organisations can ensure they recover from a security incident quickly and with minimal data loss.
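Testing matters as much as creating: an unverified backup offers false confidence when recovering from destructive malware. A minimal sketch of a copy-then-verify step, using checksums to confirm the backup is bit-identical (illustrative file names only):

```python
import hashlib
import shutil
from pathlib import Path

def backup_and_verify(source: Path, dest_dir: Path) -> bool:
    """Copy `source` into `dest_dir`, then confirm the copy matches
    the original byte for byte via SHA-256 checksums."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / source.name
    shutil.copy2(source, dest)  # preserves timestamps and metadata
    digest = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    return digest(source) == digest(dest)

# Example: back up a small file and verify the copy
src = Path("example.txt")
src.write_text("critical business data")
print(backup_and_verify(src, Path("backups")))  # -> True
```

Real backup regimes add rotation, off-site or immutable copies, and periodic restore drills, but checksum verification is the cheapest first line against silently corrupted backups.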
User education: Lastly, organisations must budget for training staff to be aware of the current attack landscape. By educating staff on the most recent risks and best practices, organisations can lessen the risk of human error and ensure everyone understands the value of security.
Conclusion
The rise of artificial intelligence will bring a lot of good, but there is also a dark side that has only just started and will continue to grow. AI-powered phishing campaigns, AI-enabled malware and deepfakes are no longer a myth; they are here, shaping the face of cyber threats in ways previously unimagined. It is evident that the cyber landscape will shift into a new era of sophistication built on automation, machine learning and AI.
References
[1] https://darktrace.com/news/88-of-security-leaders-say-supercharged-ai-attacks-are-inevitable
[2] https://www.tandfonline.com/doi/full/10.1080/08839514.2022.2037254
[3] https://www.hyas.com/blog/blackmamba-using-ai-to-generate-polymorphic-malware
[4] https://www.tessian.com/blog/what-are-deepfakes/
[5] https://www.balbix.com/insights/artificial-intelligence-in-cybersecurity/