Photo: Gilles Lambert / Unsplash
Words by Ka Man Mak
Oslo – Our digital presence on social media platforms is often intertwined with the opportunity to find jobs, new clients, and connections. Companies use these platforms to promote their brand, attracting both clients and potential recruits. Our digital profiles become billboards for people to learn about us. Yet we seldom consider that the information we share and distribute online could also fall into the hands of cybercriminals and be used to generate near-real content that we never created.
We don’t need to look far for the impact deepfakes have caused: the recent £20m financial loss in a deepfake video call scam at a firm in Hong Kong, the spread of sexual, non-consensual deepfakes of Taylor Swift, and deepfake images of children killed in the Gaza war. Deepfakes affect individuals, organisations, and society on a global scale.
So what is a deepfake?
Deepfakes, also known as artificial intelligence-generated synthetic media, use artificial intelligence (AI) technology to generate audio-visual content that shows people saying or doing things they never did in real life, or to depict people who never existed in the first place. Through a deep learning process, the technology trains on the data it gathers, making fake audio-visual content ever harder to spot over time.
Coupled with social engineering tactics, deepfakes are being used by cybercriminals to impersonate people we know and trust. At the NDC Security conference held in Oslo in January, speaker Aurel George Proorocu, IT OPS Chapter Lead – Cybersecurity and Fraud at ING Bank, demonstrated live just how easy it is to generate a video and audio recording, swapping his face onto a colleague’s and his voice with another’s.
Proorocu showed how cybercriminals operate: luring a mother to transfer money to a scammer to save her son from a supposed accident; a manager to wire funds requested by a fake CEO; and an employee of a targeted firm to hand over information to a bogus HR recruiter. By deceiving, manipulating, and exploiting human trust, scammers lead their victims to reveal sensitive information or perform an action.
Such cyber scams have dire consequences, leading to distrust in society, huge financial losses, and even driving individuals to take their own lives. In 2012, a prank call by 2Day FM, an Australian radio station, impersonating the voices of Queen Elizabeth II and Prince Charles led a nurse to disclose Kate Middleton’s medical condition. The nurse who was duped later took her own life. In another case, in August last year, an online scam left a family in Thailand in debt and drove the man to kill his wife and two sons and attempt to take his own life; he survived.
There is no magic solution to preventing social engineering scams that use deepfakes and AI technology. As deepfake and AI technology advances, detection tools must advance with it. Proorocu said that ING Bank is tackling financial fraud by collaborating with other banks, media outlets, and local authorities to share findings and raise awareness.
“Awareness is the strongest and most successful tool to fight some scams, especially social engineering, where scammers get people to take actions via their own accounts and MFA, which bypasses all the security measures you have in place,” said Proorocu.
He also cautioned that “awareness is not a one-time shot”. In half a year, the situation may change: detection tools may become obsolete, or a better AI tool could replace the ones we now know. It is a ‘marathon’ and a ‘cat-and-mouse game’, a sentiment that resonated with many of the speakers at the NDC Security conference.
Social engineering pentesting, common in the US, is used to raise security awareness at the corporate and institutional levels. However, such methods are frowned upon, as they involve gathering information on staff and the company to craft a scripted pretext that lures and manipulates targeted staff into performing an action that would compromise the company’s security.
Ragnhild Bridget Sageng, a security advisor, used to run such tests for corporate clients. She stressed the importance of following up on the targeted staff’s well-being over time and of creating a security culture in the organisation.
Sageng showed a case study in which she impersonated an electrician to access a school’s server room. She used publicly available information, such as a map of the school and staff details, to plan her test. With a fake email document purporting to come from the IT department, she convinced the janitor to let her into a room filled with servers. The test also revealed issues with the school’s existing protocol for handling security keys.
After the pentesting, Sageng would follow up with the targeted staff, who could still be dwelling on the incident even six months later. When someone is in an emotional state, they cannot learn anything new. So it is important to ‘debrief’ them, as she calls it, with monthly check-ins on the staff who were duped.
It is also important not to blame the person who clicked on the link or failed the security check. “Anyone can be duped,” said Sageng. “It doesn’t matter if you are the top security professional of the company. You can also fall for a phish.”
She presented a phishing email inviting security professionals to participate in an IISA conference while impersonating staff of IEEE (Institute of Electrical and Electronics Engineers). Sixty-six per cent of the security professionals clicked on the link in the phishing email.
Any form of human interaction, whether via communication tools such as email, social media, SMS, and phone calls, or even in person, gives cybercriminals an avenue to deploy their social engineering tactics and commit an attack.
“More knowledge on how social engineering works, and what techniques are being used to manipulate people, will, in combination with knowledge of the deepfake attack methods, empower people to react when they are falling victim to such an attack,” said Sageng in a LinkedIn conversation.
Sageng also cautioned against simply telling people not to trust, as that would be hard: humans want to trust. Instead, ‘trust, but verify’. For example, when a caller explains that they need to use a different phone number to call you, verify by calling their original number, and use other contact channels to make sure the person you are talking to is who they claim to be. Not picking up calls from unknown numbers will start to become the norm, according to Proorocu, even though one might then miss important calls such as a job offer.
Leading up to the EU’s legal framework for regulating AI, a study addressing deepfakes stated, “There are no quick fixes. Mitigating the risks of deepfakes thus requires continuous reflection and permanent learning. The European Union could play a leading role in this process.”
While policy-making may not be watertight in erasing deepfakes from the media ecosystem, media literacy is becoming ever more important in tackling online scams. In December 2023, the EU Artificial Intelligence Act reached a provisional agreement and has been hailed as the world’s first comprehensive law on AI, ahead of China and the US.
Here are tips from Europol on how you can protect yourself from online scams: Take Control of Your Digital Life – Don’t be a Victim of Cyber Scams.
Update Tuesday 6 February 2024, 09:32AM: Errors in the article were corrected, such as the fake email document was from the IT department and not from the principal; and that Ragnhild Bridget Sageng does not currently run social engineering pentesting.