
‘Deepfake’ technology used in AI-powered vishing attack

Mobilised people and technology can minimise the threat

In August this year, a top-ranking executive at a UK-based energy supplier became the latest victim of an AI-powered ‘deepfake’ vishing attack, wiring US$243,000 (€220,000) to cyber criminals.

For the most part, the attack was social engineering 101: criminals, posing as the exec’s manager, asked the exec to make an ‘urgent payment’. What made the attack so convincing, however, was the medium the criminals used.

Rather than contacting the exec via email or SMS, the criminals spoke to him on the phone, using ‘deepfake’ technology to impersonate his boss precisely. The boss’s accent, rhythm and intonation were all perfect. As far as the exec was concerned, he was hearing his boss speak. So, no doubt influenced by a desire to follow orders, he ignored the warning signs and wired the funds.


A simple yet effective cyber attack

Some publications labelled the story the first reported AI-powered vishing attack, but they’re incorrect. A month earlier, following years of speculation among security researchers, the BBC reported that at least three such attacks had already taken place. Security experts suggest AI vishing attacks aren’t particularly difficult to manufacture. Using publicly available audio, criminals could, in theory, train AI models to replicate the voice of almost any high-profile subject. And, given the estimated 500 hours of new content uploaded to YouTube every minute, the bank of audio criminals have to work with is only growing.

As things stand, however, the attacks aren’t cheap. The cost of accurately replicating an individual’s voice is likely to run into the thousands. For the time being, then, AI-powered vishing attacks will probably remain relatively rare, giving security professionals a small window in which to mount their defences.


How to prevent AI vishing

What might such a defence look like?

In the attack recounted above, criminals asked the victim to send them money. Although it might seem rudimentary, a simple verification check would have revealed the request to be fraudulent. What might have prompted the verification check?

Perhaps nothing. Perhaps the innovative nature of the cyber attack rendered the victim powerless. But it’s possible to imagine the exec becoming a shield under a people-centric security culture.

“Happy to send the funds,” the exec might have replied to the request. “I know you’d question me if I ducked a verification check, so I’m just going to give you a quick call back before hitting send.”
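
That call-back check is easy to operationalise. Below is a minimal sketch, in Python, of how a finance team might flag payment requests for out-of-band verification against a directory of known numbers. Everything here (the KNOWN_CONTACTS directory, the PaymentRequest fields, the €10,000 threshold) is a hypothetical illustration, not a prescribed control.

```python
# A minimal sketch of an out-of-band verification rule for payment
# requests. All names here (KNOWN_CONTACTS, PaymentRequest, the
# threshold) are hypothetical, for illustration only.

from dataclasses import dataclass

# Directory of independently verified phone numbers, maintained
# separately from any channel an attacker could control.
KNOWN_CONTACTS = {
    "ceo@example-energy.co.uk": "+44 20 7946 0000",
}

@dataclass
class PaymentRequest:
    requester_email: str
    callback_number: str  # the number the requester asked to be reached on
    amount_eur: float

def requires_callback(request: PaymentRequest, threshold_eur: float = 10_000) -> bool:
    """Flag any sizeable or unverifiable request for a call-back check."""
    known_number = KNOWN_CONTACTS.get(request.requester_email)
    if known_number is None:
        return True  # unknown requester: always verify
    if request.callback_number != known_number:
        return True  # number mismatch: verify via the directory, not the request
    return request.amount_eur >= threshold_eur

# Usage: the €220,000 request in the story would be flagged,
# prompting a call back on the directory number before any transfer.
request = PaymentRequest("ceo@example-energy.co.uk", "+44 7700 900123", 220_000)
assert requires_callback(request)
```

The key design choice is that the call back always uses the number held in the independently maintained directory, never one supplied in the request itself, so a criminal can’t route the verification call back to themselves.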

Both CybSafe and The Security Company have long advocated a culture of security. Happily, building a people-centric security culture today isn’t as difficult as it might once have seemed. New culture assessment tools (CybSafe’s C-CAT is one example) now measure and analyse several scientifically valid indicators of culture, then offer bespoke recommendations for improving an individual organisation’s security culture based on what they find.
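
To make that concrete, here’s a hypothetical sketch of how such a tool might score indicators and surface a recommendation. It’s illustrative only and doesn’t reflect C-CAT’s actual methodology; the indicator names, scores and recommendations below are invented.

```python
# A hypothetical sketch of culture-indicator scoring. Illustrative
# only; this does not reflect C-CAT's actual methodology.

from statistics import mean

# Mean agreement scores (1-5) for a handful of culture indicators,
# e.g. aggregated from staff survey responses.
indicators = {
    "comfortable_reporting_mistakes": 4.2,
    "verification_seen_as_normal": 2.1,
    "leadership_models_security": 3.8,
}

RECOMMENDATIONS = {
    "verification_seen_as_normal": (
        "Normalise call-back checks: have senior leaders publicly "
        "welcome verification of their own requests."
    ),
    "comfortable_reporting_mistakes": (
        "Run blameless post-incident reviews so near-misses get reported."
    ),
    "leadership_models_security": (
        "Ask executives to reference security behaviours in team comms."
    ),
}

overall = mean(indicators.values())
weakest = min(indicators, key=indicators.get)  # lowest-scoring indicator

print(f"Overall culture score: {overall:.1f}/5")
print(f"Priority area: {weakest}")
print(f"Suggested action: {RECOMMENDATIONS[weakest]}")
```

In the exec’s case, a low score on an indicator like “verification seen as normal” would have pointed the organisation towards exactly the call-back habit that could have stopped the attack.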

So, yes, the advent of AI-powered cyber attacks is cause for concern. The take-home message for security professionals, however, remains the same.

Harness the power of people and technology, and cyber risk drops while resilience soars.