It’s time to address corporate artificial intelligence (AI) deepfake fraud. AI can now create convincing fake video and audio of real people. This ‘deepfake’ technology presents a growing threat to businesses, particularly through financial fraud. Criminals are using deepfakes to impersonate senior executives, tricking employees into making unauthorised payments.

Recent incidents in Hong Kong highlight serious vulnerabilities in corporate security. Even experienced staff can be deceived when confronted with the familiar face and voice of a trusted colleague or manager. This article explains how deepfake fraud works, examines real cases, discusses evolving regulation, and offers advice on strengthening company defences.

How AI deepfake technology enables corporate fraud

Deepfake technology uses AI to generate realistic video and audio content, often making it challenging to distinguish fakes from genuine recordings. What started as a technological novelty has quickly become a powerful tool for criminals.

Fraudsters use AI techniques like face mapping and voice synthesis, sometimes combined with existing footage, to create convincing impersonations. This allows them to pose as senior figures in virtual meetings, adding a layer of authenticity to their scams, as seen in recent Hong Kong fraud cases.

Real-world examples of corporate deepfake fraud

The fraudulent chief financial officer (CFO) call

In a notable Hong Kong case, a finance employee at a multinational firm was deceived into transferring approximately HK$35 million. The fraud involved a sophisticated multi-person video conference where criminals used deepfakes to impersonate the company’s overseas-based CFO and other executives.

Although an initial email about a secret transaction raised suspicion, the realism of the deepfake video call convinced the employee to proceed with the transfer. The fraud was discovered only after checks with the company’s head office.

The Arup deepfake incident

The engineering firm Arup experienced a significant loss of HK$40 million when an employee transferred funds after being persuaded by a deepfake video of a senior executive. This incident, classified legally in Hong Kong as “obtaining property by deception,” led to the resignation of Arup’s East Asia Chair, illustrating the potential career and reputational consequences beyond the financial loss.

Navigating new rules and regulations

Regulators worldwide are working to adapt legal frameworks to counter AI-enabled crime. While Hong Kong currently uses existing laws like the Theft Ordinance (Section 17) to prosecute such fraud, there’s a recognised need for legislation specifically targeting AI-driven deception.

Internationally, the European Union’s AI Act is a significant development. It aims to impose strict obligations on AI technologies, requiring transparency and penalising misuse. In the United States, President Biden previously emphasised the need for legislative action against AI-driven fraud and disinformation.

Corporate security challenges and insights

AI-generated deepfakes exploit trust, bypassing financial controls that rely on verifying identity by sight and sound. Cybersecurity experts warn that AI significantly lowers the cost and skill required for criminals to execute convincing impersonation fraud at scale.

Furthermore, profits from successful deepfake scams are often reinvested into developing even more sophisticated attack methods, creating a dangerous cycle. Companies must urgently review and enhance their internal verification processes, particularly for financial transactions susceptible to executive impersonation. Robust multi-factor authentication and ongoing employee awareness training are becoming critical components of cybersecurity strategy.
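
To illustrate, the kind of layered verification described above can be encoded directly into a payment workflow. The following Python sketch is a hypothetical, simplified example; the threshold, channel names, and PaymentRequest fields are invented for illustration and do not reflect any particular firm’s controls.

    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:
        amount_hkd: int
        instructed_via: str      # channel the instruction arrived on, e.g. "video_call"
        callback_verified: bool  # confirmed via a number held on file, not one given on the call
        second_approver: str | None

    # Channels a deepfake can convincingly imitate; instructions arriving on
    # them should never be trusted on sight and sound alone.
    IMPERSONATION_PRONE = {"video_call", "voice_call", "email"}
    OUT_OF_BAND_THRESHOLD_HKD = 1_000_000  # hypothetical policy threshold

    def release_allowed(req: PaymentRequest) -> bool:
        """Apply simple dual-control rules before funds can be released."""
        if req.instructed_via in IMPERSONATION_PRONE and req.amount_hkd >= OUT_OF_BAND_THRESHOLD_HKD:
            # Require both an independent callback and a second human approver.
            return req.callback_verified and req.second_approver is not None
        return True

Under rules like these, the Hong Kong transfers described above would have been held until someone independently called the executive’s office on a number already held on file.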

The broader impact of deepfake fraud

Beyond the direct financial losses, corporate deepfake fraud undermines business confidence and public trust. The technology’s potential misuse also extends to political manipulation, such as creating fake videos of candidates during elections, posing a threat to democratic processes.

Given how quickly these threats are evolving, collaboration between organisations, governments, and technology providers is essential to develop adequate safeguards, policies, and public awareness initiatives.

Strengthening defences against corporate AI deepfake fraud

To protect against AI-driven deception, companies should take these steps:

  • Implement strong authentication: Use rigorous multi-factor authentication (MFA) protocols for authorising sensitive financial transactions (see the sketch after this list).
  • Enhance employee training: Regularly train staff to recognise the signs of potential AI-generated fraud attempts and establish clear procedures for verifying unusual requests.
  • Conduct AI-specific audits: Perform periodic cybersecurity audits that specifically assess vulnerabilities related to AI and deepfake threats.
  • Stay updated on regulations: Monitor evolving regulatory requirements, such as the EU AI Act, and proactively incorporate relevant standards into company policies.
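
To make the first point concrete, the fragment below sketches one possible possession factor: a standard time-based one-time password (TOTP, RFC 6238) that an approver must supply before a transfer is released. It uses only the Python standard library; secret storage, enrolment, and the surrounding approval workflow are deliberately omitted, so treat this as an illustrative sketch rather than a complete MFA implementation.

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
        """Compute an RFC 6238 time-based one-time password."""
        counter = struct.pack(">Q", int(time.time() // interval))
        digest = hmac.new(secret, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def code_matches(secret: bytes, entered: str) -> bool:
        """Compare the code entered by the approver in constant time."""
        return hmac.compare_digest(totp(secret), entered)

Because the code comes from a device the genuine approver physically holds, a fraudster who controls only the video or voice channel cannot produce it, which is precisely the property that defeats deepfake impersonation.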

Actions you can take next

Corporate AI deepfake fraud is not a future possibility; it is a current and significant strategic threat. The incidents in Hong Kong demonstrate the severe consequences of underestimating this risk. Strengthening internal controls, improving employee awareness, and supporting the development of clear regulatory frameworks are essential actions for organisations to take now. You can:

  • Protect your organisation by immediately reviewing your verification procedures and cybersecurity policies for financial transactions. We can help you with your cybersecurity law compliance.
  • Enhance your awareness by familiarising yourself with recent regulatory developments like the EU Artificial Intelligence Act. Have a look at our EU Artificial Intelligence Act summary. You can also read the full EU AI Act.
  • Influence policy change by advocating for clearer legal frameworks specifically addressing AI-generated fraud within your industry.