Deepfake Threat: Why Financial Institutions Must Act Now

 

According to the World Economic Forum’s Global Risks Report 2024, misinformation and disinformation rank as the top threat the world is likely to face within the next two years. Deepfake-related crime has spiked recently, drawing the attention of financial institutions and regulators around the world. Deepfakes began as a playful novelty: we have all seen the faces of celebrities, political figures, and business leaders humorously reimagined. But that novelty has worn off, as scammers and fraudsters now use the technology to exploit public trust. Regulators and global watchdogs have already published a number of advisory notices urging vigilance and preventive measures against these threats.

A Deloitte survey of more than 1,100 C-suite and other executives revealed that 25.9% had experienced one or more deepfake incidents targeting financial and accounting data in the preceding 12 months, while half of all respondents expected attacks to rise over the following 12 months. These results are alarming for compliance professionals: the risk is heightened and widely acknowledged. Organizations must therefore deploy anti-deepfake technologies to safeguard their operations, customers, and data. Technology alone is not enough, however; organizations should also update their compliance procedures and train both personnel and customers.
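As a concrete illustration, a detection score from such a tool could gate a remote-onboarding workflow. The Python sketch below is a minimal example under stated assumptions: the DeepfakeCheck result type, the triage_onboarding_media function, and both thresholds are hypothetical stand-ins for whatever vendor or in-house detection model an institution actually deploys.

```python
from dataclasses import dataclass

# Hypothetical detection result; a real vendor SDK would return its own schema.
@dataclass
class DeepfakeCheck:
    score: float          # 0.0 = likely genuine, 1.0 = likely synthetic
    model_version: str

REVIEW_THRESHOLD = 0.30   # illustrative: route to manual review above this score
REJECT_THRESHOLD = 0.80   # illustrative: escalate as suspected fraud above this score

def triage_onboarding_media(check: DeepfakeCheck) -> str:
    """Map a detection score to a KYC workflow decision.

    Thresholds are illustrative only; in practice they would be calibrated
    against the model's false-positive rate and the institution's risk
    appetite, and every decision would be logged for audit.
    """
    if check.score >= REJECT_THRESHOLD:
        return "escalate"       # suspected synthetic media: refer for investigation
    if check.score >= REVIEW_THRESHOLD:
        return "manual_review"  # ambiguous: a human analyst verifies
    return "proceed"            # low risk: continue automated onboarding

if __name__ == "__main__":
    print(triage_onboarding_media(DeepfakeCheck(score=0.12, model_version="v1")))  # proceed
    print(triage_onboarding_media(DeepfakeCheck(score=0.55, model_version="v1")))  # manual_review
    print(triage_onboarding_media(DeepfakeCheck(score=0.91, model_version="v1")))  # escalate
```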

How Regulators are Responding

Recently, the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions recognize fraudulent schemes involving deepfake media. In parallel, the US Federal Trade Commission (FTC) has proposed new protections to combat AI impersonation of individuals. “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. “Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”


Final Remarks

Deepfake technology is rapidly emerging as a high-risk area for organizations across industries. Newly released Federal Trade Commission data show that consumers reported losing more than $10 billion to fraud in 2023, a 14% increase over reported losses in 2022. With incidents on the rise, businesses must act decisively by investing in advanced detection tools, strengthening internal controls, and educating employees about this evolving threat.
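For instance, one internal control worth codifying is out-of-band callback verification for payment instructions received by voice or video call, since voice cloning makes the requesting channel itself untrustworthy. The Python sketch below is illustrative only; the directory, threshold, and function names are hypothetical stand-ins for an institution’s own procedures.

```python
# Minimal sketch of an out-of-band verification control for payment
# instructions. Assumption: the institution maintains an independent
# directory of verified contact numbers, so confirmation never relies
# on the channel (or phone number) supplied with the request itself.

VERIFIED_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",  # hypothetical verified callback number
}

CALLBACK_THRESHOLD = 10_000.0  # illustrative amount; calibrate to risk appetite

def requires_callback(amount: float, channel: str) -> bool:
    """Flag requests that must be confirmed out of band.

    Any voice or video request is treated as untrusted (voice cloning
    makes the channel itself unreliable), as is any request above the
    threshold, regardless of channel.
    """
    return channel in {"voice", "video"} or amount >= CALLBACK_THRESHOLD

def callback_number(requester: str) -> str | None:
    """Look up the independently verified number on file, if any."""
    return VERIFIED_DIRECTORY.get(requester)

if __name__ == "__main__":
    if requires_callback(25_000.0, "video"):
        print(f"Confirm via verified callback: {callback_number('cfo@example.com')}")
```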


About the Author:


Rezaul Karim, CAMS, is a seasoned compliance expert with a proven track record in KYC compliance, AML investigations, risk governance, and the handling of a wide range of financial crime and regulatory compliance issues. He has a decade of experience in the field and has worked for top multinational banks, including HSBC and Standard Chartered Bank. At HSBC, he held a range of AVP roles in compliance and financial crime, building a strong record of thought leadership in the field.
