Why Should Your Org Plan For Deepfake Fraud Before It Happens

(CTN News) – Deepfakes are fueling an expanding fraud vector that could prove costly for businesses and consumers alike.


Deepfakes are based on deep learning, an area of artificial intelligence (AI) that mimics human learning.

Using deep learning, algorithms learn from large datasets without human supervision.

Deepfake video and audio clips can convincingly imitate a real person: for example, a video of a celebrity saying something they never actually said.

There are many forms of fake news, including satire, entertainment, fraud, and political manipulation.


The obvious dangers of manipulated statements attributed to powerful or trusted individuals, such as politicians, journalists, and celebrities, make deepfakes a real and ongoing threat to society. For businesses, the key risks include:

  • Extortion: Threatening to release faked, compromising footage of an executive unless money is paid or access to corporate systems, data, or financial resources is granted.
  • Fraud: Using deepfakes to impersonate an employee or customer in order to gain access to corporate systems, data, or financial resources.
  • Authentication: Using deepfakes to defeat access controls that rely on biometrics such as voice patterns or facial recognition.
  • Reputation risk: Using deepfakes to damage the reputation of a company and/or its employees.
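One common countermeasure to the biometric-spoofing risk above is a liveness challenge: the system demands a response that a pre-recorded or pre-generated deepfake cannot supply. The sketch below is a hypothetical illustration, not any vendor's product; the word list, phrase length, and time window are all assumptions, and a real system would pair this logic with actual speech or face analysis.

```python
import secrets
import time

# Hypothetical liveness-challenge sketch: the verifier issues a random phrase
# the caller must repeat immediately, so a canned deepfake clip cannot match.
# Word list, phrase length, and freshness window are illustrative assumptions.
WORDS = ["amber", "falcon", "river", "copper", "meadow", "signal", "harbor", "tulip"]

def new_challenge(num_words: int = 4) -> tuple[str, float]:
    """Return a random phrase plus the time it was issued."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(num_words))
    return phrase, time.monotonic()

def is_fresh(issued_at: float, max_age_s: float = 10.0) -> bool:
    """A spoken response only counts if it arrives within the time window."""
    return time.monotonic() - issued_at <= max_age_s
```

The point of the short freshness window is that generating a convincing deepfake of a never-before-seen phrase in real time is far harder than replaying a prepared clip.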


One of the most troubling aspects of deepfakes today is their impact on fraud. Traditional fraud schemes, such as phishing and account takeover, have become less profitable, leading criminals to turn to deepfake technology to make up the shortfall.

As anti-fraud strategies have improved (for example, multifactor authentication and verification callbacks), these older fraud types have become harder to execute.
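A verification callback of the kind mentioned above can be reduced to a simple challenge: before a request is honored, a one-time code delivered over a separate, pre-registered channel must be read back correctly. A minimal sketch, where the function names and six-digit format are assumptions rather than any particular product's API:

```python
import secrets

def issue_callback_code() -> str:
    """One-time code sent over a separate, pre-registered channel
    (e.g. a callback to a known phone number), never over the
    channel the original request arrived on."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_callback(issued: str, read_back: str) -> bool:
    """Honor the request only if the code read back matches.
    compare_digest avoids leaking information via response timing."""
    return secrets.compare_digest(issued, read_back)
```

Because the code travels over a channel the attacker does not control, a deepfaked voice on the inbound call is not enough to complete the transaction.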


On the dark web, deepfake tools make it far easier and cheaper for criminals to execute such fraud schemes, even with no technical knowledge.

People are also posting huge amounts of pictures and videos of themselves on social media platforms, which provides excellent inputs for deep learning algorithms.

Businesses should be aware of three key new fraud types:

  • Ghost fraud: When a criminal uses the data of a deceased person to create a deepfake that can, for example, be used to access online services or apply for credit cards.
  • Synthetic ID fraud: A fraudster mines data from many different people to create an identity for a person who doesn’t exist. The identity is then used to apply for credit cards or to conduct large transactions.
  • Application fraud: New bank accounts are opened using stolen or fake identities. Credit cards and loans are then maxed out by the criminal.

A number of high-profile and costly fraud schemes have already used deepfakes.

In one case, a fraudster used a deepfake to impersonate a bank branch manager and induced a junior officer to execute a fraudulent transfer of €220,000 (roughly $224,000).
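Controls such as dual approval are designed to blunt exactly this kind of attack: even a convincing deepfake of one manager cannot release a large transfer on its own. A hypothetical sketch, where the threshold, class, and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Illustrative threshold: transfers at or above this amount need two approvers.
APPROVAL_THRESHOLD_EUR = 10_000

@dataclass
class TransferRequest:
    amount_eur: float
    approvers: set = field(default_factory=set)  # distinct employee IDs

    def approve(self, employee_id: str) -> None:
        """Record an approval; a set keeps approvers distinct, so the same
        (possibly impersonated) person cannot approve twice."""
        self.approvers.add(employee_id)

    def can_execute(self) -> bool:
        """Large transfers require sign-off from two different people."""
        required = 2 if self.amount_eur >= APPROVAL_THRESHOLD_EUR else 1
        return len(self.approvers) >= required
```

Under such a policy, the €220,000 transfer above would have stalled until a second, independent employee confirmed it through their own channel.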


As deepfake fraud becomes increasingly sophisticated and prevalent, what can businesses do to protect their data, finances, and reputations? Preparation needs to start today.


Migration to the metaverse and Web3 will likely lead to avatars being used to access and consume a wide range of services in the future.

These digitally native avatars will probably be harder to fake than humans.

As the technology used to exploit this threat advances, so will the technology to detect it. To help combat the threat, security teams should stay abreast of new discoveries in detection and other innovative countermeasures.

Now is the time for businesses to begin preparing.

David Fairman is Netskope’s chief information officer and chief security officer.
