How can the law defeat the “magic” of the AI face-swapping scam that defrauded 4.3 million yuan?

Key points

  • The person who directly used AI face-swapping to defraud money in the case reported as “AI face swapping cheats 4.3 million in 10 minutes” commits the crime of fraud. If the provider of the AI face-swapping technology conspired with the perpetrator, that provider (as a natural person) is an accomplice to fraud. If there was no conspiracy between the two, the technology provider may still constitute the crime of aiding information network criminal activities.

  • The technology behind the “AI face swapping cheats 4.3 million in 10 minutes” case is AI deep synthesis technology, a component of the broader AIGC (artificial intelligence-generated content) technology group. In China, deep synthesis technology is currently regulated directly by the “Internet Information Service Deep Synthesis Management Regulations”. In April of this year, the Cyberspace Administration of China solicited public comments on the “Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)”. Once the final version is published, deep synthesis technology will be further regulated.

  • In addition to criminal offenses, the unauthorized use of AI face-swapping and other deep synthesis technologies may also give rise to civil infringement disputes, such as infringement of portrait rights and reputation rights; judicial practice has already produced judgments holding that the use of AI face-swapping technology constitutes civil infringement.

  • AIGC start-up companies working with deep synthesis and related technologies need to fulfill 20 compliance obligations, such as information release review and personal information protection, to ensure their business compliance. The specific compliance items are listed at the end of this article.

New AI face-swapping scam prompts warning from China Internet Association

Just a few days ago, a new AI face-swapping scam appeared in Baotou, Inner Mongolia and other places. The victim’s friend suddenly contacted the victim via WeChat video, claiming that he was bidding on a project in another city and needed a 4.3 million yuan deposit that had to be transferred through a corporate account, so he wanted to borrow the account of the victim’s company. Trusting the friend, and having seemingly confirmed his identity through the video call, the victim transferred a total of 4.3 million yuan in two installments to the friend’s bank card. Only after calling the friend afterwards did the victim discover the fraud. In this case, the fraudster used AI face-swapping technology to impersonate the victim’s friend in the WeChat video, combined with voice synthesis technology to imitate the friend’s voice, in order to defraud the victim.

As AI face-swapping scams become increasingly rampant, many regions have issued warnings to remind the public of fraud risks. On May 23, the Anti-Telecom Fraud Center of Mianyang, Sichuan Province issued such an alert. Just yesterday, the China Internet Association issued a warning to the public: since deep synthesis technology was opened to the public, the number of deep synthesis products and services has steadily increased, and the illegal use of fake audio and video produced by “AI face-swapping” and “AI voice-swapping” for fraud and defamation is no longer uncommon.

The first confirmed case of fraud using deep synthesis technology occurred in 2019.

The first case of deep synthesis fraud

In March 2019, the CEO of an international energy company received a call from an unknown number and quickly recognized the voice on the other end: the CEO of the German parent company, who was also his immediate superior. On the call, the parent company’s CEO said the company was facing an operational crisis and urgently needed 220,000 euros (about 243,000 US dollars) in financial support from branch offices around the world. He then provided a Hungarian bank account, claiming the funds had to be paid directly to a Hungarian supplier and that the parent company would reimburse the branch afterwards. Although the instruction sounded neither compliant nor reliable, and involved a large transfer, the victim chose to carry it out after much deliberation. The reason was simple: the voice on the phone was unmistakably that of his immediate superior; even the slight German accent and habits of speech were identical to how his boss normally spoke. So, despite lingering doubts, the victim completed the transfer to the Hungarian account provided by the “boss” on the phone. The funds flowed from Hungary to Mexico and were then dispersed elsewhere. As of May 2023, the involved funds have not been recovered.

It is precisely because such cases are emerging worldwide that the traditional technology-neutrality viewpoint now finds little support in the legal community, and countries are gradually strengthening supervision of deep synthesis technology. In China, the “Internet Information Service Deep Synthesis Management Regulations” is the normative document that specifically regulates deep synthesis technology. In addition, the “Cybersecurity Law”, “Data Security Law”, “Personal Information Protection Law”, “Internet Information Service Management Measures”, “Network Information Content Ecological Governance Regulations”, “Internet Information Service Algorithm Recommendation Management Regulations”, “Network Audio and Video Information Service Management Regulations” and other regulations also touch on deep synthesis technology. And of course, the “Criminal Law”, as the measure of last resort, provides a number of charges to punish crimes committed with AI deep synthesis technology.

AI face-swapping service providers face the criminal risk of aiding information network criminal activities

There is no denying that deep synthesis technology is a “sunrise industry.” With rapid technological iteration, the cost of deep synthesis has dropped sharply. According to investigations, producing a video in which a face moves and speaks now costs only tens of yuan, and a single frontal photo is enough to generate it. In the generated video, the eyes, mouth, and head all move, and if text or audio is provided, the mouth movements are matched to the content. All of this can be completed in just a few hours. Some merchants claim that generating a one-minute video from photos costs only 30 yuan, and with lower quality requirements the price can drop to just a few yuan.

It is worth noting that such behavior carries enormous criminal risk. Article 287-2 of the Criminal Law of the People’s Republic of China stipulates the crime of aiding information network criminal activities: knowingly providing technical support such as internet access, server hosting, network storage, or communication transmission, or providing assistance such as advertising promotion or payment settlement, to others who use information networks to commit crimes, where the circumstances are serious. In practice, this “knowledge” is usually established by objective evidence rather than by the perpetrator’s own subjective statements. At the same time, providing AI face-swapping technology to customers can readily be interpreted as “providing... technical support.”

In other words, given that AI face-swapping technology has already fueled rampant telecommunications fraud, the risk is concrete: if a customer, without a legitimate reason and without meaningful evidence that the person in a photo has consented, asks a provider to generate a video from that photo; the provider makes the video; and the customer then uses it to commit fraud — then the AI face-swapping technology provider faces a high risk of committing the crime of aiding information network criminal activities.

Unauthorized use of AI face-swapping technology may also constitute serious civil infringement

In addition to the criminal risk of aiding information network criminal activities, the unauthorized use of AI face-swapping technology may also constitute serious civil infringement, and such judgments have in fact already been handed down. For example, in the portrait right dispute judgment (2022) Shanghai 0116 Min Chu No. 13856, the defendant, a Shanghai technology company, used an ancient-style video of the Hanfu model Liao as a template in its “AI face-swapping” software. The plaintiff Liao held the portrait right in the video; although the defendant replaced the face of the person in the video, the subject could still be identified as Liao from other unmodified scenes and details, so the plaintiff claimed the defendant had infringed her portrait right.

AI face-swapping technology, as a technology that directly manipulates portraits, destroys the identity between a portrait and its subject. On the one hand, the portrait subject has the right to refuse to have someone else’s face placed on his or her body; on the other hand, the subject also has the right to refuse to have his or her facial image grafted onto someone else’s body. Combined with the elements of general portrait right infringement, the elements of infringement by AI face-swapping are: 1. a natural person holds the portrait right in a specific portrait; 2. the actor, regardless of whether there is a profit-making purpose, uses information technology to tamper with, forge, or automatically generate the right holder’s portrait without consent, producing an effect that is difficult to distinguish from the real thing.

Aside from infringing portrait rights, using AI face-swapping technology to create malicious videos or images of others and disseminate them widely on the internet may also infringe reputation rights. In practice, there are many face-swapped videos mocking celebrities and internet personalities — for example, ridiculing their singing, dancing, or basketball skills — and such videos can constitute an infringement of reputation rights.

Final Thoughts

As one of the most vital applications in the field of artificial intelligence, deep synthesis technology is certain to unleash tremendous productivity in the future, which is precisely why compliance in this field must be taken seriously: it is a prerequisite for the healthy development of deep synthesis technology. In today’s legal profession, the pure technology-neutrality theory is rarely advocated, as the understanding that technology carries its own value orientation has taken deep root. Based on the various risks associated with deep synthesis technology, the Sa team has summarized the compliance boundaries and 20 compliance obligations for relevant practitioners, to assist the healthy development of the deep synthesis industry and provide a reference:

Compliance Borders for Generated Content

  • Do not generate politically sensitive or pornographic content;

  • Do not synthesize fake news, and entities without a license shall not disseminate news released by organizations outside the scope prescribed by the state;

  • Do not violate public order and good customs (for example, content teaching a “third party” in a marriage how to seize the family’s property);

  • Do not generate illegal and harmful information. In this regard, relevant “feature libraries” can be improved to filter out such information;

  • Do not maliciously demean the reputation of other individuals or the commercial reputation of enterprises (the legal consequence is being sued for infringement of reputation rights);

  • Delete uncontrollable generated content in a timely manner to reduce social harm.

Specific Compliance Obligations

  • Improve the user registration system;

  • Algorithm mechanism review;

  • Scientific ethics review;

  • Information release review;

  • Data security management system;

  • Personal information protection system;

  • Anti-telecom fraud system;

  • Emergency disposal management system;

  • On the app or website, publicly disclose management rules and platform conventions, improve service agreements, and prominently remind all parties of their information security obligations;

  • Improve the feature library for identifying illegal and harmful information, and retain relevant logs;

  • Upon discovering illegal and harmful information, report it to the competent authority;

  • Upon discovering illegal and harmful information, take disposal measures against the relevant users in accordance with laws and agreements;

  • Establish a comprehensive rumor-refuting mechanism;

  • Set up convenient complaint and reporting channels, and provide timely feedback on the results;

  • Strengthen training data management;

  • Regularly review, evaluate, and verify the mechanisms of generation and synthesis algorithms;

  • Conduct security assessments, either independently or by entrusting a professional organization, in accordance with the law;

  • For generated content, take technical measures to add identification marks that do not affect user use;

  • Do not delete, tamper with, or hide deep synthesis identification;

  • Perform filing, filing-change, and filing-cancellation procedures in accordance with the “Internet Information Service Algorithm Recommendation Management Regulations.”
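The obligation above to mark generated content and protect that mark from removal can be implemented in many ways; the regulations prescribe the goal, not a specific mechanism. As a minimal illustrative sketch (the function names and label format here are hypothetical, not mandated by any regulation), a provider of text-generation services might append a visible provenance notice with a content digest, so the identification survives copy-paste and tampering with the underlying text can be detected:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical label text; a real deployment would follow the wording
# its regulator or platform convention requires.
LABEL = "[AI-generated content]"

def tag_generated_text(content: str, provider: str) -> str:
    """Append a visible provenance notice and a short content digest
    to AI-generated text."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()[:16]
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    notice = f"\n\n{LABEL} provider={provider} date={stamp} digest={digest}"
    return content + notice

def is_tagged(text: str) -> bool:
    """Check whether the visible identification mark is still present
    (e.g. before re-publishing content on a platform)."""
    return LABEL in text
```

A downstream platform could call `is_tagged` before accepting re-uploaded content, and recompute the digest over the body to spot deletion or tampering with the mark — the behavior that the obligations above prohibit. For images and video, the analogous measures are visible watermarks plus embedded metadata.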
