Vitalik Buterin on Worldcoin and biometric proof of personhood

Author | Vitalik Buterin Translator | Huohuo


Today, Worldcoin, a Web3 crypto project co-founded by OpenAI CEO Sam Altman, officially announced its launch. The Worldcoin team has designed a biometric identity system called World ID, which uses an eye-scanning device called the Orb to verify users' identities by scanning their irises.

On the same day, Vitalik Buterin, the founder of Ethereum, published an article titled "What do I think about biometric proof of personhood?" sharing his views on biometric identity. The following is the full translation:

For years, people in the Ethereum community have been grappling with a challenging but valuable problem: building a decentralized proof-of-personhood solution. Proof of personhood is a limited form of real-world identity that asserts that a given registered account is controlled by a real person (and a different real person from every other registered account), ideally without revealing which real person it is. There have been many attempts at this problem: Proof of Humanity, BrightID, Idena, and Circles are representative examples. Some of them come with their own applications (often a UBI token), and some have found use in Gitcoin Passport to verify which accounts are eligible for quadratic voting. Zero-knowledge technology such as Sismo adds privacy to many of these solutions.

Recently, we have seen the rise of a much larger and more ambitious proof-of-personhood project: Worldcoin. Worldcoin was co-founded by Sam Altman, best known as the CEO of OpenAI. The idea behind the project is simple: AI will create a great deal of abundance for humanity, but it may also eliminate many people's jobs and make it almost impossible to tell who is a human and who is a bot. We therefore need to plug this gap by (1) creating a very good proof-of-personhood system so that humans can prove they really are human, and (2) giving everyone a UBI.

What makes Worldcoin unique is its reliance on highly sophisticated biometrics, scanning each user's iris with a piece of dedicated hardware called "the Orb". The goal is to produce a large number of these Orbs and distribute them widely around the world, placing them in public locations so that anyone can easily get an ID. To its credit, Worldcoin is also committed to decentralization: technical decentralization first (it runs as an L2 on Ethereum using the Optimism stack and protects user privacy with ZK-SNARKs and other cryptographic techniques), and eventually decentralization of the governance of the system itself.

Worldcoin has been criticized for privacy and security problems around the Orb, for design issues in its token, and for ethical questions about some of the choices the company has made, and the project itself is still being developed and improved. Others, however, have raised more fundamental objections to biometrics as such – not only the iris scanning used by Worldcoin, but also the simpler face-video uploads and verification games used in Proof of Humanity and Idena – and questioned whether such systems can or should gain public acceptance. The risks raised include seemingly inevitable privacy leaks, further erosion of people's ability to browse the internet anonymously, coercion by authoritarian governments, and the potential difficulty of being secure and decentralized at the same time.

The rest of this article discusses these issues, to help you decide whether scanning your eyes in front of one of these new spherical devices is a good idea, whether we should give up on proof of personhood altogether, and what the alternatives are.

01 What is proof of personhood and why is it important?

Proof of personhood is valuable because it counters the concentration of power on today's internet, avoids reliance on central authorities, and minimizes the disclosure of personal information. Without proof of personhood, decentralized governance (including "micro-governance" such as votes on social media posts) is just a castle in the air.

Many major applications in the world today address this issue using government-backed identity systems such as ID cards and passports. That does solve the problem, but at a huge and arguably unacceptable sacrifice of privacy.

The dual risks facing current approaches to proving personhood

Many proof-of-personhood projects – not only Worldcoin, but also flagship projects such as Circles – come with a built-in "one token per person" application (sometimes called a "UBI token"): every registered user receives a fixed amount of tokens each day (or hour, or week). But there are many other applications, including:

– Airdrops for token distribution

– Token or NFT sales that offer more favorable terms to less wealthy users

– Voting in DAOs

– Quadratic voting (and quadratic funding and attention payments)

– Protection against bots and Sybil attacks in social media

– An alternative to CAPTCHAs for preventing DoS attacks

The common desire in all of these is to create mechanisms that are open and democratic, avoiding both centralized control by a project's operators and domination by its wealthiest users. The latter is especially important in decentralized governance.

Today, existing solutions to these problems rely on (1) highly opaque AI algorithms or (2) centralized IDs, also known as "KYC".

An effective proof-of-personhood solution would therefore be a far better alternative, achieving the security properties these applications need without the pitfalls of the existing centralized approaches.

02 What were the early attempts at proof of personhood?

Proof of personhood comes in two main forms: social-graph-based and biometric. Social-graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans and they all say that Emily is a verified human, then Emily is probably also a verified human. Vouching is usually reinforced with incentives: if Alice says that Emily is a human but it turns out she is not, both Alice and Emily may be penalized. Biometric proof of personhood verifies some physical or behavioral trait of Emily that distinguishes humans from bots (and individual humans from one another). Most projects use a combination of the two techniques.
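To make the incentive structure concrete, here is a minimal Python sketch of a vouching registry in which endorsers stake a deposit that is burned if the person they vouched for turns out not to be a real, unique human. All names (VouchRegistry, DEPOSIT, THRESHOLD) are invented for illustration; none of the projects above works exactly this way.

```python
# Toy vouching registry: endorsers stake a deposit on each vouch.
# If the claimant is later shown not to be a real, unique person,
# the staked deposits are burned; otherwise they are returned after
# a challenge period. Purely illustrative, not any project's design.

DEPOSIT = 10      # stake per vouch (arbitrary units)
THRESHOLD = 3     # vouches needed for acceptance


class VouchRegistry:
    def __init__(self, initially_verified):
        self.balances = {}                 # account -> free balance
        self.stakes = {}                   # claimant -> {endorser: stake}
        self.verified = set(initially_verified)

    def fund(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def vouch(self, endorser, claimant):
        if endorser not in self.verified:
            raise ValueError("only verified accounts may vouch")
        if self.balances.get(endorser, 0) < DEPOSIT:
            raise ValueError("insufficient balance to stake")
        self.balances[endorser] -= DEPOSIT
        self.stakes.setdefault(claimant, {})[endorser] = DEPOSIT
        if len(self.stakes[claimant]) >= THRESHOLD:
            self.verified.add(claimant)

    def challenge_succeeds(self, claimant):
        # Claimant proven fake: remove them and burn endorser stakes.
        self.verified.discard(claimant)
        self.stakes.pop(claimant, None)

    def finalize(self, claimant):
        # Challenge period ended with no successful challenge:
        # return the staked deposits to the endorsers.
        for endorser, stake in self.stakes.pop(claimant, {}).items():
            self.balances[endorser] += stake
```

The key design point is simply that a vouch is not free: a successful challenge costs the endorser something, which is what makes endorsements meaningful.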

The four systems I mentioned at the beginning of the post work roughly as follows:

(1) Proof of Humanity: you upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and there is a challenge period during which anyone can dispute your submission. If there is a challenge, the decentralized court Kleros decides whether your video is genuine; if it is not, your deposit is forfeited and the challenger receives a reward.

(2) BrightID: you join a video-call "verification party" with other users and verify each other. Higher levels of verification are possible through Bitu, a system in which you pass if enough other Bitu-verified users vouch for you.

(3) Idena: you play a captcha game at a specific point in time (to prevent people from participating more than once); part of the game involves creating and solving captchas, which are then used to verify others.

(4) Circles: existing Circles users vouch for you. Circles is unique in that it does not try to create a "globally verifiable ID"; instead, it creates a graph of trust, where someone's trustworthiness can only be assessed from the vantage point of your own position in that graph.

03 How does Worldcoin operate?

Each Worldcoin user installs an app on their phone, which generates a private and public key, much like an Ethereum wallet. They then go in person to visit an "Orb". The user stares into the Orb's camera while showing it a QR code generated by the Worldcoin app, which contains their public key. The Orb scans the user's eyes and uses sophisticated hardware scanning and machine-learned classifiers to verify that (1) the user is a real human and (2) the user's iris does not match the iris of any other user who has previously used the system.

If both checks pass, the Orb signs a message approving a specialized hash of the user's iris scan. The hash gets uploaded to a database – currently a centralized server, intended to be replaced with a decentralized on-chain system once they are confident the hashing mechanism works. The system does not store full iris scans; it stores only the hashes, which are used to check for uniqueness. From that point on, the user has a "World ID".
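As a rough mental model of that flow (not Worldcoin's actual code: the liveness check is a stub, SHA-256 stands in for the specialized fuzzy iris hash discussed later, and an HMAC stands in for the Orb's signature), registration can be sketched like this:

```python
import hashlib
import hmac
import os

# Illustrative model of the Orb-side registration flow described above.
# All names and key material are invented for demonstration.

ORB_SIGNING_KEY = b"demo-orb-key"          # stand-in for the Orb's key


class Registry:
    def __init__(self):
        self.iris_hashes = set()           # uniqueness database
        self.approvals = []                # signed (hash, pubkey) records

    def orb_register(self, iris_scan: bytes, user_public_key: bytes) -> str:
        # 1. Hardware/ML liveness check (assumed to have passed here).
        # 2. Derive the privacy-preserving hash; the raw scan is discarded.
        iris_hash = hashlib.sha256(iris_scan).digest()
        # 3. Uniqueness: reject anyone whose hash is already registered.
        if iris_hash in self.iris_hashes:
            raise ValueError("this iris is already registered")
        # 4. Sign a message binding the iris hash to the user's public key.
        approval = hmac.new(ORB_SIGNING_KEY, iris_hash + user_public_key,
                            hashlib.sha256).hexdigest()
        self.iris_hashes.add(iris_hash)
        self.approvals.append((iris_hash, user_public_key, approval))
        return approval


# Example: register one user with a simulated scan.
registry = Registry()
registry.orb_register(os.urandom(256), b"user-public-key-1")
```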

A World ID holder can prove that they are a unique human by generating a ZK-SNARK showing that they hold the private key corresponding to some public key in the database, without revealing which key it is. So even if someone re-scans your iris, they will not be able to see any actions you have taken.
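The statement such a proof establishes is roughly "my key is one of the keys committed to by the registry", without saying which. The plain-Python sketch below only models that statement using a Merkle tree over public-key commitments; in a real system the leaf and the Merkle path would be private witnesses inside a ZK-SNARK, so the verifier learns nothing beyond the truth of the statement. Function names and the tree layout are illustrative assumptions.

```python
import hashlib

# Plain-Python model of the *statement* a membership proof establishes:
# "this public-key commitment is one of the leaves under the registry's
# Merkle root." A real ZK-SNARK would keep the leaf and path hidden.


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def build_merkle_tree(leaves):
    """Return a list of levels, from the leaves up to the root."""
    levels = [leaves[:]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                      # pad odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels


def merkle_path(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, is_right)
        index //= 2
    return path


def verify_membership(root, leaf, path):
    node = leaf
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root


# Example: four registered key commitments; user 2 proves membership.
leaves = [h(f"pubkey-{i}".encode()) for i in range(4)]
levels = build_merkle_tree(leaves)
root = levels[-1][0]
proof = merkle_path(levels, 2)
assert verify_membership(root, leaves[2], proof)
```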

04 What are the main issues with Worldcoin's construction?

People are mainly concerned about four risks:

(1) Privacy: the registry of iris scans may reveal information. At the very least, if someone else scans your iris, they can check it against the database to determine whether you have a World ID. Potentially, iris scans could reveal more information than that.

(2) Accessibility: World IDs will not be reliably accessible unless there are enough Orbs that anyone in the world can easily reach one.

(3) Centralization: the Orb is a hardware device, and we have no way to verify that it was built correctly and has no backdoors. So even if the software layer is perfect and fully decentralized, the Worldcoin Foundation still has the ability to insert a backdoor into the system and create arbitrarily many fake human identities.

(4) Security: users' phones can be hacked, users can be coerced into scanning their irises while presenting someone else's public key, and it may be possible to 3D-print a "fake person" that passes the iris scan and obtains a World ID.

It is important to distinguish between (1) issues specific to choices Worldcoin has made, (2) issues inherent to any biometric proof of personhood, and (3) issues that apply to proof of personhood in general. For example, signing up for Proof of Humanity means publishing your face on the internet. Joining a BrightID verification party does not quite do that, but it still exposes who you are to a lot of people. And joining Circles exposes your social graph. Worldcoin is significantly better at protecting privacy than either of those. On the other hand, Worldcoin depends on specialized hardware, which introduces the challenge of trusting the Orb manufacturer to have built the Orbs correctly, a challenge that does not exist in Proof of Humanity, BrightID, or Circles. It is also conceivable that in the future someone other than Worldcoin will create a different specialized-hardware solution with different trade-offs.

05 How do biometric proof-of-personhood schemes address privacy issues?

The most obvious and serious potential privacy leak in any proof-of-personhood system is linking every action a person takes to their real-world identity. That leak is enormous, arguably unacceptably so. Fortunately, it is easy to fix with zero-knowledge proofs. Instead of signing directly with a private key whose corresponding public key is stored in a database, a user can generate a ZK-SNARK proving that they hold the private key corresponding to some public key somewhere in the database, without revealing which one. This can be done generically with tools like Sismo, and Worldcoin has its own built-in implementation. It is worth giving "crypto-native" proof of personhood credit here: this basic level of anonymization is something that virtually all centralized identity solutions lack.

The existence of a public registry of biometric scans is a subtler privacy leak. In the case of Proof of Humanity, that is a lot of data: you get a video of every participant, making it very clear to anyone in the world who cares to investigate who all the participants are. In the case of Worldcoin, the leak is far more limited: the Orb computes everything locally and publishes only a "hash" of each person's iris scan. This hash is not a regular hash like SHA-256; it is a specialized algorithm based on machine-learned Gabor filters, designed to tolerate the inherent imprecision of any biometric scan and to ensure that successive hashes of the same person's iris produce similar outputs.

(Figure: blue – percentage of bits that differ between two scans of the same person's iris; orange – percentage of bits that differ between scans of two different people's irises.)

These iris hashes leak only a small amount of data. If an adversary can forcibly (or covertly) scan your iris, they can compute its hash themselves and check it against the database of iris hashes to determine whether you have participated in the system. This ability to check whether someone is registered is necessary for the system itself, to prevent people from registering more than once, but it can also be abused. In addition, the iris hash may leak some amount of medical data (sex, ethnicity, perhaps certain medical conditions), but this leak is far smaller than what almost any other mass data-collection system in use today (e.g. street cameras) captures. On the whole, storing iris hashes seems sufficiently private to me.
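A small numerical sketch of that "fuzzy hash" property (simulated bit strings, not real iris codes; the noise level and the 0.35 threshold are assumptions chosen only for illustration): two noisy scans of the same code differ in a small fraction of bits, while codes from different people differ in roughly half of them, so a fractional-distance threshold separates "same person" from "different person".

```python
import random

# Simulated illustration of why a similarity-preserving iris code works:
# re-scans of the same iris differ in a small fraction of bits, while two
# different irises differ in about half of them. Codes and noise levels
# here are made up purely for demonstration.

CODE_BITS = 2048
RESCAN_NOISE = 0.08       # assumed per-bit flip probability on re-scan
MATCH_THRESHOLD = 0.35    # assumed decision threshold

random.seed(0)


def random_code():
    return [random.getrandbits(1) for _ in range(CODE_BITS)]


def rescan(code):
    # A new scan of the same iris: each bit flips with small probability.
    return [b ^ (random.random() < RESCAN_NOISE) for b in code]


def fractional_hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)


alice = random_code()
bob = random_code()

same_person = fractional_hamming(alice, rescan(alice))    # around 0.08
different_people = fractional_hamming(alice, bob)         # around 0.50

print(f"same person:      {same_person:.2f}")
print(f"different people: {different_people:.2f}")
print("match?", same_person < MATCH_THRESHOLD,
      different_people < MATCH_THRESHOLD)
```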

06 What are the accessibility issues with biometric proof-of-personhood systems?

Dedicated hardware introduces accessibility problems because, well, dedicated hardware is not very accessible. Currently, somewhere between 51% and 64% of sub-Saharan Africans own smartphones, and that share is expected to rise to 87% by 2030. But while there are billions of smartphones, there are only a few hundred Orbs. Even with much larger-scale distributed manufacturing, it would be hard to reach a world where there is an Orb within five kilometers of every person.

But to its credit, the Worldcoin team is trying hard!

It is also worth noting that many other forms of proof of personhood have even worse accessibility problems. It is very difficult to join a social-graph-based system unless you already know someone in the graph, which makes it easy for such systems to stay confined to a single country or community.

Even centralized identity systems have learned this lesson: India's Aadhaar ID system is biometric-based, because that was the only way to quickly enroll its huge population while avoiding massive fraud from duplicate and fake accounts (and thereby saving enormous costs). The Aadhaar system as a whole is, of course, far weaker on privacy than anything being proposed at scale within the crypto community.

From an accessibility standpoint, the best-performing systems are actually ones like Proof of Humanity, which you can sign up for using nothing but a smartphone. But as we have seen, and will see, such systems come with all sorts of other trade-offs.

07 What are the centralization issues with biometric proof-of-personhood systems?

There are three main issues:

(1) Centralization risk in the system's top-level governance;

(2) Centralization risk unique to systems that use dedicated hardware;

(3) Centralization risk if proprietary algorithms are used to determine who the genuine participants are.

Any proof-of-personhood system has to contend with (1), except perhaps a system in which the set of "accepted" IDs is completely subjective. If a system uses incentives denominated in outside assets (e.g. ETH, USDC, DAI), it cannot be fully subjective, and so governance risk becomes unavoidable.

Risk (2) is far greater for Worldcoin than for Proof of Humanity (or BrightID), because Worldcoin depends on dedicated hardware and the other systems do not.

Risk (3) is a concern particularly for "logically centralized" systems, where a single system does the verification, unless all of the algorithms are open source and we have assurance that they are actually running the code they claim to run. For systems that rely purely on users verifying other users, it is not a risk.

08 How does Worldcoin solve the hardware centralization problem?

Currently, a Worldcoin-affiliated entity called Tools for Humanity is the only organization making Orbs. However, most of the Orb's source code is open: you can see the hardware specifications in this GitHub repository, and the remaining source code is expected to be released soon. The license is another of those "shared source but not technically open source" licenses, similar to the Uniswap BSL: it forbids forking, and it also forbids behavior the team considers unethical – they specifically list mass surveillance and reference three international human rights declarations.

The team's stated goal is to allow and encourage other organizations to build Orbs, and over time to move from Orbs built by Tools for Humanity to a model in which some kind of DAO approves and manages which organizations can make Orbs that the system recognizes.

There are two ways this design could fail:

(1) It fails to actually decentralize. This could happen through the familiar failure mode of federated protocols: one manufacturer ends up dominating in practice, and the system re-centralizes.

(2) It proves impossible to keep distributed manufacturing secure. Here I see two risks. The first is bad Orb manufacturers: a malicious or hacked manufacturer could generate an unlimited number of fake iris-scan hashes and assign them World IDs. The second is government restrictions on Orbs: a government that does not want its citizens participating in the Worldcoin ecosystem can ban Orbs from its country, and could even go further and force citizens to scan their irises under conditions that give the government access to their accounts, leaving citizens with no way to respond.

To allow the system to identify and respond to bad Orb manufacturers, the Worldcoin team proposes regular audits of Orbs, verifying that they are built correctly, that critical hardware components match the specifications, and that they have not been tampered with afterwards. This is a challenging task: essentially something like a nuclear-inspections bureaucracy, but for Orbs. The hope is that even an imperfect audit regime could greatly reduce the number of fake Orbs.

To limit the damage from any single bad Orb manufacturer, a second mitigation makes sense: World IDs registered with different Orb manufacturers, and ideally with different Orbs, should be distinguishable from each other. It is fine if this information is private and stored only on the World ID holder's device, but it does need to be provable on demand. This allows the ecosystem to respond to (inevitable) attacks by selectively removing individual Orb manufacturers, or even individual Orbs, from the whitelist. If we see the North Korean government going around forcing people to scan their eyes, those Orbs and any accounts they produced could be immediately and retroactively disabled.
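A toy model of that mitigation (invented structure, not the Worldcoin protocol): each ID records which manufacturer and which Orb it came from, so a compromised Orb or manufacturer can later be dropped from the whitelist, invalidating only the IDs it produced.

```python
from dataclasses import dataclass, field

# Toy model of per-manufacturer / per-Orb provenance for IDs. In
# practice the provenance could live on the holder's device and be
# proven on demand; here it is stored directly for simplicity.


@dataclass
class WorldID:
    holder: str
    manufacturer: str
    orb_id: str


@dataclass
class Ecosystem:
    approved_manufacturers: set = field(default_factory=set)
    revoked_orbs: set = field(default_factory=set)
    ids: list = field(default_factory=list)

    def register(self, holder, manufacturer, orb_id):
        if manufacturer not in self.approved_manufacturers:
            raise ValueError("manufacturer not on the whitelist")
        self.ids.append(WorldID(holder, manufacturer, orb_id))

    def revoke_orb(self, orb_id):
        self.revoked_orbs.add(orb_id)

    def revoke_manufacturer(self, manufacturer):
        self.approved_manufacturers.discard(manufacturer)

    def is_valid(self, world_id: WorldID) -> bool:
        return (world_id.manufacturer in self.approved_manufacturers
                and world_id.orb_id not in self.revoked_orbs)


# Example: a compromised Orb is revoked; only its IDs become invalid.
eco = Ecosystem(approved_manufacturers={"ToolsForHumanity"})
eco.register("alice", "ToolsForHumanity", "orb-001")
eco.register("bob", "ToolsForHumanity", "orb-002")
eco.revoke_orb("orb-002")
print([(i.holder, eco.is_valid(i)) for i in eco.ids])  # alice valid, bob not
```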

09 What are the security issues with proof of personhood in general?

Beyond the issues specific to Worldcoin, there are issues that affect proof-of-personhood designs in general. The main ones I can think of are:

(1) 3D-printed fake people: someone could use AI to generate photographs, or even 3D prints, of fake people that are convincing enough to be accepted by the Orb software. Even if only a few people do this, they can generate an unlimited number of identities.

(2) Selling IDs: someone can provide someone else's public key instead of their own when registering, giving that other person control of the registered ID in exchange for money. This appears to be happening already. Besides selling, IDs can also be rented out for short-term use in an application.

(3) Phone hacking: if someone's phone is hacked, the hacker can steal the key that controls their World ID.

(4) Government coercion: a government could force its citizens to get verified while showing a QR code that belongs to the government, giving it access to millions of IDs. In a biometric system this could even be done covertly: a government could use obfuscated Orbs to extract World IDs from everyone entering the country at passport-control booths.

These are serious weaknesses. Some are already addressed by existing protocols, some can be addressed by future improvements, and some appear to be fundamental limitations that are still waiting to be resolved.

  • How do we deal with fake people?

For Worldcoin, the risk is much smaller than for systems like Proof of Humanity: an in-person scan can examine many features of a person and is quite hard to fake, compared with merely deep-faking a video. Specialized hardware is inherently harder to fool than commodity hardware, which is in turn harder to fool than digital algorithms verifying pictures and videos sent remotely.

Can someone 3D print something that can even deceive specialized hardware? Perhaps. I expect that at some point, we will see an increasing tension between keeping the mechanism open and keeping the mechanism secure: Open-source AI algorithms are inherently more susceptible to adversarial machine learning. In the more distant future, even the best AI algorithms may be fooled by the best 3D printed fake people.

However, from my discussions with the Worldcoin and Proof of Humanity teams, it seems that neither protocol has yet seen significant deepfake attacks, for the simple reason that hiring real low-wage workers to sign up on your behalf is cheap and easy.

  • Can we prevent the selling of IDs?

In the short term, preventing this kind of outsourcing is difficult, because most people in the world have never even heard of proof-of-personhood protocols. If you tell them to hold up a QR code and scan their eyes for $30, they will do it. Once more people know what these protocols are, a fairly simple mitigation becomes possible: allow anyone who has a registered ID to re-register, canceling the previous ID. This makes "ID selling" much less credible, because the person who sold you an ID can simply re-register and cancel the ID they just sold. Getting there, however, requires the protocol to be widely known and Orbs to be accessible enough that on-demand registration is practical.
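A minimal sketch of that re-registration rule (illustrative names only): if the registry is keyed by the person's iris hash, registering again with a new key simply replaces, and thereby deactivates, whatever key was registered before, including one that was sold.

```python
# Toy registry in which re-registering under the same iris hash
# deactivates the previously registered key. Purely illustrative.


class ReRegisterableRegistry:
    def __init__(self):
        self.current_key = {}     # iris_hash -> currently valid public key
        self.revoked_keys = set()

    def register(self, iris_hash, public_key):
        old_key = self.current_key.get(iris_hash)
        if old_key is not None:
            # The person has registered before (perhaps on behalf of a
            # buyer); the old ID is cancelled the moment they re-register.
            self.revoked_keys.add(old_key)
        self.current_key[iris_hash] = public_key

    def is_valid(self, public_key):
        return (public_key in self.current_key.values()
                and public_key not in self.revoked_keys)


# Example: the seller registers with the buyer's key, then re-registers.
reg = ReRegisterableRegistry()
reg.register("iris-hash-of-seller", "buyers-key")
print(reg.is_valid("buyers-key"))       # True: the sold ID works at first
reg.register("iris-hash-of-seller", "sellers-new-key")
print(reg.is_valid("buyers-key"))       # False: the sold ID is now void
print(reg.is_valid("sellers-new-key"))  # True
```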

This is one of the reasons why building a UBI token into a proof-of-personhood protocol is valuable: a UBI token gives people an easy-to-understand incentive to learn about the protocol and get registered, and to re-register right away if they previously registered on behalf of someone else.

  • Can we prevent coercion in biometric proof-of-personhood systems?

It depends on what kind of coercion we are talking about. Possible forms of coercion include:

– Governments scanning people's eyes (or faces) at border control and other routine checkpoints, and using the scans to register their citizens;

– Governments banning Orbs domestically to prevent people from independently re-registering;

– Applications (possibly government-run) requiring people to "log in" by signing with their public key directly, letting the application see the corresponding biometric scan hash and hence the link between a person's current ID and any future ID they obtain by re-registering. A common fear is that this makes it too easy to create "permanent records" that follow a person for their entire life.

Especially in the hands of unsophisticated users, these situations seem hard to prevent entirely. Users could leave their country and (re-)register with an Orb in a safer country, but that is a difficult and expensive process. In a truly hostile legal environment, seeking out an independent Orb seems too difficult and risky.

One workable measure is to require the person being registered to speak a specific phrase during registration: this is enough to prevent covert scanning and forces coercion to be far more blatant, and the registration phrase could even include a statement confirming that the respondent knows they have the right to re-register independently and may receive UBI tokens or other rewards. If coercion is detected, the devices used for mass forced registrations could have their access rights revoked. To prevent applications from linking people's current and past IDs in an attempt to leave a "permanent record", the default proof-of-personhood app could lock the user's key inside trusted hardware, preventing any application from using the key directly without an anonymizing ZK-SNARK layer in between. If a government or application developer wanted to get around this, they would have to mandate the use of their own custom app.

Through a combination of these techniques and active vigilance, it seems possible to lock out regimes that are truly hostile and keep honest the regimes that are merely neutral (as most of the world is). This can be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (for example, in Worldcoin, which Orb it came from) and leaving the classification work to the community.

  • Can we prevent ID rental (e.g. selling votes)?

Re-registration does not prevent people from renting out their IDs. In some applications that is fine: the cost of renting out your right to collect the day's share of UBI tokens is simply the value of that day's share. But in voting and similar applications, easy vote selling is a huge problem.

Systems like MACI can make vote selling non-credible: you can always cast another vote later that invalidates your earlier one, in such a way that nobody can tell whether you really did. However, if the briber controls the key you receive at registration time, this does not help.
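The core idea can be modeled with a simple "latest vote counts" tally. This is only the override rule that such systems rely on, not MACI itself, which additionally encrypts messages and supports key changes so that a briber cannot even tell whether a later vote was cast.

```python
from collections import Counter

# Simplified model of the vote-override idea: only each voter's most
# recent ballot counts, so an earlier coerced or sold vote can always
# be silently replaced later. Real systems also hide the messages so
# the briber cannot observe the override.

votes = []   # list of (timestamp, voter_id, choice) messages


def cast_vote(timestamp, voter_id, choice):
    votes.append((timestamp, voter_id, choice))


def tally():
    latest = {}
    for timestamp, voter_id, choice in sorted(votes):
        latest[voter_id] = choice      # later messages override earlier ones
    return Counter(latest.values())


# Example: a bribed vote at t=1 is overridden at t=2.
cast_vote(1, "voter-42", "candidate-bribed")
cast_vote(2, "voter-42", "candidate-honest")
print(tally())   # Counter({'candidate-honest': 1})
```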

I see two solutions here:

(1) Run the entire application inside an MPC. This would also cover the re-registration process: when a person registers with the MPC, the MPC assigns them an ID that is separate from, and not linkable to, their proof-of-personhood ID, and when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving anything about their actions, because every important step is performed using private information known only to the MPC.

(2) Decentralized registration ceremonies. Essentially, implement something like an in-person key-registration protocol that requires four randomly selected local participants to work together to register someone (see the sketch below). This ensures that registration is a "trusted" procedure that an attacker cannot snoop on while it happens. Social-graph-based systems may actually do better here, because they can create locally decentralized registration processes automatically as a by-product of how they work.
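As a sketch of option (2), under the assumption that a public randomness source selects the local registrars (the quorum size and all names here are illustrative, not a specification of any existing protocol):

```python
import random

# Toy model of a decentralized registration ceremony: an ID is issued
# only if a quorum of randomly selected local participants all attest,
# in person, that the registration is genuine.

QUORUM = 4


def select_registrars(local_participants, seed):
    rng = random.Random(seed)       # seed would come from a public beacon
    return rng.sample(sorted(local_participants), QUORUM)


def register(candidate, attestations, local_participants, seed):
    registrars = select_registrars(local_participants, seed)
    if all(attestations.get(r) == candidate for r in registrars):
        return f"ID issued to {candidate}, attested by {registrars}"
    raise ValueError("ceremony incomplete: missing attestations")


# Example: twenty locals, four of whom are chosen to attest in person.
locals_ = {f"person-{i}" for i in range(20)}
chosen = select_registrars(locals_, seed=7)
attestations = {r: "newcomer" for r in chosen}
print(register("newcomer", attestations, locals_, seed=7))
```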

10 Biometric proof of personhood vs. social-graph-based proof of personhood

Besides biometrics, the other major contender for proof of personhood so far has been social-graph-based verification. Social-graph-based verification systems all operate on the same principle: if enough existing verified identities attest to the validity of your identity, then you too can obtain verified status.

If only a few genuine users (whether by accident or maliciously) vouch for fake users, basic graph-theoretic techniques can be used to place an upper bound on the number of fake users the system verifies. Source: https://www.sciencedirect.com/science/article/abs/pii/S0045790622000611.
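As a toy illustration of that kind of bound (not the algorithm from the cited paper; the threshold and cap are arbitrary assumptions): if acceptance requires several vouches and each verified account may back only a limited number of newcomers, then a small set of compromised or careless vouchers can only ever introduce a correspondingly small number of fake accounts.

```python
# Toy illustration of bounding fake accounts in a vouching graph:
# acceptance needs THRESHOLD vouches, and each verified account may
# back at most PER_VOUCHER_CAP newcomers, so k bad vouchers can admit
# at most k * PER_VOUCHER_CAP / THRESHOLD fakes. Demonstration only.

THRESHOLD = 3
PER_VOUCHER_CAP = 6


def accepted_accounts(vouches, verified):
    """vouches: dict mapping candidate -> set of vouching accounts."""
    used = {v: 0 for v in verified}
    accepted = set()
    for candidate, backers in vouches.items():
        usable = [b for b in backers
                  if b in verified and used[b] < PER_VOUCHER_CAP]
        if len(usable) >= THRESHOLD:
            accepted.add(candidate)
            for b in usable[:THRESHOLD]:
                used[b] += 1
    return accepted


# Example: 3 compromised vouchers try to push through 10 fake accounts,
# but the per-voucher cap limits how many can clear the threshold.
verified = {"v1", "v2", "v3"}          # all three are compromised here
fakes = {f"fake-{i}": {"v1", "v2", "v3"} for i in range(10)}
print(len(accepted_accounts(fakes, verified)))   # at most 6, not 10
```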

Advocates of social graph-based verification often describe it as a better alternative to biometric technology for the following reasons:

– It does not rely on specialized hardware, making it much easier to deploy;

– It avoids a permanent arms race between manufacturers of fake people and the Orbs that have to be updated to reject them;

– It does not require collecting biometric data, making it more privacy-friendly;

– It may be friendlier to pseudonymity: if someone chooses to split their internet life across multiple separate identities, each of those identities could potentially be verified (though maintaining multiple genuine, independent identities sacrifices network effects and carries real costs, so it is not something attackers can do easily).

Biometric approaches give a binary "human or not human" score, which is brittle: people who are accidentally rejected would be unable to receive UBI and might be unable to participate in online life at all. Social-graph-based approaches can give a more nuanced numerical score, which might of course be mildly unfair to some participants, but is unlikely to "un-person" someone completely.

My view of these arguments is that I largely agree with them! They are genuine advantages of social-graph-based approaches and should be taken seriously. But it is also worth weighing the weaknesses of social-graph-based approaches: regional limitations, privacy leakage, centralization risks, and so on.

11 Is proof of personhood compatible with pseudonymity in the real world?

In principle, proof of personhood is compatible with all kinds of pseudonymity. In practice, there is no single ideal form of proof of personhood. Instead, we have at least three different paradigms of approaches, each with its own unique advantages and disadvantages: dedicated-hardware biometrics, general-purpose biometrics, and social-graph-based verification. They compare roughly as follows:

Ideally, we would treat these three techniques as complementary and combine them all. As India's Aadhaar has shown, dedicated-hardware biometrics have the advantage of being secure at scale. They are very weak on decentralization, though that can be addressed by holding individual Orbs accountable.

General-purpose biometrics can be adopted very easily today, but their security is declining rapidly and may only hold up for another one to two years. Social-graph-based systems bootstrapped from a few hundred people socially close to the founding team are likely to face constant trade-offs between completely missing large parts of the world and being vulnerable to attacks within communities they have no visibility into. A social-graph-based system bootstrapped from tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may work better in the short term, while social-graph-based techniques may be more robust in the long term, taking on a greater share of the responsibility as their algorithms improve.

12 Possible paths for future hybrid development

Creating a proof-of-personhood system that is effective and reliable, especially in the hands of people far removed from the existing crypto community, seems quite challenging. I do not at all envy the people attempting the task, and it may take years to find a formula that works. The concept of proof of personhood seems very valuable in principle, and while the various implementations carry risks, a world with no proof of personhood at all carries risks too: it seems more likely to be dominated by centralized identity solutions, money, small closed communities, or some combination of all three. I look forward to seeing more progress on all types of proof of personhood, and I hope to see the different approaches eventually come together into a coherent whole.
