Proof of Validator: The Key Security Puzzle on the Road to Ethereum Scalability

Today, a new concept has quietly emerged in the Ethereum research forum: Proof of Validator.

This protocol mechanism allows network nodes to prove that they are Ethereum validators without revealing their specific identities.

What does this have to do with us?

In general, the market tends to focus on the surface narratives spun off from Ethereum's technological innovations rather than on the technology itself. Of the Shanghai upgrade, the Merge, the transition from PoW to PoS, and Ethereum's scaling efforts, the market mostly remembers the narratives they spawned: LSD, LSDFi, and re-staking.

But don’t forget, performance and security are of utmost importance for Ethereum. The former determines the upper limit, and the latter determines the bottom line.

Clearly, on the one hand, Ethereum has been actively promoting various scaling solutions to improve performance; on the other hand, the road to scalability demands not only strengthening the protocol from within but also guarding against external attacks.

For example, if validator nodes are attacked and data becomes unavailable, every narrative and scaling solution built on Ethereum's staking logic could be affected. These impacts and risks stay hidden behind the scenes, however; end users and speculators may not notice them, and sometimes they may not even care.

The Proof of Validator mentioned in this article may be a key security puzzle on Ethereum’s path to scalability.

Since scaling is inevitable, reducing the risks the scaling process may introduce is an unavoidable security issue, and one that concerns everyone in the industry.

Therefore, it is worth understanding the full picture of the newly proposed Proof of Validator. Because the original forum post is fragmented and highly technical, and touches on many scaling solutions and concepts, DeepChain Research Institute has consolidated the post and sorted out the essentials to explain the background, necessity, and potential impact of Proof of Validator.

Data Availability Sampling: the breakthrough in scalability

Before formally introducing Proof of Validator, it is necessary to understand the current logic of Ethereum's scaling efforts and the risks they may carry.

The Ethereum community is actively promoting multiple scalability plans. Among them, Data Availability Sampling (DAS) is considered the most critical technology.

The principle is to divide the complete block data into several “samples”. Nodes in the network only need to obtain a few samples that are relevant to themselves to verify the complete block.

This greatly reduces the storage and computation costs for each node. For an easier-to-understand analogy, this resembles a sample survey: by interviewing different people, we can infer the overall situation of the entire population.

Specifically, the implementation of DAS can be summarized as follows:

  • Block producers divide block data into multiple samples.

  • Each network node only obtains a small number of samples it is interested in, rather than the complete block data.

  • Network nodes can randomly sample and verify the availability of complete block data by obtaining different samples.

Through this sampling, even if each node only processes a small amount of data, they can together verify the availability of the entire blockchain data. This can greatly increase the block size and achieve rapid expansion.
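The steps above can be sketched in a few lines of Python. This is a toy model with illustrative names: it only chunks the data and fetches a few random samples, whereas real DAS additionally erasure-codes the block so that roughly half of the samples suffice to reconstruct it.

```python
import random

def split_into_samples(block_data: bytes, sample_size: int) -> list[bytes]:
    """Divide block data into fixed-size samples (toy version; real DAS
    also erasure-codes the data before splitting it up)."""
    return [block_data[i:i + sample_size]
            for i in range(0, len(block_data), sample_size)]

def sample_and_check(samples: list[bytes], fetch, k: int) -> bool:
    """A light node requests k random samples; if all of them arrive,
    it treats the full block as available."""
    indices = random.sample(range(len(samples)), k)
    return all(fetch(i) is not None for i in indices)

# Toy usage: a 1 KiB "block" split into 16 samples of 64 bytes each,
# served by an honest network that answers every request.
block = bytes(range(256)) * 4
samples = split_into_samples(block, 64)
available = sample_and_check(samples, lambda i: samples[i], k=4)
```

Here `fetch` stands in for a network request; in a real network each sample would be requested from whichever node is responsible for storing it.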

However, this sampling scheme raises a crucial question: where should the massive number of samples be stored? Supporting them requires a fully decentralized network.

Distributed Hash Table (DHT): Home of the Samples

This is where the Distributed Hash Table (DHT) comes into play.

DHT can be seen as a huge distributed database that uses hash functions to map data to an address space, and different nodes are responsible for storing and accessing data in different address ranges. It can be used to quickly search for and store samples among a massive number of nodes.

Specifically, after dividing the block data into multiple samples, DAS needs to distribute these samples to different nodes in the network for storage. DHT can provide a decentralized method for storing and retrieving these samples, and the basic idea is:

  • Use a consistent hash function to map samples to a huge address space.

  • Each node in the network is responsible for storing and providing data samples within a specific address range.

  • When a sample is needed, the corresponding address can be found through hashing, and the node responsible for that address range in the network can be located to obtain the sample.

For example, according to certain rules, each sample can be hashed into an address, node A is responsible for addresses 0-1000, and node B is responsible for addresses 1001-2000.

So the sample with address 599 will be stored on node A. When this sample is needed, the same hash function yields address 599 again, and node A, which is responsible for that address range, can be located in the network to retrieve the sample.
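The example above translates directly into code. This is a minimal sketch using the article's two nodes and address ranges; production DHTs such as Kademlia instead give nodes 256-bit IDs and route by XOR distance rather than fixed ranges.

```python
import hashlib

# Toy DHT from the example: each node owns a contiguous address range.
NODES = [("A", 0, 1000), ("B", 1001, 2000)]

def sample_address(sample: bytes, space: int = 2001) -> int:
    """Hash a sample into the shared address space; the same input
    always maps to the same address, so storage and lookup agree."""
    return int.from_bytes(hashlib.sha256(sample).digest(), "big") % space

def responsible_node(addr: int) -> str:
    """Find the node whose range covers the given address."""
    for name, lo, hi in NODES:
        if lo <= addr <= hi:
            return name
    raise KeyError(addr)

# The sample at address 599 is stored on, and later fetched from, node A.
assert responsible_node(599) == "A"
```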

This approach breaks the limitations of centralized storage and greatly improves fault tolerance and scalability. This is exactly the network infrastructure required for DAS sample storage.

Compared to centralized storage and retrieval, DHT can improve fault tolerance, avoid single point failures, and enhance network scalability. In addition, DHT can also help defend against attacks such as “sample hiding” mentioned in DAS.

Pain Point of DHT: Sybil Attack

However, DHT also has a fatal weakness, which is the threat of Sybil attacks. Attackers can create a large number of fake nodes in the network, which will overwhelm the surrounding legitimate nodes.

By analogy, an honest vendor is surrounded by rows and rows of counterfeit goods, making it difficult for users to find genuine products. In this way, attackers can control the DHT network and make the samples unavailable.

For example, to obtain the sample at address 1000, you need to find the node responsible for that address. But when it is surrounded by thousands of fake nodes created by the attacker, the request keeps being redirected to fakes instead of reaching the node actually responsible for that address. As a result, the sample cannot be obtained, and both storage and verification fail.
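A toy simulation makes the attack concrete. It assumes a Kademlia-style lookup that routes to the node IDs nearest a key by XOR distance; the ID sizes and node counts below are made up for illustration.

```python
import random

def lookup(node_ids, target, k=3):
    """Kademlia-style lookup: route to the k node IDs nearest the
    target key by XOR distance."""
    return sorted(node_ids, key=lambda n: n ^ target)[:k]

random.seed(42)
target = 1000
# Honest nodes happen to sit elsewhere in the ID space
# (toy simplification: all of them have the top bit set).
honest = {random.randrange(2**16, 2**17) for _ in range(50)}
# The attacker freely mints a thousand IDs clustered around the key.
sybil = {target ^ d for d in range(1, 1001)}

winners = lookup(honest | sybil, target)
captured = all(n in sybil for n in winners)  # every lookup reaches only fakes
```

Minting such IDs costs the attacker almost nothing, which is precisely what makes the attack cheap.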

To solve this problem, a highly trusted network layer must be established on top of the DHT, in which only validator nodes participate. However, the DHT network itself cannot tell whether a node is a validator.

This seriously hinders the scalability of DAS and Ethereum. Is there any way to resist this threat and ensure the trustworthiness of the network?

Proof of Validator: ZK Scheme Safeguarding Scalability Security

Now, let’s get back to the focus of this article: Proof of Validator.

On the Ethereum research forum, George Kadianakis, Mary Maller, Andrija Novakovic, and Suphanat Chunhapanya jointly proposed this scheme today.

The overall idea: if only honest validators can join the DHT used in the scaling scheme described above, then a malicious actor who wants to launch a Sybil attack must also stake a significant amount of ETH, dramatically raising the cost of misbehavior.

In other words, the idea can be summarized as: I want to know that you are a good person without knowing your identity, and I can identify bad actors.

Zero-knowledge proofs are obviously useful in this limited information proof scenario.

Therefore, Proof of Validator (referred to as PoV) can be used to establish a highly trusted DHT network composed only of honest validator nodes, effectively resisting Sybil attacks.

The basic idea is to let each validator node register a public key on the blockchain and then use zero-knowledge proof technology to prove that they know the private key corresponding to this public key. This is equivalent to presenting their identification to prove that they are validator nodes.

In addition, to resist DoS (Denial of Service) attacks against validator nodes, PoV also aims to hide validators' identities at the network layer. In other words, the protocol does not want attackers to be able to tell which DHT node corresponds to which validator.

So how exactly is this done? The original post used a lot of mathematical formulas and derivations, which will not be repeated here. We will provide a simplified version:

In terms of specific implementation, Merkle trees or lookup tables can be used. For example, using a Merkle tree, it can be proven that the registered public key exists in the Merkle tree of the public key list, and then it can be proven that the network communication public key derived from this public key is a match. The entire process is implemented using zero-knowledge proofs and does not reveal the actual identity.
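As a rough illustration of the first half of that statement, here is a plain (non-zero-knowledge) Merkle membership check in Python. It only shows the relation being proven; the actual PoV scheme wraps this check inside a zero-knowledge proof so that the prover's leaf, and hence its identity, stays hidden. The key values are placeholders.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Build the root of a Merkle tree over the registered key list."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from the leaf at `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the path; membership holds iff we land on the root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Placeholder validator public keys registered on-chain.
keys = [f"pubkey-{i}".encode() for i in range(8)]
root = merkle_root(keys)
assert verify(root, keys[5], merkle_proof(keys, 5))
```

In PoV the verifier would only ever see the root and a zero-knowledge proof that such a path exists, never the leaf or its index.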

Setting these technical details aside, the ultimate effect PoV achieves is:

Only authenticated nodes can join the DHT network, greatly increasing security, effectively resisting Sybil attacks, and preventing samples from being deliberately hidden or modified. PoV provides a reliable underlying network for DAS, indirectly helping Ethereum achieve rapid scalability.

At present, PoV is still at the theoretical research stage, and whether it can be implemented in practice remains uncertain.

That said, the researchers behind the post have already run small-scale experiments, and the results show that PoV proofs can be generated quickly and verified efficiently by validators. Notably, their experimental equipment was just a laptop with a five-year-old Intel i7 processor.

Regardless, PoV represents an important step toward higher scalability for blockchains. As a key component of the Ethereum scaling roadmap, it deserves continued attention from the entire industry.
