Unlocking Ethereum Performance: The Innovative Path Beyond EVM Bottlenecks

Performance of the Ethereum Virtual Machine (EVM)

Every operation on the Ethereum mainnet costs a certain amount of gas, and if we put all the computation required to run even basic apps on-chain, either the app grinds to a halt or its users go bankrupt. This gave rise to L2s: optimistic rollups (OPRUs) introduced the sequencer, which bundles a batch of transactions and submits them to the mainnet. This lets an app inherit Ethereum’s security while giving users a better experience: transactions confirm faster and fees are cheaper. But although operations became cheaper, these rollups still use the native EVM as the execution layer. ZK rollups such as Scroll and Polygon zkEVM likewise use, or will use, EVM-based zk circuits, with a zk proof generated in the prover for every transaction or for a large batch of transactions. Although this lets developers build “fully on-chain” applications, can it still run high-performance applications efficiently and economically?

What are these high-performance applications?

The first things that come to mind are games, on-chain order books, Web3 social, machine learning, genome modeling, and so on. All of these require heavy computation, and running them on L2 would still be very expensive. Another issue with the EVM is that its computing speed and efficiency lag behind other systems available today, such as the SVM (Sealevel Virtual Machine). Although an L3 EVM can make computation cheaper, the structure of the EVM itself may not be the best way to execute heavy computation, because it cannot execute operations in parallel. Moreover, every time a new layer is built on top, maintaining the spirit of decentralization requires new infrastructure (a new node network), which means either the same providers must scale up their resources, or a completely new set of node providers (individuals or companies) must step in, or both.

Therefore, every time a more advanced solution is built, the existing infrastructure must be upgraded or a new layer must be built on top. To escape this cycle, we need a post-quantum secure, decentralized, trustless, high-performance computing infrastructure that can truly compute efficiently for decentralized applications and remain secure even against quantum algorithms.

Alt-L1s like Solana, Sui, and Aptos achieve parallel execution, but they are unlikely to unseat Ethereum given market sentiment, liquidity shortages, and their smaller developer bases. Between the lack of trust in newer chains and the moat Ethereum has built through network effects, there is still no landmark “ETH/EVM killer.” The real question is: why should all computation happen on-chain? Is there an equally trustless, decentralized execution system? That is what a DCompute system can provide.

DCompute infrastructure needs to be decentralized, post-quantum secure, and trustless, but it does not itself need to be a blockchain or distributed-ledger technology. What matters is the verification of computing results, correct state transitions, and final confirmation. An EVM chain can keep operating exactly as it does today while decentralized, trustless, secure computation moves off-chain, preserving the network’s security and immutability.

The issue of data availability is largely set aside here. That is not because it doesn’t matter, but because solutions like Celestia and EigenDA are already moving in that direction.

Type 1: Only Compute Outsourced

(Source: Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)

Type 2: Compute and Data Availability Outsourced

(Source: Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)

Looking at Type 1, zk-rollups already do this, but they are either limited to the EVM or require developers to learn an entirely new language or instruction set. The ideal solution should be efficient, effective (in cost and resources), decentralized, private, and verifiable. ZK proofs can be generated on AWS servers, but that is not decentralized. Solutions like Nillion and Nexus are trying to solve general computation in a decentralized way, but without ZK proofs these solutions are not verifiable.

Type 2 combines the off-chain computation model with a separate data availability layer, but the computation results still need to be verified on-chain.

Let’s take a look at different decentralized computation models that are available today, which are either partially trusted or potentially completely trustless.

Alternative Computation Systems

Ethereum Outsourced Computing Ecosystem Map (Source: IOSG Ventures)

– Secure Enclave Computations/Trusted Execution Environments

TEE (Trusted Execution Environment) is like a special box inside a computer or smartphone. It has its own lock and key, and only specific programs (called trusted applications) can access it. When these trusted applications run inside the TEE, they are protected from other programs or even the operating system itself.

This is like a secret hideout that only a few special friends can enter. The most common example of a TEE is the secure enclave found on devices we use every day, such as Apple’s Secure Enclave coprocessors and Intel’s SGX, which run critical operations like Face ID on the device.

Because a TEE is an isolated system, its attestation process rests on a trust assumption: you trust that the secure door is secure because Intel or Apple built it. But there are enough capable attackers in the world (hackers as well as other computers) to compromise that door. TEEs are also not “post-quantum secure”: a quantum computer with sufficient resources could break their security. As computers rapidly become more powerful, we must keep post-quantum security in mind when building long-term computing systems and cryptographic schemes.

– Secure Multi-Party Computation (SMPC)

SMPC (Secure Multi-Party Computation) is another computational scheme well known to blockchain practitioners. Its workflow breaks down into roughly three steps:

  • Step 1: Convert the input of the calculation into shares and distribute them among SMPC nodes.

  • Step 2: Perform the actual calculation, typically involving message exchange between SMPC nodes. At the end of this step, each node will have a share of the computed output value.

  • Step 3: Send the resulting shares to one or more result nodes, which run the reconstruction algorithm of the secret sharing scheme (LSS) to recover the output.

Imagine an automobile production line: the manufacturing of components (engine, doors, rearview mirrors) is outsourced to original equipment manufacturers (OEMs), the working nodes, and an assembly line then puts the components together into a car, the result node.

Secret sharing is critical for protecting privacy in decentralized computing models: it prevents any single party from obtaining the complete “secret” (here, the input) and maliciously producing an incorrect output. SMPC may be one of the easiest and most secure decentralized systems to build. Although no fully decentralized model exists today, it is logically possible.
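To make the three steps concrete, here is a minimal sketch of additive secret sharing over a prime field, the simplest flavor of the idea. It is an illustrative toy rather than any particular production protocol; the three-node setup and the salary figures are invented for the example.

```python
import secrets

P = 2**61 - 1  # a prime modulus; all shares live in the field Z_P

def share(secret: int, n_nodes: int) -> list[int]:
    """Step 1: split `secret` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_nodes - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def node_compute(share_a: int, share_b: int) -> int:
    """Step 2: each node adds its two shares locally; no node ever sees
    a complete input, and no messages are needed for addition."""
    return (share_a + share_b) % P

def reconstruct(shares: list[int]) -> int:
    """Step 3: a result node recombines the output shares."""
    return sum(shares) % P

# Two parties learn the sum of their salaries without revealing them.
a_shares = share(52_000, 3)
b_shares = share(61_000, 3)
output_shares = [node_compute(a, b) for a, b in zip(a_shares, b_shares)]
assert reconstruct(output_shares) == 113_000
```

Note that addition comes essentially for free with additive shares; multiplying two shared secrets is what forces the message exchange in Step 2, and that communication cost is exactly what NMC (discussed below) tries to remove.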

MPC providers like Sharemind offer MPC infrastructure for computation, but the provider itself is still centralized. How do we ensure privacy, and how do we ensure that the network (or Sharemind) does not behave maliciously? This is where zk proofs and zk verifiable computation come into play.

– Nil Message Compute (NMC)

NMC is a new distributed computing method developed by the Nillion team. It is an upgrade of MPC in which nodes do not need to communicate by exchanging messages. To achieve this, they use a cryptographic primitive called one-time masking (OTM), which uses a series of random numbers called blinding factors to mask a secret, much like a one-time pad. OTM aims to provide correctness efficiently, meaning that NMC nodes do not need to exchange any messages to perform a computation. As a result, NMC avoids the scalability problems of SMPC.
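Nillion has not published OTM in full detail, so the following is only a sketch of the one-time-pad-style masking idea described above, with invented parameters. Blinding factors are dealt out in advance, each party masks its input locally, and the blinding cancels only at the output; how the factors are generated and distributed without interaction is precisely the hard part OTM solves, and it is not shown here.

```python
import secrets

P = 2**61 - 1  # prime modulus for the modular arithmetic

# Preprocessing: blinding factors are dealt out ahead of time (e.g. in a
# setup phase), chosen so they cancel when the masked results are combined.
r1 = secrets.randbelow(P)
r2 = secrets.randbelow(P)

def mask(secret: int, blinding: int) -> int:
    """One-time-pad-style masking: the masked value reveals nothing about
    `secret` as long as `blinding` is uniform and used only once."""
    return (secret + blinding) % P

# Each input holder masks its secret locally -- no messages exchanged.
masked_a = mask(40, r1)
masked_b = mask(2, r2)

# Anyone can add the masked values; the result is the sum, still masked.
masked_sum = (masked_a + masked_b) % P

# Output stage: removing the combined blinding reveals only the output.
assert (masked_sum - r1 - r2) % P == 42
```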

– Zero-Knowledge Verifiable Computation

ZK verifiable computation means generating a zero-knowledge proof for a set of inputs and a function, proving that any computation performed by the system was done correctly. Although ZK verifiable computation is a relatively new concept, it is already a critical part of the Ethereum network’s scaling roadmap.

ZK proofs have various implementation forms (as summarized in the paper “Off-Chaining Models”), as shown below:

(Source: IOSG Ventures, Off-chaining Models and Approaches to Off-chain Computations, Jacob Eberhardt & Jonathan Heiss)

Now that we have a basic understanding of the implementation of ZK proofs, what are the requirements for using ZK proofs to verify computations?

  • First, we need to choose a proof primitive, which ideally has a low cost of generating proofs, is not memory-intensive, and is easy to verify.

  • Second, design a zk circuit for the computation, targeting the proof primitive chosen above.

  • Finally, run the given function on the provided inputs in the chosen computing system/network, and output the result along with its proof.
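The asymmetry between proving and verifying is easiest to see in a tiny runnable example. Below is a classic Schnorr proof of knowledge, made non-interactive with the Fiat-Shamir heuristic: the prover shows it knows a secret x with y = g^x mod p without revealing x. Real verifiable computation proves arbitrary circuits rather than a single exponentiation, and the parameters here are toy-sized and insecure, but the commit-challenge-respond-verify shape is the same.

```python
import hashlib
import secrets

# Toy group parameters (far too small for real security): p is a safe
# prime, and g generates the subgroup of prime order q = (p - 1) // 2.
p, q, g = 1019, 509, 4

def fiat_shamir_challenge(*values: int) -> int:
    """Derive the verifier's challenge from a hash (Fiat-Shamir heuristic)."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prover: convince anyone we know x with y = g^x mod p, without
    revealing x. Returns (y, t, s)."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)            # one-time commitment randomness
    t = pow(g, k, p)                    # commitment
    c = fiat_shamir_challenge(g, y, t)  # challenge
    s = (k + c * x) % q                 # response; x stays hidden by k
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: one cheap check, with no knowledge of x required."""
    c = fiat_shamir_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=123)  # 123 is the prover's secret
assert verify(y, t, s)
```

The verifier does one hash and two modular exponentiations no matter how much work the prover did; that asymmetry is what makes verifiable outsourced computation attractive.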

The Developer’s Challenge – Proof Efficiency Dilemma

It must also be said that the threshold for building circuits remains quite high. Learning Solidity is already not easy for developers; now asking them to also learn Circom and similar tools to build circuits, or a domain-specific language such as Cairo to build zk-apps, makes this feel like a distant goal.

(Source: https://app.artemis.xyz/developers)

(Source: https://www.statista.com/statistics/1241923/worldwide-software-developer-programming-language-communities)

As the statistics above show, making Web3 development fit the tools developers already know seems more sustainable than onboarding developers into an entirely new Web3 development environment.

If ZK is the future of Web3 and Web3 applications need to be built with existing developer skills, then ZK circuits need to be designed to generate proofs for algorithms written in languages like JavaScript or Rust.

Such solutions do exist, and two teams come to mind: RiscZero and Lurk Labs. Both have a very similar vision, allowing developers to build zk-apps without a steep learning curve.

Lurk Labs is still in its early stages, but the team has been working on the project for a long time. They focus on generating Nova proofs using generic circuits. Nova proofs were proposed by Abhiram Kothapalli of Carnegie Mellon University, Srinath Setty of Microsoft Research, and Ioanna Tzialla of New York University. Compared to other SNARK systems, Nova proofs have a particular advantage in incrementally verifiable computation (IVC). IVC is a concept in computer science and cryptography that aims to verify a computation without recomputing it from scratch; when a computation is long and complex, IVC provides exactly the proof optimization needed.

(Source: IOSG Ventures)
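Real Nova folding is well beyond a short snippet, but the shape of IVC can be sketched: a step function is iterated, and a running proof object is updated at each step with O(1) work instead of re-proving the whole history. In the sketch below a plain hash chain stands in for the folded instance; unlike a real folding scheme it is not succinctly verifiable, so treat it purely as an illustration of the interface.

```python
import hashlib

def F(state: int) -> int:
    """The step function being iterated: y_{i+1} = F(y_i)."""
    return (state * state + 1) % (2**61 - 1)

def fold(proof: bytes, old_state: int, new_state: int) -> bytes:
    """Stand-in for Nova's folding: absorb one more step into the running
    proof. (A hash chain is NOT succinctly verifiable -- adding that
    property is exactly what real folding schemes contribute.)"""
    return hashlib.sha256(proof + f"{old_state}->{new_state}".encode()).digest()

state, proof = 3, b"genesis"
for _ in range(1000):                 # 1000 steps; proof stays 32 bytes
    nxt = F(state)
    proof = fold(proof, state, nxt)   # incremental: O(1) work per step
    state = nxt

print(state, proof.hex()[:16])        # final state + running commitment
```

Real Nova replaces the hash chain with a folding scheme whose final, single proof can be checked succinctly.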

Nova proofs are not “ready to use” the way other proof systems are; developers still need a front end to generate proofs. That is why Lurk Labs built Lurk Lang, a LISP implementation. Because LISP is a simple, minimal language, it makes generating proofs on generic circuits easy, and it is also an easy target to transpile JavaScript into, which could win Lurk Labs support from the roughly 17.4 million JavaScript developers. It can also serve other general-purpose languages, such as Python.

In summary, Nova proofs seem to be a great proof primitive. Their downside is that proof size grows linearly with the size of the computation, but on the other hand they still have room for further compression. STARK proofs do not grow meaningfully with the size of the computation, so they are better suited for verifying very large computations. To further improve the developer experience, RiscZero also released the Bonsai Network, a distributed computing network that verifies proofs generated by RiscZero. Below is a simple schematic of how RiscZero’s Bonsai Network works.

(Source: https://dev.bonsai.xyz/)

The wonderful thing about the Bonsai network design is that computations can be initialized, verified, and output entirely on-chain. All of this sounds utopian, but STARK proofs bring one problem: high verification costs.

Nova proofs seem well suited to repetitive computations (their folding scheme is economically efficient) and small computations, which may make Lurk a good fit for verifying ML inference.

Who is the winner?

(Source: IOSG Ventures)

Some zk-SNARK systems require a trusted setup during the initial setup phase to generate a set of initial parameters. The trust assumption is that the setup was executed honestly, without malicious behavior or tampering. If the setup is compromised, an attacker could create forged proofs.

STARK proofs assume the security of low-degree testing for verifying low-degree properties of polynomials, and they assume the hash function behaves like a random oracle.

The correct implementation of both systems is also a security assumption.

SMPC networks depend on the following:

  • SMPC participants can include “honest but curious” participants who try to access any underlying information by communicating with other nodes.

  • The security of the SMPC network depends on the assumption that participants correctly execute the protocol and do not intentionally introduce errors or malicious behavior.

  • Some SMPC protocols may require a trusted setup phase to generate encryption parameters or initial values. The trust assumption here is that the trusted setup is honestly executed.

  • NMC networks share the same security assumptions as SMPC networks, except that thanks to OTM (one-time masking), there are no “honest but curious” participants.

OTM protects participants’ privacy by ensuring they never disclose their input data during the computation. Consequently, “honest but curious” participants do not exist, because nodes cannot learn the underlying information by communicating with other nodes.

Is there a clear winner? We don’t know. But each method has its own advantages. Although NMC looks like an obvious upgrade to SMPC, the network has not yet gone live and has not been tested in practice.

The benefit of ZK verifiable computation is that it is secure and privacy-preserving, but it has no built-in secret sharing. The asymmetry between proof generation and verification makes it an ideal model for verifiable outsourced computation. However, if a system relies on pure zk verifiable computation, the computer (or each individual node) must be very powerful to perform large computations. To enable load sharing and balancing while preserving privacy, secret sharing is required. In that case, systems such as SMPC or NMC can be combined with zk proof generators such as Lurk or RiscZero to create a powerful distributed, verifiable outsourced-computation infrastructure.
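To show what such a combination might look like at the data-flow level, here is a minimal sketch in which all names are invented: each node computes on its share and attaches a proof of correct execution that a coordinator checks before reconstruction. A hash commitment stands in for the real zk proof (which would come from a system like RiscZero or Lurk), so this illustrates the architecture only, not its security.

```python
import hashlib
import secrets
from dataclasses import dataclass

P = 2**61 - 1  # prime modulus, as in the earlier sharing sketch

@dataclass
class StandInProof:
    digest: bytes  # placeholder for a succinct zk proof of correct execution

def node_step(in_share: int) -> tuple[int, StandInProof]:
    """Each node doubles its share locally (computing 2*x overall) and
    emits a stand-in 'proof' binding its input share to its output share."""
    out_share = (2 * in_share) % P
    digest = hashlib.sha256(f"{in_share}->{out_share}".encode()).digest()
    return out_share, StandInProof(digest)

# Split the secret input among three nodes (additive sharing).
x = 21
shares = [secrets.randbelow(P) for _ in range(2)]
shares.append((x - sum(shares)) % P)

results = [node_step(s) for s in shares]

# A coordinator would verify every proof before reconstructing; with real
# zk proofs this is what catches a node that returned a wrong share.
output = sum(out for out, _ in results) % P
assert output == (2 * x) % P  # == 42
```

In a real deployment, each node’s proof would attest to correct execution of its program on its share, letting the coordinator reject bad shares before reconstruction.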

This point matters all the more because today’s MPC/SMPC networks are centralized. The largest MPC provider today is Sharemind, and a ZK verification layer on top of it has already proven useful. The economic model for decentralized MPC networks has yet to be worked out. In theory, the NMC model is an upgrade to MPC systems, but we have not yet seen it succeed.

In the competition among ZK proof schemes, there may never be a one-size-fits-all winner. Each proof method is optimized for specific types of computation; no single model fits all. Computational tasks come in many varieties, and much also depends on the trade-offs developers make for each proof system. The author believes that both STARK-based and SNARK-based systems, along with their future optimizations, will have a place in the future of ZK.
