a16z Dialogue with Solana Co-founder: Why Didn’t Solana Become an EVM-Based Public Blockchain?

Original title: Debating Blockchain Architectures (with Solana)

Hosts: Ali Yahya, General Partner at a16z crypto; Guy Wuollet, Partner at a16z crypto

Guest: Anatoly Yakovenko, CEO of Solana Labs, Co-founder of Solana

Translation: Qianwen, ChainCatcher

“But what I want to say is that people should try to create greater ideas instead of repeating existing ones. The best analogy I’ve heard is that when people discovered cement, everyone focused on using cement to build bricks, and then someone thought, I can build skyscrapers. They figured out a way to combine steel, cement, and architecture, which no one could have imagined. The new tool is cement. You just need to figure out what a skyscraper is and then go build it.”

In this episode, a16z crypto talks to Anatoly Yakovenko, co-founder and CEO of Solana Labs. Anatoly Yakovenko previously worked at Qualcomm as a senior engineer and engineering manager.

Summary

  • The ultimate goal of decentralized computing
  • The ideas behind Solana
  • Differences between Solana and Ethereum
  • The future development of blockchain
  • The Web3 community and development
  • Talent recruitment for Web3 startups

The ultimate goal of decentralized computing

a16z crypto: First of all, I want to know how you view the ultimate goal of decentralized computing. How do you view blockchain architecture?

Anatoly Yakovenko: My position is quite extreme. I believe that settlement will become less and less important, just like in traditional finance. You still need someone to provide guarantees, but these guarantees can be achieved in many different ways. I believe that what is truly valuable to the world is having a globally distributed, globally synchronized state, which is also the real challenge. You can think of it as Google Slicer’s role in Google or Nasdaq’s role in the financial market.

From a macro perspective, blockchain systems are permissionless, programmable, and highly open, but there is still some kind of market behind the stack. For all these markets, achieving global synchronization at a speed as close to the speed of light as possible is very valuable because everyone can reference it. You can still operate local markets, but if there is fast and synchronized global pricing, then global finance will become more efficient. I believe this is the ultimate goal of blockchain, to synchronize as many states as possible at the speed of light.

a16z crypto: If cryptocurrencies and blockchain achieve mainstream adoption, what do you think will be the biggest driving force behind activities on the blockchain?

Anatoly Yakovenko: I think the form will still be similar to Web2, but it will be more transparent and realize the vision of the long tail distribution. There will be various small-scale companies on the Internet that can control their own data, instead of a few dominant companies as it is now (although what these large companies are doing is also great). I think in the long run, creators should have more control and more autonomy in order to achieve a truly meaningful internet with widespread distribution and markets.

a16z crypto: Another way to think about or raise this question is how to balance. You said you think settlement will become less important in the future. I’m curious, as Solana is a hub for a large amount of global business, especially financial activities, how can it accelerate or complement the goals you just mentioned?

Anatoly Yakovenko: The Solana system is not designed as a store of value, and it actually has a low tolerance for network failures. It is designed to consume as much of the internet’s available resources as possible, as quickly as possible, and in fact it relies on a world where cross-border communication and finance are mostly free. It is different from tokens meant to serve as an emergency refuge (bunker coins). Of course, I think there is also a need for bunker coins that can survive local geopolitical conflicts.

However, optimistically speaking, the world is becoming ever more tightly connected. I think we will see a world with trillions of connections between us, a fully interconnected world, and this globally synchronized state machine can absorb a lot of that execution.

From experience, settlement can happen in many places because settlement is easy to guarantee. Again, I am taking this position partly for the sake of argument. Since 2017 we have watched hundreds of proof-of-stake networks launch, each with many different design instances, and we have seen essentially no quorum failures, because settlement is relatively easy to get right. Once you have a Byzantine fault-tolerant mechanism established among even 21 decentralized participants, you do not see settlement failures. We have also solved the other scaling problems. From experience, Tendermint has proven very workable; even though we went through the Luna meltdown early on, the failures were not in the quorum mechanism.

I believe we overspend on settlement in terms of security, resources, and engineering, and spend far too little researching execution, which is where most of the financial industry’s profits come from. Personally, I think that if these technologies are to truly reach and impact the whole world, they must beat traditional finance on price, fairness, and speed. That is where we need to focus our research and compete.

a16z crypto: So settlement is one axis people choose to optimize blockchains for. They may over-optimize for settlement and neglect other properties such as throughput, latency, and composability, which are often in tension with settlement security. Can you talk about the architecture of Solana?

Anatoly Yakovenko: The mission of Solana’s architecture is to transmit information from all over the world to every participant in the network as quickly as possible, simultaneously. So there is no sharding and no complex consensus protocol; we actually want to keep things simple. Or you could say we were lucky enough to solve one computer science problem: clock synchronization, using a verifiable delay function as a source of time in the network. You can imagine two radio towers transmitting on the same frequency at the same time, which creates noise. The first protocol people came up with when building cellular networks was to give each tower a clock and let them alternate transmitting according to time.

The analogy is that the Federal Communications Commission is the enforcer with a truck: if your tower is not synchronized with the openly published schedule, they drive out to your tower and shut it down. That is what inspired Solana to use a verifiable delay function to schedule block producers so that collisions cannot occur. In a network like Bitcoin, if two block producers produce a block at the same time, you get a fork, which is the equivalent of noise in a cellular network. If you can force all block producers to take turns in time, you get a clean time-division protocol: each block producer produces in its scheduled turn and they never collide. So forks never occur and the network never enters a noisy state.
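To make the time-division idea concrete, here is a minimal sketch (in Python, with made-up parameters; not Solana’s actual implementation) of a Proof-of-History-style hash chain: a sequential SHA-256 chain acts as a clock, observed events are mixed into it to timestamp them, and a deterministic leader schedule keyed off the tick count tells every validator whose turn it is to produce, so producers never collide.

```python
import hashlib

def tick(state: bytes) -> bytes:
    """One step of the sequential hash chain; it must be computed in order."""
    return hashlib.sha256(state).digest()

def record(state: bytes, event: bytes) -> bytes:
    """Mix an event into the chain, proving it arrived no later than this point."""
    return hashlib.sha256(state + event).digest()

# Hypothetical parameters: 4 leaders, each assigned a window of 100 ticks.
LEADERS = ["A", "B", "C", "D"]
TICKS_PER_SLOT = 100

def leader_for_tick(tick_count: int) -> str:
    """Deterministic schedule: every validator can compute who may produce now."""
    return LEADERS[(tick_count // TICKS_PER_SLOT) % len(LEADERS)]

state = hashlib.sha256(b"genesis").digest()
for tick_count in range(1, 251):
    state = tick(state)
    if tick_count == 42:                      # an event observed mid-stream
        state = record(state, b"transfer: alice -> bob, 5 SOL")
    if tick_count % TICKS_PER_SLOT == 0:
        print(f"tick {tick_count}: leader is {leader_for_tick(tick_count)}")
```

Anyone replaying the chain from genesis recomputes the same hashes in the same order, so the position of the recorded event inside the chain serves as its timestamp.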

After that, everything we do is operating-system and database optimization. We propagate blocks globally in a BitTorrent-like flood, sending erasure-coded pieces to different machines; in the end it looks a lot like data availability sampling and has the same effect. Nodes forward pieces to one another, rebuild the block, vote, and so on. The main design principle of Solana is that every part of the network and codebase is designed to get faster simply by adding cores.

If we get twice as many cores per dollar within two years, we can adjust so that the number of threads per block doubles, or the compute per block doubles, and the network gets twice as much done. All of that happens naturally, without any changes to the architecture.

That is the main thing we want to achieve, and it comes from my experience. From 2003 to 2014 I worked at Qualcomm, and every year we saw the hardware and architecture of mobile devices improve. If, as an engineer, you did not plan for your software to scale onto next year’s hardware without a rewrite, you were not doing your job, because the devices were scaling so fast that you would otherwise have to rewrite your code to take advantage of them.

So you really do need to think ahead; everything you build will only evolve faster and faster. The biggest lesson of my engineering career is that you can pick a cleverly designed algorithm and it can turn out to be the wrong choice, because its benefits become negligible as the hardware scales, and the complexity of implementing it ends up being wasted time. If you just do the very simple thing and scale with cores, you can probably already get 95% of the way there.

Solana’s Building Concept

a16z crypto: Using Proof of History as a way to synchronize time across validators is a very groundbreaking idea, which is why Solana is different from other consensus protocols.

Anatoly Yakovenko: This comes back to Amdahl’s Law, and it is why it is hard for others to replicate Solana’s latency and throughput. Classic consensus implementations are built around a step function: the entire network, as in Tendermint, must agree on the contents of the current block before moving on to the next one.

A cell tower just follows a schedule; all it has to do is transmit. Because there is no step function, the network can run fast. I think of it as a kind of synchronization, though I am not sure that is the right word: producers transmit continuously and never stop to wait for consensus to run. We can do this because we have a strict notion of time. To be honest, you could build clock-synchronization protocols to get the same redundancy, but it would be very painful, a huge project that depends on reliable clock synchronization.

That is the idea behind Solana. Before I started building it, I enjoyed trading, market making, and so on, although I never made any money. At the time, the “Flash Boys” era was in full swing in traditional finance: whenever I thought my strategy was good enough, my orders would be delayed, take longer to reach the market, and my data would arrive more slowly than everyone else’s.

I think if we want to disrupt the financial industry, the basic goal of these open business systems is to make that situation impossible. The system is open, anyone can participate, and everyone knows how access is gained and how rights such as priority or fairness are granted.

Within the limits allowed by physics and achievable by engineers, achieving all of this at the fastest speed is what I think is the fundamental problem. If blockchain can solve this problem, it will have a significant impact on other parts of the world, benefiting many people globally. It could become a cornerstone that you can use to disrupt advertising transactions and monetization models on the internet, and so on.

a16z crypto: I think there is an important distinction between pure latency and malicious activity, especially within a single state machine. Could you elaborate on which one you think is more important and why?

Anatoly Yakovenko: It is impossible to make the entire state atomic, because that would mean a single global lock over the whole state, and you would end up with a very slow sequential system. But you do need atomic access to pieces of the state, and that needs to be guaranteed. It is hard to build software against remote state that is not atomic when you do not know what side effects it will have on your computation. So the idea is that you submit a transaction and either it executes completely or it fails completely, with no side effects. That is a property these computers must have; otherwise I do not think you can write reliable software for them. You simply cannot build reliable logic, or financially reliable logic.
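A toy sketch of that all-or-nothing property (hypothetical account names, nothing Solana-specific): writes are staged against a copy of the touched state and committed only if every instruction succeeds, so a failure leaves no side effects behind.

```python
# Minimal sketch of atomic, all-or-nothing transaction execution.
state = {"alice": 10, "bob": 0, "fee_pool": 0}

def transfer(accounts, src, dst, amount):
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount

def execute_atomically(state, instructions):
    staged = dict(state)              # work on a private copy of the touched state
    try:
        for instr in instructions:
            instr(staged)
    except Exception:
        return False                  # failure: no side effects reach global state
    state.update(staged)              # success: commit every write at once
    return True

# The second instruction fails, so the fee paid in the first is rolled back too.
ok = execute_atomically(state, [
    lambda s: transfer(s, "alice", "fee_pool", 1),
    lambda s: transfer(s, "alice", "bob", 100),
])
print(ok, state)   # False {'alice': 10, 'bob': 0, 'fee_pool': 0}
```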

You may be able to build an eventually consistent system, but I think that is a different kind of software. So there is always a tension between atomicity and performance, because guaranteeing atomicity ultimately means that at any given moment you have to pick one specific writer, globally, for a specific piece of state. To do that you need a single sequencer that linearizes events, and that creates points where value can be extracted and where fairness has to be engineered. These are genuinely hard problems, and not only Solana but Ethereum and its rollups face them as well.

Solana and Ethereum

a16z crypto: One of the frequently debated issues, especially in the Ethereum community, is the verifiability of execution, which is very important for users because they do not have very powerful machines to verify activities in the network. What is your opinion on this?

Anatoly Yakovenko: I think the ultimate goals of these two systems are very similar. If you look at the Ethereum roadmap, you will find that its idea is that the overall network bandwidth is greater than that of any single node, and the network is already processing more events than any single node can compute or handle. You have to consider the security factors of such a system. There are also protocols for publishing fraud proofs, sampling schemes, etc., all of which are actually applicable to Solana as well.

So if you take a step back, there is actually not much difference. You have a system, like a black box, producing more bandwidth than a random user can practically handle, so users have to rely on sampling techniques to be sure the data is genuine, plus a robust gossip network that can spread fraud proofs to all clients. The guarantees Solana and Ethereum provide are the same. I think the main difference is that Ethereum is largely constrained by its narrative as a global currency, in particular its narrative of competing with Bitcoin as a store of value.

I think it makes sense to let users run very small nodes, even if they only partially participate in the network, instead of having the network run entirely by professionals. Frankly, I think this is a fair optimization: if you do not care about execution and only care about settlement, why not minimize node requirements and let people participate partially? But I do not think doing this creates a trust-minimized or absolutely secure system for the majority of people in the world. People still have to rely on data availability sampling and fraud proofs, and they only need to verify the signatures of the supermajority of validators to check whether the chain has done anything wrong.

On Solana, a single transaction declares every piece of state it touches. It can be verified against the supermajority’s signatures on any device, such as a browser on a mobile phone, because all state access on Solana is declared up front. So it is actually easier to build this on Solana: in the EVM, a smart contract can access any state and jump around arbitrarily during execution. In a way it is almost simpler here. But from a high-level perspective, users still have to rely on DAS and fraud proofs, and in that respect all the designs are the same.
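Because each Solana transaction declares up front which accounts it will read and write, non-conflicting transactions can be executed in parallel. Below is a simplified sketch of that scheduling idea (hypothetical transactions and account names; not the runtime’s actual algorithm): transactions whose write sets do not overlap other transactions’ read/write sets land in the same parallel batch.

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    name: str
    reads: set = field(default_factory=set)
    writes: set = field(default_factory=set)

def schedule(txs):
    """Greedily group transactions into batches with no read/write conflicts."""
    batches = []
    for tx in txs:
        placed = False
        for batch in batches:
            conflict = any(
                tx.writes & (other.reads | other.writes) or other.writes & tx.reads
                for other in batch
            )
            if not conflict:
                batch.append(tx)
                placed = True
                break
        if not placed:
            batches.append([tx])
    return batches

txs = [
    Tx("swap USDC/SOL", reads={"pool_usdc_sol"}, writes={"pool_usdc_sol", "alice"}),
    Tx("transfer",      reads=set(),             writes={"bob", "carol"}),
    Tx("swap USDC/SOL", reads={"pool_usdc_sol"}, writes={"pool_usdc_sol", "dave"}),
]
for i, batch in enumerate(schedule(txs)):
    print(f"batch {i}: {[t.name for t in batch]}")
# The two swaps touch the same pool account, so they land in different batches;
# the unrelated transfer runs alongside the first swap.
```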

a16z crypto: I think a difference lies in zero-knowledge validity proofs versus fraud proofs. You seem to think zkEVMs are almost impossible to audit and will not mature in the next few years. Why didn’t Solana prioritize zero-knowledge and validity proofs the way Ethereum has?

Anatoly Yakovenko: I think there are two parts to this. One is how we prioritize them: there is a company called Light Protocol building zero-knowledge proofs for applications. The proofs are fast; users do not notice them when interacting with the chain.

In fact, you can combine them. You can have a Solana transaction call five different zk programs. So this environment can save computational resources for users, or give users privacy, but it does not really prove the entire chain. The reason I think proving the entire chain is hard is that zero-knowledge systems do not handle long runs of sequential state dependencies well, and the most extreme example is the VDF (verifiable delay function). When you try to prove sequential, recursive SHA-256, it blows up, because the ordered state dependencies in execution massively increase the number of constraints the system has to satisfy, and proving takes a long time. I do not know what the current state of the art is; the latest result I saw on Twitter was that proving a 256-byte SHA takes about 60 milliseconds, which is a long time for a single instruction.

So sequential computation, classical computation, is necessary. And in an environment designed for execution there are a lot of markets, so you actually have a lot of sequential dependencies: a market gets hot, everyone submits transactions against the same trading pair, and everything around that pair depends on it. So in execution these sequential dependencies run very deep, which makes the proving time very long.

Solana does not prevent someone from running a zero-knowledge prover recursively to prove the entire computation, if that turns out to be feasible. But what users need is for their transaction to be written into the chain quickly, in microseconds or milliseconds, along with fast access to the state and some guarantees about it. That is where the benefit comes from.

So I think we need to solve that problem first, which means being practically competitive with traditional finance. If we achieve that, then we can start looking at zero knowledge and figure out how to provide those guarantees to users who do not want to verify the chain or rely on those events, perhaps once every 24 hours or something like that. I think they are two different use cases: first we must truly solve the market-mechanism problem, and then serve the long tail of other users.

a16z crypto: It sounds like you mean that validity proofs and ZK proofs are very good in terms of settlement, but they are not helpful in terms of execution because the latency is too long and their performance needs to be improved.

Anatoly Yakovenko: So far it has held true. It is my intuition, and the reason is simple: the more active a chain is, the more hotspots it develops. The workloads are not perfectly parallel pieces that never interact with each other; it is not just a bunch of embarrassingly parallel code.

a16z crypto: Another counterargument could be that zero-knowledge proofs are improving exponentially because so much is being invested in the area. Maybe in 5 or 10 years the overhead drops from today’s roughly 1000x to something feasible. You have a background in hardware engineering, and I would love to hear your thoughts on having one node do the computation and generate a proof, then distribute that proof to everyone else, which may be more efficient than every node redoing the computation itself. What do you think of that view?

Anatoly Yakovenko: That trend does help zero-knowledge systems. But more and more is happening on chain, and my intuition is that the number of constraints will grow far faster than you can add hardware, and keep growing as you keep adding hardware. My feeling is that as demand increases, as the computational load on the chain grows, it will get harder and harder for zero-knowledge systems to keep up at low latency. I am not even sure it will ever be 100% feasible. Most likely you can build a system that handles very large recursive batches, but you still have to run classical execution and take snapshots every second, then spend an hour of compute in a large parallel farm proving between each pair of snapshots and resume from there. That takes time, which I think is the challenge.

I am not sure ZK will catch up unless demand stabilizes, but I do think demand eventually levels off. Assuming hardware keeps improving, at some point demand for cryptocurrency saturates, just as Google searches per second have saturated. Then you will start to see it happen. I think we are still far from that point.

a16z crypto: Another major difference between the two models is Ethereum’s Rollup-centric worldview, which is essentially sharding of computation, data availability, bandwidth, and network activity. You can imagine eventually reaching larger throughput, because you can add Rollups almost without limit on top of the base layer, but that means compromising on latency. So which matters more, the overall throughput of the network or access latency? Or both?

Anatoly Yakovenko: I think the main problem is that you have Rollups and sequencers, and people will extract value from how those sequencers and Rollups are built. In this system you more or less end up with some shared sequencers, and their operation is no different from Citadel, Jump, brokers, traders, and so on: they are all routing order flow. Those systems already exist, so this design does not actually break the monopoly. I think the better path is to build a completely permissionless commercial system that keeps intermediaries from inserting themselves, and to start capturing the value of a globally synchronized state machine.

And the actual cost of using it will very likely be lower, because the alternative is like creating a bunch of separate small channels (pipes).

Generally, any given channel is priced on its own remaining capacity, not on the network’s overall capacity, and it is hard to build a system that truly shares bandwidth across channels. You can try to design Rollups so that blocks can be placed anywhere, but they will still compete and bid against each other. It is simpler to have one huge pipe whose price is based on the remaining capacity of that single pipe: because it aggregates all the bandwidth, its pricing ends up lower while its speed and performance end up higher.
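A toy numerical illustration of that pricing argument (the fee curve and the demand numbers are invented, not any real fee mechanism): when capacity is split into independent channels, the hot channel prices against its own small headroom, while one aggregated pipe prices the same total demand against all of the capacity.

```python
BASE_FEE = 1.0

def price(demand, capacity):
    """Toy fee curve: the fee grows as remaining headroom shrinks."""
    utilization = min(demand / capacity, 0.99)
    return BASE_FEE / (1.0 - utilization)

total_capacity = 100_000          # e.g. transactions per second (assumed)
demand = {"dex": 20_000, "nft": 3_000, "payments": 2_000, "games": 1_000}

# Sharded: four channels of 25k each, each priced on its own headroom.
for name, d in demand.items():
    print(f"{name:9s} fee = {price(d, total_capacity / 4):.2f}")

# Aggregated: one pipe with the whole capacity behind all of the demand.
print(f"aggregated fee = {price(sum(demand.values()), total_capacity):.2f}")
# The hot 'dex' channel prices around 5x the base fee, while the aggregated
# pipe carrying the same total load stays near 1.35x.
```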

Block Space and the Future

a16z crypto: I remember you saying that you don’t think the demand for block space is infinite. Do you think that when web3 achieves mainstream adoption, the demand for block space will reach a balance point?

Anatoly Yakovenko: Imagine telling the engineers at Qualcomm that people’s demand for cellular bandwidth is infinite and that the code should be designed for infinity. That would be absurd.

Instead, you pick a design target for that demand: how much hardware is needed, whether you need to build it at all, what the simplest implementation is, what it costs to deploy, and so on. My intuitive guess is that 99.999% of the most valuable transactions need less than 100,000 TPS. And a 100,000 TPS system is actually quite feasible; current hardware can do it, and the hardware Solana runs on can do it. I think 100,000 TPS may be enough block space for the next 20 years.

a16z crypto: Could it be that block space is so affordable that people want to use it for all sorts of things, so the demand for it will soar?

Anatoly Yakovenko: But there is still a floor price. What you pay has to cover the bandwidth cost of every validator, since egress cost dominates the cost of validation. If you have 10,000 nodes, you probably need to price each byte the network carries at roughly 10,000 times the normal egress cost, which sounds expensive.

a16z crypto: So I guess my question is, do you think at some point Solana will reach its limit, or do you think the monolithic architecture is sufficient?

Anatoly Yakovenko: So far, people have done sharding because they built systems with much lower bandwidth than Solana, hit capacity limits, and saw users start bidding for bandwidth at prices far above the egress cost. As for the egress cost across 10,000 nodes: the last time I checked, it worked out to about $1 per megabyte for Solana validators. That is the floor price. You cannot stream video over it, but it is cheap enough for search; you could basically put every search query on chain and get the results back from your search engine.
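A back-of-the-envelope version of that floor-price arithmetic (the $0.10/GB egress figure is an assumption for illustration; the 10,000-validator count and the roughly $1/MB result come from the interview):

```python
# Assumed raw cloud egress cost of about $0.10 per GB; every on-chain byte is
# re-broadcast by (roughly) every one of N validators.
egress_cost_per_mb = 0.10 / 1024        # ~ $0.0001 per MB of raw egress (assumption)
validators = 10_000

floor_price_per_mb = egress_cost_per_mb * validators
print(f"floor price ~ ${floor_price_per_mb:.2f} per MB of block space")
# ~ $0.98 per MB: each on-chain megabyte has to cover about 10,000 megabytes of
# outbound bandwidth across the validator set, matching the ~$1/MB figure quoted.
```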

a16z crypto: I think this is an interesting perspective, because we opened the podcast with the question of what the ultimate goal of blockchain scalability is, which presumes that scalability is the most important issue for blockchains.

Chris has used this analogy before: the progress of artificial intelligence over the past decade is largely attributable to better hardware, which is the real key. So when we talk about blockchain scalability, the purpose is the same: if we can get a large increase in TPS, everything will work. But an interesting counterargument is that Ethereum only does about 12 transactions per second, yet Ethereum itself still has more throughput than any single L2, with relatively high transaction fees, while on Solana many simple transfers cost very little. When we discuss this, we usually conclude that if throughput reaches the next order of magnitude, there will be many new applications we cannot currently reason about. And yet, to some extent, what has been built on Solana over the past few years looks quite similar to what has been built on Ethereum.

Do you think higher throughput or lower latency will unleash a lot of new applications? Or will most of the things built on the blockchain in the next 10 years be very similar to the designs we have already proposed?

Anatoly Yakovenko: Actually, I think most applications will be very similar. The hardest part to crack is how to establish a business model, such as how to apply these new tools. I think we have found these tools.

The reason Ethereum transactions are so expensive is that its state is very valuable. When you have that state, anyone can write to it, and people will pay the economic opportunity cost of being the first to write to it, all of which effectively drives up the cost. That is why valuable transaction fees arise on Ethereum. Many applications need to create this kind of valuable state so that people keep wanting to write to it and start bidding fees up.

a16z crypto: Here I will offer a counterpoint. I think we easily underestimate the creativity of the developers and entrepreneurs in this field. If you look back at history, at the first wave of the web and the internet boom of the 1990s, it took a long time for truly interesting applications to emerge. With cryptocurrencies, we did not really have a programmable blockchain until Ethereum around 2014, and things like Solana have only existed for about 4 years. People have not had much time to explore and design.

The fact is, there are still very few developers in this field. For example, we probably have tens of thousands of developers who know how to write smart contracts and truly understand the prospects of blockchain as a computer. So I think it is still early to develop interesting ideas on the blockchain. The design space it creates is so vast that I guess we will be amazed by what people will create in the future. They may not only be related to transactions, markets, or finance. They may appear in the form of shared data structures that are very valuable but fundamentally unrelated to finance.

Decentralized social networks are a good example, where the social graph is put on the chain as a public product, enabling entrepreneurs and developers to build on top of it. Because the social graph is on the blockchain and open, all developers can access it, making it a valuable state maintained by the blockchain. You can imagine people wanting to publish a large number of transactions for various reasons, such as real-time updates to this data structure. If these transactions are cheap enough, developers will find ways to leverage them.

Historically, whenever computers got faster, developers found ways to use the extra computing power to improve their applications. Our computing power is never enough; people always want more, and I believe the same will happen with blockchain computers. Maybe demand is not literally unlimited, but I believe the ceiling on demand for block space will be much higher than we can imagine.

Anatoly Yakovenko: But on the other hand, use cases of the internet were discovered early on, such as search, social graphs, and e-commerce, back in the 90s.

a16z crypto: Some things are hard to predict. Bike sharing was hard to predict, and even the form that search eventually took was hard to predict. The heavy use of streaming video inside social networks was unimaginable at the beginning.

I think, just like here, we can think of some applications that people might build on the blockchain. But given the current limitations and infrastructure constraints, some of these applications feel unimaginable. Once these constraints are lifted and more people enter this field to build, we can let our imagination run wild, and there may be many heavyweight applications in the future. Therefore, if we let it develop freely, we may be surprised at how powerful it becomes.

Anatoly Yakovenko: There is an interesting card game called “dot bomb,” the goal of which is to lose money as slowly as possible. You can’t actually win or earn money. You manage a group of different startups using 90s internet ideas. Without exception, every so-called bad idea, such as online grocery delivery and online pet stores, became at least a billion-dollar business at some point after 2010. So, I think many ideas may initially seem bad or fail in the initial implementation, but they will eventually be widely adopted in the future.

The Future Adoption of Blockchain

a16z crypto: So, the question is, what do you think is the key for blockchain to go from its current applications to becoming mainstream on the internet? If not scalability, is it other obstacles like cultural acceptance of blockchain? Is it privacy issues? User experience?

Anatoly Yakovenko: This reminds me of the development history of the Internet. I still remember how the whole experience changed. After I went to college, I got an email address, and everyone who worked had an email address. I started receiving links with various content, and then the user experience on the Internet improved. For example, Hotmail was born, and Facebook also developed.

Because of this, people’s thinking changed and they understood what the Internet was. At first, people had a hard time even understanding what a URL was, what it meant to click on something, or what it meant to hit a server. We have the same problem with self-custody. We need to help people truly understand these concepts: what does a seed phrase mean? What do wallets and transactions mean? People’s mental models need to change, and that change is happening slowly. I believe that once a user buys cryptocurrency and deposits it into their self-custody wallet, they understand it; but so far, not many people have had that experience.

a16z crypto: You created a mobile phone. Maybe you can tell us where the inspiration for making a phone came from and how you think the current promotion is going?

Anatoly Yakovenko: My experience at Qualcomm made me realize this is a bounded problem that we could solve without turning the whole company into a phone business. So for us it was a low-cost bet that might change the crypto industry or the mobile industry.

It was worth doing. We worked with a company to manufacture the device, and when we launched crypto-specific features with them, we got great feedback from people and developers, who saw it as an alternative to the app store. But a lot remains unknown, for example whether, under current macro conditions, any crypto application is compelling enough to make people switch from iOS to Android. Some people will, but not many. Launching a device is very hard; basically every device launched outside of Samsung and Apple has failed, because Samsung’s and Apple’s production lines are extremely well optimized and any new startup is far behind those giants on hardware.

So you need some almost religious reason for people to switch, and perhaps crypto is that reason. We have not proven that yet, but we have not disproven it either. We just have not yet seen the breakthrough use case where self-custody is the key feature people need and are willing to change their behavior for.

a16z crypto: You are one of the few founders who can build both hardware and decentralized networks. Decentralized protocols or networks are often compared to building hardware because they are very complex. Do you think this metaphor is valid?

Anatoly Yakovenko: It is like my earlier work at Qualcomm: a hardware problem can cause enormous trouble. If a tape-out has a bug, it can cost the company millions of dollars a day until it is fixed, which can be catastrophic. In a software company, you can identify the problem and ship a patch within 24 hours, which makes things much easier.

Community and Development

a16z crypto: Solana has done a great job in building its own community and has a very strong community. I am curious about what methods you have taken to build the company and the ecosystem.

Anatoly Yakovenko: There was an element of luck in it. We started Solana Labs in 2018, at the end of the previous cycle, and many of our competitors raised several times more money than we did. Our team was very small, and we did not have the funds to build and optimize an EVM, so we built a runtime that we believed could demonstrate the key properties: a blockchain that scales, that is not throttled by the number of nodes, and that does not suffer from high latency. We really wanted breakthroughs on those three fronts.

At the time we focused only on building this fast network and paid little attention to anything else. When the network launched, we had only a very rudimentary explorer and a command-line wallet, but the network was very fast. That was the key to attracting developers, because there was no other fast, cheap network to substitute for it, and no programmable network offering that kind of speed, latency, and throughput.

That is actually why developers built here. People could not just copy and paste Solidity code, so everything started from scratch, and building from scratch is really how an engineer onboards: if you can rebuild the primitives you know from stack A in stack B, you learn stack B end to end, and if you can accept its trade-offs, you may become its advocate.

If we had had more money, we might have made the mistake of trying to build EVM compatibility. But our engineering time was limited, which forced us to prioritize the thing that mattered most: the performance of the state machine.

My intuition is that if we remove the limitations on developers and give them a very large, very fast, very cheap network, they will remove their own constraints too. And that is exactly what happened, which is amazing and admirable. I am not sure we would have succeeded if the timing had not been right, for example if the macro environment had been different. We launched on March 16, and on March 12 both the stock market and the crypto market had crashed by about 70%. I think those few days may have saved us.

a16z crypto: Another important factor here: how do you win over developers?

Anatoly Yakovenko: It is a bit counterintuitive. Building your first program meant chewing glass; it required people to truly invest their time. We call it “chewing glass.” Not everyone will do it, but once enough people do, they build the libraries and tools that make it easier for the next developer. For developers, that is actually something to be proud of. Libraries get built naturally and the software naturally expands. That is what we really want the developer community to build and chew on, because it gives those people genuine ownership and makes them feel the ecosystem is theirs. Meanwhile, we try to solve the problems they cannot, such as long-term protocol issues.

I think this is the origin of this spirit. You are willing to chew glass because you get rewarded and you gain ownership of the ecosystem. We are able to focus on making the protocol cheaper, faster, and more reliable.

a16z crypto: What are your thoughts on the developer experience and the role that programming languages will play in this field as they gain more mainstream adoption? It’s quite challenging to integrate into this field, learn how to use these tools, and learn how to think.

In this new paradigm, programming languages may play an important role in this regard, as the security of smart contracts has become an important task for engineers in this field. The risks involved are significant. Ideally, we will eventually see a world where programming languages provide much more help through tools, such as formal verification, compilers, and automation tools, which can help you determine if your code is correct.

Anatoly Yakovenko: I believe formal verification is necessary for all DeFi applications. Many innovations happen here, such as creating new markets, which are the places with the greatest hacker threats and the places that truly require formal verification and similar tools.

I think many other kinds of applications are quickly converging on a single standard implementation whose effectiveness becomes trusted. Once a single standard exists for a class of problems, things are much easier than for a new startup building a new DeFi protocol: nobody has written that code before, so you carry a lot of implementation risk and then have to convince people to take the risk of putting money into the protocol. That is where you need all the tools: formal verification, compilers, the Move language, and so on.

a16z crypto: The programming world is changing in a very interesting way, because in the past most programming was traditional imperative programming, similar to JavaScript: you write some code, it is probably somewhat wrong, it breaks, and then you fix it.

However, more and more applications are mission-critical, and for those you need a completely different approach to programming, one that gives much stronger guarantees that the code you write is correct. On the other side, another kind of programming is emerging, machine learning, where programs are synthesized from data. Both of these are eating away at traditional imperative programming: the share of ordinary JavaScript-style code in the world will shrink, replaced on one side by code produced by machine learning and on the other by formally verified techniques that look more like mathematics.

Anatoly Yakovenko: Yes, I can even imagine that at some point you write in a formally verifiable, optimized smart contract language and then have an LLM translate it into Solidity or into Anchor on Solana. Two years ago people might not have believed that, but GPT-4 has been a step-function leap.

a16z crypto: I like this idea. You could use an LLM to generate a program specification that a formal verification tool understands, then ask the same LLM to generate the program itself, then run the formal verification tool against the program to see whether it really satisfies the specification. If it does not, the tool gives you an error, which you can feed back to the LLM to try again. Do this repeatedly and you eventually get a formally verified program.
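Here is a toy, self-contained sketch of that generate-verify-feedback loop. The “LLM” and the “verifier” below are stand-ins so the loop actually runs end to end; in practice they would be an LLM API and a real formal verification tool. The informal spec here is simply that f(x) must return the absolute value of x.

```python
CANDIDATES = [
    lambda x: x,                    # wrong for negative inputs
    lambda x: -x,                   # wrong for positive inputs
    lambda x: x if x >= 0 else -x,  # correct absolute value
]

def draft_program(attempt, feedback):
    """Stand-in for an LLM call: proposes the next candidate given feedback."""
    return CANDIDATES[attempt]

def verify(program):
    """Stand-in for a formal verifier: searches a small domain for a counterexample."""
    for x in range(-50, 51):
        if program(x) != abs(x):
            return False, f"counterexample: f({x}) = {program(x)}, expected {abs(x)}"
    return True, ""

feedback = ""
for attempt in range(len(CANDIDATES)):
    candidate = draft_program(attempt, feedback)
    ok, feedback = verify(candidate)
    print(f"attempt {attempt}: {'verified' if ok else feedback}")
    if ok:
        break
```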

Ecosystem and Talent Recruitment

a16z crypto: We were discussing how to build a strong ecosystem. Many blockchains decentralize almost immediately after launch, to the point where the core team no longer participates in forum discussions or helps other contributors get involved. But you have stayed very committed from the start, from launching the network to going to market. I think that may be a major advantage in building the Solana ecosystem.

Anatoly Yakovenko: To quote a saying, decentralization doesn’t mean no leadership, but rather diverse leadership. I still remember how difficult it was to take Linux seriously in big companies like Qualcomm, even the idea of running Linux on mobile devices seemed laughable. When I first joined, the whole community was trying to convince everyone that open source was meaningful, and I think that’s what we need to do, the network needs to be decentralized.

But that doesn’t mean there is no leadership. In fact, you need a lot of experts constantly telling people the benefits of using this particular network and its architecture, constantly getting more people to join and cultivating more leaders who can spread knowledge worldwide. But that doesn’t mean everything happens under one roof. If the network and code are open, anyone can contribute and run it. Naturally, it becomes decentralized. You will naturally see leadership emerge from unexpected places.

Our goal is to develop everything around us, to make our voice one among many, rather than silencing others. We pay a lot of attention to hackathon fans, trying to connect them with each other and involve them in this cycle. It’s like a flywheel. We try to connect people with developers from around the world, have as many one-on-one interactions as possible, and then have them all join hackathons and compete, motivating them to build their first or second product.

Among cryptocurrency users, only a few products can enter the market, obtain venture capital, and have a scalable user base. In my opinion, this means that we don’t have enough creativity. We don’t have enough founders aiming for targets and finding truly scalable business models that can reach millions of users. Therefore, we need a large number of companies to compete and see if they can come up with brilliant ideas. That is the biggest challenge.

a16z crypto: A related question is how to involve the community in developing the core protocol itself. This is one of the hardest balances for any blockchain ecosystem. On one hand, you can let the community participate actively, but your flexibility shrinks and governance involves more people, so coordination gets harder. On the other hand, you can run things in a more top-down way and move faster, but then community involvement suffers. How do you strike the balance?

Anatoly Yakovenko: Generally speaking, when I worked with the foundation, we saw people actively contributing to whatever they wanted to work on. Then they go through a proposal process with a grant or something similar attached. It is similar to an interview process: when I hire for the labs, sometimes the culture does not fit the person, or something else does not work out; it does not mean the person is bad, just that it did not work. Likewise, you will find engineers who have already been submitting code and contributing to the codebase; they already know the culture of getting code merged and working in the open. When you find people who can solve problems on their own, you can fund them, and that matters a lot for finding genuinely excellent people who will keep contributing for the long term.

a16z crypto: What do you think is the best way to run decentralized governance protocols today?

Anatoly Yakovenko: For the L1, the approach we take seems to work: like Linux, keep moving forward and avoid vetoes from any participant as much as possible. It takes the path of least veto. Honestly, there are many participants who could veto any given change because they think it is bad or should not be made. But making the system faster, more reliable, and lighter on memory, nobody objects to those changes.

Ideally, we have a process where you post a design and everyone spends three months discussing it, so before anything is merged, everyone has had plenty of opportunity to look at the code and decide whether it is good. That may sound lengthy, but it really is not. If you have worked at a big company like Google or Qualcomm, you know you have to talk to a lot of people, push the change, make sure all the key stakeholders, the people who actually touch the codebase, can accept it, and then gradually land it. Sweeping changes are harder, because many smart people are looking at the same thing, and they may genuinely find mistakes before anything is finally decided.

a16z crypto: How do you approach talent recruitment?

Anatoly Yakovenko: On the engineering side our bar is high and we usually hire very experienced people. My approach to recruiting is that, early on, I put the effort into a problem myself so that I know how it should be done, and then I show the new hire how I did it. I do not expect them to master it within 90 days or to surpass me immediately. I can evaluate them in the interview by walking through the problem I am currently solving; I need someone to take it over so that I can work on the unknown problems. In a startup, if you are the CEO, it is best not to hand someone an unknown problem, because you do not know whether they can solve it.

When the ecosystem develops to a certain point, a project manager (PM) is needed. At one point I was spending too much time answering questions, still answering them at two in the morning, and I realized someone else should be doing this; by then I knew exactly what the job involved.

a16z crypto: How important do you think privacy is for blockchain in the future?

Anatoly Yakovenko: I think the whole industry will undergo a transformation. First, some visionary people will pay attention to privacy, and then suddenly, large payment companies or other companies will adopt this technology, and it will become a standard. I think it needs to be a feature – if you don’t have this feature, you can’t compete. We have not reached the level of market maturity yet, but I think we will. Once many people start using blockchain, every merchant in the world will need privacy. This is just the minimum requirement.

a16z crypto: What impact does the Solana architecture have on MEV? Does the leader have too much authority to reorder transactions?

Anatoly Yakovenko: Our initial idea was to have more than one leader per slot. If you get as close to the speed of light as possible, roughly 120 milliseconds around the globe, you can run a discrete batch auction every 120 milliseconds globally. Users can pick among all available block producers: the nearest one, or the one offering the biggest discount. In theory this may be the most financially efficient way to operate: either I optimize for latency and send to the nearest block producer, or I accept extra latency in exchange for the biggest discount. This is a theory we have not tested yet, since we do not have multiple leaders per slot, but we are getting closer to that goal and I think it may be feasible, maybe next year.

I think once we achieve this solution, we will have a very powerful system that can basically force competition and minimize MEV.
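A toy sketch of the routing choice that design would give users (hypothetical leaders, latencies, and discounts; as noted above, the multi-leader design itself is untested):

```python
# With several concurrent block producers per slot, a user either routes to the
# closest one (lowest latency) or to the one offering the best fee discount.
leaders = [
    {"name": "tokyo",     "rtt_ms": 12,  "fee_discount": 0.00},
    {"name": "frankfurt", "rtt_ms": 95,  "fee_discount": 0.15},
    {"name": "sao_paulo", "rtt_ms": 160, "fee_discount": 0.40},
]

fastest  = min(leaders, key=lambda l: l["rtt_ms"])
cheapest = max(leaders, key=lambda l: l["fee_discount"])
print("latency-sensitive order ->", fastest["name"])
print("price-sensitive order   ->", cheapest["name"])
```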

a16z crypto: What is your favorite system optimization in the Solana architecture?

Anatoly Yakovenko: My favorite is the way we propagate blocks. It was one of our early ideas and something we really needed. The system can have a very large number of nodes and transmit a huge amount of data, yet the egress each node has to bear, its outbound load, stays fixed and bounded.

At a high level, when a leader creates a block it slices it into shreds and generates erasure codings over those fragments. It then sends each fragment to a different node, which retransmits it to other nodes in the network. Because the data is mixed with the erasure codes and the number of nodes propagating it is so large, the data is highly reliable once it has been received; you would need something like 50% of the nodes to fail, which is extremely unlikely. So it is a very cool optimization, with very low cost and high performance.
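A simplified sketch of why per-node egress stays bounded under this kind of tree fan-out (the fanout, block size, and shred size below are made-up numbers; the erasure coding that lets receivers reconstruct a block from a subset of shreds is omitted here): the per-node egress bound comes out the same whether the network has a thousand nodes or a hundred thousand.

```python
import math

def propagation_stats(num_nodes: int, fanout: int, block_bytes: int, shred_bytes: int):
    shreds = math.ceil(block_bytes / shred_bytes)
    hops = math.ceil(math.log(num_nodes, fanout))   # tree depth needed to reach everyone
    # Each interior node forwards each shred it receives to at most `fanout` peers,
    # so its egress per block is bounded regardless of total node count.
    per_node_egress = shreds * shred_bytes * fanout
    return shreds, hops, per_node_egress

for n in (1_000, 10_000, 100_000):
    shreds, hops, egress = propagation_stats(
        n, fanout=200, block_bytes=128_000, shred_bytes=1_280
    )
    print(f"{n:>7} nodes: {shreds} shreds, ~{hops} hop(s), "
          f"per-node egress <= {egress / 1e6:.1f} MB per block")
```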

a16z crypto: How do you view the future application development of cryptocurrencies? How will users who are not familiar with blockchain adopt blockchain in the future?

Anatoly Yakovenko: I think payments are one of the breakthrough applications, because using cryptocurrency for payments has obvious advantages over traditional systems. I believe that once regulation is in place and Congress passes a few bills, payments become a breakthrough use case. And once we have payments, the other side develops as well: social applications, whether messaging apps or social-graph apps. They are growing slowly right now, but I think they are in a golden window to take off and reach significant numbers.

Once they become mainstream adoption products, it is possible to iterate, understand what people really want, and provide them with these products. People should use products for their utility, not for the tokens.

a16z crypto: What advice do you have for builders in this field or builders outside of this field? Or for those who are curious about cryptocurrencies and Web3?

Anatoly Yakovenko: What I want to say is that now is the best time. The current market is relatively quiet on a macro level, with not much noise, so you can focus on the fit between your product and the market. When the market turns around, these discoveries will greatly accelerate your development. If you want to work in the field of artificial intelligence, you shouldn’t be afraid to start an AI company or a cryptocurrency company or any other company right now. You should try and build these ideas.

But what I want to say is that people should try to create bigger ideas instead of repeating what already exists. The best metaphor I have heard is that when people discovered cement, everyone focused on building bricks with cement. Then someone thought, maybe I can build skyscrapers. They found a way to combine steel and cement into a kind of architecture no one could have imagined. The new tool is cement; you just need to figure out what the skyscraper is and then go build it.
