Dialogue with Taiko Co-founder Daniel: How Will the zk L2 Scalability Debate Ultimately Evolve?

Taiko co-founder and CEO Daniel Wang (Wang Dong) discusses the ultimate scalability of Ethereum, the development direction of Layer 2 and its classification standards, and the impact of the Cancun upgrade on the competitive landscape of Layer 2.

Interview and compilation: 0xRJ.eth

Article compiled from recent interviews with Wang Dong: https://youtu.be/iUqmElZyTnA?si=jN-7dovHteftxmuV

Article overview:

  • Founder’s journey

  • The reasons behind Taiko’s decentralized choice (and the balance found between security, efficiency, and decentralization)

  • The impact of the Cancun upgrade on the competitive landscape of Layer 2: pros and cons

  • Opportunities for Taiko to overtake

  • The ultimate scalability of Ethereum

  • The development direction of the “classification standards” for zk Layer 2 track

  • Retrospect and outlook: the most correct decisions and the biggest regrets of these years

Founder’s journey

RJ: Hello everyone, I’m RJ. Today, I’m very pleased to invite Daniel, the founder and CEO of Taiko, a Layer 2 scaling network based on zero-knowledge proofs for Ethereum, to have a conversation about the ultimate scalability of Ethereum, the competition landscape of the Layer 2 track, and the development direction.

Daniel is a former Google senior engineer and former senior R&D director at JD.com. He is also the founder of Loopring, the earliest transaction-matching protocol based on ZK Rollup on Ethereum. Personally, I think it is very important, here in 2023, when the L2 track is drawing so much attention and Ethereum's major Cancun upgrade is still ahead, to understand the views of an OG of the Rollup track and the thinking behind them.

So why don’t we start with your journey? I believe that you are not unfamiliar to most people in the cryptocurrency circle. Everyone knows that you are the founder of Loopring. But I think people may not know how you went from traditional giants like Google and JD.com to being exposed to crypto and entering this track, and what opportunities led you to start Loopring, and now to start a Layer 2 network for Ethereum based on zero-knowledge proofs. Whether it’s the projects you do or the public chains, what is your vision?

Daniel: The initial idea was to accumulate enough experience in traditional Web2 companies like Google and then start my own big Web2 company. Many ambitious young people at that time had this idea. The state of things back then was “copy to China”, meaning whatever was in the foreign market, we would copy it to China. But now things are different, many Americans are copying things from China.

At the beginning, there wasn’t much idealism involved. But later on, when I chose blockchain, I felt that on one hand, the work in traditional big companies became a bit boring, especially in China’s internet companies, which are not completely technology-driven. So as a technical person, it didn’t feel very comfortable. There were various people and management issues, and blockchain gave me a refreshing feeling. Suddenly, you realize that engineers can change the world. Of course, there is still a long way to go before changing the world, but it ignites a small hope in you. So after spending some time at JD.com, I felt like the story of Web 2.0 wasn’t suitable for me. I wanted to use technology to change the world.

At that time, the concept of blockchain wasn’t widely mentioned. It was mainly about Bitcoin and how it could change traditional finance, whether it could allow people in third world countries to have their own wallets. Before starting my first blockchain company, I actually visited Kenya and Nigeria in Africa. People there used their mobile phones as their wallets, although the amount of money in their wallets was very small and the transfer fees were very high. But they could make payments through text messages, so Africa has already developed a solution for micro-payments, even though it may sound basic. But the demand for this is huge, and the market is also very large.

Perhaps around 2014, when I went to Africa, I was inspired and decided to try blockchain. So the first blockchain company I started was a Bitcoin exchange. At that time, I didn’t really touch the underlying blockchain technology. In fact, what I did was similar to what Coinbase and Binance are doing now, bringing Bitcoin to the general public.

It wasn’t until I worked on Loopring that I thought about using blockchain technology to do things in the traditional financial sector, to change the middlemen in finance. Why do financial practitioners make so much money? It’s because the interests in the middle are huge, and the fees are very high. So you see that big banks make a lot of money, financial companies and asset management companies make a lot of money. So our starting point at that time was to eliminate this middle layer through blockchain, so that end users could directly transact with each other through the trustless mechanism of blockchain.

Building Taiko actually aims at a bigger goal: whether we can create a general platform for others to build applications on. This path was explored step by step and accumulated gradually. Although I have an engineering background, it would have been unimaginable for me to suddenly build a (general-purpose) Layer 2 without that experience. So the journey we have taken over the years was a necessary path to where we are now.

The reason behind the decentralization choice of Taiko

RJ: Brecht, the CTO and co-founder of Taiko, mentioned that the current scalability solution Taiko is working on was derived from a problem they faced during their time at Loopring.

Daniel: When I was at Loopring, we actually had the idea of building an app. We wanted to create a decentralized matching system, but no matter how we changed our approach, the cost was very high. At one point, it could cost over a hundred US dollars for a transaction.

So we asked ourselves whether this could really change the world. Scalability came out of that initial entrepreneurial problem: while building the decentralized matching system at Loopring, we realized after some time that the cost was very high, and we had to ask whether such a solution could truly replace traditional exchanges. Traditional exchanges have their trust and security issues, but they are also very cheap.

So cost is a very important factor for users. We found that projects like 0x, which were considered good at the time, were not widely accepted by users: they were seen as good demos or prototypes, but they couldn't replace existing Web 2.0 products. So we decided to try a different approach. Our core idea was that we shouldn't just please others; we should dare to do things others haven't thought of. At Loopring we said: our funds are limited, and we only have about 20 people, maybe even fewer, a dozen or so. We decided to try using zero-knowledge proofs to achieve scalability with fewer resources, not knowing how far we could go. At that time we didn't even use the term ZK Rollup, or even the term Rollup itself, which was extremely niche.

We were close to launching our project. Once, when I was in New York attending a meeting organized by Consensys in Brooklyn, Vitalik was also there. During my keynote speech, I emphasized that Loopring’s performance was excellent. However, many people doubted it because we were a Chinese team presenting these numbers. At that time, around 2019 or so, many Chinese teams were going around saying they were building a new blockchain with a performance of one million transactions per second, which was very exaggerated.

So at that time, I felt our reputation wasn't very good; the overall Chinese developer community was too prone to exaggeration and too capital-driven. At that meeting, I said our performance was several hundred transactions per second. This was already a significant scalability improvement over Ethereum, and our cost at the time was one or two US cents per transaction. Many people listened and thought I was trying to deceive them. After the keynote, I met Vitalik at the venue; he was interested in scalability and followed it closely. It was then that we realized that what we were doing was what he called ZK Rollup. ZK Rollup was not Vitalik's own idea, though; it came from Barry Whitehat. But at that time no one had implemented it, and we were actually the first to do so, and in the more orthodox way, putting the data on the layer-1 network. So we were forced into it, and unintentionally became the first ZK Rollup. However, our thinking at the time was not bold enough, the technology was perhaps not mature enough, and there were few available code libraries. The Rollup we had then was relatively simple; we had to make many trade-offs, and there were many things we couldn't implement, including programmability.

Because the challenge was too great, at the time it wasn't that we didn't try; we simply didn't dare to think about it. Before Taiko there was an evolutionary process. My idea at the time was to create a social network, to get more people to use blockchain, and to change people's lives to some extent.

But I asked myself a question: if we build this thing, where should we put it? Currently, the best dapp platform is Ethereum, or an Ethereum Layer 2 network or sidechain (although people don't talk about sidechains much anymore). About a year and a half to two years ago, the only option was to build on Ethereum or, for example, on Optimism (OP), which many people are familiar with.

These Optimistic Rollup chains are relatively centralized, so they are not suitable for decentralized networks, especially for social networks that require censorship resistance; such apps cannot be implemented on those sidechains or L2s. So later we considered continuing to develop the application and waiting a few years until L2 became more decentralized and secure before launching. The other option was to double down: after all, we had the Loopring experience and some knowledge of Rollups, so we considered building a general-purpose L2 ourselves. That's why I say building a general-purpose L2 and implementing a Rollup was actually a byproduct, or a last resort, that brought us to this point.

Actually, building an app is very interesting. After building an app, your community becomes active, with sharing, pictures, and videos; as a user, it is comfortable to use. Building infrastructure, by contrast, is a relatively boring task. Of course, you can talk about how many users it will support in the future, but in daily use, you won't be transacting on that chain every day.

My biggest interest is actually in building social applications. But now that we have chosen the path of a general-purpose L2, I think it's also good, because it allows others to build all kinds of applications on top of it.

RJ: I actually find this very interesting. It means that because you originally started as a protocol/DApp, from the perspective of developers, the bottlenecks, difficulties, and needs you encountered actually represent the real needs of the market. This also explains why Taiko chose to be completely decentralized from the beginning, and is completely decentralized in terms of proposers. Today, talking with you about this part of your journey has helped me better understand why Taiko has such persistence.

Daniel: Yes, the degree of decentralization required for an app depends on its characteristics and positioning. If it is just an app without the need for censorship resistance, we won’t propose such requirements. Social networks have this need, so it forced us to do it this way. But there is actually another reason why I don’t believe an L2 can achieve decentralization when built in a centralized manner. There are two paths to take. The first path is to start with decentralization and gradually improve performance. The second path is to start with centralization, have good performance, and gradually decentralize. I think the first path is relatively easier to take. Not to say it’s very easy, but relatively easier. The second path is like making Alibaba Cloud very powerful and then immediately decentralizing it, which is extremely difficult. Because it may involve more changes to the underlying infrastructure. So we decided at that time that even if the performance is not as good as others, we must achieve true decentralization. At least from the perspective of protocol design, it is fully capable of being decentralized.

But whether it is completely decentralized on the first day after going online is also a question mark. After all, the priority of user asset security is much higher than the degree of decentralization. So we still need to ensure cautious decentralization under the premise of a relatively large codebase, because decentralization means removing our own responsibility and control. But in case of a hacker incident, saying “I am powerless” is not a good explanation that the community can accept. As a user, I would rather you have control first, and when there is a hacker attack, you can recover my assets, and then slowly give up your rights. But in this process of giving up, the network infrastructure does not need major changes, because it originally supports decentralization. It’s just that it needs to be gradually configured to completely abandon centralized control. So I think Taiko will not be completely decentralized after going online, because asset security is too risky for users.

RJ: It means finding a balance between security, efficiency, and decentralization, and adjusting (and maximizing) the degree of decentralization from different dimensions. For example, in the overall mechanism design, it initially supports complete decentralization, and the subsequent speed of decentralization is more of a choice. And after balancing these three aspects, Taiko can achieve the maximum degree of decentralization in the initial stage—especially in terms of the proposer mechanism. As far as I know, Taiko should be the first and currently the only L2 that is completely decentralized in the proposer aspect from Day 1 to now. (Essentially, it is designed for complete decentralization from the beginning.)

Daniel: We are also exploring. It is actually difficult to quantify how decentralized a network is with a single indicator. Look at Ethereum now: the validators are also quite centralized, like Lido's pool, which controls or influences a great deal of validator resources. So it is difficult to say who is more decentralized than whom. But I think we allow a greater degree of decentralization. For example, suppose more people are willing to participate in producing Taiko's blocks or proving them: first of all, it is permissionless, meaning they can join without our permission. Whether they end up competing or colluding is hard to control at the blockchain's protocol layer, but if 100 provers want to compete instead of cooperate, the protocol allows them to compete, driving costs lower and lower. There may also be a supplemental design on top of the protocol that incentivizes provers to compete rather than collude; we have not found such a solution yet. In general, as long as you are permissionless, you allow the possibility of decentralization; the second step is strengthening that decentralization through incentives, so that ideally provers earn more by not colluding. Whether that can become reality is still a question mark; we don't know yet.

Regarding the proposer aspect, we are currently possibly the only based rollup. In fact, it is highly unlikely that there will be dedicated, professional Taiko proposers; instead, L1 miners/validators also serve as L2 proposers. This way they can earn more, and the entire solution leans heavily on Ethereum, so we don't have to reinvent the wheel. Taiko can share a similar validator pool, so a separate MEV solution is not necessary. Because of this money-making opportunity, L1 validators are already discussing how L2 MEV should be handled. So we tell L1 validators that MEV opportunities on L2 are theirs to seize: as long as they do it well and propose in a timely manner, L2 users will benefit.

So roughly speaking, there won’t be a comprehensive solution, but rather a small and beautiful one. For example, we try to reuse L1’s infrastructure as much as possible, so that our progress will be lighter. Otherwise, the team would need many more people, and too complex a solution would increase the probability of errors.

The Future of Ethereum Scalability

RJ: The Type1 ZK-EVM design is actually a very clever choice for the team. It is also very friendly to developers and gives the project great long-term potential.

Daniel: We definitely believe in it; as for the final outcome, we'll have to wait and see. From a longer-term perspective, whether Type1 itself is that important is open: the L2 standards that app developers target should evolve gradually. For example, ZK-EVM may develop a standard for which opcodes are supported and which are not, and its state tree may settle on a particular format. ZK-VMs may all use a certain language, like Starkware's Cairo or another popular one, and the underlying VM will become an execution environment that competes with or complements Ethereum's EVM. So there may be one or two standards in place, and all L2s will converge towards them.

In the future, the programming experience on Ethereum itself will not matter as much, because the applications on Ethereum may just be Rollups; other applications may not be willing to pay such high transaction fees. Ethereum's own programming experience will therefore gradually stop representing the programming experience of all apps. I think this shift will become increasingly clear. It's just that currently, Ethereum cannot completely give up its role as an app platform. Once L2s take over, Ethereum may make certain opcodes very expensive, so regular users cannot afford them and only L2s will use them. By then, Ethereum may be more mature, and as L1 its programming experience should also be simplified; not everything needs to be possible there.

RJ: You just mentioned “when the entire L2 Rollup comes up”, when do you think this might happen?

Daniel: In my opinion, it may take 5 to 10 years; 5 years would be relatively fast. It may take ten years to replace Ethereum, or all the other L1s, as the default app development platform, because zero-knowledge proofs are still evolving, and there are not yet particularly satisfactory metrics or mature solutions in many areas.

Recently, new ideas have emerged, so it needs to keep changing. And the codebase is also quite large, so this solution really needs to stand the test of time, and constantly optimize during this process. I don’t think that in, for example, three or four years, there will be an L2 that can prove its security is exactly the same as Ethereum.

RJ: When you mentioned “replace”, you’re not talking about replacing the Ethereum chain, right? It’s more about replacing the function of “hosting dApps”. However, the data availability (DA) part still relies on Ethereum, and the security also relies on Ethereum.

Daniel: That's right: it means moving Ethereum's role as an app platform to L2, with Ethereum becoming a platform for L2s, or a security-aggregation layer for multiple chains; I'm not sure what to call it. In general, ordinary users don't go to the central bank to transfer money; they transfer through local or commercial banks. The cost of using Ethereum (the central bank) directly for transfers will keep rising.

Development direction of “classification standards” in the zk L2 track

RJ: If we focus more on all the ZK L2 scaling solutions, do you think that in your imagination, in the end, will this track gradually converge towards Ethereum, like Vitalik envisioned in his article, where all types of ZK-EVM will slowly move towards Ethereum? Or will it be evenly distributed among Type1-4? Or will it present two extremes, Type1 and Type4?

Daniel: I think the so-called Type1 and Type4 actually have a reference standard, which is the current programming experience of Ethereum. But one question we might ask is, in the future, will there really be only one reference standard? Or are there multiple ones? For example, let’s look at traditional non-blockchain programming, such as having a Java virtual machine. And you have native code, which is at least two completely different programming solutions. In Ethereum, we can say that in the future, some L2s may gradually converge towards this EVM. But at that time, the EVM in L2 may be different from the EVM in L1. Gradually, everyone will converge on this standard in L2, so the so-called Type1 will be different from the programming of Ethereum itself. And then there will be another VM or multiple VMs. Just like Starkware’s VM, and everyone will move towards that standard, so there will be multiple different standards that may be very different from each other.

I think in the future, from the perspective of the application layer, all L2 solutions may have different standards. One standard is ZK-EVM, which is an evolution of EVM and will gradually converge towards a standard between Type1 and Type2, but it will still have some differences compared to Ethereum’s EVM.

Let me give you an example. After this hard fork, Ethereum will have an opcode for blob transactions, which means blobs can be used on L1. However, you cannot use it on L2, so it breaks the strict definition of Type1. Other engineers and teams are also researching different types of ZK-VMs, including Starkware's, which may gradually converge towards one or two different standards.

When there are more developers, there may be more standards. Because different people have different preferences for programming languages and programming models, currently, everyone is more interested in Solidity in Ethereum because it is simple and easy to use.

But in the future, there may not be just one or two standards, but more standards. Some may be more popular than others, so the programming experience of apps will be very different. For example, some apps in the financial sector may use a more secure language to write the financial app.

And some may require higher performance, especially optimized computational complexity, so they may use languages similar to C++ or Rust, which perform better but are harder to program in, though more flexible. These standards may gradually be established in the future. But when we discuss standards now, the only one we have is Vitalik's, from Type1 to Type4, and that standard still has limitations because it is anchored to Ethereum.

RJ: Vitalik’s assumption is that Ethereum is immutable, but Ethereum itself may make further adjustments in terms of programming or its own positioning.

Daniel: Ethereum has many upgrade plans for the future, such as Verkle Tree, Purge, Splurge, etc., which will have some impact. And we also need to consider that Ethereum already has a consensus layer and an execution layer. Its standard is actually a combination of these two different layers. However, L2 does not have these two layers, it only has the execution layer of Ethereum as our base layer, and its own execution layer, which actually has two execution layers, one is the base layer and the other is L2. This creates a slightly different structure for the entire network. It is difficult to say that the comprehensive programming experience of L2 can become exactly the same as Ethereum, which is almost impossible in the future.

The Impact of the Cancun Upgrade on the L2 Competition: Pros and Cons

RJ: You mentioned the Blob, a core underlying change in the Cancun upgrade. Let's discuss it in detail, because I think the Cancun upgrade is arguably the biggest and most noteworthy Ethereum upgrade at present, and it directly benefits all Layer 2 solutions and the entire L2 space. What impact do you think the Cancun upgrade will have on the current Layer 2 landscape, and in particular, how will the Blob design you mentioned affect L2?

Daniel: The core concept of Rollup is that you need to put your L2 transaction data on L1, which is called Data Availability. In other words, from this data you can reconstruct the L2 world state at any point in time, so Data Availability is very important for Rollups. However, data on Ethereum is relatively expensive: even though Rollups do the computation off-chain, the cost of putting data on-chain remains high, making it difficult to reduce L2 costs.

Ethereum wants Rollups to be central to its scalability, so it wants to solve the problem of expensive data. The blob effectively turns this data into a separate data market, decoupling its pricing from the pricing of Ethereum's opcodes. If enough parties with hard drive space are willing to store the data, data on Ethereum will gradually become cheaper. That is the core idea, and it benefits the entire Rollup ecosystem: as long as a Rollup can be reworked so that it never inspects the specific content of the data, it can become cheaper.
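The separate fee market Daniel describes is specified concretely in EIP-4844: the blob base fee floats on its own demand signal (`excess_blob_gas`), independent of the execution-layer gas price. A minimal sketch using the `fake_exponential` helper and the constants from the EIP as specified for the Cancun upgrade (future upgrades may change these parameters):

```python
# Blob base-fee calculation per EIP-4844 (constants from the spec).
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei; the price floor
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # controls the growth rate
TARGET_BLOB_GAS_PER_BLOCK = 393216       # 3 blobs x 131072 blob gas

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator),
    computed via the Taylor series, as defined in EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    # The price depends only on accumulated blob over-usage, not on the
    # execution-layer base fee: a genuinely separate data market.
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# When usage stays at or below target, the fee sits at the 1-wei floor;
# sustained demand above target raises it exponentially.
assert blob_base_fee(0) == 1
assert blob_base_fee(50 * TARGET_BLOB_GAS_PER_BLOCK) \
       > blob_base_fee(10 * TARGET_BLOB_GAS_PER_BLOCK) > 1
```

This is why, as Daniel notes below, nobody yet knows how cheap blob data will end up: the price is set entirely by supply and demand within this market.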

However, some Rollups cannot do this. For example, Loopring’s Rollup requires checking the content of the data when it is put on-chain in L1. Rollups like this cannot be transformed to use the Blob data. When designing Taiko, I assumed that the data does not need to be examined and only knowing its hash or commitment is sufficient.

This also raises a question for other L2 chains. For the Rollups that have already launched on the mainnet or testnet, do their designs require looking at the content of the data in the L1 contract? If they have this assumption, then (after the deployment of Blob) they will face difficulties in redesigning the protocol and rewriting the code.
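The design assumption Daniel describes, that the L1 contract only ever needs a commitment to the batch data rather than the data itself, can be illustrated with a toy model. All names here (`RollupInbox`, `propose_batch`) are hypothetical, not Taiko's actual contracts, and SHA-256 stands in for the KZG commitments that EIP-4844 actually uses:

```python
import hashlib

class RollupInbox:
    """Toy model of a blob-compatible L1 inbox: it stores only a
    commitment to each batch and never parses the batch contents."""
    def __init__(self):
        self.batch_commitments = []

    def propose_batch(self, data_commitment: bytes) -> int:
        # With EIP-4844 this would be the blob's versioned hash; the
        # contract never sees the batch bytes themselves.
        self.batch_commitments.append(data_commitment)
        return len(self.batch_commitments) - 1

def commit(batch_data: bytes) -> bytes:
    # Stand-in for a KZG commitment / versioned blob hash.
    return hashlib.sha256(batch_data).digest()

inbox = RollupInbox()
batch = b"rlp-encoded L2 transactions..."
batch_id = inbox.propose_batch(commit(batch))

# A validity proof later binds itself to *this* data by matching the
# stored commitment; no on-chain inspection of the batch is needed.
assert inbox.batch_commitments[batch_id] == commit(batch)
```

A rollup whose L1 contract instead reads the batch bytes directly (as Daniel says Loopring's did) cannot be retrofitted to this model without redesigning the protocol, since blob contents are not visible to EVM execution.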

We have already built this assumption into our design, so the blob is very important and friendly to our protocol. I believe generic L2 solutions will have to be compatible with blob data in the future, otherwise they cannot be widely adopted. Because EIP-4844 has not yet been deployed and run for a while, we don't know how cheap Ethereum L1 data will become, or whether this design will let it compete with, or even surpass, chains that specialize in data.

Currently there are many chains whose pitch is essentially: "I do nothing else, only data availability." From a hardware provider's perspective, if I have many hard drives and am willing to use them for data availability, should Ethereum L1 or those other chains be cheaper? If Ethereum can be cheap enough, there is no need for everyone to use other (modular) chains for data availability. But this is still a question mark; we don't know yet. We also don't know how much cheaper it can get even relative to Ethereum's current data cost, let alone relative to other chains, because it is ultimately a separate market.

There is a supply-and-demand relationship inside it: the more people use it, the more expensive it becomes, and the fewer people use it, the cheaper it becomes. So although everyone is optimistic, they are also cautious about how much cheaper it can get. Something like 10x to 50x sounds about right; saying it can be 1,000x cheaper, or that the data cost can be ignored entirely, seems too optimistic.

I think the blob may be made even cheaper in the future through further EIPs, but there is no initial data yet. Ethereum itself is unlikely to declare this the best possible design, so let's wait and see. Once everyone starts using it at the end of this year or the beginning of next, it may still take a year or two of real use to know how cheap it can get. But overall this is a huge benefit, a feature we in L2 have been looking forward to for a long time. Without it (Proto-danksharding), Taiko didn't really want to go live, because it would be too expensive.

Taiko's opportunity to overtake on the curve

RJ: Actually, this partially explains why Taiko expects to go live in the first half of next year. After all, everyone has been saying, especially this year, that the L2 track is about to explode, and most L2 projects are actually launching on mainnet in Q3 and Q4 of this year.

Daniel: Going live after EIP-4844 has one benefit: you can support EIP-4844 faster. Once you are on mainnet, changing the wheels while the car is running takes longer. If our preparation is good and we can test all of this on the testnet before launch, Taiko may become the first mainnet project to use EIP-4844, and we are striving to be that project.

But the timing is not the most important, because you need to provide developers with a very user-friendly overall programming experience, which is more important. It’s not necessary to insist on being the first or the fastest.

RJ: It mainly depends on whether the first-mover advantage brought by “being ahead” is sustainable. If there are larger-scale changes that may reshape the entire field, it could be a great opportunity for projects that have not yet launched their mainnet to overtake others on the curve.

Daniel: Exactly, the more significant changes there are, the more opportunities there will be. If Ethereum itself becomes fully mature, there may actually be fewer opportunities for new projects. Changes in Ethereum will make it easier for new projects to seize such opportunities.

Retrospect and Outlook

RJ: Looking back at your experiences, judgments, choices, and thoughts over the years, what do you think was the best thing you did? And what do you regret the most?

Daniel: I have always been a long-term believer in blockchain. I don’t tend to follow trends and engage in short-term activities. I prefer to think more and try to avoid the influence of noise, which is crucial. Because everyone’s energy is limited, there are various stories and interesting things in the world of blockchain. Pursuing short-term profit maximization or interesting operational projects will turn you into a completely different person in the blockchain world.

I am more driven by technology and want to do something meaningful. So, being able to focus and stay away from temptations is very important. For example, I hardly play DeFi and never use leverage. Once you get caught up in them, it’s hard to extricate yourself and pursue a long-term philosophy. So, I think this is where I have done well.

As for areas where I haven’t done well, I may have taken some detours. There may not be many obvious mistakes I have made. I consider myself lucky in making certain choices. For example, I have a positive view of Ethereum, and many people doubted my persistence in it several years ago, saying that Ethereum was slow and another blockchain was better. I would try to convince them why Ethereum is good and why I have more confidence in it. So, luck plays a certain role as well.

RJ: If you had the opportunity to go back to that time, would you make any adjustments and different choices?

Daniel: I haven't been consistently engaged in blockchain. Around 2016 and 2017, I took a year off because the market was very bad, and I went back to Web2. So even though I claim to be a believer in blockchain, I have had doubts. People still have to submit to reality: when you face real-life pressures and can't find opportunities in the blockchain field, you return to a normal life to make a living. So I am a practical person. Of course, if I had persisted then, things might have been different, but in life you can't always compare all the choices to find the best path, because sometimes you have no choice. You naturally end up where you are, and as long as you work hard at every step, there will be no regrets.

RJ: I don’t actually think that leaving Web3 during this period is “suboptimal”. I think, on the contrary, switching (in and out) due to various practical reasons will make you clearer about what you want. And it will make you think better.

Daniel: Yes. From this perspective, I also tell my team members that they have practical needs, and I can’t sell them the perfect ideal and tell them to be idealists. In fact, we always try to do something that we believe will contribute to society or the industry in the future, based on meeting practical needs. Even if everyone does a little, I think it’s already good enough. If we completely sell the ideal, it’s not much different from being a scammer.

And many times, ideals and these things cannot be sold directly, they can only be indirectly influenced, subtly affected.

I especially hope that our team members have more persistence in ideals, and can do things like what we are doing now, completely open source, with no proprietary code at all. This is a positioning of our team that I am quite proud of. Everyone is willing to contribute in this way, instead of applying for patents or seeking copyright protection, etc. I think Taiko is doing quite well in this regard.

RJ: Thank you very much for your time today and these valuable insights. Looking forward to discussing more detailed content related to the track next time.

Daniel: Alright, thank you RJ!
