The subtle relationship between AI and Web3 is best described as competitive cooperation.
From a capital perspective, the AI story is plainly more appealing and practical than Web3's. The arrival of ChatGPT easily diverted funds that had been flooding into Web3 toward AI, a clear demonstration of how readily capital substitutes one narrative for another. From a market perspective, however, Web3, which has always advanced on the back of hot topics, is unwilling to give up the opportunity and has incorporated AI into its projects on a large scale. The shell of Web3 is like a chameleon: switching from the metaverse to AI takes no more than a tweet.
For workers in the Web3 industry, the relationship is even more complicated. They worry about being replaced by machines, and they also worry that failing to use machines will get them replaced. Their ambivalence toward machines ebbs and flows like the tide. Yet in the poorly regulated world of Web3, what mostly remains are absurd imitations and pseudo-concepts, so accurate portraits of the industry's workers are rare.
From a macro perspective, the existential anxiety between humans and machines has never ceased since machines first appeared. Humans can create things far smarter than themselves, yet they also fear those creations. This is not only a manifestation of human intelligence but also a subconscious instinct to resist the other.
In the field of Web3, which fully embodies human greed and desire, this conflict can only intensify.
Will Web3 workers be replaced by AI? With this question in mind, the author interviewed four people in the industry who are already using AI. Their jobs differ, and so do their outlooks: some take a pessimistic view, believe their roles will be replaced in the long run, and are transitioning accordingly; others are unconcerned, stressing that AI cannot handle the deceitful side of human nature in Web3. Despite these differences, the interviewees unanimously believe that AI can never gain the upper hand in the contest with complex human nature.
It seems that, at least for now, Web3 workers remain safe in a realm that even AI finds difficult to comprehend.
01. “AI can only guarantee correct processes” – Ivan, a worker in a small-scale asset management firm
Ivan, who works at an asset management institution, counts as a veteran of sorts. His job centers on funds: custody, value appreciation, and operations. Examined closely, though, it resembles traditional institutional risk control, which involves monitoring transactions, identifying potential risks, and intervening effectively. In his view, AI is far from able to threaten his work, because it struggles to identify problems that originate in people. Here is his account:
My job sounds impressive when described. I work at an institution as an asset operator, managing assets for value appreciation and risk control. Before this, I did risk control at a local bank. In essence the two jobs are not qualitatively different; risk control emphasizes monitoring and early warning, while operations adds a demand for value appreciation.
After ChatGPT emerged, our boss said we needed to learn to use it to improve efficiency, and we even brought in so-called professional prompt writers to train us. But since there was no mandatory requirement, as far as I know nobody uses it much. That does not mean ChatGPT is useless, or that we are simply dismissing it. It is just that in our industry, AI finds it hard to replace human work.
A simple example: many of our large B-end clients transfer sums approaching millions of dollars. The risk of letting AI approve these is obvious, because AI struggles to see the processes and schemes behind the money. On the surface it may be just a transfer, but behind it there may be money laundering, fund splitting, fraud, or other complications. It is similar at banks: processes that look automated actually have every step reviewed by back-office staff, and a single transfer may pass through several or even a dozen departments.
Retail clients, by comparison, see a higher degree of automation. AI in the background can automatically flag occasional abuses such as Sybil attacks and double spending. Even so, the job cannot be handed over to AI entirely. It is like applying for a bank card: at the final step, a customer manager is usually required to verify your identity in person. Why? Because AI has trouble verifying people through a process. AI can only guarantee that the process is correct; only humans can identify human problems. Across the entire financial industry, despite the efficiency gains of digitization and all the complex systems in place, human labor remains the main force, and anything involving money must ultimately be accountable to a person. History has plenty of painful lessons on this point.
You may ask whether AI is completely useless, then. Of course not; otherwise the giants would not be investing so heavily in it. Speaking for myself, even for paperwork I do not feel like doing, I will have AI produce a draft and then refine and modify it. It is also worth noting that the assignments and contracts everyone assumes can be replaced are not so simple to handle: they are templates approved line by line by the legal and financial departments. So you see, people are what matter most.
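Editor's aside: the automated screening of retail accounts that Ivan describes is often rule-based before any model is involved. Below is a minimal, purely illustrative sketch (function and field names are invented, and the heuristic is deliberately crude) of how a system might flag potential Sybil clusters, i.e., many accounts seeded from one funding address. As Ivan stresses, anything flagged would still go to a human reviewer.

```python
from collections import defaultdict

def flag_sybil_clusters(transfers, threshold=5):
    """Flag funding addresses that seeded `threshold` or more accounts.

    transfers: list of (funding_address, account) pairs observed on-chain.
    Returns the set of funding addresses whose fan-out looks Sybil-like.
    """
    funded = defaultdict(set)
    for src, acct in transfers:
        funded[src].add(acct)  # group accounts by their funding source
    # A single source feeding many fresh accounts is a classic Sybil signal.
    return {src for src, accts in funded.items() if len(accts) >= threshold}

# Example: one address funds three accounts, another funds one.
transfers = [("0xF1", "0xA1"), ("0xF1", "0xA2"), ("0xF1", "0xA3"),
             ("0xF2", "0xB1")]
print(flag_sybil_clusters(transfers, threshold=3))  # → {'0xF1'}
```

Real systems layer many such heuristics (timing, amounts, graph structure), which is exactly why the final judgment stays with a person.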
02. “What’s more important is what AI can do in the future” – Iris, Content Media Editor
I am engaged in an editing position in a content media company, which is also one of the positions that people usually think are easily replaceable by AI. Generally speaking, AI, such as GPT, is already quite common in our daily work. Especially in the current downturn of the industry, when everyone’s originality is decreasing, we use AI for tasks like translation and news grabbing, which significantly improves efficiency.
For original content, AI's role depends on the article's requirements. GPT's text-processing ability is strong, so the basic drafts it produces from a given outline are generally not far off. But because of industry-specific terminology, depth of understanding, and the need to fit human reading habits, it still has limitations, so manual adjustment from multiple angles is needed. In-depth articles have relatively little to do with AI: they rely on personal visits and investigation, which AI cannot replace. Even when we feed AI data to generate such articles, the results are poor.
The performance of lower versions of GPT is even less satisfactory. A colleague tried writing an article with GPT-3.5. Since its training data stops in 2021, he had to feed it the latest data sentence by sentence first. After a whole day he finally got an article, but most of it was plausible-sounding nonsense, including hallucinated, fabricated content. In the end it either needed major revision or was a worthless draft. Admittedly, our own prompting skills were also part of the problem.
Back to your question: will AI replace human labor? To a certain extent, I believe it definitely will. What it can do now may seem limited, and everyone takes it for granted, but what matters more is what it will be able to do in the future. No human can learn faster than AI, and no one fully understands the black box from which its emergent abilities arise. That means that through continued deep learning and analysis, AI will at some stage exhibit even more intelligent capabilities. Creative work with a repetitive character may be replaced earliest: programmers, editors, and other cultural-industry practitioners. Such work is creative, but it also involves learning and repetition, and as long as there is text to learn from, AI will run faster than humans. Humans therefore need to carve out more room for imagination.
Looking back, every technological advance has displaced some positions. This is the tool revolution brought by technological revolution, and it is irreversible; the Luddite movement ended up a footnote in history. Since the bear market began, the company has already laid off 40% of its staff, with even fewer left in the content department. Perhaps AI has something to do with that as well.
Personally, I will try to find the more core parts of work, the parts that are difficult for AI to replace. For example, I will value opportunities to go out and accumulate contacts more than before. In any case, these are aspects that AI has difficulty accessing.
03. “Legal and Ethical Risks Cannot Be Avoided” – Vivian, Crypto Lawyer
In fact, this question can be broadened. Can AI replace lawyers in traditional fields? Personally, I think it is not possible, for a simple reason: legal work involves dealing with people, and it involves legal and ethical risks that cannot be avoided.
For example, in litigation, lawyers have a duty to keep information provided by clients confidential. This is defined not by bystanders' judgments of right and wrong, but by the interests of the parties to the case; unless the matter involves acts endangering national security, lawyers must keep it confidential. The public nature of AI makes this hard to guarantee: information you give to an AI may resurface in its responses to other users, which clearly violates a lawyer's professional ethics.
Similarly, in non-litigation work such as legal consultation, an AI lawyer comes across as impersonal. It has difficulty perceiving the true needs or implicit meaning of the parties involved. In many civil cases the core issue is how to protect one's own interests even when one is already at fault, but AI struggles to grasp this kind of demand. Ask it how to legitimately mitigate your risk and it may tell you to confess everything or even turn yourself in. For now, AI can only handle consultations with clearly defined questions and established answers, plus some non-confidential desk work. Its limitations are obvious.
Also worth noting is that AI hallucinates, which is a major taboo for legal professionals. Not long ago a peer fell into this trap: lawyers from the US firm Levidow, Levidow & Oberman submitted AI-assisted filings in a dispute, only for the judge to discover that some of the cited precedents did not exist. The firm was fined $5,000 for submitting false information to the court. Submitting false information violates legal ethics and is explicitly prohibited in practice, yet to this day AI hallucination remains unsolved.
In Web3, these limitations are further amplified. Web3 is a chaotic and fast-changing industry. Although laws and regulations have gradually been introduced at home and abroad in recent years, virtual currencies are still relatively new territory in civil and commercial law, and court rulings show no unified understanding. In a field where fresh concepts appear constantly, using AI as a lawyer would only create more complicated problems. Take stablecoins: regulation varies greatly by region. Hong Kong remains a vacuum here, while Singapore and Europe have already introduced regulatory regimes. AI has difficulty giving targeted answers to such questions.
In addition, lawyers engaged in the virtual currency industry have strong expertise themselves. Besides having a deep understanding of the industry, they also need to possess strong financial services regulatory and securities law capabilities. They need to act as connectors between the court and the parties involved, reducing information asymmetry and noise. It is not easy to achieve all of these, so I believe that AI can assist in certain tasks such as desk research and other text-related work to some extent, but complete substitution is still a long way off.
04. “Elimination is just a natural social law” – Leo, Developer
In fact, this question surfaced years ago with the appearance of AlphaGo, followed by a steady stream of automated programming tools. This time, ChatGPT's outstanding performance has set off a panic in the industry.
If you had asked me before last year, I would have said firmly that AI will not replace programmers. Now I think some programmers are at risk of being replaced. On the strength of current AI, it can already handle moderately difficult technical problems, such as array and string manipulation and dynamic programming, and it is reasonably capable at common chores like generating repetitive code, writing documentation and comments, and testing.
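To make that concrete, here is the sort of dynamic-programming exercise meant above: the classic longest-common-subsequence table. (This is an illustrative sketch written for this article, not output from any particular tool.)

```python
def longest_common_subsequence(a: str, b: str) -> int:
    """Length of the longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the best previous subsequence.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop one character from either string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(longest_common_subsequence("abcde", "ace"))  # → 3 ("ace")
```

Tasks with this shape, i.e., a well-known recurrence and abundant training text, are exactly where AI assistants are already fast and reliable.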
In my daily work I also use GitHub Copilot, and I am no exception: a GitHub survey found that 92% of 500 surveyed developers use AI coding tools for their work and side projects. The adoption is not without reason; the tools genuinely improve efficiency. Our work is essentially translation between machine language and human language, with many repetitive, well-specified tasks, and AI performs excellently on such tasks, boilerplate like chained calls included.
For developers, though, the core skill is programming logic, and the hardest part is shaping requirements, not the coding itself. Here AI does not yet have complete engineering capability.
In real software engineering, code is written to customized requirements. Within that complexity, the relationships between modules, the product's technical background, and its underlying logic are hard for AI to discover from learned data, because such data is usually confidential. That means AI has trouble matching humans in business abstraction, modeling, and architecture. On top of this, AI also raises issues of code security and intellectual property.
On the other hand, compared with the traditional internet, Web3's front-end and back-end architecture differs because of decentralization. The most obvious example: once smart contract code is deployed, developers cannot simply patch or update it. Web3 is also closer to money and highly sensitive, so removing humans from the loop creates many practical problems. You can see it across the industry: plenty of projects turn out to be exit scams, and anonymity often leaves people feeling insecure.
There is a standing joke in the industry: “When nothing is wrong, go decentralized; when something goes wrong, run back to centralized institutions.” So whatever the situation, humans remain essential to the process. From the perspective of the programming profession, though, programmers who do not use AI, or who only handle routine CRUD work, will be eliminated in the future. That too is a natural social law.
In the competition with machines, humans, made up of flesh and blood, are often at a disadvantage. Therefore, not only in Web3, but also in many aspects, the idea of machines replacing humans is not unfounded.
However, precisely because of fragile human nature, the complex web that humans weave and the spiritual connections within it remain difficult for machines to enter, and this in turn nourishes humans themselves.
In the future, perhaps more important is to protect and cherish our own humanity, unleash our creativity, and not become walking dead in the concrete jungle, ultimately becoming nourishment for AI.