Global AI Giants Gather: What Did the “Salt and Iron Conference” of the AI Era Discuss?

Source: Lei Technology

Perhaps no technology has won consensus across the technology industry and society as quickly as large AI models did in just a few months. Today, nearly every technology giant and major government is convinced that AI will change the world, and everything in it.

But behind this broad consensus lie too many unresolved issues. On one hand are practical problems such as job displacement, the proliferation of fake content, and unequal access to the technology. On the other is the “threat to civilization” that human art has long imagined. According to Wired magazine, OpenAI’s financial documents even stipulate an exit mechanism for the event that AI destroys the human economic system.

In July of this year, at an event held at IBM headquarters, US Senate Majority Leader Chuck Schumer said he would convene a series of artificial intelligence conferences to “bring the best people to the table, communicate with each other, answer questions, and work towards some consensus and solutions, while the senators, our staff, and others just listen.” Ultimately, Schumer hopes these exchanges will lay the foundation for AI policy legislation.

Two months later, the conference convened. The closed-door session drew 22 participants, including the CEOs of leading technology companies, such as OpenAI CEO Sam Altman, NVIDIA CEO Jensen Huang, Google CEO Sundar Pichai, and Meta CEO Mark Zuckerberg, as well as leaders from the previous generation of the internet, such as Microsoft founder Bill Gates and former Google CEO Eric Schmidt.

AI Must Be “Regulated”

No one denies the enormous potential of artificial intelligence, but there is still no consensus on its risks, its safeguards, its regulation, or its future. The only certainty is that humanity cannot allow AI to grow wild and unchecked.

After leaving the first closed-door session, Elon Musk, CEO of Tesla, SpaceX, and Neuralink, publicly called the meeting “historic.” He endorsed the idea of establishing a new agency to regulate artificial intelligence and reiterated the enormous risks it poses.

“The consequences of artificial intelligence going wrong are severe, so we have to be proactive rather than reactive,” Musk said. “This is really a question of civilizational risk, with potential consequences for people all over the world.”

Musk did not elaborate on the specific harm AI could do to human civilization, but as artificial intelligence takes a larger role in managing and controlling power grids, water supply systems, and vehicle systems, any failure could cause serious, widespread problems. Deb Raji, a computer scientist and AI researcher, has likewise noted that biased AI decisions already affect every corner of society.

Beyond that, anxiety about artificial intelligence has haunted humanity for a long time.

But how can the risks posed by artificial intelligence be reduced or prevented? Microsoft President Brad Smith proposed building “emergency brakes” for the gravest risks AI may pose, especially in AI systems that manage critical infrastructure such as power grids and water supply systems, “so that we can ensure that the threats many people worry about remain only in science fiction and do not become a new reality.”

Image: Brad Smith

Smith believes that just as every building and home has circuit breakers that can instantly cut off the power system when needed, AI systems need similar “emergency brakes” to guarantee that such systems can never inflict massive harm on humans.

Similarly, William Dally, Chief Scientist and Senior Vice President of NVIDIA, pointed out that the safeguard against an AI program making errors and causing serious harm lies in “human involvement”: placing humans in the key roles of AI-driven work and withholding certain decision-making powers from the AI itself.

Image: William Dally / NVIDIA

“So I think as long as we remain cautious about how to deploy artificial intelligence, placing humans in key roles, we can ensure that artificial intelligence will not take over and shut down our power grid or cause planes to fall from the sky,” Dally said.

Will Open-Source AI Open Pandora’s Box?

At the end of last year, when ChatGPT first took off in the tech community, concerns were already being raised that AI models could be misused for fraud. Open source appears to exacerbate that risk of “misuse.”

At the conference, one debate revolved around “open source” AI models that the public can download and modify. Opening up these models lets companies and researchers use AI technology similar to what powers ChatGPT without investing enormous money and resources in training.

But bad actors can also abuse open-source AI systems, a concern OpenAI itself has cited; it stopped open-sourcing its models, and GPT-3.5 and its successors are closed. Ilya Sutskever, Co-founder and Chief Scientist of OpenAI, stated, “In a few years, it will be clear to everyone that open-sourcing artificial intelligence is not wise.”

Image: Ilya Sutskever in conversation with Jensen Huang at GTC / NVIDIA

Asked why, he explained, “These models are very powerful and will become even more powerful. At some point, it will be easy for someone to cause significant harm with these models.”

DeepMind co-founder and Inflection AI CEO Mustafa Suleyman has also recently spoken about the risks of open source. The key problem with open-sourcing AI, he argued, is that the “rapid diffusion of power” it brings would let a single entity cause unprecedented harm and impact on the world: “In the next 20 years, naïve open sourcing will almost certainly lead to disaster.”

Image: Mustafa Suleyman / DeepMind

Meta, however, clearly disagrees. Meta CEO Mark Zuckerberg argues that open source will “democratize AI, promote fair competition, and foster innovation by individuals and companies.” At the same time, he acknowledges that open-source models can bring dangers, and says Meta is working to build the technology as safely as possible.

What happens when AI meets open source thus remains an open question.

Final Thoughts

More than 2,100 years ago, seven years after the death of Emperor Wu of Han, the Han court held an unprecedented policy debate. The central question was whether to abolish or amend the state monopolies on salt, iron, and alcohol instituted under Emperor Wu. To that end, more than 60 “virtuous and literary” scholars were summoned to debate government officials led by Imperial Counselor Sang Hongyang. Later generations came to call the meeting the “Salt and Iron Conference,” and at its core it was a debate over the economic policy direction of the Han Dynasty.

Although the topics differ and the times are worlds apart, the Salt and Iron Conference of more than two thousand years ago and today’s AI Insight Forum both confront an enormous problem, and both hope to turn the exchanges and debates of the “wise and virtuous” into the basis for legislation.


In a recent commemorative article on Google’s 25th anniversary, current Google CEO Sundar Pichai wrote:

(Google) engages in the important debates about how these technologies will shape our society, and then works to find the answers together. Artificial intelligence is a critical part of that. While we are excited about AI’s potential to benefit people and society, we know that, like any early technology, it poses complexities and risks. Our development and use of AI must address these risks and help develop the technology responsibly.

A few days later, eight companies, namely Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, and Stability AI, became the second batch of signatories to commitments on the responsible development of artificial intelligence (AI) technology. Time and again, when a problem cannot be solved by any individual, company, or government acting alone, we have to recognize the value of “joint effort.”
