1,100 Signatories Including Elon Musk and Yuval Noah Harari Call for Pause in AI Advancement

About 1,100 signatories have called for an immediate six-month pause in the development of advanced AI systems.

They agreed that, for the duration of the pause, no AI system more powerful than GPT-4 should be developed. Notable personalities joined the appeal, including Elon Musk, Tristan Harris, Steve Wozniak, and the Israeli historian Yuval Noah Harari. They signed an open letter that was posted on Tuesday.

The main crux of the letter was as follows:

As AI systems continue to advance, they can now perform general tasks at a level comparable to humans. This raises important questions about the future of automation. Will we allow machines to flood our information channels with propaganda and falsehoods? Will we automate all jobs, even those that provide us a sense of purpose? Are we willing to create nonhuman minds that may one day surpass us in intelligence and make us obsolete? These are crucial decisions that cannot be left solely in the hands of tech leaders who are not elected by the public. We must only develop powerful AI systems once we are confident that their impact will be positive and that we can manage the associated risks.

The letter spread like wildfire for several reasons. First, it calls for regulation and planning of AI so that it remains beneficial to the human race. Another reason it made headlines is the range of signatories agreeing to raise their voices: they include engineers from big tech companies such as Google and Meta, the CEO and founder of Stability AI, and prominent figures from outside the tech industry. What makes it even more interesting is that no one signed from OpenAI, the company behind GPT-4, nor from Anthropic.

According to a recent report, OpenAI CEO Sam Altman has confirmed that the company has not yet begun training its next-generation language model, GPT-5. Altman emphasized the company's commitment to safety, stating that OpenAI ran rigorous safety tests on GPT-4 for over six months before releasing it. He also commented on OpenAI's long-standing focus on AI safety and its efforts to raise awareness of the issue. In an interview with Lex Fridman, Altman spoke about his relationship with Elon Musk, who co-founded OpenAI but left in 2018 over potential conflicts of interest. Altman empathized with Musk's concerns about AGI safety but found some of his behavior hurtful. Even so, Altman said he wishes Musk would acknowledge OpenAI's efforts to improve AI safety.

Written by Muhammad Tanveer