AI firms should welcome regulation, Mira Murati suggested. If this technology is to succeed, regulators, governments and the public must be involved, says OpenAI's chief technology officer, though she did not specify what form that oversight should take, despite knowing that the company's newest, most powerful model would be released in less than a month. The interview did, however, reveal Murati's cultural preferences, beginning with her love of Radiohead's 'Paranoid Android', a beautiful if not exactly uplifting song.
Most companies treat regulation as a necessary evil to be endured. OpenAI, by contrast, calls on watchdogs to investigate it, which infuses its work with a certain mysticism: perhaps its developers are building something brilliant but unpredictable, something beyond the conventional understanding of its users, and perhaps they are right. More intelligent, agile and adept than GPT-3, GPT-4 is expected to disrupt everything from the arts to education to legal services in the near future. This model and others like it could also be put to more nefarious purposes, such as writing code-perfect malware or spreading vile misinformation.
Regulators, meanwhile, have largely remained silent, despite generative AI's immense potential to transform these industries forever. In an ideal world, foundation models would be governed by comprehensive legislative frameworks that hold their creators liable when their creations inflict harm on others. Two MEPs have proposed that foundation models be classified as high-risk under the EU's AI Act, says Philipp Hacker, a professor of law and ethics at the European New School of Digital Studies, though Hacker himself does not view regulating the models directly as a priority. The emergence of generative AI after the act was drafted has unnerved EU parliamentarians, forcing legislators to regulate something they were not prepared for.
GPT-4 in the EU vs the US
GPT-4 is defined somewhat differently in the EU than in the US. Under the AI Act, it would count as a general-purpose AI system (GPAIS): one capable of performing a variety of functions, including pattern recognition, question answering and speech recognition. Such models are treated as high-risk, and the EU would impose strict reporting requirements on their creators as a condition of their continued operation. Hacker thinks this is absurd on two levels. The release of a foundation model does carry theoretical risks that should be taken into account, but classifying GPT-4 itself as high-risk makes relatively benign applications look unusually risky from a regulatory standpoint. Moreover, because these models are adapted by independent developers as well as companies, it is difficult for a single creator to keep track of how others might abuse them. The GPAIS classification is also broad enough to sweep in very basic models that would then have to meet its requirements. By that definition, says Hacker, even a linear classifier that distinguishes humans from rats is potentially an AI system that can be used for anything.
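To see how low that bar is, here is a minimal sketch of the kind of trivial "linear classifier" Hacker describes. This is a toy example with invented numbers, not anything drawn from the article or from the AI Act's actual text:

```python
# Toy illustration (all data invented) of how basic a "linear classifier"
# can be: a single learned threshold telling humans from rats by body mass,
# using only the Python standard library.
import random

random.seed(0)

# Synthetic body masses in kilograms: rats around 0.3kg, humans around 70kg.
rat_mass = [random.gauss(0.3, 0.1) for _ in range(50)]
human_mass = [random.gauss(70.0, 10.0) for _ in range(50)]

# The simplest possible linear classifier: one threshold, placed halfway
# between the two class means.
threshold = (sum(rat_mass) / 50 + sum(human_mass) / 50) / 2

def classify(mass_kg: float) -> int:
    """Return 1 for 'human', 0 for 'rat'."""
    return 1 if mass_kg > threshold else 0

samples = [(m, 0) for m in rat_mass] + [(m, 1) for m in human_mass]
accuracy = sum(classify(m) == label for m, label in samples) / len(samples)
print(f"threshold: {threshold:.1f}kg, accuracy: {accuracy:.2f}")
```

A dozen lines of arithmetic, yet under a sufficiently broad GPAIS definition something like this could, in principle, fall into the same regulatory bucket as a frontier foundation model, which is precisely the absurdity Hacker is pointing at.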
One proposal would add companies that substantially modify a foundation model to the list of organisations responsible for reporting misuse, an idea that has caused some confusion among big tech companies and AI think tanks alike. Hacker, though enthusiastic about reforming AI governance, remains sceptical. Rather than regulating the models themselves so closely, he suggests promulgating AI governance principles that can be applied in a technology-neutral manner. Technical safe harbours, meanwhile, would let firms explore new LLMs without immediate regulatory repercussions. Existing laws could also be amended or reinterpreted to accommodate generative AI: the DSA should be amended, he argues, and the GDPR and good old-fashioned anti-discrimination law are also worth reviewing. Together, these would cover major aspects of the job, and in the US that is already happening, if only by default: with no federal legal framework for AI governance, most agencies are dealing with generative AI independently. The Federal Trade Commission, for one, has targeted companies that falsely advertise the capabilities of their AI products.
Some federal agencies are discussing how to accommodate GPT-4 and the abundance of generative AI services that will undoubtedly follow it, but specialist lawyer Andrew Burt says comprehensive legislation modelled on the European approach is unlikely. The most practical outcome, according to Burt, would be some form of bipartisan privacy regulation touching on algorithmic decision-making at the national level; in the current era of divided government, the Biden administration and the GOP-controlled House are unlikely to compromise on much else. That is partly because the subject seems beyond the comprehension of many congresspeople, despite Speaker McCarthy's promise of AI courses for members of the House Intelligence Committee, despite lobbying by the US Chamber of Commerce for a regulatory framework, and despite the support of a few very vocal voices for such measures.
Regulating AI: capacity problems
A number of anti-discrimination and transparency laws touching on AI have been passed at the state level. The UK, meanwhile, has a clear idea of what it wants and is working toward it through a series of well-integrated policy documents, says Alex Engler of the Brookings Institution, referring to the consultations on AI regulation now under way at the new Department for Science, Innovation and Technology. The UK's approach is to establish best practices for AI and let sectoral regulators determine how to apply them. Generative AI, however, has barely featured in these discussions, which have dwelt instead on topics such as self-driving cars; Number 10 may be waiting on a report about foundation models from a task force led by ARIA. Keeping pace with such rapid change is hard even for well-resourced teams, let alone underfunded watchdogs: a study by the Alan Turing Institute found significant gaps in regulators' capacity to oversee artificial intelligence.
Regulators have realised that building such teams means hiring computer scientists, too. In the US, Engler argues, an executive order mandating that every federal agency draw up a plan for dealing with AI-related challenges would make it easier to understand each agency's capacities. The Brookings fellow is not convinced that generative AI will overwhelm regulators' ability to do their work: serious harms have been identified, but proposals for addressing them can be adapted from recent debates over algorithmic discrimination, cybersecurity best practices and platform governance. Indeed, he reads policymakers' current silence on how, specifically, they plan to regulate generative AI as a sign of responsible policymaking. Generative AI excites and fascinates many people, and politicians should acknowledge that novelty while soberly assessing its long-term effects; regulatory agencies, for their part, could clarify what existing rules already require.