The Tech Gist
OpenAI CEO Sam Altman has told Congress that government intervention is needed to regulate the risks of advanced artificial intelligence (AI) systems. Altman proposed creating a regulatory agency that would license the most powerful AI systems and could revoke a licence in the event of non-compliance with safety standards. His testimony comes amid growing concern over the latest crop of “generative AI” tools such as ChatGPT, which can potentially mislead people, infringe copyrighted material, and upend jobs. Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology, and the law, called for AI companies to be required to test their systems and disclose known risks before releasing them, and expressed concern about how future AI systems could destabilise the job market.
A Regulatory Agency to Safeguard AI Systems
Altman’s proposal for an international regulator to safeguard against AI models that could “self-replicate and self-exfiltrate into the wild” hints at fears that future advanced AI systems could manipulate humans into ceding control. Critics argue, however, that the focus on far-fetched scenarios such as super-powerful AI creates a distraction from present-day problems such as data transparency, discriminatory behaviour, and the potential for disinformation. Fear of these speculative systems distracts from the concerns we are dealing with now, says Suresh Venkatasubramanian, a Brown University computer scientist.
OpenAI and Microsoft
OpenAI, co-founded by Altman in 2015 with backing from Elon Musk, started as a non-profit research lab with a safety-focused mission but has since evolved into a business. Its most popular AI products include ChatGPT, a free chatbot that answers questions with human-like responses, and the image generator DALL-E. Microsoft has invested billions of dollars in the startup and has integrated its technology into its own products, including its Bing search engine.
Purposeful AI Regulation: Certainty over Caution
Testifying alongside Altman were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University. Marcus was among a group of experts who earlier this year called on OpenAI and other tech firms to halt development of more powerful AI models for six months to give society more time to consider the risks. At the same hearing, Sen. Josh Hawley of Missouri said that AI technology has significant implications for elections, jobs, and national security, and called the hearing “a critical first step towards understanding what Congress should do.” A number of tech industry leaders have said that while they welcome some form of AI oversight, they are wary of overly heavy-handed rules. Altman and Marcus agreed that an AI-focused regulator, preferably international, is a necessary step. Montgomery, however, asked Congress to take a “precision regulation” approach: regulating AI at the point of risk by setting rules that govern the deployment of specific uses of the technology.
OpenAI, the San Francisco-based startup behind ChatGPT, has itself expressed concern over the potential risks of new AI technologies and their impact on society. In Altman’s view, regulatory intervention is required to mitigate the risks of advanced AI systems, which is why he proposes a licensing agency that could revoke licences if strict safety standards are not adhered to. Such an agency would safeguard against AI models with the potential to cause serious harm if they went wrong. Despite the many warnings and expressions of concern, however, the US has yet to enact AI-specific legislation.