After months of meetings and discussions, Big Tech and governments around the world have agreed to enact rules and regulations for safeguarding artificial intelligence (AI) technology.
However, a number of Silicon Valley heavyweights, including venture capitalists, chief executives of mid-sized software companies, and advocates of open-source technology, are growing skeptical of the upcoming AI regulatory framework and have started to push back against it. They argue that such laws could wipe out any chance of competition in the crucial new field.
Silicon Valley Companies say AI Regulations could Stifle Healthy Competition in the Field
The dissenters say the decision by big players in the AI industry such as Google, Microsoft, and OpenAI to support the new rules was simply a ploy to maintain their dominance over the sector. The tech giants went into a frenzy after President Joe Biden signed an executive order last week asking the federal government to develop testing and approval guidelines for machine learning models.
Garry Tan, the CEO of Y Combinator, said the tech industry is in the early stages of generative artificial intelligence and that it is “imperative” that governments do not anoint winners and shut down competition by adopting “burdensome” regulations that only the big players can comply with. He pointed out that the opinions of smaller tech companies have not been incorporated into the discussions, which he considers key to fostering competition and developing AI.
Referring to influential AI startups like Anthropic and OpenAI, Tan noted that they have close ties to Big Tech through strategic partnerships that have brought them billions of dollars in investment to develop their AI models.
Y Combinator is a San Francisco-based start-up incubator that helped fund Airbnb and DoorDash during their initial stage.
VC Executives Argue Influential AI Startups and Big Tech Don’t Speak for the Industry
Meanwhile, Martin Casado, a general partner at tech venture capital firm Andreessen Horowitz, said that OpenAI and Anthropic don’t speak for the “vast majority of people” who have contributed to the industry. He said most AI engineers and entrepreneurs are not involved in the regulatory discussions taking place, focusing instead on improving their technology rather than lobbying politicians. Casado called them the “silent majority” of the AI industry.
According to the Andreessen Horowitz partner, regulations requiring AI companies to report to the government are likely to make it harder and more expensive to develop the technology. He argued that Biden’s executive order could also affect the open-source community.
AI researcher Andrew Ng, who co-founded Google Brain and now heads startup firm Landing AI, shared similar views on the matter.
Big Tech is Urging Governments to Regulate AI Industry
Andreessen Horowitz, which made early investments in Facebook, Slack, and Lyft, sent a letter to President Biden outlining its concerns with the upcoming AI regulation. The letter was signed by CEOs of prominent AI and tech start-ups like Replit’s Amjad Masad, Mistral’s Arthur Mensch, and Shopify’s Tobi Lutke.
While AI companies are moving forward with releasing newer tools and finding ways to monetize them, governments are struggling to regulate the industry.
In the US, numerous congressional hearings have been held to address the issue, with bills proposed at both the federal and state levels. At the same time, the European Union has had to revise its proposed AI regulation, which has been in the works for several years. The UK is trying to establish itself as an AI-friendly jurisdiction and recently hosted the AI Safety Summit, which was attended by heads of government as well as tech and business leaders.
During the high-level meeting, representatives of established AI and tech companies said the technology could pose serious threats and that regulation of the industry was an absolute necessity. Days after Biden’s executive order, leaders who attended the AI Safety Summit in the UK signed a statement supporting the idea of giving governments oversight of AI.
Demis Hassabis, CEO of Google DeepMind, Sam Altman of OpenAI, and Dario Amodei of Anthropic extended their support for the policy statement. British Prime Minister Rishi Sunak said people should no longer rely solely on safety assurances made by the companies that develop AI tech.
Claims That AI Poses an Existential Threat to Humanity Are Exaggerated
During a congressional hearing in May, Sam Altman said that if AI technology goes wrong, “it can go completely wrong”. Various US lawmakers say they want to regulate AI rather than taking a laid-back approach similar to the federal government’s handling of social media.
Microsoft’s president and vice chairman, Brad Smith, said that he supports the idea of an independent government agency licensing the creation and development of AI models.
Influential AI leaders are constantly issuing warnings that AI could be an existential threat to humans if not regulated. Many prominent AI researchers are saying that the technology is developing so rapidly that it may soon surpass human intelligence and start making decisions on its own.
However, AI engineer and researcher Ng says these claims are exaggerated and are being pushed only to bring in regulations that would help established companies assert their dominance in the field. He admitted to regretting his previous statements about AI posing an “existential risk”, adding that he has a hard time figuring out “how the human race could become extinct”.