
Can AI companies still move fast and break things despite pending regulations?

Irene Liu  Executive in Residence / UC Berkeley / School of Law and Founder / Hypergrowth GC

· 5 minute read


The ethos of tech start-ups and industry entrepreneurs may run headlong into a brick wall of federal and state regulations around new iterations of generative AI and related technologies

SAN FRANCISCO — During last week’s Tech Week 2023, the streets were buzzing with entrepreneurs and investors all excited about the future of generative artificial intelligence (AI).

How does that future really jibe with the entrepreneurial ethos of moving fast and breaking things, given the regulations taking shape around AI? More precisely, can tech entrepreneurs and investors fake it till they make it with generative AI, as so many have elsewhere in tech? Indeed, in the post-Silicon Valley Bank era, it seems you can still move fast and break things, so long as you go to legislators and regulators and say you're open to regulation.

But with so much up in the air, startup founders, other AI-focused entrepreneurs, and investors should be aware of certain realities regarding the future of AI, the law, and government regulation. In the short term, at least, the future of generative AI will likely be governed by a patchwork of global regulations that companies will need to follow, similar to the patchwork of privacy rules they face today. And the rules around AI will likely be just as opaque, and less straightforward than companies would like.

Still, here are some developments to keep an eye on:

      • Watch for the EU AI Act — The European Union will lead again with regulation, just as it led in rulemaking around privacy with the launch of the General Data Protection Regulation (GDPR). The E.U. will likely be the first to enact a strong AI regulation, and in fact it has already drafted a proposal with a tiered categorization of AI systems by risk level: unacceptable, high, limited, and minimal. Enforcement will not be immediate, as there will be a grace period of roughly two years, but once the AI Act comes into force, fines will be substantial.
      • The U.S. will be slower than the E.U. to draft an AI Act, but enforcement will continue — The United States has missed many opportunities to regulate privacy and social media. And while the U.S. Congress and the White House are well aware of these missed opportunities and want to show that they can take action, it will take time for them to figure out their next steps. In the meantime, a slew of U.S. federal oversight agencies — the Equal Employment Opportunity Commission, the Securities and Exchange Commission, the Department of Justice, the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau, and others — will continue to aggressively go after technology companies whose use of AI violates laws under their remit. FTC Chair Lina Khan has explicitly said that there is no AI exemption to the laws on the books and that the FTC will vigorously enforce them to combat unfair or deceptive practices and unfair methods of competition.
      • The U.K. will create more moderate laws than the E.U. — The United Kingdom is trying to position itself as a more business-friendly country, distinct from the E.U. In keeping with that image, the U.K. likely will not go as far as the E.U. in its AI regulations. However, this will mean yet another set of regulations to keep an eye on, adding to the compliance headaches for many companies. U.K. authorities have already issued a policy paper on AI, essentially a precursor to legislation. The U.K. approach may be lighter than the E.U.'s, but it is stricter than the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework. And the U.K. will very likely release laws faster than the U.S.
      • Other countries, like China, are also planning laws focused on AI risk — These plans are harder to gauge, however, since much of the chatter coming out of China, for example, seems more focused on semiconductors and chips than on AI, at least according to some news reports. As the AI regulatory race heats up, many more countries will certainly be heard from.

Once these laws are developed and passed, wherever they originate, they will bring increased compliance obligations for any company that leverages AI in almost any region of the world.

That means that while these laws are being developed, it would be wise for companies and their tech and compliance officers to develop AI responsibly and adopt voluntary AI risk management frameworks in order to stay off regulators' radar.
