
Policymakers around the world are paying increased attention to artificial intelligence. The world’s most comprehensive AI regulation to date was just passed by a sizable margin in the European Union (EU) Parliament, while in the United States, the federal government has recently taken several notable steps to place controls on the use of AI, with further activity at the state level. Policymakers elsewhere are watching closely and working to put their own AI regulation in place. These emerging regulations will affect the development and use of both standalone AI models and the compound AI systems that Databricks increasingly sees its customers use to build AI applications.

Follow along with our two-part “AI Regulation” series. Part 1 provides an overview of the recent flurry of AI policymaking activity in the U.S. and elsewhere, highlighting the regulatory themes that recur globally. Part 2 will take a deep dive into how the Databricks Data Intelligence Platform can help customers meet emerging obligations and discuss Databricks’ position on Responsible AI.

Major Recent AI Regulatory Developments in the U.S.

The Biden Administration is driving many recent regulatory developments in AI. On October 30, 2023, the White House released its extensive Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order provides guidelines on:

  • The use of AI within the federal government
  • How federal agencies can leverage existing regulations where they reasonably relate to AI (e.g., prevention of discrimination against protected groups, consumer safety disclosure requirements, antitrust rules, etc.)
  • How developers of highly capable “dual-use foundation models” (i.e., frontier models) must share the results of their safety testing

The Executive Order also directs various agencies to undertake a range of studies, reports, and policy formulations, with a notably important role played by the National Institute of Standards and Technology (NIST), within the Commerce Department.

In quick response to the Executive Order, the U.S. Office of Management and Budget (OMB) followed two days later with a draft memo to agencies throughout the U.S. government, addressing both their use of AI and the government’s procurement of AI.

The Role of NIST & The U.S. AI Safety Institute

One of NIST’s primary roles under the Executive Order will be to expand its AI Risk Management Framework (NIST AI RMF) to apply to generative AI. The NIST AI RMF will also be applied throughout the federal government under the Executive Order and is increasingly cited by policymakers as a foundation for proposed AI regulation. The recently formed U.S. AI Safety Institute (USAISI), announced by Vice President Harris at the U.K. AI Safety Summit, is also housed within NIST. A new Consortium has been formed to support the USAISI with research and expertise, with Databricks¹ participating as an initial member. Although $10 million in funding for the USAISI was announced on March 7, 2024, there remain concerns that the USAISI will require additional resources to adequately fulfill its mission.

Under this directive, the USAISI will create guidelines and mechanisms for assessing AI risk and will develop technical guidance for regulators on issues such as establishing thresholds for categorizing powerful models as “dual-use foundation models” under the Executive Order (models requiring heightened scrutiny), authenticating content, watermarking AI-generated content, identifying and mitigating algorithmic discrimination, ensuring transparency, and enabling the adoption of privacy-preserving AI.

Actions by Other Federal Agencies

Numerous federal agencies have taken steps relating to AI under mandates from the Biden Executive Order. The Commerce Department is now receiving reports from developers of the most powerful AI systems covering vital information, especially AI safety test results, and it has issued draft rules requiring U.S. cloud infrastructure providers to report when foreign customers train powerful models using their services. Nine agencies, including the Departments of Defense, State, Treasury, Transportation, and Health & Human Services, have submitted risk assessments to the Department of Homeland Security covering the use and safety of AI in critical infrastructure. The Federal Trade Commission (FTC) is stepping up its enforcement of existing regulations as they apply to AI. As part of this effort, the FTC convened an FTC Tech Summit focused on AI on January 25, 2024, with Databricks’ Chief Scientist (Neural Networks), Jonathan Frankle, participating as a panelist. Pursuant to the Executive Order, and as part of its ongoing efforts to advise the White House on technology matters including AI, the National Telecommunications and Information Administration (NTIA) has issued a request for comments on dual-use foundation models with widely available model weights.

What’s Happening in Congress?

The U.S. Congress has taken a few tentative steps to regulate AI thus far. Between September and December 2023, the Senate conducted a series of “AI Insight Forums” to help Senators learn about AI and prepare for potential legislation. Two bipartisan bills to regulate AI were introduced near the end of 2023: one by Senators Jerry Moran (R-KS) and Mark Warner (D-VA) to establish guidelines for the use of AI within the federal government, and one by Senators John Thune (R-SD) and Amy Klobuchar (D-MN) to define and regulate the commercial use of high-risk AI. Meanwhile, in January 2024, Senate Commerce Committee Chair Maria Cantwell (D-WA) indicated she would soon introduce a series of bipartisan bills to address AI risks and spur innovation in the industry.

In late February, the House of Representatives announced the formation of its own AI Task Force, chaired by Reps. Jay Obernolte (R-CA-23) and Ted Lieu (D-CA-36). The Task Force’s first major objective is to pass the CREATE AI Act, which would make the National Science Foundation’s National AI Research Resource (NAIRR) pilot a fully funded program (Databricks is contributing an instance of the Databricks Data Intelligence Platform for the NAIRR pilot).


Regulation at the State Level

Individual states are also examining how to regulate AI and, in some cases, have passed legislation into law. More than 90 AI-related bills were introduced in state legislatures in 2023. California made headlines last year when Governor Gavin Newsom issued an executive order focused on generative AI. The order tasked state agencies with producing a series of reports and recommendations for future regulation on topics like privacy and civil rights, cybersecurity, and workforce benefits. Other states, like Connecticut, Maryland, and Texas, passed laws calling for further study of AI, particularly its impact on state government.

State lawmakers are in a rare position to advance legislation quickly thanks to a record number of state governments under single-party control, avoiding the partisan gridlock experienced by their federal counterparts. Already in 2024, lawmakers in 20 states have introduced 89 bills or resolutions pertaining to AI. California’s unique position as a legislative testing ground and its concentration of companies involved in AI make the state a bellwether for legislation, and several potential AI bills are in various stages of consideration in the California state legislature. Proposed comprehensive AI legislation is also moving forward at a fairly rapid pace in Connecticut.

Outside the United States

The U.S. is not alone in pursuing a regulatory framework to govern AI. As we think about the future of regulation in this space, it’s important to maintain a global view and keep a close watch on the emerging regulatory frameworks other governments and legal bodies are enacting.

European Union

The EU is leading efforts to enact comprehensive AI regulation, with the far-reaching EU AI Act nearing formal enactment. The EU member states reached unanimous agreement on the text on February 2, 2024, and the Act was passed by Parliament on March 13, 2024. Enforcement will commence in stages starting in late 2024/early 2025. The EU AI Act categorizes AI applications based on their risk levels, with a focus on potential harm to health, safety, and fundamental rights. The Act imposes stricter regulations on AI applications deemed high-risk, while outright banning those considered to pose unacceptable risks. The Act also seeks to divide responsibilities appropriately between developers and deployers. Developers of foundation models are subject to a set of specific obligations designed to ensure that these models are safe, secure, ethical, and transparent. The Act provides a general exemption for open source AI, except when it is deployed in a high-risk use case or forms part of a foundation model posing “systemic risk” (i.e., a frontier model).

United Kingdom

Although the U.K. thus far has not pushed forward with comprehensive AI regulation, the early November 2023 U.K. AI Safety Summit in historic Bletchley Park (with Databricks participating) was the most visible and broadly attended global event so far to address AI risks, opportunities and potential regulation. While the summit focused on the risks presented by frontier models, it also highlighted the benefits of AI to society and the need to foster AI innovation. 

As part of the U.K. AI Summit, 28 countries (including China) plus the EU agreed to the Bletchley Declaration calling for international collaboration in addressing the risks and opportunities presented by AI. In conjunction with the Summit, both the U.K. and the U.S. announced the formation of national AI Safety Institutes, committing these bodies to closely collaborate with each other going forward (the U.K. AI Safety Institute received initial funding of £100 million, in contrast to the $10 million allocated thus far by the U.S. to its own AI Safety Institute). There was also an agreement to conduct additional global AI Safety Summits, with the next one being a “virtual mini summit” to be hosted by South Korea in May 2024, followed by an in-person summit hosted by France in November 2024.

Elsewhere

During the same week the U.K. was hosting its AI Safety Summit and the Biden Administration issued its executive order on AI, leaders of the G7 announced a set of International Guiding Principles on AI and a voluntary Code of Conduct for AI developers. Meanwhile, AI regulations are being discussed and proposed at an accelerating pace in numerous other countries around the world.

Pressure to Voluntarily Pre-Commit

Many parties, including the U.S. White House, G7 leaders, and numerous attendees at the U.K. AI Safety Summit, have called for voluntary compliance with pending AI regulations and emerging industry standards. Companies using AI will face increasing pressure to take steps now to meet the general requirements of regulation to come.

For example, the AI Pact is a program calling for parties to voluntarily commit to the EU AI Act before it becomes enforceable. Similarly, the White House has been encouraging companies to voluntarily commit to implementing safe and secure AI practices, with the latest round of such commitments applying to healthcare companies. The Code of Conduct for advanced AI systems created by the OECD under the Hiroshima Process (and released by G7 leaders the week of the U.K. AI Safety Summit) is voluntary but strongly encouraged for developers of powerful generative AI models.

The increasing pressure to make these voluntary commitments means that many companies will face various compliance obligations fairly soon. In addition, many companies see voluntary compliance as a potential competitive advantage.

What Do All These Efforts Have in Common?

The emerging AI regulations have varied, complex requirements, but they carry recurring themes. Obligations commonly arise in five key areas (a brief illustrative sketch follows the list):

  1. Data and model security and privacy protection, required at all stages of the AI development and deployment cycle
  2. Pre-release risk assessment, planning, and mitigation, focused on training data and on implementing guardrails that address bias, inaccuracy, and other potential harms
  3. Documentation required at release, covering the steps taken in development and the nature of the AI model or system (capabilities, limitations, description of training data, risks, mitigation steps taken, etc.)
  4. Post-release monitoring and ongoing risk mitigation, focused on preventing inaccurate or otherwise harmful generated output, avoiding discrimination against protected groups, and ensuring users realize they are dealing with AI
  5. Minimizing the environmental impact of the energy used to train and run large models
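
To make one of these themes concrete, here is a minimal, hypothetical sketch of how a team might capture release documentation (theme 3) alongside a trained model using MLflow, the open source ML lifecycle tool integrated into the Databricks platform. The model card schema, model name, and field values below are illustrative assumptions, not a regulator-defined or Databricks-prescribed standard:

```python
import mlflow

# Illustrative model card; the schema is an assumption for this sketch,
# not a mandated regulatory format.
model_card = {
    "model_name": "support-ticket-classifier",  # hypothetical model
    "intended_use": "Routing customer support tickets by topic",
    "capabilities_and_limitations": (
        "English-language tickets only; accuracy degrades on "
        "jargon outside the training distribution."
    ),
    "training_data_description": "Anonymized support tickets, 2021-2023",
    "known_risks": ["misclassification of urgent tickets"],
    "mitigations": ["human review of low-confidence predictions"],
}

with mlflow.start_run(run_name="release-v1.0"):
    # Store the documentation as a versioned artifact next to the
    # run's metrics and parameters so it can be retrieved for audit.
    mlflow.log_dict(model_card, "model_card.json")
```

Because the documentation lives with the run rather than in a separate document repository, it stays versioned alongside the model and can be produced later when demonstrating compliance.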

What Budding Regulation Means for Databricks Customers

Although many of the headlines generated by this whirlwind of governmental activity have focused on high-risk use cases and frontier AI risk, there will likely be near-term impact on the development and deployment of other AI as well, stemming in particular from pressure to make voluntary pre-enactment commitments to the EU AI Act and from the Biden Executive Order’s short time horizons in various areas. As with most other proposed AI regulatory and compliance frameworks, data governance, data security, and data quality are of paramount importance.

Databricks is following the ongoing regulatory developments very carefully. We support thoughtful AI regulation and are committed to helping our customers meet AI regulatory requirements and responsible AI objectives. We believe the advancement of AI depends on building trust in intelligent applications, which in turn requires that everyone involved in developing and using AI follow responsible and ethical practices, in alignment with the goals of AI regulation. Meeting these objectives requires that every organization have full ownership and control of its data and AI models, along with comprehensive monitoring, privacy controls, and governance across all phases of AI development and deployment. To that end, the Databricks Data Intelligence Platform unifies data, model training, management, monitoring, and governance across the entire AI lifecycle. This unified approach empowers organizations to meet responsible AI objectives, deliver data quality, provide more secure applications, and help maintain compliance with regulatory standards.

In the upcoming second post of our series, we’ll take a deep dive into how customers can use the tools of the Databricks Data Intelligence Platform to help comply with AI regulations and meet their objectives for the responsible use of AI. In particular, we’ll discuss Unity Catalog, a unified governance and security solution that can help address the safety, security, and governance concerns raised by AI regulation, and Lakehouse Monitoring, a monitoring tool useful across the full AI and data spectrum.
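
As a preview of the governance side, here is a minimal, hypothetical sketch of the kind of access-control and documentation posture Unity Catalog enables, expressed as Spark SQL statements run from a Databricks notebook (where a `spark` session is predefined). The catalog, schema, table, and group names are assumptions for illustration only:

```python
# Run inside a Databricks notebook with Unity Catalog enabled;
# all object and group names below are hypothetical.

# Restrict access to sensitive training data used by an AI model
# to the team that needs it.
spark.sql(
    "GRANT SELECT ON TABLE main.ai_training.customer_features "
    "TO `ml-engineers`"
)

# Revoke broader access that would conflict with privacy obligations.
spark.sql(
    "REVOKE ALL PRIVILEGES ON SCHEMA main.ai_training FROM `all-users`"
)

# Attach documentation to the table itself so reviewers can see
# its provenance and review status.
spark.sql(
    "COMMENT ON TABLE main.ai_training.customer_features IS "
    "'Anonymized features; bias review completed 2024-02; "
    "owner: ml-platform team'"
)
```

Centralizing grants and table-level documentation in one governance layer is what makes this pattern useful for the security, privacy, and transparency themes above; Part 2 will cover it in depth.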

And if you’re interested in how to mitigate the risks associated with AI, sign up for the Databricks AI Security Framework.

 

¹ Databricks is collaborating with NIST in the Artificial Intelligence Safety Institute Consortium to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies. NIST does not evaluate commercial products under this Consortium and does not endorse any product or service used. Additional information on this Consortium can be found at: Federal Register Notice - USAISI Consortium.
