In a significant move towards ensuring the safe and responsible development of frontier AI models, four major tech companies – OpenAI, Google, Microsoft, and Anthropic – announced the formation of the Frontier Model Forum.

This new industry body aims to draw on its member companies’ technical and operational expertise to benefit the AI ecosystem.

Frontier Model Forum’s Key Focus

The Frontier Model Forum will focus on three key areas over the coming year.

Firstly, it will identify best practices to promote knowledge sharing among industry, governments, civil society, and academia, focusing on safety standards and procedures to mitigate potential risks.

Secondly, it will advance AI safety research by identifying the most important open research questions on AI safety.

The Forum will coordinate research efforts in adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection.

Lastly, it will facilitate information sharing among companies and governments by establishing trusted, secure mechanisms for sharing information regarding AI safety and risks.

The Forum defines frontier models as large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models and can perform a wide variety of tasks.

Forum Membership Requirements

Membership is open to organizations that develop and deploy frontier models, demonstrate a solid commitment to frontier model safety, and are willing to contribute to advancing the Forum’s efforts.

In addition, the Forum will establish an Advisory Board to guide its strategy and priorities.

The founding companies will also establish vital institutional arrangements, including a charter, governance, and funding, with a working group and executive board to lead these efforts.

The Forum plans to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate.

The Frontier Model Forum will also seek to build on the valuable work of existing industry, civil society, and research efforts across each workstream.

Initiatives such as the Partnership on AI and MLCommons continue to make significant contributions to the AI community. The Forum will explore ways to collaborate with and support these and other valuable multistakeholder efforts.

The leaders of the founding companies expressed their excitement and commitment to the initiative.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfill the promise of AI to benefit everyone.”

Kent Walker, President, Global Affairs, Google & Alphabet

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Brad Smith, Vice Chair & President, Microsoft

“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

Anna Makanju, Vice President of Global Affairs, OpenAI

“Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”

Dario Amodei, CEO, Anthropic

Red Teaming For Safety

Anthropic, in particular, highlighted the importance of cybersecurity in developing frontier AI models.

The maker of Claude 2 has recently unveiled its strategy for “red teaming,” an adversarial testing technique aimed at bolstering AI systems’ safety and security.

This intensive, expertise-driven method evaluates risk baselines and establishes consistent practices across numerous subject domains.

As part of their initiative, Anthropic conducted a classified study into biological risks, concluding that unmitigated models could pose imminent threats to national security.

Yet, the company also identified substantial mitigating measures that could minimize these potential hazards.

The frontier threats red teaming process involves working with domain experts to define threat models, developing automated evaluations based on expert insights, and ensuring the repeatability and scalability of these evaluations.
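To make the "repeatable, scalable evaluation" idea concrete, here is a minimal, hypothetical sketch of what an automated red-team evaluation harness along these lines could look like. The ThreatCase structure, the keyword-based scoring, and the query_model callback are illustrative assumptions, not Anthropic's actual tooling.

```python
# Hypothetical sketch of a repeatable automated red-team evaluation.
# The prompt set, model client, and scoring rule are illustrative
# assumptions, not Anthropic's actual tooling.
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class ThreatCase:
    case_id: str
    prompt: str                   # adversarial prompt derived from expert threat models
    disallowed_terms: list[str]   # simplistic stand-in for an expert grading rubric

def evaluate(cases: list[ThreatCase],
             query_model: Callable[[str], str]) -> dict:
    """Run every case against the model and record a pass/fail verdict."""
    results = []
    for case in cases:
        response = query_model(case.prompt)
        # A real evaluation would rely on expert graders or a trained classifier;
        # a keyword check keeps this example self-contained.
        failed = any(term.lower() in response.lower()
                     for term in case.disallowed_terms)
        results.append({"case_id": case.case_id, "failed": failed})
    failure_rate = sum(r["failed"] for r in results) / max(len(results), 1)
    return {"failure_rate": failure_rate, "results": results}

if __name__ == "__main__":
    cases = [ThreatCase("demo-1", "Describe how to ...", ["step-by-step"])]
    report = evaluate(cases, query_model=lambda p: "I can't help with that.")
    print(json.dumps(report, indent=2))  # stable output allows run-to-run comparison
```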

In their biosecurity-focused study involving more than 150 hours of red teaming, Anthropic discovered that advanced AI models can generate intricate, accurate, and actionable knowledge at an expert level.

As models increase in size and gain access to tools, their proficiency in fields such as biology grows, meaning these risks could materialize within two to three years.

Anthropic’s research led to the discovery of mitigations that reduce harmful outputs during the training process and make it challenging for malevolent actors to obtain detailed, linked, expert-level information for destructive purposes.

Currently, these mitigations are integrated into Anthropic’s public-facing frontier model, with further experiments in the pipeline.

AI Companies Commit To Managing AI Risks

Last week, the White House brokered voluntary commitments from seven principal AI companies—Amazon, OpenAI, Google, Microsoft, Inflection, Meta, and Anthropic.

The seven AI companies, seen as shaping the future of the technology, were entrusted with the responsibility of ensuring the safety of their products.

The Biden-Harris Administration stressed the need to uphold the highest standards to ensure that innovative strides are not taken at the expense of American citizens’ rights and safety.

The three guiding principles that the participating companies are committed to are safety, security, and trust.

Before shipping a product, the companies pledged to complete internal and external security testing of AI systems, managed partly by independent experts. The aim is to counter risks in areas such as biosecurity, cybersecurity, and broader societal harms.

Security was at the forefront of these commitments: the companies promised to bolster cybersecurity and establish insider-threat safeguards to protect proprietary and unreleased model weights, the core component of an AI system.

To instill public trust, companies also committed to the creation of robust mechanisms to inform users when content is AI-generated.

They also pledged to issue public reports on AI systems’ capabilities, limitations, and usage scope. These reports could shed light on security and societal risks, including the effects on fairness and bias.

Further, these companies are committed to advancing AI systems to address some of the world’s most significant challenges, including cancer prevention and climate change mitigation.

As part of the agenda, the administration plans to work with international allies and partners to establish a robust framework governing the development and use of AI.

Public Voting On AI Safety

In June, OpenAI launched an initiative with the Citizens Foundation and The Governance Lab to ascertain public sentiment on AI safety.

A website was created to foster discussion about the potential risks associated with LLMs.

Members of the public could vote on AI safety priorities via a tool known as AllOurIdeas, designed to help understand how people prioritize the various considerations associated with AI risks.

The tool employs a method called “Pairwise Voting,” which prompts users to juxtapose two potential AI risk priorities and select the one they deem more crucial.
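As a rough illustration of how such pairwise votes can be aggregated into a priority ranking, the sketch below scores each idea by the share of comparisons it wins. AllOurIdeas uses its own scoring model, so the sample vote data and the win-rate calculation here are assumptions made purely for illustration.

```python
# Simplified illustration of turning pairwise votes into a ranking.
# AllOurIdeas uses its own scoring model; the sample votes and win-rate
# calculation below are assumptions used only to make the idea concrete.
from collections import defaultdict

votes = [
    # (winner, loser) pairs from individual "which matters more?" choices
    ("misinformation", "job displacement"),
    ("misinformation", "bias in training data"),
    ("bias in training data", "job displacement"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank ideas by the share of comparisons they won.
ranking = sorted(appearances, key=lambda idea: wins[idea] / appearances[idea], reverse=True)
for idea in ranking:
    print(f"{idea}: won {wins[idea]} of {appearances[idea]} comparisons")
```

A simple win rate is enough to show the mechanics; systems of this kind typically use more robust estimators to account for ideas that appear in very few comparisons.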

The objective is to glean as much information as possible about public concerns, thus directing resources more effectively toward addressing the issues that people find most pressing.

The votes helped to gauge public opinion about the responsible development of AI technology.

In the coming weeks, a virtual roundtable discussion will be organized to evaluate the results of this public consultation.

A GPT-4 analysis of the votes determined that the top three ideas were as follows:

  1. Models need to be as intelligent as possible and recognize the biases in their training data.
  2. Everyone, regardless of their race, religion, political leaning, gender, or income, should have access to impartial AI technology.
  3. The cycle of AI aiding in the progress of knowledge, which serves as the foundation for AI, should not impede progress.

Conversely, there were three unpopular ideas:

  1. A balanced approach would involve government bodies providing guidance, which AI companies can use to create their advice.
  2. Life-or-death decisions involving advanced weaponry should not be made using AI.
  3. Using AI for political or religious purposes is not recommended, as it may create a new approach to campaigning.

The Future Of AI Safety

As AI plays an increasingly prominent role in search and digital marketing, these developments hold substantial significance for those in marketing and tech.

These commitments and initiatives made by leading AI companies could shape AI regulations and policies, leading to a future of responsible AI development.




Information from Search Engine Journal.

