Sunday, July 14, 2024

MongoDB Joins US AI Safety Institute Consortium

A new U.S. government AI safety institute brings together leaders from industry, government, academia, and non-profit organizations to develop standards for the responsible use of artificial intelligence.

MongoDB, Inc. announced that it is a founding member of the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), which was established by the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce. Leaders from industry, government, academia, and non-profit organizations participating in the consortium will collaborate with NIST to support its efforts to create safe and trustworthy artificial intelligence (AI). The consortium will focus on the most advanced AI systems, such as state-of-the-art foundation models, to assess the risk and impact of current and next-generation AI technology on individuals and society. By defining a new measurement science to identify proven, scalable, and interoperable techniques and metrics for testing and verifying the impact of AI systems, the consortium will create standards and guidelines that promote AI’s development and responsible use.

“We believe that technology driven by software and data makes the world a better place, and we see our customers building modern applications achieving that every day,” said Lena Smart, Chief Information Security Officer at MongoDB. “New technology like generative AI can immensely benefit society, but we must ensure AI systems are built and deployed using standards that help ensure they operate safely and without harm across populations. By supporting the U.S. Artificial Intelligence Safety Institute Consortium as a founding member, MongoDB aims to use scientific rigor, our industry expertise, and a human-centered approach to guide organizations on safely testing and deploying trustworthy AI systems without stifling innovation.”

As a founding member of the AISIC, MongoDB will draw on its extensive experience working with startups, enterprises, and governments that are continually building and deploying modern, mission-critical applications to provide expertise across multiple fields, including:

  • Globally distributed, multi-cloud software systems that are highly performant, reliable, and secure.
  • Vector embedding, storage, and retrieval technologies that power AI systems at scale.
  • Developer experience engineering that integrates AI models and frameworks across programming languages and hyperscale cloud providers.
  • First-of-its-kind encryption techniques, data governance, and cybersecurity technologies and methods that ensure data protection, lineage, and integrity.
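To illustrate the vector retrieval capability mentioned above, here is a minimal sketch of a query built for MongoDB Atlas Vector Search's `$vectorSearch` aggregation stage. The index name, field names, and vector values are hypothetical placeholders for illustration, not details from the announcement.

```python
# Sketch of a vector retrieval query for MongoDB Atlas Vector Search.
# "embedding_index", the "embedding" field, and the vectors below are
# hypothetical placeholders.

def build_vector_search_pipeline(query_vector, limit=5):
    """Build an aggregation pipeline using Atlas's $vectorSearch stage."""
    return [
        {
            "$vectorSearch": {
                "index": "embedding_index",   # hypothetical index name
                "path": "embedding",          # document field storing vectors
                "queryVector": query_vector,
                "numCandidates": limit * 20,  # candidates considered before ranking
                "limit": limit,               # number of results returned
            }
        },
        # Project the document text and the similarity score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.07, 0.33], limit=3)
# Against a live Atlas cluster, this pipeline would be passed to
# collection.aggregate(pipeline) via a driver such as PyMongo.
```

The pipeline is plain data, so it can be constructed and inspected without a database connection; only the final `aggregate` call requires a cluster with a vector index.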

“The U.S. government has a significant role in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said U.S. Secretary of Commerce Gina Raimondo. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack—and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The AISIC includes more than 200 member organizations that are on the frontlines of developing and using AI systems, as well as teams from civil society and academia that are building the foundational understanding of how AI can and will transform our society. These stakeholders represent the nation’s largest companies and its innovative startups, creators of the world’s most advanced AI systems and hardware, key members of civil society and the academic community, and representatives of professions with deep engagement in AI’s use today. The consortium also includes state and local governments, as well as non-profit organizations. The consortium will also work with organizations from like-minded nations that have a key role to play in setting interoperable and effective AI safety standards around the world.
