Nvidia, Adobe Among Companies Joining White House AI Standards Agreement
Chipmaker Nvidia and seven other companies have agreed to a set of artificial intelligence (AI) standards led by the White House. The standards include requirements to disclose AI-generated content, share vulnerabilities, and commit to external testing before releasing products. The companies joining the agreement include Adobe, Palantir, IBM, and Salesforce. The White House has been engaging with industry on AI development and is promoting steps to ensure AI safety and security. The agreement will go into effect immediately and requires companies to prioritize research into minimizing harm and addressing security challenges.
Nvidia, a leading chipmaker, is one of eight companies newly committing to a set of artificial intelligence (AI) standards led by the White House, joining earlier signatories such as Amazon, Alphabet, Microsoft, and OpenAI. The new signatories have agreed to disclose when content is AI-generated, share information about vulnerabilities, and undergo external testing before releasing their products.
The White House has been actively engaging with industry leaders to promote AI development and ensure the implementation of necessary rules and regulations. This comes as AI becomes more prevalent in society and policymakers consider the implications of its widespread use.
Among the companies joining the commitment are Adobe, which has incorporated AI tools into its Photoshop software, and Stability AI, which offers AI image-generation tools. One key element of the standards is a requirement to clearly label AI-generated content, for example with watermarks.
Palantir, which provides data-mining services to government agencies and has seen strong demand tied to AI, has also joined the agreement. Another key provision is the sharing of information across the industry and with government agencies, academics, and risk-management organizations.
Other companies focused on generative AI development, such as Cohere and Scale AI, have also joined. These companies are required to report the capabilities and limitations of their AI systems, as well as identify appropriate and inappropriate uses.
IBM and Salesforce, both developing their own AI platforms, have also committed to the agreement. The voluntary commitment emphasizes the importance of prioritizing research to minimize the potential harm caused by AI tools, including addressing security challenges, eliminating biases, and protecting privacy.
The agreement, which takes effect immediately, calls for internal and external security testing of AI systems before release. It also emphasizes safety and security measures such as guarding against insider threats and facilitating third-party discovery and reporting of AI vulnerabilities.
The White House has been proactive in promoting AI safety and security, including the development of a "Blueprint for an AI Bill of Rights" to protect individuals' rights. The Office of Management and Budget (OMB) is also set to introduce a policy that will establish guidelines for the government's use of AI.
The commitment of these companies to the White House-led AI standards underscores the importance of responsible AI development and the need for transparency, safety, and security in the rapidly evolving field of artificial intelligence.