Author | Kevin Roose
compile | Jingyu
Link:https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html
original title:How Do the White House’s AI Commitments Stack Up?
(ChinaIT.com News) Just three days after Meta announced the new generation of open-source large language model Llama 2, the company’s top management, Microsoft, OpenAI, Amazon, Anthropic and other seven giants in the AI industry and executives of hot start-up companies gathered again at the White House in the United States.
In this event, seven companies signed a “Voluntary Commitments” (Voluntary Commitments), making “8 major commitments” on the safety, transparency, and risks of AI technology and R&D.
The AI company’s “seeking supervision” drama has reached a small climax.
NYT columnist Kevin Roose wrote an article, explaining in detail what the “8 major promises” have said, and the changes they have brought about.
The result of the research is that these 8 commitments, such as security testing, sharing AI security information, reporting vulnerabilities, and “prioritizing AI to solve social challenges”, seem to be some big problems, and the corresponding commitments lack details. Many of the problems themselves have been “publicized” to the public by AI companies.
thisThis “Resource Commitment Letter” is more like an indirect “result display” of several inquiries from the US regulators to AI companies in the past six months, and it probably does not have much implementation significance.. However, regulators have expressed concern about AI several times, indicating the government’s position on the application of AI technology is the greatest significance of this show.
The following is Kevin Roose’s interpretation of the 8 commitments in the “Voluntary Commitment Letter”:
Commitment 1: The company commits to internal and external safety testing of AI systems before releasing them.
These AI companies have all conducted safety tests on their models—often referred to as “red team testing”—before they are released. To some extent,It’s not really a new promise, it’s vague.No details provided about what kind of testing is required or by whom.
In a subsequent statement, the White House said only that testing of the AI model “will be conducted in part by independent experts” and will focus on “AI risks such as biosecurity and cybersecurity, and their wider societal implications“.
It’s a good idea to have AI companies publicly commit to continuing such testing, and to encourage more transparency in the testing process. There are also types of AI risks—such as the danger that AI models could be used to develop biological weapons—that government and military officials may be better placed to assess than companies.
I’m glad to see the AI industry agree on a set of standard safety tests, such as Alignment Research Center’s “self-replication” tests for pre-release models from OpenAI and Anthropic. I’d also like to see the federal government fund such tests, which can be costly and require engineers with significant technical expertise. at present,Many security tests are funded and overseen by corporations, raising obvious conflict of interest issues.
Commitment 2: The company commits to sharing information on managing AI risks across industry and with government, civil society and academia.
This promise is also a bit vague. Some of these companies have published information about their AI models — often in academic papers or corporate blog posts. Some of these companies, including OpenAI and Anthropic,Also released a document called “System Cards” outlining the steps they are taking to make these models more secure.
But they also sometimes withhold information, citing security concerns.when When OpenAI released the latest artificial intelligence model GPT-4 this year, itBreaking with industry practice, choose not to disclose the amount of its training data or how big its model is (a metric called “parameters”). The company said it declined to disclose the information due to competition and safety concerns. It also happens to be the type of data that tech companies like to keep away from competitors.
Under these new commitments, will AI companies be forced to disclose such information? What if doing so risks accelerating an AI arms race?
i doubt the white house’s goalsInstead of forcing companies to disclose the number of their parameters, they are encouraged to exchange information with each other about what their models do (or do not) pose risks.
But even this information sharing can be risky. If Google’s AI team, during pre-release testing, prevents new models from being used to design deadly biological weapons, should it share that information outside of Google? Does this give bad actors ideas on how to get a less protected model to perform the same task?
Commitment 3: The company commits to invest in cybersecurity and insider threat protection measures to protect proprietary and unpublished model weights.
This question is very simple and not controversial among the AI insiders I spoke to.“Model Weights” is a technical term referring to the mathematical instructions that give an AI model its ability to function. If you’re an agent of a foreign government (or a rival company) looking to build your own version of ChatGPT or other AI product, weights are what you want to steal. AI companies have a vested interest in tightly controlling this.
The problem of model weight leakage is well known.
For example, the weights for Meta’s original LLaMA language model were leaked on 4chan and other sites just days after the model was released publicly. Given the risk of more leaks, and the possibility that other countries may be interested in stealing the technology from U.S. companies, requiring AI companies to invest more in their own security seems like a no-brainer.
Commitment 4: The companies commit to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.
I’m not quite sure what that means. After every AI company releases a model, it discovers holes in its model, usually because users are trying to do bad things with the model, or circumvent “jailbreaking” in ways the company didn’t foresee.
The White House has pledged to require companies to create “robust reporting mechanisms” for the vulnerabilities, but it’s unclear what that might mean.
An in-app feedback button, similar to the ones that allow Facebook and Twitter users to report offending posts? A bug bounty program, like the one OpenAI launched this year to reward users who find bugs in its systems? Is there anything else? We will have to wait for more details.
Commitment 5: The company is committed to developing strong technical mechanisms to ensure users know when content is generated by artificial intelligence, such as watermarking systems.
It’s an interesting idea, but leaves a lot of room for interpretation.
so far,AI companies have been working hard to design tools that allow people to tell if they are viewing AI-generated content. There are good technical reasons for this, but it’s a real problem when people can pass off AI-generated jobs as their own (ask any high school teacher.)
Many of the current tools advertised as being able to detect the output of AI actually cannot do so with any degree of accuracy.
I’m not optimistic that this issue will be fully resolved, but I’m glad companies are committing to work on it.
Commitment 6: Companies commit to publicly reporting on the capabilities, limitations, and areas of appropriate and inappropriate use of their AI systems.
Another sensible-sounding commitment with plenty of wiggle room.
How often do companies need to report on the capabilities and limitations of their systems? How detailed must this information be?Considering that many companies building AI systems are surprised by the capabilities of their systems after the factso to what extent can they really describe these systems in advance?
Commitment 7: Companies commit to prioritizing research on the risks to society that AI systems may pose, including avoiding harmful bias and discrimination and protecting privacy.
Commitment to “prioritizing research” is a vague commitment. Still, I believe this commitment will be welcomed by many in the AI ethics community, who want AI companies to prioritize preventing near-term harm like bias and discrimination, rather than worrying about doomsday scenarios, as AI safety folks do.
If you’re confused about the difference between “AI ethics” and “AI safety,” know that there are two rival factions within the AI research community, each of which believes the other is focused on preventing harm from mistakes.
Commitment 8: Both companies are committed to developing and deploying advanced artificial intelligence systems to help solve society’s greatest challenges.
I don’t think many people would argue that advanced AI shouldn’t be used to help solve society’s biggest challenges. I wouldn’t disagree with the White House citing cancer prevention and climate change mitigation as two areas it wants AI companies to focus on.
Complicating this goal somewhat, however, is that in artificial intelligence research, things that seem boring at first tend to have more serious consequences.
Some of the techniques employed by DeepMind’s AlphaGo, an artificial intelligence system trained to play the board game Go, are very effective at predicting the three-dimensional structure of proteinsvery usefulwhich is a major discovery that promotes basic scientific research.
Overall, the White House’s deals with AI companies appear to be more symbolic than substantive. There are no enforcement mechanisms to ensure companies abide by these commitments, many of which reflect precautions already taken by AI companies.
Still, it’s a reasonable first step.
Agreeing to abide by these rules shows that AI companies have learned from the failures of early tech companies that waited until they got in trouble before engaging with governments. In Washington, at least when it comes to tech regulation, it pays to show up early.
*Head image source:Visual China
Articles on the website are limited to providing more information and do not represent the views of this website. For reprint, please indicate the source. The reproduced articles come from the Internet. If you have any copyright issues, please contact us: content@chinait.com.