A little more than two years ago, the technology leaders at the forefront of developing artificial intelligence made an extraordinary plea to lawmakers: They wanted Washington to regulate them.
The tech executives warned that generative A.I., which can produce text and images that mimic human creations, had the potential to disrupt national security and elections, and could eventually eliminate millions of jobs.
A.I. could go “quite wrong,” Sam Altman, the chief executive of OpenAI, told Congress in May 2023. “We want to work with the government to prevent that from happening.”
But since the election of President Trump, tech leaders and their companies have changed their tune and, in some cases, reversed course, boldly asking the government to stay out of their way even as they make their most aggressive push yet to develop their products.
In recent weeks, Meta, Google, OpenAI and others have asked the Trump administration to block state A.I. laws and to declare that it is legal for them to use copyrighted material to train their A.I. models. They are also lobbying to use federal data to develop the technology, and for easier access to the energy sources their computing demands require. And they have asked for tax breaks, grants and other incentives.
The shift has been enabled by Mr. Trump, who has declared that A.I. is the nation’s most valuable weapon in outpacing China in advanced technologies.
On his first day in office, Mr. Trump signed an executive order rolling back the safety-testing rules for A.I. used by the government. Two days later, he signed another order soliciting industry suggestions to create a policy “to sustain and enhance America’s global A.I. dominance.”
Tech companies “are really emboldened by the Trump administration, and even issues like safety and responsible A.I. have completely disappeared from their concerns,” said Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies, a nonprofit think tank. “The only thing that counts is establishing U.S. leadership in A.I.”
Many A.I. policy experts worry that such unfettered growth could be accompanied by, among other potential problems, the rapid spread of political and health misinformation; discrimination by automated financial, job and housing application screeners; and cyberattacks.
The reversal by the tech leaders is stark. In September 2023, more than a dozen of them endorsed A.I. regulation at a summit on Capitol Hill organized by Senator Chuck Schumer, Democrat of New York and the majority leader at the time. At the meeting, Elon Musk warned of the “civilizational risks” posed by A.I.
In the aftermath, the Biden administration began working with the biggest A.I. companies to test their systems for safety and security weaknesses and mandated safety standards for the government’s use of the technology. States like California introduced legislation to regulate the technology with safety standards. And publishers, authors and actors sued tech companies over their use of copyrighted material to train A.I. models.
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
But after Mr. Trump won the election in November, tech companies and their leaders immediately ramped up their lobbying. Google, Meta and Microsoft each donated $1 million to Mr. Trump’s inauguration, as did Mr. Altman and Tim Cook of Apple. Mark Zuckerberg of Meta threw an inauguration party and has met with Mr. Trump several times. Mr. Musk, who has his own A.I. company, xAI, has spent nearly every day at the president’s side.
In turn, Mr. Trump has hailed A.I. announcements, including a plan by OpenAI, Oracle and SoftBank to invest $100 billion in A.I. data centers, the huge buildings full of servers that provide computing power.
“We must lean into the future of artificial intelligence with optimism and hope,” Vice President JD Vance told government officials and tech leaders last week.
At an A.I. summit in Paris last month, Mr. Vance also called for “pro-growth” A.I. policies, and warned world leaders against “excessive regulation” that could “kill a transformative industry just as it’s taking off.”
Now tech companies and others affected by A.I. are responding to the president’s second executive order, “Removing Barriers to American Leadership in Artificial Intelligence,” which mandated the development of an A.I. growth policy within 180 days. Hundreds of them have filed comments with the National Science Foundation and the Office of Science and Technology Policy to influence that policy.
OpenAI filed 15 pages of comments, asking the federal government to pre-empt states from creating A.I. laws. The San Francisco company also invoked DeepSeek, a Chinese chatbot created for a fraction of the cost of U.S.-developed chatbots, calling it “an important gauge of the state of this competition” with China.
If Chinese developers “have unfettered access to data and American companies are left without fair use access, the race for A.I. is effectively over,” OpenAI said, asking the U.S. government to turn over data to feed its systems.
Several tech companies also argued that their use of copyrighted material to train A.I. models was legal, and that the administration should take their side. OpenAI, Google and Meta said they believed they had legal access to copyrighted works like books, films and art for training.
Meta, which has its own A.I. model, called Llama, pushed the White House to issue an executive order or other action “clarifying that the use of publicly available data to train models is unequivocally fair use.”
Google, Meta, OpenAI and Microsoft have said their use of copyrighted data was legal because the information was transformed in the process of training their models and was not being used to replicate the intellectual property of rights holders. Actors, authors, musicians and publishers have countered that the tech companies should compensate them for acquiring and using their works.
Some tech companies also lobbied the Trump administration to endorse “open source” A.I., which makes computer code freely available to copy, modify and reuse.
Meta, which owns Facebook, Instagram and WhatsApp, pushed for a policy recommendation endorsing open-source technology, which other A.I. companies, like Anthropic, have described as heightening exposure to security risks. Meta said open-source technology speeds up A.I. development and can help start-ups catch up with more established companies.
Andreessen Horowitz, a Silicon Valley venture capital firm with stakes in dozens of A.I. companies, also called for support of open-source models, which many of its companies rely on to create A.I. products.
Andreessen Horowitz made the starkest arguments against new regulations, saying existing laws on safety, consumer protection and civil rights are sufficient.
“Prohibit the harm and punish the bad actors, but do not require developers to jump through onerous regulatory hoops based on speculative fear,” Andreessen Horowitz said in its comments.
Others continued to warn that A.I. needs to be regulated. Civil rights groups called for rules to ensure that A.I. does not discriminate against vulnerable populations in housing and employment decisions.
Artists and publishers said A.I. companies should have to disclose their use of copyrighted material, and asked the White House to reject the tech industry’s argument that its unauthorized use of intellectual property to train models falls within the bounds of copyright law. The Center for AI Policy, a research and lobbying group, called for third-party audits of systems for national security vulnerabilities.
“In any other industry, if a product harms or negatively affects consumers, that product is defective, and the same standards should apply to A.I.,” said K.J. Bagchi, vice president of the Center for Civil Rights and Technology, who submitted one of the requests.