Under the Trump administration, AI scientists are told to remove "ideological bias" from powerful models

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of "AI safety," "responsible AI," and "AI fairness" from the skills it expects of members, and that introduce a request to prioritize "reducing ideological bias" to enable human flourishing and economic competitiveness.

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent out in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter enormously because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools "for authenticating content and tracking its provenance" as well as "labeling synthetic content," signaling reduced interest in tracking misinformation and deepfakes. It also adds an emphasis on putting America first, asking one working group to develop testing tools "to expand America's global AI position."

"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. "Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly," the researcher claims.

"It's wild," says another researcher who has worked with the AI Safety Institute in the past. "What does it even mean for humans to flourish?"

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled "racist" and "woke." He often cites an incident in which one of Google's models debated whether it would be wrong to misgender someone even if doing so would prevent a nuclear apocalypse, a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher recently developed a novel technique for altering the political leanings of large language models, as WIRED reported.

A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a 2021 study of Twitter's recommendation algorithm found that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk's so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment widely believed to be hostile to anyone who might oppose the Trump administration's aims. Some government departments, such as the Department of Education, have archived and deleted documents. DOGE has also targeted NIST, AISI's parent organization, in recent weeks; dozens of employees have been fired.
