Lina Khan, chair of the Federal Trade Commission, issued a joint statement Tuesday along with three other federal agencies warning companies that regulators are monitoring the use of artificial intelligence for bias. Pool file photo by Graeme Jennings/UPI | License Photo
April 25 (UPI) — The Biden administration is warning businesses and financial firms that federal agencies have the authority to crack down on civil rights and other bias violations committed through the use of artificial intelligence.
The Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission and the Equal Employment Opportunity Commission issued the warning amid growing concern about the widespread use of advanced technology in daily life and its potential effects on civil rights, as well as fair competition and consumer protection.
The four agencies previously raised concerns about illegal uses of the rapidly advancing technology, and their warning comes as lawmakers work on legislation establishing a new regulatory framework for AI.
On Tuesday, they said in a joint statement that automated systems are advertised as offering cost savings and efficiencies, among other benefits, but "their use also has the potential to perpetuate illegal bias, automate illegal discrimination, and produce other harmful results."
In their warning, the federal agencies said they are armed with legal authorities to combat the harmful use of automated systems and that they “take seriously” the responsibility of ensuring that the deployment of AI is consistent with federal law.
"We already see how artificial intelligence tools can fuel fraud and automate discrimination, and we will not hesitate to use the full reach of our legal authorities to protect Americans from these threats," FTC Chair Lina Khan said in a statement.
“There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”
The agencies explained that potential AI discrimination can come from automated systems that use unrepresentative or lopsided data sets or data sets that incorporate historical biases, as well as systems designed by developers based on faulty assumptions about their users, context or underlying practices. The inner workings of the systems are also not always made public, which creates a lack of transparency and an inability to know whether they are operating without bias.
Assistant Attorney General Kristen Clarke of the Justice Department’s Civil Rights Division warned social media platforms, banks, landlords, employers and other businesses that use AI and algorithms to make decisions that her agency stands ready to hold them accountable if they violate the law.
"This is a time for active engagement, and the Department of Justice will continue to work with our government partners to investigate, challenge and combat discrimination based on automated systems," she said in a statement.
The announcement comes about two weeks after Senate Majority Leader Chuck Schumer, D-N.Y., announced he was launching an effort to regulate AI to ensure the United States stays one step ahead of China in shaping how the technology develops.