
Racial bias found in facial recognition tools: US government study

WION Web Team
Washington, United States | Updated: Dec 20, 2019, 06:01 PM IST
In this file photo, a representative image of Artificial Intelligence can be seen. Photograph: (Twitter)

Story highlights

The study found that the American-Indian category showed the highest rate of 'false positives'.

According to a US government study, facial recognition systems can produce inaccurate results, especially for non-whites.

The study by the National Institute of Standards and Technology (NIST) identified higher "false positive" rates for Asian and African-American faces. It also found that African-American women are more likely to be misidentified.

NIST said it tested 189 algorithms from 99 developers, including companies like Microsoft; however, tech giants Facebook, Amazon, Apple and Google did not submit their algorithms for scrutiny. The study measured "false positives", where an individual is mistakenly identified, and "false negatives", in which the algorithm fails to match a face to one in the database.

The revelations come even as facial recognition technology is being increasingly deployed by police in the US to identify criminal suspects, and is also used in airport security, banking, online retail and mobile unlocking systems.

The study said Microsoft's algorithm produced almost ten times more "false positives" for women of colour than for men of colour in one-to-many tests.

The study said it found the American-Indian category showed the highest rate of "false positives", with black women likely to be falsely identified almost 35 per cent of the time. It said "false positive" rates for Asians and African-Americans were as much as a hundred times higher than for whites.