The AI Now Institute at New York University (NYU) has released a report urging lawmakers to set limits on AI, especially in the use of emotion-detecting technology.
The AI Now Institute, based at NYU, studies AI’s impact on society.
AI Now releases a yearly report on the state of AI research and the ethical implications of how AI is currently being used. It said action against 'affect recognition' was its top priority because the science cannot justify the technology's use and there is still time to stop widespread adoption.
The report argues that the institutions deploying affect recognition could wield the data to make decisions such as “who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school.”
The ethics of 'reading' a person's mind are already questionable, but there are also concerns the technology simply doesn't work. ProPublica found that “aggression detectors” developed by Sound Intelligence, which have been deployed in schools, prisons, hospitals, and banks, read coughs as a sign of aggression.
There are also real concerns the technology is racially biased. The AI Now report highlights a study that ran a set of basketball players’ photos through Face++ and Microsoft’s Face API, both of which assigned black players more negative emotional scores than all other players. According to the study, Face++ rated black players as more “aggressive,” and Microsoft’s Face API classified them as showing more “contempt,” even when they were smiling.
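To make concrete what “emotional scores” means here, the sketch below shows roughly how a photo would be submitted to a cloud face-analysis service and what the per-emotion scores look like, in the style of the Face API the study used. The endpoint host, subscription key, and image URL are placeholders, and Microsoft has since restricted emotion attributes in the Face API, so treat this as an illustration of the request and response shape rather than a working recipe.

```python
# Illustrative sketch only: placeholders below are not values from the study.
import requests

FACE_ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
SUBSCRIPTION_KEY = "<your-key>"                                       # placeholder
IMAGE_URL = "https://example.com/player-photo.jpg"                    # placeholder


def detect_emotions(image_url: str) -> list[dict]:
    """Request emotion attributes for every face found in a remote image."""
    response = requests.post(
        f"{FACE_ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for face in detect_emotions(IMAGE_URL):
        # Each detected face carries scores between 0.0 and 1.0 for emotions
        # such as "anger", "contempt", "happiness", and "neutral"; the study
        # compared these scores across players' photos.
        print(face["faceAttributes"]["emotion"])
```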
“There remains little to no evidence that these new affect-recognition products have any scientific validity,” says AI Now.