TECH NEWS – An artificial intelligence recommendation system asked Facebook users who had watched a newspaper video featuring black men whether they wanted to "keep seeing videos about primates".
Facebook told BBC News that this was a "clearly unacceptable error", disabled the system and launched an investigation. "We apologise to anyone who may have seen these offensive recommendations," the company said. It is the latest in a string of errors that have raised concerns about racial bias in artificial intelligence.
In 2015, the Google Photos app labelled images of black people as "gorillas". The company said it was "appalled and genuinely sorry", although Wired reported in 2018 that it had simply blocked photo searches and tags for the word "gorilla". Then, in May, Twitter admitted that its "saliency algorithm", which crops image previews, showed racial bias. Studies have also found bias in the algorithms behind some facial recognition systems.
In 2020, Facebook announced a new "inclusive product council" – and a new equity team at Instagram – which would, among other things, examine whether its algorithms show racial bias. IBM, for its part, chose to abandon facial recognition technology it deemed "biased". Are Google and the other big tech companies "institutionally racist"? Probably not.
The "primates" recommendation was "an algorithmic error on Facebook's part" and did not reflect the content of the video, a representative told BBC News. The company added: "As soon as we realised this was happening, we immediately disabled the entire topic recommendation feature to investigate the cause and prevent it from happening again. As we have said, although we have improved our AI, we know it is not perfect and we still have progress to make."
Source: BBC News