IBM has been accused of using Flickr photos without consent for a facial recognition project.
IBM is said to have extracted up to one million photos from a collection of Flickr images for the project without the consent of the people pictured.
The photos were taken from a dataset of Flickr images originally compiled by Yahoo, known as YFCC100M, which comprises 99.2 million photographs in total.
NBC reported that many of the people photographed were unaware that their images had been annotated with facial recognition details, which could allow the photos to be used to train algorithms.
A photographer told NBC News: "None of the people I photographed had any idea their images were being used in this way."
IBM explained that it needed a dataset of this size to ensure "fairness" and prevent bias against certain groups.
IBM defended its decision in a statement, saying: "We take the privacy of individuals very seriously and have taken great care to comply with privacy principles.
"Individuals can opt out of this dataset."
However, an individual could only opt out if they were aware that their data had been used in the first place.
Privacy International, a UK-based charity that advocates for privacy rights, condemned IBM's use of the photos, saying: "Flickr's community guidelines explicitly say, 'Don't be creepy.' Unfortunately, IBM has gone far beyond this.
"Using these photos in this way is a flagrant breach of anti-creepiness - as well as a huge threat to people's privacy."