


ImageNet Roulette Shows How AI Stereotypes Humans


While I have come across plenty of AI and AI-related applications making a positive impact in the real world, I'm afraid ImageNet Roulette is definitely not one of them. Rather, it exposes the dark side of AI. The model is currently being showcased as part of the Training Humans exhibition by Trevor Paglen and Kate Crawford at the Fondazione Prada Museum in Milan.

"ImageNet Roulette is a incitement designed to help us see into the ways that humans are sorted in machine learning systems. It uses a neural net trained on the "Person" categories from the ImageNet dataset which has over 2,500 labels used to classify images of people.", describes the creators.

The system uses the open-source Caffe deep learning framework, trained on the images and labels that belong to the "person" category. ImageNet Roulette first detects faces in the input image. If it finds one, the image is sent to the Caffe model for classification. If not, the original image gets returned along with a label in the upper left corner.
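To make that flow a bit more concrete, here is a minimal sketch of the detect-then-classify pipeline described above. This is not ImageNet Roulette's actual code: OpenCV's Haar cascade stands in for the face detector, and classify_person() is a hypothetical placeholder for the Caffe model trained on the "person" categories.

```python
# Sketch of a detect-then-classify pipeline (assumptions: OpenCV face
# detection, placeholder classifier instead of the real Caffe model).
import cv2


def classify_person(image_bgr):
    """Hypothetical stand-in for the 'person' classifier.

    A real implementation would forward the crop through the trained
    network and return the top label.
    """
    return "label-from-person-classifier"


def roulette_style_label(path):
    image = cv2.imread(path)
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) > 0:
        # A face was found: classify each face crop and draw its label.
        for (x, y, w, h) in faces:
            label = classify_person(image[y:y + h, x:x + w])
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(image, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    else:
        # No face found: classify the whole image and put the label
        # in the upper left corner, as described above.
        label = classify_person(image)
        cv2.putText(image, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return image
```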

The categories are drawn from WordNet, and they contain a good deal of offensive labels that can be racist and even misogynistic at times. "We want to shed light on what happens when technical systems are trained on problematic training data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong," say the creators.

As you can see below, the neural network describes the sample image of Leonardo DiCaprio as "pretender", "dissembler", "dissimulator", and a couple more offensive words. The worst part, however, is the description it provided: "a person who professes beliefs and opinions that he or she does not hold in order to conceal his or her real feelings or motives".


In fact, ImageNet Roulette sets an ideal example of what an AI model should not be, and of how much worse things could get if the wrong dataset is used for training AI-based models. So, what are your thoughts on ImageNet Roulette? Let us know in the comments.

Source: https://beebom.com/ai-stereotyping-humans-imagenet-roulette/

