A study by Michal Kosinski and Yilun Wang of Stanford University in the US has shown that a machine running specific algorithms can infer whether a person is gay or straight from a photograph of their face.
The study, first reported in The Economist, showed that the machine can use face detection to analyse people’s faces and pick out subtle differences in facial structure.
The algorithm was tested on pictures of 36 630 men and 38 593 women.
The pictures were taken from online dating profiles of homosexual and heterosexual people.
Speaking to The Economist the researchers said, “The algorithm was able to detect differences in facial structures that may relate to the level of hormones such as testosterone that foetuses are exposed to in the womb, which may determine sexuality.”
The results of the test were astounding. The algorithm was able to guess a man’s sexual orientation correctly 81% of the time and a woman’s 74% of the time.
According to the study, humans guessed correctly only 61% of the time for men and 54% for women.
However, revolutionary as this new technology may be, the LGBTQA+ community has not received it well, and several dangers of using such a machine have been pointed out.
Jim Halloran, chief digital officer at GLAAD – the world’s largest LGBTQ media advocacy organisation – told Mashable he doesn’t believe technology can correctly tell a person’s sexuality.
“What their technology can recognise is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings shouldn’t be conflated,” he told Mashable.
The Human Rights Campaign (HRC) also disapproved of the technology.
The HRC’s director of public education and research told the Washington Post, “Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay.”
“Stanford should distance itself from such junk science,” he said.
In response to these comments, Kosinski and Wang said that although they could be wrong, dismissal of the study by organisations such as GLAAD and the HRC could put the very people they are trying to advocate for at risk.
Glad to see that our work inspired debate. Your opinion would be stronger, have you read the paper and our notes: https://t.co/dmXFuk6LU6 pic.twitter.com/0O0e2jZWMn
— Michal Kosinski (@michalkosinski) September 8, 2017
Several LGBTQA+ groups pointed out limitations of the study: the technology was tested on only one race, and it was limited to just two sexual orientations – gay and straight – even though people may also be bisexual, transgender or asexual.
Let me be clear: technology cannot identify someone's sexual orientation. https://t.co/iibXZZgo1M
— Jim Halloran (@jimhalloran) September 11, 2017
The study has also drawn harsh criticism from many LGBTQA+ advocates.
I consider myself peaceful but think researchers @michalkosinski & @simonywang should be burned alive for this.https://t.co/F2hDXXwLpJ
— Tim Wayne (@redtimmy) September 10, 2017
#AI #IoT #ML #DL Why, exactly, would anyone want to use AI to decide whether I’m gay or straight? |… https://t.co/KMcDHvHXol @theguardian pic.twitter.com/1iyYxZsWtr
— Bonnie Robinson (@BonnieFRobinson) September 12, 2017
Why is this even a thing? WHO CARES?? #youdoyoubooboo
— Meag (@gijane74d) September 8, 2017
Sources: The Economist, Mashable, The Guardian, Washington Post