Major Social Media Sites Are Failing LGBTQ+ People: Report
Major social media companies are failing to protect LGBTQ+ users from hate speech and harassment, according to a new report from GLAAD out Tuesday.
The Social Media Safety Index report highlights how major platforms either do not have policies to protect user data or fail to enforce them; refuse to safeguard users against online hate; and can’t or won’t stop the proliferation of harmful stereotypes and disinformation about LGBTQ+ people.
Now in its fourth year, the report ranked six social media platforms, including Meta’s Facebook, Instagram and Threads, as well as TikTok, YouTube and X (formerly Twitter), on 12 different criteria. Among these metrics were whether each company has explicit policies to protect trans, nonbinary and gender-nonconforming users against deadnaming and misgendering; has options for users to add their pronouns to profiles; protects legitimate LGBTQ+-related advertisements; and tracks and discloses violations of LGBTQ+ inclusivity policies.
GLAAD found social media companies miss the mark on almost all of these metrics — and allow harmful rhetoric to proliferate on their platforms, even as they rake in billions in advertising profits.
Almost all of the platforms received an F rating and a corresponding percentage. TikTok, however, received a D+, a slight improvement from last year’s rating, because it recently adopted a policy to block advertisers from targeting users based on their sexual orientation or gender identity.
Although many of these social media companies currently have policies that on paper seem to protect LGBTQ+ users, the report notes that the platforms do little to actually stop the spread of harmful and false information.
For example, X, which received the lowest rating by percentage, has seen a sharp uptick in misinformation about LGBTQ+ people from “anti-LGBTQ” influencers. The Libs of TikTok account, run by Chaya Raichik, is known for posting misinformation about gender-affirming care and for equating LGBTQ+ people with “groomers” and “pedophiles.” Schools, gyms, and children’s hospitals singled out by the account have been the subject of dozens of reported bomb threats.
Elon Musk, the owner of X, has also promoted anti-trans content from Raichik and others, including posts that have praised restrictions on trans women participating in sports. Republican legislators, who have introduced a record number of anti-LGBTQ bills in statehouses across the country annually since 2020, have similarly amplified and promoted anti-LGBTQ+ sentiment on social media.
“There is a direct line from dangerous online rhetoric and targeting to violent offline behavior against the LGBTQ community,” Sarah Kate Ellis, GLAAD’s CEO, wrote in the report.
Though X has been one of the biggest platforms for anti-LGBTQ+ rhetoric, it took in only $2.5 billion in advertising revenue in 2023. Meta — which has allowed posts equating trans people to “terrorists,” “perverts,” and the “mentally ill” to remain on its platforms — generated $134 billion in revenue last year.
Social media companies have also targeted legitimate LGBTQ+ content and made their platforms less safe and accessible to LGBTQ+ users, the report says.
The report notes one instance from March of this year, when the nonprofit Men Having Babies shared a photo of two gay dads and their newborn child in an Instagram post. Soon after posting, the organization saw that the platform had flagged Men Having Babies’ post as “sensitive content” that may “contain graphic or violent content.”
That label is typically used to “mitigate extreme content,” Leanna Garfield, GLAAD’s social media safety program manager, told Pink News earlier this year. “That shouldn’t include something as innocuous as a photo of two fathers with their newborn.”
Increased use of artificial intelligence tools for content moderation could lead to LGBTQ+ posts being targeted even more. An investigation by Wired in April found that AI systems like OpenAI’s Sora displayed biases in their depictions of queer people.
Companies like Facebook have at times relied “exclusively” on automated systems to review content, forgoing any human review in the process, Axios reported last year. A GLAAD report released around the same time said this practice was “gravely concerning” and could jeopardize the safety of all users, including those who are LGBTQ+.
The new GLAAD report claims that other tech companies, which the report did not name, have created “automated gender recognition” technology that purports to predict a person’s gender in order to better sell products through targeted ads. But privacy advocates have warned that these technologies could be taken a step further, to try to categorize and surveil people in gendered or sex-segregated spaces like bathrooms and locker rooms.
Some countries and regions, like the European Union, have adopted restrictions on AI and have regulated social media platforms’ practices, but the United States has lagged. The GLAAD report recommends that platforms strengthen and enforce their current policies to protect LGBTQ+ people — including by stopping advertisers from targeting LGBTQ+ users and by improving content moderation without simply automating it.