Why Can’t Tech Companies Do the ‘Right’ Thing the First Time?

YouTube announced on Wednesday that it had demonetized the channel of one Steven Crowder, a right-wing commentator who has spent months hurling slurs like “lispy queer” and “anchor baby” at Vox video host Carlos Maza, a gay Latino. Whenever Crowder mentioned Maza in his content, Maza says, he faced “a wall of homophobic/racist abuse on Instagram and Twitter.” At one point Crowder’s fans even doxed Maza, finding his phone number and texting him repeatedly. So YouTube’s statement that Crowder was out for “a pattern of egregious actions” that “harmed the broader community” and violated the platform’s policies seemed, on its face, like a proportional response. Right? Well, right, but only if you don’t know that at first the company did absolutely nothing to help Maza.

At the end of May, Maza, fed up with Crowder’s long history of calling him out on his show, posted a compilation of clips of Crowder doing just that. Almost a week later, YouTube replied directly to Maza via its customer-service Twitter account, letting him know that it had reviewed “the videos flagged” but that the language, while “clearly hurtful,” wasn’t in violation of policy. “Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site,” @TeamYouTube tweeted. In another tweet, YouTube made sure to tell Maza that the platform hosting Crowder’s videos was not an endorsement of his opinions.

YouTube’s response was quickly lambasted by media outlets and by Maza for its lack of teeth. “I put myself out there and showed a bunch of people the abuse I had been experiencing. It was humiliating and degrading, and I can’t believe it STILL wasn’t enough to get YouTube to take it seriously,” Maza told Intelligencer. “I don’t know what LGBT people are supposed to do to get this company to actually protect us.” Not even a day later, YouTube course-corrected and made the demonetization announcement. Which, while maybe not a perfect way to handle the scenario, was a markedly less anemic reaction than YouTube’s initial response.

As Maza pointed out on Twitter, Crowder’s brand and income are not solely tied to YouTube ad revenue: On YouTube, Crowder has 3.8 million subscribers who are likely to buy his merchandise, notably his “Socialism Is for F*gs” shirts. YouTube responded that “in order to reinstate monetization on this channel, he will need to remove the link to his T-shirts.” [Brief pause here while I roll my eyes so far back in my head I am permanently blinded and unable to ever read another bad tweet.]

This second response could have been YouTube’s first response if the company had just taken more time. YouTube, in its initial tweets to Maza, said it was still investigating Crowder’s channel. A lot of grief could have been spared if YouTube had simply said, “Hey, we’re still looking into this.” (Maza told Intelligencer that no one from YouTube ever contacted him directly.) But also … this is par for the course with YouTube. Back when Logan Paul posted a video of an apparent suicide victim that he found hanging from a tree in Japan, the company’s first response was similarly nonresponsive. First it offered a nothing statement of apology to the family of the deceased and links to the National Suicide Prevention Lifeline. Later, after further deliberation, the platform announced that Paul would no longer be included in its Partner Program, the same one Crowder was booted from on Wednesday. (A month after that, YouTube temporarily suspended all ads on Paul’s videos after he filmed himself tasering a rat.) Paul, however, later indicated that he wasn’t worried about money, thanks to his merchandise line. His brand, built largely on YouTube, was strong enough to weather losing monetized ads. Crowder is in the same boat.

Doing as little as possible and hoping the public outcry isn’t big enough to warrant taking more of a stand is the norm not only at YouTube. On Wednesday morning, I woke up to an email from a fellow journalist working on a story about revenge porn. She’d discovered a Twitter account using a photo of me as its avatar. The account was purporting to be a 19-year-old girl and was selling nude photographs of “me.” The few tweets from the account were of semi-nude, headless women who are not me. I immediately reported the account to Twitter, explaining the situation. The platform responded saying the account technically wasn’t violating any of its policies — the avatar photo, for instance, didn’t contain nude or pornographic content — and would remain up. I tweeted about it immediately, complaining and tagging Twitter. Only after I did that, and after the tweet was shared by many accounts with significantly larger followings than my own, did Twitter send me a second email informing me the account had been suspended.

I’m lucky. I’m a journalist with a verified Twitter account, which means my tweets are algorithmically more likely to be seen. I have friends and contacts at Twitter. But if that weren’t my situation, if I were just a typical Twitter user who found her likeness being misrepresented in a degrading and alarming way that skirted Twitter’s terms of service, chances are the account would have remained online indefinitely.

Much of the moderation on tech platforms is automated or, if it does pass through human hands, is done by people, often contractors, so low down the corporate food chain that they aren’t in positions to effect real change or dictate policy. Which means the people — or computers — with the most firsthand contact with issues like these aren’t, ultimately, the ones who decide how to handle them. Instead, the decisions are made by people who appear to lack a full understanding of situational nuance — YouTube is still tweeting, trying to clarify what it meant by that inane tweet about Crowder’s “Socialism Is for F*gs” shirts — and who make knee-jerk decisions in the hope that doing as little as possible will be enough to quell flak. These companies could opt to take the time to make these decisions. They could opt to have their secondary reactions be their first. But they don’t. Because for every time things blow up in their faces, there are likely many more instances we don’t hear about, because the strategy works: the initial non-reaction silences people. Companies like YouTube and Twitter have, until now, always been able to afford that gamble. It’s possible their luck is finally running out.