How did Facebook become a pit of trolls and misinformation? You used to be able to trust the people on the social network, and therefore the social network itself, according to recent comments by Facebook's VP of public policy. But when people no longer had to tie their online and offline identities together, the digital versions were unleashed. That's when many of the problems started.
At first, only students who had an elite college email address could join Facebook. That connected their profile to their IRL identity. They had to play by the rules because they couldn't just set up a new email address to get another account. There was a social incentive too. It'd be harder for friends to find them if they didn't use the main email address others had in their address books.
At the time, there was a sense that Facebook was actually more real than reality because it was a place just for peers. Facebook hit Stanford like a tidal wave my freshman year, signing up nearly everyone in a matter of weeks. Because you were only visible to the rest of your college network, not to parents or employers, people weren't shy about sharing the more expressive, silly, or immature parts of their life. But they also treated each other with a level of respect because you knew everyone on there was an actual person, and that your actions impacted perceptions of your identity beyond the edges of the screen. Being a jerk on Facebook could get you ostracized from the relatively tightly-knit college community.
It didn't help that, compared to other online social products like the Wild West of Myspace or the unlit alleyways of random forums, Facebook looked downright civil and dignified. All of this led Facebook to neglect exercising the muscles of transparent policy, consistent moderation, and predictive safeguarding.
When Facebook opened up to the public in 2006, 2.5 years after launch, it never built a strong enough system of accountability to replace authentication through finite, genuine college email addresses. The company seemed to think people would be on their best behavior simply of their own accord, absent the technical and communal incentives. That idealistic shortsightedness has become central to many of its worst mistakes. Facebook's leaders saw the best of humanity and how it'd manifest on their product, not the worst-case scenarios.
Then in 2009 it began recommending everyone set their photos and status updates to be public, further eroding the sense of intimacy that drives accountability, with little concern for how its digital soapbox could be abused. An effort to beat Twitter at news and article sharing in the mid-2010s awarded Likes to those sharing increasingly shocking and polarized stories, regardless of their accuracy. Hyper-partisan Pages emerged to capitalize on the News Feed's thirst for what stoked our divisions. Facebook had flipped from rewarding civil community participation to rewarding sensational community exploitation.
This idea that Facebook was ill-prepared to operate entirely in public was crystallized by statements Elliot Schrage made at the Social 2030 conference in March, just before quarantine started. He's Facebook's VP of comms and public policy who promised to step down in mid-2018 in the wake of the Cambridge Analytica scandal but is still with the company.
"It got started . . . as a college service where authenticity was determined by your college email address. And so the idea was that individuals would be authentic as a result of the identity they had brought online from their real-world existence. They would be accountable through their authenticity, and authenticity would be adequate to provide guardrails for expression in the establishment of community norms" Schrage said at the conference.
What was left to drive accountability and authenticity after Facebook dropped the .edu requirement did not prove adequate. If a user has their profile or Page terminated for inappropriate conduct, they can just start another one. All that's at risk is their friend or follower count. That might be sufficient to deter malicious or insensitive activity by long-standing, rule-abiding users. But if accounts are created for the explicit purpose of abuse, this doesn't amount to much of a threat, as their owners are unlikely to have invested enough time building up a network for its loss to be a significant punishment.
Zooming out, this mirrors the long-term arc of the internet. It arose as a communication tool for academic institutions. These stable, prestige-focused communities of researchers provided built-in offline consequences for abusing the network. What wasn't a big problem from the start didn't get the attention to be properly fixed with core, scalable safeguards.
Facebook's contemporaries didn't push it to compete on moderation either. YouTube's seemingly innocuous start in viral videos gave way to comment-section cesspools, and its algorithm became a radicalizing misinformation machine. Twitter's in-group origins amongst tech early adopters and journalists, and the stereotype that it was about sharing what you had for breakfast, gave way to legions of anonymous bots and weak enforcement around harassment. Subsequent social apps without deep enough network effects or utility to offset these shortcomings buckled under the weight of their toxic communities. For example, anonymous apps like Secret and Yik Yak that ignored the initial red flags couldn't sufficiently bolt on safety and moderation, and eventually shut down.
None of these companies fully foresaw the problems, or established adequate protections or positive norms. They all propagated the status quo of a hands-off approach to moderation, allowing Facebook to coast. Each faulty social network provided air cover for the others to fail upwards at safety.
So if the best time to build in accountability, authenticity, and safety is at inception, the second best is right now. There are plenty of opportunities to heap friction on bad actors to discourage problematic behavior:
Require a unique phone number for registration or access to harassment vectors such as the ability to reply to strangers, as I've suggested for Twitter. While not totally finite, phone numbers could stand in for .edu email addresses by at least making it more cumbersome to spin up new troll accounts after being banned.
Make business or media accounts such as Facebook Pages pay a behavioral security deposit, scaled to local buying power, that they forfeit if the Page is suspended for policy violations, but that's refunded if they voluntarily delete their account or after a sustained period of good behavior. This mirrors the 'staking' concept in cryptocurrency, where validators lose currency they've put up as a deposit if they act fraudulently against the health of the network. Some early forums like Something Awful used a similar system.
Reduce the visibility of reshares of debunked or low-legitimacy content, and of those who reshare it. If your crazy uncle wants to keep posting verifiably false conspiracy theories, they shouldn't be algorithmically amplified, and his followers should have to visit his account to see them. This lets social networks maintain a degree of free speech without dispensing free reach.
Verify the identities of more large Group moderators and heavily-followed accounts, not just Pages and Advertisers, to ensure they’re authentic.
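The security-deposit idea from the list above is concrete enough to sketch in code. This is a minimal illustration of the mechanism, not anything Facebook has built; every name and number here (the class, the 365-day grace period) is a hypothetical assumption:

```python
from dataclasses import dataclass

@dataclass
class BehavioralDeposit:
    """Hypothetical 'stake' a Page posts at creation, scaled to local buying power."""
    amount: float          # deposit in local currency
    clean_days: int = 0    # consecutive days without a policy violation
    forfeited: bool = False

    GRACE_DAYS = 365       # assumed streak of good behavior before auto-refund

    def suspend_for_violation(self) -> None:
        # A policy-violation suspension forfeits the deposit permanently.
        self.forfeited = True

    def payout(self, voluntary_deletion: bool = False) -> float:
        # Refund in full on voluntary deletion or after the grace period,
        # but never after a forfeiture.
        if self.forfeited:
            return 0.0
        if voluntary_deletion or self.clean_days >= self.GRACE_DAYS:
            return self.amount
        return 0.0
```

The key design property is asymmetry: a rule-abiding Page eventually gets its money back, so the deposit is only a cost to accounts created to abuse the platform.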
Thankfully, newer social networks are taking a harder line on this, building accountability into their policies and products earlier. Snapchat, for instance, curates its premium content rather than offering an open platform, and has taken a stand against figures promoting violence, including Trump, by declining to promote them on its Discover page. Safety-as-a-service providers like Hive and L1ght offer powerful moderation capabilities for companies that don't want to build the function in-house. And founders of newer social networks I've spoken to seem perfectly happy to wield the ban hammer without hesitation, while pushing their communities to be overtly inclusive of new members and underrepresented minorities.
Regardless of the specific methods, what's clear is that moderation and accountability on social networks require investment. Perhaps these kinds of apps aren't as wildly profitable and fast-growing as they initially appeared once you factor in the cost of protecting the vulnerable, or reduce user counts by eliminating legions of bots. Facebook at least now seems cognizant of the problem and has made quantifiable pledges to increase moderation headcount and spending, unlike its peers (though they could all do more to ensure the mental health of these teams).
But the financial burden of civility for social networks seems like a reasonable tax given they merely aggregate the creativity and attention of everyone else. The heart of a social networking product is nothing but the chrome you share into and whatever keeps that space safe.
App Of The Week: HAGS - The Snapchat Yearbook
High school seniors got robbed of their grand finale — those last days celebrating and signing yearbooks with “Have a great summer”. So 22-year-old Suraya Shivji, her 18-year-old brother Jameel who just graduated, and 19-year-old James Dale turned the acronym into an app called HAGS that lets friends trade signatures through a SnapKit-powered digital yearbook. High schoolers log in with their Snapchat account, browse friends on Class Pages for their school, use the clever “sign to unlock” feature to give and get signatures, and ask for them by posting a sticker to Snapchat.
I love HAGS’s handwritten fonts, page-turn animations, and imperfect doodle design style that make it feel like it was surreptitiously sketched in the back of a classroom. They give it much more personality than the competing Saturn Yearbook from high school calendar and collaboration app Saturn. Sign to unlock drives playful virality while accurately mimicking the offline ritual. HAGS could have monetized by selling you a physical version of your yearbook, but is instead donating all the proceeds to Know Your Rights Camp, which puts on education and self-empowerment events for Black and brown communities.
Suraya, who has interned at Apple, Spotify, and Figma, tells me “Our greater vision includes a network based on a high school student’s unity with their class.” I’m excited to see more purpose-built apps like HAGS emerge atop Snapchat’s teen operating system and make use of the quarantine user loan while it lasts. Though the mobile app market has matured and made discovery difficult, there’s never been a better time to grow a product dedicated to the Gen Z market.
Get ready for the rise of “Hospital At Home” care, where telemedicine and nurse visits can replace some expensive hospital stays and free up healthcare resources for COVID-19 - from UCSF’s coronavirus influencer/doctor Bob Wachter’s weekly update
Mod culture, where indie developers alter existing video games to create new ones, has birthed Counter-Strike and the Fortnite-popularized Battle Royale format. Quick distribution and cheap experimentation position modding as an increasingly popular creative space - according to Pace Capital’s Chris Paik
Meme Of The Week: Stop doomscrolling
Feedback? I’m still tinkering with the format of the newsletter, and considering shorter, more frequent updates rather than burying too much beyond the main essay. Reply with your thoughts! And if you liked it, fwd it! Thanks to Bobby Goodlatte, Jason Prado, and Ronen V for ideas that helped refine the Facebook essay up top. Best of luck with *motions to everything*