Broken Code: Understanding the Poisoned Facebook

by Jeff Horwitz

Arturo Bejar’s return to Facebook’s Menlo Park campus in 2019 felt like coming home. The campus was bigger than when he’d left in 2015—Facebook’s staff doubled in size every year and a half—but the atmosphere hadn’t changed much. Engineers rode company bikes between buildings, ran laps on a half-mile trail through rooftop gardens, and met in the nooks of cafés that gave Facebook’s yawning offices a human scale.

Bejar was back because he suspected something at Facebook had gotten stuck. In his early years away from the company, as bad press rained down upon it and then accumulated like water in a pit, he’d trusted that Facebook was addressing concerns about its products as best it could. But he had begun to notice things that seemed off, details that made it seem like the company didn’t care about what its users experienced.

Facebook CEO Mark Zuckerberg looks down as a break is called during his testimony before a joint hearing of the Commerce and Judiciary Committees in April 2018. (Andrew Harnik / AP Photo)

Bejar couldn’t believe that was true. Approaching fifty, he considered his six years at Facebook to be the highlight of a tech career that could only be considered charmed. He’d been a Mexico City teenager writing computer games for himself in the mid-1980s when he’d gotten a chance introduction to Apple co-founder Steve Wozniak, who was taking Spanish lessons in Mexico.

After a summer being shown around by a starstruck teenage tour guide, Wozniak left Bejar an Apple computer and a plane ticket to come visit Silicon Valley. The two stayed in touch, and Wozniak paid for Bejar to earn a computer science degree in London.

“Just do something good for people when you can,” Wozniak told him.

Success followed. After working on a visionary but doomed cybercommunity in the 1990s, Bejar spent more than a decade as the “Chief Paranoid” in Yahoo’s once-legendary security division. Mark Zuckerberg hired him as a Facebook director of engineering in 2009 after an interview held in the CEO’s kitchen.

Though Bejar’s expertise was in security, he’d embraced the idea that safeguarding Facebook’s users meant more than just keeping out criminals. The platform still had its bad guys, but the engineering work it required was as much social dynamics as code.

Early in his tenure, Sheryl Sandberg, Facebook’s chief operating officer, asked Bejar to get to the bottom of skyrocketing user reports of nudity. His team sampled the reports and saw they were overwhelmingly false. In reality, users were encountering unflattering photos of themselves, posted by friends, and attempting to get them taken down by reporting them as porn. Simply telling users to cut it out didn’t help. What did was giving users the option to report not liking a photo of themselves, describing how it made them feel, and then prompting them to share that sentiment privately with their friend.

Nudity reports dropped by roughly half, Bejar recalled.

A few such successes led Bejar to create a team called Protect and Care. A testing ground for efforts to head off bad online experiences, promote civil interactions, and help users at risk of suicide, the work felt both groundbreaking and important. The only reason Bejar left the company in 2015 was that he was in the middle of a divorce and wanted to spend more time with his kids.

Though he was away from Facebook by the time the company’s post-2016 election scandals started piling up, Bejar’s six years there instilled in him a mandate long embedded in the company’s official code of conduct: “assume good intent.” When friends asked him about fake news, foreign election interference, or purloined data, Bejar stuck up for his former employer. “Leadership made mistakes, but when they were given the information they always did the right thing,” he would say.

But, truth be told, Bejar didn’t think about Facebook’s travails all that much. Having joined the company three years before its IPO, he had no financial worries, and he kept busy with nature photography, a series of collaborations with the composer Philip Glass, and restoring cars with his daughter Joanna, who at fourteen wasn’t yet old enough to drive. She documented their progress restoring a Porsche 914—a 1970s model derided for having the aesthetics of a pizza box—on Instagram, which Facebook had bought in 2012.

Joanna’s account became moderately successful, and that’s when things got a little dark. Most of her followers were enthused about a girl getting into car restoration, but some showed up with rank misogyny, like the guy who told Joanna she was getting attention “just because you have tits.”

“Please don’t talk about my underage tits,” Joanna Bejar shot back before reporting the comment to Instagram. A few days later, Instagram notified her that the platform had reviewed the man’s comment. It didn’t violate the platform’s community standards.

Bejar, who had designed the predecessor to the user-reporting system that had just shrugged off the sexual harassment of his daughter, told her the decision was a fluke. But a few months later, Joanna mentioned to Bejar that a kid from a high school in a neighboring town had sent her a picture of his penis via an Instagram direct message. Most of Joanna’s friends had already received similar pics, she told her dad, and they all just tried to ignore them.

Bejar was floored. The teens exposing themselves to girls they had never met were creeps, but they presumably weren’t whipping out their dicks when they passed a girl in a school parking lot or in the aisle of a convenience store. Why had Instagram become a place where it was accepted that these boys occasionally would—or that young women like his daughter would have to shrug it off?

Bejar’s old Protect and Care team had been renamed and reshuffled after his departure, but he still knew plenty of people at Facebook. When he began peppering his old colleagues with questions about the experience of young users on Instagram, they responded by offering him a consulting agreement. Maybe he could help with some of the things he was concerned about, Bejar figured, or at the very least answer his own questions.

That was how Arturo Bejar found himself back on Facebook’s campus. Eager and highly animated—Bejar’s reaction to learning something new and interesting is a gesture meant to evoke his head exploding—he had unusual access due to his easy familiarity with Facebook’s most senior executives. Dubbing himself a “free-range Mexican,” he began poring over internal research and setting up meetings to discuss how the company’s platforms could better support their users.

The mood at the company had certainly darkened in the intervening four years. Yet, Bejar found, everyone at Facebook was just as smart, friendly, and hardworking as before, even if no one still thought that social media was pure upside. The company’s headquarters—with its free laundry service, cooked-to-order meals, and on-site gym, recreation, and medical facilities—remained one of the world’s best working environments. It was, Bejar felt, good to be back.

That nostalgia probably explains why it took him several months to check in on what he considered his most meaningful contribution to Facebook—the revamp of the platform’s system for reporting bad user experiences.

It was the same impulse that had led him to avoid setting up meetings with some of his old colleagues from the Protect and Care team. “I think I didn’t want to know,” he said.

Bejar was at home when he finally pulled up his team’s old system. The carefully tested prompts that he and his colleagues had composed—asking users to share their concerns, understand Facebook’s rules, and constructively work out disagreements—were gone. Instead, Facebook now demanded that people allege a precise violation of the platform’s rules by clicking through a gauntlet of pop-ups. Users determined enough to complete the process arrived at a final screen requiring them to reaffirm their desire to submit a report. If they simply clicked a button saying “done,” rendered as the default in bright Facebook blue, the system archived their complaint without submitting it for moderator review.

What Bejar didn’t know then was that, six months prior, a team had redesigned Facebook’s reporting system with the specific goal of reducing the number of completed user reports so that Facebook wouldn’t have to bother with them, freeing up resources that could otherwise be invested in training its artificial intelligence–driven content moderation systems. In a memo about efforts to keep the costs of hate speech moderation under control, a manager acknowledged that Facebook might have overdone its effort to stanch the flow of user reports: “We may have moved the needle too far,” he wrote, suggesting that perhaps the company might not want to suppress them so thoroughly.

The company would later say that it was trying to improve the quality of reports, not stifle them. But Bejar didn’t have to see that memo to recognize bad faith. The cheery blue button was enough. He put down his phone, stunned. This wasn’t how Facebook was supposed to work. How could the platform care about its users if it didn’t care enough to listen to what they found upsetting?

There was an arrogance here, an assumption that Facebook’s algorithms didn’t even need to hear about what users experienced to know what they wanted. And even if regular users couldn’t see that like Bejar could, they would end up getting the message. People like his daughter and her friends would report horrible things a few times before realizing that Facebook wasn’t interested. Then they would stop.

When Bejar next stepped onto Facebook’s campus, he was still surrounded by smart, earnest people. He couldn’t imagine any of them choosing to redesign Facebook’s reporting features with the goal of tricking users into depositing their complaints in the trash. But clearly they had.

“It took me a few months after that to wrap my head around the right question,” Bejar said. “What made Facebook a place where these kinds of efforts naturally get washed away, and people get broken down?”

***

Unbeknownst to Bejar, a lot of Facebook employees had been asking similar questions. As scrutiny of social media ramped up from without and within, Facebook had accumulated an ever-expanding staff devoted to studying and addressing a host of ills coming into focus.

Broadly referred to as integrity work, this effort had expanded far beyond conventional content moderation. Diagnosing and remediating social media’s problems required not just engineers and data scientists but intelligence analysts, economists, and anthropologists. This new class of tech workers had found themselves up against not just outside adversaries determined to harness social media for their own ends but senior executives’ beliefs that Facebook usage was by and large an absolute good. When ugly things transpired on the company’s namesake social network, these leaders pointed a finger at humanity’s flaws.

Staffers responsible for addressing Facebook’s problems didn’t have that luxury. Their jobs required understanding how Facebook could distort its users’ behavior—and how it was sometimes “optimized” in ways that would predictably cause harm. Facebook’s integrity staffers became the keepers of knowledge that the outside world didn’t know existed and that their bosses refused to believe.

As a small army of researchers with PhDs in data science, behavioral economics, and machine learning was probing how their employer was altering human interaction, I was busy grappling with far more basic questions about how Facebook worked. I had recently moved back to the West Coast to cover Facebook for the Wall Street Journal, a job that came with the unpleasant necessity of pretending to write with authority about a company I did not understand.

Still, there was a reason I wanted to cover social media. After four years of investigative reporting in Washington, the political accountability work I was doing felt pointless. The news ecosystem was dominated by social media now, and stories didn’t get traction unless they appealed to online partisans. There was so much bad information going viral, but the fact-checks I wrote seemed less like a corrective measure than a weak attempt to ride bullshit’s coattails.

Covering Facebook was, therefore, a capitulation. The system of information sharing and consensus building of which I was a part was on its last legs, so I might as well get paid to write about what was replacing it.

The surprise was how hard it was to even figure out the basics. Facebook’s public explainers of the News Feed algorithm—the code that determined which posts were surfaced before billions of users—relied on phrases like “We’re connecting you to who and what matters most.” (I’d later learn there was a reason why the company glossed over the details: focus groups had concluded that in-depth explanations of News Feed left users confused and unsettled—the more people thought about outsourcing “who and what matters most” to Facebook, the less comfortable they got.)

In a nod to its immense power and societal influence, the company created a blog called Hard Questions in 2017, declaring in its inaugural post that it took “seriously our responsibility—and accountability—for our impact and influence.” But Hard Questions never delved into detail, and after a couple of bruising years of public scrutiny, the effort was quietly abandoned.

By the time I started covering Facebook, the company’s reluctance to field reporters’ queries had grown, too. Facebook’s press shop—a generously staffed team of nearly four hundred—had a reputation for being friendly, professional, and reluctant to answer questions. I had plenty of PR contacts, but nobody who wanted to tell me how Facebook’s “People You May Know” recommendations worked, which signals sent controversial posts viral, or what the company meant when it said it had imposed extraordinary user-safety measures amid ethnic cleansing in Myanmar. The platform’s content recommendations shaped what jokes, news stories, and gossip went viral across the world. How could it be such a black box?

The resulting frustration explains how I became a groupie of anyone who had a passing familiarity with Facebook’s mechanics. The former employees who agreed to speak to me said troubling things from the get-go. Facebook’s automated enforcement systems were flatly incapable of performing as billed. Efforts to engineer growth had inadvertently rewarded political zealotry. And the company knew far more about the negative effects of social media usage than it let on.

This was wild stuff, far more compelling than the perennial allegations that the platform unfairly censored posts or favored President Trump. But my ex-Facebook sources couldn’t offer much in the way of proof. When they’d left the company, they’d left their work behind Facebook’s walls.

I did my best to cultivate current employees as sources, sending hundreds of notes that boiled down to two questions: How does a company that holds sway over billions of people actually work? And why, so often, does it seem like it doesn’t?

Other reporters did versions of this too, of course. And from time to time we obtained stray documents indicating that Facebook’s powers, and problems, were greater than it let on. I had the luck of being there when the trickle of information became a flood.

A few weeks after the 2020 election, Frances Haugen, a mid-level product manager on Facebook’s Civic Integrity team, responded to one of my LinkedIn messages. People needed to understand what was going on at Facebook, she said, and she had been taking some notes that she thought might be useful in explaining it.

Haugen was nervous about saying anything further via LinkedIn or on the phone, so we met on a hiking trail in the hills behind Oakland that weekend. After a quarter-mile stroll through California’s coastal redwoods, we pulled off the trail to talk in privacy.

Haugen was an unusual source from the start. Facebook’s platforms eroded faith in public health, favored authoritarian demagoguery, and treated users as an exploitable resource, she declared at our first meeting. Rather than acknowledging its problems, Facebook was pushing its products into remote, impoverished markets where she believed they were all but guaranteed to do harm.

Since Facebook wasn’t dealing with its flaws, she said, she thought she might have to play a role in making them public.


Copyright © 2023 by Jeff Horwitz

Jeff Horwitz is a technology reporter at the Wall Street Journal, where he covers Meta. He previously worked as an investigative reporter in the Washington, DC, bureau of the Associated Press, writing about everything from bank misbehavior and regulatory failures to Donald Trump’s presidential campaigns.