Monday morning, The Verge's Casey Newton published a blockbuster investigation about Facebook's reliance on American contract workers to moderate its platform. Contract content moderators at Cognizant, a "professional services" company with an office in Arizona, can make as little as $28,800 per year to moderate posts that can include hate speech, graphic violence, sexual content, and suicide and self-injury.
Newton's piece is a rare window into what it's actually like to be a Facebook contractor: some of his sources say they've been radicalized by the content they click through on a daily basis, others suffer from anxiety and panic attacks, and one moderator says they've started sleeping with a gun. At Cognizant, contract content moderators for Facebook get nine minutes of "wellness time" per day to take mental health breaks. Some of them say they can be fired for having too low an "accuracy rate."
Last year, Motherboard senior staff writer Joseph Cox and I spent several months reporting an investigation into how Facebook makes its content moderation policies, and how its larger moderation apparatus works. As part of that investigation, I went to Facebook's Menlo Park, California, headquarters for on-the-record interviews with the people in charge of the company's sprawling content moderation operation. One of those leaders was Brian Doegan, who at the time was Facebook's "Global Learning Leader." Doegan was in charge of Facebook's content moderator training practices—essentially, he helped set up the guidelines by which all moderators would be trained, and the best practices for actually training them.
This interview took place at the end of June 2018. Doegan has since left the company; according to active job listings, Facebook has not yet replaced him. Even so, this is still, as far as I know, the most extensive interview Facebook has ever given about its specific protocols for training moderators and contract employees. Besides Doegan, Facebook spokespeople Carolyn Glanville and Ruchika Budhraja participated in the interview and added clarifications when necessary. This interview has been lightly edited for length and clarity.
MOTHERBOARD: What is the onboarding process like for a content moderator?
BRIAN DOEGEN: After the talent is placed for this role (and obviously we look at language ability, local marketability, and things like that), that's when their learning journey begins. We start off by just acclimating folks to the Facebook culture: what we're all about, what our mission is. From there, we engage in basically a three-phase approach. I bucket our training onboarding experience into three distinct categories. There's pre-training, which is your onboarding and shadowing, and I'll say more about that; there's your formal curriculum, the comprehensive curriculum and the materials that we leverage to have that conversation; and then there's post-training and ongoing reinforcement.
The pre-training is where we ensure that all of our reps have an opportunity to shadow and observe more experienced people that are actually doing this role. They get a really good sense early on for a day in the life. And there's a lot of learning there, just by nature of the fact that they have the opportunity to really interact with people that are already doing that job well. So, when they segue into their formal curriculum, it's several weeks, and it's very comprehensive. We cover all of our community standards, and in that exchange there's an opportunity for practice, for discussion, for examples. There's actually a conversation that happens around all of the material.
Does this apply to both contractors as well as…
Yeah, it does.
And one thing we're really proud of is the consistency we have in that space. But on that, it's really three things working together, right? Training is never one and done. So, yes, there's a healthy element of lecture, which most of us are familiar with just from having been exposed to academia, etcetera, but there's also an on-the-job component, very practical. So, trial and error.
We look at metrics, and we actually give you an opportunity to assess your knowledge in a very safe environment, so that we can provide individual coaching. So, really: lecture, formal training, on-the-job coaching and mentoring, and just really practical support. We weave all of that together into a multi-week experience. And every abuse type is different: actioning hate is very different from actioning bullying, which is very different from sexual content. So, from a design perspective, we truly tried to get the right blend of modalities targeted at the right policy. And we're always evolving and revisiting that.
Do people specialize in different content types?
On the whole, we upskill our moderators, our content reviewers, so that they can action all of our abuse types.
"Hypothetically, if you're struggling with hate, then we can look at the data based on your simulation results, and we can work with you on that topic."
Even the trainers, is it like: this is our expert on hate speech, this is the team that's an expert on hate speech, this is the team that's the expert on sexual stuff…
We definitely have subject matter experts that do specialize in abuse types, and really, the goal of the training function is that we partner very closely with our subject matter experts so that we can take that information and convert it into the best possible training experience, and then deliver that to our sites across the world.
CAROLYN GLANVILLE: I'd also elaborate slightly on that...
BRIAN DOEGEN: Sorry if I misspoke.
CAROLYN GLANVILLE: You didn't at all, you didn't at all, I just wanted to add a little bit more. So, for our reviewers, as he said, they're trained on all of our community standards. As you talked about with James, you have that escalation channel, where people are slightly more knowledgeable about certain policies; you have the policy team that can dig in on some of those things, and all of the nuances are better served there. Then you also have the markets team, and some of these things go there. But naturally, through the process of our reviewers being onboarded for all our policies, there are certain areas that take a little bit more training or specialty, like child exploitative images. That's not going to be shared widely; there's a specific subset who are going to be able to deal with those things and deal with them right, through the correct channels. But training, for him, globally, is the masses: how do we make sure that the masses are moving together.
BRIAN DOEGEN: Right, and thank you for that, that's a good point. Just on that for example, training and monitoring against hate in Turkey is very different from monitoring hate in Texas, just culturally, so, to Carolyn's point, we do leverage what we call our market teams which is a body of professionals that specialize in certain markets and languages, and we often partner with them to make sure that our training has the most relevant examples.
"We don't teach with an iron fist for lack of a better term"
Content moderation is a hard job, a super hard job. How is that portrayed to employees when they're getting started? Is there a period of time where it's like: this is the type of stuff you're going to have to look at every day, this is how you're going to be judged, is this right for you, is this not right for you, that sort of thing?
BRIAN DOEGEN: We recognize that it's a hard job. I spend a lot of time monitoring the folks actually actioning this content, too, so that I can stay somewhat relevant with everything that's going on, and monitoring the learning experience. This job is not for everyone, candidly, and we recognize that. In my mind, that starts at talent selection: engaging through the interview and selection process, making sure that folks have a high level of understanding of what it is that they would be getting into. We also do that through various assessments and other creative approaches to make sure that we're getting the right people into the roles to begin with. From that point on, one of the reasons we engage in shadowing early on is we want to give folks the opportunity to live the life of [a moderator], so that they get a realistic flavor of that. And of course, we're supportive, and one of the things I'm quite proud of is that we have such a broad resiliency program, so that everybody does feel supported, and they always have access to resources and services and things like that at any point during this process.
I'm certainly not an expert in HR strategies or training strategies, but I'm curious if the training that you give is modeled on anything like, modeled on an academic theory or modeled on some sort of best practices that have been seen in other industries, or other parts of Facebook, or was it built from the ground up specifically for content moderation?
In some regards we're unique. We're the only organization that is providing—this is what gets me excited about my job, right—this kind of training at this scale, in a relatively emerging area. And of course, corporate training has a lot of legacy models.
The one that we seem to have identified with the most is called the 70-20-10 model. And again, set the numbers aside for just a minute, but the elevator pitch of that model suggests that roughly 70 percent of what you know as a professional, as an editor, as a reporter, you probably learned through on-the-job practice; 20 percent through coaching, having had the pleasure of working with people who can show you the ropes; and 10 percent through formal learning. So, we try to balance the formal learning, on-the-job practice, and then coaching as well. That model does fit very well in terms of what we're trying to do, and we keep it at the heart of everything we do. Because, as we talked about, training is never one size fits all, and it's never one modality; it's a mix of different kinds of formats.
Is there a period of time where people are making content moderation decisions on a dummy site or test site or something like that?
Yes, we offer opportunities for practice, and a very valuable tool that we have at our disposal, which we're now introducing, is a simulation mode that allows you to action true-to-life content. It's a replica of the same system that these folks are using every day, purely for the purposes of learning and practice. So, the decisions made there don't impact the community of 2 billion people. For us that's been a really valuable tool, because we get data and a practice opportunity, but we go beyond that, and it gives us a really great avenue to provide personalized coaching. So, hypothetically, if you're struggling with hate, then we can look at the data based on your simulation results, and we can work with you on that topic.
We also launched assessments and tests, and again, they look more or less just like the same application that folks are using [on the live site]. So, alongside what we call the simulation mode, we have and will continue to use assessments as well. You will get a sampling of, let's say, 20 jobs, 30 jobs, however many it takes in a certain abuse type, and that is our version of what we would call a classic assessment. That plus practice typically work together really well. I just wanted to get that plug in there, because I feel like that is a truly active ingredient that we use as well.
What is success for a content moderator? Obviously you want 100 percent accuracy, but what are they shooting for realistically? What makes a good content moderator: is it 99 percent accuracy, according to someone who audits it later, or…
I feel like we don't publicize the actual metrics. 100 percent is always the goal, and maybe Carolyn you can say a little bit more about that, but that is always the goal. Because at the end of the day, behind every abuse type, around every report, is a person, and that's what makes this such a challenging position.
"Whereas in some cultures it's fine to just go walk across the hall to a counselor, and they don't care; in other cultures, they don't do that, they would do it off hours, and other people might not know about it."
CAROLYN GLANVILLE: I think you would be surprised at how high the accuracy rate is, but the reason we haven't really talked about it to date is because, like he said, behind each of the errors that do exist is a person, and those are the ones that get publicized, those are the ones where people are upset. So, no matter what our number is, it's never going to be enough, and we're always working towards reaching that state of not having mistakes.
When you're training people, or when you're onboarding them, you obviously stress that you want 100 percent accuracy, but of course mistakes are going to get made, and I assume that working under the belief that any mistake I make could be catastrophic is probably not realistic. So, how is that messaged to people?
BRIAN DOEGEN: Learning is a safe environment; it's just for the purposes of learning, right? So, we don't teach with an iron fist, for lack of a better term. And I can tell you, from design through execution through monitoring, my team is also consistently evaluating the quality of these programs. The focus is really on the community.
RUCHIKA BUDHRAJA: I wonder if some of that is on us, to be setting expectations externally. Like, we aren't going to be perfect, and I think we say that a lot, but what's the alternative to not being perfect? Everything being swept up by automation, and people don't want that; so when you need to have people, you're going to have mistakes. I think we're also trying to talk more about automation and our algorithms, so maybe the two go hand in hand. If reviewers see us talking about this externally, then they're not as overwhelmed internally.
CAROLYN GLANVILLE: I would say there are probably very few cases, and I can't speak to specific examples, where someone is let go for making a mistake, from Brian's world. If we start to see that mistakes are being made, that either means retraining needs to happen, or it'll end up in kind of the other flow: we're starting to see people making mistakes on this kind of content, so maybe there's something in our policy we need to look at, maybe there's a gap in our tooling, maybe there's some other thing that's not quite identified yet. So, I think the mistakes also help to surface some of those things, which is a good thing, something we need to address.
"There's actual physical environments where you can go into, if you want to just kind of chillax, or if you want to go play a game, or if you just want to walk away, you know, be by yourself, that support system is pretty robust"
Are most of the trainers content moderators who have just been through the ranks?
BRIAN DOEGEN: Some are, yes, absolutely. That has been great for us because it also provides career pathing. Folks that are truly exceptional, have a drive to do a little bit more, and have a drive for coaching and mentoring often do come into the role of trainer, or even training lead.
As far as resiliency goes, I know a lot has already been written about that. This job is difficult; you're often looking at graphic images, hate speech, things like that. But what do you offer?
From a training perspective, we touch upon [resiliency] in each of our abuse types. It's something we talk about, and we revisit it early on in that training module, so we don't just radically expose you; rather, we have a conversation about what it is and what we're going to be seeing, etcetera, to make sure that we're level set. The broader resiliency efforts, for me, and what I admire, is that at any point in this role, you have access to counselors, you have access to having conversations with other people. There's actual physical environments where you can go into, if you want to just kind of chillax, or if you want to go play a game, or if you just want to walk away, you know, be by yourself. That support system is pretty robust, and it is consistent across the board.
CAROLYN GLANVILLE: One thing that's interesting, not even so much from the training perspective but resiliency in general: look at the global nature of what we do. We offer resiliency counseling to anyone that reviews content, but look at the cultural acceptance in certain places in the world, where it's not necessarily culturally acceptable to take that sort of help, or to talk to someone publicly. So we have to work very closely with our vendor partners or our sites, whatever kind of setup it is, to make sure that their employees know what is available in a way that is also culturally acceptable for them. Whereas in some cultures it's fine to just go walk across the hall to a counselor, and they don't care; in other cultures, they don't do that, they would do it off hours, and other people might not know about it. It's just interesting that we have to take those cultural nuances into account when ensuring that people even know about the resources that are available.
Do you have to do that in terms of the actual training as well? Just like, people in this part of the world learn in this way, these are best practices in India, these are best practices in…
BRIAN DOEGEN: Learning is as much of an art as it is a science. Even as an insanely data-driven company, it's not down to, like… there's no research that says this is the way to teach in India; that doesn't exist. But what we do is make sure that people can see, hear, and interact with what they're learning. Our goal is really to accommodate a wide variety of different cultural learning styles, and that's why we place so much emphasis on different modalities, if that answers your question. But you're asking a question that no learning professional has been able to answer with military precision, I can tell you that. It's a great question, don't get me wrong.
You mentioned that teaching is art, not science. Content moderation is art not science, as well…
Or art and science. Yeah, there is a science; there's a methodology for how we triage need and so on that we use, the ISD [instructional systems design] methodology.
Do you think that Facebook values the role of the human in this process?
Absolutely. In my mind, that's evidenced by the pure nature of our scaling and growing, and the fact that we're placing so many resources behind this to get it right. In my 17-year career, for what that's worth, I've not quite seen an attempt to invest this heavily in that function.
This article originally appeared on VICE US.