

Ghost in the Machine

Member Since 12 Aug 2015
Offline Last Active Jan 21 2018 08:26 AM
-----

Topics I've Started

The Horrifying Job of Facebook Content Moderators

23 December 2017 - 03:03 PM

The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed
 
 
Baybayan is part of a massive labor force that handles “content moderation”—the removal of offensive material—for US social-networking sites. As social media connects more people more intimately than ever before, companies have been confronted with the Grandma Problem: Now that grandparents routinely use services like Facebook to connect with their kids and grandkids, they are potentially exposed to the Internet’s panoply of jerks, racists, creeps, criminals, and bullies. They won’t continue to log on if they find their family photos sandwiched between a gruesome Russian highway accident and a hardcore porn video. Social media’s growth into a multibillion-dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user-generated content—to ensure that Grandma never has to see images like the one Baybayan just nuked.
 
This work is increasingly done in the Philippines. A former US colony, the Philippines has maintained close cultural ties to the United States, which content moderation companies say helps Filipinos determine what Americans find offensive. And moderators in the Philippines can be hired for a fraction of American wages. Ryan Cardeno, a former contractor for Microsoft in the Philippines, told me that he made $500 per month by the end of his three-and-a-half-year tenure with outsourcing firm Sykes. Last year, Cardeno was offered $312 per month by another firm to moderate content for Facebook, paltry even by industry standards.
 
When I asked Microsoft, Google, and Facebook for information about how they moderate their services, they offered vague statements about protecting users but declined to discuss specifics. Many tech companies make their moderators sign strict nondisclosure agreements, barring them from talking even to other employees of the same outsourcing firm about their work.
 
“I think if there’s not an explicit campaign to hide it, there’s certainly a tacit one,” says Sarah Roberts, a media studies scholar at the University of Western Ontario and one of the few academics who study commercial content moderation. Companies would prefer not to acknowledge the hands-on effort required to curate our social media experiences, Roberts says. “It goes to our misunderstandings about the Internet and our view of technology as being somehow magically not human.”
 
A list of categories, scrawled on a whiteboard, reminds the workers of what they’re hunting for: pornography, gore, minors, sexual solicitation, sexual body parts/images, racism. When Baybayan sees a potential violation, he drills in on it to confirm, then sends it away—erasing it from the user’s account and the service altogether—and moves back to the grid. Within 25 minutes, Baybayan has eliminated an impressive variety of dick pics, thong shots, exotic objects inserted into bodies, hateful taunts, and requests for oral sex.
 
More difficult is a post that features a stock image of a man’s chiseled torso, overlaid with the text “I want to have a gay experience, M18 here.” Is this the confession of a hidden desire (allowed) or a hookup request (forbidden)? Baybayan—who, like most employees of TaskUs, has a college degree—spoke thoughtfully about how to judge this distinction.
 
“What is the intention?” Baybayan says. “You have to determine the difference between thought and solicitation.” He has only a few seconds to decide. New posts are appearing constantly at the top of the screen, pushing the others down. He judges the post to be sexual solicitation and deletes it; somewhere, a horny teen’s hopes are dashed. Baybayan scrolls back to the top of the screen and begins scanning again.
 
While a large amount of content moderation takes place overseas, much is still done in the US, often by young college graduates, as Swearingen was. Many companies employ a two-tiered moderation system, in which the most basic moderation is outsourced abroad while more complex screening, which requires greater cultural familiarity, is done domestically. US-based moderators are much better compensated than their overseas counterparts: a brand-new American moderator for a large tech company can make more in an hour than a veteran Filipino moderator makes in a day. But a career in the outsourcing industry is something many young Filipinos aspire to, whereas American moderators often fall into the job as a last resort, and burnout is common.
 
The worst was the gore: brutal street fights, animal torture, suicide bombings, decapitations, and horrific traffic accidents. The Arab Spring was in full swing, and activists were using YouTube to show the world the government crackdowns that resulted. Moderators were instructed to leave such “newsworthy” videos up with a warning, even if they violated the content guidelines. But the close-ups of protesters’ corpses and street battles were tough for Rob and his coworkers to handle. So were the videos that documented misery just for the sick thrill of it.
 
“If someone was uploading animal abuse, a lot of the time it was the person who did it. He was proud of that,” Rob says. “And seeing it from the eyes of someone who was proud to do the fucked-up thing, rather than news reporting on the fucked-up thing—it just hurts you so much harder, for some reason. It just gives you a much darker view of humanity.”
 
Given that content moderators might very well comprise as much as half the total workforce for social media sites, it’s worth pondering just what the long-term psychological toll of this work can be. Jane Stevenson was head of the occupational health and welfare department for Britain’s National Crime Squad—the UK equivalent of the FBI—in the early 2000s, when the first wave of international anti-child-pornography operations was launched. She saw investigators become overwhelmed by the images.
 
“From the moment you see the first image, you will change for good,” Stevenson says. But where law enforcement has developed specialized programs and hires experienced mental health professionals, Stevenson says that many technology companies have yet to grasp the seriousness of the problem.
 
 
Underpaid and overburdened: the life of a Facebook moderator 
 
“There was literally nothing enjoyable about the job. You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off.”
 
That’s how one man, who wished to remain anonymous, characterized his job as a content moderator at Facebook.
 
“We were underpaid and undervalued,” said the man, who earned roughly $15 per hour removing terrorist content from the social network after a two-week training course.
 
Pictures, videos, profiles and groups would be flagged for review, either by users or by algorithms, and his team would decide whether they needed to be removed or escalated.
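 
For a sense of how such a triage loop might be wired up, here is a minimal sketch of a flag-review-escalate queue in Python. This is my own illustration, not Facebook's actual system: the category list, the escalation rule, and the sample items are all invented for the example.

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical categories, echoing the whiteboard list in the article above.
FORBIDDEN = {"pornography", "gore", "minors", "sexual solicitation", "racism"}
ESCALATE = {"terrorism"}  # ambiguous or sensitive material goes to a senior team

@dataclass
class FlaggedItem:
    item_id: int
    suspected_category: str  # set by the reporting user or the algorithm
    flagged_by: str          # "user" or "algorithm"

def review(item: FlaggedItem) -> str:
    """One moderator decision: remove, escalate, or leave the content up."""
    if item.suspected_category in FORBIDDEN:
        return "remove"    # delete from the account and the service
    if item.suspected_category in ESCALATE:
        return "escalate"  # pass to a more senior review team
    return "allow"

# Items land in the queue from user reports and automated flags alike.
queue = deque([
    FlaggedItem(1, "gore", "algorithm"),
    FlaggedItem(2, "terrorism", "user"),
    FlaggedItem(3, "vacation photo", "user"),
])

while queue:
    item = queue.popleft()
    print(item.item_id, review(item))  # 1 remove, 2 escalate, 3 allow
```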
 
“Every day people would have to visit psychologists. Some couldn’t sleep or they had nightmares.”
 
Psychologists say that the emotional toll of looking at extreme imagery, whether violent terrorist acts or child sexual abuse, can be punishing. Workers exposed to such content should have extensive resiliency training and access to counsellors, akin to the support that first-responders receive. However, testimony from those working to keep beheadings, bestiality and child sexual abuse images off Facebook indicates that the support provided isn’t enough.
 
“The work entails looking at the underbelly of human nature, including betrayal and cruelty. We need to offset the dark side with human connections,” said Naheed Sheikh, co-founder of the Workplace Wellness Project.
 
If moderators have not been trained effectively to handle a particular type of content, it can be especially distressing.
 
 
 

Normalizing the Alternative Agenda

16 December 2017 - 02:16 PM

Enter the territory of forbidden words. A realm of politically correct jargon fortified with extra jibber-jabber. Protein-free and guaranteed to stifle your brain.
 
CDC gets list of forbidden words: fetus, transgender, diversity
 
The Trump administration is prohibiting officials at the nation’s top public health agency from using a list of seven words or phrases — including “fetus” and “transgender” — in any official documents being prepared for next year’s budget.
 
Policy analysts at the Centers for Disease Control and Prevention in Atlanta were told of the list of forbidden words at a meeting Thursday with senior CDC officials who oversee the budget, according to an analyst who took part in the 90-minute briefing. The forbidden words are “vulnerable,” “entitlement,” “diversity,” “transgender,” “fetus,” “evidence-based” and “science-based.”
 
In some instances, the analysts were given alternative phrases. Instead of “science-based” or “evidence-based,” the suggested phrase is “CDC bases its recommendations on science in consideration with community standards and wishes,” the person said. In other cases, no replacement words were immediately offered.
 
 
 

NASA Testing Planetary Defense System on Asteroid

08 October 2017 - 11:49 AM

How NASA Plans to Test its Planetary Defense Systems on Close-Approach Asteroid
 
NASA is preparing to test out its planetary defense systems on an asteroid that will come extremely close to Earth next week.
 
Asteroid 2012 TC4 will pass us by on October 12 at an estimated distance of 31,000 miles—roughly an eighth of the distance between our planet and the moon, so just a whisker in astronomical terms. About 50 feet wide, the asteroid poses absolutely no threat to Earth. But it presents NASA with an opportunity to practice for a real-life impact event.
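 
As a quick sanity check on that fraction (assuming the commonly quoted mean Earth-moon distance of about 238,855 miles):

```python
# Rough check of the "an eighth of the lunar distance" claim.
# 238,855 miles is the often-cited mean Earth-moon distance (an assumption here).
lunar_distance_mi = 238_855
flyby_distance_mi = 31_000
print(flyby_distance_mi / lunar_distance_mi)  # ~0.13, i.e. roughly one eighth
```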
 
 
 
"This is the perfect target for such an exercise because while we know the orbit of 2012 TC4 well enough to be absolutely certain it will not impact Earth, we haven't established its exact path just yet," says Paul Chodas, manager of NASA's Center for Near Earth Object Studies.
 
This experiment will test how well NASA can determine the orbit of a newly detected asteroid that might pose a danger to us. If the agency can successfully track an asteroid of this size, it would then be possible to determine where it is likely to impact the planet. Then, we could decide whether or not we need to intercept it.
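 
In the simplest terms, turning tracking data into an impact decision means propagating the orbit's uncertainty and asking how much of it overlaps the Earth. The toy Monte Carlo below illustrates the idea; the nominal miss distance and its uncertainty are made-up numbers, not real 2012 TC4 tracking data, and collapsing the problem to a one-dimensional miss distance is a deliberate simplification.

```python
import random

# Toy impact-probability estimate from orbit uncertainty.
# All figures are illustrative assumptions, not real 2012 TC4 values.
EARTH_RADIUS_KM = 6371
nominal_miss_km = 50_000   # hypothetical best-fit miss distance
sigma_km = 30_000          # hypothetical 1-sigma tracking uncertainty

random.seed(42)
trials = 100_000
hits = sum(
    abs(random.gauss(nominal_miss_km, sigma_km)) < EARTH_RADIUS_KM
    for _ in range(trials)
)
print(f"Estimated impact probability: {hits / trials:.3%}")
```

Tighter tracking shrinks sigma_km, which either drives the estimated probability toward zero or confirms a threat worth intercepting.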
 
 
 
Small asteroids hit Earth almost daily, breaking up harmlessly in the upper atmosphere. Objects large enough to do damage at the surface are much rarer. Objects larger than 0.6 miles (1 kilometer) in diameter -- large enough to cause global effects -- have been the focus of NASA’s ground-based search for potentially hazardous objects with orbits that bring them near the Earth, and about 93 percent of objects that size have already been found. The Double Asteroid Redirection Test (DART) would test technologies to deflect objects in the intermediate size range -- large enough to do regional damage, yet small enough that many more remain unobserved and could someday hit Earth. NASA-funded telescopes and other assets continue to search for these objects, track their orbits, and determine if they are a threat.
 
To assess and formulate capabilities to address these potential threats, NASA established its Planetary Defense Coordination Office (PDCO) in 2016. The office is responsible for finding, tracking and characterizing potentially hazardous asteroids and comets that come near Earth, issuing warnings about possible impacts, and assisting in the planning and coordination of the U.S. government’s response to an actual impact threat.
 
 
To learn more about NASA planetary defense and DART, visit:
 
 

3-D Printed Chainmail Armour from NASA

04 August 2017 - 04:52 PM

The hi-tech 'chainmail' armour suits NASA hopes could protect astronauts and spacecraft in deep space
 
 
3D printed material could be used to protect spacecraft and astronauts
 
Could also insulate craft and people on cold planets and protect against radiation
 
Could create flexible feet to allow craft to touch down on remote planets
 
Andrew Shapiro-Scharlotta of JPL, whose office funds research for early-stage technologies like the space fabric, said that adding multiple functions to a material at different stages of development could make the whole process cheaper.  
 
'We are just scratching the surface of what's possible,' Shapiro-Scharlotta said. 
 
'The use of organic and non-linear shapes at no additional costs to fabrication will lead to more efficient mechanical designs.'
 
The space fabrics have four essential functions: reflectivity, passive heat management, foldability and tensile strength.
 
'If 20th Century manufacturing was driven by mass production, then this is the mass production of functions.' 
 
NASA hopes the fabrics could be useful for large antennas and other deployable devices, because the material is foldable and its shape can change quickly.
 

If YOU Travel You NEED to See This Video!

17 July 2017 - 04:23 PM

The video is about biometric ID, facial recognition, and fingerprinting tech for travel around the world. It touches on the legislative procedures involved as government agencies integrate their databases with those of private companies to expedite ID-control mandates. People are being transformed into digital organisms...
 
Never mind the flashlight ad at the beginning. 
 
