Instagram Posts May Have Escalated Fatal Standoff, Police Say
The episode highlights Facebook’s increasingly complicated role in documenting violence, and in some cases, its active place in the middle of it. Before the shots were fired, the Instagram posts caught the police’s attention. Read More
+Commentary: A great read from Harvard Business Review. While ostensibly written for community managers, it applies broadly to all your social media practices as a micro-business, or to any micromarketing efforts. Read More
Facebook relies on editors’ judgment for trending news feed, documents show
But the documents show that the company relies heavily on the intervention of a small editorial team to determine what makes its “trending module” headlines – the list of news topics that shows up on the side of the browser window on Facebook’s desktop version. The company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson, Missouri, in users’ feeds. Read More
But the most appealing factor of live streaming – raw content at the touch of a button – is also its biggest threat: The inability of companies to monitor live content has spawned an entirely new set of serious safety and privacy issues for users. The freedom to live-stream just about anything, anywhere in the world, has prompted a new and uncomfortable predicament for social media companies: What should they do if – or when – a crime is being live-streamed on their platform?
Facebook Removes The Shade Room For “Violating Community Standards”
The Shade Room is a thoroughly modern publication, existing nearly entirely where its audience exists — on social. However, publishing directly to social networks, as Nwandu has pioneered, puts the fortunes, and readership, of TSR into a third party’s hands. Namely, Facebook’s, Instagram’s, Twitter’s, and Snapchat’s.
How should digital news organisations respond to this? Some say it is simple – “Don’t read the comments” or, better still, switch them off altogether. And many have done just that, disabling their comment threads for good because they became too taxing to bother with.
But in so many cases journalism is enriched by responses from its readers. So why disable all comments when only a small minority is a problem?
At the Guardian, we felt it was high time to examine the problem rather than turn away.
+NOTE: Some really great interactive visualizations accompany the data in this article at the link below:
Twitter’s status as a platform for public debate is a dog-whistle platitude that has become the gilded shield of First-Amendment-waving journalists everywhere, like our very own #NotAllMen hashtag, to justify the mishandling – and, in some cases, even endangerment – of our sources for digital stories (and, yes, tweets should be considered sources).
Theoretically, anyone can stumble upon your unprotected tweet; therefore, we can embed your tweet in our news story without informing you or asking your permission. But just because journalists can exercise that power, does that mean we ought to?
[Trigger Warning: Teen Exploitation, Pornography & Sexual Predators]
Jessie discovered it accidentally.
“It was on the popular page,” he told me. “I thought it was just a hot guy with his shirt off.”
Jessie, a 20-something male in New York, had clicked on what he thought was an innocuous selfie on Instagram, the kind of photo we’ve come to expect from a generation that thinks the best way to prove your worth is to purse your lips while staring into a water-stained bathroom mirror. But the image, it turned out, wasn’t of a “hot guy” — it was of a young boy.
“Like, 11-years-old young boy,” Jessie said.
Jessie was creeped out, but what he noticed next disturbed him most: The picture had received thousands upon thousands of likes.
Amine Derkaoui, a 21-year-old Moroccan man, is pissed at Facebook. Last year he spent a few weeks training to screen illicit Facebook content through an outsourcing firm, for which he was paid a measly $1 an hour. He’s still fuming over it.
“It’s humiliating. They are just exploiting the third world,” Derkaoui complained in a thick French accent over Skype just a few weeks after Facebook filed its record $100 billion IPO. As a sort of payback, Derkaoui gave us some internal documents, which shed light on exactly how Facebook censors the dark content it doesn’t want you to see, and on the people whose job it is to make sure you don’t.
Facebook has turned the stuff its millions of users post into gold. But perhaps just as important as the vacation albums and shared articles is the content it keeps out of users’ timelines: porn, gore, racism, cyberbullying, and so on. Facebook has fashioned itself the clean, well-lit alternative to the scary open Internet for both users and advertisers, thanks to the work of a small army of human content moderators like Derkaoui.
“We work to foster an environment where everyone can openly discuss issues and express their views, while respecting the rights of others,” read Facebook’s community standards.
But walking the line between keeping Facebook clean and excessively censoring its content is tricky, and Facebook’s zealousness in scrubbing users’ content has led to a series of uproars. Last April, it deleted a photo of an innocent gay kiss and was accused of homophobia; a few months before that, the removal of a nude drawing sparked the art world’s ire. Most recently, angry “lactivists” have been staging protests over Facebook’s deletion of breast-feeding photos.