Publishing Violence in the Age of Social Media

Instagram Posts May Have Escalated Fatal Standoff, Police Say

The episode highlights Facebook’s increasingly complicated role in documenting violence, and in some cases, its active place in the middle of it. Before the shots were fired, the Instagram posts caught the police’s attention.


Source: Instagram Posts May Have Escalated Fatal Standoff, Police Say | The New York Times


+Note: Coinciding with the above article about the Korryn Gaines incident, New York Magazine looked at Facebook’s latest tweaks to its algorithm and urged the company to publicly state that it is a publisher and to accept editorial guidelines and transparency.

Editorial guidelines Facebook should consider

But while this might be bad for the publishing industry, it’s not necessarily bad for Facebook users. Let’s state it plainly: This is Facebook making an editorial decision on behalf of its users. This makes Facebook not just a middleman between reader and article, but a publisher in itself, issuing edicts on how to present content.

It’s not the first time that Facebook has established a clear role as publisher with editorial responsibilities. Its Trending Topics feature, despite its smooth algorithmic look and tone, is curated and written by a team of editors and writers —

Be transparent about how it works and what gets removed.

The modern era of curiosity-gap clickbait is a direct result of trying to take advantage of the mysterious and powerful Facebook News Feed algorithm. Now realizing that it has created a ghastly content Hydra, Facebook is trying to improve its usefulness as a content provider. If it wants to set a minimum standard for quality, it needs to be explicit about what does and does not pass muster.

Get a public editor.

Good columns on Inverse and Motherboard have addressed this idea, which applies not just to brands and websites on Facebook, but to individual users as well. Facebook needs to be accountable when it makes decisions to remove content at the request of law enforcement, or when safety checks get activated during some crises and not others, or when certain articles are classified as “clickbait.” Facebook needs to be more proactive in explaining why it chooses to take the action it does, when it does. It can’t continue to pretend that nobody notices these things.


Source: Some Editorial Guidelines Facebook Should Consider | New York Magazine


+Commentary: This has been brewing in more than a few ways, but as complicated stories keep arising, it has been hard to pick a single critique. Starting with the Orlando shooting and Pride Month posts in June, and continuing through to now, several viral posts come to mind in which Facebook removed content that does not appear remotely offensive, or at least not nearly as offensive as many of the things we have reported on.

These posts threatened no one, but the removals make it clear that whatever unseen hand or eye guides Facebook’s enforcement is certain to further marginalize already marginalized voices, even as the same so-called free-speech guidelines let the most vile and dehumanizing memes, posts, and updates glide by. You can report something genuinely offensive, only to find there to be “no grounds” under Facebook’s Terms & Policies to exclude it from your News Feed.

This is a lived reality, and Facebook has no one who can or will hold it fully responsible; it does whatever the hell it pleases. Despite the faux outrage over its alleged rigging of Trending Topics, it is still fairly certain that the algorithms are not only not your friend but subject to all the combined biases of the teams that build them.

Yet leaving it all to machine learning and its predictive abilities is like letting the election be decided by FiveThirtyEight and Nate Silver, supplanting the checks and balances we have in place as a democracy. Those checks have reached their limits, we are told, and will not be repaired but disrupted by a “new technology.” What that should mean instead is that the machine, and the programs that run it or are influenced by it, have to be designed for how humans actually are, not for “what users want.”

Yet we will continue to write and proselytize for better humanities and sociology within UX/UI design, a field that has ceded entirely too much to “designers” with a tenuous grasp on their users, and whose own corporate culture does nothing but reflect an entrenched lack of diversity and inclusion. How do we expect those ecosystems to produce results that best reflect their users? You can’t test away or interpret data when you are doing so through a selective lens you now claim is meritocratic.

That isn’t how these systems work, and you should honestly question everything about them. Critical Thinking 101.
