Bubbles, Part 2: How Social Media Makes It Worse

My first Bubbles post focused on personal steps we can each take to broaden our perspectives and come to understand our fellow Americans. That half of the equation is extremely important, and I do not want the flaws in our institutions to eclipse it. With that caveat, let’s move on to what social media platforms are doing to aggravate the problem and what they could do about it. I will focus on two platforms because they are familiar to me: Facebook and YouTube.

Facebook Wants to Only Show You What You Will Like

To me, the most obvious symptom of Facebook’s problem is how reported posts are handled. When you try to report something on Facebook that’s against their terms of service, they keep steering you toward just ignoring it.

What you see when you report something that “advocates violence or harm to a person or animal”

The fifth option above is reasonable, although if it comes to that, it would reveal to the poster who probably reported them. I am more concerned that three of the five options involve keeping the inappropriate or illegal piece of content online and simply never seeing similar things from that person again. So, I guess Facebook wants me to be in a social bubble? It feels like they are suggesting there’s something wrong with me for having seen someone else’s transgression.

If you report something that advocates violence (for instance, a link I saw to a video of cars running over protesters), they may not do anything about it. Here’s the reply I got for that one:

“Thanks for letting us know about this. We looked over the post, and though it doesn’t go against one of our specific Community Standards, you did the right thing by letting us know about it. We understand that it may still be offensive or distasteful to you, so we want to help you see less of things like it in the future.

From the list above, you can block [redacted] directly, or you may be able to unfriend or unfollow them. We also recommend visiting the Help Center to learn more about how to control what you see in your News Feed. If you find that a person, group or Page consistently posts things you don’t want to see, you may want to limit how often you see their posts or remove them from your Facebook experience.

We know these options may not apply to every situation, so please let us know if you see something else you think we should take a look at.”

Hey, Facebook: what is wrong with you? “We want to help you see less of these things” is not the solution. I’m not an idiot; I know I can unfriend or block people. That does not solve the problem of a group of my fellow Americans advocating and glorifying violence against other Americans, including myself. That’s not okay! Ignorance is not bliss in a democracy, but ignorant is exactly what you are encouraging me to be.

I have reported two incidents of people sharing content that advocated violence and harm to either a specific individual or group, and you took no action on either one. The bigger problem here, though, is the attitude that your perfect social network would only consist of people seeing things they want to see.

Facebook, I accept that you probably need to explain to users trying to report things that are merely offensive that they have the ability to block or unfriend the person who shared it. With that said, there are several ways you could make the process work better:

  1. Offer guidance on how to have a productive dialog with people who post offensive things, if such guidance does not exist already. Ignoring people we do not agree with should not be the only option.
  2. Stop repeating the same block and unfriend options with every report from the same person. At some point the person reporting knows those options exist, and the reminder sounds too much like a recommendation. I have only reported twice, so perhaps it already does this eventually.
  3. I can’t believe I have to say this, but you could actually enforce your own guidelines on acceptable content. Is it really okay for people to share videos glorifying people getting run over? Is it okay to share content that suggests violence against people, even jokingly? The Secret Service takes jokes that threaten the president seriously; shouldn’t you? [1]

Along the same lines, you could make similar reforms to the way the unfriending or blocking process works. Ask for the reason one is blocking or unfriending. In the right circumstances, suggest resources to help people work out their differences of opinion. You could remind people of those they unfriended months ago and suggest maybe it’s time to patch things up (again, in the right circumstances, not “he sexually assaulted me”).

YouTube Helps Spread Ideologically Polarizing Content

YouTube’s recommendation system is incredibly bad considering how much AI Google puts into its other products. It appears to operate largely on keywords in the titles of videos I’ve watched, which it then uses to suggest others I might like.

In the past year or so, I’ve been taken down a rabbit hole that probably started with political segments on late night talk shows, continued on to MSNBC videos, then to tiny channels that upload recorded segments of cable news shows which make the conservative contributor look bad, and finally to The Young Turks and David Pakman. Now my recommendations are clogged with this stuff and I don’t want to watch any of it. In essence, YouTube built a bubble for me, and it feels like my only way out is to stop using its recommendations entirely. Is it really YouTube’s goal for me to stop watching? Why does such an enormous and varied video platform still feel like cable TV with nothing interesting on? I know you have good content for me, YouTube; you just are not smart enough to show it to me.

I just described how this situation leads me to consume less content on their platform; Google might also consider that the state where they are headquartered may vote on secession next year. Either way, they should see a business interest in putting serious effort into repairing the ideological rift.

There are some silver linings, though I think they are just coincidences. For instance, I occasionally see recommendations for videos interviewing Trump voters and supporters; this is probably because of keywords in the titles.

I recognize this is a difficult problem to solve in general. If we think just in terms of political content, though, YouTube could experiment with recommendations that try to categorize the ideology of the videos it presents. The starting point would be predicting the ideology of users and channels: analyze users’ viewing and rating histories to identify what political segments exist in the audience.
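As a rough illustration of that first step (a minimal sketch of my own, not anything YouTube actually does), segments could be approximated by clustering users on their channel engagement. Every name and number below is an assumption made up for the example.

```python
# Hypothetical sketch: approximate audience segments from viewing history.
# The engagement matrix and the choice of two segments are assumptions for
# illustration; this is not a real YouTube API or algorithm.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def infer_segments(engagement, n_segments=2, n_factors=20):
    """engagement: (n_users, n_channels) counts of watches/likes per channel."""
    # Compress the engagement matrix into a few latent "taste" factors.
    factors = TruncatedSVD(n_components=n_factors).fit_transform(engagement)
    # Cluster users in that latent space; clusters stand in for political segments.
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(factors)

# Toy usage: 1,000 users, 200 channels, random engagement counts.
engagement = np.random.poisson(0.3, size=(1000, 200))
user_segment = infer_segments(engagement)  # segment id per user
```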

Next, a simple messaging tweak: get YouTube users to trust that the site is taking thumbs-up and thumbs-down ratings into account, and doing it well, when generating recommendations. Right now I rarely use those buttons because I only understand how they affect publishers, and I do not trust that YouTube will make my experience better by collecting even more information from me. Maybe ratings are already factored in; if so, the problem is one of trust.

Channels and videos can then be categorized by how much appeal they get among different political segments. “Appeal” is a fuzzy term and developers don’t like those, so let me define it: a score representing the share of users in a particular segment who responded positively to the content, either with a thumbs-up or a positive comment (I assume Google has algorithms smart enough to tell the difference between a positive and a negative comment). A separate “hostility” score might also be useful to track: the opposite measure, the share of a segment who responded negatively with a thumbs-down or a negative comment. Videos can then be sorted into some rough categories by comparing these scores; the most interesting combinations are below, and a sketch of how the scores might be computed follows the list.

  • A video with similar appeal across segments would be either apolitical in nature or (less likely) genuinely nonpartisan.
  • A video with a decent appeal score in one segment and some appeal (and low hostility) in another is good bipartisan content.
  • A video with a decent appeal score in one segment but significant hostility from another is partisan content. It could also be apolitical but triggering content, like the Ghostbusters trailer.
  • Channels could also receive a reputation-like score: how often does the channel produce partisan, bipartisan, or apolitical content? That score has predictive value for future videos the channel publishes, though older videos might need to be discounted to account for ideological shifts in a channel over time.
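To make the appeal and hostility idea concrete, here is a minimal sketch under my own assumptions; the reaction labels, thresholds, and category names are all made up, and comment sentiment is treated as already classified elsewhere.

```python
# Hypothetical sketch of the "appeal" and "hostility" scores described above.
# reactions: iterable of (user_id, reaction) for one video, where reaction is
# "up", "down", "pos_comment", or "neg_comment".
# segment_sizes: segment id -> number of users in that segment.
from collections import defaultdict

POSITIVE = {"up", "pos_comment"}
NEGATIVE = {"down", "neg_comment"}

def score_video(reactions, user_segment, segment_sizes):
    positive, negative = defaultdict(set), defaultdict(set)
    for user, reaction in reactions:
        seg = user_segment[user]
        if reaction in POSITIVE:
            positive[seg].add(user)
        elif reaction in NEGATIVE:
            negative[seg].add(user)
    # Appeal/hostility = share of each segment reacting positively/negatively.
    appeal = {s: len(positive[s]) / n for s, n in segment_sizes.items()}
    hostility = {s: len(negative[s]) / n for s, n in segment_sizes.items()}
    return appeal, hostility

def categorize(appeal, hostility, strong=0.05, weak=0.01):
    """Rough mapping onto the categories above; the thresholds are arbitrary."""
    segs = sorted(appeal, key=appeal.get, reverse=True)
    top, others = segs[0], segs[1:]
    if all(abs(appeal[top] - appeal[s]) < weak for s in others):
        return "apolitical_or_nonpartisan"
    if all(appeal[s] > weak and hostility[s] < weak for s in others):
        return "bipartisan"
    if any(hostility[s] > strong for s in others):
        return "partisan_or_triggering"
    return "unclear"
```

A channel-level reputation score could then simply aggregate these category outcomes over its recent uploads, with older videos discounted.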

User ideologies would have to be reevaluated over time based on how their reactions to videos compare with those of other users.

Now YouTube could tweak its recommendation algorithms. For existing content, inject some bipartisan videos into the other side’s recommendations by replacing recommended videos that are partisan. Leave apolitical videos alone. Because the selection of videos for this process would in turn influence the scores used to select those videos in the future, this would be a learning algorithm that rewards content with broad appeal but still challenges one side’s point of view.
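A sketch of that injection step, again under my own assumed names and data structures rather than anything YouTube exposes, might look like this: swap a few partisan picks in a user’s ranked recommendations for bipartisan videos that lean toward the other segment.

```python
# Hypothetical sketch of the injection tweak: replace a few partisan
# recommendations with bipartisan videos leaning away from the user's segment.
# category/lean are per-video labels produced by the scoring step above.
import random

def inject_bipartisan(recommendations, user_segment, category, lean,
                      bipartisan_pool, max_swaps=3):
    candidates = [v for v in bipartisan_pool if lean.get(v) != user_segment]
    random.shuffle(candidates)
    result, swaps = [], 0
    for video in recommendations:
        if swaps < max_swaps and category.get(video) == "partisan" and candidates:
            result.append(candidates.pop())   # swap in a bipartisan video
            swaps += 1
        else:
            result.append(video)              # apolitical/bipartisan picks stay put
    return result
```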

An interesting experiment they could do is make a recommendation system like this opt-in for each user (with promotion). That way people start experiencing it after making a commitment to wanting to see things from other points of view. It might work out better than forcing it on people.

The final piece would probably be to help creators understand how these rules work and let them measure the feedback they are getting from different segments. That feedback could encourage channels to produce more bipartisan content, and they would be rewarded with broader exposure for new videos as soon as they are uploaded.

Another way to frame this, without getting into politics at all, is simply that YouTube should focus on determining which videos a user is most likely to watch and like. My understanding is that this is a key part of YouTube’s business model, and these changes would directly support it. It’s a bonus that they could also help bridge the ideological divide and get Americans exposed to the same basic information once again.

I recently came across a CGP Grey video [2] that does a good job of explaining how polarizing content that just makes people angry or fearful spreads virally. Social networks may try to claim that this is not their problem to address, but when they have so much market share (Facebook has more than one billion active accounts every month [3]), they stop being victims of the problem and become the problem itself. Getting this right on any social network is difficult, and though these measures would not fix everything, they would be steps in the right direction.

Learn more:

  1. How the Secret Service Protects Obama on the Internet (The Atlantic, 2015)
  2. This Video Will Make You Angry (CGP Grey, YouTube). If I told you how this video actually made me feel, you wouldn’t watch it. Watch the video if you want to understand what that means.
  3. Facebook users worldwide 2016 (Statista)
