Filter bubble

I think if you take all these filters, if you take all of these algorithms, you get what I call a filter bubble.

And your filter bubble is kind of your own personal unique universe of information that you live in online.

And what’s in your filter bubble depends on who you are and what you do. But the thing is that you don’t decide what gets in, and more importantly you don’t actually see what gets edited out.

Eli Pariser discussing the filter bubble created by online algorithms.

Pariser later described the filter bubble as containing two parts:

Well in the talk, and even more in the book, there’s two pieces. One is the partisan echo chamber challenge, and the other is: do people get exposed to content about topics that are in the public sphere at all, or is it Miley Cyrus and cats all the way down?

I’m concerned with both of these, but I’m more concerned with whether I’m being exposed to different ideas, with escaping the echo chamber. It’s worth noting that I find Twitter to be more of an echo chamber than Facebook. I use Facebook to communicate with my friends and family; Twitter I use largely to follow and communicate with colleagues and people in my industry. On Twitter, I’m exposed to less content that I disagree with (and to less content that is new and interesting). That this is the case on Twitter is a bit scary, since it’s arguably a situation of my own making, and one that I’m trying to correct.

Fascinatingly, there is some research suggesting a physical analogue to the filter bubble: we’re living in mono-neighborhoods with people who share our outlook. This, along with Pariser’s filter bubble, seems to have an impact on our ability to listen to those whose opinions differ from ours. Ultimately, because we no longer practice the habit of compromise, our politics is increasingly polarized.

Pluralistic ignorance

In the process of examining the reactions of other people to resolve our uncertainty, however, we are likely to overlook a subtle but important fact. Those people are probably examining the social evidence, too. Especially in an ambiguous situation, the tendency for everyone to be looking to see what everyone else is doing can lead to a fascinating phenomenon called “pluralistic ignorance.” A thorough understanding of the pluralistic ignorance phenomenon helps immeasurably to explain a regular occurrence in our country that has been termed both a riddle and a national disgrace: the failure of entire groups of bystanders to aid victims in agonizing need of help.

Robert Cialdini, in Influence, describes pluralistic ignorance as one of the mechanisms that underlie social proof.

Social proof

The principle of social proof states that one means we use to determine what is correct is to find out what other people think is correct. The principle applies especially to the way we decide what constitutes correct behavior. We view a behavior as more correct in a given situation to the degree that we see others performing it. Whether the question is what to do with an empty popcorn box in a movie theater, how fast to drive on a certain stretch of highway, or how to eat the chicken at a dinner party, the actions of those around us will be important in defining the answer.

from Influence by Robert B. Cialdini

I’ve been thinking about social proof lately. I decided to actually pick up and read Cialdini’s Influence, which has been repeatedly recommended to me.

One of the things that occurred to me while I was reading the chapter on social proof is that it might be useful to make a distinction between phenomena that have been observed by psychologists and techniques that are used by marketing, sales and UX people. If the phenomenon is referred to as social proof, what should we call the use of social proof to try to change behavior? Cialdini offers some help here:

There are two types of situation in which incorrect data cause the principle of social proof to give us poor counsel. The first occurs when the social evidence has been purposely falsified. Invariably these situations are manufactured by exploiters intent on creating the impression—reality be damned—that a multitude is performing the way the exploiters want us to perform…

We need only make a conscious decision to be alert to counterfeit social evidence, and the smug overconfidence of the exploiters will play directly into our hands. We can relax until their manifest fakery is spotted, at which time we can pounce…

And we should pounce with a vengeance. I am speaking here of more than simply ignoring the misinformation, although this defensive tactic is certainly called for. I am speaking of aggressive counterattack. Whenever possible we ought to sting those responsible for the rigging of social evidence. We should purchase no products featured in phony “unrehearsed interview” commercials. Moreover, each manufacturer of the items should receive a letter explaining our response and recommending that they discontinue use of the advertising agency that produced so deceptive a presentation of their product.

Cialdini is particularly vitriolic here, but he has a point. Social proof works best when people are uncertain of what action to take. If we think people might be uncertain about our product or service, it’s worth asking ourselves whether we’re fabricating social evidence or simply highlighting it. Even then, it’s worth asking whether highlighting social evidence is the best course of action.

If we’re trying to create value or opportunities rather than change behavior, perhaps social proof isn’t the best way of addressing that uncertainty. Perhaps the best approach is to spend some time listening to the reasons for the uncertainty instead.

Use community

Like many people, I want less clutter and hassle in my life. I already have too much stuff I have to store, too many things I have to maintain and keep track of; I even have, I’ve decided, too much space… All of these things take up much of the time, energy and money I might otherwise apply to having the experiences I want in my life. I want an institutional tool for owning less and doing more.

Let’s call it a use community. Imagine a member-owned facility located in the heart of a dense urban neighborhood where I could not only access a tool library, a laundry room, a gym and a shared car, or what-have-you, but access a whole suite of services designed to outsource my responsibility for owning or buying things.

To an extent, Alex Steffen’s idea in 2007 for a use community has become the sharing economy in 2014. But the sharing economy is a misnomer. It’s largely about more efficient rental and not really about sharing at all. There’s nothing wrong with that, but it replaces community with an economy, which is a very different thing.

There do seem to be genuine use communities out there. Hackspaces, makerspaces and FabLabs are opening up all over the world. I’m curious to see if they provide more than just a service. I’m curious to find out if they provide a community, as well.

Interestingly, that tension between community and economy is very similar to—if not the same as—the tension that exists between civic republicanism and commercial republicanism.

First prototype

This was my first prototype. Yes, those are popsicle sticks, and those are rubber bands at the top. It took me 30 minutes to do this, but it worked. And it proved to me that it worked. And it justified the next couple of years on this project.

Nikolai Begg designed a device that fixes a hundred-year-old problem with laparoscopic surgery.

The entire TED talk is fantastic, but my favorite moment of the talk is when he shows his jury-rigged prototype. It’s a great example of testing an idea by making it happen as soon as possible.

Exam results

Research shows high-stakes testing can also produce unintended consequences that fall short of outright cheating. Daniel Koretz, the Henry Lee Shattuck Professor of Education at the Harvard Graduate School of Education and an expert in educational testing, writes in Measuring Up: What Educational Testing Really Tells Us, that there are seven potential teacher responses to high-stakes tests:

1. Working more effectively (Example: finding better methods of teaching)

2. Teaching more (Example: spending more time overall)

3. Working harder (Example: giving more homework or harder assignments)

4. Reallocation (Example: shifting resources, including time, to emphasize the subjects and types of questions on the test)

5. Alignment (Example: matching the curriculum more closely to the material covered on the test)

6. Coaching students (Example: prepping students using old tests or even the current test)

7. Cheating.

Anya Kamenetz, in her article on why the Atlanta testing scandal matters, lists these seven possible outcomes of high-stakes testing. She cites outcomes 1-3 as positives. My response was that only the first outcome is really a positive. I’m probably making too many assumptions, though.

My first assumption is that outcome four will always happen: a test will determine what is taught. So teaching more or making students work harder will only mean that they learn how to take the test, not necessarily that they learn what they need to. And yes, I know that “learn what they need to” opens a huge can of worms. Obviously, if the test is testing what they need to learn, then it’s a good thing.

My second assumption is that more teaching and more hard work are good only up to a point. My strong suspicion is that we passed that point a long while ago, and that the extra work forced on students is counterproductive.

These are both assumptions, and I have more to learn in this area, but I wanted to capture these assumptions while they were still fresh in my mind.

Perfect systems

If the Stasi was so well organised, why did Communism collapse?

In the Communist ideology, there’s no place for criticism. Instead, the leadership structures believed that socialism is a perfect system, and the Stasi had to confirm that, of course. The consequence was that despite all the information, the regime couldn’t analyze its real problems, and therefore it couldn’t solve them. In the end, the Stasi died because of the structures it was charged with protecting.

Hubertus Knabe on the fall of East German Communism.

I’m not sure how accurate his description is, but it’s an interesting idea. A system that assumes it’s perfect has no way of questioning itself, no way of turning back or correcting its course. If this is true, then the only possible outcome for a perfect system is collapse.