There is no right answer

Education should be an education in how to live in this kind of world. And so it should be an education in how you work within constraints, on defined questions that you don’t know the answer to—and there will be more than one answer to—in collaboration with other people, and in how you learn to make things together. And you produce things, and you do it in the real world as much as in the classroom. That’s what education should be.

Of course it should also be about learning English and maths and doing it through geography and history. Of course it should be that, and you need some element of structure. But actually we need to educate entire generations of people to go and look for problems and opportunities and collaborate to solve them.

And the trouble is that we don’t have any political leadership that is willing to speak that truth, because they want to speak a language of safety, caution and education as a sixteen-year apprenticeship in diligently coming up with the right answer at the right time. How is that going to help? How is that going to help in this world? There is no right answer. There are new problems the whole time. You have to find new collaborators. You have to think in fresh ways. You have to make and fail and try again. That’s that world. That’s the world they’re already in.

If you judge the education system by whether it did any harm, then we’d really be failing, it seems to me.

Charles Leadbeater’s response to a question about education during his brilliant talk on the frugal innovator. He asserts that we treat education largely as a matter of finding the right answer at the right time, and this was certainly the case with much of my own education.

I certainly think I would have benefited from an education that provided me with many stories rather than a single correct answer. One that didn’t expect a right answer, but expected me to use a set of tools to find the best possible solution given my resources, context and situation.

It’s what I’ve spent much of my adult life doing, and I feel that my education left me poorly prepared for the real world. In fact, I feel that I’ve spent much of my adult life unlearning what I was taught in school.

Radicants

I think [innovators in the developing world] build kinds of organizations which are really quite different and they scale in a different way. And they are radicants. Radicants are plants like strawberries. So a strawberry does not grow from a central root. A strawberry grows by putting down roots as it grows and drawing on local resources. So, they’re more like radicants; they’re more like strawberries. They grow in this way that puts down roots and then draws on resources from where they’re growing to sustain themselves, because they don’t have a big, strong central core. That can be a weakness, but it can also be a strength.

From Charles Leadbeater’s RSA talk on the frugal innovator.

This reminded me of two things I continue to be interested in. The first is Russell Davies’ idea of ruricomp; the second is makerspaces and community supported agriculture. I’m not (yet) a member of a makerspace, but we’ve recently joined Farnham Local Food. It’s an impressive local community that we’re just getting to know.

A step in the right direction

And a step backward, after making a wrong turn, is a step in the right direction. ― Kurt Vonnegut

From Player Piano, this is quite possibly my favorite quote of all time. It’s a reminder that progress is rarely relentlessly forward. There are missteps and backtracking. There are promising leads that turn into dead ends. All of this is progress, though, if you have the courage to turn back rather than continuing on a route that isn’t going to get you where you want to be.

(Image courtesy of Will Thomas)

Technology precedes understanding

Engineering was the key. The Wright brothers functioned as engineers, not as scientists. Science, the drive to understand the ultimate principles at work in the universe, had little to do with the invention of the airplane. A scientist would have asked the most basic questions. How does the wing of a bird generate lift? What are the physical laws that explain the phenomena of flight?

The answers to those questions were not available to Wilbur and Orville Wright, or to anyone else at the turn of the century. Airplanes would be flying for a full quarter century before physicists and mathematicians could explain why wings worked.

How is it possible to build a flying machine without first understanding the principles involved? In the late twentieth century, we regard the flow of technological marvels from basic scientific research as the natural order of things. But this relationship between what one scholar, Edwin Layton, has described as the “mirror image twins” of science and technology is a relatively new phenomenon. Historically, technological advance has more often preceded and even inspired scientific understanding.

pp. 174-175, The Bishop’s Boys: A Life of Wilbur and Orville Wright by Tom D. Crouch

This is something I’ve often wondered about: whether it was possible for a technology to be based on an inaccurate model. When I’ve asked friends about this, often over a pint in the pub, they’ve looked at me as if I was crazy.

If Tom D. Crouch is to be believed, the scientific models that the Wright Brothers based their plane on were not inaccurate; they simply didn’t exist.

This is not to disparage science. A better understanding of why wings work has led to better, faster and safer airplanes.

What interests me here is the similarity between this and Don Norman’s claim that technologies precede our need for them. The Wright Brothers, Wilbur in particular, certainly lend credence to Norman’s statement that “technologists invent things, not sometimes because they themselves dream of having their capabilities, but many times simply because they can build them.”

The goodness/adoption paradox

The goodness/adoption paradox surfaces if, for fun, we separate goodness (from the expert’s point of view) from the factors that drive adoption. From the expert point of view, better technologies existed for publishing and networking than Tim Berners-Lee’s Web. Ted Nelson and Doug Engelbart talked about and demoed them for decades. But those “better” ideas were demanding in ways that would have raised barriers to adoption in 1991. At best they would have cost more to build and taken more time to engineer. We can’t know whether those additional barriers would have prevented the Web from succeeding or merely changed its ascension. It’s also possible these alternative web designs might have had advantages that Berners-Lee’s Web didn’t have, which would have positively impacted adoption.

As I was watching and writing about Bill Buxton’s talk on ubiquitous computing, I had Scott Berkun’s idea of the goodness/adoption paradox from The Myths of Innovation in the back of my mind. I couldn’t quite put my finger on why, but I now think it was the idea of seamless transitions. In Buxton’s narrative of getting out of a car, the transition from voice control to a user interface was certainly seamless. Another example I can think of is having a thermostat change the temperature of your house as you get closer to home (PDF).

But I can think of other examples that aren’t so seamless, but do help ease transitions. Netflix, for example, lets me pick up where I left off when I move from my phone to my computer. Dropbox helps me move files between devices and platforms with ease. While both of these transitions are better than they were before, I don’t think that either of them is seamless. On the contrary, I’d say they are decidedly seamful. In fact, I’d say that Dropbox places itself right along the seam between different platforms. I’d also say that its seamfulness is exactly what makes it superior to a seamless solution like iCloud. Dropbox makes it absolutely clear what is going on: it exposes the seam. With iCloud, syncing is so seamless that I have no idea what is backed up (or indeed how I recover that data when I upgrade my device).

These aren’t seamless transitions. Perhaps they’re not seamless because they’ve placed themselves where there was a gaping hole previously. Perhaps that will change, but I still think there is something to be said for the idea of seamfulness. I don’t think it’s the same as an intrusion. While the transition in the case of both Netflix and Dropbox may be obvious, it certainly doesn’t feel intrusive to me.


Carbolic-acid remedies

This has been the pattern of many important but stalled ideas. They attack problems that are big but, to most people, invisible; and making them work can be tedious, if not outright painful. The global destruction wrought by a warming climate, the health damage from our over-sugared modern diet, the economic and social disaster of our trillion dollars in unpaid student debt—these things worsen imperceptibly every day. Meanwhile, the carbolic-acid remedies to them, all requiring individual sacrifice of one kind or another, struggle to get anywhere.

I’ve already written about Atul Gawande’s superb essay on slow ideas, but I wanted to pull one or two more ideas out from the essay. The first is his idea of a carbolic-acid remedy.

It comes from the first question he asks in the essay. Why did anesthesia catch on so quickly when antiseptics took much longer? Both came about at roughly the same time, there were economic incentives for both, and both were equally difficult to implement. Gawande’s conclusion is that antiseptics solved an invisible problem: germs. Anesthesia, on the other hand, solved a very visible problem: the patient’s pain during surgery. He also points out that antiseptics were painful for doctors. Carbolic acid often burned doctors’ hands, and they had to operate in a shower of the stuff. Anesthesia was completely pain-free for the doctors who used it.

This is the dual nature of “carbolic-acid remedies”: they solve an invisible problem and are painful to implement. As Gawande points out, many of the problems we currently face (global warming, poor nutrition, the banking crisis) are of this nature. They are invisible problems and the solutions to them are not easy. I’m not entirely sure if Gawande’s mentorship-based solution is the answer to all of these problems, but it’s worth considering. I also like the term “carbolic-acid remedy” as a reminder of the challenges that these types of problems raise.


Mentoring innovation

To create new norms, you have to understand people’s existing norms and barriers to change. You have to understand what’s getting in their way. So what about just working with health-care workers, one by one, to do just that? With the BetterBirth Project, we wondered, in particular, what would happen if we hired a cadre of childbirth-improvement workers to visit birth attendants and hospital leaders, show them why and how to follow a checklist of essential practices, understand their difficulties and objections, and help them practice doing things differently. In essence, we’d give them mentors.

Atul Gawande has written a fascinating account of the difficulties faced when trying to spread ideas and change behavior. For Gawande, behavior change is crucial because it means saving lives. After reviewing a number of medical innovations and considering why the ideas behind them did or didn’t spread, he moves on to the BetterBirth project and how they are approaching this problem. Their solution is effectively mentoring: spreading ideas one person at a time.

There are four aspects of the mentorship program that I’d like to briefly discuss.

Innovation is not synonymous with high technology

The first is that not all innovations are technological. Gawande gives the example of infant hypothermia.

We’re infatuated with the prospect of technological solutions to these problems—baby warmers, say… [E]ngineers have produced designs specifically for the developing world. Dr. Steven Ringer, a neonatologist and BetterBirth leader, was an adviser for a team that made a cheap, ingenious, award-winning incubator from old car parts that are commonly available and easily replaced in low-income environments. Yet it hasn’t taken off, either. “It’s in more museums than delivery rooms,” he laments.

As with most difficulties in global health care, lack of adequate technology is not the biggest problem. We already have a great warming technology: a mother’s skin. But even in high-income countries we do not consistently use it.

You can quibble with whether kangaroo care is an “innovation” or not. According to Gawande, this and other changes could “save thousands of lives.” If that’s true, it certainly meets the Berkun definition of innovation: “significant positive change.”

Innovation is social

The second idea I’d like to pull out is that innovation is deeply social. New technologies and ideas spread through people talking to one another. Here, Gawande cites the work of Everett Rogers.

“Diffusion is essentially a social process through which people talking to people spread an innovation,” wrote Everett Rogers, the great scholar of how new ideas are communicated and spread. Mass media can introduce a new idea to people. But, Rogers showed, people follow the lead of other people they know and trust when they decide whether to take it up. Every change requires effort, and the decision to make that effort is a social process.

The social aspect of innovation is the essence of Bruce Nussbaum’s reply to Donald Norman. While Calthorpe talks about technology and design solutions, social solutions are often worth considering as well.

Understanding context

The third point I’d like to consider is the emphasis Gawande places on understanding the context of the mentorees. He mentioned it in the quote I started this blog post with. I think it’s essential to the program that they have put together. This comes across in the story he tells about spreading the word about oral hydration as a treatment for cholera in Indian villages.

Eventually, the team hit upon using finger measures: a fistful of raw sugar plus a three-finger pinch of salt mixed in half a “seer” of water—a pint measure commonly used by villagers when buying milk and oil. Tests showed that mothers could make this with sufficient accuracy.

Without understanding the context in which the villagers lived, a much more expensive solution would have been put in place—measuring spoons with the recipe printed on them. This solution would also have been ineffective, as many of the villagers would not have been able to read the recipes.

Learning by doing

The final point I wanted to pull out was that the mentors didn’t just offer advice, but found that people learned best through doing.

The field workers soon realized that having the mothers make the solution themselves was more effective than just showing them…

Coaxing villagers to make the solution with their own hands and explain the messages in their own words, while a trainer observed and guided them, achieved far more than any public-service ad or instructional video could have done.


Innovation vs invention

It is often said that invention is not innovation and I believe it. Invention has to have socio-economic value to become innovation. It has to be socialized or else it sits in the lab. Xerox Parc was famous for the huge number of digital inventions that never became innovations until people outside Xerox connected them to what people wanted in a PC. Dean Kamen’s Segway is a great invention still waiting for socialization to become an innovation that adds value to people’s lives. The entire Japanese robot technology industry is an example of invention that is not innovation because outside the labs, there is no use for them (unlike the lowly iRobot Roomba which does something useful–it cleans our floors).

Bruce Nussbaum’s reply to Don Norman from a few years ago. This is part of what I was trying to get at with my second point. “Conceptual breakthroughs” aren’t an innovation until they have an impact on society, until they “change our lives.” Invention vs innovation is a helpful distinction.


Technology precedes needs

New conceptual breakthroughs are invariably driven by the development of new technologies. The new technologies, in turn, inspire technologists to invent things, not sometimes because they themselves dream of having their capabilities, but many times simply because they can build them. In other words, grand conceptual inventions happen because technology has finally made them possible. Do people need them? That question is answered over the next several decades as the technology moves from technical demonstration, to product, to failure, or perhaps to slow acceptance in the commercial world where slowly, after considerable time, the products and applications jointly evolve, and slowly the need develops.

Several years ago, Don Norman made a convincing argument that technology precedes our need for it. Rereading this article now, a few things strike me.

The first is that he is talking specifically about design research and its outcomes. Much of the discussion of the article came from a much broader audience which reacted as if he was speaking of design as a whole. There are some designers that I’d argue fall into the category of “technologist” in the sense that Norman uses it: “The new technologies, in turn, inspire technologists to invent things, not sometimes because they themselves dream of having their capabilities, but many times simply because they can build them.” That sounds like a lot of “designers” I know. Perhaps we’d call them “makers” these days (and perhaps that’s a better word).

The second thing that struck me is how long it took me to get my head around the way he used the word “innovation”. Early in the article, he makes a distinction between two types of innovation: “conceptual breakthroughs” and “incremental improvements.” He then goes on to speak of “revolutionary innovations” and “grand, breakthrough innovations”, which I assumed were the same as “conceptual breakthroughs.” On rereading the article I’m not sure this is the case.

The grand, breakthrough innovation is what professors love to teach their students, love to write about, and to discuss. But not only is it rare, even the occasional brilliant concepts are difficult to pull off. Yes, it is exciting to contemplate some brand new concept that will change people’s lives, but the truth is that most fail. The failure rate has been estimated to be between 90 and 95%, and I have heard credible, data-based estimates as high as a 97% failure rate.

Here, it seems to me that it is the “conceptual breakthroughs” that fail. What interests me is his mention of “changing people’s lives.” He mentions it again later in the article:

Major innovation comes from technologists who have little understanding of all this research stuff: they invent because they are inventors. They create for the same reason that people climb mountains: to demonstrate that they can do so. Most of these inventions fail, but the ones that succeed change our lives.

Once again, these seem to be the “conceptual breakthrough” innovations (“inventions”). Some fail and others succeed and “change our lives.” What I find interesting here is that I don’t consider either the “incremental improvements” or the “conceptual breakthroughs” to be innovations. Innovations have an impact on people and the society they live in. I don’t think we’re actually talking about innovation until Norman starts talking about the major / grand / revolutionary innovations.

It might seem that I’m splitting hairs here, and maybe I am. But it took me several readings of the article to unpick the difference between Norman’s three kinds of innovation (“incremental improvements”, “conceptual breakthroughs/inventions” and “change our lives innovation”). I’m currently reading Scott Berkun’s The Myths of Innovation, and I like his straightforward definition of innovation: “Innovation is significant positive change.” Even better, I like the fact that he encourages people not to use the word at all. So in that spirit, here’s my understanding of what Norman is saying:

A technology is created. Someone (a technologist/inventor) has a conceptual breakthrough and invents a use for that technology. It is turned into a product and released into the market. The product may fail. If it doesn’t, it just may end up changing our lives. We won’t know we needed the technology until well after it’s been invented. Once we recognize that we need the technology, design researchers come along and research those needs and help to incrementally improve the product. (Or to use Norman’s tidy summary: “technology first, invention second, needs last.”)

The final thing that struck me on rereading the article was the discussion of “incremental improvements.”

Revolutionary innovation is what design companies prefer, what design contests reinforce, and what most consultants love to preach. But if you examine the business impact of innovation, you will soon discover that the most frequent gains come from the small, incremental innovations, changes that lower costs, add some simple features, and smooth out the rough edges of a product. Most innovations are small, relatively simple, and fit comfortably into the established rhythm and competencies of the existing product delivery cycle.

It sounds to me as if Donald Norman is describing a local maximum problem. The proposed solution to a local maximum problem is getting out of the building and talking to our customers, i.e. design research. Norman does not think this is going to have as much of an impact as we’d like to think:

But the real question is how much all this helps products? Very little. In fact, let me try to be even more provocative: although the deep and rich study of people’s lives is useful for incremental innovation, history shows that this is not how the brilliant, earth-shattering, revolutionary innovations come about.

I’m not writing this blog to convince myself (or anyone reading it) of anything, but to explore the assumptions that underpin the way I live and work. And so, I’d like to leave this as a dangling question for the moment. Can design research go beyond the incremental improvements that attain to the uninspiring heights of a local maximum? Can it lead to significant positive change? I’m not sure of the answer. I’m not even sure it’s the right question. I suspect the right question might have more to do with creating more value than you capture.


The myth of methodology

The myth of methodology is, in short form, the belief that a playbook exists for innovation and… it removes risk from the process of finding new ideas. It’s the same wish that fuels secret lists for time-saving gadgets, tasty but low-fat meals (ha), and five-step programs for . And like other myths, this sells faster than truth, explaining the films, novels, and infomercials that play on it.

Scott Berkun, describing one of the myths of innovation.

I’ve recently been getting frustrated with books that offer a methodology with little discussion of the context in which the methodology was developed and tested. The books seem to be written with the expectation that you will follow the methodology to the letter.

At this point in my career, I’m often annoyed by this kind of methodology from on high. I’m much more interested in the details and the context. I’m interested in stories about what happens when people actually get their hands dirty.

But earlier in my career, I think these were exactly the types of books I sought out. Berkun suggests that methodologies reduce risk. In my case, I think I was looking for a way to get started. These books provided a framework, something that I could copy. This almost never worked out. Something almost always went wrong. I realize now that this is OK. You take a methodology, attempt to implement it, expose its flaws, and tweak it for the situation that you find yourself in. Along the way you’re developing the knowledge and skills that you need to evaluate the next methodology that comes along.