What inspired me to finally sit down and write this is the frankly ridiculous cover story in Newsweek: “Is the Internet Making Us Crazy?” It’s as good an example as you’ll find of a piece that leans on anecdotal evidence (‘one guy had a meltdown after he became internet-famous!’) and infers causation from correlation (‘people who check their iPhones a lot tend to be more compulsive!’). I’m not going to go into detail on it specifically, but if you’re interested you can read Maia Szalavitz and Vaughan Bell on what the research really says, and this satirical response by Alexis Madrigal.
Instead, I’m just going to share a few beliefs I’ve developed and refined over the years. I use them as lenses when reading, observing, and thinking about how technology is changing our world (or not), and I find they help keep me grounded in debates about whether technology is making the world the best or worst ever.
1. The internet is real.
It seems absurd to say — who doubts the internet is real? — but for some reason critics keep separating the internet from “real life.”
[Update: I should've been on top of these links: shortly after I wrote this I noticed an uptick in discussion about "digital dualism," a term Nathan Jurgenson coined earlier for this false separation of physical and digital aspects of life. Zeynep Tufekci has also made good arguments that online and offline experiences augment rather than oppose each other, and Jurgenson came through again with a great read on the fetishization of physical experiences.]
Of course the internet is “separate” from a lot of things. Things happen on the internet that don’t happen in other contexts. We have online experiences that don’t transfer offline. But this isn’t really different from, say, having a “work life” and a “family life” and other distinct spheres of experience, each with their own sets of relationships, activities, and norms embedded within them. This is especially true of ‘thinner’ contexts, like our daily commute, or clubs we belong to. We develop relationships within those contexts that rarely extend to our broader sense of the world or ourselves. I see and sometimes talk to a lot of the same people when I get coffee every morning; we only see thin slices of each other and wouldn’t say that we “really” know each other, but the difference between Morning Coffee Me and The Real Me is one of degree, not of kind.
The internet is (or at least was) one of those thin slices of life, but increasingly it’s a cross-section that can encompass or integrate every other context, most obviously as a unified calendar and Rolodex that helps us manage connections across multiple contexts no matter which one we happen to be in at a given time.
People staring at their iPhone on the train or in line for coffee are probably staying in touch with things in other aspects of their life — doing work, messaging friends, staying current on something that interests them (information which becomes the substance of conversations and real relationships) — more than they’re “escaping” the reality around them [paraphrasing a forgotten source]. And I’m not even going to start listing all the ways that my online presence has enabled and enriched my personal and professional relationships and improved my life in general.
2. Technology makes pre-existing human behaviour seem new by making it more evident.
[There are two sides to this belief: here I deal mainly with biases, but I also think about this a lot in terms of appreciating emerging opportunities, starting with developing better metaphors. I might deal with the opportunity side later -- designing innovations to be continuous with human universals and established behaviour, "paving cow paths," etc -- but for now I'm focused more on interpreting changes that are already happening.]
Whether we’re talking about one person’s behaviour or human behaviour in general, digital technologies inherently make it easier to recognize, measure, and make reference to things. For example, if I exchange some instant messages with you, the details of that chat are documented: we can refer back to it later, other people can potentially refer to it, we can use software to analyze it, and over time we can measure the average frequency and duration of our chats with various people. And sometimes things become so strikingly apparent when we refer back to and analyze this digital “evidence” that we don’t think about all the subtle ways we’ve already been saying, doing, and thinking almost the same things in different forms.
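To make the measurement point concrete, here's a minimal sketch of what that kind of analysis looks like. The data format and function name are my own invention, not any real chat client's export format — the point is just that once conversations are timestamped records, questions like "how long do we usually chat?" become a few lines of arithmetic:

```python
from datetime import datetime

# Hypothetical chat log: (contact, session start, session end) tuples.
# A real messaging client would export something richer; this is only
# enough to illustrate the kind of measurement the text describes.
chat_sessions = [
    ("alice", datetime(2012, 7, 1, 9, 0),  datetime(2012, 7, 1, 9, 12)),
    ("alice", datetime(2012, 7, 3, 14, 0), datetime(2012, 7, 3, 14, 30)),
    ("bob",   datetime(2012, 7, 2, 20, 0), datetime(2012, 7, 2, 20, 5)),
]

def average_duration_minutes(sessions, contact):
    """Mean chat length, in minutes, with one contact."""
    durations = [(end - start).total_seconds() / 60
                 for who, start, end in sessions if who == contact]
    return sum(durations) / len(durations)

print(average_duration_minutes(chat_sessions, "alice"))  # 21.0
```

Nothing about this is new in principle — we've always had rough intuitions about who we talk to most — but the digital record makes the intuition checkable.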
Consider memes. Ideas have always replicated and mutated virally from person to person, but now that the process occurs through digital media we can observe exactly how it happens (e.g. tracing a joke back to its source), and we have unambiguous points of reference that we can measure and have somewhat sensible conversations about. That’s not to say there aren’t also significant differences — e.g. that ideas can now spread globally within minutes — but too often I think critics neglect to consider the possibility that we’re merely seeing behaviour that existed for a while in more ambiguous forms.
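"Tracing a joke back to its source" is literally a graph walk once sharing is recorded. Here's a toy sketch — the data structure and post names are hypothetical, standing in for the reshare links a real platform stores:

```python
# Hypothetical share graph: each post records the post it was shared from.
# Posts absent from the dict (like "post_a") are originals.
shared_from = {
    "post_d": "post_c",
    "post_c": "post_a",
    "post_b": "post_a",
}

def trace_to_source(post, graph):
    """Follow share links backward until we reach an original post."""
    while post in graph:
        post = graph[post]
    return post

print(trace_to_source("post_d", shared_from))  # post_a
```

Word-of-mouth jokes followed the same chain structure; the chain just wasn't written down anywhere we could walk it.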
Reconsider my previous points about how our lives and relationships are divided into separate contexts — and then consider how social networking platforms like Google+ render the fuzzy boundaries between those contexts into clear and distinct categories (i.e. “circles”). It might feel a bit cynical or perverse to divvy our relationships up into different buckets like that, but we’ve always intuitively organized and prioritized the many people in our lives.
Now, I think there are legitimate concerns about the effects of that kind of objectification (i.e. once we define a boundary it might become harder to change, it might turn around and compel us to behave differently, having to think about whether to categorize someone as a “friend” on a drop-down menu might affect our feelings toward them, etc). But a lot of technology criticism seems to reflect what basically amounts to xenophobia, or at least a lack of empathy for (and even a reluctance to recognize the existence of) people whose well-being could be genuinely improved by precisely these changes which traditionalists find so disturbing.
I don’t use the word “xenophobia” lightly here, as we’re learning to appreciate more internal dimensions of human diversity. Different people have different levels of tolerance or need for personal interaction, excitement, intimacy, isolation, aesthetic experience, intellectual rigour, novelty, risk, conflict, autonomy, etc, and what traditionalists perceive as a decline in desirable behaviour and a rise in undesirable behaviour is experienced by others as overall improvement and in some cases personal salvation from misery. As Szalavitz pointed out in her response to the Newsweek piece, while research shows that depressed people are more likely to use the internet, there’s also research suggesting that technology is more a solution for people’s anxiety and depression than it is a cause.
Although we might reasonably ask whether what some people feel are improvements are legitimate vs. merely perceived and possibly harmful (because we do lots of things that feel good but aren’t good for us), it bothers me when critics assume that technology-enabled changes are new and negative by default, without displaying much empathy or curiosity (e.g. anyone who’s ever denounced Twitter or Facebook without experiencing them the way they’re experienced by people who actually use them).
3. We’ll continue to look for ways to overcome any problems caused by technology.
We’ve always been explorers and inventors. Even if things are as bad as some critics say, we’re not going to take these problems sitting down. Right now there are people developing alternatives and even newer technologies and ideas that will continue to “rewire our brains” in hopefully more fulfilling ways. If you’re skeptical about a new technology, chances are that others have already noticed the same issues. In the time it takes to write a book about how the internet is the end of all of your favourite things, lots of other people have already made progress toward solutions or alternatives.
The first example that comes to mind is the movement toward fewer distractions online. In the amount of time it took Nicholas Carr to conceive, write, and promote The Shallows, other people built Longreads, Instapaper, The Atavist, and other applications and communities that address the challenges Carr discussed.
I think of this especially when critics complain that an emerging tech-enabled activity is pointless, “mind-numbing” or existentially hollow (e.g. playing Farmville and tweeting about breakfast). If an activity is genuinely unfulfilling, most people will stop doing it once the novelty wears off and better options come along. And remember, referring back to my second belief, that sometimes a change isn’t so much new as it is newly noticed. As others have pointed out, mundane topics of conversation certainly aren’t new, and even the oft-maligned breakfast tweets have genuine social value (Kevin Marks describes these as “phatic” gestures). Banality is simply more evident now that we’re documenting it instead of just filling large portions of our face-to-face and telephone interactions with it.
Some critics write as if people are helpless automatons who’ll play Angry Birds all day, mindlessly clawing away like beetles turned on our backs unless a clever journalist or wise English professor comes along to flip us around and peddle us in the right direction (I know they probably don’t actually believe that; it’s probably just that they’re trying to sell books and magazines, and asking if the internet makes us stoopid is a pretty effective way to do that).
But recent history shows again and again that the market isn’t permanently seduced by sugar highs; eventually someone figures out how to pull people toward richer activities that help us build our relationships, well-being, and sense of real accomplishment in the world (i.e. accomplishments requiring unique combinations of acquired knowledge and skills). Guitar Hero and Rock Band came and went without diminishing people’s interest in learning to play real instruments, for example, and people continue to buy software like GarageBand that empowers them to make their own music instead of merely pushing buttons on command.
Always bet on a critical mass of people to resist “addiction” in favour of autonomy, discovery, and creation of new possibilities.
[I still need to articulate some more thoughts on the ethical implications of technologies in areas that involve more questions about human rights. Here I'm focused more on human capacities. And obviously these "three things I believe" won't be the only things I believe.]
4. Bonus! It’s still important to be critical.
Being critical about technology-enabled cultural change helps us get a better grasp of what doesn’t change, it helps us develop our vocabulary about progress, it helps us recognize better opportunities, and it helps us develop more effective plans and prototypes for those opportunities sooner. And of course, once in a while it might help us mitigate risk and avoid some awful pitfall or problem of path dependency.
But we must balance criticism with open-mindedness, curiosity and empathy. And while we’re at it, we must turn around and criticize any argument that seems based on the assumption that we should avoid change by default. Because to pretend that we can keep things the way they are is every bit as utopian and unrealistic as the grandest futuristic fantasies.