This is from Connections, James Burke’s documentary television series produced by the BBC in 1978, on how technology and change happen. It’s a personal account of how we got to now, of how ideas spread and technology evolves; overall, I think what Burke does well is show how everything is connected. Throughout Connections, knowledge is analysed foremost as a distributed system within a community, rather than as a personal asset (something that I, as an individual, do or don’t have). In Burke’s view, then, progress happens when a new detail of reality becomes widely known to a group of people, to one civilisation.
His point is also that — as we add layers of technology to our society — it becomes impossible for each and every one of us to have a solid understanding of how everything works. Knowledge has to be distributed, by necessity. In our everyday interactions — when we open the tap, flush the toilet, flip the switch — we don’t have to think about how something works: it just works. The functioning of the systems which support these technologies is abstracted away from us.
As a result, we’re mostly clueless: we move between abstractions, failing to notice the model, unaware of the complexity of the network we built. To say it differently: any mature technology eventually recedes to the state of nature, to background, to part of the environment and of how things are, unquestioned and taken for granted. What Burke also seems to say is that although our strength — our ability to survive and adapt — derives from technology, the complexity that technology has introduced over time has reduced (if not removed) our individual ability to survive.
I sympathise with this argument — it’s why I was never charmed by the escape to the pond kind of literature. It’s naive at best to believe that at this stage any single one of us is not totally reliant on the layers of technology (water supply, electricity, and so on) that we put in place and on the outsourcing of the knowledge required to keep them running.
Which is another way of saying that reality has an infinite amount of details, most of which we’re unaware of. As soon as we look closely into something, we realise the stark vastness of our ignorance. It might be a useful thing to remind ourselves of, before entertaining any dream of self-sufficiency outside of society.
Some people think Google has stopped indexing old parts of the web. Even supposing that’s not the case, that there isn’t any memory loss going on, it seems to me that in recent years Google has tweaked its ranking to give more prominence to what’s hot and trending, the new over the old. The first page of a Google search is more often than not just a collection of news articles. Smaller design choices — such as the news carousel featured at the top of most searches — have put more weight on recency than accuracy, on articles about the latest developments of a situation rather than less noisy sources.
One other major shift in how Google views itself happened around the time voice assistants entered our lives — since they need to return straight answers to be useful, not a set of options. That’s when Google started using machine learning to return direct answers to queries. It works when the answer is factual, such as the height of a mountain or the distance to a place. It’s less trustworthy when asked about an event or a situation, seeing that it seems to return whatever is popular at the moment.
That’s saddening, even if no forgetting is happening. This focus on novelty over knowledge diverges from my mental model of the Web: as Tim Bray writes, a permanent, long-lived store of humanity’s intellectual heritage.
Speaking of how the model of reality which a technology proposes can end up influencing and changing reality itself, here’s George Dyson:
Their models are no longer models. The search engine is no longer a model of human knowledge, it /is/ human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.
Maps make for a good example here. We’re all aware that the Mercator projection is an inaccurate model of reality, one which distorts the true size of countries and is skewed in favour of Europe; nonetheless, that’s what we use to describe the world. It’s what we think of when we think of a map: the default, almost natural, choice.
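The distortion is easy to quantify. On a Mercator map the linear scale grows roughly as the secant of the latitude, so areas are inflated by its square relative to the equator. A minimal sketch (the place names and latitudes are just illustrative picks, not from the text above):

```python
import math

def mercator_area_scale(latitude_deg):
    """Approximate area inflation of the Mercator projection at a latitude.

    Linear scale grows as sec(latitude), so areas are inflated
    by sec(latitude) squared relative to the equator.
    """
    phi = math.radians(latitude_deg)
    return 1.0 / math.cos(phi) ** 2

# Illustrative latitudes: the equator, central Europe (~50° N),
# and Greenland's interior (~72° N).
for place, lat in [("Equator", 0), ("Central Europe", 50), ("Greenland", 72)]:
    print(f"{place:>15}: areas drawn ~{mercator_area_scale(lat):.1f}x too large")
```

Land near 72° N ends up drawn roughly ten times larger than the same area at the equator, which is why Greenland looks comparable to Africa on the default map.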
The risk here is for a model to become so ingrained that we end up forgetting about the other options we had — or that what we’re using is, in fact, just a model.