Nightmares of Our Own Making
By Matt Popke, in response to Tilt West’s Roundtable, Computer Lib/Nightmare Machines: Technology’s Impact on Cultural Communities
“A powerful global conversation has begun. Through the Internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter — and getting smarter faster than most companies.”
That’s from The Cluetrain Manifesto, a business text published in 2000 by Rick Levine, Christopher Locke, Doc Searls, and David Weinberger as an expanded version of the “95 theses” the authors had released a year earlier on the web. The book was a huge bestseller, and it ended up being very influential in the tech boom of post-bubble Silicon Valley. It’s riddled with quotable tidbits like the one above. One of my favorites is this: “[The Internet] undermines unthinking respect for centralized authority, whether that ‘authority’ is the neatly homogenized voice of broadcast advertising or the smarmy rhetoric of the corporate annual report.” That is what many of us genuinely believed almost twenty years ago.
I’ve been thinking about Cluetrain a lot lately. The book, like much of the popular thinking of the time, is filled to the brim with unrealistic technocratic optimism and happy predictions that have failed to come true. One of the core premises of the text is that networked technology will fundamentally alter human culture, and that these changes will liberate, educate and unite people into “markets” engaged in “conversations”. We’ll all form affinity groups online and self-organize into humanist-oriented communities of mutual support, keeping each other safe from the manipulative misinformation of the elitist broadcast era that came before the birth of the populist network.
Nearly twenty years later, it’s a little hard to take Cluetrain and its many contemporaries seriously. Markets certainly do not seem to be getting smarter faster than the companies which exploit them. Social media companies promote extreme, polarizing content to increase vaguely defined engagement metrics. Search engines tailor query results to match our profile, narrowing rather than expanding our worldview. Political campaigns have begun using online advertising and social media promotion to knowingly spread disinformation to audiences believed to be vulnerable to such tactics in advance of increasingly contentious elections.
And it is important to remember that all of this was built on the twin premises that:
A. Advertisers will pay more for ads if they think the audience is more receptive to their message.
B. Regular people want ads that are more relevant to their interests rather than, say, fewer ads in their lives or no ads at all.
Neither of these two premises seems to be true, or at least not true to any meaningful degree. Online advertising is still quite cheap compared to broadcast advertising in spite of its targeted nature. And I don’t know anyone who deeply appreciates the deluge of ads they find in their social media feeds, email inboxes, search engine results, freemium apps, news websites, and almost anywhere else you care to look online. Nor is there particularly convincing evidence to indicate that targeted online advertising actually works any better than old-fashioned broadcast advertising.
And yet, we have built the most massive surveillance network in human history to support targeted advertising as a business model. This network of surveillance is entirely owned by private companies with little to no obligation to protect public safety or promote public benefit, and there is no reason to believe that anyone other than them benefits from its existence.
And that’s just one issue we find ourselves worrying about now. We also worry about addiction to new media. We worry about the effects of constant distraction on our mental and civic health. We worry about the constant drive toward greater efficiency that has altered and often dehumanized many of our jobs. We worry about the automation of those jobs as robotics and A.I. continue to advance. The tools for liberation, education and unity that were promised to us in the late 90s have turned into sources of anxiety, confusion and division in the present day.
How do cultural organizations like museums participate in online culture without falling victim to its perils? Perhaps more importantly, how do we utilize the tools of the network without participating in the systems that victimize our visitors? Why do we use Facebook tracking codes and Google Analytics on our websites? Have we ever asked ourselves, or do we just go along with “industry best practices” that were defined for the for-profit sector? Do we have any idea how useful these tools actually are to us, or do we assume their utility based on general consensus? Do we even know if alternatives exist?
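Alternatives do exist, and they can be strikingly simple. As a minimal sketch (not any particular museum’s implementation, and deliberately toy-scale), here is what first-party analytics could look like if all we actually need is to count “clicks”: a log that records a visited path and a timestamp, with no cookies, no user profiles, and no third party in the loop.

```python
from collections import Counter
from datetime import datetime, timezone


class PageViewLog:
    """A deliberately minimal first-party analytics store.

    It keeps only the visited path and a timestamp -- no cookies,
    no user identifiers, and nothing shared with a third party.
    """

    def __init__(self):
        self._events = []  # list of (path, utc_timestamp) tuples

    def record(self, path: str) -> None:
        """Record one page view by path, stamped with the current UTC time."""
        self._events.append((path, datetime.now(timezone.utc)))

    def counts(self) -> Counter:
        """Aggregate views per path -- the 'clicks' the essay mentions."""
        return Counter(path for path, _ in self._events)


# Example: three views across two pages.
log = PageViewLog()
log.record("/exhibitions/current")
log.record("/visit")
log.record("/exhibitions/current")
print(log.counts())
```

The point of the sketch is not that every museum should write its own analytics, but that the gap between this and a full surveillance network is enormous, and everything in between is a choice someone makes.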
In all of the talk of museums learning from the for-profit sector, do we ever stop to ask if the behaviors of the companies we imitate align with the principles of the organizations we serve? Is it normal for us to just adopt tools and behaviors from the for-profit sector without such analysis, or is this unique to the complicated and often confusing world of technology?
Who in the average museum has the time to analyze and examine these practices? Whose role is it to question them? Who decides whether it violates the spirit of our mission to participate in a vast private surveillance network in order to count “clicks”? And again, this is just one issue of many related to technology. How much technology should be deployed in our spaces? What purpose should it serve? Should museums offer a refuge from the encroachment of digital technology in our lives? Or should they engage in the conversation around these issues, providing a space for people to learn, think and talk about the role of technology in their lives? Should we serve an educational role? Do people need to be told about individual companies’ power over their lives? We think we know what principles are represented in the mission statements of our organizations, but what principles are represented in our behaviors?
Honestly, our roundtable discussion raised more of these questions than it answered. What seems clear is that it falls to individuals within our institutions to examine and adapt their practices and to drive the necessary conversations forward. There is no institutional force driving toward ethics. There is no role in our organizations responsible for finding the answers to difficult questions. We have to do this ourselves.
And that makes sense. After all, it’s the same answer we face to our larger questions as a society. Everything we’re afraid of now is the result of something we welcomed into our lives. The society that enthusiastically read The Cluetrain Manifesto also enthusiastically adopted social networks, free website analytics, free email services and customized search results. We deferred to best practice and accepted the wisdom of the crowd. We let the companies selling us their products tell us how amazing the world would be if we let them alter it.
We can’t wait for someone else to answer the difficult questions for us because that’s what is happening now. The companies that seek to exploit us as a market are answering these questions in a way that serves their ends, not ours. But these technologies didn’t come to dominate our economy and culture on their own. We adopted them. It’s time for us — as individuals and as communities — to interrogate them, understand them and adapt our use of them to serve our needs. If we don’t, then it’s reasonable to expect more anxiety, confusion and division in our future.
Matt Popke is a software developer and designer who has been at the Denver Art Museum since 2011. Trained in computer science, he somehow became a web designer in the 90s, and he has been trying to figure out why any of this is happening ever since.