Slaying the monster advertising created

We have a big problem. Driven by the goal of harvesting attention to sell to advertisers, the tech industry figured out how to manipulate our behaviour, and this methodology is now being hijacked by a host of other agents with totally different goals. The consequences are far more severe than most of us realise, as Yuval Noah Harari and Tristan Harris point out in an interview called 'When Tech Knows You Better Than You Know Yourself'. Jaron Lanier makes essentially the same point in a Wired interview called 'We Need to Have an Honest Talk About Our Data'.
We made two big mistakes. The first is that the internet industry based its business model on advertising. This was a conscious decision, made mainly by Eric Schmidt at Google, who pushed Larry Page and Sergey Brin in the advertising direction when they were still looking for a way to make money with their search engine. The problem with paying for services through advertising is that the service will always optimise for attention instead of other properties such as content, privacy and security.
The second is that users of online services have been giving away their data for free, both consciously and unconsciously. This gave rise to the tech giants, who now hold so much valuable data that it will be hard to topple them through regular entrepreneurial means. The advent of AI (especially deep learning) over the past few years makes this mistake even bigger. Before AI can replace us, it needs to learn from us. We have become so used to giving our data away for free, knowingly and unknowingly, that we fail to realise this is no fair deal. The companies wielding the AI that learns from your data, and that will replace your job at some point in the future, are not compensating you for it, while they should. Using the familiar methods and technologies of the advertising industry, they are used to extracting that information for free, and able to. What initially looked like an altruistic attempt to enable online communication turned out differently: the tech giants secretly played the role of the third party behind the scenes, manipulating behaviour to maximise economic return by harvesting attention and selling it to advertisers.
In other words, online businesses unwittingly created a monster, and consumers let them by being asleep at the wheel. Case in point: we let Mark Zuckerberg build a machine that maximises our eyeballing of ads, and now this machine has been hijacked and turned into a general machine for manipulating our behaviour.
The solution
Killing advertising won't solve the problem. Although I do think Bill Hicks had a (funny) point in his 1993 show 'Arizona Bay' when he asked members of the audience working in marketing to kill themselves (watch the video so you don't miss the all-important, and brilliant, intonation). Tongue in cheek, of course, but he did point out the problematic role of advertising in modern societies. Hicks was surely one of the first to ask (albeit a bit bluntly) whether it was really necessary to put a dollar sign on everything, a question that has become increasingly relevant with the advent of ubiquitous, powerful and global information systems. Instead of killing advertising we should start considering other ways of paying for services. If a conversation needs to be valued in economic terms because the medium needs to be paid for, then there surely must be a better way than manipulating either or both parties into looking at ads. As Lanier points out, there are already examples that work, such as Netflix's subscription model. Netflix could just as well have chosen the advertising route à la YouTube, but fortunately it didn't. The massively important side effect is that Netflix is not in the business of behaviour manipulation in the sense that YouTube is.
Another obvious option is simply making users pay for the services they use. The problem has been that this requires micropayments and proper billing, but these problems have largely been solved. In fact, payments were anticipated in the original HTTP specification: the 402 'Payment Required' status code was reserved for exactly this purpose but never fully specified, and it remains a work in progress. Meanwhile there are plenty of other initiatives that try to solve the micropayments problem, from cryptocurrencies to alternative fintech solutions. Funnily enough, this problem has been tackled in Africa using what we would consider basic mobile technologies.
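To make the idea concrete, here is a minimal sketch of what the reserved 402 handshake could look like. Because the spec never defined the payload, everything here (the invoice header, the token, the `pay_invoice` wallet hand-off) is made up for illustration, not an existing standard:

```python
# Hypothetical sketch of an HTTP 402 "Payment Required" handshake.
# The 402 status code is real but its payload was never standardised,
# so the Invoice header and token scheme below are invented.

from dataclasses import dataclass

@dataclass
class Response:
    status: int
    headers: dict
    body: str

def serve(request_headers: dict) -> Response:
    """Server side: demand payment unless a valid payment token is presented."""
    if request_headers.get("Payment-Token") == "paid-abc123":
        return Response(200, {}, "the article text")
    # No token: answer 402 with a (made-up) invoice the client can settle.
    return Response(402, {"Invoice": "0.01 EUR; token=paid-abc123"}, "")

def fetch_with_payment(pay_invoice) -> Response:
    """Client side: settle the invoice via a wallet callback, then retry."""
    first = serve({})
    if first.status != 402:
        return first
    token = pay_invoice(first.headers["Invoice"])  # hand off to a payment rail
    return serve({"Payment-Token": token})
```

A client that plugs in a wallet callback, e.g. `fetch_with_payment(lambda invoice: "paid-abc123")`, gets a 200 response on the retry. The point of the sketch is that the browser-level plumbing is simple; what has been missing for decades is the agreed-upon payment rail behind `pay_invoice`.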
Regulation is also an important means through which we can regain control over our own data and limit the power of the tech giants. The GDPR was a first step in the right direction, but it doesn't provide the industry with a good alternative; it only states what is not allowed.
In my eyes, one of the most promising approaches is a revision of the way we build information systems. I truly believe there is a better way to treat our data, one that is beneficial to both users and third parties. It can be achieved by building 'contracts-based information systems', an idea I have talked about extensively before.
And finally, I believe the software industry needs to get its act together. I have spoken about professionalism and responsibility within the software engineering industry on many occasions, from ethics to semantic programming and from hacker culture to startups, but mostly to an in-crowd audience. So I am delighted to read that one of the best present-day philosophers (Harari) shares this insight: "I think it's extremely irresponsible, that you can finish, you can have a degree in computer science and in coding and you can design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you are doing. You were just thinking in terms of pure technicality or in economic terms." Spot on.
So, yes, we have a problem, but we also have options. And we sure need to start acting upon them.
