Computer generated art? Old skool!

After having spent a couple of months wondering whether there is an area where I could combine my passion for computer science, AI, 3D, music and the cutting edge of the digital revolution, I decided to focus all my effort on tools that turn semantic descriptions of an experience into an actual experience. Whether it is real or virtual, and whether it is a game, a movie, a live event, or something completely different altogether. I will explain this in more detail in subsequent posts, but for now it suffices to highlight a novel area where I can flex my skills and satisfy my curiosity.

During this 'soul searching' period I took my oldest son on a trip to Bremen to hang out together for a few days in a wonderful town. While stumbling through Bremen on the first rainy evening we passed the Kunsthalle Bremen, a museum dedicated to graphic art. A poster with that day's programme caught my eye, announcing an exhibition called 'PROGRAMMIERTE KUNST. FRÜHE COMPUTERGRAPHIK' ('Programmed Art. Early Computer Graphics') and a talk that very night by someone called Frieder Nake. To be honest, I didn't know who that was, but I was intrigued by the description and persuaded my 14-year-old son to join me for an evening listening to a German-speaking professor at a museum. Quite a large stretch for a 14-year-old boy, I can tell you.

Both the exhibition and the talk were really great. It turned out that Frieder Nake was one of the pioneers of computer generated art (together with Michael Noll and Georg Nees), going back to the early sixties! I was flabbergasted to learn that many of today's insights were already discussed so many years ago (and I'm no newbie to computer history). Of course, we have come a very, very long way, but some of the fundamentals were already there 55 years ago. What a coincidence to bump into this legend in such a way, and what a great way to find out I'm part of such a rich history.

AI vs humans: A matter of intent.

What surprises me in most of the ongoing AI discussions is how small a role intent plays in them. For me the biggest differentiator between human and artificial intelligence is the notion of intent. (Besides the non-transferability of the domain knowledge of an AI agent.) Where humans do everything with an intent, AI agents simply, well, execute an algorithm. Be it a neural network identifying whether there is a cat in a certain picture, a robot trying to fry an egg, or a program drawing pictures in the style of Van Gogh. AI agents don't want to do those things; it's the only thing they can do once they are initiated. On a much deeper level some would argue that humans are not much more than behaviouristic pattern recognisers, but that discussion is still highly philosophical and not very relevant at the current state of technology. In case you are interested in these matters I highly recommend reading everything from the likes of Daniel C. Dennett and Steven Pinker.

Back to intent. It is like in art: it's not that hard to make a painting that looks like it's by Gerrit Rietveld, but the difference is that he had an intent when making a picture. He wanted to convey a message, tell a story, inspire, impress. For the same reasons there are almost no pieces of AI-generated music that really touch people's hearts, while every piece written by Bach does. Bach had an intent with his music, just like Rietveld had with his paintings. Once people understand, consciously or unconsciously, the intent behind a certain piece of art, they can relate to it much more easily and, thus, be touched by it. Agents can imitate and remix from a huge pile of content, but they don't do so with intent.
Surprisingly, and no pun intended, missing out on intent is also quite human. Everybody reading this must have stood in a gallery at some point in their life, staring at an abstract painting, saying that their five-year-old nephew could have created something similar. True, he could have drawn something very similar, but he wouldn't have done so from a similar intent, and that is the difference between a piece of art and a doodle by a five-year-old.
 
The same goes for AI agents when it comes to recognising things. They can only describe what has happened, not why it happened. They know nothing about the intent that might have caused the phenomenon. And if there was no intent, they have no way of seeing the bigger picture that might have led to the phenomenon. The latter is the other blind spot of the AI community: an AI can look no further than the data it was trained on. AI agents lack world knowledge, and transfer learning between completely different domains is still one of the holy grails of AI, and will be for decades to come.
 
So an AI agent can neither create something with intent nor recognise the intent driving a certain phenomenon. That doesn't mean they are useless, au contraire, but it is good to be well-informed about both the possibilities and limitations of AI and what causes them.

Know your stuff

When I studied Mass Communication at the University of Nijmegen in the early nineties my special interest was the up-and-coming computer networks (the web was still to be invented). I considered those networks to be the most promising medium ever devised and was intrigued by the possibilities they opened up. Many of my fellow students and professors, however, were not that interested in the technology; their focus was on the societal aspects of media. I was stubborn enough to push through, so my thesis addressed the question whether the broadband television network would be suitable for what were then called 'electronic services'. This was a valid question since these networks were broadcast networks, meaning there was no route for a return signal. Making them usable for electronic services meant the network company had to invest a lot to make them two-way.

For my research question I had to interview many different kinds of respondents: users, network operators, content creators, marketeers, networking engineers, software developers, etc. What struck me most when I finished the interviews was the difference in depth of knowledge between the technologists and the non-technologists. The former had a deep understanding of the technological basis and were able to translate that into a set of possibilities and impossibilities. The latter were basically starry-eyed dreamers with very little understanding of the ongoing trend and, thus, of the possible future.

That was when I realised that if I wanted to make a living out of exploring this new digital frontier, I'd better re-educate myself. So after receiving my master's in Mass Communication I shifted to studying AI, moving from an alpha to a beta study (from the humanities to the sciences, as we say in the Netherlands), and I basically never looked back. Looking back now, I can safely say that it has been one of the best choices of my life. Having seen the digital revolution from both sides, I can say with confidence that a good grasp of the underlying technologies gives me far better insight into technology-related trends. And let's be honest, most innovations are (and have been) technology driven.

There are two recent trends where this became obvious. First of all, the rise of bitcoin and blockchains. As I've written extensively before, I spent a lot of time a couple of years ago on a technical deep dive into bitcoin and blockchains. Intrigued by the idea of an immutable ledger, I became curious both about the underlying technology and about the possibilities of a blockchain. So I took the dive, and it was a lot deeper than I expected. A blockchain is a genuinely complex technology. But having a deep understanding eventually made me realise that almost all proposed use cases were either impossible or easier and cheaper to realise with existing technologies. Many of the technologists from the early days have come to this conclusion by now. Still, there is a very large group that doesn't know the technical ins and outs of blockchains and thus resorts to a kind of faith in those who sing the gospel. The latter often lack the technical expertise as well. Without this technological understanding it is very hard to really grasp bitcoin, blockchain and their (im)possibilities.
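To make the 'immutable ledger' idea concrete, here is a minimal sketch in Python of the core mechanism: a chain of blocks in which every block commits to the hash of its predecessor, so rewriting history breaks every later link. This is illustrative only and leaves out everything that makes a real blockchain hard (consensus, proof-of-work, networking); the data entries are made up.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Serialise the block deterministically, then hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    # Each block stores the hash of the previous block.
    return {"data": data, "prev_hash": prev_hash}

def chain_is_valid(chain: list) -> bool:
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

# Build a tiny chain of three blocks.
chain = [make_block("genesis", "0" * 64)]
for entry in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(entry, block_hash(chain[-1])))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "alice pays bob 500"   # tamper with history
print(chain_is_valid(chain))   # False: the tampering is immediately detectable
```

The exercise is useful precisely because it shows how small the 'immutable' core is; the genuinely complex part is getting thousands of mutually distrusting machines to agree on one such chain.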
 
The second trend where you see this happening is AI. There is currently a lot going on in the area of AI, and while it certainly has many, many applications, there is also a lot of nonsense and ignorance. Many AI enthusiasts have drunk the kool-aid spread by the marketing departments of the tech giants, who coincidentally have a stake in keeping the hype going, and are often unaware of the limitations and dangers of AI. Because to recognise those you need a proper understanding of the underlying technology. How is it possible that a neural network can have a bias? Is an advanced general AI really possible? On what time scale? Is an AI's domain expertise transferable to another domain? Why not? It is these kinds of questions that will give you a clear understanding of where a trend is coming from and where it is going.
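Take the bias question as an example. A model has no notion of fairness; it simply minimises error on the data it is given, so a skewed dataset yields a skewed model. A minimal sketch (the hiring data and all numbers are made up for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring data: (candidate_group, was_hired).
# Group B was rarely hired in the past, for reasons unrelated to merit.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 10 + [("B", False)] * 40)

# A naive "model" that predicts the most common historical outcome per group.
outcomes = defaultdict(Counter)
for group, hired in history:
    outcomes[group][hired] += 1

def predict(group: str) -> bool:
    return outcomes[group].most_common(1)[0][0]

print(predict("A"))  # True:  the model reproduces the historical skew
print(predict("B"))  # False: bias in, bias out
```

A real neural network is vastly more sophisticated, but the principle is the same: it faithfully learns whatever regularities its training data contains, including the unwanted ones.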
 
As David Deutsch says, progress is the never-ending search for better explanations. And in our day and age this often means explaining the technological aspects of a phenomenon. So if you want to contribute to finding better explanations through creativity, conjecturing and critical thinking, you will have to take the technological deep dive so you KNOW YOUR STUFF.

The games industry is tough

Over the past couple of months I have immersed myself in the game development world because that is one of the very few environments that combine all my interests:

– 3D computer graphics (ZBrush, procedural modelling)
– Music
– Software engineering
– AI (specifically ML and DL)
– Story telling (in both games and cinema)
– Play as the driver of innovation (as Steven Johnson brilliantly argues in Wonderland: How Play Made the Modern World)

 

Being a digital omnivore curious about all these fields, I figured it would be interesting to see whether I would fit in, but also whether the game developer world is interested in someone like me. Coming from more standard software engineering environments, one of the most refreshing experiences was that creativity is a core part of the daily routine of game development professionals. Discussions at the watering hole easily switch from parallelism in Rust to texturing in Substance Painter, and from the gameplay of the latest GTA to the merits of Unity for game developers. Besides, the game developers I met were all, without exception, extremely nice, funny, considerate, creative and ambitious. You're probably not surprised that I felt right at home.
 
But I also noted that the gaming industry is really tough. It reminded me a bit of the early days when software professionals had to compete with a customer's cousin who "may be only 12, but is really handy with tablets and can make a website in Word in his basement for almost nothing". Especially the indie game industry is littered with young enthusiasts willing to put in insane hours just because they love games so much. For every position at a professional game studio there are tons of applicants willing to accept meager compensation for quite a demanding job in terms of complexity, creativity and effort.
 
The other fundamental reason the game developer industry is tough is that most studios are only as successful as their latest game. This is common in the creative industry, where it is really hard to create continuity. The only way to do so is to build a name in a particular market, but that requires having been successful in that niche a number of times before, and only very few studios manage that. This is part of the deal of being in the gaming industry, and it makes life pretty tough for most game developers.
 
Yet another reason it is tough is that making a game takes a considerable investment. It is not easy; it takes both technical and creative skills, and a lot of time, to make a great game. Indie game developers have to do everything themselves: the idea, the story, the visuals, the gameplay, the multiplayer mechanics, the promotion, the bug fixes, etc. The broad set of required skills also makes it unlikely that a game is created by an individual, adding the complexities of team building and cooperation to the mix.
 
To be honest, I find this regrettable. I would love to see all these nice, creative, skilled and ambitious game enthusiasts succeed with their dream, but the odds are sadly pretty low for the vast majority of them. Some might get lucky or are so exceptionally talented that they end up at a triple-A studio anyway, but most won't. Fortunately many of the skills they develop are valuable in other industries as well, so they'll be fine if they're open to that. And I think they should be. Although I am all for following your passion, I can attest from experience that you can live quite a fulfilling life even when not all your passions are part of your daily job.

Slaying the monster advertising created

We have a big problem. Driven by the goal of harvesting attention to sell to advertisers, the tech industry figured out how to manipulate our behaviour, and now this methodology is being hijacked by a host of other agents with totally different goals. The consequences are far more severe than most of us realise, as Yuval Noah Harari and Tristan Harris point out in an interview called When tech knows you better than you know yourself. Jaron Lanier said basically the same in a Wired interview called 'We Need to Have an Honest Talk About Our Data'.
Causes
We made two big mistakes. The first is that the internet industry based its business model on advertising, a conscious decision driven largely by Eric Schmidt at Google, who pushed Larry Page and Sergey Brin in the advertising direction when they were still looking for a way to make money with their search engine. The problem with paying for services through advertising is that the service will always optimise for attention instead of other properties such as content, privacy, security, etc.
The second is that users of online services have been giving away their data for free, both consciously and unconsciously. This gave rise to the tech giants, who now have so much valuable data that it will be hard to topple them using regular entrepreneurial means. The advent of AI (especially deep learning) over the past few years makes this mistake even bigger. Before AI can replace us, it needs to learn from us. We have become so used to giving our data away for free (knowingly and unknowingly) that we fail to realise that this is no fair deal. The companies wielding the AI that learns from your data, and that will replace your job at some point in the future, are not compensating you for it. While they should. Using the methods and technologies of the advertising industry, they are used to, and capable of, extracting that information for free. Behind what initially looked like an altruistic attempt to enable online communication, the tech giants secretly played the role of the third party behind the scenes, manipulating behaviour to maximise economic return by harvesting attention and selling it to advertisers.
 
In other words, online businesses unwittingly created a monster while consumers let them by being asleep at the wheel. Case in point: we let Mark Zuckerberg build a machine that maximises our eyeballing of ads, and now this machine has been hijacked and turned into a general machine for manipulating our behaviour.
 
The solution
Killing advertising won't solve the problem. Although I do think Bill Hicks had a (funny) point in his 1993 show 'Arizona Bay' when he asked the members of the audience working in marketing to kill themselves (watch the video so you don't miss the all-important, and brilliant, intonation). Tongue in cheek, of course, but he did point out the problematic role of advertising in modern societies. Bill Hicks sure was one of the first to ask (albeit a bit bluntly) whether it is really necessary to put a dollar sign on everything. A question that has become increasingly relevant with the advent of ubiquitous, powerful and global information systems. Instead of killing advertising we should start considering other ways of paying for services. If a conversation needs to be valued in economic terms because the medium needs to be paid for, then there surely must be a better way than manipulating either or both parties into looking at ads. As Lanier points out, there are already examples that work, such as Netflix's subscription model. Netflix could just as well have chosen the advertising route à la YouTube, but fortunately it didn't. The massively important side effect is that Netflix is not in the business of behaviour manipulation in the same sense that YouTube is.
 
Another obvious option is simply making users pay for the services they use. The problem has been that this requires micropayments and proper billing, but these are problems that have largely been solved. Actually, payments were anticipated in the early HTTP specifications: status code 402 'Payment Required' is still reserved for a payment mechanism that was never fleshed out due to time constraints. It remains a work in progress. But there are tons of other initiatives that try to solve the micropayments problem, from cryptocurrencies to alternative fintech solutions. Funnily enough, this problem has been tackled in Africa using what we would consider basic mobile technologies, such as the mobile money service M-Pesa.
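To illustrate how little is missing at the protocol level, here is a hypothetical paywalled endpoint in Python that answers with the reserved 402 status code unless the client presents a payment token. The 'X-Payment-Token' header and the token check are inventions for this sketch; HTTP itself only reserves the status code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical: tokens would be issued by some payment provider.
PAID_TOKENS = {"demo-token"}

class PaywalledHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("X-Payment-Token")  # invented header for this sketch
        if token in PAID_TOKENS:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"The article you paid for.\n")
        else:
            # 402 "Payment Required": reserved since the early HTTP specs.
            self.send_response(402)
            self.end_headers()
            self.wfile.write(b"Payment required to read this article.\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PaywalledHandler).serve_forever()
```

Everything hard about micropayments sits outside this exchange: issuing the tokens, settling amounts of a fraction of a cent, and doing so without adding friction for the reader.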
 
Regulation is also an important means through which we can regain control over our own data and limit the power of the tech giants. The GDPR was a first step in the right direction, but it doesn't provide the industry with a good alternative; it only states what is not allowed.
 
In my eyes one of the most promising approaches is a revision of the way we build information systems. I truly believe there is a better way to treat our data, one that is beneficial to both users and third parties. It can be achieved by building 'contracts-based information systems', an idea I have talked about extensively before.
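To give a flavour of what that could look like, here is a minimal sketch. Everything in it, the field names, the purpose strings, the access check, is my own illustrative guess at such a contract, not a formal specification; the point is merely that every use of personal data is checked against an explicit, machine-readable agreement instead of being implicit.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DataContract:
    # All fields are illustrative assumptions, not a spec.
    subject: str     # whose data it is
    consumer: str    # who may use it
    fields: tuple    # which data items are covered
    purpose: str     # what the data may be used for
    expires: date    # when the permission ends

def may_access(c: DataContract, consumer: str, field: str,
               purpose: str, today: date) -> bool:
    # Every access is evaluated against the contract.
    return (consumer == c.consumer
            and field in c.fields
            and purpose == c.purpose
            and today <= c.expires)

contract = DataContract(
    subject="alice", consumer="acme-analytics",
    fields=("age", "city"), purpose="aggregate statistics",
    expires=date(2026, 1, 1),
)

print(may_access(contract, "acme-analytics", "age",
                 "aggregate statistics", date(2025, 6, 1)))   # True
print(may_access(contract, "acme-analytics", "age",
                 "targeted advertising", date(2025, 6, 1)))   # False
```

The contrast with today's systems is that the permitted purpose travels with the data, so 'use for anything, forever' stops being the default.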
 
And finally, I believe the software industry needs to get its act together. I have spoken about professionalism and responsibility within the software engineering industry on many occasions, from ethics to semantic programming and from hacker culture to startups, but mostly to an in-crowd audience. So I am delighted to read that one of the best present-day philosophers (Harari) shares this insight: "I think it's extremely irresponsible, that you can finish, you can have a degree in computer science and in coding and you can design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you are doing. You were just thinking in terms of pure technicality or in economic terms." Spot on.
So, yes, we have a problem, but we also have options. We sure need to start acting upon them.