The eternal client-server pendulum

A few decades ago, when Java applets were all the rage, an acquaintance of mine remarked that this was just the next wave in the eternal pendulum of functionality moving from server to client and back. His remark struck me as naive; he had probably not heard about the incredible promise of Java applets. Well, truth be told, he was right. And in the meantime he has been right more often than I would ever have thought possible. But not immediately.

We all know what happened to Java applets: instead of functionality moving to the client, we got servers doing the heavy lifting of calculating and rendering responses. Clients in general didn't have the resources to take on this responsibility. Remember LAMP and all its variations?

Clients gained power pretty quickly though, and before too long functionality was moving back to the client again, this time in the form of Flash, JavaScript and mobile apps. Frontends got smarter and smarter while backends were dumbed down to CRUD machines exposing state via REST APIs. This was partly because clients were becoming more capable and partly, as always, because of technical limitations: bandwidth was limited and unstable, with a latency that killed the user experience.

But lo and behold! Recent trends indicate functionality is moving back to the server again. Bandwidth and latency limitations are now minimal, and cloud computing has become so available, affordable, powerful and versatile that it is becoming easier to let the backend take care of all the heavy lifting again. Who would have thunk! But it's also because the complexity on the client side has really gotten out of hand. Anyone who has recently built a reasonably big application knows exactly what I'm talking about. The frontend space is moving so fast that keeping up with the latest trend is a full-time job. And it's not getting any easier. Just have a look at all the client-side state management solutions. Every web framework has at least one unique solution. And there are a gazillion web frameworks out there. Or dive into all the different communication protocols. HTTP/1 or HTTP/2? REST or gRPC? Or GraphQL? SignalR? YAML or JSON? Or protobuf? Or Avro? The solution space is just mind-boggling at this point in time. So new kids on the block such as Svelte and Phoenix LiveView focus on the backend as the place where most of the magic happens. Clients are once again reduced to dumb screens that do nothing more than display pretty pictures.
So it's déjà vu again (again!). Why does it keep going back and forth? I think it's because when clients get more powerful, functionality is shipped their way to compensate for limitations in either the network (bandwidth, latency) or the server (memory, processing power, price, …). But since the state of an application is almost always shared with the outside world, this comes with a ton of extra complexity. Distributed applications are simply a lot harder to get right than centralized ones. So once the server can take over again, it will, just because it's simpler. Clients will take over some of the functionality as long as there are limitations in the network or the backend. And these will be solved one by one over time.
My long-standing prediction is that the unstoppable force of centralization will lead us to a future where client-side terminals are nothing more than sheets of glass for displaying content and all the magic happens in the backend. But in the meantime we might be going back and forth a few more times.

Tech heroes are in it for the money

At the closing of the decade many tech writers take a step back and contemplate what it has brought us, as one of my favourite writers, Steven Levy, does in a yet again insightful Wired article. The main theme across these articles is the fall of the tech heroes. Or to be more precise, the tech startup heroes. Starting the decade with lofty promises of a ‘better world through startups’, they ended up defending themselves in court because it turned out they made the world a worse place. Surely they created valuable services, albeit most of them did nothing more than aggregate supply and make it somewhat more accessible to the demand side, but it turned out the upside is greatly outweighed by the downside. We got excellent search but paid with our privacy, we got tons of news sources but they turned out to be untrustworthy, we got many free services but lost control of how and where we spend our attention. This list can go on for quite some time. And I think the root of this all is a misunderstanding about the fundamental motive of any company.
Although I am the first to cheer any effort to make our world a better place, I’ve also been quite skeptical of the deranged startup culture where ‘make the world a better place’ was nothing more than a checkbox on a list meant for luring investors. The web is littered with startups offering simple consumer products or services that do nothing for the world at large. Au contraire. But still, that is what is on their front page. It is a red herring and the root of the downfall of the tech startup hero. In the end companies have to make money to survive. Despite all the other goals a company might have, once this goal is endangered all other goals fly out of the window without a second thought. Every action within a company is therefore mainly aimed at maximising profit. While companies have the wind in their backs they have the means to obscure this fact, but once the tide turns and the wind and rain are in their faces the truth will surface. And I am not saying it is an ugly truth, but covering it up with a misleading message is kind of ugly. Looking at Zuckerberg testifying before Congress you can see the cracks appear in real time. And his company is just the tallest tree in a forest of wannabe trees.
Tech startups are not companies with magic powers for solving world problems; that takes a lot more than just tech, cola & pizzas, and a can-do attitude. Technology can be an amazing enabler of progress, but it never acts in a vacuum. For the coming decade I plead for a bit more realism. And for every non-tech organisation to jump off the tech-hero bandwagon and stop misusing words like ‘hackathon, nerd, startup, AI, blockchain, …’ to make themselves look cool. You should not pretend to be what you are not. Just look at the current state of the tech startup industry to see why not.

Importance of small teams

Software is complex. The complexity of many common information systems is way beyond that of any physical system. Operating systems, search engines, automotive systems, missile guidance systems, navigation systems, game engines, … and I can go on ad infinitum. And the list is growing rapidly since ‘software is eating the world’. This is problematic since the number of humans it takes to develop, expand and maintain an information system grows exponentially with its complexity. Sooner or later we will run out of humans. Another unfortunate consequence is that an increase in humans increases the likelihood of errors. In my experience exponentially as well, but I can’t back that up with hard numbers. This deserves a separate post, though, so I will skip it for now. The logical conclusion is that if we want more, more sophisticated, safer, and more reliable information systems, we should keep the number of required developers as low as possible. In my opinion the efficiency of a software team increases up to about 10 members. Beyond that scale every additional member adds less value to the end product. Most experienced software professionals have at some point in their career worked for a large corporation and wondered how it even managed to make any money. A counterargument could be that this is impossible for domains and processes that are inherently complex. I think there are two ways to attack that problem. First of all you should split up complex systems into separate domains and develop information systems for each domain. And split up the organisation along the same lines. Secondly I am convinced that the introduction of the next abstraction level in information system development will enable smaller teams to build complex information systems. The top-down approach mentioned in previous posts is a great contender in my humble opinion, but there are likely others. We should work on methods and tools that enable smaller teams to develop information systems.
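The intuition behind that ceiling of roughly 10 has a well-known back-of-the-envelope justification (Fred Brooks made a similar point in 'The Mythical Man-Month'): the number of pairwise communication channels in a team grows quadratically with its size. A minimal sketch:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for size in (2, 5, 10, 20, 50):
    print(f"{size:>3} people -> {channels(size):>4} channels")
# A 10-person team has 45 channels; a 50-person team already has 1225.
```

Even if errors grow only linearly with the number of channels rather than exponentially, the overhead of keeping everyone aligned quickly dominates whatever value each extra member adds.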


In a recent talk called "A Philosophy of Software Design", John Ousterhout asked the audience what they thought was the most important concept in the history of computer science. It was the answer from Donald Knuth that he quoted that struck a chord with me. Knuth answered: "layers". It took me a while, but over the past years I came to realise that the concept of layers of abstraction has been one of the, if not the, most successful concepts in computer science. By abstracting hardware with machine code, machine code with assembly, assembly with imperative languages, and imperative languages with OO languages, we were able to build information systems of truly mind-boggling complexity. It enables developers to stay within one abstraction level while designing and implementing an information system. Since the lower levels are abstracted away, developers have more brain cycles to spend on the abstraction level of their choice. It is the job of a compiler to turn higher-level expressions into lower-level ones. Although compilers will never be able to catch every error a developer introduces, their use prevents the vast majority of the mistakes developers would have made if they had to write the lower levels by hand as well. There are a number of interesting things to note about this idea of layers.
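To make the layering idea concrete first, here is a toy example of my own (not from Ousterhout's talk): the same computation expressed one abstraction level apart. At the lower level the developer manages indices and accumulators by hand; at the higher level the language runtime handles iteration and bounds checking one layer down, eliminating whole classes of mistakes before they can be made.

```python
def sum_of_squares_low(values):
    # Lower level: explicit index management and accumulation,
    # every off-by-one and bounds error is the developer's problem.
    total = 0
    i = 0
    while i < len(values):
        total += values[i] * values[i]
        i += 1
    return total

def sum_of_squares_high(values):
    # Higher level: declare *what* we want; iteration, indexing and
    # bounds checks are handled by the layer below.
    return sum(v * v for v in values)

assert sum_of_squares_low([1, 2, 3]) == sum_of_squares_high([1, 2, 3]) == 14
```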
First of all, it seems that we have been stuck at the current abstraction level way too long. Despite efforts such as fifth-generation languages (5GLs), we got stuck at the OO level in the early nineties. As people like Uncle Bob have mentioned, it seems as if we got distracted by the internet and forgot to continue the great work our predecessors did in building all the abstraction layers we came to take for granted. Now that we realise that OO has its serious shortcomings as well, it seems we are kind of clueless about where to go next. The most promising area I see is the domain of executable business processes (see for instance Zeebe), despite the fact that these efforts have been met with a certain disdain from part of the software community. The most important difference with previous abstraction levels is that this approach starts top-down instead of the regular bottom-up. This could very well be the only way to get us out of the current impasse.
Secondly, I have noticed that the mixing of abstraction levels is one of the biggest sources of confusion in any discussion involving software technology. And they often get mixed, since people feel comfortable at a certain abstraction level and will, often unconsciously, steer the conversation to that level. Discuss a technical business challenge with a DevOps engineer and you will be talking about Kubernetes, Terraform and AWS VPCs in no time. Convincing examples the other way around can be thought up just as easily. Realising this, I have found that the best professionals in our trade are the ones who feel comfortable at every level of abstraction, from high-level business and even philosophical topics down to low-level technical details. Professionals who can go up and down the stack without losing oversight or getting stuck in one abstraction level while debating certain aspects.

A third interesting aspect is what I think will be the most surprising revolution in the way we build information systems: AI as the compiler. I have written extensively about how I think AI is destined to become the compiler between the highest abstraction level and the highest implementation abstraction level. You can read about it elsewhere, but it is the notion that this is about creating abstraction layers that makes it relevant for this post.

Thinking, designing and building with a clear notion of layers in your head is key to building efficient information systems. It is a lesson I found so important that I have based the name of my next company on it: Stekz.

PyGrunn 2019: Thanking the giants upon whose shoulders we stand

PyGrunn 2019 was a blast! Last Friday 250 Python software professionals had a wonderful day full of deep tech, inspiring talks, great conversations, good laughs and an occasional beer. PyGrunn is the ‘Python and friends’ developer conference hosted in Groningen that Sietse van der Laan and I have been organising since 2010. But it was never just us. We have had the help of a large number of volunteers over the years: speakers travelling from across the globe to give the most incredible presentations, developers working on the website, artists coming up with inspiring art, and the list goes on and on. We would like to thank not only them but also all the visitors that made PyGrunn the success that it is.
We want to give credit where credit is due, because we realise that we are standing on the shoulders of giants. Something often forgotten in the high-paced world of internet startups. Not just the success of PyGrunn, but the success of every software company out there is predominantly the result of work by open source software professionals.

Spending quite some free time over a period of at least six months preparing for PyGrunn is my sincere way of saying ‘thank you’ to that community. Not only is the success of PyGrunn based on the work of that community, but almost every aspect of my personal professional life is as well. That means a lot to me.

The success of PyGrunn makes me proud, the community makes me proud. And it humbles me to be able to put a smile on the face of every one of the 250+ PyGrunners, a smile that stays on for at least 24 hours.

Next year PyGrunn turns 10 and we will move into the new Groninger Forum, all the more reason to make it a truly memorable episode. We had so many people thanking us and offering help for next year that we are convinced we will pull that off. But as always, you are welcome to help: as organiser, speaker, sponsor, attendee, whatever you like.

But don’t just take my word for all the above, ask any of the thousands of visitors that participated in PyGrunn over the years. You will be surprised.

Machine Learning is regular computer science

If AI is becoming the compiler in information system design, as I have argued many times before, then machine learning is becoming a central component of computer science. For me this has been a natural view of the future, but with the three godfathers of machine learning winning the Turing Award, this view is now echoed by the industry at large. This is a sign whose importance is hard to overestimate, since in the future information systems will increasingly be built from semantic descriptions of intent and requirements. Which means that the profession of what is now a programmer will change dramatically. Although I am increasingly convinced this is the future of computer science, I also understand that it will take a while before it reaches every nook and cranny of the field. Programmers don't have to fear for their jobs just yet. And many of the current crop of frameworks will still be around for the coming years. But the general direction is getting clearer and clearer and it will benefit us all. Even if it means some of us have to learn new skills and forget old ones.

My Mirrorworld

It was the recent article 'AR will spark the next big tech platform — Call it Mirrorworld' by Kevin Kelly in Wired that made me want to shout from the rooftops to everybody that ever had to listen to my ramblings about technology: "This is it! This is what I've been talking about and working on all the time! This is the dream I've been pursuing!". This post tries to give you all the background needed to understand why this is relevant for both you and me. It is a somewhat personal post explaining where my ideas about what is called a 'mirrorworld' come from, and how my professional career has been modelled around that idea. It should give you a better understanding of the idea itself and of my past and future personal journey. Albeit being about my professional interests, there is no denying that it is my passion for the frontier of digital technology and my relentless curiosity that got me where I am now. So please forgive me if this reads as being too much about me personally. I think in this case you have to understand not just the idea, but also my personal journey and passion, to assess the value of my opinion. In this post I will try to explain how I ended up where I am now and where I am heading. I hope you enjoy reading it and become inspired yourself.
In August 1991 I started studying mass communication at the University of Groningen (and later Nijmegen), the same month Tim Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. I'm not sure if that is a coincidence; ideas almost always pop up around the same time (as Matt Ridley argues in his interesting 2015 book 'The Evolution of Everything: How New Ideas Emerge'). My main interest was how mass media worked. How do they get the message across? Do they colour the message? What effect do they have on popular culture? Theories by visionaries such as Marshall McLuhan intrigued me deeply. Being a musician, I was especially interested in how the medium influenced the music it carried. This was before the web became widely available, so the latter question was limited to media such as radio, television and tape exchanges. Probably the most prevalent insight that occurred to me is that the influence of the medium on the message is most of the time grossly underestimated. Understanding why this was the case and how it worked required a deep understanding of the inner workings of the medium as well. This became especially obvious while interviewing experts for my thesis, who could broadly be divided into technical and non-technical experts. The former had a much better understanding of the medium, where it came from and where it was heading, than the latter.
Having been interested in computers since childhood, I was one of the first to pick up on what were called 'electronic services' (think BBS, Minitel, videotext, teletext, etc.) and this new buzzword 'internet' (or 'electronic superhighway'). Hanging out on BBSes and learning about this new thing called 'the internet', I became intrigued by the idea of a super powerful new mass medium. It triggered my curiosity and creativity in a way I had seldom experienced before. Envisioning a fully digital network that connected everybody with everything, I started wondering what messages this medium would transmit, whether it would colour the message, and how this medium would actually work. But there wasn't much interest in it at that time and place. I remember talking to a pop-culture professor in 1993, proposing to research what the influence of this new medium (the internet) would be on popular music. How would music be distributed through such a medium? Would it have influence on the music itself? What would it mean for the music industry? In a response that is shocking in hindsight, the professor said he didn't see much fruit in such research; it was still unclear whether the internet could ever take off at all. Remember, it was still very early days in internet time and most people had never heard of the internet. That didn't stop me though. I continued my studies and wrote a thesis about the possibilities of turning the broadcast television cable network into a bi-directional network fit for those newfound 'electronic services'. In other words, how viable was the idea of turning an old-style mass medium into a new one? This was the research that convinced me that to truly understand the medium you have to grasp the underlying technology. A lesson I still hold dear: your understanding of a phenomenon is only as deep as your understanding of the underlying system that causes it.
With that in mind, the best thing I could do after graduating was to spend the extra time I had on developing my technical and digital skills. Skills that I would need to better understand this new medium and be able to contribute to it. So in 1995 I started what was then a new study direction at the University of Groningen called Artificial Intelligence. It was a technical study, but with a broad view on intelligence. It offered courses in computer science, biology, language, physics and math. Basically everything needed to truly understand how the brain works, what intelligence is, and how that could be simulated in computers. A wonderful study in an era where nobody had any idea what to do with it; after all, the world already had great chess computers. Then on May 11th 1997 IBM's Deep Blue computer beat the chess world champion Garry Kasparov. I still remember our class being all excited during a lecture the next day. This shocked the world and made studying AI hip again, although it would take over 15 years before there was enough data, processing power and memory to lift AI over the usefulness threshold and make it the runaway success it has become nowadays. Meanwhile my interest in what had become 'the web' only grew deeper. I took the technological deep dive to truly understand how this new medium worked under the hood. And, true to my own mantra, to understand meant being able to do. This meant writing lots of software, studying network architectures and protocols, talking to experts, joining open source initiatives, and so forth. Slowly it dawned on me that this new medium (the internet) was not just a medium between humans. Senders and receivers could be appliances as well, or just software processes. Any process that could send or receive a message could use the medium. That made the medium infinitely more useful (and complex) than old mass media. The most important question this raised was inspired by two books I read.
William Gibson's 'Neuromancer' from 1984 and David Gelernter's 'Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean' from 1991. While Gibson took the creative approach, Gelernter, probably inspired by Gibson, took the scientific approach in playing with, and wondering about, a virtual world that exists next to our real world, possibly overlapping it. Gibson invented the term 'cyberspace' for this world, while Gelernter coined the term 'mirrorworld'. Both books made me realise that this new medium connecting every possible process, whether software or hardware based, created some sort of virtual space where users could find and connect with every other possible user/service/process. Where Gibson envisioned a virtual world that was detached from the real world, Gelernter deepened the idea and talked about a virtual world in which the real world was represented. Hence the name 'mirrorworld'. Again, to understand what such a mirrorworld would look like and how it could be built, I forced myself to dive into all related technologies and become proficient in using them myself.
An additional passion and skill I picked up during the nineties was 3D computer graphics. This truly caught my imagination and I spent many midnight hours teaching myself arcane 3D modelling applications. It literally opened up new vistas. This newfound passion and skill turned out to be very useful for my interest in cyberspace and the mirrorworld. For humans, understanding such a conceptual environment could become a daunting task, in which 3D computer graphics, and especially virtual reality, could turn out to be useful. Although not at all user friendly, virtual reality appeared on the scene during those years and immediately convinced the technologically more advanced audience that one day it would become good enough to open up a whole new world of possibilities. My AI thesis was about what such a mirrorworld would look like, how it would work, how physical reality would be represented in the mirrorworld, and what role AI could play in constructing a mirrorworld.
The topic was so big that my research spilled over into a subsequent PhD project at the DTU (Danish Technical University). The question I addressed in my thesis was how something like a mirrorworld would work, how it could be built, and how it could be made understandable for users. During my PhD project I got all the freedom I needed to explore every possible angle in thinking about and working on a mirrorworld. One of the most inspiring books I studied during those years was Steven Johnson's 'Interface Culture: How New Technology Transforms the Way We Create & Communicate' from 1999. Johnson argues that the interfaces we use to access virtual services deeply influence the way we create and communicate. Much in line with Marshall McLuhan's ideas, and directly applicable to my interest in this new medium called the internet and the resulting virtual space called the mirrorworld.
One of the most inspiring technologies I got involved with during my PhD period was Jini from Sun Microsystems. Computer science legend Bill Joy gathered some of the smartest people I've ever met and started working on 'a network architecture for the construction of distributed systems in the form of modular co-operating services' (a spot-on quote from Wikipedia). Jini's introduction in 1998 was accompanied by many references to the ideas of William Gibson and David Gelernter, but also to their predecessors (Vannevar Bush, Ted Nelson, Doug Engelbart, Alan Kay, and many others). But Gelernter's ideas about (the implementation of) a mirrorworld were undeniably the most influential on Jini. And the Jini team had undoubtedly the biggest influence on my ideas on how a mirrorworld could be built.
It was a truly inspiring time in which I deepened my understanding and widened my skillset. Still, after finishing my PhD this expertise was of only limited use. There was simply no mirrorworld in sight since the technology wasn't ready; it could not live up to the initial hype. The VR hype was over and the enthusiasm about AI was slowly dwindling as well. The Jini project slowly disappeared from the stage because the hardware was nowhere near ready for the type of use cases it was meant to address. Besides, the industry had to recover from the dotcom bubble, the year-2000 'bug' had made the general public wary of big statements about technology, and the rise of mobile devices grabbed all the attention. I figured that since a mirrorworld was still a bridge too far, I'd better spend my time on related technologies. Since mobile devices seemed like a natural part of the trend towards a mirrorworld, I decided to join a research institute (TNO) and focus on that. Starting with C on Psions, going through J2ME, up to the introduction of the first iPhone. Those projects involved a lot of innovative software, including the first versions of cloud computing and social media. But after the initial innovative phase was over those technologies became business as usual and I lost my interest. Been there, done that, and I started looking for something else to work on. Something a little less bureaucratic and with different technological challenges.
After leaving TNO I joined an online ticketing company called Paylogic. Not that I'm super interested in ticketing, but the technological challenges they faced were in line with my personal interests. And as I learned over the years, the human aspect of making software is almost as important as the technical one. Some of the most interesting challenges I had to tackle at Paylogic had to do with questions such as how to build a high-tech team, how to choose the right technologies, how to grow a business, how to get VC funding, etcetera. After Paylogic was sold, innovation became of secondary importance, so it was time for me to move on.
Around 2014 some of the necessary technologies for a mirrorworld were becoming mainstream, most notably everything related to the Internet of Things (IoT). So I started an IoT company called XIThing with a couple of like-minded entrepreneurs. This kicked off yet another interesting period. Although the IoT predictions from established research firms were booming, the general market simply wasn't ready yet. As always, short-term predictions were overrated while long-term predictions were underrated. We worked on some truly interesting and innovative architectures for networks of sensors and devices. It certainly was a booming technology area. But with this rise grew the first concerns related to privacy, security and safety. Everybody was rushing to connect everything to the internet, but the architecture and core protocols of the internet were not designed with security, privacy or safety in mind. As an avid reader about the history of computing I was aware of the inherent limitations of humans when it comes to designing and implementing highly complex information systems. Something pointed out by most of the computer science pioneers, especially Edsger W. Dijkstra. Looking at the IoT trend with this in mind made me realise that a change was needed with regard to both the underlying technologies of the IoT and the ways we build them. Otherwise we would end up in a dystopian version of a mirrorworld.
During those contemplations an intriguing technology called 'blockchain' appeared on stage. It promised to enable information systems that didn't require third parties to formalise and verify transactions. This could potentially solve some of the security and privacy issues that arose with the advent of the IoT. I spent a lot of time during those early years figuring out how a blockchain works and how to make something useful with it. It provided me with valuable insights into the way information systems are built, how data should be treated to cater for privacy concerns, how distributed ledgers work and how smart contracts are programmed. Probably my most valuable insight was that if you want to build information systems that respect privacy, they should be built around contracts. Contracts that govern the exchange of data and keep the data ownership where it belongs. But, as I had learned before, it was only after going through all the effort of deeply understanding the underlying technology that I began to see the limitations of a blockchain. Most use cases would simply never work. This meant I had to tell most customers that came to us in dire need of a blockchain solution that it would never work. Even though I could explain there were ways to solve their problem with existing technologies that actually worked, customers seldom listened once they heard 'no'. The best lesson from that period was that the software industry is very much a human business, on both the developer side and the client side. The company I had set up in this space, Contracts11, wasn't able to stay in business. Trying to cut through the hype, create a real working product and sell it turned out to be very difficult. And with blockchain involved, even impossible.
With these newfound insights and lessons in mind I rethought the concept of a mirrorworld. Mindlessly applying every new technology that comes along was naive and irresponsible. Newspapers spilling over with examples of data breaches, privacy issues, hacked devices, fake news and the unwieldy power of a few tech giants were ample evidence that this was a real and pressing issue. To prevent marching backwards into the future while looking into the rear-view mirror, as Marshall McLuhan used to say, we had to address these issues up front. For this purpose I started the Web11 Foundation, a movement for building the better web required for a non-dystopian mirrorworld. The movement aims to provide a platform for sharing ideas about ways to build a better web. Avoiding the horseless carriage syndrome, 'the web' here is not just the bunch of web pages laymen might take it for, but the web that connects everything. One of the first insights was that an all-connecting web would allow all kinds of entities to participate in ecosystems. Technology could also empower those entities so they could act independently and the ecosystem could reach an optimum by itself. A powerful idea that is applicable to many of the ecosystem challenges we face today.
While Web11 is an ongoing community effort, I personally kept gravitating towards the concept of a mirrorworld and my passion for software engineering (especially AI) and creativity (3D, music). Where the rise of the IoT showed progress on the physical part of a mirrorworld, advances in the computer gaming industry were driving innovation on the virtual part. It was also a realm where all my passions and skills could be put to good use. I am not an avid gamer, but I am deeply interested in how those virtual experiences are constructed, because I realise those methods and technologies will be an important part of building a mirrorworld. And as the previously mentioned Steven Johnson convincingly argues in his 2016 book 'Wonderland: How Play Made the Modern World', almost all progress is driven by play. I expect this to be true for the implementation of a mirrorworld as well. So after selling my stake in XIthing and stopping Contracts11, I decided to submerge myself in the game industry and joined the Game Bakery cooperative with my company Media2B.
The game industry had long been a driver of that other technology of my interest, AI. Since around 2012 the applicability of AI had gotten a tremendous boost due to the availability and dropping price of massive amounts of data, processing power and storage. Although AI had been successfully used in games before, this explosion of possibilities opened up completely new ways of both creating and running/playing games. The trend of applying AI to generate new worlds was especially interesting, given its obvious applicability to building a mirrorworld, whether for constructing it or for running it. For me the final piece of the puzzle fell into place when I bumped into cutting-edge game development where scenes are described in plain English and an AI interprets the description to automatically generate a possibly infinite number of 3D worlds. Eureka! I realised that this was an instance of the general problem of designing information systems: the gap between domain experts, who have problems software could solve but cannot program themselves, and the software developers to whom they must explain those problems. This translation gap is by far the biggest cause of problems with software, whether with regards to reliability, price, security, privacy, you name it. And now it turned out that the gaming industry had been running into this very problem: it was becoming too labour-intensive to create games. Modelling everything by hand was no longer viable for bigger games, and bigger games are what the market wanted; modelling every small detail by hand would make a game prohibitively expensive. So the game industry first stepped up its game (pun intended) in the creation process. As in the general software industry, tools were introduced that work at an increasingly high level of abstraction.
Instead of modelling every leaf of a tree, something I remember doing during the early years of 3D I described above, those tools allow designers to specify a few parameters and generate a tree. This methodology is called 'proceduralism' and is one of the most active fields in the area of game development. It is almost needless to say that AI plays an increasingly large role in proceduralism. At the same time the game industry started experimenting with tools that allow designers to express their intent in the language they know best and generate the content from that. An example would be turning a sketch of a landscape into a 3D landscape, a moodboard of pictures being the source for the mood in a game, or dance expressing the behaviour of game characters. Or turning a description in plain English into a virtual world, as PrometheanAI does. Again, it is AI that makes it possible to extract the intent from the semantic description. As I argued before, AI is becoming the compiler that turns semantic descriptions into experiences. It closes the gap, providing complete freedom in expressing the intent and generating an unlimited number of variations of experiences from that. With that, the game industry is solving one of the biggest problems of our times and providing a very large part of the puzzle for constructing a mirrorworld. Apparently John Hanke (creator of the biggest AR hit so far, Pokémon Go) was right when he said, "If you can solve a problem for a gamer, you can solve it for everyone else".
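To make the idea of proceduralism concrete, here is a minimal sketch in Python. The function and parameter names are made up for illustration; real procedural tools such as those used in game studios are vastly more sophisticated, but the core idea is the same: a handful of parameters deterministically expand into a whole structure.

```python
import random

def grow(depth, length, spread, shrink, rng):
    """Recursively emit branch segments as (depth, length) pairs."""
    if depth == 0:
        return []
    segments = [(depth, length)]
    for _ in range(spread):
        # Each child branch is shorter, with a little random variation.
        child_length = length * shrink * rng.uniform(0.8, 1.2)
        segments += grow(depth - 1, child_length, spread, shrink, rng)
    return segments

def make_tree(seed=7, depth=4, length=10.0, spread=2, shrink=0.7):
    # The same seed and parameters always yield the same tree, so a
    # designer stores five numbers instead of thousands of hand-modelled polygons.
    rng = random.Random(seed)
    return grow(depth, length, spread, shrink, rng)

tree = make_tree()
print(len(tree))  # 15 branch segments from just five parameters
```

Tweak one parameter (say, `spread=3`) and a completely different tree comes out; that leverage is what makes proceduralism so attractive for big game worlds.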
So here I am, 25 years into my quest for a mirrorworld. You might question whether a mirrorworld will become 'reality' at all. It might just be another unrealistic pipe dream of a number of tech aficionados. But after all these years I have no doubt that this is where the world is heading. Of course, it will be different from what we envision with today's knowledge, it might take longer to get there, and the way we get there will probably be different, but the general direction is clear. And since we are going there, we should take our responsibility and make sure we build something good. Now is the time to do so, and I'm not the only one to say so. As I said at the beginning of this article, it was a recent article by Wired's Kevin Kelly called 'AR Will Spark the Next Big Tech Platform — Call It Mirrorworld' that inspired me to write this post. Since you've come this far in this post I urge you to read it carefully. Although I don't agree with everything, for instance his big focus on AI and his view on the usefulness of blockchains, the article touches upon so many relevant topics in line with my experiences and ideas that it would be too much work to quote them all here. Kevin Kelly has been one of the most respected thinkers in the realm of the digital revolution. In that role he co-founded Wired, in my opinion the best magazine on the topic, publishing quality content since 1993. Kelly has been covering the march towards a mirrorworld for as long as I've been involved in it. And now, after 25 years, he finally dares to state that we are about to embark on the journey towards this next big tech platform, saying, "it will take at least a decade for the mirrorworld to develop enough to be used by millions, and several decades to mature. But we are close enough now to the birth of this great work that we can predict its character in rough detail".
So yes, it is still early days, but now is the time to act to ensure we build the best possible mirrorworld we can. There is a lot at stake, but also a lot of excitement, wonder and fun to be found. I know where I am heading. In the end, a mirrorworld will be the next big, possibly the ultimate, mass medium, and that is where my story started. And that is why I called my company Media2B. Fortunately, in the mirrorworld there is always room for more, so why don't you join me on this quest to build this new medium? As Kelly says, "There are no experts yet to make this world; you are not late".

Rules, data, answers. How AI flips the flow.

As I said here, AI is becoming the new compiler. This will have immense consequences for the industry building digital information systems and environments. The most important, and most fundamental, change it will bring about is that the design and implementation of information systems will become one step that can be done by domain experts in the language they are comfortable with. It is basically a move up the abstraction ladder, empowering domain experts and removing the age-old gap between them and the software developer. I recently realised that there is a simple way to express the fundamental difference between the way we used to build information systems and the way AI enables us to build them in the future:

data + rules = answers   (classical programming)
data + answers = rules   (machine learning)
The nice thing is that since you start with the answers you are looking for, you get a completely new way of expressing yourself. Instead of expressing the rules (in other words, programming as we know it), you express the answer, and AI extracts your intent from that expression. This gives you a completely new way of building information systems. It won't, of course, change the way you build your webshop overnight, but the direction is clear. And now you have a clear way of explaining the difference to others as well.
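The flip can be sketched in a few lines of Python. This is a toy illustration only: a tiny least-squares fit stands in for 'the machine', and the function names are made up. But it shows both directions of the formula above: first rules producing answers, then answers producing a rule.

```python
# Classical programming: you write the rule, feed it data, get answers.
def rule(x):
    return 2 * x + 1

data = [1.0, 2.0, 3.0, 4.0]
answers = [rule(x) for x in data]  # [3.0, 5.0, 7.0, 9.0]

# Machine learning: you supply data + answers, the machine finds the rule.
# A simple least-squares fit plays the role of the learning algorithm.
def learn_rule(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

learned = learn_rule(data, answers)
print(round(learned(10.0), 2))  # the learned rule generalises: 21.0
```

The programmer never wrote `2 * x + 1` the second time; the machine recovered it from examples. That, in miniature, is the flow AI flips.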

Software professionals need titles

As a software craftsman who is proud of his profession I find it both disturbing and insulting to see so-called 'futurologists' misleading the general public with 'visions' that are both unfounded and unrealistic. Unfortunately the general public is unable to discern the boys from the men. Our profession is still too young for its roles to have been formalised and communicated to a wider audience. But now that our profession has permeated every nook and cranny of our daily lives, it is time to do so. The stakes are by now too high to have the blind leading the blind. The sorry state of our privacy, digital security and information sanity, and the vulnerability of the fundamental institutions making up our hard-won open and inclusive societies, have made that more than clear over the past couple of years. Even to the general public. It's time for the experts to lead the way, but that requires the general audience to know what is an expert and what isn't. In older professions that are relevant to large parts of the general audience this is solved by formalising roles and communicating them to a larger audience. Take doctors, for instance. Nobody in their right mind would consider asking their nephew for advice when they have pain in their chest. Even if that nephew is an enthusiastic healthcare hobbyist who has read all the latest newspaper articles about cardiology. They would go to a cardiologist because they know there is a difference between someone who is merely interested in a topic and a craftsman who went through long and thorough training to become a cardiologist. Imagine a cardiology conference where all the speakers were non-cardiologists with no formal training in cardiology whatsoever. Although this sounds absurd, it is exactly what is happening in our software profession. Since the general public can't tell the difference, the blind that lead the blind have been able to get away with it.
Even though I'm not too big on formalising roles, since it often leads to a misplaced installation of a formal hierarchy, I do think it is the only way to teach the general audience how to distinguish between an expert and a hobbyist. And as with doctors I propose to attach an ethical oath to the profession, obliging software engineers to stick to an ethical code and to take responsibility for their work. As Yuval Noah Harari said in a recent interview: "I think it's extremely irresponsible, that … you can have a degree in computer science … and you can design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you are doing".
I'm not aware of any effort in this direction, but I do think the time is right. We need to get the educational institutes on board and formalise both the titles and the way they are handed out. This would require making the titles legally protected to prevent their value from watering down due to abuse. And it would take communicating these titles to the general public. A route that will likely take many years, but will be worth the effort. In the meantime we can start explaining to the general audience that there is a difference between a craftsman and a hobbyist, and that while both are entitled to their opinion, the craftsman's carries a lot more weight. Just as with doctors.

Computer generated art? Old skool!

After having spent a couple of months pondering whether there is an area where I could combine my passion for computer science, AI, 3D, music and the cutting edge of the digital revolution, I decided to focus all my effort on tools that turn semantic descriptions of an experience into an actual experience. Whether it is real or virtual, and whether it is a game, a movie, a live event, or something completely different altogether. I will explain this in more detail in subsequent posts, but for now it suffices to say that I found a novel area where I can flex my skills and satisfy my curiosity. During this 'soul searching' period I took my oldest son on a trip to Bremen to hang out together for a few days in a wonderful town. While strolling through Bremen on the first rainy evening we passed the Kunsthalle Bremen, a museum dedicated to graphical art. A poster with that day's programme caught my eye, coincidentally announcing an ongoing exhibition called 'PROGRAMMIERTE KUNST. FRÜHE COMPUTERGRAPHIK' and a talk that night by someone called Frieder Nake. To be honest, I didn't know who that was, but I was intrigued by the description and persuaded my 14-year-old son to join me for an evening listening to a German-speaking professor at a museum. Quite a stretch for a 14-year-old boy, I can tell you. Both the exhibition and the talk were really great. It turned out that Frieder Nake was one of the pioneers of computer-generated art (together with Michael Noll and Georg Nees), going back to the early sixties! I was flabbergasted to learn that many of today's insights were already discussed so many years ago (and I'm no newbie to computer history). Of course, we have come a very, very long way, but some of the fundamentals were already there 55 years ago. What a coincidence to bump into this legend in such a way, and what a great way to find out I'm part of such a rich history.

AI vs humans: A matter of intent.

What surprises me in most of the ongoing AI discussions is how small a role intent plays in them. For me the biggest differentiator between human and artificial intelligence is the notion of intent. (Besides the non-transferability of the domain knowledge of an AI agent.) Where humans do everything with an intent, AI agents simply, well, execute an algorithm. Be it a neural network identifying whether there is a cat in a certain picture, a robot trying to fry an egg, or a program drawing pictures in the style of Van Gogh. AI agents don't want to do those things; it's the only thing they can do once they are initiated. On a much deeper level some would argue that humans are not much more than behaviouristic pattern recognisers, but that discussion is still highly philosophical and not very relevant at the current state of technology. In case you are interested in these matters I highly recommend reading everything from the likes of Daniel C. Dennett and Steven Pinker. Back to intent. It is like in art: it's not that hard to make a painting that looks like it's from Gerrit Rietveld, but the difference is that he had an intent when making a picture. He wanted to convey a message, tell a story, inspire, impress. For the same reasons there are almost no pieces of AI-generated music that really touch people's hearts, while every piece written by Bach does. Because Bach had an intent with his music, just like Rietveld had with his paintings. Once people understand, consciously or unconsciously, the intent of a certain piece of art they can relate to it much more easily and, thus, be touched by it. Agents can imitate and mix from a huge pile of content, but they don't do so with intent.
Surprisingly, and no pun intended, missing out on intent is also quite human. Everybody reading this must have been at a gallery at some point in their life, staring at an abstract painting, saying that their 5-year-old nephew could have created something similar. True, he could have drawn something very similar, but he wouldn't have done so from a similar intent, and that intent makes the difference between a piece of art and a doodle by a 5-year-old.
The same goes for AI agents when it comes to recognising things. They can only describe what has happened, not why it happened. They know nothing about the intent that might have caused the phenomenon. And if there was no intent, they have no way of seeing the bigger picture that might have led to the phenomenon. The latter is the other blind spot of the AI community: an AI can look no further than the data it was trained on. AI agents lack world knowledge, and transfer learning between completely different domains is still one of the holy grails of AI, and will be for decades to come.
So an AI agent can neither create something with intent nor recognise the intent driving a certain phenomenon. That doesn't mean they are useless, au contraire, but it is good to be well-informed about both the possibilities and the limitations of AI, and what causes them.

Know your stuff

When I studied Mass Communication at the University of Nijmegen in the early nineties my special interest was the up-and-coming computer networks (the web was still to be invented). I considered those networks to be the most promising medium ever devised and was very intrigued by the possibilities they opened up. Many of my fellow students and professors, however, were not that interested in technology; their focus was on the societal aspects of media. I was stubborn enough to push through, so my thesis was about the question whether the broadband television network would be suitable for what were called 'electronic services'. This was a valid question, since these networks were broadcast networks, meaning there was no route for a return signal. Making them usable for electronic services meant the network company had to invest a lot to make them two-way. For my research question I had to interview many different kinds of respondents: users, network operators, content creators, marketeers, networking engineers, software developers, etc. What struck me most when I finished the interviews was the difference in depth of knowledge between the technologists and the non-technologists. The former had a deep understanding of the technological basis and were able to translate that into a set of possibilities and impossibilities. The latter were basically starry-eyed dreamers with very little understanding of the ongoing trend and, thus, the possible future. That was when I realised that if I wanted to make a living out of exploring this new digital frontier, I'd better re-educate myself. After receiving my master's in Mass Communication I shifted to studying AI. I moved from an alpha to a beta study and basically never looked back. Looking back now, I can safely say that it has been one of the best choices in my life.
Having seen the digital revolution from both sides, I can say with confidence that a good grasp of the underlying technologies gives me far better insight into technology-related trends. And let's be honest, most innovations are (and have been) technology driven.

There are two recent trends where this became obvious. First of all, the rise of bitcoin and blockchains. As I've written extensively before, I spent a lot of time a couple of years ago on a technical deep dive into bitcoin and blockchains. Intrigued by the idea of an immutable ledger, I became curious about both the underlying technology and the possibilities of a blockchain. So I took the dive, and it was a lot deeper than I expected. A blockchain is a genuinely complex technology. But having a deep understanding made me realise in the end that almost all proposed use cases were either impossible, or easier and cheaper with existing technologies. Many of the technologists from the early days have come to this conclusion by now. Still, there is a very large group that doesn't know the technical ins and outs of blockchains and thus resorts to some sort of belief in those who sing the gospel. The latter often lack the technical expertise as well. Without this technological understanding it is very hard to really grasp bitcoin, blockchain and their (im)possibilities.
The second trend where you see this happening is AI. There is currently a lot going on in the area of AI, but while it certainly has many, many applications, there is also a lot of nonsense and ignorance. Many AI enthusiasts have drunk the kool-aid spread by the marketing departments of the tech giants, who coincidentally have a stake in keeping the hype going, and are often unaware of the limitations and dangers of AI, because recognising those requires a proper understanding of the underlying technology. How is it possible that a neural network can have a bias? Is an advanced general AI really possible? On what time scale? Is an AI's domain expertise transferable to another domain? Why not? It is these kinds of questions that will give you a clear understanding of where a trend is coming from and where it is going.
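Take the bias question. The mechanics can be shown in a few lines: the sketch below uses hypothetical, deliberately skewed data and a majority-vote 'model' that stands in for a real classifier, but the lesson carries over. A model has no notion of fairness; it faithfully reproduces whatever skew its training data contains.

```python
from collections import Counter

# Hypothetical, skewed training set: 90% of past decisions for group 'A'
# were approvals, but only 20% for group 'B'.
training = ([("A", "approved")] * 90 + [("A", "rejected")] * 10 +
            [("B", "approved")] * 20 + [("B", "rejected")] * 80)

def train_majority_model(examples):
    """'Learn' by majority vote per group -- a stand-in for a real classifier."""
    votes = {}
    for group, label in examples:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train_majority_model(training)
print(model)  # {'A': 'approved', 'B': 'rejected'} -- the skew becomes the rule
```

Nothing in the code mentions bias, yet the trained model systematically rejects group 'B'. Understanding this mechanism, rather than treating the model as an oracle, is exactly the kind of technological grounding the questions above require.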
As David Deutsch says, progress is the never-ending search for better explanations. And in our day and age this means we often have to explain the technological aspects of an explanation. So if you want to contribute to finding better explanations through creativity, conjecturing and critical thinking, you will have to take the technological deep dive so you KNOW YOUR STUFF.

The games industry is tough

Over the past couple of months I have immersed myself in the game development world because that is one of the very few environments that combine all my interests:

– 3D computer graphics (zbrush, procedural modelling)
– Music
– Software engineering
– AI (specifically ML and DL)
– Story telling (in both games and cinema)
– Play as the driver of innovation (as Steven Johnson brilliantly argues in Wonderland: How Play Made the Modern World)


Being a digital omnivore curious about all these fields, I figured it would be interesting to see whether I would fit in, but also whether the game developer world is interested in someone like me. Coming from more standard software engineering environments, one of the most refreshing experiences was that creativity is a core part of the daily routine of game development professionals. Discussions at the watering hole easily switch from parallelism in Rust to texturing in Substance Painter, and from the gameplay of the latest GTA to the merits of Unity for game developers. Besides, the game developers I met were all, without exception, extremely nice, funny, considerate, creative and ambitious. You're probably not surprised that I felt right at home.
But I also noted that the gaming industry is really tough. It reminded me a bit of the early days, when software professionals had to compete with the cousin of a customer who was "only 12, but really handy with tablets, and can make a website in Word in his basement for almost nothing". Especially the indie game industry is littered with young enthusiasts willing to put in insane hours just because they love games so much. For every position at a professional game studio there are tons of applicants willing to accept meager compensation for quite a demanding job in terms of complexity, creativity and effort.
The other fundamental reason the game industry is tough is that most studios are only as successful as their latest game. This is common in the creative industry, where it is really hard to create continuity. The only way to do so is to build a certain name in a certain market, but that requires having been successful a number of times before in a certain niche. Only very few studios succeed therein. This is part of the deal of being in the gaming industry and makes life pretty tough for most game developers.
Yet another reason it is tough is that it takes a considerable investment to make a game. It is not easy; it takes both technical and creative skills, and a lot of time, to make a great game. Indie game developers have to do everything, from coming up with the idea, the story, the visuals, the gameplay and the multiplayer mechanics to the promotion, the bugfixes, etc. The broad set of required skills also makes it unlikely that a game is created by an individual, adding the complexities of team building and cooperation to the mix.
To be honest, I find this regrettable. I would love to see all these nice, creative, skilled and ambitious game enthusiasts succeed with their dream, but the odds are sadly pretty low for the vast majority of them. Some might get lucky or are so exceptionally talented that they end up at a triple-A studio anyway, but most won't. Fortunately, many of the skills they develop are valuable for other industries as well, so they'll be fine if they're open to that. And I think they should be. Although I am all for following your passion, I can attest from experience that you can live quite a fulfilling life even when not all your passions are part of your daily job.

Slaying the monster advertising created

We have a big problem. Driven by the goal of harvesting attention to sell to advertisers, the tech industry figured out how to manipulate our behaviour, and now this methodology is being hijacked by a host of other agents with totally different goals. The consequences are far more severe than most of us realise, as Yuval Noah Harari and Tristan Harris point out in an interview called 'When tech knows you better than you know yourself'. Jaron Lanier said basically the same in a Wired interview called 'We Need to Have an Honest Talk About Our Data'.
There are two big mistakes we made. One is that the internet industry based its business model on advertising. A conscious decision mainly made by Eric Schmidt at Google, who pushed Larry Page and Sergey Brin in the advertising direction when they were still looking for a way to make money with their search engine. The problem with paying for services through advertising is that the service will always optimise for attention instead of other properties such as content, privacy, security, etc.
The other is that users of online services have been giving away their data for free, both consciously and unconsciously. This gave rise to the tech giants, who now have so much valuable data that it will be hard to topple them using regular entrepreneurial means. The advent of AI (especially deep learning) over the past few years makes this mistake even bigger. Before AI can replace us, it needs to learn from us. We have become so used to giving our data away for free (knowingly and unknowingly) that we fail to realise that this is no fair deal. The companies wielding the AI that learns from your data, and that will replace your job at some point in the future, are not compensating you for it. While they should. Using the methods and technologies of the advertising industry, they are used to, and able to, extract that information for free. Behind what initially looked like an altruistic attempt to enable online communication, the tech giants secretly played the role of the third party behind the scenes, manipulating behaviour to maximise economic return by harvesting attention and selling it to advertisers.
In other words, online businesses unwittingly created a monster while consumers let them, being asleep at the wheel. Case in point: we let Mark Zuckerberg build a machine that maximises our eyeballing of ads, and now this machine has been hijacked and turned into a general machine for manipulating our behaviour.
The solution
Killing advertising won't solve the problem. Although I do think Bill Hicks had a (funny) point in his 1993 show 'Arizona Bay' when asking members of the audience working in marketing to kill themselves (watch the video so you don't miss the all-important, and brilliant, intonation). Tongue in cheek, of course, but he did point out the problematic role of advertising in modern societies. Bill Hicks was surely one of the first to ask (albeit a bit bluntly) whether it was really necessary to put a dollar sign on everything. A question that has become increasingly relevant with the advent of ubiquitous, powerful and global information systems. Instead of killing advertising we should start considering other ways of paying for services. If a conversation needs to be valued in economic terms because the medium needs to be paid for, then there surely must be a better way than manipulating either or both parties into looking at ads. As Lanier points out, there are already examples that work, such as Netflix's subscription model. Netflix could just as well have chosen the advertising route à la YouTube, but fortunately it didn't. The massively important side effect is that Netflix is not in the business of behaviour manipulation in the sense that YouTube is.
Another obvious option is simply making users pay for the services they use. The problem has been that this requires micropayments and proper billing, but these are problems that have largely been solved. Actually, the early HTTP specification already reserved status code 402 ('Payment Required') for exactly this purpose, but it was never fully worked out and remains reserved to this day. And there are tons of other initiatives trying to solve the micropayments problem, from cryptocurrencies to alternative fintech solutions. Funnily enough, this problem has been tackled in Africa using what we would consider basic mobile technologies.
Regulation is also an important means through which we can regain control over our own data and limit the power of the tech giants. The GDPR was a first step in the right direction, but it doesn't provide the industry with a good alternative; it only states what is not allowed.
In my eyes one of the most promising approaches is a revision of the way we build information systems. I truly believe there is a better way to treat our data that is beneficial to both users and third parties. It can be achieved by building 'contracts-based information systems' and I have extensively talked about this idea before.
And finally, I believe the software industry needs to get its act together. I have spoken about professionalism and responsibility within the software engineering industry on many occasions, from ethics to semantic programming and from hacker culture to startups, but mostly to an in-crowd audience. So I am delighted to read that one of the best current-day philosophers (Harari) shares this insight: "I think it's extremely irresponsible, that you can finish, you can have a degree in computer science and in coding and you can design all these algorithms that now shape people's lives, and you just don't have any background in thinking ethically and philosophically about what you are doing. You were just thinking in terms of pure technicality or in economic terms." Spot on.
So, yes, we have a problem, but we also have options. We just need to start acting upon them.

Machine learning: 50 and high on the Gartner Hype Cycle

As I've recently written, Gartner's Hype Cycle is a misleading hype itself, one that doesn't follow its own projected trajectory at all! It seems to be at a perpetual 'peak of inflated expectations'. As they say, old dogmas die hard; the hype cycle is a case in point. It has no descriptive or predictive power and is thus misleading instead of enlightening. I predict that blockchain will be another technology that disproves the usability of the hype cycle (as I've written extensively before). But there are also technologies that outperform the hype cycle; the best current-day example is, in my opinion, machine learning. This article shows that it has been at the 'peak of inflated expectations' for a number of years now, which is odd. But even stranger is that the cycle completely misses the fact that machine learning has by now proven itself beyond any doubt. It has outperformed even the wildest expectations and is nowadays used in almost every conceivable information system. There are of course limitations with regards to applicability, and biases of algorithms that should be taken very seriously, but the number of useful applications is simply staggering. AI, and its sub-field of machine learning, is about 50 years old. It would be interesting to plot one of the most powerful technological trends of our era onto a hype cycle.

Creativity, conjecturing and critical thinking

As I said in my previous post, I've been reading a lot of books lately, and broadly speaking they are all related topic-wise.
My curiosity was sparked by the question what the most fundamental drivers of human progress are. The question came to me while pondering the current state of the world. Although there are valid reasons to be critical of media coverage of current affairs, it is obvious that our species faces a number of tremendous challenges if it wants to survive and live in harmony: climate change, erosion of truth, growing inequality, power centralization, the threat of nuclear warfare, cyber security, privacy, and so forth. Even compensating for the bias of media coverage, one can safely say that our species is facing a formidable challenge. Properly addressing it means figuring out where we are, how we ended up in this position and where it will lead us. Only insight into the fundamental driving forces behind these changes will give us the tools to make the right decisions and not go backward into the future. This boils down to answering the following three questions. As a species:
1) How did we arrive at our current state?
2) What is our current situation?
3) Where are we heading?
Thinking about this, there is no denying that a significant driver of change has been technological innovation, and its role will only grow. As this is my area of both expertise and interest, I started to wonder what mechanisms drove, and will drive, technological change. (Mind you, I'm talking about 'change', not 'progress'.) And that is not just a technical question.
Although the three questions above are very broad, there is a surprisingly lively and focused debate going on between the world's foremost scientists and thinkers. This debate is not raging on social media but is more like a question-and-answer game played with books. Although many more books participate in this 'debate', the following is the list of books I've read over the past months. First and foremost I should say that every one of these books greatly inspired me and satisfied my intellectual curiosity. The number of deep and novel insights in these books is simply staggering. One can only wonder in admiration how so much insight can be packed into a modest stack of paper.
The following books are, broadly speaking, all about the three questions I posed above. Although each takes a slightly different perspective, they are all by brilliant and well-respected scientists with a track record in both asking the right questions and (at least partly) answering them. I recommend reading them, or at least a summary, to enlighten yourself. I won't review them here, since that would take too much time while there are many great reviews to be found elsewhere.
Human universe by Brian Cox and Andrew Cohen
In his three bestsellers historian Yuval Noah Harari gives an incredible overview of the big picture from a historical perspective, even looking into the future. He gives an excellent overview and asks all the right questions, but he doesn't really attempt to answer the question about the most fundamental driving forces. Steven Pinker, in his tour de force, shows his readers (with tons of facts) that the human species is doing fine thanks to the powerful mix of reason, science and humanism that Western societies have adopted since the Enlightenment. Like Harari, Pinker is more focused on describing than on explaining. In what is probably the most creative and deep book on this list, theoretical physicist David Deutsch shows his readers that the search for 'good explanations' through creativity, conjecture and critical thinking has been the fundamental driving force. A force that emerged during the Enlightenment and is so powerful that he considers it to be the 'beginning of infinity' for our species.
The deep insight I took from these books is, in accordance with Deutsch's view, that the proper combination of creativity, conjecture and critical thinking is the only real driver behind all the positive change our species has gone through, and will be the main force shaping our future. Combined, these books give ample evidence supporting this insight.
With that in mind I started wondering about the current state of this almost holy trinity. And to be honest, going by the latest societal developments, its appreciation is currently not in good shape. In other words, all three could use some love. Creativity, for instance, is not widely regarded as a core part of fundamental research in science. An interesting book describing how technological progress is almost always inspired by play is Wonderland: How Play Made the Modern World by respected technology writer Steven Johnson. As a probably unintended side effect, his book convincingly argues that creativity is at the very heart of progress. A case also made by Amy Wallace and Edwin Catmull in their somewhat more lighthearted book Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration. Creativity should thus be taken more seriously.
Proper conjecture is often mistaken for daydreaming, flights of fancy or even unscientific behaviour. It is deemed an unworthy way of working towards a real goal, while on closer inspection it is a paramount step in the process of arriving at good explanations and realizing real progress.
Critical thinking is probably the most widely neglected skill of our current day and age. One has to have been living under a rock for the past few years to still think the capacity for critical thinking is healthy in most societies. It is almost impossible to avoid blatant examples of uncritical thinking in our daily lives. In my opinion it is one of the most valuable skills one can obtain. A judgement echoed throughout another interesting book I've been reading lately, about the scientific concepts leading scientists think every human should master: This Will Make You Smarter: New Scientific Concepts to Improve Your Thinking. It is a collection of articles from top scientists, edited by John Brockman of Edge.org.


With the above in mind I came to the conclusion that the most valuable thing one could currently do is to encourage everybody to improve their creative, conjecturing and critical thinking skills. By now I am convinced that is the only way to truly help our species. For now and into infinity (and beyond).

Not all messages are created equal: The fundamental problems of social media

I've been reading a lot of books lately and I'll tell you why. It's been a conscious decision to direct my intellectual curiosity at a specific area instead of letting it wander all over the place. The latter is what happens when you let social media dictate your daily information diet. I've always had a love/hate relationship with social media. Having been one of the earliest users of the initial crop of social media, I never really felt at home there. After spending quite a lot of time on social media and thinking about their place in the information universe, I concluded that there are two fundamental problems.
First of all, social media don't 'weigh' messages. All messages are created equal and it's up to the reader to give weight to them. Users have to work out for themselves how important a certain message is to them. The result is that social media promote chitchat from remote connections with the same vigor as deep, emotionally relevant messages from close relatives: a cacophony of messages from which it is extremely hard and cumbersome to pick out the valuable nuggets. It's like listening to music in which each note is played at the exact same loudness. It might be interesting for a minute, but it quickly becomes boring and exhausting.
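To make the first problem concrete, here is a minimal sketch of what a medium that does weigh messages could look like. The messages and the 'closeness' scores are entirely made up for illustration; the point is only that ranking a feed by some weight, rather than treating every message as equal, is trivially possible.

```python
# A hypothetical feed where each message carries a closeness score (0..1)
# expressing how close the sender is to the reader.
feed = [
    {"from": "old classmate", "closeness": 0.1, "text": "Look at my lunch!"},
    {"from": "sister",        "closeness": 0.9, "text": "Dad is in hospital."},
    {"from": "colleague",     "closeness": 0.4, "text": "Meeting moved to 3pm."},
]

# Rank by closeness instead of recency: important messages surface first.
ranked = sorted(feed, key=lambda m: m["closeness"], reverse=True)
for m in ranked:
    print(f'{m["closeness"]:.1f}  {m["from"]}: {m["text"]}')
```

Real platforms would of course need a far richer model of social distance, but even this toy version would stop the chitchat from drowning out the messages that matter.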
The other problem arises when you want to share a message via social media. In real life you carefully consider with whom to share your message, not only for privacy reasons, but mostly because you don't want to bother all your social circles with every message you send out. Physical reality obviously helps in constraining the audience for your message, but that border disappeared in the virtual realm. This is of course one of the main causes of the first problem. Interestingly enough, this was one of the key insights Google+ truly understood, and it gave the product a head start. Unfortunately Google has proven over the past decades that it refuses to take its existing user base seriously and is unable to maintain a product beyond its beta stage (the list of killed products is as staggering as the list of Google products that are still being supported after more than a year). But I digress.
After realizing social media are inappropriate for both consuming and sharing knowledge, and after having lost focus in satisfying my intellectual curiosity, I decided to skip social media and carefully pick books, podcasts and documentaries instead. I must say it has been a soothing experience. Of course you miss some of the social chatter, and the occasional relevant insight or article, but on average it has given me more peace of mind (to be honest, I get the feeling that social media consumption doesn't make anybody happy). My regained focus and intellectual deepening have really improved my life.
You might be wondering why I share this on my blog, which some might consider a social medium. I don't. This blog is where I write down ideas and questions related to my professional life. Of course my professional life is never fully disconnected from my personal life, but the topics of my messages here fall within a specific area and are therefore only interesting to a specific audience. You could say that my blog is a specific medium for my messages, while social media, as most people practically use them, are a general medium for any message. In my opinion Marshall McLuhan was right in concluding that the medium is the message, certainly when looking at the relation between a medium and all the messages it transmits. An interesting topic, but something for a separate post.

Procedural CGI & the semantic programming parallel

One of my main interests within the realm of the digital revolution has been the progression of the craft of software engineering. The craft has moved to higher abstraction levels over the past, say, 70 years: broadly speaking, from machine code to assembly to C-like languages to OO languages to the current crop of high-level scripting languages. Although there have been many variations along the way, the main trend is that programming moved to higher levels of abstraction. Instead of handwriting zeroes and ones, software engineers now have languages and tools that allow them to specify, for instance, classes and inheritance, and let the compiler turn that into machine-executable instructions. As I've written before, one of the questions that has been puzzling me over the past few years is why this progress seems to have stalled in the 90's. At that time the industry had quite a lot of experience with higher-level languages such as COBOL (!) and there was a lively discussion surrounding 5th-generation programming languages. But then the web happened and all attention was directed at writing as much software as possible to explore this vast new realm of undiscovered opportunities. Somehow that lured attention away from fundamental progress towards the latest and greatest in web development. Everybody was in a hurry to explore this new world, and technologies were invented, and often reinvented, at breakneck speed. Today's most widely used programming language, JavaScript, was for instance developed in just 10 days at that time.

There is a problem with this stagnation, though. The demand for new software is rising so fast that there simply aren't enough humans to do the job if we stick to the labour-intensive abstraction level the industry is currently stuck at. There is already a huge shortage of software engineers; it is actually one of the main limiters of growth in Western industries.
But there is another problem. The low-level work is prone to errors, while at the same time the importance of information systems for society at large is increasing. In other words, our lives depend on software at an accelerating rate while we are developing that software in an archaic and error-prone way. The only way out of this situation is to move the job of software engineering to the next level of abstraction. This means that domain experts should be able to express exactly what they want, after which a compiler generates the information system that does just that. This has been called 'semantic programming' in the past, and it is in my opinion the only idea that can move the software industry forward and prevent a lot of problems. Fortunately there is light at the end of the tunnel. In a recent post I mentioned my idea that AI seems to be the surprising candidate enabling the step towards the next level of abstraction. While in traditional information systems developers tediously wrote detailed instructions for how to mangle given input to generate the required output, with AI developers just specify the required output for a certain input and let the system figure out the intermediate steps itself. This is in line with the ideas of semantic programming.
Interestingly enough, there is also light at the end of another tunnel with strikingly similar properties. After diving into the world of computer graphics (CGI), game development, AR/VR and animation, I realised that this is an industry where the exact same type of transformation is taking place. Early CGI work consisted of painstakingly typing out coordinates to, for example, draw a wireframe of a doll. Subsequently, tools were developed that allowed artists to draw with a mouse and automatically tween between keyframes. Over the past decades the tools became ever more sophisticated, especially in the notoriously complex world of 3D CGI, each step gradually freeing artists from tedious, repetitive tasks and allowing them to focus on expressing what they actually wanted. One of the biggest trends in the CGI industry is the move towards proceduralism, where things like textures, geometry and even complete worlds are generated by procedures instead of by hand. Take a look at how the 3D modelling (if you can still call it that) software Houdini has been used to procedurally generate environments for the latest episode of the Far Cry series. The artists of Far Cry no longer have to draw every leaf or every road by hand; they specify the properties of their required worlds at a high level, after which the algorithms of software like Houdini generate them. You could say that software like Houdini is becoming the compiler, just as AI is becoming the compiler for information systems (as previously discussed). The drive towards proceduralism in the CGI industry stems from the wish to focus on the high-level picture rather than the low-level details, but also from the need to create increasingly complex worlds that would be impossible to build by hand.
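To see proceduralism in its simplest possible form, here is a small sketch (plain Python, with no relation to Houdini's actual node-based workflow) that generates a 1D terrain profile via midpoint displacement: the artist specifies only a few high-level properties (endpoints, roughness, level of detail) and the procedure fills in every individual point.

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=42):
    """Procedurally generate a 1D terrain profile between two endpoints."""
    rng = random.Random(seed)
    points = [left, right]
    spread = roughness
    for _ in range(depth):
        refined = []
        for a, b in zip(points, points[1:]):
            # Displace each midpoint by a random amount that shrinks at
            # finer scales, giving a natural-looking jagged profile.
            refined += [a, (a + b) / 2 + rng.uniform(-spread, spread)]
        refined.append(points[-1])
        points = refined
        spread *= roughness
    return points

terrain = midpoint_displacement(0.0, 0.0, depth=6)
print(len(terrain))  # 65 points generated from just four high-level parameters
```

Four parameters in, 65 hand-free points out; scale the same idea up a few orders of magnitude and you get entire game worlds.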
I find this parallel intriguing. (Somehow my brain is wired to relentlessly look for parallels between separate phenomena.) Understanding the underlying driver enables us to see where our industry, work and tools are heading. And it gives the creative mind a glimpse into the future.

Questions and ideas

I might have answers, but I sure have questions!
I've been thinking quite a lot about what I should and shouldn't entrust to this blog. I don't want to share too much personal stuff for obvious reasons (in case you wonder: we had social media for that, and that went awfully wrong; besides, if you want to know me more personally I suggest we meet up in real life). Neither do I want the blog to be nothing more than a place where I only forward the ideas of others (again, we had social media for that…). As for the topics: I like techy stuff, but as most of you know I'm a firm believer in the ecological approach to handling a topic, meaning that something without context has no meaning, so the context must be included. Even if the topic is highly technical. All in all I came to the conclusion that I want this blog to be the place where I share both my questions and my ideas with regard to the broad field of the digital revolution. I honestly think that enforces a focus and keeps it interesting for readers, while still being worthwhile for me as a place to ask my questions and dump my ideas. So from now on I'll ask myself whether a new post is about one of my questions or ideas (or both). Ping me if I start to slack! 🙂

Run Jupyter notebooks on hosted GPU for free: Google Colaboratory

I've been refreshing my AI skills lately, since they were a little rusty. After I got my master's in AI in 1998, during which the historic victory of Deep Blue over Kasparov took place, the next AI winter set in. For about 15 years AI was about as sexy as Flash is now. But with the well-known advances in processing power and the staggering drop in storage prices, AI, and specifically machine learning, became viable. Actually, at the time of writing, machine learning specialist is probably the most sought-after specialism. So it was time for me to get back into the AI game, mostly to satisfy my curiosity about what the new state of the art was and what new things could be done. I mean, over the past years we've seen self-driving cars, robots, algorithms for social media, all kinds of medical applications, etcetera, come into being solely because of the advances in AI and the needed hardware/software infrastructure. In other words, the AI playground was revived and full of enthusiastically playing children. The first thing that surprised me is that although there have been a lot of refinements, there are not many fundamentally new technologies. Of course, you run stuff in the cloud, there are more neural net variations, and the time it takes to train a network has decreased dramatically. But the underlying principles are still largely the same. Another thing that didn't change is the hoops you have to jump through to get your stuff up and running. It is a bit like the current web development situation (those in the know will nod their heads in understanding). Instead of focusing on the algorithms, you spend most of your time trying to get your toolset installed and your build/deploy pipeline up and running. And that's a bad thing. Fortunately I stumbled across a hidden gem called Google Colaboratory. It's a service that lets you run Jupyter notebooks on a hosted GPU… for free!
If you want, you can store the notebooks themselves on Google Drive, or, if you don't want that, load them from elsewhere. That is quite amazing, and a real boon for those who want to get up and running with machine learning as soon as possible. The amount of resources you get for free is, of course, limited, but it's more than enough to experiment with your data processing pipeline and to design, train and test your models. Once you're content with your trained model, you can take it to beefier hardware if needed, or train it on huge training data sets. All in all, quite an amazing service that will benefit the machine learning community a great deal. The nice thing about Jupyter notebooks is that you can take them elsewhere and run them there. You are in no way tied to Google, which is a good thing.
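For those who want to check that their notebook actually got a GPU: inside a Colab notebook one would typically query the driver. The sketch below (plain Python, standard library only) shells out to nvidia-smi and degrades gracefully when no GPU is visible, so it also runs on a CPU-only machine.

```python
import subprocess

def gpu_name():
    """Return the name of the first visible NVIDIA GPU, or None."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=10,
        )
        names = [line.strip() for line in out.stdout.splitlines() if line.strip()]
        return names[0] if names else None
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return None  # no driver installed, e.g. on a CPU-only machine

print(gpu_name() or "No GPU visible (in Colab: Runtime > Change runtime type)")
```

If this prints no GPU in Colab, you simply haven't selected a GPU runtime yet.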

AI is the new compiler

What I think is one of the most interesting trends of the moment goes unnoticed by the general public, but surprisingly also by the majority of software professionals. Now that the latest AI winter is over (there have been a few in the past) and every self-respecting information system has at least a bit of AI in its bowels, the way we design and implement information systems is drastically changing. Instead of programmers writing down the exact instructions a computer must execute, machine learning specialists specify and modify both the input and output (I/O) of the system and leave it to AI to find an algorithm that maps one to the other on a training and testing set of data. (As a side note, it is important to realise that the resulting algorithm is very hard to understand, and so are its predictions on unknown input data.) AI thus compiles the requirements into an executable algorithm.
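A toy illustration of this shift, with the data made up for the occasion: instead of hand-writing the rule y = 2x + 1, we hand the machine a handful of input/output examples and let it recover the mapping itself. Here a closed-form least-squares fit plays the role of the 'compiler'; real systems use far richer model classes, but the workflow of specifying I/O rather than instructions is the same.

```python
# The "specification": for each input x we state the desired output y.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

# "Compile" the examples into a program via least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)        # recovered program: y = 2x + 1
print(slope * 10 + intercept)  # prediction for an unseen input: 21.0
```

Nobody wrote the multiplication-and-addition rule down; it was inferred from the specified I/O, which is exactly the inversion described above.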

I have been interested in ways to improve how we design and implement information systems for a long time. It is my belief that the current state of the art in software engineering is temporary and that we must move on to improve our profession for economic, safety and societal reasons. Our current way of working is very cumbersome and leads to a lot of problems, while at the same time software is controlling an ever-increasing part of our daily lives. There is a long and very interesting history of software development from the 1940's up to the present day that I have been following for many years. If there is one continuous thread, I would say it's that the software engineering profession keeps moving towards higher levels of abstraction: from machine code to assembly to C to C++ to Java to 'scripting' languages. But it seems the industry got stuck somewhere around the beginning of the 1990's. When the industry exploded with the advent of the web, so did the number of tools, but none of them operated at a fundamentally higher abstraction level than languages such as C. So what we got was basically more of the same. Unfortunately, 'the same' turned out to be not good enough for our fast-changing world. We simply don't have enough software developers to keep up with global demand, while at the same time the stakes are rising for every piece of new software that gets released. Admittedly, there has been research into more 'semantic' programming languages, but none of them left the academic realm to conquer the software industry.
With the advent of AI, something interesting happened, though. As said, traditional software engineers writing down machine instructions are slowly being replaced by machine learning specialists selecting the right estimator and specifying the I/O. The latter work at a higher semantic level: they are concerned with the properties they want the algorithm (system) to have, not with the instructions the machine should execute. This is a fundamental difference that fits neatly with the long historical trend of the software profession moving to higher semantic levels. Of course the machine learning approach does not fit every use case; there will still be many systems that must be specified procedurally (or functionally, if you prefer), but the variety of use cases for machine learning surprised me (even as an AI veteran).
With machine learning specialist being the most sought-after profession at the moment, my guess is that this trend is just picking up steam and that we're only starting to scratch the surface of what I believe to be a fundamental change for the software industry at large.

Hype cycle considered harmful

As mentioned in my previous post, the downfall in popularity of blockchains is considered to be perfectly in line with Gartner's well-known Hype Cycle. But there are so many fundamental problems with that cycle that it's dangerous to assume a bright future for your technology of choice solely by applying the hype cycle 'theory'. The most obvious problem, as pointed out here as well, is that the vast majority of technologies fail, despite having gone through a period of hype. After the hype they just die. It is incredibly naive to think every technology will pick up steam again after the general public has lost interest. Besides that, many successful technologies never go through a 'trough of disillusionment', just as many technologies never become a hype while still becoming extremely successful in the long run. Sure, a few technologies followed Gartner's hype curve perfectly, but most didn't. So predicting that every technology will follow the curve is misleading and could be harmful.

Blockchains: It’s not even funny anymore

Halfway through 2017 I wrote quite an extensive article about our adventure in the world of blockchains. By that time my colleagues and I had spent over three years deep-diving into both the technical and societal aspects of blockchains: how we went from awestruck by the sheer genius of Nakamoto’s idea to disillusionment with the applicability, the engineering standards, and the cult of ignorance fostered by the ‘blockchain visionaries’. We built companies, products, communities and technologies to get this blockchain revolution going, but we ended up concluding that the only way you can earn a living with blockchains is by either talking about them or building proofs of concept. So we spent years trying to tell the truth about the true potential of blockchains, but we were regarded as party poopers. The audience simply didn’t have the technical frame of reference to understand why most use cases would never work. Their response basically boiled down to: “All fine and dandy, all this technical mumbo jumbo of yours, but look at all these billions and billions and billions and billions and billions and billions and billions of dollars spent on it, and all these incredible nerds, and all these successful entrepreneurs. They can’t all be wrong, can they?”. Read all the details in the article, but in the end we abandoned ship and never looked back. Over the past years I haven’t been following blockchain news, I haven’t visited conferences, and I haven’t talked to companies that wanted to ‘do something with blockchains’ (I spent my time reading books and learning skills in an area that actually does have a future: AI). In the meantime blockchains have apparently entered the mainstream. It’s on the news, it’s in the newspapers, it’s on the website of every big corporation and it’s in at least one slide of every middle manager’s presentation.


But a change has been brewing over the past year in the technically more advanced circles. As is often the case, the ones with a thorough understanding of the subject matter are the only ones that can grasp the true potential of a technology. (This insight was, for me personally, the reason to complement my alpha master’s with a beta master’s and a PhD.) The word ‘blockchain’ nowadays draws a wry smile on their faces: having heard so many stories from uninformed ‘visionaries’ that the revolution is nigh and everything will change, the only reaction to so much naïvety they have left is to laugh. I personally got so fed up that about a year ago I presented my tongue-in-cheek game “Berco Beute’s Blockchain Bullshit Bingo”. On the bingo card were all the cliché statements every apostle of the church of blockchain utters. For example:
– It’s going to disrupt EVERYTHING.
– It’s following the Gartner hype curve, but it will be successful in the long run.
– We don’t need trust anymore.
– Nor trusted third parties.
– Bitcoin will replace fiat currencies.
– Etcetera…
So the word ‘blockchain’ and its cult following are slowly becoming the laughing stock of the industry. Mind you, I’m not talking about the technologists working on it simply because they love interesting technical puzzles; I’m talking about the ‘visionaries’, unhindered by knowledge, who fail to apply some modesty to their behaviour. At the time of this writing, even mainstream media are starting to question whether blockchains could ever fulfil all the promises the visionaries have attributed to them. Yesterday the Dutch newspaper De Volkskrant published an extensive write-up about the failure of all the blockchain projects to come up with truly viable solutions. And even the projects that lived past the ‘pilot’ phase are so simple that most experienced software professionals would agree they could be implemented much faster and cheaper with other technologies.
Although this recent trend makes for some good laughs and a few told-you-so’s, we shouldn’t dismiss it so easily. Not because of the promise it still holds, but because of the damage it has done. “Damage it has done!?”, you might react. Yes, let me explain. The blockchain is a hype that deserves its own category solely for the sheer amount of resources it has consumed over the past decade.
1. Energy. The proof-of-work algorithm by now consumes about as much energy as Ireland, or roughly twice as much as mining copper and gold. Combined. What we get in return is bitcoins, a mysterious entity that’s neither money nor gold, but has proven itself to be an effective way to buy nerds around the globe a couple of Teslas (each).
2. Money. Companies, investors and governments have invested billions and billions of dollars in blockchain technologies over the past years. A truly staggering amount of money has flowed to startups, ICO’s, consultancy companies, schools, etc. The awkward aspect of this is that blockchains were invented to create money (bitcoin), not vaporise it.
3. Brainpower. Likely even bigger than the energy consumption is the brainpower that got sunk into the intellectual black hole of blockchain. Sure, there have been a few technological innovations, but they pale in comparison to the amount of intellectual effort that led to nothing. The buzz, the billions and the technological marvel attracted many of the greatest brains of our era.
Think about it: these resources combined could have been invested in healthcare, fundamental research, battling climate change, fighting cancer, and many other truly important causes. Instead they were spent on a pipe dream. A troubling conclusion, for which I think those responsible should be held accountable. All these so-called visionaries that got rich by misleading the general public with smoke, mirrors and visions of the promised land should repay the societal damage they have caused. Although I realise it is highly unlikely this will ever happen, I do think it’s important that this message gets out. As I said, it’s not even funny anymore.

Voice interface hype

Despite all the buzz surrounding smart home assistants such as Google Home, Amazon Alexa and Apple’s Siri, I still think the expectations with regard to voice interfaces should be scaled back. This VentureBeat article has some nice background reading. For instance, we’ve had voice-to-text for decades now and still almost nobody uses it. Who do you know that dictates their messages? And if you know someone, are they the odd one out or one of many? While a few decades ago you could still blame the inferior technology for the low adoption rate, nowadays speech recognition has become so good that that excuse is no longer valid. And still almost nobody regularly uses a voice interface. To be honest, I don’t think it will ever be really successful, and there is a very simple reason for that. When interacting with information systems, users are mostly in situations where talking is not very convenient. They are in the company of others they don’t want to disturb, or they don’t want eavesdroppers, or they’re in noisy surroundings (public spaces), or they simply want to take some time to order their thoughts before blurting them out. If you think about it, there are actually not many situations where users would feel comfortable talking out loud to a computer.

As author Neal Stephenson pointed out in his insightful 1999 book “In the Beginning was the Command Line”: don’t just throw away old interface paradigms when a new one comes along, because you might misunderstand what made the original paradigm so successful. The same goes for the envisioned switch from text to voice interfaces. To paraphrase an old saying, “A wise man once said… nothing… but typed it in”.

The important question of data ownership

Over the past months I’ve read four books that are largely about the same theme: where we (as humans) come from, what our current situation is, and what our future could be. The books are Yuval Noah Harari’s ‘Sapiens’, ‘Homo Deus’ and ’21 Lessons for the 21st Century’, and Steven Pinker’s ‘Enlightenment Now: The Case for Reason, Science, Humanism, and Progress’. These books currently lead my list of best books I’ve ever read (with probably a slight win for Steven Pinker) and I encourage everybody to read them. These writers do an amazing job of sketching out the big picture for us humans, clarifying where we’re coming from, where we are and where we might be heading, while at the same time asking all the right questions all of us should be asking. I’ll come back to those books in later posts, since there are so many ideas in there that relate to my mission and the questions I have, but I want to pick out one insight that is particularly relevant for this blog. Both authors mention it, but Harari says it most clearly in ’21 Lessons for the 21st Century’: the most important question we have to answer is ‘who is going to own the data?’. More details on this later, but the negative and unfixable consequences of answers like ‘I don’t know’ or ‘the big tech CEOs’ are impossible to overestimate. Harari ventures to say that this is the single most important question to be answered in the history of humankind. And it’s a question whose time has come and which WE have to answer. Let that sink in for a moment. As Harari points out, data is the resource that will set the global balance of power and over which wars will be fought. He warns us not to make the same mistake Native Americans, for instance, made when they were deceived by imperialists with beads and gold. In other words: realise what you’re giving away and understand the consequences it might have.
One of the biggest problems I see is that most people have no idea what data is theirs, why this matters, and what they should do about it. It is the old human tragedy of valuing short-term gains higher than long-term losses. So I’m not convinced we should depend on humans to make the right decisions. The only option we have is to design media that respect these data-ownership requirements, and to make sure those media are available and easy to use.
In one of my previous companies (Contracts11) we worked on a solution that I think deserves more attention. Our idea was a new way to build information systems that respected the following requirements:
1. There is only one source of every piece of data
2. Data ownership is clear for everybody
3. Data exchange is always governed by a contract
Ad 1. This is one of the deep insights from George Gilder’s “Telecosm: The World After Bandwidth Abundance” (2002): given enough bandwidth, copies are no longer needed; a single source suffices. The practical consequences of this insight are quite amazing if you think about it, but one of the most profound is that it becomes easier to assign ownership and stay in control. This might have seemed like a pipe dream over the last decades, but we are slowly but surely moving in this direction. The migration of almost every conceivable software service to the cloud is unstoppable: music (Spotify), games (Steam), films (Netflix), productivity applications (Office 365, Google Suite, Photoshop), etcetera. It already doesn’t make much sense anymore to buy a multi-terabyte laptop (although they are available on the market), since you won’t store any films, photos or applications on it anymore. A clear sign that we are heading towards the world George Gilder described. You could say that files are replicated in ‘the cloud’, but conceptually you’re dealing with one piece of data: there is one access point to it, and one owner.
Ad 2. Possibly one of the biggest tragedies of the last decade is that it has been unclear who owns which data, and the big tech companies (Google, Amazon, Alibaba, Facebook, etc.) stepped forward and claimed their turf. A bit like the British colonising many parts of the globe by ‘the cunning use of flags’ (as hilariously pointed out by comedian Eddie Izzard). The natives were so impressed by the flag and the free beads, gold and email/chat/doc services that they gave away something whose worth they only realised later. By then it was simply too late to turn back the clock and set the record straight.
Ad 3. To prevent misuse of data, rightful owners should be able to force everybody to play by the rules. Fortunately there is an institution that was designed for just that: the nation state. With its trias politica, the nation state has a mechanism to create, set and enforce rules through politics, the military and the legal system. The only thing owners of data have to do is set the conditions under which their data can be used and what will happen if others don’t abide by those rules. This can, obviously, be laid down in a contract.
So what we built at Contracts11 were contract-based information systems. They consisted of data sources whose ownership was clear, which contained original data (instead of copies), and where every data exchange was governed by a contract. For every use of a certain piece of data, consumers had to sign the contract, and if they misused it they could expect legal consequences. What the contract enforced was of course up to the owner of the data, but it would generally state requirements such as:
– The owner of the data
– Purposes (processes) for which the data might be used
– Who could use the data
– Whether the data could be temporarily stored by the consumer
– Which court of law would be used in case of a dispute
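A contract along these lines can be sketched as a simple data structure with a permission check. This is a hypothetical illustration of the idea, not the actual Contracts11 implementation; all field and function names are made up:

```python
from dataclasses import dataclass

# Hypothetical sketch of a data-exchange contract covering the
# requirements listed above; field names are illustrative only.
@dataclass(frozen=True)
class DataContract:
    owner: str                        # the owner of the data
    allowed_purposes: tuple           # processes the data may be used for
    allowed_consumers: tuple          # who may use the data
    storage_permitted: bool           # may the consumer keep a temporary copy?
    jurisdiction: str                 # court of law in case of a dispute

    def permits(self, consumer: str, purpose: str) -> bool:
        """Check whether a given consumer may use the data for a given purpose."""
        return consumer in self.allowed_consumers and purpose in self.allowed_purposes

# Example: an address record whose owner allows one webshop to use it
# for address verification only, without storing a copy.
contract = DataContract(
    owner="Jane Doe",
    allowed_purposes=("address verification",),
    allowed_consumers=("webshop.example",),
    storage_permitted=False,
    jurisdiction="Amsterdam",
)
```

The point of the sketch is that the permission check travels with the data itself: the consumer reads the contract, signs it, and any use outside `allowed_purposes` is a breach with legal consequences.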
We took away a number of insights from the pilot projects we did. First of all, it turned out that such a contract-based system was no less user-friendly than regular applications. On the contrary: instead of having to fill out forms for e.g. an address (with the chance of making spelling errors in copies of that data), users could simply read the contract and check a box.
Another insight was that having no copies had a large number of unforeseen positive consequences, because it became increasingly clear that ‘big data’ was more often a burden than a blessing. You had to store, maintain, protect, clean, copy and back it up, while at the same time it was unclear why you needed all that data in the first place.
This led to another insight: such contract-based information systems force data consumers to request only the data they really need. In other words, they would rethink their processes and model them in such a way that they would lead to the desired state with as little data as possible. We called this ‘data minimalism’ and often explained it with the Albert Heijn sperziebonen example (sperziebonen are green beans; sorry if you’ve never been to The Netherlands). AH has been tracking most of its customers for many years via its bonus card. This card is an excellent example of consumers letting short-term gains prevail over long-term losses because they don’t fully understand what they are giving away. The strange thing is that, in the name of ‘big data all the things’, AH has been harvesting an incredible amount of information, while the answers it needed were often pretty simple and could be asked directly, without this whole ‘big data’ circus. It might for instance be interested in whether you (as a customer walking into the store on a Wednesday) would be interested in sperziebonen. Your answer would be a simple yes, no or maybe, and that would have been enough for AH to fire off some process of, for instance, actually offering you sperziebonen at a special price. No need to collect all kinds of data and process, maintain and protect it.
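The data-minimalist version of the sperziebonen example can be sketched in a few lines: ask the one question you actually need answered, act on it, and store nothing. This is a hypothetical illustration; the function and strings are made up and have nothing to do with AH’s real systems:

```python
# Hypothetical sketch of 'data minimalism': instead of mining years of
# purchase history, ask the customer directly and keep no profile.
def offer_green_beans(ask_customer) -> str:
    """ask_customer is any callable returning 'yes', 'no' or 'maybe'."""
    answer = ask_customer("Interested in sperziebonen (green beans) today?")
    if answer == "yes":
        return "offer sperziebonen at a special price"
    return "no offer"

# A one-off question stands in for the customer; nothing is persisted.
result = offer_green_beans(lambda question: "yes")
```

The design choice is that the process consumes the answer and then forgets it: there is no database to store, clean, protect or back up, yet the desired state (a well-targeted offer) is still reached.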
This approach is a practical solution that could help answer the all-important question of ‘who owns the data’, as stated at the beginning of this post. It shows that we have to think about, and work on, the medium through which the information flows. It should enforce proper behaviour, adapt to its users, be transparent to any kind of message, not alter the message, etc. This is a big undertaking that requires the combination of many disciplines, from deeply technical to highly philosophical, but it can be done. And should be done. And that is one of my interests and topics of this blog.

English? Nederlands?

I've been thinking for a long time about which language to use for this blog. Although English is my working language and the potential audience for English dwarfs that of Dutch, the latter is my mother tongue, and no matter how much English I read, write and speak, I will never be able to put in the finesse that I can with Dutch. So I figured, "why not both?": I'll write articles that are only relevant for a Dutch audience in Dutch and the rest in English. Since this blog is mainly related to my professional domain, it will mostly be English, but don't be surprised to bump into a Dutch article once in a while. And hey, it's a great opportunity for all the non-Dutch to learn some Dutch!

Why ‘kunnis’?

‘Kunnis’ is the agglutination of the Dutch words ‘kunde’ and ‘kennis’. I made up the term when ‘knowledge economy’ (or ‘kenniseconomie’) was all the rage in The Netherlands, somewhere halfway through the nineties. What I found missing in that rage was a proper appreciation of the people who could actually create things. Hence the Dutch word ‘kunde’, which means ‘being able to do something’. The word ‘kennis’ means ‘knowledge’. Knowing something and being able to actively do something with that knowledge is, in my eyes, what everybody should aspire to. Over the past 25 years I’ve been an active promoter of a higher appreciation of the engineers, the tinkerers, the programmers, the makers, etc. I’ve written about it, held hackathons before it became a buzzword, talked to schools about programming for kids (before…), made a case for the engineers in the companies I ran, etc. Nowadays every self-respecting marketing department is organising hackathons and every board member of every company says all their employees should learn how to ‘code’. Although the latter sounds like a good idea, on deeper thought it might in the end not be the real solution, but I’ll come back to that later (and more often) in this blog. So although from a distance it seems that the makers are back on their pedestal, that’s just superficial. Bear in mind that I’m talking about The Netherlands; it is definitely different in other parts of the world. There is still a ton of work to do to set the record straight between the ‘talkers’ and the ‘creators’, so I’m sticking to the word ‘kunnis’. And I like the ring it has and the fact that you can easily google it.


For a plethora of reasons I've been silent on all the usual digital platforms for almost a year. It's not just that I've been silent; I haven't been reading or following much there either. And I must say it has been a soothing experience. There were simply more pressing things to do in my life than constantly reading posts on Twitter, LinkedIn, etc. It felt like a constant background buzz slowly dying out, with the things that really matter slowly coming into perspective in its place. I spent more time with my family, read many great books, honed my 3D modelling skills, refreshed my AI knowledge and skills, met interesting new folks, made new music and played a lot of guitar. But most of all I pondered, with my feet on the table, on the (future) state of the world, the role of technology in that journey and the role that I could or should play in it. There is no simple answer to that question, but I do have an almost unlimited number of ideas that could explain or set the course of that journey. I feel a deep urge to share those ideas, because doing so forces me to structure my thoughts, and I truly hope that someone someday finds value in them. That's why I'm starting this blog. I've tried every possible medium for sharing my ideas (hello Blogger, Google Buzz, Google+, Facebook, Twitter, LinkedIn, Medium, …), and got disappointed again and again by the apparently inescapable slide into walled gardens on those corporate-led platforms. So I'm now going back to writing on my own blog, on the open web, where I'm in control and nobody else. This choice directly touches upon one of my deepest interests: the role that new media have, can and will play in our lives. The importance of a value-free medium for our societies at large is hard to overestimate. And this is not just a philosophical or societal discussion, but increasingly also a technical one. And this broad realm is where my ideas are rooted.

For whatever it is worth, I hope my ideas clarify and inspire, but I'm not aiming to shove them down the throats of those who are not genuinely interested. That simply isn't worth my energy; the former is.