Gen AI-Z

AI has been around for years: the film of the same name was released in 2001, back when the first .com bubble crashed. Yet only in the past few months have large swaths of the population turned their attention to the topic, and started worrying about it in equal measure…

Let us start with a definition: AI (Artificial Intelligence) can mean many things. Many applied AI tools have long been in use, from your Instagram algorithm catering to your scrolling needs, to Siri answering all your questions, to Gmail finishing your email sentences. Yet what has people talking today are naturally the most ambitious types of AI: deep learning models, large language models, artificial neural networks… These more complex AI models tend to answer a wider range of questions, and thus require more sophisticated technology, and a lot more data.

The buzzword of the day is “Generative AI”, the newest star family member: AI specifically designed to generate content based on the prompts you give it. The most famous of all has to be ChatGPT, which answers text queries, but you also have Midjourney, which creates visuals, others that can craft sounds… What these new AIs have in common is:

  1. they are among the more sophisticated models out there. Indeed, in order to generate text in real time, you need both strong AI models and vast amounts of data to train them;

  2. they provide a service that is immediately quantifiable. If you are a developer and ask ChatGPT to write a piece of code for you, you will know: instead of having to write the code yourself, you are basically left with proofreading and correcting it, which tends to be significantly easier, and less time-consuming…

There you have it: complex technology doing immediately quantifiable work. And that has people talking: many are amazed at what a ChatGPT can create in a split second; many more are getting scared that this is the beginning of the end, that these artificial “brains” will slowly but surely replace us, if not quickly and swiftly…

What to say about that? Three things, to start with:

  1. AI in general, and Generative AI in particular, will change the game and therefore destroy jobs. It has already started: the more repetitive tasks are gradually being taken over by machines. Think expense checking for accountants (already a standard), SEO copywriting for the web (becoming a standard), or stock image illustrations, where more and more web visuals are AI-generated because it is easier to find exactly what you are looking for. Consequently, copywriters and photographers will have fewer technical job opportunities, and that trend will only continue.

  2. Characteristically, the jobs that AI takes over tend to be the most menial ones. Checking expenses is not exactly the most fun part of an accountant’s role. Neither is SEO copywriting for a writer (trust me on this). So, in essence, what AI is now handling is the most annoying part of the work; what is left is therefore the most interesting bit… Accounting is gradually becoming an expertise game: it was always the case for the best, and it is now becoming the standard. You come to your accountant for advice, to optimize your operations, to come up with creative new ways to do business… Similarly, you will go to an actual photographer for a project because you want a specific style (wedding photographers will not be replaced tomorrow); same with writing. The best work is still out there; actually, that is all that will be left soon…

  3. There are also new jobs that will be created in an AI-enabled world, jobs that didn’t exist before or were not monetized as such. These are all the jobs that directly rely on a human touch. Think babysitting, taking care of the elderly, hosting creative workshops, managing group activities… All the roles where the key objective is human connection will only become more prevalent. And better appreciated: if a robot takes care of the logistical part of the work in a retirement home, the human workers will be left with the most gratifying part: interacting with the people under their care, playing board games, reading or watching films together… The part that most specifically makes us human.

Some might say that is an uncharacteristically optimistic view of the topic. Fair enough: there will be a transition period, where some people will have to retrain, change careers and/or rethink their life plans. While that is absolutely true, and can be painful for some, it is:

  1. an inevitable part of any innovation wave. The whole point of change is that you have to change with it. That requires some level of effort, although the better you understand what the change is for, the more you can appreciate the end goal;

  2. a pathway to a significantly better status quo. The notion that only the most intellectually or creatively rewarding parts of our jobs will ultimately be left for us humans should arguably have us all looking forward. Because this is what we can realistically expect in the not-too-distant future, just as our current desk jobs already offer an arguably better existence than spending 12 hours a day in a factory repeating the exact same movement over and over again…

Again, some might say this is biased. That this better status quo is actually an illusion: that the machines will eventually rise and potentially kill us all. After all, if they become more and more intelligent, at some point they are bound to surpass us mere humans, right? And then what?

That argument centers on the whole singularity theory and the idea that robots (AI) will eventually become self-aware, i.e. possess the same kind of (general) intelligence as us humans, with the vastly greater technical capabilities that come with it. And, on that day, a Terminator T-1,000,000 model will show up and exterminate all of us with a flick of the wrist.

That Godwin’s law of all AI conversations, although arguably a complex one to answer, still relies on a few fundamental misdirections:

  1. we tend to forget that AI is actually being created by… humans. Although self-evolving, its original coding is still human-made. No matter how many additional layers of code come after it, the fact remains that AI is a human invention. And with that comes the acknowledgment that it is inevitably flawed or biased, the way all human creations always are: we will have forgotten something in the code’s structure, or wrongly added something else… Whatever the reason, there will always come a time when we realize that a particular AI we’re dealing with still has pretty significant flaws. No matter how many CPUs it runs on.

  2. The underlying reason why AI is flawed is that we hardly understand our own intelligence. Our knowledge of how a human brain works is still fairly limited and, I posit, will never be complete. This is where philosophy and biology converge: fully knowing oneself, or one’s brain, is inherently impossible, because we are using said brain to understand itself. And we know for a fact that any human brain, no matter how superior, has its flaws, works incompletely, is fallible… Just as we know that we don’t know everything, intellectually speaking. That being said, neuroscience is evolving fast, and will continue to do so. And so will AI, following in those footsteps, especially the neural models that most closely mimic our own brain structures… But that progress will always remain an asymptote: no AI will ever be foundationally more intelligent than the flawed humans who conceived its core coding.

  3. The risk that AI will wipe out humanity because of its now superior intelligence is therefore largely a man-made illusion, one of those fears that make us so inherently human. To be clear, this doesn’t mean that AI cannot destroy humanity: it absolutely can, and will be even more capable of doing so in the future. But that won’t be because it has become smarter than us; quite the contrary: such a scenario would be based on a sequence of events where the AI makes a series of wrong choices, i.e. makes mistakes, directly or indirectly rooted in the imperfections humans built into it, whether they intended to or not…

What is interesting in this whole debate is that it is not only senior citizens who worry that Gen(erative) AI will kill us all; the younger Gen(eration) Z among us often share that sentiment. Because it looks powerful and therefore dangerous; because it could cause harm to the planet (which is true); because we’re playing with fire more than ever before (which is arguably true). But taking chances, i.e. risks, is what the human species has always done, and it is how we got from hunting and gathering to organized, urbanized and wealthy(er) societies.

The risks that come with AI are real, and we should definitely take them into account and work on pertinent regulations, all the while getting involved in the technology’s making (Facebook’s recent foray into the field is open source). But we should not assume that this is, once again, the definitive downfall of mankind, or we would largely miss the point…
