The AI Job Wars: Episode I

--

by Daniel Fernandez Castro & Alvaro Sanmartín Cid

“Congratulations: your job has been automated”

One day, you wake up like every other day. Today is different, though.

Your job has been automated.

No more going to your workplace.

No more emails.

No more micromanaging or otherwise annoying bosses (that’s a highlight, actually).

Whether you were a career-focused individual or someone who worked to live, the reality is that what you have been doing for the last few years has been taken over by a robot, an algorithm or a mix of the two. What do you do now? I mean, you might really enjoy work, or just love having that cash at the end of the month, and that little piece of silicon doesn’t really seem to care.

You try to look for jobs that require your current skillset at other companies, but all you do is jump from one company to the next every few months while the whole industry adopts the technology that is set to make you obsolete. Eventually you’ll accept it: there is now a set of technologies that does your job faster, with higher quality and fewer errors, and all of that for a price below your income. Oh, and it has the ability to learn much faster than you do, and it benefits, in real time, from the combined learning of that whole brain dump we call the internet.

It’s not fair, it doesn’t make sense; nothing in your life has prepared you for complete obsolescence in a matter of months or years. Yet it has happened. You still have a family to feed, and possibly a self-esteem that partly depends on your “professional self”. What do you do now?

The point of this article is to analyse why this is set to happen, and to go over the technologies that are already viable and whose development and adoption over the next few years might pose a danger to the current labor structure, and thus to the people who depend on it and who aren’t ready to evolve as fast as technology does. We will often refer to these sets of technologies as “machines” or “an artificial intelligence”, to indicate that it might not be just an algorithm, a neural network or a robot, but a combination of the above.

Jobs and industries have been made obsolete in the past, haven’t they?

Analysing the effects of automation on the job market since the dawn of the first industrial revolution is beyond this article’s scope, but we would like to briefly go over the overall dynamics of previous episodes of automation.

During the 19th and 20th centuries, incremental technologies destroyed jobs (sometimes over generations, sometimes within a single lifespan), but most of the time adoption across organizations, industries or geographies took long enough for humans to adapt, especially given their shorter life expectancies (and thus shorter professional lives). In many cases, the process was relatively smooth: people in disappearing jobs had time to retire and, by the time the next generation joined the labor force, it was better educated and/or had other choices and priorities, and thus managed to survive and thrive. In some cases, humans needed to move geographically. In others, they were able to reinvent themselves, either by adopting new skills, or by looking for jobs or industries whose required skills were similar to those they already had, or could be picked up with some basic training. In the most dramatic cases, people were simply out of the labor market for good: this was usually the fate of older workers, or of whole industries disappearing overnight due to automation or policy. Think of the coal mining industry, for instance.

What we are facing in the coming years will be like the latter case. We aren’t expecting an incremental development in technology. What we have ahead is the mother of all disruptive technologies: technologies that truly perform better than human beings at more than one simple, pre-programmed task (unlike in the past), and at affordable prices. And this will affect almost everyone, regardless of educational background, industry or age. We will see huge disruptions every 5–10 years, not over the course of generations. One-time academic education will provide a valid basis, but it won’t be enough over the entirety of an individual’s lifespan. In-job education programs will probably lack the agility and direction to prepare organizations’ members for the new paradigms created by technological change, within their companies or beyond. Add to that much longer human life expectancies, and we are facing an issue that we all need to acknowledge and try to figure out as soon as possible.

You are probably thinking that we are being a bit too alarmist here: after all, we’ve coped with change in the past, so why is this time different?

By Edgarodriguezmunoz (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)]

What’s so special about AI?

For a start, let’s make sure we’re on the same page on what AI is and what it can do.

One of Merriam-Webster’s definitions of artificial intelligence reads “the capability of a machine to imitate intelligent human behaviour”. For our purposes, we will consider “intelligent human behaviour” to be a mix of perception, processing of sensory inputs, decision making, memory and learning, and the ability to physically react based on all of the above. That is, most of the things required for a large number of jobs currently done by human beings. We are not here to discuss whether intelligence requires consciousness, the ability to suffer, or remembering songs with nostalgia. We’re just interested in what makes an artificial intelligence better than us at driving, diagnosing diseases, averting terrorist attacks, playing video games and making financial decisions. We’re interested in the capabilities of each standalone system (i.e. the technology that perceives, the system that processes data, the algorithm that makes decisions based on scenario analysis…), but most importantly, we’re interested in complex systems that combine several of these elements and are able to interact with each other in real time, wherever they are.

How does technology perform better than human beings at each of the tasks usually thought of as exclusive to more or less intelligent biological machines (i.e. living organisms)? We’ll go over some currently feasible technologies in the next sections. To make things concrete, we will illustrate them with the example of self-driving cars, which, even if not fully autonomous yet, already deploy some of the technologies that are impacting the world we live in.

Photo courtesy of The Johns Hopkins University Applied Physics Laboratory (JHU/APL) — https://flic.kr/p/9gJweq, Public Domain, https://commons.wikimedia.org/w/index.php?curid=34517243

Sensing and interpreting

First of all, machines can have more sensory modalities than we do: Are you able to build perfect spatial models from laser measurements? Do you see infrared light and have thermal vision? How are your electromagnetism and radioactivity detectors doing? Exactly.

Second, technology has the potential to outperform us at any of our five basic senses. Oh, you see with those two eyes in front of your lovely face? Great, machines (again, we refer to any complex, non-biological system of chips and algorithms as a “machine”; don’t think of a Star Wars droid here) can “see” 360 degrees around them, and can zoom. Also, their sight is not restricted to where the machine physically stands: such a system can use cameras and detectors in different places and process all of it.

Sensing technologies (machines with the ability to capture data about their surroundings) have been around for a while now. We’ve had radar, far-field microphones and Geiger counters for decades. So why is it any different now? Well, because in recent years machines haven’t just improved their sensing capabilities: we have taught them to interpret what they’re perceiving.

The Amazon Echo can not only hear what you say, but also transcribe it into written words (Automatic Speech Recognition) and, most importantly, understand what you’re trying to say, or what the intent is behind your words (Natural Language Processing). Whether it’s truly “understanding” or not, you don’t need human beings listening to every interaction with the device in order to give users what they want; the artificial neural networks in Amazon’s systems do it for us. Can you imagine how many jobs actually depend on someone understanding someone else’s intentions, in vocal or written form, and providing solutions to them, standard or custom?
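
To make that two-stage pipeline concrete, here is a deliberately naive sketch in Python: a transcript (the output of the speech-recognition stage) is mapped to an intent. The intent names and keyword lists are invented for illustration; real assistants use neural networks, not keyword matching.

```python
# Toy sketch of the second stage of the pipeline described above: an ASR
# transcript goes in, an intent comes out. Real systems replace this keyword
# matcher with neural networks.
INTENTS = {
    "play_music": ["play", "song", "music"],
    "set_alarm": ["alarm", "wake", "remind"],
    "get_weather": ["weather", "rain", "temperature"],
}

def recognize_intent(transcript: str) -> str:
    """Map an ASR transcript to the best-matching intent."""
    words = transcript.lower().split()
    scores = {
        intent: sum(word in keywords for word in words)
        for intent, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(recognize_intent("play some music by Queen"))      # play_music
print(recognize_intent("wake me up at seven tomorrow"))  # set_alarm
```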

Something similar is happening with visual recognition. Google, for instance, has developed and is commercializing tools for image recognition and analysis. You input an image, and their neural networks determine, in great detail, what or who is in it. For example, if your input is a picture of yourself posing with a book in front of the Eiffel Tower, it finds that there’s a human being in front of the Eiffel Tower, and can also guess whether the person is wearing something on their head, what emotions they are displaying, and even recognize the book’s title.
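
As a sketch of what calling such a tool looks like, here is the Eiffel Tower example against Google’s Cloud Vision Python client. This assumes the google-cloud-vision package and valid credentials, and the exact class and method names vary across client versions.

```python
# Sketch: ask an image-analysis API for labels, faces and landmarks in one
# image (google-cloud-vision client; names may differ between versions).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("eiffel_selfie.jpg", "rb") as f:
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
faces = client.face_detection(image=image).face_annotations
landmarks = client.landmark_detection(image=image).landmark_annotations

for label in labels:
    print(label.description, round(label.score, 2))  # e.g. "Tower 0.97"
for face in faces:
    print("joy:", face.joy_likelihood)               # emotion estimate
for lm in landmarks:
    print("landmark:", lm.description)               # e.g. "Eiffel Tower"
```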

Let’s take the example of autonomous cars. Currently viable technologies used in this type of car include radar, lidar (laser ranging), GPS, odometry and computer vision (360-degree vision and image recognition), all of them with astonishing precision. Beyond merely perceiving, autonomous cars are able to recognise other vehicles, signals, lanes, pedestrians, cyclists… and identify them as such. This has sometimes made us raise a question: when are we going to ban imperfect humans from being behind the wheel? Although it’s those “imperfect humans” who program the self-driving cars, at least for the moment.

Auto Sensor by Frank Derks — https://www.flickr.com/photos/eurosporttuning/16442169022 CC BY 2.0 https://creativecommons.org/licenses/by/2.0/

Computing speed for rapid, accurate decision making

Let us tell you: you’re not as good at processing as you think. Don’t get us wrong, the human brain is an incredibly complex system, and the point of this article is not to compare the entirety of its capabilities with an artificial intelligence, to analyse the relative performance of digital versus analog machines, or to define how many petaFLOPS of computing speed we would need to replicate all of the human brain’s functions in real time. We’re just evaluating what a machine can do that you cannot. And you probably cannot compute (8,436,762.45/284,545.87)x19.2 in milliseconds, while any basic calculator from the 80s, or your smartphone, comfortably can.
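
For the record, here is that exact calculation in a few lines of Python, timed to show that it takes microseconds rather than milliseconds:

```python
# The calculation from the paragraph above, which any machine finishes in
# well under a millisecond.
import time

start = time.perf_counter()
result = (8_436_762.45 / 284_545.87) * 19.2
elapsed = time.perf_counter() - start

print(f"result:  {result:.2f}")            # ≈ 569.28
print(f"elapsed: {elapsed * 1e6:.1f} µs")  # microseconds, not milliseconds
```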

What is all that processing power for? Based on the sensory inputs described in the previous section, a close-to-perfect awareness of its own state and capabilities, the ability to run simulations, and some pre-programming on how to behave in given scenarios, a machine is expected to make decisions and react.

For instance, a currently available model of self-driving car is able to correctly identify a cyclist (plus their size, relative position and velocity) raising their right arm to indicate a right turn, and react accordingly by slowing down and giving the cyclist space to maneuver.
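
To give a flavour of what the decision layer in that scenario could look like, here is a deliberately oversimplified rule in Python. Real systems rely on learned policies and probabilistic planning; the class, field names and thresholds below are invented for illustration.

```python
# Toy decision rule for the cyclist scenario described above.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str             # "cyclist", "pedestrian", "vehicle"...
    distance_m: float     # distance ahead of the car, in meters
    signaling_turn: bool  # e.g. a raised arm detected by the vision system

def plan_speed(current_speed_kmh: float, obj: TrackedObject) -> float:
    """Return a target speed that gives a signaling cyclist room to maneuver."""
    if obj.kind == "cyclist" and obj.signaling_turn and obj.distance_m < 30:
        return current_speed_kmh * 0.5  # slow down and yield space
    return current_speed_kmh            # otherwise keep cruising

print(plan_speed(50.0, TrackedObject("cyclist", 20.0, True)))  # 25.0
```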

Learning and memory

Let’s start off with installed storage capacity. Your brain is estimated to be able to store close to 2.5 petabytes (that’s 2.5 million gigabytes), although that figure is just an estimate. Meanwhile, we can build storage systems as large as we want by chaining together hard drives or other storage modules. In 2011, IBM worked on creating the largest storage array ever built by connecting 200,000 hard drives together. And that was six years ago, so imagine the total installed storage of all the computers and servers connected to the internet today.

Humans can store memories of almost all of their lives, but those memories are usually blurry or distorted, even for specific events, and are far from including every detail of everything seen, heard, etc. The right set of technologies has the potential to store everything it perceives, analyses and decides with pixel-perfect accuracy.

Your brain is your brain, and you will have the same brain from birth to death. True, your brain will physically change throughout your life, and your synapses will adapt as you learn. But you can’t currently “expand” your memory or your computing power: we are constrained by carbon-based structures that don’t support upgrades or expansions. Machines, meanwhile, can be improved, either by making their processes “leaner” (i.e. getting more storage or processing power out of the same hardware) or by simply adding more hardware. You can’t do that with your brain, can you?

Also, you are fragile and end up dying. And your memories and thoughts and ideas and feelings go with you. That’s less of a problem for silicon-based systems.

As for learning, human beings are pretty good at it. But in recent years there have been breakthroughs in so-called neural networks that have enabled algorithms to go through deep learning processes. Some of the latest experiments with deep learning have been popularized by Google DeepMind, showing us that a machine can learn how to play Go, the most complex game known to man (and win), that an algorithm with a body and limbs can teach itself how to walk, and that a system can learn to play Super Mario and kick ass at it in a matter of hours. It’s simple: you give an algorithm a reward function (“you need to get as far as you can”) and some basic instructions (“you have these legs”), and it can figure out the rest. We’re not going to go over how deep learning works, but suffice it to say, we’re not the only ones who learn quickly now. Plus, AIs have the potential to learn all the time. No need to rest. No naps, no lazy days. They are doing their best every second. Beat that, lovely human being.
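
For a taste of that “reward function plus basic instructions” recipe, here is a minimal tabular Q-learning sketch in Python on a toy “get as far as you can” task. DeepMind’s agents replace the table with deep neural networks, but the loop (act, observe reward, update, repeat) is the same; every constant below is an arbitrary illustrative choice.

```python
# Minimal reinforcement learning: the agent starts at position 0 and must
# discover, from rewards alone, that stepping forward is the way to go.
import random

N_STATES = 10            # positions 0..9; position 9 is "as far as you can"
ACTIONS = [-1, 1]        # step back or step forward ("you have these legs")
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index] is the learned estimate of long-term reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state < N_STATES - 1:
        # Explore occasionally; otherwise act greedily on current knowledge
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = float(next_state - state)  # the reward function: get farther
        # Q-learning update: nudge estimate toward reward + best future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy steps forward from every position
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
# expected output: [1, 1, 1, 1, 1, 1, 1, 1, 1]
```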

The edge of gamification by Steve Jurvetson — https://www.flickr.com/photos/jurvetson/30374100613 CC BY 2.0 https://creativecommons.org/licenses/by/2.0/

Machine-to-machine communication vs the isolated human experience

The Internet has sometimes been considered a supra-human entity in itself, a sort of hive mind for humanity: humans can upload the contents of their brains to it, and access other people’s thoughts through it. But it isn’t perfect.

First of all, humans don’t share all their thoughts with the global human knowledge database (aka the Internet): partly because no one has time to put all of their thoughts into text or voice; partly because some people deliberately conceal their thoughts, or express the opposite or an altered version of what they really think; and partly because a staggeringly large share of the world’s population has no internet access, or only very limited access due to connection speeds or censorship.

Second, it takes a long time for a thought to be shared with the network (a person needs to reflect on it and write it down or record it, then upload it).

Third, it also takes a long time to find, access and interpret the information on the internet. This happens for three reasons: a) if the content is formatted in a human-friendly way (mostly requiring language), it takes time to read an article, listen to a recording or watch a video; b) if the information is just a dataset (of whatever size or format), we aren’t very good at analysing raw chunks of data; and c) it can take even longer to find the right piece of information in the vast ocean of content.

Machines, on the other hand, have the potential to reach something like a hive mind. Information can be transferred between devices in milliseconds and, in cases where data is hard to find or needs processing (where humans would struggle), machines are far better equipped to find, process and derive the information they need from the network. This, in turn, means that the combined experience and learning of thousands or millions of devices can be instantaneously adopted by each individual component of the network.

There are at least two interesting applications of this to autonomous cars that we can think of right now.

First, imagine that all the cars of a given company are connected to the same network. If one or more cars “learned” by trial and error (even if they don’t err that much) that braking at a traffic light 0.1 milliseconds later than planned actually reduces overall gas consumption while keeping the risk of collision at the same level, the improvement could be instantaneously shared by all the machines on that network. Think of it as rapid, shared learning.
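
One simplified way this sharing could work is federated averaging: each car keeps a local model, and the fleet periodically averages the parameters so that every car inherits every other car’s experience. The sketch below reduces the idea to a few lines of Python; the parameter names are invented for illustration.

```python
# Toy version of "rapid, shared learning": average locally learned parameters
# across a fleet, then push the shared model back to every car.
from typing import List

def federated_average(local_models: List[List[float]]) -> List[float]:
    """Average each parameter across every car's locally learned model."""
    n_cars = len(local_models)
    return [sum(values) / n_cars for values in zip(*local_models)]

# Three cars, each with slightly different locally learned parameters
# (hypothetical: [brake_timing_offset_s, following_distance_factor])
fleet = [
    [0.92, 1.10],  # car A
    [0.95, 1.05],  # car B
    [0.90, 1.15],  # car C
]

shared = federated_average(fleet)
print(shared)  # ~[0.923, 1.10]: every car instantly adopts the fleet average

# Each car downloads the shared model and continues learning from there
fleet = [list(shared) for _ in fleet]
```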

Second, think of real-time transparency and coordination between vehicles. Any vehicle can let the other vehicles connected to the network know what it plans to do at any given point in time. So in the long term you basically would not need traffic lights: vehicles could “agree” among themselves who crosses first and minimize the idle time for everyone involved in an interaction. The good thing about machines is that they can be perfectly predictable, transparent and consistent with their plans.
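
As a toy illustration of that negotiation, here is a deterministic “earliest arrival goes first” scheduler in Python. Real vehicle-to-vehicle protocols are far more involved; the message format below is made up for the example.

```python
# Toy intersection coordination: vehicles broadcast their intended arrival,
# and a simple deterministic rule assigns non-overlapping crossing slots.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Intent:
    vehicle_id: str
    arrival_time: float   # seconds until the vehicle reaches the intersection
    crossing_time: float  # seconds it needs to clear the intersection

def schedule(intents: List[Intent]) -> Dict[str, float]:
    """Give each vehicle a crossing start time so crossings never overlap."""
    slots: Dict[str, float] = {}
    free_at = 0.0
    for intent in sorted(intents, key=lambda i: i.arrival_time):
        start = max(intent.arrival_time, free_at)  # wait only if it's busy
        slots[intent.vehicle_id] = start
        free_at = start + intent.crossing_time
    return slots

print(schedule([
    Intent("car_A", arrival_time=2.0, crossing_time=3.0),
    Intent("car_B", arrival_time=2.5, crossing_time=3.0),
]))
# {'car_A': 2.0, 'car_B': 5.0}: car_B waits 2.5 s, not a full light cycle
```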

How it all connects and what it means for us

You might be thinking that we’ve had progress on each of the individual “skills” described above for decades now, and that we’ve always managed to use each individual element to improve the way we do things while keeping ownership of the other activities along the chain.

The issue with that thought is that this time it could be wrong, for two main reasons. First, a series of recent breakthroughs. Developments in quantum computing will probably increase the computing power of machines well above the expected trend. More powerful computers, and relatively new access to an ocean of data produced by billions of internet users over the years, have allowed computer scientists to develop neural networks that can go through deep learning processes: there are now algorithms that can learn and improve themselves with little to no human guidance (e.g. through reinforcement learning), based only on the available actions and a given goal. Companies like Google are selling, to anyone with a couple of dollars and an internet connection, capabilities and tools that could put millions of people out of work.

Second, and most importantly, the ability to combine them (i.e. to build systems that combine sensory perception, interpretation, analysis, decision making and/or physical reaction) is set to change the paradigm for almost every industry and human activity.

If you think the above is sci-fi, think twice: it is all based on technology that is available today. Imagine what we (or machines) will be able to do in the future, if we can already do all of the above and keep following this exponential trend.

A 2013 computerization study by researchers at the Oxford Martin School (University of Oxford) found that 47% of American workers held jobs at high risk of automation in the next decade or two (Frey, C.B. and Osborne, M.A., 2013). What do surgeons, salespeople, bestselling writers and truck drivers have in common? Well, according to a study by researchers from Oxford University and Yale University, who surveyed experts from all around the world, all of the jobs mentioned above could be fully automated within the next 40 years. They also mention that “there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.” Other studies suggest much lower probabilities, but even if those are right, the outlook is dramatic enough that we need to be ready.

With such a perspective in front of us, shouldn’t we be worried? Not because humanity will be doomed (we always end up prevailing, right?), but because of all the individual dramas that this situation has the potential to create. Think of all the people who could be unemployed overnight, given the advances mentioned above, if adoption turns out to be as cheap and fast as it looks like it will be.

So, what do we do now?

Episode II coming soon…

Daniel Fernandez Castro is a Program Manager at Amazon Alexa Experience and Devices. Álvaro Sanmartín Cid is Head of Growth at Skills Matter. These thoughts are our own and don’t represent the views of the companies we currently work for.
