Job gone, production remains
The essence of derivatives and integrals is the notion of the limit. When a quantity tends to infinity, or to zero, we invent derivatives and integrals. For example, a circle is a polygon with "infinitely many" sides, each of length tending to zero. In the limit, with the number of sides tending to infinity and the length of each side tending to zero, we have a perfect circle.
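This polygon-to-circle limit can even be checked numerically. A minimal sketch (the function name is my own, for illustration): the perimeter of a regular n-sided polygon inscribed in a circle of radius 1 is n · 2 · sin(π/n), which tends to the circumference 2π as n grows.

```python
import math

def polygon_perimeter(n):
    # Perimeter of a regular n-gon inscribed in a circle of radius 1:
    # each of the n sides has length 2 * sin(pi / n).
    return n * 2 * math.sin(math.pi / n)

# As n grows, the perimeter approaches the circumference 2 * pi ≈ 6.2831853...
for n in (6, 100, 10_000, 1_000_000):
    print(n, polygon_perimeter(n))
```

With a million sides the polygon's perimeter already matches 2π to about eleven decimal places: the "exaggeration" is, for all practical purposes, the circle.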
I stretched the mathematics, but I wanted to say something simpler, or rather, to think better: it is sometimes useful to consider the "exaggeration" of a situation, its limit. This is a good example of that. To help understand "our enemy", it may be useful to know its "maximum size", its limit.
What is the maximum size of this enemy, the automation that steals human jobs?
I imagine its maximum, its infinite limit, is the point where automation can theoretically take any and every human job.
What the heck do we learn when we make theoretical, even unattainable, measurements like this?
Right away we learn the theoretical maximum size: no humans working, machines doing everything.
What to do with this limit? Dance with it. Get into the dance.
Do we die in this limit? I don't think so, even though it's a good question. A question like this reveals more about the fear of the one asking it (us, humanity) than about any sophisticated prediction.
Machines working looks to me more like this: lettuces being planted, tended, harvested, distributed. The same for Mercedes-Benz cars: the raw material is produced, grouped, assembled, distributed. Also toys. Other foods: oysters, steaks, peas. Blankets, clothes.
Where is human death in this limit, in this theoretical maximum scenario?
Not in the lack of production.
In the distribution? In the terrible dictator who will let everyone wither away (will he end up alone with ten blondes)?
I know: perhaps in the typically human, pathetic transition, with its injustices, deaths, misadventures?
Navigating the maximum limit "cleans" the mind of the mess of thinking about intermediate scenarios.
There is no exact answer about the future, but thinking about the maximum scenario simplifies the mind; it takes a lot of mess out of the way.
It helps, at the end of all this thinking, to understand the path we may be taking.
A beautiful, fascinating subject, in my opinion. A beautiful debate, in progress, far from over.
We can place artificial intelligence on a slightly (not much) different level from "automation". Artificial intelligence is, of course, a species of the broader genus, automation. Likewise, we can treat "programming" as a species of automation distinct from the species we call artificial intelligence.
Now without further ado, look how brilliant:
Who does artificial intelligence take jobs from?
From the programmer himself!
Artificial intelligence, then, takes all the jobs that automation already takes, and on top of that it takes the programmer's own job!
Artificial intelligence has fewer and fewer "ifs, loops, and functions and code added by a programmer".
Artificial intelligence was created with this beautiful capacity: it perfects its own "neural networks".
All we need is an "intern" (increasingly, really, just an inexperienced intern) to feed each huge, but still virgin, neural network with data, data, more data, and even more data.
Many of these networks like to receive data with correct answers, lots of it, to calibrate themselves. But they do the calibration more and more on their own. We are increasingly automating the calibration of each kind of artificial intelligence.
In fact, soon not even an "intern" will be necessary. It will be enough to state the objective (which data, with which answers, to "swallow"), and an artificial intelligence will know how to swallow new data and regurgitate the most likely answers. When well calibrated, it reports more than a 90% chance, even more than a 99% chance, of being correct.
For example: does this lung image show cancer? 99% accuracy, more accurate than the best doctor. And so on.
So not even the programmer remains once neural networks learn to calibrate themselves just by swallowing data; data that, by the way, will already be in the cloud, right next to them. Not even an intern "to insert the floppy disk with the data" is needed anymore.
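The "calibrate yourself from examples" idea above can be sketched as a toy (hypothetical, not any real system; the names and numbers are mine): a model with no hand-written rules fits itself to example pairs by gradient descent, then answers for inputs it never saw.

```python
# Toy illustration of self-calibration: no "ifs" written by a programmer,
# just a parameter nudged repeatedly by the data itself.
def calibrate(examples, steps=1000, lr=0.01):
    """Fit y = w * x by gradient descent over (x, y) example pairs."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y   # how wrong the current guess is
            w -= lr * error * x # nudge w to shrink the error
    return w

# "Feed it data, data, more data": pairs that secretly follow y = 3x.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
w = calibrate(data)

# The calibrated model now "regurgitates likely answers" for unseen inputs.
prediction = w * 5  # close to 15
```

Real neural networks are vastly larger, but the shape of the process is the same: examples in, error measured, parameters adjusted, no rule-writing programmer in the loop.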
It is useful to have "mantras", short phrases, to help us understand new situations.
I would say, about this fertile debate:
Jobs may even end ("jobs changing" sounds to me like the euphemism of a mother telling her pregnant daughter: "childbirth doesn't hurt at all").
The difference is not in the finitude of the job.
The difference is in the consequence.
Until 1900, ending a job was ending production.
That's not what happens with automation.
The job ends, the production remains.
Good mantra, this, by the way.
"The job ends, the production remains".
What to do when, or if, or during the transition in which "the job ends, the production remains"?
"The job ends, the production remains."
Bad? Good? A crazy transition? Do we all run? Sofa and back pain for everyone? Who gets more Mercedes-Benz cars and automatically produced lettuces (produced, that is, without employment) than others?