
Job goes away, production stays

Vladimir Dietrich · July 6, 2020 · 4 min read

The essence of derivatives and integrals is the notion of limit. When something tends to infinity, or to zero, we invent derivatives and integrals. For example, a circle is a polygon with "infinitely many" sides, each of length tending to zero. In the limit, with the number of sides tending to infinity and the length of each side tending to zero, we have a perfect circle.
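A minimal sketch of this limit in Python (the radius and the side counts below are just illustrative choices):

```python
import math

# Perimeter of a regular n-sided polygon inscribed in a circle of radius r.
# Each side has length 2 * r * sin(pi / n); as n grows, the sides shrink
# toward zero and the perimeter approaches the circumference 2 * pi * r.
def polygon_perimeter(n_sides: int, r: float = 1.0) -> float:
    side_length = 2 * r * math.sin(math.pi / n_sides)
    return n_sides * side_length

for n in (6, 60, 600, 6000):
    print(n, round(polygon_perimeter(n), 6))

print("limit:", round(2 * math.pi, 6))  # 6.283185
```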

I overdid it with the mathematics, but I wanted to say something simpler: it is sometimes useful to think about the "exaggeration" of a situation - its limit. This is a good example. To help understand "our enemy", it may be useful to know its "maximum size", the limit.

What is the maximum size of this enemy, the automation that steals human jobs?

I imagine its maximum, its limit, is when automation can theoretically take any and every human job.

What the heck do we learn when we make theoretical, even unattainable, measurements like this?

First of all, we learn the theoretical maximum size: no humans working, machines doing everything.

What to do with this limit? Dance with it. Get into the dance.

Do we die at this limit? I don't think so, though it's a good question. A question like this reveals more about the fear of the one asking - us, humanity - than about any sophisticated prediction.

Machines working looks more like this: lettuces being planted, nurtured, harvested, distributed. Ditto for Mercedes Benz cars: raw material produced, grouped, assembled, distributed. Also toys. Other foods: oysters, steaks, peas. Blankets, clothes.

Where is human death in this limit, in this theoretical maximum scenario?

Not in the lack of production.

In the distribution? In some terrible dictator who will let everyone wither (will he end up alone with ten blondes)?

I know: maybe in the typically pathetic human transition - with its injustices, deaths, misfortunes?

Navigating the maximum limit "cleans" the mind of the mess of thinking about intermediate scenarios.

Thinking about the maximum scenario simplifies the mind. There is no exact answer about the future, but it takes a lot of mess out of the way.

It helps, at the end of all thought, to understand the path we may be taking.

A beautiful subject, a passionate one, in my personal opinion. A beautiful debate, in progress, far from over.

We can put artificial intelligence on a slightly - not very - different level from "automation": artificial intelligence is a species of the broader genus, automation. And we can put "programming" down as another species of automation, different from the species we call artificial intelligence.

Now without further ado, look how brilliant:

Whose job does artificial intelligence take?

The programmer's own!

Artificial intelligence, then, takes every job that automation already takes - but it also takes the job of the programmer himself!

Artificial intelligence relies less and less on "ifs, loops, and a programmer adding functions and code".

Artificial intelligence was created with this beautiful ability: it improves its own "neural networks".

We only need an "intern" - increasingly, really, just an inexperienced intern - to feed each huge, still virgin, neural network with data, data, more data and even more data.

Many like to receive data with the correct answers - lots of data - to calibrate themselves. But they calibrate more and more on their own. We are increasingly automating the calibration of each kind of artificial intelligence.

In fact, soon not even an "intern" will be needed. It will be enough to state the objective (which data, with which answers, to "swallow"), and an artificial intelligence will know how to swallow new data and regurgitate the most likely answers. When well calibrated, it reports more than a 90% - even more than 99% - chance of being correct.

For example: does this lung image show cancer? 99% accuracy, more accurate than the best doctor. And so on.
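A minimal sketch of that "calibration by examples", in Python - assuming a toy logistic model and made-up one-number "images"; nothing here is a real medical classifier:

```python
import math
import random

random.seed(0)

# Toy supervised learning: "swallow" examples with the correct answers,
# then report a probability - the model's confidence - for new inputs.
# Synthetic data: one feature x in [0, 1], labeled 1 whenever x > 0.5.
xs = [random.random() for _ in range(1000)]
data = [(x, 1 if x > 0.5 else 0) for x in xs]

w, b = 0.0, 0.0  # parameters of a one-feature logistic regression
lr = 0.5         # learning rate

def predict(x: float) -> float:
    """Probability that the label is 1, given feature x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# "Calibration" loop: stochastic gradient descent on the log-loss.
for _ in range(5000):
    x, y = random.choice(data)
    p = predict(x)
    w -= lr * (p - y) * x
    b -= lr * (p - y)

print(predict(0.9))  # high confidence of "positive" (well above 0.5)
print(predict(0.1))  # high confidence of "negative" (well below 0.5)
```

The point of the sketch is only the shape of the process: examples in, self-adjusted parameters, a confidence number out.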

So, when neural networks learn to calibrate themselves, not even the programmer is left; they just need to swallow data - data, by the way, that will already be in the cloud, right next to them. Not even an intern "to insert the floppy disk with the data" is needed anymore.

It is useful to have "mantras", short phrases, to help us understand new situations.

I would say, about this fertile debate:

Jobs may well end ("jobs are just changing" sounds like the euphemism of a mother telling her pregnant daughter: "childbirth doesn't hurt at all").

The difference is not in the finitude of the job.

The difference is in the consequence.

Until 1900, ending a job meant ending production.

This is not what happens with automation.

The job ends, the production stays.

Good mantra, this, by the way.

"The job ends, the production stays".

What to do when, or if, or during the transition, in which "The job ends, the production stays"?

"The job ends, the production stays".

Bad? Good? A crazy transition? Do we all run? Sofa and back pain for everyone? Who ends up with more Mercedes Benzes and automatically produced lettuces (produced, therefore, without employment) than others?