The Unknown Consultant

There Is No AI Risk

Start Me Up [1] [2] [3]

There is no AI risk. Not in the sense that it is being portrayed (think Terminator or The Matrix).

At first, the illusion caught me also. ChatGPT, Bard et al put on quite the show.

But these software machines are nothing more than black box, statistical pattern recognition systems. That's it. Nothing more. Nothing less.

These software machines have zero causal understanding of any input they are provided. No "causal mental models" of any given domain. No experiential judgement. No reasoning. Not sentient. Statistical pattern recognition only.

I am not saying these machines lack utility. I am getting great use out of them daily. But they will not replace any job requiring causal understanding of any domain or process, and/or experiential judgement. These machines, however, will reduce some of the pain of the tedious, lower-level tasks. There will be increases in productivity.

But, you say, didn't Elon and some important AI dudes write a letter asking for a pause in AI training because of its "risk" to humanity, and didn't Sam Altman practically beg the Feds to regulate his company and AI? Yep, they did. But there are many reasons for these actions, and we will get to that shortly.

People worried about the risk of AI are asking the wrong question.

Let's start with the "black box" aspects.

Spread Out the Oil, the Gasoline

Do you remember the last AI bubble? I know, there have been so many of them over the last 40 years (Lisp, anyone?). Yes, autonomous driving. That's the last AI bubble.

Non-AI programming is imperative. You have to instruct the computer step by step on what to do. For everything. All contingencies. What's nice about imperative software is that we can "debug" it. We can look at the logic of the code, insert breakpoints in the code, and evaluate the state of the machine while the code is running. We have high visibility into the internals.

Using traditional imperative programming, however, to create software that can autonomously drive a car would be a massive challenge, and certainly not commercially feasible.
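
To make the contrast concrete, here is a toy sketch in Python (made-up rules, nowhere near real autonomy) of what hand-written, imperative driving logic looks like. Every contingency has to be anticipated by the programmer, but you can read the logic and step through it in a debugger:

    # Toy imperative "driving" rules: every case must be written out by hand.
    def decide(obstacle_distance_m, speed_kmh, light):
        if light == "red":
            return "stop"
        if obstacle_distance_m < 10:
            return "brake hard"
        if obstacle_distance_m < 30 and speed_kmh > 50:
            return "slow down"
        # breakpoint()  # uncomment to pause here and inspect every variable
        return "continue"

    print(decide(obstacle_distance_m=8, speed_kmh=40, light="green"))   # brake hard
    print(decide(obstacle_distance_m=100, speed_kmh=60, light="red"))   # stop

Now multiply that by every weather condition, road type, and oddball situation on the planet, and the commercial infeasibility becomes obvious.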

But AI software, and this is a gross oversimplification (that is not technically correct), "builds" its own "instructions". You train it on inputs and outcomes, and the AI develops its own statistical pattern recognition engine for that domain. It can find patterns we have not yet recognized. It brings the commercial feasibility of programming autonomous vehicles and other complex domains into range.
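
Here's a minimal sketch of that training loop (a toy in Python using scikit-learn, which is just my choice for illustration): you hand over inputs and outcomes, and the library fits whatever internal numbers it needs to reproduce the pattern. Nobody writes the decision rule.

    # Toy "train on inputs and outcomes" example (assumes scikit-learn is installed).
    from sklearn.linear_model import LogisticRegression

    # Inputs: [obstacle_distance_m, speed_kmh]. Outcomes: 1 = brake, 0 = continue.
    X = [[5, 40], [8, 60], [12, 80], [50, 40], [80, 60], [120, 90]]
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X, y)   # the "instructions" get built here, not written
    print(model.predict([[10, 70]]))         # the model decides; we never coded a rule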

We have zero visibility, however, into the statistical pattern recognition engine the AI machine is building for any given domain. We cannot pop it open and look at the "code" it is building. We can't see the logic. We have no clue as to the details of its statistical pattern recognition engine, what probabilities and weights it gives to various factors, etc. None. It is a black box.

Debugging consists of feeding it data, observing its outputs, and then recalibrating the training data where the outcomes are "wrong".
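
In code, that "debugging" loop looks roughly like this (again a toy Python/scikit-learn sketch of the idea, not anyone's production pipeline):

    # "Debugging" a black box: run it, observe an output a human says is wrong,
    # fold the corrected example back into the training data, and retrain.
    from sklearn.linear_model import LogisticRegression

    X_train = [[5, 40], [8, 60], [50, 40], [80, 60]]   # [obstacle_distance_m, speed_kmh]
    y_train = [1, 1, 0, 0]                             # 1 = brake, 0 = continue
    model = LogisticRegression().fit(X_train, y_train)

    X_check, y_check = [[12, 90]], [1]                 # a case a human reviewer says should be "brake"
    if model.predict(X_check)[0] != y_check[0]:
        X_train += X_check                             # recalibrate the training data
        y_train += y_check
        model = LogisticRegression().fit(X_train, y_train)   # retrain; the inside stays opaque

At no point do we see why the model decided what it decided; we only see what it did.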

I Can't Compete With the Riders In the Other Heats

Speaking of autonomous driving, it is proof of the machines' lack of causal understanding.

Take any 16 year old American teenager, train them on driving, and they get it. They have a causal mental model to apply when unforeseen situations happen on the road. Their judgement might be poor, but the causal understanding is there.

Spend zillions of dollars training AI to drive, and when these systems encounter a situation they have not been explicitly taught, they fail, often catastrophically. There is no pattern to match. There is no causal understanding of driving by the machines. They are as dumb as rocks.

Autonomously driven vehicles are still fundamentally unsafe in the wild.

You Make a Grown Man Cry

Wait, what about that letter asking for a pause on AI training signed by Elon and others?

Well, who did not sign it? Andrew Ng, ex-head of AI at Baidu, and Yann LeCun, Meta's Chief AI Scientist. In the AI community, those are huge omissions. And my guess is they did not sign onto the letter because they get that AI machines are not sentient, have no causal understanding of any domain, and are just statistical pattern recognition machines. Feed them a large portion of the internet, and these machines can get fairly decent at pattern matching on language. It looks like magic. But the machines don't understand it. The machines are still dumb as rocks.

So why did Elon and company put out that letter, and why is Sam begging for the Feds to play in his sandbox? Educated guess. It's based on the anthropology of our massive regulatory state.

First, if you have an inkling that there is a material probability the regulatory state is going to want to jump in your playpen at some time, get ahead of the curve. Embrace it. Shape it in your favor as much as you can. All under the sincere guise of protecting Joe Citizen.

Second, once you have influenced the shape and nature of the regulation of your domain, leverage your relationships to drive the enforcement curve against your competitors. The regulators will love it, and you will love it, because it makes both of you rich in various ways. All for the benefit of Jane Citizen obviously.

Third, and this may be the big one, it may be a defensive move against the potential onslaught of copyright lawsuits. The AI machines were trained via vast consumption of copyrighted material on the internet. So cozy up to government, get protective administrative state actions and, if necessary, statutory assistance. For the good of the people of course. Feed part of your enormous downstream profits back into their pockets via "non profits", lobbying, etc. The standard playbook.

Sidebar: Have you noticed that in all industries, except for software, new business ventures have dramatically decreased, and there is a shocking drop off in the death of older firms and the birth of their replacements? Instead, massive consolidation has occurred in these industries. All commensurate with the massive growth in the regulatory state. This also accompanies a sharp consolidation of wealth at the top. Hmmm ... In the past, innovation and entrepreneurship were the path to success. Now it's regulatory capture and regulatory entanglement.

She's a Mean, Mean Machine

ChatGPT and its ilk are tremendous productivity tools if you use them correctly.

The other day, I was at a command line prompt wondering how to get a shell inside a Docker container on my machine (think of it as a VM, as I am not on a Linux box). I asked Bing's chat, and boom, it gave me a command that was 90% correct. I tried the command immediately, and it did not work, but I had enough to go on to sort it out in less than a minute (the container image is based on Alpine Linux, and the shell lives in a different place than the command assumed). Previously, I would have spent 30 minutes to an hour using Google search and wading through Stack Exchange/Stack Overflow posts.
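
If you hit the same wall, the shape of the fix is typically something like this (the container name is just a placeholder, and your image's shell may live somewhere else):

    docker exec -it <container_name> /bin/bash   # what suggestions often assume; fails if the image ships no bash
    docker exec -it <container_name> /bin/sh     # Alpine-based images do ship /bin/sh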

It's a great first cut at low-level administrative tasks and knowledge acquisition/search. But you have to cross-check it.

I'll Take You Places That You've Never, Never Seen

So what is the right question?

What do we do in the distant future (and we still have a long way to go) when these black box, statistical pattern recognition machines appear to have a causal understanding of any given domain (but they don't)? When they appear to be sentient (but they are not)?

When looking from outside the black box, the machine appears to be human from all angles (but it is not). It will still be as dumb as rocks, with no causal motivations and intentions, no self-interest ... but we just will not be able to observe that from the outside.

That is the question Philip K. Dick asked (Do Androids Dream of Electric Sheep?, aka Blade Runner) [4].

What then dear reader?

Peace
TUC


  1. #synonymsdefinitions

  2. #caveatsandadmissions

  3. #rollingstonesstartmeup

  4. #doandroidsdreamofelectricsheep

#AI