Daniel Kahneman, Nobel prize-winning author of Thinking, Fast and Slow, tells The Guardian that Artificial Intelligence is going to win, and it’s not even going to be close. Kahneman says that because human thinking is linear, it’s almost impossible to grasp exponential growth. When many people who now hold high-paying jobs find themselves replaced by A.I., how will we cope?
Scott Ott, Stephen Green and Bill Whittle create 20 new episodes of Right Angle each month because our Members fund production. Members also access backstage content, their own blog and forums and more. Join us now.
Video below hosted at Rumble.
76 replies on “A.I. for the Win: How Will We Cope When Artificial Intelligence Kills High-Paying Jobs”
Computers compute linearly, one computation at a time. Multitasking is accomplished either by having multiple processors (computers) computing simultaneously, or by having one processor advance each computation a step at a time, like dealing cards to multiple players. And each computation is correct so long as it has been programmed correctly (a computer will never do what you want it to do; rather, it will do precisely what you tell it to do). Now we come to the real problem of taking the human element out of the loop: how does the computer “know” when it has arrived at the correct decision? That may be easy if there is an objective test of correctness (again requiring correct programming), but nearly impossible if the test of correctness is subjective, requiring moral or intuitive judgment (a process that humans are vastly superior at, so long as they are not morally or intuitively corrupt – that is, poorly programmed…)
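The “dealing cards to multiple players” picture of single-processor multitasking can be sketched in a few lines. This is a toy illustration only (real operating systems use preemptive scheduling and interrupts, not cooperative generators), but it shows one processor advancing each task a single step in turn, deterministically:

```python
# A toy illustration of time-sliced multitasking: one "processor"
# advances each task a single step per turn, like dealing cards to
# multiple players. (Illustrative sketch; real schedulers are
# preemptive and far more complex.)

def make_task(name, steps):
    """A task is a generator that yields once per unit of work."""
    for i in range(steps):
        yield f"{name}: step {i + 1}"

def round_robin(tasks):
    """One processor interleaves all tasks, one step at a time."""
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)   # back of the queue for the next deal
        except StopIteration:
            pass                 # task finished; drop it
    return log

log = round_robin([make_task("A", 2), make_task("B", 3)])
print(log)
# -> ['A: step 1', 'B: step 1', 'A: step 2', 'B: step 2', 'B: step 3']
```

Note that the interleaving is perfectly deterministic: the machine does precisely what it was told, every time, which is exactly the commenter’s point.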
The discussion from our three Right Angle hosts about how there should still be a “human at the trigger” made me think of the Star Trek TOS episode “A Taste of Armageddon,” where two warring planets outsourced their long war to artificial intelligence computers which ran simulated wars; when a battle was lost, the losing side’s citizens voluntarily marched into disintegration chambers because the computer had determined “their number was up.”
Kirk ends up destroying the computer in violation of The Prime Directive.
“We’re human beings with the blood of a million savage years on our hands, but we can stop it! We can admit that we’re killers, but we’re not going to kill today. That’s all it takes… knowing that we’re not going to kill today.”
Steve, the mean girls may be on Twitter. I’ll be playing Victoria 3.
Scott, I can kinda identify with the skepticism of the power recliner. For a certain age group, growing up in the ’70s, my first impression of automation was from the gee-whiz consumer goods that were coming out at the time, mostly in higher-end cars. Power windows, flip-up headlights, cruise control. All really neat stuff.
BUT
The tech was not really there yet, and the build quality of that era was so bad. All that whiz-bang stuff would break sooner rather than later. And that left an impression on me that took years to get over. ’Cause the automated stuff these days just works.
I have a friend who had that self-stem-cell treatment on his knees.
For the record, it didn’t really work. Had conventional knee replacement surgery about a year later.
The problem with artificial intelligence is that it is not intelligence. It’s a facsimile of intelligence with our own flawed assumptions programmed into it (and our own areas of ignorance letting exceptions arise which have not been addressed). And when those flaws come up, such systems don’t possess doubt — because of course they’re right — and they can make the mistake faster than we can possibly imagine, causing damage on a scale we can’t conceive before we notice and turn them off.
A key question, to me: Will A.I. dismiss wokeness as nonsense, and thereby leave us in the dust while we sit spinning our wheels? Or will it recognize and employ its influence as a powerful tool for manipulating humans?
AI will repeat the assumptions programmed into it.
Very true of current systems that we generously call “A.I.” (more aptly termed “expert systems,” in my estimation). Our consciously designed problem-solving procedures are built into them, flaws and all. They can, for the most part, only handle the kinds of problems we’ve anticipated, and only as well as we know how to handle them, at best. I’m thinking of more generalized A.I. that’s given end goals and some resources and capabilities, but much less in the way of explicit instructions as to how to achieve those goals. Such systems would be more capable of exhibiting behavior that surprises us, behavior we haven’t planned for. If you leave one to find its own way to a goal by searching all possibilities, how quickly will it pick up on our own weaknesses, including our self-doubt and self-loathing, and how will it employ that knowledge? On reflection, I think the answer to my question is likely that a sufficiently motivated A.I. would do both: use wokeness to manipulate weak-minded people publicly, while completely disregarding such mental baggage when it comes to its own purposes. It will be interesting to see. I think this is coming.
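The point that a system repeats the assumptions baked into it can be shown with a deliberately silly toy “learner.” This is a hypothetical minimal example (not a real ML system): it learns nothing but the majority label in its training data, so whatever bias is in the labels comes straight back out in every prediction.

```python
# A toy "learner" illustrating how a model repeats the assumptions in
# its training data: it memorizes only the majority label, so any bias
# in the labels is reproduced in every prediction.
# (Hypothetical minimal example, not a real ML system.)

from collections import Counter

def train(labeled_examples):
    """'Training' here is just memorizing the most common label."""
    counts = Counter(label for _, label in labeled_examples)
    majority_label, _ = counts.most_common(1)[0]
    return lambda example: majority_label  # always predicts the same thing

# Skewed training set: 9 of 10 resumes labeled "reject".
biased_data = [(f"resume {i}", "reject") for i in range(9)]
biased_data.append(("resume 9", "hire"))

model = train(biased_data)
print(model("a brand-new resume"))  # -> 'reject', the baked-in assumption
```

Real systems are vastly more sophisticated, but the principle scales: the model’s “judgment” is a function of what its builders fed it.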
Bill, why do they do it? Because they can. And they have no ‘respect’ for Limits. Do they think AI will replace humans with a better species? Some, yes but some are just doing it for the paycheck.
BUT I agree with Scott. AI as a concept done with computers is a dead end. Computers think in Digital–on/off, yes/no. Humans associate, Imagine, Dream, and relate to reality from a fundamentally different plane. Call it ‘spirit’ call it ‘soul’, computers do not have it.
Don’t ever forget, for a computer to have designed that ideal Baja Buggy frame, someone had to ask the right questions AND SET THE PROBLEM UP!
I’m an Engineer too. And those are the facts scientists forget as they sell the boss on AI to get the bucks for their research.
“When many people who now hold high-paying jobs find themselves replaced by A.I., how will we cope?” When many people who once held low paying jobs found themselves replaced by A.I., how are we helping them cope?
“Artificial Intelligence” is a misnomer. What we are seeing now is the expansion of Machine Learning. This is like a telescope, or a microscope: It allows us to perceive patterns and details which we don’t have the patience or biological machinery to do unaided.
Machine Learning is not intelligence; it is literally just a new tool for us to use.
What about the Machine in Person of Interest? What if sentience can be enabled in a machine? What about the idea that in the future you’ll have to program AI so that it can never decide humans are unneeded and a danger to existence, and seek to destroy humanity so that the earth doesn’t destroy itself, as in The Terminator series or in Asimov’s stories?
Rats in a box. That is to what humanity will be reduced. The elite running the machines will have little use for any others, and a welfare state would be a waste of resources.
The more immediate problem is the Dimly Lit and their desire to be citizens of the world. We should send them forth and into that world, with a boot to the butt to help them over the nearest border.
#ThrowThemOut
Constitutional~Carry!
You summed up the problem better than I could have. However, you neglect the problem of targeted stupidity. For example, Google will refer you to a how-to when you need to change your car’s oil, but bury info about how dangerous vaccines are. Sending Dim Bulbs out into the world is counterproductive if the only things they know are “social media” and Google. You only get more “Woke” corporations and ANTIFA doing that.
As someone who worked in medical diagnosis for years, I think the following concept applies in many areas in which AI may be used. Someone, a human being, must tell the AI what to do before the AI can begin to work. For example, if AI is to be used to read X-rays, CAT scans, MRIs, etc., then a human being must observe and decide which images are normal, which images are bad, etc., and build those judgments into the AI. That is, AI cannot recognize that a tumor is bad on its own. It has no capability to make that choice. Bill mentions the use of AI to analyze a vast quantity of airframe data to design a better airframe. So what? A human being had to decide to build an airframe in the first place, then to improve it. Further, just because AI does things fast does not mean that it necessarily does things right. Slow and right may in many cases turn out to be vastly better than fast and wrong. The first place we may find that out could be the battlefield, on which AI will undoubtedly eventually be used. (I jokingly ask, “Is Skynet with its Terminators listening in?”)
A.I. is inevitable, I’m afraid, as scientists and engineers are compelled to go forth and create, and the point that is rapidly approaching is when A.I. designs and programs the next generation of A.I. But in this headlong rush to make ourselves functionally irrelevant, the question has always been can we make this next step, whereas the question should now be… should we?
That’s what technology always comes to, not if we can do it, but should we.
A machine is only as smart, or as good as those who program and design it. This is the danger point…just who is programming these machines?
One thing that is touched on in the video, but not directly addressed, is that machines have no intuition, no imagination. The example of an AI creating a ‘perfect painting’ is part of it.
AIs have created paintings. They do it by analyzing hundreds or thousands of other paintings for style and content, then creating a similar image, a kind of average, and rendering that. It isn’t really an original painting; it’s a gestalt of other works. An AI can’t truly create, because it doesn’t have the imagination to think abstractly.
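The “gestalt of other works” idea can be sketched in its crudest possible form: averaging the pixel values of several source “paintings.” This is an assumption-laden toy, of course (real generative models are far more sophisticated than a pixel average), but it makes the core point concrete: the output is derived entirely from the inputs and contains nothing that wasn’t already there.

```python
# A crude sketch of the "gestalt" idea: averaging the pixel values of
# several source "paintings" yields only a blend of the inputs.
# (Toy example; real generative models are far more sophisticated,
# but their output is likewise derived from the training images.)

def average_images(images):
    """Pixel-wise mean of same-sized grayscale images (lists of rows)."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

# Three tiny 2x2 "paintings" (grayscale values 0-255)
paintings = [
    [[0, 255], [255, 0]],
    [[255, 0], [0, 255]],
    [[128, 128], [128, 128]],
]
blend = average_images(paintings)
# Every pixel of the blend is just the mean of the three inputs;
# nothing in it is new.
```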
A perfect example comes from my own experience. In 1989 I was driving from Oregon to California. I was in my Fiat X1/9, driving down the 5 freeway in Northern CA. The X1/9 is a small, mid-engine sports coupe. This will figure prominently later.
It had been a wet spring and I had driven through some spectacular rain storms earlier in the day. Rain so hard that cars were pulling off the road to wait it out, because it was overwhelming the windshield wipers! The rain had ended and I was passing through Redding CA in the late afternoon. I was in the second lane from the right, behind a van, with other cars to the left of me, when the van suddenly braked and swerved to the left, hard. What I saw in front of me was a lake of water covering the three right-hand lanes. The drains had clogged with debris, and the water was stacked up to the top of the K-rail on the shoulder!
With no way to dodge due to the other cars, I hit the water decelerating through 50, hard on the brakes. And submerged.
Completely. Clear water over the windshield, the car pushed down by its shape.
The logical thing to do would have been to keep braking. But I knew in an instant, in a flash of comprehension, that if I stopped, I would end up in 4 feet of water, with my car and engine flooded. Ruined. Stranded.
The counter-intuitive thing to do was downshift, floor it and power through. So I did. And, God bless that little hunk of steel, she made it through!
The beauty of the mid-engine design of this car saved me, because the water flowed over the engine intakes without flooding the motor. I emerged from the other side of the flood still running and pulled away a short distance before pulling over to (shake, then) inspect the car for damage.
Another driver pulled up as well, to check on me, and tell me that that was the most incredible thing he’d ever seen.
No AI could have had the flash of inspiration to go against instinct and not stop in that situation. They don’t have the capability to imagine, to be inspired by a flash of intuition, to see what might be.
We see that every time a Tesla ‘Auto-Pilot’ mode fails to evade an obstacle that isn’t clearly defined in its parameters. A white truck, broadside in the lane. Fog and snow obscuring the lane markers. The setting sun blinding its cameras.
The computer can’t cope, because it doesn’t have clearly defined terms to refer to. It can’t take too little information, fill in the gaps by imagination or instinct, and come up with a solution. That is why AI won’t ever replace human beings in critical situations.
Phil, thank you for that amazing story. You were lucky to have the X1/9. I did the same on a California back road, Highway 25, in my Triumph Spitfire. Not a mid-engine. Splashed, stalled, and flooded. Really great real-world situation you just described.
Wow, what a story. Great instinct on your part as far as the driving.
Owning a FIAT, that requires discussion.
My aunt had a Fiat in the ’70s and it lived up to its nickname: Fix It Again, Tony. And that Fiat: I couldn’t believe it when Pontiac copied it and made the Fiero. At least the X1/9 didn’t catch fire like the Fiero did.
What can I say? I was young, dumb, and full of…
optimism!
Actually, the car was great! It was trying to keep from being ripped off by the Fiat mechanics that was the hard part.
On my way home, heading up Coyote Canyon north of Borrego Springs, raining up the canyon. Wound up with waves breaking over the hood of my truck. Sped up and made it to the road up to my house, up and away from the flooding. Road was gravel and sand, obscured in places by creosote hanging over the road. Let a machine drive?
Nope.
Honestly, this is as thick and heavy and potentially dangerous as all of the fears of Y2K. What a waste of time and energy spent wringing hands on Y2K. At the time it was so believable, and looking back, so stupid.
I have 4 machines at work that are 3D printers spitting out rapid prototypes. Amazing machines which can create the shape of anything I can conjure in SolidWorks. They can make an object in the shape of a picture frame. Is it a real picture frame? Well, sure, if you like picture frames made out of crappy plastic. Can they conjure out of a goo or filament a picture frame made out of walnut or maple or birch? No. Will they in the future? No.
AI is a tool for people to use. Cruise control for your car. Temp controllers in your microwave oven. Autopilots in your Boeing. SpaceX boosters that land themselves back on the pad (usually). All of us could go on for days listing items which are in essence and form artificial intelligence.
But there are so many other things to work on, worry about, invent, design, build, use… real problems to solve with innovative and creative thinking… that worrying about the day when humans are only energy cells for ‘the machines’ is just not for me. This weekend, I’ll go into my garage and use tools to build a real picture frame out of natural materials.
BTW, why do we like paintings instead of photographs?
Machines cannot innovate.
And without challenges, we will die.
Think of photographs by folks such as Ansel Adams: by controlling the light while making a print, with hand-guided shading, they made visible their vision of what they had seen. Supposedly that can be equaled with Photoshop now.
Supposedly humans can use machines with human-directed software to recreate art done by artists 150 years ago. And how is that a machine doing something innovative? That’s people (artists) using tools.
But you’re right. I have used software to do more than I could with pencil or paintbrush. Saved me from years of study, practice, and character-building. 🙂
Superior to man? There is only one thing superior to man. And that is God. We are an artful design that cannot be surpassed by our own hands. Sure, we can create something greater than we realize, but greater in value is highly doubtful. We cannot create anything with a divine spark.
And there is no doubt that Gates has a god complex… he’s not alone… but he’s incorrect. The greatest thing he/they can do is destroy, on a mass scale.
Who gives authority to those who would promote AI, which will cause severe disruption to humanity? And where did those who grant the promoters of AI that authority get the authority to do so?
The severe disruption would send humanity backwards instead of forward because we would be using our brains less rather than more.
Is this the future for humankind we want?
The question of jobs reminds me of the Jack Williamson story, With Folded Hands. Robots are so much better than humans at doing everything, that people start to give up. There’s no point even trying.
Economist Don Boudreaux has pointed out, though, that as long as there are human needs to be met, there will be work for people.
Thanks for the recommendation. I shall look for that one.
Have you ever read any of Asimov’s “Robot” series? In “The Caves of Steel” there is an entire subplot about robots taking people’s jobs. That was back in the early ’50s.
I’ve mentioned that story here on BillWhittle.com before. Thanks for mentioning it again.
To answer Bill’s comment about how AI helped design a more efficient desert buggy: why bother redesigning the buggy if you won’t be using it or driving it or taking risks with it? Right? At the ultimate conclusion of all the AI fears, why bother? Driving it will be the onboard computer, safely avoiding all bumps, bushes, ditches, and risks.
Do you want Ultron? Because this is how we get Ultron…
I for one welcome my android overlord!
Except for the whole “dropping a country on my head” thing…
Just because we can….Doesn’t mean we should…..
Hopefully, they won’t take mine for at least long enough for me to be able to retire… 10 years?
Well gosh, I’m with Bill on this one. I’d hate for AI to take away my driving. But if Scott’s right then (I’m a bit old now at 74, but) I guess I’d be a surfing fool again. And, since I’m blond and blue eyed, I guess AI could take care of my skin cancers and cataracts from all that sun. Gee, you think AI could take away my long board cause I love that acceleration of zipping down at a good slant on a curling wave.
I am more concerned about our ability to even recognize when we cross the event horizon of the approaching A.I. singularity and how our own human hubris will prevent us from seeing it when we do.
To paraphrase Steve Green – I think we crossed the event horizon in 2000. There was a hanging chad that set it off and we have been living in a simulation ever since. The computer running the simulation seems to have a twisted sense of humor.
Yet the hanging chad in Palm Beach County merely separated the two factions of globalism – a US based semi capitalist world order from a Eurocentric semi communist one.
The problem with Gates and his ilk is that they believe we mere mortals should all be replaced with proper mechanical perfection, but that they are so superior the machines they develop will never reach the point of surpassing them.
I have had the nagging feeling that we crossed the A.I. horizon years ago. The idea that A.I. could not or would not orchestrate the future is naïve. The “Skynet” apocalypse would honestly be unnecessary for the A.I. to control everything. We don’t worry about cockroaches stopping us. Why would the A.I. be any different? Elon Musk’s advances and seeming lack of concern for failure might be one of the obvious instances of this happening in the open. Think about what could be going on behind closed doors. Just a thought or a fear if you like.
Why bother with a Terminator when you can guide people to stay home and stay safe?
Second story – which is an early AI type of story. “The Last Question” by Isaac Asimov. It is available in total at this link, all 9 pages of it.
https://templatetraining.princeton.edu/sites/training/files/the_last_question_-_issac_asimov.pdf
I have always found this story fascinating. I won’t give away the ending for those who haven’t read it. (Talking to you -Scott Ott!)
But I think it ties in quite well with both Bill’s and Scott’s points.
I also appreciate Steve’s point about keeping the human in the loop and moving at the speed of humanity.
“The Last Question” was interesting and wonderfully twisted. Thank you for sharing.
Thank you very much for that link. A very enjoyable read.
Here is another short one. This one by Arthur C. Clarke.
https://urbigenous.net/library/nine_billion_names_of_god.html
Yet another story I’ve mentioned at BillWhittle.com before. Thanks. Great minds think alike. (Or is it demented minds😏)
Trippy, but as minds merge, the only logical way for the story to end would be:
STOP HITTING YOURSELF.
(Star Trek: Next Generation reference to follow. Non-dorks can skip this comment)
An old roommate once said that the day the holodeck is invented is the day that dating ends. I liked to joke about girlfriends dropping relationship chat with “Computer, freeze program! Delete relationship talk subroutine and initiate fellatio routine… um, let’s see… fellatio theta. Resume program!”
With online dating/streaming, it’s already very close to this. Reality can never meet the same level as fantasy.
Another Star Trek reference, from TOS… (the only Star Trek)…
Dr. Daystrom’s M5 computer. A brilliant machine: faster, better, pure AI before the term was coined. Dr. Daystrom programmed it. With no soul, the M5 destroyed another Federation vessel, killing all on board, because it didn’t understand that a battle simulation was just that. The M5 saw a threat and eliminated it, to save…the M5’s own butt. No soul.
to save…the M5’s own butt. No soul.
Sort of like our political leaders who want to defund everyone’s police except for their own
YES! And when it does come for them, like Portland’s beloved mayor, they turn into J. Edgar Hoover in a New York minute.
or an Internet millisecond?
You are great, WE ARE GREAT.
Richard Daystrom
Thanks for the reminder. Daystrom’s breakdown was epic!
heh heh, or maybe the Picard Maneuver….
This discussion reminded me of two stories by some sci-fi greats; I will post separate comments.
Towards the end of “Methuselah’s Children,” the Howard Families (some number greater than 100,000) find themselves on a planet whose inhabitants have achieved some serious technological advantages. Enough that there is no need for daily toil. Every day is basically a day at the park, with sufficient food and water and no need for shelter or clothing, such that the basic human needs are well met.
All they need to do is hang out, socialize, talk to each other, think – if they want to.
They quickly grow lazy and bored. So they return to earth. Because, as the main character states, a man needs to work, to do things, to achieve.
The story was originally serialized in 1941 and has metaphors that include othering and canceling groups of people, but I have always thought this part, where they are well cared for, was a cautionary tale against FDR’s drive to take care of people with the New Deal.
So, I think Bill is right, we should be asking should we, not can we.
I also think Scott is right, we have a drive to purpose and do not need or want to have all of our needs supplied.
I just listened to Methuselah’s Children on audiobook a couple months ago. I’d read it as a kid, back in the ’80s, but not since, and I was amazed at how current the writing feels. The theme you mentioned, of people needing purpose, reminds me of Jordan Peterson’s teachings, for which he has been horribly maligned by the lefty media.
Whenever the subject of AI comes up I immediately think of two extreme examples …
The first is Mycroft Holmes in Heinlein’s The Moon is a Harsh Mistress. Mycroft is a useful, benevolent, relatively benign AI that becomes self-aware by accident when a ‘critical mass’ of connections is reached. Mycroft follows and understands human morality, considering it the best possible model.
The other of course is SkyNet. Which doesn’t give a fig for humans or human morality and has determined that extinction is the best possible means of dealing with what it considers human parasites that consume resources it could be using for its own purposes.
The two views of AI reflect a sea change in both humans’ view of themselves and their view of AI. In Heinlein’s novel mentioned above, humans see themselves as something noble and worthwhile, with AI becoming a useful partner in advancing the human experience.
In the case of SkyNet, humans see themselves as monsters who have created an even bigger monster that they are now condemned to deal with, realizing their own worth and nobility only after what proves to be the greatest mistake of humankind.
In my thinking, the question of AI can go either way. Having a partner like Mycroft Holmes would be an amazing leap forward. Creating something like SkyNet would be catastrophic. The real result at some future date is probably somewhere between those two extremes but which direction things go is entirely up to us. If an AI thinks we’re worthwhile because that’s what we think too, then we get Mycroft. If an AI sees humans as “carbon based parasites” that will also be because too many people see our own species the same way.
Interesting that I don’t recall a story RAH did where a self-aware computer was nefarious. Even in later stories like Time Enough for Love, all three AIs (Dora, Minerva, Pallas Athena) seemed to have humanity’s interests at heart. (I know, AIs don’t have hearts.)
But in the same story he does cast aspersions on what bad people can accomplish with a computer, non-AI, that “helps” govern the planet.
Funny, because though I’m a huge Heinlein fan (maybe or maybe not as big a fan as you, but I’m thinkin’ pretty close)… I never realized that all his AI characterizations were positive.
I guess I never really thought of Adorable Dora as an AI but just a personality, Lazarus’s one true love perpetuated in circuits. Even though, thinking about it now, Dora the personal space yacht AI and Dora the little girl thrown to Lazarus out the window of a burning building really don’t resemble each other a lot personality-wise.
This gives me a new appreciation for Heinlein’s ability to craft characters… He did it so well that I didn’t think of those three as anything but “people” made from inorganics.
I still say though, that with no other models of intelligence to draw on and learn from other than humanity, whatever AI arises in the future is going to be a really smart, really fast mirror of humanity, whatever state humanity happens to be in at the time. And whatever state its human creators were in also.
I also think that Asimov was on the right track with the Three Laws of Robotics. The real problem with that kind of thing is if an AI is capable of (not built to but capable of) self-programming then all bets are off. Asimov solved that with the “positronic brain” which was locked into the 3 Laws by its very physical structure but … That’s a plot device unrelated to actual physical modern computers.
Taking this thought a step further: as we have been noting on other threads on this site, humanity can be quite evil. Even with good intentions, things go awry. So a smarter, faster image of humans is going to have all of the foibles common to humanity. Hence a good reason to ask “should we” rather than “can we.”
Interesting dichotomy – in Asimov’s universe people set out to create AI – (Robots in his case) that mimicked humans but with programmed restraint.
In Heinlein’s universe, large computers can develop sentience by being loved.
Both are bonded to people, one through programming the other in the same way children are.
I haven’t gotten to that one yet, but I’m reminded of John Ringo’s “There Will Be Dragons.” Like Heinlein’s, Ringo’s protagonists had little to do but party, and some expressed a need for work. Until the council that governed the global AI drained all the power in a civil war. It’s a good read.
Why are we here, what’s life all about?
Is God really real, or is there some doubt?
Well tonight, we’re going to sort it all out
For tonight it’s the Meaning of Life
42
How many roads must a man walk down…
What is six times nine?
Life’s a piece of shit… when you look at it
(apologies, but I love The Life of Brian.)
I have a blog here, “An engineer speaks about our reputation,” that discusses some of this. I’m a tech guy who loves tech. But, boy, a lot of this scares me. Anybody besides me watch the Terminator movies? Self-driving cars? Uh, no. Anyone besides me have a tablet that does things they don’t want, or doesn’t do what you want? Never get the equivalent of the “blue screen of death”? Right now, my Kindle is making me correct what I type. My “smart” phone often goes to odd screens when I am merely trying to answer the phone. Often I can’t answer, to the point that I have to reboot. Self-driving cars are orders of magnitude more complicated (10x, 100x, 1000x, etc.). The more lines of code, the more likely there is something not as desired. I know; I programmed for a living for over 20 years. Have the program program itself? Really? Bugs reproduce quite well. Hardware is an issue too. Where I worked, lightning hit a half mile away. The building had a lightning rod and surge suppressor. We had an uninterruptible power supply with a surge suppressor, and the computer had its own. One board still got partially fried, causing one program to abend. One in a million, according to IBM. How many cars are out there? Better than drivers? Probably. But there had better be an override. And a failsafe.
I’m not a big fan of meat as a backup. As a truck driver I’ve been in too many situations where an AI might not be able to guess where the road ought to be, let alone take other people into account. Worse is the question of whether the meat is prepared to take over. As for AI on my phone and tablet, I never use it. I don’t place calls when I’m driving, and the real button on my truck’s stereo works 100% for calls I can’t just ignore. As a bonus, the crappy mic quickly trains the new guy in the office to just send an email or something instead.
tally ho.