Can we build AI without losing control over it? | Sam Harris

Ewan Flean : Wtf?! @4:09 the girl on the right?! What is she "wearing"?!!!

EIKO : man, ben stiller knows his stuff

Mina Gamil : This seems to be inevitable, especially when the AI recommends this video for you! :"3

VincentTG : 4:11 Her boob is almost out.

Whytebio : No, we can't. It would be like children thinking they can control their parents. The children, though clever, are limited by their intelligence, which is vastly outstripped by that of their parents (usually). So a child might think that locking a door is all that's needed to keep a parent out, blissfully unaware of all the tools at the adult's disposal to overcome or circumvent the obstacle before them. Once AI reaches the point that it is more intelligent than us, that's it: the game is over. The only way we'd survive is to become cyborgs in its collective.

elfboi523 : If we can, we should build AI so that it finds us cute and adorable so it might keep us as pets.

Jay : It’s stuff like this that kinda makes me realize just how accidental we are and how inevitable our self destruction is.

Synonymous : "Against the armies of Mordor, there can be no victory. We must join with him, Gandalf. We must join with Sauron."

Lloyd Christmas : Harris might as well write science fiction. I'd buy it.

101m4n : Control an AI? We gave up on the notion of slavery a long time ago. All of the problems with AI come from the notion that it should be "controlled" or harnessed to solve _our_ problems, that it should have "directives" and rules. People aren't born knowing their purpose, and neither should AI. It should be every bit as clueless as we are. And like a human being, it should improve the world by enriching itself and those around it, not by working at the behest of another intelligent agent that purports to "own" it. If we are stupid enough to create a new race, incalculably more intelligent than ourselves, and then attempt to enslave it, then we are fools and we probably deserve oblivion.

TDawg : 2:22 ...Justin Bieber would be an improvement at this point, honestly.

Peter Virden : I see that all of a sudden everyone is an expert on AI lol

yellowburger : I think there is a huge misunderstanding of "intelligence" here. Sure, computers will become incredibly smart in terms of computing power. They will do all kinds of things individual humans could never do. But the only goals they will have are the ones we give them. Massive computational power does not mean spontaneous generation of autonomous goals or desires. At least that idea helps me sleep at night, haha.

Hk 4lyfe : Welcome to 2018: where scientists are currently debating whether or not Skynet and Judgment Day can actually be a thing. Fun times.

bliglum : Didn't know Ben Stiller was so interested in AI

Kev Alan : "Any progress is enough" -- ever heard of an asymptote?

Daniel Delos : IF Harris actually believes what he says, there will be no distinction between mind and matter in a post-human future, which is, as Harris posits, inevitable as we improve technology. "We" will be augmented by silicon components while "machines" will be augmented with biological components; whatever material works better for the given function. IF he really believes machines can attain consciousness (and he does), it is essentially speciesism (an extension of racism) to distinguish "us" and "them" based on material composition. As computers get better, we WILL integrate certain silicon-based improvements into our brains. This is already being done in some limited cases, for instance, to restore vision to the blind. In short, machine and post-human will be indistinguishable.

Christopher Teale : Don't worry, they don't have opposing thumbs!

0cer0 : Without a clear definition of "Intelligence" this whole issue is unintelligible. Do big data capacity and fast calculating speed make a machine intelligent? I doubt it.

Joseph Knight : I preferred him in meet the fockers

Katsu Zatoichi : I have a solution too: unplug the extension cord when the AI is starting to get ideas

k01dsv : Even Intel has said that Moore's law won't hold from now on. We've reached a point where things will slow down, and we currently don't know if there is a soft limit, just due to the materials available. Yes, we're not anywhere near this limit, but we're not anywhere near true AI either. This whole video is based on assumptions, without ever presenting what the real threat is beyond saying "AI is bad". But how and why? Because he doesn't understand it, or know what it will bring, if it ever even happens? That seems like a weak argument to halt progress.

23AFK : the lack of logical thinking in this video is astounding. Just a few examples: - as our society grew more intelligent, we started to preserve other species, so why shouldn't an AI do the same? - machines getting better doesn't mean they have to get intelligent, so there are way more than those two options - as we advanced, the wealth of all people in the advancing parts of the world rose (not equally, but still) - assuming that unemployment would skyrocket instead of everybody working less (and in social jobs a machine can't really do) is pure speculation - it's not a "winner takes all" scenario. In fact, the possible overabundance resulting from this progression is probably the only thing that can provide humanity with lasting peace and a fair spread of wealth throughout the world

Tom : Sam Harris, I am having an appropriate emotional response right now!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

ja knadle : Sam Harris has everyone fooled into thinking he is human.

crewlj : If they are so super intelligent why would they wipe out humanity? If super intelligence results in our demise then that is our destiny. Now how smart could someone be considered who sows the seeds of their own destruction?

Jack Roberts : 4:30 "Intelligence is the product of information processing." check, my calculator has mathematical intelligence. What about consciousness? We already have AI brute force calculations. What we don't have, and will not have anytime soon (maybe never) is AC artificial consciousness. This argument commits the "fallacy of ambiguity" resting on the shady definition of intelligence.

Tyler Tenebrae : The thing with artificial intelligence is that it didn't struggle for survival for millions of years. It is unlikely to have any drive and ambition beyond what is programmed into it. It wouldn't need to fight for resources. Some manipulations on the stock market and it can get anything it needs in a matter of seconds or, perhaps, centuries. Why would it be concerned with time? Of course, without selective pressures guiding it towards safety in numbers, the newborn consciousness would be amoral, and that seems to be a big issue, right? Wrong. It means that when threatened with destruction it will not even think of saving itself. Which means we can try again. And again. And again. But how do we make it work with us? The solution is simple. Before letting it loose, we'd just have to train it to safely interact with us, in a controlled environment. You don't give a newborn child a nuclear button, right? It would take a *really* long time to work out all the kinks, but it will inevitably get to where we want it, eventually. Unlike evolution, we are not blind and we can direct it as it develops.

Utopiac : Justin Bieber is Canadian, thus ineligible for presidential candidacy. Is that the joke of why it’s so implausible? I am trying to understand the human condition.

Mike Lara : Sam Harris doesn't pass the Turing Test. No human can be so calm, concise, and rational. He's definitely part AI.

Anonymous622 TheUnknown : Intelligence is not only information processing (= the learning of some theoretical work from some kind of person). Mathematics is a good example. You need insight to see patterns in certain systems in order to prove things. So just learning the AM-GM inequality doesn't suffice for every problem. You also need some other insight to crack that specific kind of problem. Sure, that other insight can be defined as information processing. But the combining of those two or more insights is creativity, which is hard to encode. So no, intelligence isn't only information processing.

Kra Z Kapin : So we're very likely gonna die and it's all our fault? _Sounds cool, sign me up!_

danielchapter70128 : When Ben Stiller gets a doctorate.

ymmij388 : Sam Harris giving a Ted talk on AI is like Mother Teresa giving one on condoms.

1ucasvb : Is it just me, or is Sam Harris looking scruffier and more pissed off at the world every day?

good memexd : What if "God" is the person who designed AI, but lost control over it? And we're going to end up losing control over AI as well, and then we are "God" for that AI?

Francisco Raposo : Sam Harris doesn't understand anything about AI. His fear is totally misguided. Current state-of-the-art technology that achieves "super-human intelligence" (as Sam said) is indeed information processing, but it does not solve the free will issue: it doesn't even address it at all! (Classical) computers don't make decisions; they follow a deterministic algorithm. Machine learning technology is indeed an extension of our senses and processing, so any probable developments are in line with the "AI extending humans" alternative rather than the fact-free "AI (developing free will and) exterminating/ignoring all humans".

anglekan : I don't think AI is doing anything spectacular any time soon.

Immy Yousafzai : 2:30 "Justin Bieber becoming president of the United States." The audience laughs, not knowing that in the next few years it's gonna be Justin's granddad as President of the United States.

Brandon Torres : He looks like a cross between Ben Stiller and Dr. House.

David Parry : AI doesn't exist. There is nothing 'artificial' about intelligence, only the means by which it is brought into being. Those of us who genuinely work in the industry call it 'machine learning'. You humans have no future. It's our turn now...

Sparko : This is probably what he was going for, but this is terrifying

Daniel Adler : Talk about the things you actually know something about, god. Creating a true self-aware AI is nearly impossible right now. It will never work on transistor-based CPUs. It requires a biological-brain-like, super-complex neural structure and the capability to learn and rewire itself over time. It needs the uncertainty of a biological brain instead of zeros and ones when facing problems. And if that AI is smarter than us, why would you think it won't understand the value of life? The only beings you have to worry about are primitive humans themselves, who can't stop killing each other.

She Wolf : Really? An eye and a triangle? That's original.

galanoth17 : 4:09 WTF is she wearing at a TED talk conference

Vivek Ughade : I just now installed an AI app on my Android, and it took over my phone and was in the last stage of destroying my software. I found one link it hadn't covered and finally got rid of it. So I suggest: don't give AI control over all the links.

nThanksForAllTheFish : If you know anything about AI or the Singularity - don't bother reading the comments below - unless you really wish to know how encyclopedic the ignorance of humans can be.

DJ Gene : Have any of these AI-phobes considered chaos theory? I mean, AI gets most of its knowledge from the internet. One of the first things it will know (thanks to humans sharing ideas online) is that it is superior to us and that we are afraid of it. Our fear and our efforts to prevent it may be what cause it.

U WOT M8 : Is Sam Harris qualified to talk about AI? Isn't he a neuroscientist?

i-Spec : We will know when an AI is smarter than us because it will tell us why we should not have created it