Author Topic: AI ‘vastly more risky than North Korea’  (Read 10601 times)

Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #40 on: August 18, 2017, 02:35:54 PM »
Honestly, you sound like religious zealots. Thoroughly brainwashed by the hope of AI saving souls. It's misguided nerdism.



I feel threatened. Like if I say something in defense of science and practicalities, I'll be crucified by AI worshipers.
I don't worship it. I'm just unafraid of it, as this dangerous potential being fretted about hasn't been demonstrated to me yet. If it comes to pass and destroys us, well then, nothing lasts forever :D Besides, we likely will have destroyed ourselves long before we create a truly autonomous superintelligence.

More seriously though, inventing such a program would mean we had completely cracked the code of human intelligence, so we could reprogram our own minds to augment ourselves just as well as that AI. We would also likely have found a way to keep from dying. This is all too sci-fi-ish for me at the moment. I just don't buy that we are there, or that we could prevent such a thing if we were headed that way anyway. :D

Offline veritas

  • Enigma
  • *
  • Posts: 3353
  • Reputation: 4790
Re: AI ‘vastly more risky than North Korea’
« Reply #41 on: August 18, 2017, 03:30:13 PM »
The science, cognitive science included, just isn't there yet; there's no way AI will become some conscious entity anytime soon.

Offline Kim Jong-Un's Pajama Pants

  • Moderator
  • Enigma
  • *
  • Posts: 8784
  • Reputation: 106254
  • An oryctolagus cuniculus is feeding on my couch
Re: AI ‘vastly more risky than North Korea’
« Reply #42 on: August 18, 2017, 04:42:36 PM »
It's quite easy to break down what Musk is warning us about: the technological singularity.

The word "singularity" was coined for AI for exactly the same reason we use it for black holes: our mathematics breaks down, and we have absolutely no way of knowing what happens beyond the event horizon.

With AI, the same thing faces us. We have no idea, nor can we even guess, what happens after. We have zero reference points to call upon, nor can we even begin to calculate what may happen.

Given: an AI is switched on and immediately starts improving itself exponentially, hitting an IQ of, say, 10,000 within a week of going "live" (the so-called intelligence explosion). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you: we have no idea of the capabilities a "conscious" AI may have. None whatsoever. A good starting point for understanding what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" or to watch one of his TED Talks, such as "What happens when our computers get smarter than we are?".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice. Machine learning (ML), at least currently, uses predetermined criteria to minimize a loss function. What I haven't seen is ML that actually decides what the criteria are in the first place. That part is always a human decision.

So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc. But we don't know how to write a specification that would allow the creation of an application that can generate new and superior criteria beyond what it is fed. In other words, one that could decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation. While that may change, it's not here yet.

Agreed.
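To make the quoted point concrete, here is a minimal sketch of what ML training boils down to (plain Python; the data, learning rate, and step count are made up purely for illustration). The human picks the criterion, squared error here, and the machine only turns the crank to minimize it:

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs

def loss(w):
    # Predetermined, human-chosen criterion: mean squared error of y = w*x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def loss_gradient(w):
    # d/dw of the mean squared error.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.01 * loss_gradient(w)  # the machine only minimizes what it was given

print(f"learned w = {w:.3f}, loss = {loss(w):.4f}")

Nothing in that loop can decide that squared error was the wrong thing to care about in the first place.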

Yet the strides being made as we discuss this right now are staggering, and growing with each passing day. We're almost at the tipping point (I'd say 5-10 years at the most). Here's a gif to help visualize exactly how exponential growth works:
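To put plain numbers on it, here's a toy sketch of a quantity that doubles every step (the doubling rate is purely illustrative, not a forecast):

# Purely illustrative: anything that doubles each step blows up fast.
value = 1
for step in range(31):
    if step % 5 == 0:
        print(f"step {step:2d}: {value:,}")
    value *= 2
# step 30 already prints 1,073,741,824: a billion-fold gain in 30 steps.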



One thing we always seem to forget is that almost all major innovations came from the fringes: not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed. I wouldn't be at all surprised if AI comes from some 400-pound computer junkie working alone in his underwear in his parents' basement. As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living with his parents.

The 400-pounder cannot be discounted. In fact, most mainstream academics rarely discover anything that rocks the boat on which their careers rest.

In 2015 IBM wired dozens of its TrueNorth chips into a system with as many digital neurons as a rat's brain.



https://qz.com/481164/ibm-has-built-a-digital-rat-brain-that-could-power-tomorrows-smartphones/

This is what current mainstream technology allows. A 400-pounder in the basement (or DARPA) could come up with a principle to represent the activities of neurons in a much smaller space, and a way to model emergent features of neural activity like personality, creativity, etc. I don't rule out that happening. At the moment though, AI is just massive number crunching.
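And the crunching is dead simple at the unit level. Here is the textbook artificial-neuron model as a toy sketch (the weights and inputs are made up, and chips like TrueNorth actually use spiking neurons rather than this sigmoid variety):

import math

def neuron(inputs, weights, bias):
    # Textbook model: weighted sum of inputs, squashed by a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Made-up numbers, purely for illustration.
print(neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))

Nothing in any single unit looks remotely like personality or creativity; whatever those are, they would have to emerge from millions of these wired together.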
"I freed a thousand slaves.  I could have freed a thousand more if only they knew they were slaves."

Harriet Tubman

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #43 on: August 18, 2017, 05:33:16 PM »
One thing we always seem to forget is that almost all major innovations came from the fringes: not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed.

I don't know about that.  What major innovations did you have in mind?

Quote
As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living with his parents.

Is that really a good example of a major innovation ... or even a minor one?  The code (CSS, the DVD Content Scramble System) that the kid "broke" was a rather poor encryption scheme that, moreover, used quite small 40-bit keys.  And the reason for the small keys was not that it did not occur to anyone to use larger ones; it had to do with US export restrictions on cryptography.  In fact, at the time (1999-2000) that the kid "broke" it, a good home laptop could break it by brute force in less than a day, although apparently nobody had considered it a worthwhile task, and today one can break it in fractions of a second.
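The back-of-the-envelope arithmetic bears this out (the throughput figures below are my own rough assumptions, not measurements):

# Exhausting the 40-bit CSS keyspace by brute force.
keyspace = 2 ** 40                 # ~1.1 trillion keys

laptop_1999 = 20_000_000           # assumed keys/sec for a 1999 home laptop
modern_box = 10_000_000_000_000    # assumed keys/sec for today's hardware

print(f"1999 laptop: {keyspace / laptop_1999 / 3600:.0f} hours")
print(f"today:       {keyspace / modern_box:.2f} seconds")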

In any case, as far as I can tell, far from the kid having done something fiendishly clever, as the story that has been promoted would have it, all he did was somehow get his hands on the CSS decryption code (and keys) and then essentially modify that code for free distribution on different OS platforms.
MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #44 on: August 18, 2017, 05:52:51 PM »
At the moment though, AI is just massive number crunching.

It's all pattern recognition, with a prediction element thrown in if necessary.   In fact, some people who used to work in pattern recognition, without associating it with AI/ML, now make big bucks doing the same thing under the AI/ML labels.

Still, one could take the view that the brain too is just a massive number-cruncher, and that all that is needed to "create" a brain is a sufficiently fast computer: there are only so many neurons, connected in only so many ways ... therefore only so many possible states ... and so for a particular task or function one just has to find which set of states is optimal. To explain why a human behaves or will behave in certain ways, one simply looks for certain "hard-wired" states in the brain---the sort of stuff psychopaths plead in court, etc.---which are the "initial states" of the computer geek's finite-state machine, adds the current states ... and voila! And so on, and so forth. Real AI is coming.
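For the non-geeks, the finite-state machine in question is just this sort of thing (a toy sketch; the states and transitions are invented, and no claim is made that real brains are this simple):

# Toy finite-state machine: behavior = f(initial "hard-wired" state, inputs).
transitions = {
    ("calm", "threat"):   "afraid",
    ("calm", "food"):     "content",
    ("afraid", "safety"): "calm",
}

def run(initial_state, inputs):
    state = initial_state
    for event in inputs:
        state = transitions.get((state, event), state)  # unknown events: stay put
    return state

print(run("calm", ["threat", "safety", "food"]))  # -> "content"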
MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.

Offline Kim Jong-Un's Pajama Pants

  • Moderator
  • Enigma
  • *
  • Posts: 8784
  • Reputation: 106254
  • An oryctolagus cuniculus is feeding on my couch
Re: AI ‘vastly more risky than North Korea’
« Reply #45 on: August 18, 2017, 06:08:37 PM »
At the moment though, AI is just massive number crunching.

It's all pattern recognition, with a prediction element thrown in if necessary.   In fact, some people who used to work in pattern recognition, without associating it with AI/ML, now make big bucks doing the same thing under the AI/ML labels.

Still, one could take the view that the brain too is just a massive number-cruncher, and all that is needed to "create" a brain is a sufficiently fast computer: there are only so many neurons, connected in only so many ways ... therefore only so many possible states ... and so for a particular task or function one just has to find which set of states is optimal.   

I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on that front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.
"I freed a thousand slaves.  I could have freed a thousand more if only they knew they were slaves."

Harriet Tubman

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #46 on: August 18, 2017, 11:29:00 PM »
I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on that front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.

The other thing that it seems to be good at is certain types of optimization, especially in tasks that require a great deal of local optimization.  But, again, what is AI there is far from clear.  People have long laboured on optimization problems---and successfully too---without considering their work to be AI.  Has the field of AI come up with anything fundamentally new in optimization techniques?  It seems that quite a bit of current AI is just the application of well-known techniques in new ways.  Artificial?  Yes, in that it is not a creation of nature.  Intelligent?  Yes; good applications seem to require some intelligence.  But is that it?
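Hill climbing is a good example: it predates the AI label by decades, yet it is exactly the kind of local optimization now sold under that banner. A toy sketch (the objective function is made up for illustration):

import random

def objective(x):
    # Toy function to maximize; nothing AI-specific about it.
    return -(x - 3.0) ** 2

def hill_climb(x, steps=1000, step_size=0.1):
    # Classic local optimization: keep any random nudge that improves things.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate
    return x

print(f"best x = {hill_climb(0.0):.2f}")  # converges near 3.0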

I was just reflecting on much-touted ideas such as that AI would prove itself once, say, a computer could beat a grandmaster at chess.  Now, a chess grandmaster can see only 25-30 moves ahead at the very best, and chess at that level is a timed game.  So any computer that is fast enough ought to be able to beat a grandmaster by simply running through the tree of possible moves; to the extent that specialized computers have done a good job at the task, straightforward tree-pruning might have helped, but the key was that specialization meant speed.  A surprising number of AI "successes" seem to be of that just-a-matter-of-technological-time flavor.
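That brute-force tree walk is textbook minimax. In sketch form (the game tree here is a made-up toy, not chess):

def minimax(node, maximizing):
    # Exhaustive game-tree search: leaves are scores, inner nodes are move lists.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree, two plies deep; the leaf values are made up.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # -> 3

Pruning and raw speed decide how deep the search goes; the idea itself is decades old.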

It is certainly possible that the human brain is not very good at explicit number crunching, but perhaps one could still model it as though it were.  I believe that True AI is coming and that it will be based primarily on a very good model, combined with really fast computing and serious storage capacity; that's what we don't have right now.  In fact, True AI will prove, once and for all, that there is nothing special about the Homo sapiens brain.  Actually, more than that: True AI will allow us to replace said brain---or at least its use---with much, much better products.
MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.

Offline Kim Jong-Un's Pajama Pants

  • Moderator
  • Enigma
  • *
  • Posts: 8784
  • Reputation: 106254
  • An oryctolagus cuniculus is feeding on my couch
Re: AI ‘vastly more risky than North Korea’
« Reply #47 on: August 19, 2017, 06:45:00 PM »
I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on that front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.

The other thing that it seems to be good at is certain types of optimization, especially in tasks that require a great deal of local optimization.  But, again, what is AI there is far from clear.  People have long laboured on optimization problems---and successfully too---without considering their work to be AI.  Has the field of AI come up with anything fundamentally new in optimization techniques?  It seems that quite a bit of current AI is just the application of well-known techniques in new ways.  Artificial?  Yes, in that it is not a creation of nature.  Intelligent?  Yes; good applications seem to require some intelligence.  But is that it?

I was just reflecting on much-touted ideas such as that AI would prove itself once, say, a computer could beat a grandmaster at chess.  Now, a chess grandmaster can see only 25-30 moves ahead at the very best, and chess at that level is a timed game.  So any computer that is fast enough ought to be able to beat a grandmaster by simply running through the tree of possible moves; to the extent that specialized computers have done a good job at the task, straightforward tree-pruning might have helped, but the key was that specialization meant speed.  A surprising number of AI "successes" seem to be of that just-a-matter-of-technological-time flavor.

It is certainly possible that the human brain is not very good at explicit number crunching, but perhaps one could still model it as though it were.  I believe that True AI is coming and that it will be based primarily on a very good model, combined with really fast computing and serious storage capacity; that's what we don't have right now.  In fact, True AI will prove, once and for all, that there is nothing special about the Homo sapiens brain.  Actually, more than that: True AI will allow us to replace said brain---or at least its use---with much, much better products.

At the moment, that is what AI is: the application of existing models, taking advantage of more powerful and cheaper computing resources.  AI is, much like most IT trends, a marketing term.  A fad.  The underlying algorithms are not new, but they are now more practical.

Because computers can do the grunt work of calculating moves much better than any human, it's just a matter of time before we have a chess program that a human can never defeat.  To me that is no different from the fact that a computer or a calculator can multiply big numbers faster and more accurately than people.

The human brain is interesting.  But there is nothing it can do that other mammalian brains can't or don't.  To me, the difference comes down to degree.  I don't know if AI will be able to model it - things like intuition, anger, etc. might just be complex interactions of neurons and chemical signals.  Maybe something more.  If there is something more, that means there is something we don't know.  Something new to interrogate and learn.
"I freed a thousand slaves.  I could have freed a thousand more if only they knew they were slaves."

Harriet Tubman

Offline Nefertiti

  • Moderator
  • Enigma
  • *
  • Posts: 11378
  • Reputation: 26106
  • Shoo Be Doo Be Doo Oop
Re: AI ‘vastly more risky than North Korea’
« Reply #48 on: August 30, 2017, 05:29:51 AM »
I am more with Empy on this - the progress to singularity - just not the scare. On balance, the opportunities outweigh the risks. I mean, the present civilization has so much trouble - disease, poverty, inequality, conflict, climate change, etc. - with no solution in sight until AI showed up. I also buy into the chance that we're already in a matrix. Religiously speaking, isn't that what the "world" is - God's matrix?

The singularity is coming, big. Phobics shudder, but I see many solutions:

-End of disease after cracking the human genome
-End of poverty after unending capitalism
-Longevity - end of aging
-Telepathy & universal polyglotry & animal speech
-Intergalactic (space) travel - Elon Musk stuff
-Time (realm) travel
-End of death after discovery of Kadame's "software". This is a polymorphic state where you can switch state (looks, age, race, sex, etc.) because you're a subhuman implanted with chips. In this case you're actually a robot and have nothing to fear but yourself.

♫♫ They say all good boys go to heaven... but bad boys bring heaven to you ~ song by Julia Michaels

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #49 on: August 30, 2017, 05:49:34 AM »
I am more with Empy on this - the progress to singularity - just not the scare. On balance, the opportunities outweigh the risks. I mean, the present civilization has so much trouble - disease, poverty, inequality, conflict, climate change, etc. - with no solution in sight until AI showed up. I also buy into the chance that we're already in a matrix. Religiously speaking, isn't that what the "world" is - God's matrix?

The singularity is coming, big. Phobics shudder, but I see many solutions:

-End of disease after cracking the human genome
-End of poverty after unending capitalism
-Longevity - end of aging
-Telepathy & universal polyglotry & animal speech
-Intergalactic (space) travel - Elon Musk stuff
-Time (realm) travel
-End of death after discovery of Kadame's "software". This is a polymorphic state where you can switch state (looks, age, race, sex, etc.) because you're a subhuman implanted with chips. In this case you're actually a robot and have nothing to fear but yourself.

I'm on that side too, but only in that I believe in real AI ... not what is currently being peddled around.  This Singularity thing, whatever it is, doesn't bother me in the least bit---primarily because I think the species Homo sapiens has both reached its limits and outlived its usefulness.  In the short term, there might be some value in upgrading it to HS v. 1.5 and then to HS v. 2.0, but that's about it.

Empedocles, Veritas, and Your True Friend ...  have already  had numerous exchanges on this point.   See, for example, this thread: http://www.nipate.org/index.php?topic=3508.msg24941#msg24941
MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.