Author Topic: AI ‘vastly more risky than North Korea’  (Read 10614 times)

Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #20 on: August 16, 2017, 02:59:03 PM »
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing those with the "intelligence explosion"? The world economy needs to triple first, and scientific knowledge to quadruple, so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resources - so people can work for fun, not survival. And Elon himself has said it - robot companies will simply pay a heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem of robots filling the productivity gap and replacing PAYE humans is a serious structural one for the current economic model as it is.

Add to that the fact that we need to work for a living, which leads to the need for a universal basic income - and that in itself kills all capitalist philosophies as we know them.

And no, I am not anti-development; however, the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can, as far as possible. But unique music, stories, literature, a sense of belonging, beauty, dealing with painful or difficult emotions - those things will need human creativity. So I assume a lot of that will become more and more valuable, and completely new careers will be created as we go forth.


That's all OK; however, where do you think pleasure bots come in? Or better still, where will their services end and human services begin? They too will compete for those unique human "products" like emotional comfort and sexual gratification.
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #21 on: August 16, 2017, 03:26:48 PM »
The problem of robots filling the productivity gap and replacing PAYE humans is a serious structural one for the current economic model as it is.

The robots will pay tax: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/

MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.

Offline MOON Ki

  • Moderator
  • Enigma
  • *
  • Posts: 2668
  • Reputation: 5780
Re: AI ‘vastly more risky than North Korea’
« Reply #22 on: August 16, 2017, 03:36:12 PM »
That's all OK; however, where do you think pleasure bots come in? Or better still, where will their services end and human services begin? They too will compete for those unique human "products" like emotional comfort and sexual gratification.

On to  the future: https://www.rt.com/viral/349503-vr-porn-event-japan/
MOON Ki  is  Muli Otieno Otiende Njoroge arap Kiprotich
Your True Friend, Brother,  and  Compatriot.

Offline Kichwa

  • Moderator
  • Enigma
  • *
  • Posts: 2886
  • Reputation: 2697
Re: AI ‘vastly more risky than North Korea’
« Reply #23 on: August 16, 2017, 03:46:04 PM »
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing those with the "intelligence explosion"? The world economy needs to triple first, and scientific knowledge to quadruple, so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resources - so people can work for fun, not survival. And Elon himself has said it - robot companies will simply pay a heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem of robots filling the productivity gap and replacing PAYE humans is a serious structural one for the current economic model as it is.

Add to that the fact that we need to work for a living, which leads to the need for a universal basic income - and that in itself kills all capitalist philosophies as we know them.

And no, I am not anti-development; however, the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can, as far as possible. But unique music, stories, literature, a sense of belonging, beauty, dealing with painful or difficult emotions - those things will need human creativity. So I assume a lot of that will become more and more valuable, and completely new careers will be created as we go forth.


That's all OK; however, where do you think pleasure bots come in? Or better still, where will their services end and human services begin? They too will compete for those unique human "products" like emotional comfort and sexual gratification.
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

Kadame, I do not know about that. Complement, maybe, but replace? I do not think so. The workers on Koinange Street, unlike factory workers, are worried about a lot of things - but not about being replaced by robots.
"I have done my job and I will not change anything dead or a live" Malonza

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #24 on: August 16, 2017, 03:58:06 PM »
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing those with the "intelligence explosion"? The world economy needs to triple first, and scientific knowledge to quadruple, so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resources - so people can work for fun, not survival. And Elon himself has said it - robot companies will simply pay a heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.

Yeah, I'm serious.

What do you think singularity means in terms of technological singularity?

And what makes you wrongly assume that I'm against it or as you put it, phobic? I highly recommend you read, if you haven't already, Bostrom's book to see where I'm coming from.

Of course I see the coming of ASI (Artificial Super Intelligence) as a massive step forward for humanity, but, and here I stress the "but", we (even you) have no idea how an ASI will react. No one does!

Heck, even Wozniak once said that we might become pets to an ASI (he did change his mind later, but the point still remains that nobody really knows). It's all guesswork.

That's the message that Musk, along with Bill Gates, Stephen Hawking, and many more, is trying to put across: we simply don't know, and that's what we should be concentrating on (safety) as we move towards ASI.


Thing is, it's unstoppable, and if/when we do create an ASI, that's when we'll really know. Until then, it's just guesswork - kind of like trying to guess what a day-old baby is going to be like when s/he's 20 years old.

Isaac Asimov wrote lots of books and short stories trying to understand how humanity would or could cope with robots (remember his famous three laws, later expanded to include the zeroth law?). Again, the point is that we simply don't know what an ASI would be like, and Asimov's robotic laws are, frankly speaking, a mess (but at least a very good start).

That's all OK; however, where do you think pleasure bots come in? Or better still, where will their services end and human services begin? They too will compete for those unique human "products" like emotional comfort and sexual gratification.

Sexbots are already in "service", pissing off (not Trump style) real prostitutes and making them future recipients of UBI.


Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #25 on: August 16, 2017, 04:07:34 PM »
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

Kadame, I do not know about that. Complement, maybe, but replace? I do not think so. The workers on Koinange Street, unlike factory workers, are worried about a lot of things - but not about being replaced by robots.
Lol, I know. It's those intangible things, like being wanted by another (human), that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: since the beginning, a human hasn't needed anything but themselves for mere pleasure. Another person provides a completely different experience, meeting all sorts of intangible, inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D

Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #26 on: August 16, 2017, 04:17:48 PM »
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks instilling our values in those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour, by essentially making it into our superhero, going to work? Whatever we put in it, if we were smart enough to figure out, it could figure out too - so I don't get why we would expect it to stay bound to the conscience we embed in it.

I guess this is the advantage of believing in a benevolent superintelligence already in charge of everything :D :D :D I just can't bring myself to panic over these scenarios.

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #27 on: August 16, 2017, 04:20:33 PM »
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

Kadame, I do not know about that. Complement, maybe, but replace? I do not think so. The workers on Koinange Street, unlike factory workers, are worried about a lot of things - but not about being replaced by robots.
Lol, I know. It's those intangible things, like being wanted by another (human), that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: since the beginning, a human hasn't needed anything but themselves for mere pleasure. Another person provides a completely different experience, meeting all sorts of intangible, inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D

Unfortunately (or fortunately :D), they're already working on that, so in the future we can expect AI sexbots that will be indistinguishable from real people.



https://realbotix.systems/

And as the technology grows exponentially, we'll soon have robot wives/husbands.



Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #28 on: August 16, 2017, 04:23:31 PM »
OK, we'll see. For me, part of my needs is to know I have been chosen. I don't know how I could buy that with something someone made in a lab. Euuueww.

Offline Kichwa

  • Moderator
  • Enigma
  • *
  • Posts: 2886
  • Reputation: 2697
Re: AI ‘vastly more risky than North Korea’
« Reply #29 on: August 16, 2017, 04:30:27 PM »
Based on what was written in the past, it's amazing how difficult it has been to accurately predict the long-term impact of technology on people's lives.

I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

Kadame, I do not know about that. Complement, maybe, but replace? I do not think so. The workers on Koinange Street, unlike factory workers, are worried about a lot of things - but not about being replaced by robots.
Lol, I know. It's those intangible things, like being wanted by another (human), that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: since the beginning, a human hasn't needed anything but themselves for mere pleasure. Another person provides a completely different experience, meeting all sorts of intangible, inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D

Unfortunately (or fortunately :D), they're already working on that, so in the future we can expect AI sexbots that will be indistinguishable from real people.



https://realbotix.systems/

And as the technology grows exponentially, we'll soon have robot wives/husbands.

"I have done my job and I will not change anything dead or a live" Malonza

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #30 on: August 16, 2017, 04:33:21 PM »
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks instilling our values in those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour, by essentially making it into our superhero, going to work? Whatever we put in it, if we were smart enough to figure out, it could figure out too - so I don't get why we would expect it to stay bound to the conscience we embed in it.

I guess this is the advantage of believing in a benevolent superintelligence already in charge of everything :D :D :D I just can't bring myself to panic over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it would already have done - vastly faster - and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D

Offline bryan275

  • Moderator
  • Enigma
  • *
  • Posts: 1419
  • Reputation: 2581
Re: AI ‘vastly more risky than North Korea’
« Reply #31 on: August 16, 2017, 04:54:31 PM »
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks instilling our values in those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour, by essentially making it into our superhero, going to work? Whatever we put in it, if we were smart enough to figure out, it could figure out too - so I don't get why we would expect it to stay bound to the conscience we embed in it.

I guess this is the advantage of believing in a benevolent superintelligence already in charge of everything :D :D :D I just can't bring myself to panic over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it would already have done - vastly faster - and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D


Surely we must be able to build a kill switch of sorts into it. We are its god, after all, no?

Offline Kim Jong-Un's Pajama Pants

  • Moderator
  • Enigma
  • *
  • Posts: 8784
  • Reputation: 106254
  • An oryctolagus cuniculus is feeding on my couch
Re: AI ‘vastly more risky than North Korea’
« Reply #32 on: August 16, 2017, 05:08:04 PM »
It's quite easy to break down what Musk's warning us about: the technological singularity.

The reason the word "Singularity" was coined for AI is exactly the same reason we use it for black holes: our mathematics breaks down, and we have absolutely no idea - and cannot know - what happens beyond the event horizon of a black hole.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are?".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice. Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function. What I haven't seen is ML that actually decides what the criteria are in the first place. That part is always a human decision.

So we can create an application that becomes very good - much better than humans - at character recognition, facial recognition, self-driving cars, etc. But we don't know how to write a specification that would allow the creation of an application that can actually generate new criteria superior to what it is fed. In other words, one that can decide that, instead of relying on relativity to calibrate GPS, it will rely on a new theory of its own creation. While that may change, it's not here yet.
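To make that distinction concrete, here's a toy sketch (plain Python, hypothetical numbers) of what ML actually does today: gradient descent dutifully minimizes a loss, but the model form and the loss itself - the "criteria" - are chosen by a human up front.

```python
# Toy illustration: the "learning" is just minimizing a human-chosen loss.
# The data, the model form (a line), and the loss below are all human
# decisions; gradient descent only turns the crank.

def predict(w, b, x):
    return w * x + b

def mse_loss(w, b, data):
    # Mean squared error - a criterion a person picked, not the machine.
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Gradients of the MSE with respect to w and b.
        grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / n
        grad_b = sum(2 * (predict(w, b, x) - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 3x + 1; the optimizer recovers roughly that.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))
```

Nothing inside the loop can question whether mean squared error was the right criterion; swapping it for a different one is a human edit to the code.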
"I freed a thousand slaves.  I could have freed a thousand more if only they knew they were slaves."

Harriet Tubman

Offline veritas

  • Enigma
  • *
  • Posts: 3353
  • Reputation: 4790
Re: AI ‘vastly more risky than North Korea’
« Reply #33 on: August 16, 2017, 05:12:27 PM »
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one other things that always go wrong with tech gadgets.

I'd also like to stress again that the present power grid can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design - one that's been around for billions of years longer than humanity and has its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #34 on: August 17, 2017, 10:30:16 AM »
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one other things that always go wrong with tech gadgets.

I'd also like to stress again that the present power grid can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design - one that's been around for billions of years longer than humanity and has its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely new territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #35 on: August 17, 2017, 10:47:10 AM »
It's quite easy to break down what Musk's warning us about: the technological singularity.

The reason the word "Singularity" was coined for AI is exactly the same reason we use it for black holes: our mathematics breaks down, and we have absolutely no idea - and cannot know - what happens beyond the event horizon of a black hole.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are?".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice. Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function. What I haven't seen is ML that actually decides what the criteria are in the first place. That part is always a human decision.

So we can create an application that becomes very good - much better than humans - at character recognition, facial recognition, self-driving cars, etc. But we don't know how to write a specification that would allow the creation of an application that can actually generate new criteria superior to what it is fed. In other words, one that can decide that, instead of relying on relativity to calibrate GPS, it will rely on a new theory of its own creation. While that may change, it's not here yet.

Agreed.

Yet the strides being made as we discuss this right now are staggering, and growing with each passing day. We're almost at the tipping point (I'd say 5-10 years at the most). Here's a gif to help visualize exactly how exponential growth works:
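The same linear-versus-exponential comparison can be reproduced in a few lines of Python (toy numbers): thirty linear steps reach 30, while thirty doublings blow past a billion.

```python
# Linear growth adds a fixed amount per step; exponential growth multiplies.
steps = 30
linear = [n for n in range(steps + 1)]            # 0, 1, 2, ... (+1 per step)
exponential = [2 ** n for n in range(steps + 1)]  # 1, 2, 4, ... (doubles per step)

print(linear[-1])        # 30
print(exponential[-1])   # 1073741824 - past a billion after 30 doublings
```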



One thing we always seem to forget is that almost all major innovations came from the fringes - not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed. I wouldn't be at all surprised if AI comes from some 400-pound computer junkie working alone in his underwear in his parents' basement. As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by a teenager (with a little help from a couple of others) living at his parents'.

Offline veritas

  • Enigma
  • *
  • Posts: 3353
  • Reputation: 4790
Re: AI ‘vastly more risky than North Korea’
« Reply #36 on: August 17, 2017, 02:07:51 PM »
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one other things that always go wrong with tech gadgets.

I'd also like to stress again that the present power grid can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design - one that's been around for billions of years longer than humanity and has its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely new territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.

That's just software.

AI at present is quackery. A revolution takes time and is felt moments before it lurches and changes the world. I don't feel it yet. Not at all.

Offline Empedocles

  • VIP
  • Enigma
  • *
  • Posts: 823
  • Reputation: 15758
Re: AI ‘vastly more risky than North Korea’
« Reply #37 on: August 18, 2017, 11:27:04 AM »
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one other things that always go wrong with tech gadgets.

I'd also like to stress again that the present power grid can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design - one that's been around for billions of years longer than humanity and has its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely new territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.

That's just software.

AI at present is quackery. A revolution takes time and is felt moments before it lurches and changes the world. I don't feel it yet. Not at all.

Maybe. Maybe not.

Who knows?

Maybe we're even right now living in software, a virtual world, like little Pac-Men. :D


Like the old rallying cry of yore - "Get a horse!" - when horseless carriages first started appearing, let's see if AI comes to pass (which I believe it will).

Offline Kadame6

  • Superstar
  • *
  • Posts: 239
  • Reputation: 275
Re: AI ‘vastly more risky than North Korea’
« Reply #38 on: August 18, 2017, 12:06:14 PM »
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks instilling our values in those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour, by essentially making it into our superhero, going to work? Whatever we put in it, if we were smart enough to figure out, it could figure out too - so I don't get why we would expect it to stay bound to the conscience we embed in it.

I guess this is the advantage of believing in a benevolent superintelligence already in charge of everything :D :D :D I just can't bring myself to panic over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it would already have done - vastly faster - and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D


Surely we must be able to build into it a kill switch of sorts.  We are their god after all, no?
That's the crux of what they claim: essentially, that we are in danger of creating God. :D After we create God, we will no longer be its god. That's what I got from that Ted Talk. But methinks if we can ever come to do that, then we can enhance our own intelligence or lives by then.

Granted, I can't help but be influenced by my own worldview that an infinite intelligence already rules the universe: it knows how to program our autonomous, self-guided kind of intelligence and has embedded a conscience in us to rein in this autonomy. But we don't have this software, lol. I just don't feel the need to think we are about to invent it until someone actually does.

For now, I am content to think that the special kind of software that is responsible for our kind of autonomous intelligence, life and conscience is of a kind humans are far from being able to invent entirely on their own. I'm comfortable with that stance. :D Therefore I am completely unafraid of progress. Bring it on! I want to go to the moon on holiday. :D

Offline veritas

  • Enigma
  • *
  • Posts: 3353
  • Reputation: 4790
Re: AI ‘vastly more risky than North Korea’
« Reply #39 on: August 18, 2017, 02:24:10 PM »
Honestly, you sound like religious zealots. Thoroughly brainwashed by the hope of AI saving souls. It's misguided nerdism.



I feel threatened. Like if I say something in defense of science and practicalities, I'll be crucified by AI worshipers.