Nipate

Forum => Kenya Discussion => Topic started by: Nefertiti on August 15, 2017, 09:17:27 AM

Title: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 15, 2017, 09:17:27 AM
(https://static.independent.co.uk/s3fs-public/styles/story_large/public/thumbnails/image/2017/01/11/12/artificial-intelligence-1.jpg)

Quote
AI ‘vastly more risky than North Korea’

Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than North Korea’s apparent nuclear capabilities do.

The Tesla and SpaceX chief executive took to Twitter to once again reiterate the need for concern around the development of AI, following the victory of a bot from the Musk-backed OpenAI over professional players of the online multiplayer battle game Dota 2.

This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organisations, including OpenAI, to “keep an eye on what’s going on”.

Musk again called for regulation, having previously done so directly to US governors at their annual national meeting in Providence, Rhode Island.



https://www.theguardian.com/technology/2017/aug/14/elon-musk-ai-vastly-more-risky-north-korea (https://www.theguardian.com/technology/2017/aug/14/elon-musk-ai-vastly-more-risky-north-korea)
Title: Re: AI ‘vastly more risky than North Korea’
Post by: veritas on August 15, 2017, 04:49:04 PM
(https://static.independent.co.uk/s3fs-public/styles/story_large/public/thumbnails/image/2017/01/11/12/artificial-intelligence-1.jpg)

Robina, was this necessary?

i bet you think he's cute.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 15, 2017, 05:18:47 PM
Far fetched.  What exactly is there to fear from AI?  AI boils down to just massive number crunching.  Nothing more.  How is that dangerous?
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 15, 2017, 06:24:13 PM
(https://static.independent.co.uk/s3fs-public/styles/story_large/public/thumbnails/image/2017/01/11/12/artificial-intelligence-1.jpg)

Robina, was this necessary?

i bet you think he's cute.

Yes veri of course he is very cute. I love robots - am mocking Elon - i don't think robots are risky. Can't get how he's a robophobe.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 15, 2017, 06:27:33 PM
Far fetched.  What exactly is there to fear from AI?  AI boils down to just massive number crunching.  Nothing more.  How is that dangerous?

I think he's phobic / irrational - did you see the spat with Zuckerberg? Could be his rough childhood. Robots are just code & bots in human shape. The desktop in your SOHO is just slightly dumber and formless.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 15, 2017, 06:40:09 PM
Far fetched.  What exactly is there to fear from AI?  AI boils down to just massive number crunching.  Nothing more.  How is that dangerous?

I think he's phobic / irrational - did you see the spat with Zuckerberg? Could be his rough childhood. Robots are just codes & bots in human shape. The desktop in your soho is just slightly dumber and formless.


I haven't seen the spat.  Did Musk have a rough childhood?  I imagined he would have been privileged growing up in apartheid South Africa.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 15, 2017, 07:00:20 PM
Far fetched.  What exactly is there to fear from AI?  AI boils down to just massive number crunching.  Nothing more.  How is that dangerous?

I think he's phobic / irrational - did you see the spat with Zuckerberg? Could be his rough childhood. Robots are just codes & bots in human shape. The desktop in your soho is just slightly dumber and formless.


I haven't seen the spat.  Did Musk have a rough childhood?  I imagined he would have been privileged growing up in apartheid South Africa.

Privileged? Lol, anything but. His old man had issues and he was the weird kid - got it rough at home and at school. Enough to be hospitalized.

He and Zuck differed on the AI apocalypse. Elon uttered invectives.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 15, 2017, 08:58:05 PM
I haven't seen the spat.  Did Musk have a rough childhood?  I imagined he would have been privileged growing up in apartheid South Africa.

He claims to have had a rough life of it---that he got bullied at some fancy boarding school or something like that.     :D
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 15, 2017, 09:03:29 PM
I haven't seen the spat.  Did Musk have a rough childhood?  I imagined he would have been privileged growing up in apartheid South Africa.

He claims to have had  rough life of it---that he got bullied at some fancy boarding school or something like that.     :D

That's what I was thinking.  It's not like he was clobbered or shot at a township school protesting the use of Afrikaans or something.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 15, 2017, 10:51:17 PM
Who in tech doesn't like Elon?? Besides veri. How would someone not like him??
Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 09:24:47 AM
Who in tech doesn't like Elon?? Besides veri. How would someone not like him??

Real life Tony Stark... lovable rogue...
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 16, 2017, 12:06:17 PM
It's quite easy to break down what Musk's warning us about: the technological singularity.

The word "Singularity" was borrowed for AI for exactly the same reason we use it for black holes: past the event horizon of a black hole our mathematics breaks down, and we have absolutely no idea, and cannot know, what happens.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.
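The compounding arithmetic behind that scenario is easy to sketch. Here is a toy back-of-envelope model in Python; the starting level and per-cycle improvement rate are invented numbers purely for illustration:

```python
# Toy model of recursive self-improvement: an AI that raises its own
# capability by a fixed fraction each cycle grows exponentially.
# `start` and `gain` are made-up numbers for illustration only.
def cycles_to_reach(target, start=100.0, gain=0.10):
    """Count the self-improvement cycles needed to grow from `start`
    to `target`, if each cycle multiplies capability by (1 + gain)."""
    cycles = 0
    level = start
    while level < target:
        level *= 1 + gain
        cycles += 1
    return cycles

# From "IQ 100" to "IQ 10'000" at 10% improvement per cycle:
print(cycles_to_reach(10_000))  # 49 cycles
```

The only point of the sketch is that exponential compounding gets from human-level to far beyond it in very few cycles; if each cycle takes a machine hours rather than years, a week is plenty.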

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)

Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 16, 2017, 01:28:28 PM
Who in tech doesn't like Elon?? Besides veri. How would someone not like him??

Real life Tony Stark... lovable rogue...

Why are you mocking me?
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 16, 2017, 01:37:11 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing those with the "intelligence explosion"? The world economy needs to triple, and scientific knowledge to quadruple, so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resources - so people can work for fun, not survival. And Elon himself has said it - robot companies will simply pay a heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.

Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 02:09:46 PM
Who in tech doesn't like Elon?? Besides veri. How would someone not like him??

Real life Tony Stark... lovable rogue...

Why are you mocking me?


Hey hey ...would I do that?  I like Elon too, just not happy that he's working his way into beating the wonderful V8 ICE.

Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 02:12:36 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.



Robots filling the productivity gap and replacing PAYE humans poses a serious structural problem for the current economic model as it stands.

Add to that the fact that we need to work for a living, leading to the need for a universal basic income, which in itself kills all capitalist philosophies as we know them.

And no, I am not anti-development; however, the speed of transition is worrying.



Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 16, 2017, 02:34:01 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.



The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

Add to that the fact that we need to work for a living, leading to the need for a basic universal income which in itself kills all capitalist philosophies as we know them.

And no, i am not anti development, however the speed of transition is worrying.

That is what phobia means - unfounded fear. It did not stop abolition, the industrial revolution, nor the digital revolution. Scuttling capitalism is a footnote compared to the big opportunities. Economists agree we have a productivity gap in the world; and scientists will tell you we are eons from conventional research solving serious medical challenges. Robots and the singularity are a perfect fix for these.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 16, 2017, 02:48:01 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

Add to that the fact that we need to work for a living, leading to the need for a basic universal income which in itself kills all capitalist philosophies as we know them.

And no, i am not anti development, however the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can do as far as possible. But unique music, stories, literature, sense of belonging, beauty, dealing with painful or difficult emotions etc etc...those things will need human creativity. So I assume a lot of that will become more and more valuable and completely new careers will be created as we go forth..
Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 02:49:08 PM


That is what phobia means - unfounded fear. It did not stop the abolition, the industrial revolution nor the digital revolution. Scuttling capitalism is a footnote compared to to the big opportunities. Economists agree we have a productivity gap in the world; and scientists will tell you we are eons from conventional research solving serious medical challenges. Robots and the singularity are a perfect fix for these.


Being robots, they can be built to single-handedly target the problem without decimating the current jobs market.  The productivity issue arises because the jobs and the available workforce are in the wrong places.  Global employment is not at full capacity for the human race to start replacing it fully with bots.  Kenya has high unemployment rates.



Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 02:53:29 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

Add to that the fact that we need to work for a living, leading to the need for a basic universal income which in itself kills all capitalist philosophies as we know them.

And no, i am not anti development, however the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can do as far as possible. But unique music, stories, literature, sense of belonging, beauty, dealing with painful or difficult emotions etc etc...those things will need human creativity. So I assume a lot of that will become more and more valuable and completely new careers will be created as we go forth..


That's all ok, however where do you think pleasure bots come in?  Or better still, where will their services end and human services begin?  They too will compete for those unique human "products" like emotional comfort and sexual gratification.

Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 16, 2017, 02:59:03 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

Add to that the fact that we need to work for a living, leading to the need for a basic universal income which in itself kills all capitalist philosophies as we know them.

And no, i am not anti development, however the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can do as far as possible. But unique music, stories, literature, sense of belonging, beauty, dealing with painful or difficult emotions etc etc...those things will need human creativity. So I assume a lot of that will become more and more valuable and completely new careers will be created as we go forth..


That's all ok, however where do you think think Pleasure bots come in?  Or beter still where will their services end and human services begin?  They too will compete for those unique human "products" like emotional comfort and sexual gratification.
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 16, 2017, 03:26:48 PM
The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

The robots will pay tax: https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/

Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 16, 2017, 03:36:12 PM
That's all ok, however where do you think think Pleasure bots come in?  Or beter still where will their services end and human services begin?  They too will compete for those unique human "products" like emotional comfort and sexual gratification.

On to  the future: https://www.rt.com/viral/349503-vr-porn-event-japan/
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kichwa on August 16, 2017, 03:46:04 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.


The problem with Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it is.   

Add to that the fact that we need to work for a living, leading to the need for a basic universal income which in itself kills all capitalist philosophies as we know them.

And no, i am not anti development, however the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can do as far as possible. But unique music, stories, literature, sense of belonging, beauty, dealing with painful or difficult emotions etc etc...those things will need human creativity. So I assume a lot of that will become more and more valuable and completely new careers will be created as we go forth..


That's all ok, however where do you think think Pleasure bots come in?  Or beter still where will their services end and human services begin?  They too will compete for those unique human "products" like emotional comfort and sexual gratification.
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

 Kadame,  I do not know about that.  Complement maybe, but replace? I do not think so.  The workers on Koinange Street, unlike factory workers, are worried about a lot of things, but not about being replaced by robots.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 16, 2017, 03:58:06 PM
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing that with the "intelligence explosion"? The world economy needs to triple first and scientific knowledge quadripled so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?

Robots will fill the productivity gap - and provide enough resource - so people can work for fun not survival. And Elon himself has said it - robot companies will simply pay heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.

Yeah, I'm serious.

What do you think "singularity" means in the technological sense?

And what makes you wrongly assume that I'm against it or as you put it, phobic? I highly recommend you read, if you haven't already, Bostrom's book to see where I'm coming from.

Of course I see the coming of ASI (Artificial Super Intelligence) as a massive step forward for humanity, but, and here I stress the "but", we (even you) have no idea how an ASI will react. No one does!

Heck, even Wozniak once said that we might become pets to an ASI (https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-will-be-robots-pets) (he did change his mind (https://www.wired.com/2017/04/steve-wozniak-silicon-valleys-nerdiest-legend/) later, but the point still remains that nobody really knows). It's all guesswork.

That's the message that Musk, along with Bill Gates, Stephen Hawking, and many more, is trying to put across: we simply don't know, and that's what we should be concentrating on (safety) as we move towards ASI.


Thing is, it's unstoppable, and if/when we do create an ASI, that's when we'll really know. Until then, it's just guesswork, kind of like trying to guess what a day-old baby is going to be like when s/he's 20 years old.

Isaac Asimov wrote lots of books and short stories trying to understand how humanity would or could cope with robots (remember his famous 3 laws, later expanded to include the zeroth law?). Again, the point is that we simply don't know what an ASI would be like, and Asimov's robotic laws are, frankly speaking, a mess (but a very good start at least).
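For what it's worth, the priority ordering in Asimov's laws ("...except where such orders would conflict with the First Law") can be sketched as a rule cascade over candidate actions. The Action fields below are invented for illustration, not any real robotics API:

```python
# Asimov's four laws as a strict priority ordering over candidate actions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False  # violates the zeroth law
    harms_human: bool = False     # violates the first law
    disobeys_order: bool = False  # violates the second law
    harms_self: bool = False      # violates the third law

def worst_violation(a: Action) -> int:
    """Number of the highest-priority law the action breaks
    (0 is worst), or 4 if it breaks none."""
    if a.harms_humanity: return 0
    if a.harms_human:    return 1
    if a.disobeys_order: return 2
    if a.harms_self:     return 3
    return 4

def choose(options: list) -> Action:
    """Prefer the action whose worst violation is least severe:
    breaking a lower law always beats breaking a higher one."""
    return max(options, key=worst_violation)

# Self-sacrifice (3rd law) beats disobedience (2nd), which beats harm (1st):
best = choose([
    Action("harm bystander", harms_human=True),
    Action("ignore order", disobeys_order=True),
    Action("self-destruct", harms_self=True),
])
print(best.name)  # self-destruct
```

Which also shows why the laws are "a mess": everything hinges on the robot classifying outcomes (harms_human or not?) correctly, and that classification is exactly the hard, unsolved part.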

That's all ok, however where do you think think Pleasure bots come in?  Or beter still where will their services end and human services begin?  They too will compete for those unique human "products" like emotional comfort and sexual gratification.

Sexbots are already in "service", pissing off (not Trump style) real prostitutes and making them future recipients of UBI.

Europe’s first sex robot brothel FORCED OUT of base as prostitutes complain of competition (http://www.express.co.uk/news/world/779841/Sex-robot-brothel-Lumidolls-Barcelona-prostitutes-complain-police)
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 16, 2017, 04:07:34 PM
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

 Kadame,  I do not know about that.  Complement maybe but replace?-I do not think so.  The workers at Koinange street unlike factory workers are worried about a lot of things but not being replaced by robots.
Lol, I know. It's those intangible things like being wanted by another (human) that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: Since the beginning, a human doesn't need anything but themselves for mere pleasure. Another person provides a completely different experience, meets all sorts of intangible inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 16, 2017, 04:17:48 PM
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks imputing our values on those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour by making it essentially into our superhero going to work? Whatever we put in it, if we are smart enough to figure it out, so could it, and so I don't get why we would expect it to stay bound to the conscience we embed in it?

I guess this is the advantage of believing in a benevolent Super Intelligence in charge of everything already :D :D :D I just can't bring myself to panic over these scenarios.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 16, 2017, 04:20:33 PM
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

 Kadame,  I do not know about that.  Complement maybe but replace?-I do not think so.  The workers at Koinange street unlike factory workers are worried about a lot of things but not being replaced by robots.
Lol, I know. It's those intangible things like being wanted by another (human) that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: Since the beginning, a human doesnt need anything but themselves for mere pleasure. Another person provides a completely different experience, meets all sorts of intangible inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D

Unfortunately (or fortunately  :D), they're already working on that, so in the future we can expect AI sexbots which will be indistinguishable from real people.

The race to build the world’s first sex robot (https://www.theguardian.com/technology/2017/apr/27/race-to-build-world-first-sex-robot)

https://realbotix.systems/

And as the technology exponentially grows, we'll soon have robot wives/husbands.

(https://pics.filmaffinity.com/ex_machina-368494509-large.jpg)

Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 16, 2017, 04:23:31 PM
Ok, we'll see. For me, part of my needs is to know I have been chosen. I don't know how I could buy that with something someone made in a lab. Euuueww.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kichwa on August 16, 2017, 04:30:27 PM
Based on what was written in the past, it's amazing how difficult it has been to accurately predict the long-term impact of technology on people's lives.

I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.

 Kadame,  I do not know about that.  Complement maybe but replace?-I do not think so.  The workers at Koinange street unlike factory workers are worried about a lot of things but not being replaced by robots.
Lol, I know. It's those intangible things like being wanted by another (human) that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: Since the beginning, a human doesnt need anything but themselves for mere pleasure. Another person provides a completely different experience, meets all sorts of intangible inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D

Unfortuntaly (or fortunately  :D), they're already working on that so that in the future we can expect AI sexbots which will be indistinguishable from real people.

The race to build the world’s first sex robot (https://www.theguardian.com/technology/2017/apr/27/race-to-build-world-first-sex-robot)

https://realbotix.systems/

And as the technology exponentially grows, we'll soon have robot wives/husbands.

(https://pics.filmaffinity.com/ex_machina-368494509-large.jpg)
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 16, 2017, 04:33:21 PM
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks imputing our values on those things is a long term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour by making it essentially into our superhero going to work? Whatever we put in it, if we are smart enough to figure it out, so could it and so I dont get why we would expect it to stay bound to the conscience we embed in it?

I guess this is the advantage of believing in a benevolent Super intelligence in charge of everything already :D :D :D I just cant bring myself to panick over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it has already done it, vastly faster, and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D
Title: Re: AI ‘vastly more risky than North Korea’
Post by: bryan275 on August 16, 2017, 04:54:31 PM
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks imputing our values on those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour by making it essentially into our superhero going to work? Whatever we put in it, if we are smart enough to figure it out, so could it, and so I don't get why we would expect it to stay bound to the conscience we embed in it?

I guess this is the advantage of believing in a benevolent Super Intelligence in charge of everything already :D :D :D I just can't bring myself to panic over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it's already done it, vastly faster, and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D


Surely we must be able to build into it a kill switch of sorts.  We are their god after all, no?
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 16, 2017, 05:08:04 PM
It's quite easy to break down what Musk's warning us about: the technological singularity.

The reason the word "Singularity" was coined for AI is exactly the same reason we use it for Black Holes, where our mathematics breaks down and we have absolutely no idea and can't know what happens beyond the event horizon of a black hole.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice.  Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function.  What I haven't seen is ML that actually decides what the criteria are in the first place.  That part is always a human decision.

So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc.  But we don't know how to write a specification that would allow the creation of an application that can actually generate new and superior criteria beyond what it is fed.  In other words, one that can decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation.  While that may change, it's not here yet.
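A minimal sketch of that division of labour, in plain Python (a toy linear model and made-up data; no ML library): the criterion being minimized - squared error against human-supplied labels - is entirely a human decision, and all the machine does is crunch numbers until the loss is small.

```python
# Minimal sketch: machine learning as minimizing a human-chosen loss.
# The "criteria" (squared error on labelled points) are fixed by us;
# the machine only tunes the parameters w and b.

def loss(w, b, data):
    # Mean squared error -- a criterion a human picked, not the machine.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def train(data, steps=2000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b.
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
        gb = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

data = [(0, 1), (1, 3), (2, 5), (3, 7)]   # generated by y = 2x + 1
w, b = train(data)
print(round(w, 2), round(b, 2))           # converges to w ≈ 2, b ≈ 1
```

Swap in a different loss and the machine will happily minimize that instead - it never questions the criterion itself, which is the point above.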
Title: Re: AI ‘vastly more risky than North Korea’
Post by: veritas on August 16, 2017, 05:12:27 PM
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one things that always go wrong with tech gadgets.

I'd also like to stress again that the power grid at present can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design that's been around for billions of years longer than humanity, with its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 17, 2017, 10:30:16 AM
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one things that always go wrong with tech gadgets.

I'd also like to stress again that the power grid at present can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design that's been around for billions of years longer than humanity, with its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely uncharted territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 17, 2017, 10:47:10 AM
It's quite easy to break down what Musk's warning us about: the technological singularity.

The reason the word "Singularity" was coined for AI is exactly the same reason we use it for Black Holes, where our mathematics breaks down and we have absolutely no idea and can't know what happens beyond the event horizon of a black hole.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice.  Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function.  What I haven't seen is ML that actually decides what the criteria are in the first place.  That part is always a human decision.

So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc.  But we don't know how to write a specification that would allow the creation of an application that can actually generate new and superior criteria beyond what it is fed.  In other words, one that can decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation.  While that may change, it's not here yet.

Agreed.

Yet the strides being made as we discuss this right now are staggering, and growing with each passing day. We're almost at the tipping point (I'd say 5-10 years at the most). Here's a gif to help visualize exactly how exponential growth works:

(https://waitbutwhy.com/wp-content/uploads/2015/01/gif)
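The gif's point in plain numbers (a toy comparison, not a forecast):

```python
# Toy comparison: linear growth adds a fixed amount each step,
# exponential growth multiplies by a fixed factor each step.
linear, expo = 1, 1
for step in range(1, 11):
    linear += 10      # +10 per step
    expo *= 2         # doubles per step
    print(step, linear, expo)
# By step 10: linear = 101, exponential = 1024 -- and the gap only widens.
```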

One thing we always seem to forget is that almost all major innovations came from the fringes - not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed. I wouldn't be at all surprised if AI comes from some 400-pound computer junkie working alone in his underwear in his parents' basement. As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living at his parents'.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: veritas on August 17, 2017, 02:07:51 PM
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one things that always go wrong with tech gadgets.

I'd also like to stress again that the power grid at present can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design that's been around for billions of years longer than humanity, with its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely uncharted territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.

That's just software.

AI at present is quackery. A revolution takes time and is felt moments before it lurches and changes the world. I don't feel it yet. Not at all.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Empedocles on August 18, 2017, 11:27:04 AM
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and a million and one things that always go wrong with tech gadgets.

I'd also like to stress again that the power grid at present can't sustain the kind of dangers constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.

A classic example is nature. Human beings think they conquer nature, but nature is also a design that's been around for billions of years longer than humanity, with its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.

Do you think a TV without maintenance will last longer than a patch of weed?

That's linear thinking.

AI is exponential.

With AI, we're entering completely uncharted territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".

Humanity is gonna see magic.

That's just software.

AI at present is quackery. A revolution takes time and is felt moments before it lurches and changes the world. I don't feel it yet. Not at all.

Maybe. Maybe not.

Who knows?

Maybe we're even right now living in software, a virtual world, like little Pacmen.  :D


Like the old rallying cry of yore, "Get a Horse" when horseless carriages first started appearing, let's see if AI comes to pass (which I believe it will).
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 18, 2017, 12:06:14 PM
Empedocles, that Ted talk you linked to was very interesting. But I did not get why he thinks imputing our values on those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour by making it essentially into our superhero going to work? Whatever we put in it, if we are smart enough to figure it out, so could it, and so I don't get why we would expect it to stay bound to the conscience we embed in it?

I guess this is the advantage of believing in a benevolent Super Intelligence in charge of everything already :D :D :D I just can't bring myself to panic over these scenarios.

Zigackly!

I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it's already done it, vastly faster, and it would always be ahead of us.

About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory.  8)

I'm taking it as it comes and like you, not losing any sleep over it.  :D


Surely we must be able to build into it a kill switch of sorts.  We are their god after all, no?
That's the crux of what they claim: essentially, that we are in danger of creating God. :D After we create God, we will no longer be its god. That's what I got from that Ted Talk. But methinks if we ever come to do that, then we will be able to enhance our own intelligence and lifespans too.

Granted, I can't help but be influenced by my own world view that an infinite intelligence already rules the universe: it knows how to program our autonomous, self-guided kind of intelligence and has embedded a conscience in us to rein in this autonomy. But we don't have this software, lol. I just don't feel the need to think we are about to invent it until someone actually does so.

For now, I am content to think that the special kind of software that is responsible for our kind of autonomous intelligence, life and conscience is of a kind humans are far from being able to invent entirely on their own. I'm comfortable with that stance. :D Therefore I am completely unafraid of progress. Bring it on! I want to go to the moon on holiday. :D
Title: Re: AI ‘vastly more risky than North Korea’
Post by: veritas on August 18, 2017, 02:24:10 PM
Honestly, you sound like religious zealots. Thoroughly brainwashed by the hope of AI saving souls. It's misguided nerdism.

(https://geekreply.com/wp-content/uploads/2015/02/artificial-intelligence-religion-not-good.jpg)

I feel threatened. Like if I say something in defense of science and practicalities, I'll be crucified by AI worshipers.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kadame6 on August 18, 2017, 02:35:54 PM
Honestly, you sound like religious zealots. Thoroughly brainwashed by the hope of AI saving souls. It's misguided nerdism.

(https://geekreply.com/wp-content/uploads/2015/02/artificial-intelligence-religion-not-good.jpg)

I feel threatened. Like if I say something in defense of science and practicalities, I'll be crucified by AI worshipers.
I don't worship it. I'm just unafraid of it, as this dangerous potential being fretted about hasn't been demonstrated to me yet. If it comes to pass and destroys us, well then, nothing lasts forever :D Besides, we likely will have destroyed ourselves long before we create a truly autonomous super intelligence.

More seriously though, inventing such a program would mean we had completely cracked the code of human intelligence, so we could reprogram our own to augment just as well as that AI. We would also likely have found a way to keep from dying. This is all too sci-fi-ish for me at the moment. I just don't buy that we are there, or that we could prevent such a thing if we were headed that way anyway. :D
Title: Re: AI ‘vastly more risky than North Korea’
Post by: veritas on August 18, 2017, 03:30:13 PM
The science, like cognitive science, isn't there yet; there's just no way AI will become some conscious entity anytime soon.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 18, 2017, 04:42:36 PM
It's quite easy to break down what Musk's warning us about: the technological singularity.

The reason the word "Singularity" was coined for AI is exactly the same reason we use it for Black Holes, where our mathematics breaks down and we have absolutely no idea and can't know what happens beyond the event horizon of a black hole.

With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.

Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.

In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".

As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".

In other words, we most probably have no frigging clue of what's going on.

Our last invention.

And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI.  :D

Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)



The singularity idea, while it seems compelling on the surface, is still far-fetched in practice.  Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function.  What I haven't seen is ML that actually decides what the criteria are in the first place.  That part is always a human decision.

So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc.  But we don't know how to write a specification that would allow the creation of an application that can actually generate new and superior criteria beyond what it is fed.  In other words, one that can decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation.  While that may change, it's not here yet.

Agreed.

Yet the strides being made as we discuss this right now are staggering, and growing with each passing day. We're almost at the tipping point (I'd say 5-10 years at the most). Here's a gif to help visualize exactly how exponential growth works:

(https://waitbutwhy.com/wp-content/uploads/2015/01/gif)

One thing we always seem to forget is that almost all major innovations came from the fringes - not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed. I wouldn't be at all surprised if AI comes from some 400-pound computer junkie working alone in his underwear in his parents' basement. As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living at his parents'.

The 400-pounder cannot be discounted.  In fact, most mainstream academics rarely discover anything that rocks the boat on which their careers rest.

In 2014 IBM created a processor with as many digital neurons as a rat's brain.

(https://qzprod.files.wordpress.com/2015/08/unnamed-1.jpg?quality=80&strip=all&w=640)
processor with as many digital neurons as a rat's brain

https://qz.com/481164/ibm-has-built-a-digital-rat-brain-that-could-power-tomorrows-smartphones/

This is what current mainstream technology allows.  A 400-pounder in the basement (or DARPA) could come up with a principle to represent the activities of neurons in the brain in a much smaller space, and a way to model emergent features of neural activity like personality, creativity, etc.  I don't rule that out.  At the moment though, AI is just massive number crunching.
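For a flavor of what "representing the activities of neurons" means in practice, the usual textbook starting point is a leaky integrate-and-fire unit. A sketch with unitless toy constants (chosen purely for illustration, not biological accuracy):

```python
# Sketch of a leaky integrate-and-fire neuron, the classic cheap stand-in
# for biological neuron activity: the potential leaks toward rest,
# integrates incoming current, spikes at a threshold, then resets.

def simulate(inputs, threshold=1.0, leak=0.9, reset=0.0):
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = leak * v + i          # decay toward 0, then add input current
        if v >= threshold:        # membrane potential crosses threshold
            spikes.append(t)      # record a spike...
            v = reset             # ...and reset the potential
    return spikes

# Steady sub-threshold input still produces periodic spikes.
print(simulate([0.5] * 10))       # -> [2, 5, 8]
```

Chips like IBM's pack millions of digital units of roughly this flavor; the open question in the post above is how to get emergent features like personality out of them.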
Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 18, 2017, 05:33:16 PM
One thing we always seem to forget is that almost all major innovations came from the fringes - not from big companies or groups of scientists working together, but from one guy who had the insight to see what everyone else missed.

I don't know about that.  What major innovations did you have in mind?

Quote
As an example: when DVDs first came out, they had an encryption scheme to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living at his parents'.

Is that really a good example of a major innovation ... or even a minor one?  The code (CSS) that the kid "broke" was a rather poor encryption system that, moreover, used quite small keys.  And the reason for the small keys was not that it did not occur to anyone to use larger keys; it had to do with US legal restrictions.  In fact, at the time (1999-2000) that the kid "broke" it, a good home laptop could break it by brute force in less than a day - although apparently nobody had considered that a worthwhile task - and today one can break it in fractions of a second.

In any case, as far as I can tell, far from the idea that has been promoted - that the kid did something fiendishly clever - all he did was somehow get his hands on the CSS decryption code (and keys) and then essentially modify the code for free distribution on different OS platforms.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 18, 2017, 05:52:51 PM
At the moment though, AI is just massive number crunching.

It's all pattern recognition, with a prediction element thrown in if necessary.   In fact, some people who used to work in pattern recognition, without associating it with AI/ML, now make big bucks doing the same thing under the AI/ML labels.

Still, one could take the view that the brain too is just a massive number-cruncher, and all that is needed to "create" a brain is a sufficiently fast computer: there are only so many neurons, connected in only so many ways ... therefore only so many possible states ... and so for a particular task or function one just has to find which set of states is optimal; to explain why a human behaves or will behave in certain ways, one simply looks for certain "hard-wired" states---the sort of stuff psychopaths plead in courts etc.---in the brain (the "initial states" in computer-geek's finite-state machine), adds the current states, ... and voila!; and so on, and so forth. Real AI is coming.
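On a toy scale, that only-so-many-states view can be taken literally: with n binary "neurons" there are only 2**n states, so one can enumerate them all and keep whichever is optimal for the task. A sketch (the scoring function is made up; the point is the combinatorics, not biology):

```python
from itertools import product

# The finite-state view of a brain, taken literally on a toy scale:
# enumerate every possible state of N binary neurons and keep the one
# that scores best on some task.

N = 10  # ten neurons -> 2**10 = 1024 states to check

def score(state):
    # Arbitrary toy criterion: reward alternating activity.
    return sum(1 for a, b in zip(state, state[1:]) if a != b)

best = max(product([0, 1], repeat=N), key=score)
print(best, score(best))  # an alternating pattern wins: all 9 adjacent pairs differ
```

With ten units the search is instant; with the billions of neurons in a real brain the same enumeration is hopeless, which is why the "sufficiently fast computer" is doing all the work in this view.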
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 18, 2017, 06:08:37 PM
At the moment though, AI is just massive number crunching.

It's all pattern recognition, with a prediction element thrown in if necessary.   In fact, some people who used to work in pattern recognition, without associating it with AI/ML, now make big bucks doing the same thing under the AI/ML labels.

Still, one could take the view that the brain too is just a massive number-cruncher, and all that is needed to "create" a brain is a sufficiently fast computer: there are only so many neurons, connected in only so many ways ... therefore only so many possible states ... and so for a particular task or function one just has to find which set of states is optimal.   

I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on this front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 18, 2017, 11:29:00 PM
I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on this front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.

The other thing it seems to be good at is certain types of optimization, especially in tasks that require a great deal of local optimization.  But, again, what is AI there is far from clear.  People have long laboured on optimization problems - and successfully too - without considering their work to be AI.  Has the field of AI come up with anything fundamentally new in optimization techniques?  It seems that quite a bit of current AI is just the application of well-known techniques in new ways.  Artificial?  Yes, in that it is not a creation of nature.  Intelligent?  Yes; good applications seem to require some intelligence.  But is that it?

I was just reflecting on much-touted ideas such as that AI would prove itself once, say, a computer could beat a grandmaster at chess.  Now, a chess grandmaster can see only 25-30 moves ahead, at the very best, and chess at that level is a timed game.  So any computer that is fast enough ought to be able to beat a grandmaster by simply running through the tree of possible moves, one at a time; to the extent that there have been specialized computers that did a good job at the task, straightforward tree-pruning might have helped, but the key was that specialization meant speed.  A surprising number of AI "successes" seem to be of that just-a-matter-of-technological-time flavor.
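The "running through the tree of possible moves" idea, with the tree-pruning mentioned, fits in a few lines. A sketch over a toy game tree (not a chess engine; leaves are made-up payoffs):

```python
# Minimal minimax with alpha-beta pruning over a toy game tree.
# Leaves are numeric payoffs; internal nodes are lists of children.
# Speed, not insight, is what lets this look many moves ahead.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: a payoff
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        val = minimax(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:                # prune: the opponent won't allow this line
            break
    return best

# Two-ply toy tree: maximizer picks a branch, minimizer then picks a leaf.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))   # -> 3: the minimizer holds each branch to its minimum
```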

It is certainly possible that the human brain is not very good at explicit number crunching, but perhaps one could still model it as though it were.  I believe that True AI is coming and that it will be based primarily on a very good model, combined with really fast computing and serious storage capacity; that's what we don't have right now.  In fact, True AI will prove, once and for all, that there is nothing special about the Homo sapiens brain.  Actually, more than that: True AI will allow us to replace said brain - or at least its use - with much, much better products.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Kim Jong-Un's Pajama Pants on August 19, 2017, 06:45:00 PM
I tend to have that view.  It's very efficient at pattern recognition.  And at modelling new relations between patterns in phenomena that might otherwise be unrelated - I think AI still has its work cut out on this front.  Our brain is less efficient, though, at explicit number crunching - a chimp's brain might actually be better at that task.

The other thing it seems to be good at is certain types of optimization, especially in tasks that require a great deal of local optimization.  But, again, what is AI there is far from clear.  People have long laboured on optimization problems - and successfully too - without considering their work to be AI.  Has the field of AI come up with anything fundamentally new in optimization techniques?  It seems that quite a bit of current AI is just the application of well-known techniques in new ways.  Artificial?  Yes, in that it is not a creation of nature.  Intelligent?  Yes; good applications seem to require some intelligence.  But is that it?

I was just reflecting on much-touted ideas such as that AI would prove itself once, say, a computer could beat a grandmaster at chess.  Now, a chess grandmaster can see only 25-30 moves ahead, at the very best, and chess at that level is a timed game.  So any computer that is fast enough ought to be able to beat a grandmaster by simply running through the tree of possible moves, one at a time; to the extent that there have been specialized computers that did a good job at the task, straightforward tree-pruning might have helped, but the key was that specialization meant speed.  A surprising number of AI "successes" seem to be of that just-a-matter-of-technological-time flavor.

It is certainly possible that the human brain is not very good at explicit number crunching, but perhaps one could still model it as though it were.  I believe that True AI is coming and that it will be based primarily on a very good model, combined with really fast computing and serious storage capacity; that's what we don't have right now.  In fact, True AI will prove, once and for all, that there is nothing special about the Homo sapiens brain.  Actually, more than that: True AI will allow us to replace said brain - or at least its use - with much, much better products.

At the moment, that is what AI is: the application of existing models, taking advantage of more powerful and cheaper computing resources. AI is, much like most IT trends, a marketing term. A fad. The underlying algorithms are not new; they are just now more practical.

Because computers can do the grunt work of calculating moves much better than any human, it is just a matter of time before we have a chess program that no human can ever defeat. To me that is no different from the fact that a computer or a calculator can multiply big numbers faster and more accurately than people.

The human brain is interesting. But there is nothing it can do that other mammalian brains can't or don't; to me, the difference comes down to degree. I don't know if AI will be able to model it - things like intuition, anger, etc. might just be complex interactions of neurons and chemical signals. Maybe something more. If there is something more, that means there is something we don't know. Something new to interrogate and learn.
Title: Re: AI ‘vastly more risky than North Korea’
Post by: Nefertiti on August 30, 2017, 05:29:51 AM
I am more with Empy on this - the progress to singularity - just not the scare. In the balance of things, the opportunities outweigh the risks. I mean, the present civilization has so much trouble - disease, poverty, inequality, conflict, climate change, etc. - with no solution in sight until AI showed up. I also buy into the chance that we're already in a matrix. Religiously speaking, isn't that what the "world" is - God's matrix?

The singularity is coming big. Phobics shudder, but I see many solutions:

-End of disease after cracking the human genome
-End of poverty after unending capitalism
-Longevity - end of aging
-Telepathy & universal polyglotry & animal speech
-Intergalactic (space) travel - Elon Musk stuff
-Time (realm) travel
-End of death after discovery of Kadame's "software". This is a polymorphic state where you can switch state (looks, age, race, sex, etc.) because you're a sub-human implanted with chips. In that case you're actually a robot and have nothing to fear but yourself.

Title: Re: AI ‘vastly more risky than North Korea’
Post by: MOON Ki on August 30, 2017, 05:49:34 AM
Quote
I am more with Empy on this - the progress to singularity - just not the scare. In the balance of things, the opportunities outweigh the risks. I mean, the present civilization has so much trouble - disease, poverty, inequality, conflict, climate change, etc. - with no solution in sight until AI showed up. I also buy into the chance that we're already in a matrix. Religiously speaking, isn't that what the "world" is - God's matrix?

The singularity is coming big. Phobics shudder, but I see many solutions:

-End of disease after cracking the human genome
-End of poverty after unending capitalism
-Longevity - end of aging
-Telepathy & universal polyglotry & animal speech
-Intergalactic (space) travel - Elon Musk stuff
-Time (realm) travel
-End of death after discovery of Kadame's "software". This is a polymorphic state where you can switch state (looks, age, race, sex, etc.) because you're a sub-human implanted with chips. In that case you're actually a robot and have nothing to fear but yourself.

I'm on that side too, but only in that I believe in real AI ... not what is currently being peddled around. This Singularity thing, whatever it is, doesn't bother me in the least---primarily because I think the species Homo Sapiens has both reached its limits and outlived its usefulness. In the short term, there might be some value in upgrading it to HS v. 1.5 and then to HS v. 2.0, but that's about it.

Empedocles, Veritas, and Your True Friend ... have already had numerous exchanges on this point. See, for example, this thread: http://www.nipate.org/index.php?topic=3508.msg24941#msg24941