AI ‘vastly more risky than North Korea’
Elon Musk has warned again about the dangers of artificial intelligence, saying that it poses “vastly more risk” than North Korea’s apparent nuclear capabilities do.
The Tesla and SpaceX chief executive took to Twitter to reiterate the need for concern around the development of AI, following the victory of a Musk-backed AI over professional players of the Dota 2 online multiplayer battle game.
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc
— Elon Musk (@elonmusk) August 12, 2017
This is not the first time Musk has stated that AI could potentially be one of the most dangerous international developments. He said in October 2014 that he considered it humanity’s “biggest existential threat”, a view he has repeated several times while making investments in AI startups and organisations, including OpenAI, to “keep an eye on what’s going on”.
Musk again called for regulation, previously doing so directly to US governors at their annual national meeting in Providence, Rhode Island.
Robina, was this necessary?
I bet you think he's cute.
Far-fetched. What exactly is there to fear from AI? AI boils down to massive number crunching, nothing more. How is that dangerous?
I think he's phobic / irrational - did you see the spat with Zuckerberg? Could be his rough childhood. Robots are just code and bots in human shape. The desktop in your SOHO is just slightly dumber and formless.
I haven't seen the spat. Did Musk have a rough childhood? I imagined he would have been privileged growing up in apartheid South Africa.
He claims to have had a rough time of it: that he got bullied at some fancy boarding school or something like that. :D
Who in tech doesn't like Elon?? Besides veri. How would someone not like him??
Real life Tony Stark... lovable rogue...
Why are you mocking me?
Empedocles, seriously? That is not what the singularity means. Of course phobics like you and Elon see ghosts everywhere. The rest of us see solutions to big problems still facing humanity - diseases, poverty, conflict, climate change, etc. How about fixing those with the "intelligence explosion"? The world economy needs to triple first, and scientific knowledge to quadruple, so folks can stop dying of hunger and cancer. Why do you only see trouble? Why doom? Why no solutions?
Robots will fill the productivity gap - and provide enough resources - so people can work for fun, not survival. And Elon himself has said it - robot companies will simply pay a heavy tax bill for welfare. No "apocalypse" or such crap. That's phobia.
Robots filling the productivity gap and replacing PAYE humans is a serious structural problem for the current economic model as it stands.
Add to that the fact that we need to work for a living, leading to the need for a universal basic income, which in itself kills all capitalist philosophies as we know them.
And no, I am not anti-development; however, the speed of transition is worrying.
You can look at it a different way. The coming economy will be based on us providing uniquely human products and services: creativity and spirituality. Meeting human needs that machines cannot fulfill. I'm all for letting machines do all the mathematical, logistical, menial work they can, as far as possible. But unique music, stories, literature, a sense of belonging, beauty, dealing with painful or difficult emotions, etc. - those things will need human creativity. So I assume a lot of that will become more and more valuable, and completely new careers will be created as we go forth.
That is what phobia means - unfounded fear. It did not stop abolition, the industrial revolution, nor the digital revolution. Scuttling capitalism is a footnote compared to the big opportunities. Economists agree we have a productivity gap in the world; and scientists will tell you we are eons from conventional research solving serious medical challenges. Robots and the singularity are a perfect fix for these.
I did not say anything about sex. Even today you can get a "human replacement" for that rather easily.
That's all ok, however where do you think Pleasure bots come in? Or better still, where will their services end and human services begin? They too will compete for those unique human "products" like emotional comfort and sexual gratification.
Lol, I know. It's those intangible things like being wanted by another (human) that make me less panicky about this "potential". But Bryan was worried about "pleasure" bots: since the beginning, a human hasn't needed anything but themselves for mere pleasure. Another person provides a completely different experience, meets all sorts of intangible, inexpressible needs besides the pure mechanics of pleasure. Totally agree. :D
Kadame, I do not know about that. Complement, maybe, but replace? I do not think so. The workers on Koinange Street, unlike factory workers, are worried about a lot of things, but not about being replaced by robots.
Unfortunately (or fortunately :D), they're already working on that, so in the future we can expect AI sexbots which will be indistinguishable from real people.
The race to build the world’s first sex robot (https://www.theguardian.com/technology/2017/apr/27/race-to-build-world-first-sex-robot)
https://realbotix.systems/
And as the technology grows exponentially, we'll soon have robot wives/husbands. (https://pics.filmaffinity.com/ex_machina-368494509-large.jpg)
Empedocles, that Ted Talk you linked to was very interesting. But I did not get why he thinks instilling our values in those things is a long-term solution to the problems he sees with ASI. If the danger is that it will be so smart that our fate will be entirely dependent on it, in the same way chimps and all other animals are dependent on our preferences, how is attempting to rig the system in our favour by making it essentially into our superhero going to work? Whatever we put in it, if we are smart enough to figure it out, so could it, and so I don't get why we would expect it to stay bound to the conscience we embed in it?
I guess this is the advantage of believing in a benevolent Super Intelligence in charge of everything already :D :D :D I just can't bring myself to panic over these scenarios.
Zigackly!
I truly believe we wouldn't be able to control an ASI. Think again of an IQ of 10'000 and growing. Whatever we could think of, it's already done it, vastly faster and it would be always ahead of us.
About the benevolent Super Intelligence, we could already be living in a simulated world, kind of like the Matrix. Interesting theory. 8)
I'm taking it as it comes and like you, not losing any sleep over it. :D
It's quite easy to break down what Musk's warning us about: the technological singularity.
The reason the word "Singularity" was coined for AI is exactly the same reason we use it for black holes: our mathematics breaks down, and we cannot know what happens beyond the event horizon.
With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.
Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.
In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscious" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing is to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks, like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".
As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".
In other words, we most probably have no frigging clue of what's going on.
Our last invention.
And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI. :D
Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)
I think AI has the potential to be dangerous if it's weaponized, but having said that, it's still a hunk of manmade junk, subject to viruses, hacking, glitches, download drama, updates, climate problems, waterproofing issues, magnetic problems, battery/recharge problems, and the million and one things that always go wrong with tech gadgets.
I'd also like to stress again: the power grid at present can't sustain the kind of dangers that are constantly preached about AI. The world isn't a Death Star and doesn't have the natural resources or infrastructure to sustain an AI shakedown of humanity.
A classic example is nature. Human beings think they conquer nature, but nature is also a design that's been around for billions of years longer than humanity has existed, and it has its own secrets/codes and weaponized forms that are bound to frustrate manmade infrastructure in all its forms. I mean, look at the majestic buildings of yesteryear, built by monumental human feats - rubble over time.
Do you think a TV without maintenance will last longer than a patch of weed?
The singularity idea, while it seems compelling on the surface, is still far-fetched in practice. Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function. What I haven't seen is ML that actually decides what the criteria are in the first place. That part is always a human decision.
So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc. But we don't know how to write a specification that would allow the creation of an application that can actually generate new and superior criteria beyond what it is fed. In other words, one that can decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation. While that may change, it's not here yet.
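To make that point concrete, here is a minimal sketch in plain Python (a toy example, not any particular library's API): a human fixes the criterion, here a quadratic loss on a one-parameter model, and gradient descent merely minimizes it. The loss shape, learning rate, and data are all illustrative choices made by the programmer, which is exactly the human decision being described.

```python
# Current ML in miniature: the human chooses the criterion (the loss),
# the algorithm only minimizes it. It never invents a new criterion.

def loss(w, data):
    """Human-chosen criterion: mean squared error of the model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    """Gradient of the loss with respect to the single parameter w."""
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def fit(data, lr=0.1, steps=200):
    """Gradient descent: mechanically follows the criterion it was given."""
    w = 0.0
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data: y = 2x
w = fit(data)
print(round(w, 3))  # converges toward 2.0
```

Swapping in a different loss function, or different data, is entirely a human decision; nothing in the optimization loop questions or rewrites the criterion itself.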
That's linear thinking.
AI is exponential.
With AI, we're entering completely uncharted territory, the likes of which have never been seen on our planet, with exponentially increasing AI intelligence. I again point to the quote by Herr Clarke: "Any sufficiently advanced technology is indistinguishable from magic".
Humanity is gonna see magic.
That's just software.
AI at present is quackery. A revolution takes time and is felt moments before it lurches and changes the world. I don't feel it yet. Not at all.
That's the crux of what they claim: essentially, that we are in danger of creating God. :D After we create God, we will no longer be its god. That's what I got from that Ted Talk. But methinks if we can ever come to do that, then we can also enhance our own intelligence or life.
Surely we must be able to build into it a kill switch of sorts. We are their god after all, no?
Honestly, you sound like religious zealots. Thoroughly brainwashed by the hope of AI saving souls. It's misguided nerdism.
I don't worship it. I'm just unafraid of it, as this dangerous potential being fretted about hasn't been demonstrated to me yet. If it comes to pass and destroys us, well then, nothing lasts forever :D Besides, we likely will have destroyed ourselves long before we create a truly autonomous superintelligence.
(https://geekreply.com/wp-content/uploads/2015/02/artificial-intelligence-religion-not-good.jpg)
I feel threatened. Like if I say something in defense of science and practicalities, I'll be crucified by AI worshipers.
It's quite easy to break down what Musk's warning us about: the technological singularity.
The reason the word "Singularity" was coined for AI is exactly the same reason we use it for Black Holes, where our mathematics breaks down and we have absolutely no idea and can't know what happens beyond the event horizon of a black hole.
With AI, the same thing is facing us. We have no idea nor can we even guess what happens after. We have zero reference points to call upon nor can we even begin to calculate what will/may happen.
Given: an AI is switched on and immediately starts improving itself exponentially. Hits an IQ of, say, 10'000 within a week of going "live" (i.e. what is called the Intelligence Explosion (https://en.wikipedia.org/wiki/Intelligence_explosion)). How will we mere mortals even begin to understand what it's thinking? What are its morals (if it even has morals)? What are its aspirations? What values will it have? How will it perceive humanity? And so on.
In other words, we're rushing headlong into completely uncharted and unknowable territory. Let no one fool you. We have no idea of the capabilities that a "conscience" AI may entail. None whatsoever. A good starting point to try and understand what we'll be facing to read Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies" (https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742) or watch one of his Ted Talks like this one: "What happens when our computers get smarter than we are? (https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are)".
As Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic".
In other words, we most probably have no frigging clue of what's going on.
Our last invention.
And for those who find the robot "hot", just wait a couple more years, if we don't become batteries for the AI. :D
Dawn of the sexbots (https://www.cnet.com/news/abyss-creations-ai-sex-robots-headed-to-your-bed-and-heart/)
The singularity idea, while compelling on the surface, is still far-fetched in practice. Machine learning (ML), at least currently, uses predetermined criteria in order to minimize a loss function. What I haven't seen is ML that actually decides what the criteria are in the first place. That part is always a human decision.
So we can create an application that becomes very good, much better than humans, at character recognition, facial recognition, self-driving cars, etc. But we don't know how to write a specification for an application that can generate new criteria superior to what it is fed; in other words, one that could decide that instead of relying on relativity to calibrate GPS, it would rely on a new theory of its own creation. While that may change, it's not here yet.
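The point above can be made concrete with a toy sketch (a hypothetical example, not any particular library's API): gradient descent minimizes a loss function, but the loss itself, squared error here, is chosen by the human, not the machine.

```python
# Minimal sketch: gradient descent minimizing a human-chosen loss.
# The machine only optimizes the criterion it is given; picking the
# criterion (squared error, in this toy) is our decision, not the ML's.

def grad_descent(xs, ys, lr=0.01, steps=1000):
    """Fit y ~ w*x by minimizing mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

w = grad_descent([1, 2, 3, 4], [2, 4, 6, 8])
print(round(w, 3))  # converges toward 2.0, the slope hidden in the data
```

Nothing in that loop could decide that a different loss, or a different model, was the right one to use; that framing is fixed before the first iteration runs.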
Agreed.
Yet the strides being made as we discuss this are staggering, and they grow with each passing day. We're almost at the tipping point (I'd say 5-10 years at the most). Here's a gif to help visualize exactly how exponential growth works:
(https://waitbutwhy.com/wp-content/uploads/2015/01/gif)
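The same intuition as the gif, in a few lines (a toy illustration, with arbitrary numbers): a quantity that doubles every step looks unremarkable at first, then overtakes steady linear progress almost suddenly.

```python
# Exponential vs. linear growth: doubling looks slow at first,
# then blows past steady linear gains almost "suddenly".

linear, exponential = 0, 1
for step in range(1, 11):
    linear += 100          # steady gain of 100 per step
    exponential *= 2       # doubling each step
    print(step, linear, exponential)

# By step 10 the doubling curve (1024) has passed ten steps of
# linear gains (1000), and the gap only widens from there.
```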
One thing we always seem to forget is that almost all major innovations came from the fringes, not big companies or groups of scientists working together but from one guy who had that insight to see what everyone else missed. I wouldn't be at all surprised if AI comes from some 400 pound computer junky working alone in his underwear in his parent's basement. As an example: when DVD's first came out, they had an encryption to prevent copying, which the major studios had spent millions developing. It was cracked shortly afterwards by some teenager (with a little help from a couple of others) living at his parents.
At the moment though, AI is just massive number crunching.
It's all pattern recognition, with a prediction element thrown in if necessary. In fact, some people who used to work in pattern recognition, without associating it with AI/ML, now make big bucks doing the same thing under the AI/ML labels.
Still, one could take the view that the brain too is just a massive number-cruncher, and all that is needed to "create" a brain is a sufficiently fast computer: there are only so many neurons, connected in only so many ways ... therefore only so many possible states ... and so for a particular task or function one just has to find which set of states is optimal.
I tend to have that view. It's very efficient at pattern recognition. As for modelling new relations between patterns in phenomena that might otherwise seem unrelated, I think AI still has its work cut out on this front. Our brain is less efficient, though, at explicit number crunching; a chimp's brain might actually be better at that task.
The other thing it seems to be good at is certain types of optimization, especially tasks that require a great deal of local optimization. But, again, what is "AI" there is far from clear. People have long laboured on optimization problems, and successfully too, without considering their work to be AI. Has the field of AI come up with anything fundamentally new in optimization techniques? It seems that quite a bit of current AI is just the application of well-known techniques in new ways. Artificial? Yes, in that it is not a creation of nature. Intelligent? Yes; good applications seem to require some intelligence. But is that it?
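To illustrate how old these "AI" building blocks are, here is hill climbing, a classic local-optimization routine that long predates the AI/ML labels (a toy sketch with a made-up objective):

```python
# Classic local optimization (hill climbing): repeatedly take the
# neighboring step that improves the score, stop at a local optimum.

def hill_climb(f, x, step=0.1, iters=1000):
    for _ in range(iters):
        # Look one step left and right; move to whichever scores highest.
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbor improves: local optimum reached
        x = best
    return x

# Maximize f(x) = -(x - 3)^2, whose peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 3))  # climbs from 0 up to roughly 3.0
```

Whether running such a loop counts as "intelligence" is exactly the question the comment above is raising.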
I was just reflecting on much-touted ideas such as that AI would prove itself once, say, a computer could beat a grandmaster at chess. Now, a chess grandmaster can see only 25-30 moves ahead at the very best, and chess at that level is a timed game. So any computer that is fast enough ought to be able to beat a grandmaster simply by running through the tree of possible moves. To the extent that specialized chess computers did a good job at the task, straightforward tree-pruning may have helped, but the key was that specialization meant speed. A surprising number of AI "successes" seem to be of that just-a-matter-of-technological-time flavor.
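The "run through the tree of possible moves" idea is minimax search. A minimal sketch on a hypothetical toy tree (real chess engines add alpha-beta pruning and fast specialized evaluation, but the principle is the same tree walk, just enormously faster):

```python
# Minimax over a tiny hand-built game tree. Leaves are numeric
# evaluations; interior nodes are lists of children. Players alternate:
# the maximizer picks the best child, the minimizer the worst (for us).

def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node  # leaf: return its evaluation
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: maximizer to move, two replies each with two counter-replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # picks the branch whose worst case is best: 3
```

At chess scale the tree is astronomically larger, which is why raw speed (and pruning) decided the matter rather than any new kind of "thinking".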
It is certainly possible that the human brain is not very good at explicit number crunching, but perhaps one could still model it as though it were. I believe that True AI is coming and that it will be based primarily on a very good model, combined with really fast computing and serious storage capacity; that's what we don't have right now. In fact, True AI will prove, once and for all, that there is nothing special about the Homo sapiens brain. Actually, more than that: True AI will allow us to replace said brain, or at least its use, with much, much better products.
I am more with Empy on this: the progress to singularity, just not the scare. On balance, the opportunities outweigh the risks. The present civilization has so much trouble - disease, poverty, inequality, conflict, climate change, etc. - with no solution in sight until AI showed up. I also buy into the chance that we're already in a matrix. Religiously speaking, isn't that what the "world" is - God's matrix?
The singularity is coming, big time. Phobics shudder, but I see many solutions:
-End of disease after cracking the human genome
-End of poverty after unending capitalism
-Longevity -end of aging
-Telepathy, universal polyglotism & animal speech
-Intergalactic (space) travel -Elon Musk stuff
-Time (realm) travel
-End of death after discovery of Kadame's "software". This is a polymorphic state where you can switch state (looks, age, race, sex, etc.) because you're a subhuman implanted with chips. In this case you're actually a robot and have nothing to fear but yourself.