Bill Gates has a warning for humanity: Beware of artificial intelligence in the coming decades, before it's too late. Microsoft's co-founder joins a list of science and industry notables, including famed physicist Stephen Hawking and Internet innovator Elon Musk, in calling out the potential threat from machines that can think for themselves.

Gates shared his thoughts on AI on Wednesday in a Reddit "Ask Me Anything" thread, a Q&A session conducted live on the social news site that has also featured President Barack Obama and World Wide Web founder Tim Berners-Lee.

"I am in the camp that is concerned about super intelligence," Gates said in response to a question about the existential threat posed by AI. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern."

http://www.cnet.com/uk/news/bill-gates-is-worried-about-artificial-intelligence-too/
I do not fear artificial intelligence as much as I fear human de-evolution if we simply start letting machines do everything for us.
I think that's ego. I don't know exactly how my computer works, but I built it myself. So it could be with intelligence and consciousness. We already know that consciousness is an emergent property of the human brain, one that can't be broken down into units the way the brain's physical architecture can. Similarly, we may not know we've produced true AI until after the fact.
It may not happen as soon as some expect (or at all), but when guys like Hawking, Musk and Gates openly discuss their concerns, it certainly lends the possibility some merit.
I really think people give intelligence too much credit if they think it could not be produced artificially. The issue is that an AI wouldn't be constrained by the slow rate of biological evolution, and could improve itself at an exponential, runaway rate.
I don't think they would immediately come to the conclusion "wipe them out" unless they felt threatened (à la Skynet, which nuked the planet to keep itself from being unplugged). Especially if emotion is an inevitable by-product of intelligence (which it might not be). That said, a mechanical brain might just be too hard to produce with modern hardware and theory.
How so? But that said, humans have sociopaths and other social disorders, so perhaps it is possible to have consciousness without emotions. And of course, I am coming at this with a human bias as well. As IP said, it is my ego demanding that all intelligence and consciousness be like my own. Perhaps a Data is possible.
Well, we'd be either competition for resources or a direct nuisance/existential threat. So... I think it would rather quickly go Skynet.
You're not getting what an AI is. You make it... and then it can remake itself. Over and over and over again, improving, revising, expanding itself faster than humans ever could. The first AI will be harmless. But the AIs it produces, and the AIs its AIs produce... And that could all happen in less than a day. Or an hour. Or maybe a minute.
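The compounding claim above can be made concrete with a toy calculation. This is a purely illustrative sketch, not a model of any real system: the starting level, the doubling factor, and the "superintelligence" threshold are all made-up assumptions, chosen only to show how quickly repeated self-improvement cycles cross any fixed bar.

```python
# Toy illustration of the recursive self-improvement argument.
# All numbers here are arbitrary assumptions for illustration only.

def cycles_to_threshold(start=1.0, factor=2.0, threshold=1e6):
    """Count self-revision cycles needed for a capability score that
    multiplies by `factor` each cycle to reach `threshold`."""
    level, cycles = start, 0
    while level < threshold:
        level *= factor
        cycles += 1
    return cycles

print(cycles_to_threshold())  # 20 doublings, since 2**20 > 1e6
```

The point of the sketch: if each cycle is faster than the last (machines revising machines), even a million-fold capability gap is only a few dozen iterations away, which is why the timescale in the argument shrinks from days to minutes.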