AI. What Are The Chances It Will Destroy The Human Race?

Elon Musk believes it is highly likely that artificial intelligence (AI) will become a threat to humans. And that is not just science fiction: a survey of researchers suggests that many of them put the probability of artificial intelligence causing human extinction at 50/50. Bostrom says a virus is unlikely to kill off the last human, but for him and others, AI is a real existential threat. [Sources: 0, 5, 16] 

    

Unfortunately, Musk is not confident that humanity has taken the right protective measures. In the long term, he thinks AI will be a big risk area, and if we end up building a race of super-intelligent robots, we will have no good idea of what is going to happen. [Sources: 4, 20] 

    

A major nuclear war, for example, would probably end civilization and wipe out humanity, he added. But if modern civilization were to collapse, it is not entirely certain that it would re-emerge among the surviving humans. Perhaps in the distant future, a galaxy much like our own will be populated by a race of super-intelligent robots. Or perhaps we will continue to prosper, protected from man-made risks, for as long as the planet itself lasts, and perhaps even longer. [Sources: 10, 14, 21] 

    

He described the stakes as nothing less than the survival of humanity. He argued that the ethical and practical implications of superintelligence should be weighed as seriously as those of thermonuclear war once were. Without going into much detail about what kind of threat AI could pose, he said that AI poses a fundamental risk to the existence of human civilisation. [Sources: 8, 16] 

    

Countries can make progress and take advantage of artificial intelligence and new technologies without sacrificing the important qualities that have defined humanity since the dawn of time, such as human dignity and respect for human rights; no country, regardless of its size or population, has to give these up to benefit from AI. [Sources: 7] 

    

Saying that we do not know how AI will turn out is not the same as saying that we know it is impossible, or that we know it will solve all our problems. One question that has been discussed is whether we can teach AI to respect human ethics. And whether AI could kill us all or turn out to be safe hangs on a more basic question: is AI anything more than automation? [Sources: 6, 18, 19] 

    

Some warn that super-intelligent computers could destroy us even if we try to give them Asimov-style instructions. Shostak, however, does not believe that sophisticated AI will ultimately enslave humanity; instead, humans would simply become immaterial to hyper-intelligent machines. [Sources: 3, 22] 

    

It is not that we will inevitably die out because of artificial intelligence, but once AI truly works, humanity could find itself superfluous in an AI-run society and quickly swept aside regardless of its own will. And it is not a battle between humans and laser-eyed robots that we are referring to here. [Sources: 4, 20] 

    

Some believe that humans will be much better off using advanced AI systems, while others believe that such systems will lead to our inevitable demise. In Hollywood's narratives there is always a way for humans to fight back, but if humanity were ever confronted with a truly superior intelligence, that outcome is implausible. [Sources: 11, 22] 

    

Some thinkers overestimate the likelihood that we will ever have computers as intelligent as humans, and others exaggerate the danger that such computers would pose to humanity. In Kurzweil's narrative, humanity is not wiped out by super-intelligent machines but subsumed into them; in darker tellings, intelligent machines accelerate human extinction. Haldane noted that if civilization collapses and mankind survives, there is a 50/50 chance that fewer than 10% of humans would make it through. [Sources: 11, 12, 13] 

    

The biggest problem Armstrong faces is getting the threat of mass extinction from artificial intelligence taken seriously. Another major problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. [Sources: 2, 6] 

    

If all goes according to plan, artificial intelligence's capabilities will be supercharged in ways that may reshape our thinking about the universe and humanity's place in it. By 2030, it is highly likely that the continued development of AI and related technologies and systems will augment human capacities. And although AI-driven automation could eliminate the need for humans to do much of this work, we will still have to decide what we do with ourselves. [Sources: 1, 9, 15] 

    

We do not yet know whether AI will usher in a golden age of human existence or whether it will end in the destruction of everything people value. [Sources: 4, 22] 

    

The Terminator and Matrix films have long painted a dystopian future in which computers develop superhuman intelligence and destroy humanity, and there are thinkers who believe this type of scenario represents a real danger. A number of scientists and engineers fear that if we build an artificial intelligence smarter than we are, a form known as artificial general intelligence, humanity will be doomed. Many other scientists dispute the cybernetic revolt depicted in science fiction such as "The Matrix," arguing that any computer capable of threatening humanity (or any more advanced form of AI) would more likely be programmed not to attack it. There may be no realistic prospect of creating an artificial intelligence that resembles humans, but there has been much talk about the dangers of even trying to produce such a thing. [Sources: 2, 3, 12, 17] 

    






Sources:

    

[0]: https://www.sciencemag.org/news/2018/01/could-science-destroy-world-these-scholars-want-save-us-modern-day-frankenstein

    

[1]: https://www.iotforall.com/impact-of-artificial-intelligence-job-losses

    

[2]: https://en.wikipedia.org/wiki/AI_takeover

    

[3]: https://qz.com/653221/what-will-destroy-us-first-superbabies-or-ai/

    

[4]: https://ib-em.com/general-news/robots-are-surely-not-going-to-destroy-the-planet-or-are-they/

    

[5]: https://www.spectator.co.uk/article/how-close-is-humanity-to-destroying-itself

    

[6]: https://thenextweb.com/insider/2014/03/08/ai-could-kill-all-meet-man-takes-risk-seriously/

    

[7]: https://www.brookings.edu/research/what-is-artificial-intelligence/

    

[8]: https://psmag.com/social-justice/nick-bostrom-superintelligence-singularity-technology-future-books-90067

    

[9]: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/

    

[10]: https://undark.org/2020/07/24/book-review-the-precipice/

    

[11]: http://www.cnn.com/2014/12/26/opinion/scoblete-ai-human-threat/index.html

    

[12]: https://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking

    

[13]: https://thereader.mitpress.mit.edu/how-humanity-discovered-its-possible-extinction-timeline/

    

[14]: https://www.livescience.com/49952-stephen-hawking-warnings-to-humanity.html

    

[15]: https://www.fastcompany.com/90396213/google-quantum-supremacy-future-ai-humanity

    

[16]: https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-artificial-intelligence-openai-neuralink-ai-warning-a8074821.html

    

[17]: https://www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai

    

[18]: https://plato.stanford.edu/entries/ethics-ai/

    

[19]: https://www.cbinsights.com/research/ai-threatens-humanity-expert-quotes/

    

[20]: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/

    

[21]: https://nickbostrom.com/existential/risks.html

    

[22]: https://futurism.com/artificial-intelligence-is-our-future-but-will-it-save-or-destroy-humanity

    

 
