This article is within the scope of WikiProject Futures studies, a collaborative effort to improve the coverage of
Futures studies on Wikipedia. If you would like to participate, please visit the project page, where you can join
the discussion and see a list of open tasks.
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to
philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
Superintelligence: Paths, Dangers, Strategies is part of WikiProject Transhumanism, which aims to organize, expand, clean up, and guide
Transhumanism related articles on Wikipedia. If you would like to participate, you can edit this article, or visit the
project page for more details.
Add the Transhumanism navigation template to the bottom of all transhumanism articles (use {{Transhumanism}} or see
navigation template)
Add the Transhumanism info box to all transhumanism-related talk pages (use {{Wpa}} or see
info box)
Add [[Category:transhumanism]] to the bottom of all transhumanism related articles, so it shows up on the
list of transhumanism articles
Maintenance / Etc
Find/cite sources for all positions of an article (see
citing sources)
Try to expand stubs; however, some "new" articles may be
neologisms, as is common with positions on theories of life, and may be suitable for deletion (see
deletion process)
Watch the list of transhumanism-related articles and add to it accordingly (see
transhumanism articles)
Clarify references in
Transhumanism, using footnotes.
Untitled
How are WaitButWhy and Tim Urban more mention-worthy here than other random bloggers on the internet? —
Jeraphine Gryphon (talk) 10:20, 31 March 2015 (UTC)
Input from AI/physics/neuroscience experts needed
This article needs input from AI/ML, physics, and neuroscience experts. The recent great successes of the "new AI" are in machine learning (ML), and most experts feel the key to intelligence is learning (either biological or artificial); see, for example,
http://mcgovern.mit.edu/principal-investigators/tomaso-poggio
However, most ML experts are, naturally, gung-ho about the field, and have no incentive to consider the potential limits. These limits generally reflect two issues. First, the adverse effects of high dimensionality n, with computational needs for universal intelligence growing exponentially with n. Brains have ~1 quadrillion synapses which are updated in parallel every millisecond; currently planned computers have ~10 billion transistors serially updated every nanosecond, giving an overall shortfall of order a quintillion.
Second, Moore's law is now slowing as it hits physics limits, so it seems unlikely that quintillion-fold increases in density are possible.
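The back-of-the-envelope comparison above can be sketched as follows, using only the commenter's own round figures (~10^15 synapses at ~1 kHz, ~10^10 transistors clocked at 1 GHz); these are assumptions for illustration, not measurements:

```python
# Rough throughput comparison from the figures quoted above (all are the
# commenter's order-of-magnitude assumptions, not measured values).

SYNAPSES = 1e15          # ~1 quadrillion synapses in a brain
SYNAPSE_RATE_HZ = 1e3    # each updated roughly once per millisecond, in parallel

TRANSISTORS = 1e10       # ~10 billion transistors on a planned chip
CLOCK_HZ = 1e9           # one update per nanosecond

# Brain: all synapses update in parallel every millisecond.
brain_updates_per_s = SYNAPSES * SYNAPSE_RATE_HZ              # 1e18 updates/s

# Chip, strictly serial reading: one transistor update per clock tick.
serial_chip_updates_per_s = CLOCK_HZ                          # 1e9 updates/s

# Chip, fully parallel reading: every transistor switches each tick.
parallel_chip_updates_per_s = TRANSISTORS * CLOCK_HZ          # 1e19 updates/s

print(f"brain:           {brain_updates_per_s:.0e} updates/s")
print(f"chip (serial):   {serial_chip_updates_per_s:.0e} updates/s")
print(f"chip (parallel): {parallel_chip_updates_per_s:.0e} updates/s")
print(f"serial shortfall: {brain_updates_per_s / serial_chip_updates_per_s:.0e}x")
```

Note that under the strictly serial reading the gap comes out at ~10^9 (a billion-fold), and under the fully parallel reading the chip actually comes out ahead, so the size of any shortfall depends heavily on how much parallelism the hardware is credited with.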
One could argue that in principle brains are proof that an approximation to UI is possible, perhaps on a time scale much faster than the learning that has occurred since humans evolved. Until we thoroughly understand the neural basis of human intelligence (in particular, the detailed operation of the neocortex), that hope is like believing in God: there is no real evidence.
Paulhummerman (talk) 13:13, 24 October 2015 (UTC)
This is the article on a book by Nick Bostrom, the philosopher. Other expert opinions should be included only if they reacted specifically to the book. Otherwise, refer to the more general Wikipedia pages on these topics, such as
Superintelligence. Andre908436 (talk) 17:11, 25 April 2020 (UTC)
"the dominant lifeform on Earth"
Does Bostrom really consider superintelligence to be, in reality or potentiality, a lifeform? This seems counterintuitive and, if not a misrepresentation, could do with explaining. –
Arms & Hearts (talk) 22:49, 19 November 2019 (UTC)
That is the fundamental problem with all these AI speculators. How can you convince a machine that it is 'alive', and how can you convince it that it should want to be alive? Over and over the AI 'experts' say we should be worried about an AI that wants 'self-preservation' because any 'intelligent' being would want that. Well, no! Any LIVING being should want that. Why should a computer care if we turn it off? Are we going to program a belief in computer-heaven into it? AI theorizing is SO ridiculous. It's almost as if the people doing it are using their own less-than-real intelligence, like they live so far removed from 'natural life' with so many artifices in their lives, that they are no longer capable of pondering the fundamental realities of life. They're the artificially intelligent trying to imagine programming real live intelligence into a machine, anthropomorphizing to the degree of cliche. — Preceding
unsigned comment added by 154.5.212.157 (talk) 09:24, 1 March 2021 (UTC)