AGI research has become a taboo in the early 21st century
Due to the failures of earlier generations, which believed that they would quickly achieve AGI, leading to the AI winters, 21st century researchers have been very afraid of even trying it for fear of being considered cranks, going instead only for smaller subset problems like better neural network designs.
While there is fundamental value in such subset problems, keeping the final goal in sight is also very important: we will likely never reach AGI without it.
This is voiced for example in Superintelligence by Nick Bostrom (2014), section "Opinions about the future of machine intelligence", which in turn quotes Nils Nilsson:
There may, however, be a residual cultural effect on the AI community of its earlier history that makes many mainstream researchers reluctant to align themselves with over-grand ambition. Thus Nils Nilsson, one of the old-timers in the field, complains that his present-day colleagues lack the boldness of spirit that propelled the pioneers of his own generation:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human
thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.
Don't be a pussy, AI researchers!!!
Ciro's 2D reinforcement learning games
Video 1. Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement learning by Ciro Santilli (2018). Source. Source code at: github.com/cirosantilli/Urho3D-cheat.
Figure 1. Screenshot of the basketball stage of Ciro's 2D continuous game. Source code at: github.com/cirosantilli/rl-game-2d-grid. Big kudos to game-icons.net for the sprites.
Video 2. Top Down 2D Discrete Tile Based Game with C++ SDL and Boost R-Tree for Reinforcement Learning by Ciro Santilli (2017). Source.
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magic aspect of the game world, which cannot be explained by object's location and inertia alone" in order to test the the missing link between continuous and discrete AI.
Much in the spirit of gvgai, we have to do the following loop (a minimal code sketch of such a setup follows the list):
  • create an initial game that a human can solve
  • find an AI that beats it well
  • study the AI, and add a new mechanic that breaks the AI, but does not break a human!
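The following is a hypothetical C++ sketch of what a gym-like interface for that loop could look like; it is an illustration, not code from the actual repositories. The environment exposes reset/step, and adding a new "magic" mechanic would just mean adding one more rule inside step(), so the same agents can be re-run against progressively harder worlds.

```cpp
// Hypothetical gym-like environment interface (not from the actual repos):
// reset/step with observation, reward and done flag.
#include <cstdio>
#include <random>
#include <vector>

struct Step {
    std::vector<float> obs;  // what the agent sees
    float reward;
    bool done;
};

class Env {
public:
    virtual ~Env() = default;
    virtual std::vector<float> reset() = 0;
    virtual Step step(int action) = 0;
};

// Toy 1D "reach the goal" world standing in for the 2D grid game.
class LineWorld : public Env {
    int pos = 0;
public:
    std::vector<float> reset() override { pos = 0; return {0.0f}; }
    Step step(int action) override {  // action: 0 = left, 1 = right
        pos += (action == 1) ? 1 : -1;
        bool done = pos >= 5;
        return {{static_cast<float>(pos)}, done ? 1.0f : 0.0f, done};
    }
};

int main() {
    LineWorld env;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> act(0, 1);
    env.reset();
    for (int t = 0; t < 100; ++t) {  // random agent as a placeholder policy
        Step s = env.step(act(rng));
        if (s.done) { std::printf("solved at step %d\n", t); return 0; }
    }
    std::printf("not solved in 100 steps\n");
}
```

The point of such an interface is that "add a new mechanic that breaks the AI" only touches the environment's step(), while the human-playable frontend and the AI training loop stay unchanged.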
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computation requirements, it is better to focus on a 2D world at first. Such a world with the right mechanics can break any AI, while still being faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, and that is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution from scratch would be faster to develop for this use case, since Urho3D has a lot of human-gaming features that are not needed, and because in 2019 the Urho3D lead developers disagreed with the China censored keyword attack.
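As a rough illustration of how small the physics core of such an engine-less rewrite would be, here is a minimal Box2D stepping loop (Box2D 2.4-style API) for a single top-down agent; this is a hedged sketch, not the project's actual code, and the SDL/OpenGL rendering plus the RL observation/reward extraction would be layered on top of this world stepping.

```cpp
// Minimal Box2D world stepping loop (Box2D 2.4-style API), as a sketch of
// the physics core a raw SDL + Box2D + OpenGL rewrite would revolve around.
#include <cstdio>
#include <box2d/box2d.h>

int main() {
    b2Vec2 gravity(0.0f, 0.0f);      // top-down world: no gravity
    b2World world(gravity);

    // One dynamic "agent" body with a circular fixture.
    b2BodyDef bodyDef;
    bodyDef.type = b2_dynamicBody;
    bodyDef.position.Set(0.0f, 0.0f);
    b2Body* agent = world.CreateBody(&bodyDef);
    b2CircleShape circle;
    circle.m_radius = 0.5f;
    agent->CreateFixture(&circle, 1.0f /* density */);

    // Fixed timestep loop: apply the policy's chosen force, step the
    // physics, then read back the body state as the next observation.
    const float timeStep = 1.0f / 60.0f;
    for (int i = 0; i < 60; ++i) {
        agent->ApplyForceToCenter(b2Vec2(1.0f, 0.0f), true);
        world.Step(timeStep, 8 /* velocity iters */, 3 /* position iters */);
        b2Vec2 p = agent->GetPosition();
        std::printf("t=%d x=%.2f y=%.2f\n", i, p.x, p.y);
    }
}
```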
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Ciro has always had a feeling that AI research in the 2020s is too unambitious. How many teams are actually aiming for AGI? When he later read Superintelligence by Nick Bostrom (2014), it said the same: AGI research has become a taboo in the early 21st century.
Related projects:
Bibliography:
Video 3. DeepMind Has A Superhuman Level Quake 3 AI Team by Two Minute Papers (2018). Source. Commentary on DeepMind's 2019 Capture the Flag paper. DeepMind does some similar simulations to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research. Does this repo contain everything?
Video 4. OpenAI Plays Hide and Seek... and Breaks The Game! by Two Minute Papers (2019). Source. Commentary on OpenAI's 2019 hide and seek paper. OpenAI does some similar simulations to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research, and even worse due to the fake "Open" in the name. Does this repo contain everything?
Video 5. Much bigger simulation, AIs learn Phalanx by Pezzza's Work (2022). Source. 2D agents with vision. Simple prey/predator scenario.
Microscopy connectome extraction
This is the most plausible way of obtaining a full connectome looking from 2020 forward: cut the brain into fine slices, e.g. with a microtome, and then observe the slices with an electron microscope + appropriate staining. Superintelligence by Nick Bostrom (2014) really opened Ciro Santilli's eyes to this possibility.
Once this is done for a human, it will be one of the greatest milestones of humanity, comparable perhaps to the Human Genome Project. But of course, privacy issues are incredibly pressing in this case, even more than in the Human Genome Project, as we would essentially be able to read the brain of the person after their death.
As of 2022, the Drosophila connectome had been almost fully extracted.
This is also a possible path towards post-mortem brain reading.
Figure 1. Source. Unconfirmed, but looks like the type of frozen brain where a Microtome would be used.
Mind uploading
Wikipedia defines mind uploading as a synonym for whole brain emulation. This sounds really weird, as "mind uploading" suggests rather simply a brain dump, or perhaps re-uploading a brain dump back into a brain.
Superintelligence by Nick Bostrom (2014) section "Whole brain emulation" provides a reasonable setup: post mortem, take a brain, freeze it, then cut it into fine slices with a microtome, and then inspect the slices with an electron microscope after some kind of staining to determine all the synapses.
Likely implies AGI.
Transcendence (2014)
The premise that "we can't make AGI, but we know enough about the human brain to upload one onto a computer" is flawed. Edit: after reading Superintelligence by Nick Bostrom (2014), Ciro Santilli was convinced otherwise. What is flawed is of course just the "extracting the connectome with macroscopic probes" part. A post mortem connectome extraction with a microtome is much more believable. But of course they weren't going to show fake slices of Johnny Depp's brain, were they? Famous actor bodies are sacred! What a huge lost opportunity. On the other hand, the scale of the first connectome extraction would arguably be too huge to be undertaken by a random pair of rogue researchers. The same would also likely apply to any first human brain connectome. It would much more likely be a huge public effort, much like the Human Genome Project.
But this film does have the merit of exploring how an AGI might act to take over the world once created, notably by creating its own physical research laboratory. Though it doesn't feel likely that it could stay under the radar for 2 years given the energy requirements of the research. Even the terrorists find it before the FBI!
I also wish they had shown the dildo (or more likely, direct stimulation!) that the computerized Johnny Depp used with his wife before he managed to re-synthesize his body. But you know, 18+ would cut too much into profits. Ah, what a shame.
Video 1. Can You Prove You're Self Aware? in the big lab scene from Transcendence (2014). Source.