AI game by DeepMind Updated +Created
Video 2.
Open-Ended Learning Leads to Generally Capable Agents by DeepMind (2021)
Short name: XLand. Whitepaper: www.deepmind.com/blog/generally-capable-agents-emerge-from-open-ended-play.
AI game with natural language Updated +Created
We define this category as AI games in which agents are able to produce or consume natural language.
It dawned on Ciro Santilli that it would be very difficult to classify an agent as an AGI if that agent can't speak to take orders, read existing human-generated documentation, explain what it is doing, or ask for clarification.
Video 1.
Human player test of DMLab-30 Select Described Object task by DeepMind (2018)
Source. This is one of the games from DeepMind Lab.
Video 2.
WorldGPT by Nhan Tran (2023)
Source. Not the most amazing demo, but it is a start.
Ciro's 2D reinforcement learning games Updated +Created
Video 1.
Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement learning by Ciro Santilli (2018)
Source. Source code at: github.com/cirosantilli/Urho3D-cheat.
Figure 1.
Screenshot of the basketball stage of Ciro's 2D continuous game
. Source code at: github.com/cirosantilli/rl-game-2d-grid. Big kudos to game-icons.net for the sprites.
Video 2.
Top Down 2D Discrete Tile Based Game with C++ SDL and Boost R-Tree for Reinforcement Learning by Ciro Santilli (2017)
Source.
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magic aspect of the game world, which cannot be explained by object's location and inertia alone" in order to test the the missing link between continuous and discrete AI.
Much in the spirit of gvgai, we have to do the following loop:
  • create an initial game that a human can solve
  • find an AI that beats it well
  • study the AI, and add a new mechanic that breaks the AI, but does not break a human!
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computation requirements, it is better to focus on a 2D world at first. Such a world with the right mechanics can break any AI, while still being faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, and that is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution from scratch would be faster to develop for this use case, since Urho3D has a lot of human-gaming features that are not needed, and because in 2019 the Urho3D lead developers disagreed with the China censored keyword attack.
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Ciro has always had a feeling that AI research in the 2020's is too unambitious. How many teams are actually aiming for AGI? When he then read Superintelligence by Nick Bostrom (2014) it said the same. AGI research has become a taboo in the early 21st century.
Related projects:
Bibliography:
Video 3.
DeepMind Has A Superhuman Level Quake 3 AI Team by Two Minute Papers (2018)
Source. Commentary on DeepMind's 2019 Capture the Flag paper. DeepMind does some simulations similar to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research. Does this repo contain everything?
Video 4.
OpenAI Plays Hide and Seek... and Breaks The Game! by Two Minute Papers (2019)
Source. Commentary on OpenAI's 2019 hide and seek paper. OpenAI does some simulations similar to what Ciro wants, but TODO do they publish source code for all of them? If not, Ciro calls bullshit on non-reproducible research, and even worse due to the fake "Open" in the name. Does this repo contain everything?
Video 5.
Much bigger simulation, AIs learn Phalanx by Pezzza's Work (2022)
Source. 2D agents with vision. Simple prey/predator scenario.
DeepMind Lab Updated +Created
github.com/deepmind/lab/tree/master/game_scripts/levels/contributed/dmlab30 has some good games with video demos on YouTube, though for some weird reason they are unlisted.
TODO get one of the games running. Instructions: github.com/deepmind/lab/blob/master/docs/users/build.md. This may help: github.com/deepmind/lab/issues/242: "Complete installation script for Ubuntu 20.04".
It is interesting how much overlap some of those have with Ciro's 2D reinforcement learning games.
The games are 3D, but most of them are purely flat, and the 3D is just a waste of resources.
Video 1.
Human player test of DMLab-30 Collect Good Objects task by DeepMind (2018)
Source.
Video 2.
Human player test of DMLab-30 Exploit Deferred Effects task by DeepMind (2018)
Source.
Video 3.
Human player test of DMLab-30 Select Described Object task by DeepMind (2018)
Source. Some of their games involve language instructions from the user to determine the desired task, a cool concept.
Video 4.
Human player test of DMLab-30 Fixed Large Map task by DeepMind (2018)
Source. They also have some maps with more natural environments.
DeepMind Lab2D vs gvgai Updated +Created
At twitter.com/togelius/status/1328404390114435072 Togelius called out DeepMind Lab2D for not giving the gvgai team credit for their prior work!
This very much looks like GVGAI which was first released in 2014, been used in dozens (maybe hundreds) of papers, and for which one of the original developers was Tom Schaul at DeepMind...
As seen from web.archive.org/web/20220331022932/http://gvgai.net/ though, DeepMind sponsored them at some point.
Deepmind soccer simulation Updated +Created
  • From Motor Control to Team Play in Simulated Humanoid Football
Video 1.
From Motor Control to Team Play in Simulated Humanoid Football by Ali Eslami (2023)
Source. Likely a reupload by DeepMind employee: www.linkedin.com/in/smalieslami.
Video 2.
DeepMind’s AI Trained For 5 Years by Two Minute Papers (2023)
Source. The 5 years bullshit is of course in-game time clickbait: they simulate 1000x faster than realtime.
gvgai Updated +Created
www.gvgai.net (dead as of 2023)
The project seems to have died circa 2020, a shame. Likely their funding ran out. The domain is dead as of 2023, last archive from 2022: web.archive.org/web/20220331022932/http://gvgai.net/. It is marked as funded by DeepMind. Researchers really should use university/GitHub domain names!
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
They have some source at: github.com/GAIGResearch/GVGAI TODO review
From QMUL Game AI Research Group:
From other universities:
TODO check:
  • Ahmed Khalifa
  • Jialin Liu
MuJoCo Updated +Created
Was a closed source project by "Roboti LLC", which was then acquired by DeepMind in October 2021 and open sourced March 2022: www.deepmind.com/blog/open-sourcing-mujoco
This library is quite cool. Feels very brutally lean and mean.
MuJoCo getting started Updated +Created
Tested on Ubuntu 23.10:
git clone https://github.com/google-deepmind/mujoco
cd mujoco
git checkout 5d46c39529819d1b31249e249ca399f306a108ac
mkdir -p build
cd build
cmake ..
make -j
Now let's play. Minimal interactive UI simulation of a simple MJCF scene with one falling cube:
bin/basic ../doc/_static/hello.xml
Test source code: github.com/google-deepmind/mujoco/blob/5d46c39529819d1b31249e249ca399f306a108ac/sample/basic.cc. The only thing you can do, it seems, is rotate the scene with the mouse. Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sabasic
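That sample goes through MuJoCo's plain C API, which can also be used headlessly without any UI. A minimal sketch of that, assuming it is compiled and linked against the build above and run from the build/ directory:
#include <stdio.h>
#include <mujoco/mujoco.h>

int main(void) {
    char error[1000] = "";
    // Load the same falling-cube MJCF scene used by bin/basic above.
    mjModel* m = mj_loadXML("../doc/_static/hello.xml", NULL, error, 1000);
    if (!m) {
        fprintf(stderr, "load error: %s\n", error);
        return 1;
    }
    mjData* d = mj_makeData(m);
    // Step the physics for one simulated second and watch the cube fall:
    // with a single free body in the scene, qpos[2] is its height.
    while (d->time < 1.0) {
        mj_step(m, d);
    }
    printf("time: %f z: %f\n", d->time, d->qpos[2]);
    mj_deleteData(d);
    mj_deleteModel(m);
    return 0;
}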
Some more interesting models can be found under the model/ directory: github.com/google-deepmind/mujoco/tree/5d46c39529819d1b31249e249ca399f306a108ac/model. E.g. the imaginary humanoid robot that DeepMind used in many demos can be seen with:
bin/basic ../model/humanoid/humanoid.xml
A more advanced UI with a few controls:
bin/simulate ../doc/_static/hello.xml
Test source code: github.com/google-deepmind/mujoco/tree/5d46c39529819d1b31249e249ca399f306a108ac/simulate. Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sasimulate
A very cool thing about that UI is that you can manually control joints. There are no joints in the hello.xml, but e.g. with the humanoid model:
bin/simulate ../model/humanoid/humanoid.xml
under "Control" you move each joint of the robot separately which is quite cool.
Video 1.
Demo of MuJoCo's built-in simulate viewer by Yuval Tassa (2019)
Source.
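The same joint actuation can also be driven programmatically: each actuator declared in the model gets one entry in mjData's ctrl array, which is read on every physics step. A rough sketch, assuming mjModel* m and mjData* d hold the humanoid model loaded as in the headless example above (which joint each index maps to is model-specific and not checked here):
// m->nu is the number of actuators, one per slider under "Control" in bin/simulate.
for (int i = 0; i < m->nu; i++) {
    d->ctrl[i] = 0.0;  // neutral control signal for every actuator
}
d->ctrl[0] = 0.5;      // drive the first actuator, whichever joint it maps to
mj_step(m, d);         // the control vector is applied on each step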
There's also a bin/record test executable that presumably renders the simulation directly to a file:
# Render the scene to raw RGB frames in rgb.out (here 5 seconds at 60 FPS).
bin/record ../doc/_static/hello.xml 5 60 rgb.out
# Wrap the raw frames into an MP4; vflip because OpenGL renders images bottom-up.
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 800x800 -framerate 60 -i rgb.out -vf "vflip" video.mp4
Mentioned at: mujoco.readthedocs.io/en/2.2.2/programming.html#sarecord but TODO that produced a broken video, related issues: