Ciro's 2D reinforcement learning games Updated +Created
Video 1.
Top Down 2D Continuous Game with Urho3D C++ SDL and Box2D for Reinforcement learning by Ciro Santilli (2018)
Source. Source code at: github.com/cirosantilli/Urho3D-cheat.
Figure 1.
Screenshot of the basketball stage of Ciro's 2D continuous game
. Source code at: github.com/cirosantilli/rl-game-2d-grid. Big kudos to game-icons.net for the sprites.
Video 2.
Top Down 2D Discrete Tile Based Game with C++ SDL and Boost R-Tree for Reinforcement Learning by Ciro Santilli (2017)
Source.
The goal of this project is to reach artificial general intelligence.
A few initiatives have created reasonable sets of robotics-like games for the purposes of AI development, most notably: OpenAI and DeepMind.
However, all projects so far have only created sets of unrelated games, or worse: focused on closed games designed for humans!
What is really needed is to create a single cohesive game world, designed specifically for this purpose, and with a very large number of game mechanics.
Notably, by "game mechanic" is meant "a magic aspect of the game world, which cannot be explained by object's location and inertia alone" in order to test the the missing link between continuous and discrete AI.
Much in the spirit of gvgai, we have to do the following loop:
  • create an initial game that a human can solve
  • find an AI that beats it well
  • study the AI, and add a new mechanic that breaks the AI, but does not break a human!
The question then becomes: do we have enough computational power to simulate a game world that is analogous enough to the real world, so that our AI algorithms will also apply to the real world?
To reduce computation requirements, it is better to focus on a 2D world at first. Such a world, given the right mechanics, can break any AI while still being faster to simulate than a 3D world.
The initial prototype uses the Urho3D open source game engine, which is a reasonable project, but a raw Simple DirectMedia Layer + Box2D + OpenGL solution written from scratch would be faster to develop for this use case, since Urho3D has a lot of human-gaming features that are not needed, and because in 2019 the Urho3D lead developers disagreed with the China censored keyword attack.
Simulations such as these can be viewed as a form of synthetic data generation procedure, where the goal is to use computer worlds to reduce the costs of experiments and to improve reproducibility.
Ciro has always had a feeling that AI research in the 2020s is too unambitious. How many teams are actually aiming for AGI? When he later read Superintelligence by Nick Bostrom (2014), it said the same: AGI research has become a taboo in the early 21st century.
Related projects:
Bibliography:
Video 3.
DeepMind Has A Superhuman Level Quake 3 AI Team by Two Minute Papers (2018)
Source. Commentary on DeepMind's 2019 Capture the Flag paper. DeepMind runs simulations similar to what Ciro wants, but TODO: do they publish the source code for all of them? If not, Ciro calls bullshit on non-reproducible research. Does this repo contain everything?
Video 4.
OpenAI Plays Hide and Seek... and Breaks The Game! by Two Minute Papers (2019)
Source. Commentary on OpenAI's 2019 hide and seek paper. OpenAI runs simulations similar to what Ciro wants, but TODO: do they publish the source code for all of them? If not, Ciro calls bullshit on non-reproducible research, made even worse by the fake "Open" in the name. Does this repo contain everything?
Video 5.
Much bigger simulation, AIs learn Phalanx by Pezzza's Work (2022)
Source. 2D agents with vision. Simple prey/predator scenario.
Farama Foundation Updated +Created
Not-for-profit that took up OpenAI Gym maintenance after OpenAI dropped it.
Farama Gymnasium Updated +Created
OpenAI ceased development of OpenAI Gym in 2021, and the not-for-profit Farama Foundation took up its maintenance.
gymnasium==1.1.1 worked out of the box on Ubuntu 24.10, tested with the hello world gym/random_control.py:
sudo apt install swig
cd gym
virtualenv -p python3 .venv
. .venv/bin/activate
pip install -r requirements-python-3-12.txt
./random_control.py
This just works and opens a game window on my desktop.
Figure 1.
Lunar Lander environment of Farama Gymnasium with random controls
.
This example just passes random commands to the ship, so don't expect wonders. The cool thing about it, though, is that you can open any environment with it, e.g.:
./random_control.py CarRacing-v3
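The script itself is essentially the standard Gymnasium random-agent loop. A minimal sketch of such a script follows; it is a sketch only, and the default environment and exact structure of gym/random_control.py are assumptions here:
#!/usr/bin/env python3
# Sketch of a random-control hello world, not necessarily identical to
# gym/random_control.py. The environment ID comes from the CLI, with an
# assumed default of LunarLander-v3.
import sys
import gymnasium as gym

env_id = sys.argv[1] if len(sys.argv) > 1 else "LunarLander-v3"
env = gym.make(env_id, render_mode="human")
observation, info = env.reset(seed=0)
done = False
while not done:
    # Sample a uniformly random action: no intelligence whatsoever.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()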
To manually control it we can use gym/moon_play.py:
cd gym
./moon_play.py
Manual control is extremely useful to get an intuition about the problem. You will notice immediately that controlling the ship is extremely difficult.
Figure 2.
Lunar Lander environment of Farama Gymnasium with manual control
.
We slow it down to 10 FPS to give us some fighting chance.
We don't know if it is realistic, but what is certain is that this is definitely not designed to be a fun video game!
  • the legs of the lander are short and soft, and you're not supposed to hit the body on the ground, so you have to go very slowly
  • the thrusters are quite weak and inertia management is super important
  • the ground is very slippery
A good strategy is to land anywhere very slowly and then inch yourself towards the landing pad.
The documentation for it is available at gymnasium.farama.org/environments/box2d/lunar_lander/. The agent input is described as:
The state is an 8-dimensional vector: the coordinates of the lander in x & y, its linear velocities in x & y, its angle, its angular velocity, and two booleans that represent whether each leg is in contact with the ground or not.
so it is a fundamentally flawed robot training example, as the global x and y coordinates are precisely known.
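For concreteness, the observation is an array of 8 floats which we can unpack like this (a sketch; the variable names are our own, chosen to match the documentation's description):
# Unpack the 8-dimensional LunarLander observation vector.
x, y, vx, vy, angle, angular_velocity, left_leg, right_leg = observation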
Variation in the scenario comes from:
  • initial speed of vehicle
  • shape of the lunar surface, but TODO: can the ship observe the lunar surface shape in any way? If not, once again, this is a deeply flawed example.
The actions are documented as:
  • 0: do nothing
  • 1: fire left orientation engine
  • 2: fire main engine
  • 3: fire right orientation engine
so we can make it spin like mad counterclockwise with:
action = 1
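In terms of the random-control sketch above, this just means replacing the sampled action inside the step loop (hypothetical placement):
# Instead of: action = env.action_space.sample()
action = 1  # fire the left orientation engine every step: constant counterclockwise spin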
To actually play the games manually with the keyboard, you need to define your own keybindings with gymnasium.utils.play.play. Feature request for default keybindings: github.com/Farama-Foundation/Gymnasium/discussions/1330
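For example, keybindings along the following lines work. This is a sketch: the specific keys are an arbitrary choice of ours, not a Gymnasium default, and not necessarily what gym/moon_play.py does:
#!/usr/bin/env python3
# Manual LunarLander play via gymnasium.utils.play.play.
import gymnasium as gym
from gymnasium.utils.play import play

# play() requires an rgb_array render mode.
env = gym.make("LunarLander-v3", render_mode="rgb_array")
play(
    env,
    fps=10,  # slow the game down to give the human a fighting chance
    keys_to_action={
        "a": 1,  # fire left orientation engine
        "w": 2,  # fire main engine
        "d": 3,  # fire right orientation engine
    },
    noop=0,  # action taken while no key is pressed
)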
There is no C API; you have to go through Python: github.com/Farama-Foundation/Gymnasium/discussions/1181. Shame.
OpenAI Updated +Created
In 2019, OpenAI transitioned from non-profit to for-profit, so what's the point of "Open" in the name anymore??
Text-to-video Updated +Created
This was the Holy Grail as of 2023, when text-to-image started to really take off, but text-to-video was miles behind.