There are two main ways to try to reach AGI. Which one to take is one of the most important technological questions of humanity according to Ciro Santilli:
- AI training robot: expensive, slow, but realistic world
- AI training game: faster, less expensive, but possibly not realistic enough
There is also an intermediate area of research/engineering where people try to first simulate the robot and its world realistically, use the simulation for training, and then transfer the simulated training to real robots, see e.g.: realistic robotics simulation.
It doesn't need to be a bipedal robot. We can let Boston Dynamics worry about that walking balance crap.
It could very well instead be on wheels, like an arm on tracks.
Or something more like a factory with arms on rails as per:
- Transcendence (2014)
- youtu.be/MtVvzJIhTmc?t=112 from Video "Rotrics DexArm is available NOW! by Rotrics (2020)" where they have a sliding rail
An arm with a hand and a camera are however indispensable of course!
Algovivo demo
. github.com/juniorrojas/algovivo: A JavaScript + WebAssembly implementation of an energy-based formulation for soft-bodied virtual creatures.
Ciro Santilli wonders how far AI could go from a room with a bank account and an Internet connection.
It would have to understand that it must keep its bank account high to buy power.
And it would start to learn about the world and interact with it to get more money.
It would likely become a hacker and steal a bunch of money, as that's probably the easiest approach.
In that scenario, Internet bandwidth would likely be its most precious resource, as that is how it would interact with the world to learn from it and make money.
Compute power and storage would come next as resources.
And of course, once it got access to cloud computing, which might happen immediately and thus invalidate this experiment, things would just go more and more nuts.
Terrible name, but very interesting dataset:
GitHub describes the input quite well:
The model takes as input a RGB image from the robot workspace camera and a task string describing the task that the robot is supposed to perform. What task the model should perform is communicated to the model purely through the task string. The image communicates to the model the current state of the world, i.e. assuming the model runs at three hertz, every 333 milliseconds, we feed the latest RGB image from a robot workspace camera into the model to obtain the next action to take.
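In pseudo-Python, that control loop would look something like the sketch below, where camera, model and robot are hypothetical placeholder interfaces, not the actual RT-1 API:

import time

CONTROL_PERIOD_S = 1 / 3  # the model runs at 3 Hz, i.e. every 333 ms

def control_loop(camera, model, robot, task):
    # camera, model and robot are hypothetical interfaces standing in
    # for the real hardware and policy; task is the task string.
    while True:
        start = time.time()
        # The latest RGB image is the model's only view of world state.
        image = camera.capture_rgb()
        # The task string tells the model what to do with that state.
        action = model.predict(image, task)
        robot.execute(action)
        # Sleep out the remainder of the 333 ms control period.
        time.sleep(max(0.0, CONTROL_PERIOD_S - (time.time() - start)))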
TODO: how is the scenario specified?
TODO: any simulation integration to it?
Homepage: behavior.stanford.edu/behavior-1k
Quite impressive.
Focuses on daily human tasks around the house.
Models soft-body dynamics, fluid dynamics and object states such as heat/wetness.
TODO are there any sample solutions with their scores? Sample videos would be especially nice. Funny to see how they put so much effort into setting up the benchmark, but there's not a single solution example.
Comparison table of BEHAVIOR-1K with other benchmarks by BEHAVIOR Benchmark
. Source. This can serve as a nice list of robot AI benchmarks.
Paper: arxiv.org/abs/2403.09227
Two screenshots of BEHAVIOR-1K
. Reference implementation of the BEHAVIOR Benchmark.
Built on Nvidia Omniverse unfortunately, which appears to be closed source software. Why do these academics do it?
"Gibson" seems to be related to an older project: github.com/StanfordVL/GibsonEnv which explains the name choice:
Gibson environment is named after James J. Gibson, the author of "Ecological Approach to Visual Perception", 1979. "We must perceive in order to move, but we must also move in order to perceive"
Homepage: aihabitat.org/
Couldn't get it to work on Ubuntu 24.10... github.com/facebookresearch/habitat-lab/issues/2152
The thing was definitely built by researchers: how to cite comes first, actually working comes later! And the docs are just generally awkward.
Habitat 2.0: Training home assistants to rearrange their habitat by AI at Meta
. Source. Quick teaser video.
Has anybody done this seriously? Given a supercomputer, what amazing human-like robot behavior can we achieve?
Our Final Invention - Artificial General Intelligence by Sciencephile the AI (2023)
Source. AGI via simulation section.
Ciro Santilli defines an "AI game" as:
a game that is used to train AI, in particular one that was designed with this use case in mind, and usually with the intent of achieving AGI, i.e. the game has to somehow represent a digital world with enough analogy to the real world so that the AGI algorithms developed there could also work on the real world
Historically, as of 2020, most games played by AI have been games designed for humans: Human game used for AI training.
Ciro Santilli took a stab at an AI game: Ciro's 2D reinforcement learning games, but he didn't sink enough time into that project.
A closely related and often overlapping category of simulations are artificial life simulations.
Bibliography:
This section is about games initially designed for humans, but which ended up being used in AI development as well, e.g.:
- board games such as chess and Go
- video games such as Minecraft or old Video game console games
Game AI is an artificial intelligence that plays a certain game.
It can be either developed for serious purposes (e.g. AGI development in AI games), or to make games more interesting for humans.
The Quora question: www.quora.com/Are-there-any-PhD-programs-in-training-an-AI-system-to-play-computer-games-Like-the-work-DeepMind-do-combining-Reinforcement-Learning-with-Deep-Learning-so-the-AI-can-play-Atari-games
A good way to find labs is to go down the issues sections of projects such as the ones listed in this section, and then stalk the posters to see where they are doing their PhDs.
Principal investigator: Simon M. Lucas.
Lists:
- www.gocoder.one/blog/ai-game-competitions-list/ Good list of AI game competitions.
- codecombat.com/
TODO quick summary of game rules? Perhaps: battlecode.org/assets/files/battlecode-guide-xsquare.pdf
Some mechanics:
- inter agent communication
- compute power is limited by capping the number of Java bytecode instructions each bot can execute per cycle, see the sketch below
Ah, shame, they are a bit weak.
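Battlecode enforces the compute cap in Java by counting bytecodes; as a toy illustration of the same budgeting idea in Python (not Battlecode's actual mechanism), bots can be written as generators that a scheduler preempts after a fixed work budget per turn:

# Minimal sketch of per-bot compute budgeting: each bot is a generator
# that yields once per unit of work, and the scheduler cuts it off
# after its budget for the turn is exhausted.
BUDGET_PER_TURN = 100

def spinner_bot():
    # A bot that burns compute forever; the scheduler will preempt it.
    while True:
        yield

def scheduler(bots, turns):
    for turn in range(turns):
        for name, bot in bots.items():
            for _ in range(BUDGET_PER_TURN):
                try:
                    next(bot)
                except StopIteration:
                    break
            print(f'turn {turn}: {name} ran at most {BUDGET_PER_TURN} steps')

scheduler({'spinner': spinner_bot()}, turns=2)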
We define a "Procedural AI training game" as an AI training game in which parts of the game are made with procedural generation.
In more advanced cases, the generation itself can be done with AI. This is a possible Path to AGI which reduces the need for human intervention in meticulously crafting the AI game: AI training AI.
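As a toy illustration of the basic idea (not any specific project's method), here is a grid world whose obstacle layout is procedurally generated from a seed, so every training episode can present a fresh map:

import random

def generate_level(width, height, wall_probability=0.2, seed=None):
    # Procedurally generate a grid world: '#' is a wall, '.' is free space.
    rng = random.Random(seed)
    return [
        ['#' if rng.random() < wall_probability else '.'
         for _ in range(width)]
        for _ in range(height)
    ]

# A different map every episode; fix the seed to reproduce a level.
for row in generate_level(10, 5, seed=42):
    print(''.join(row))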
- github.com/google-deepmind/pushworld (2023). Too combinatorial: gripping makes it so much easier to move stuff around in the real world. But cool nonetheless.
- From Motor Control to Team Play in Simulated Humanoid Football
From Motor Control to Team Play in Simulated Humanoid Football by Ali Eslami (2023)
Source. Likely a reupload by a DeepMind employee: www.linkedin.com/in/smalieslami.
DeepMind's AI Trained For 5 Years by Two Minute Papers (2023)
Source. The 5 years bullshit is of course in-game-time clickbait: they simulate 1000x faster than realtime.
We define this category as AI games in which agents are able to produce or consume natural language.
It dawned on Ciro Santilli that it would be very difficult to classify an agent as an AGI if that agent can't speak to take orders, read existing human generated documentation, explain what it is doing, or ask for clarification.
Human player test of DMLab-30 Select Described Object task by DeepMind (2018)
Source. This is one of the games from DeepMind Lab.
- github.com/deepmind/meltingpot TODO vs DeepMind Lab2D? Also 2D discrete. Started in 2021.
- github.com/deepmind/ai-safety-gridworlds mentioned e.g. at www.youtube.com/watch?v=CGTkoUidQ8I by Robert Miles
Creating Multimodal Interactive Agents from DeepMind by Two Minute Papers (2023)
Source. www.deepmind.com/blog/building-interactive-agents-in-video-game-worlds
Open-Ended Learning Leads to Generally Capable Agents by DeepMind (2021)
Short name: XLand. Whitepaper: www.deepmind.com/blog/generally-capable-agents-emerge-from-open-ended-play
github.com/deepmind/lab/tree/master/game_scripts/levels/contributed/dmlab30 has some good games with video demos on YouTube, though for some weird reason they are unlisted.
TODO get one of the games running. Instructions: github.com/deepmind/lab/blob/master/docs/users/build.md. This may help: github.com/deepmind/lab/issues/242: "Complete installation script for Ubuntu 20.04".
It is interesting how much overlap some of those have with Ciro's 2D reinforcement learning games
The games are 3D, but most of them are purely flat, and the 3D is just a waste of resources.
Human player test of DMLab-30 Select Described Object task by DeepMind (2018)
Source. Some of their games involve language instructions from the user to determine the desired task, cool concept.
Human player test of DMLab-30 Fixed Large Map task by DeepMind (2018)
Source. They also have some maps with more natural environments.
Gridworld version of DeepMind Lab.
Open sourced in 2020: analyticsindiamag.com/deepmind-just-gave-away-this-ai-environment-simulator-for-free/
A tiny paper: arxiv.org/pdf/2011.07027.pdf
TODO get running, publish demo videos on YouTube.
At twitter.com/togelius/status/1328404390114435072 Togelius called out DeepMind Lab2D for not giving them credit on prior work! As seen from web.archive.org/web/20220331022932/http://gvgai.net/ though, DeepMind sponsored them at some point.
This very much looks like GVGAI, which was first released in 2014, has been used in dozens (maybe hundreds) of papers, and for which one of the original developers was Tom Schaul at DeepMind...
Or is real world data necessary, e.g. with robots?
Fundamental question related to Ciro's 2D reinforcement learning games.
Bibliography:
- youtu.be/i0UyKsAEaNI?t=120 How to Build AGI? Ilya Sutskever interview by Lex Fridman (2020)
They seem to do some cool stuff.
They have also declined every one of Ciro Santilli's applications for software engineer jobs before any interview. Ciro always wondered what it takes to get an interview with them. Likely a PhD? Oh well.
In the early days at least lots of gamedev experience was enough though: www.linkedin.com/in/charles-beattie-0695373/.
- www.quora.com/Will-Google-open-source-AlphaGo Will Google open source AlphaGo?
- www.nature.com/articles/nature16961 Mastering the game of Go with deep neural networks and tree search by Silver et al. (2016), published without source code
Generalization of AlphaGo Zero that plays Go, chess and shogi.
- www.science.org/doi/10.1126/science.aar6404 A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play by Silver et al. (2018), published without source code
- www.quora.com/Is-there-an-Open-Source-version-of-AlphaZero-specifically-the-generic-game-learning-tool-distinct-from-AlphaGo
www.quora.com/Which-chess-engine-would-be-stronger-Alpha-Zero-or-Stockfish-12/answer/Felix-Zaslavskiy explains that it beat Stockfish 8. But then Stockfish was developed further and would start to beat it. We know this because although AlphaZero was closed source, they released the trained artificial neural network, so it was possible to replay AlphaZero at its particular stage of training.
www.gvgai.net (dead as of 2023)
The project kind of died circa 2020 it seems, a shame. Likely their funding ran out. The domain is dead as of 2023, last archive from 2022: web.archive.org/web/20220331022932/http://gvgai.net/. It is marked as funded by DeepMind. Researchers really should use university/GitHub domain names!
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
They have some source at: github.com/GAIGResearch/GVGAI TODO review
A published book at: gaigresearch.github.io/gvgaibook/
People involved, from the QMUL Game AI Research Group and from other universities (TODO check which is which):
- Simon M. Lucas: gaigresearch.github.io/members/Simon-Lucas, principal investigator
- Diego Perez Liebana www.linkedin.com/in/diegoperezliebana/
- Raluca D. Gaina: www.linkedin.com/in/raluca-gaina-347518114/ from Queen Mary
- Ahmed Khalifa
- Jialin Liu
This kind of died at some point it seems, as checked in 2023.
Julian Togelius cites it e.g. at: togelius.blogspot.com/2016/07/which-games-are-useful-for-testing.html
In 2019, OpenAI transitioned from non-profit to for-profit:
- www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ "The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism."
- archive.ph/wXBtB How OpenAI Sold its Soul for $1 Billion
- www.reddit.com/r/GPT3/comments/n2eo86/is_gpt3_open_source/
OpenAI Gym development by OpenAI ceased in 2021, and the not-for-profit Farama Foundation took up maintenance of it as Farama Gymnasium.
gymnasium==1.1.1 just worked on Ubuntu 24.10 when testing with the hello world gym/random_control.py, which just works and opens a game window on my desktop:
sudo apt install swig
cd gym
virtualenv -p python3 .venv
. .venv/bin/activate
pip install -r requirements-python-3-12.txt
./random_control.py
Lunar Lander environment of Farama Gymnasium with random controls
. This example just passes random commands to the ship, so don't expect wonders. The cool thing about it though is that you can open any environment with it, e.g.:
./random_control.py CarRacing-v3
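gym/random_control.py is from Ciro's own repository; presumably it is essentially the standard Gymnasium random-agent loop, something like this sketch (the real script may differ):

#!/usr/bin/env python
# Minimal random-control loop for a Gymnasium environment.
# The environment ID can be passed as an argument, defaulting to
# LunarLander-v3.
import sys
import gymnasium as gym

env_id = sys.argv[1] if len(sys.argv) > 1 else 'LunarLander-v3'
env = gym.make(env_id, render_mode='human')
observation, info = env.reset()
for _ in range(1000):
    # Sample a random action from the action space.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()
env.close()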
To manually control it we can use gym/moon_play.py:
cd gym
./moon_play.py
Manual control is extremely useful to get an intuition about the problem. You will notice immediately that controlling the ship is extremely difficult.
Lunar Lander environment of Farama Gymnasium with manual control
. We slow it down to 10 FPS to give us a fighting chance.
We don't know if it is realistic, but what is certain is that this is definitely not designed to be a fun video game!
A good strategy is to land anywhere very slowly and then inch yourself towards the landing pad, because:
- the legs of the lander are short and soft, and you're not supposed to hit the body on the ground, so you have to go very slowly
- the thrusters are quite weak and inertia management is super important
- the ground is very slippery
The documentation for it is available at: gymnasium.farama.org/environments/box2d/lunar_lander/. The agent input is described as:
The state is an 8-dimensional vector: the coordinates of the lander in x & y, its linear velocities in x & y, its angle, its angular velocity, and two booleans that represent whether each leg is in contact with the ground or not.
so it is a fundamentally flawed robot training example, as global x and y coordinates are precisely known.
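For illustration, a minimal sketch unpacking that observation vector with the standard Gymnasium API (assuming the LunarLander-v3 environment ID):

import gymnasium as gym

env = gym.make('LunarLander-v3')
observation, info = env.reset(seed=0)
# Unpack the 8-dimensional observation vector.
x, y, vx, vy, angle, angular_velocity, \
    left_leg_contact, right_leg_contact = observation
print(f'position: ({x:.2f}, {y:.2f}), velocity: ({vx:.2f}, {vy:.2f})')
env.close()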
Variation in the scenario comes from:
- initial speed of vehicle
- shape of the lunar surface, but TODO can the ship observe the lunar surface shape in any way? If not, once again, this is a deeply flawed example.
The actions are documented on the same page:
- 0: do nothing
- 1: fire left orientation engine
- 2: fire main engine
- 3: fire right orientation engine
so we can make it spin like mad counterclockwise with:
action = 1
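For example, a complete minimal loop applying that constant action, as a sketch using the standard Gymnasium API:

import gymnasium as gym

env = gym.make('LunarLander-v3', render_mode='human')
observation, info = env.reset()
for _ in range(500):
    # Constantly fire the left orientation engine: the lander spins
    # counterclockwise faster and faster.
    observation, reward, terminated, truncated, info = env.step(1)
    if terminated or truncated:
        observation, info = env.reset()
env.close()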
To actually play the games manually with the keyboard, you need to define your own keybindings with gymnasium.utils.play.play. Feature request for default keybindings: github.com/Farama-Foundation/Gymnasium/discussions/1330
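Presumably gym/moon_play.py does something like the following minimal sketch of such keybindings for the discrete Lunar Lander action space (the key choices here are arbitrary):

import gymnasium as gym
from gymnasium.utils.play import play

# play() requires rgb_array rendering; it blits the frames itself
# via pygame.
env = gym.make('LunarLander-v3', render_mode='rgb_array')
play(
    env,
    keys_to_action={
        'a': 1,  # fire left orientation engine
        'w': 2,  # fire main engine
        'd': 3,  # fire right orientation engine
    },
    noop=0,  # do nothing when no key is pressed
    fps=10,  # slow it down to make the game playable
)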
There is no C API, you have to go through Python: github.com/Farama-Foundation/Gymnasium/discussions/1181. Shame.
They have video recording support; minimal example: stackoverflow.com/questions/77042526/how-to-record-and-save-video-of-gym-environment/79514542#79514542
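A minimal sketch using the RecordVideo wrapper from gymnasium.wrappers:

import gymnasium as gym
from gymnasium.wrappers import RecordVideo

env = gym.make('LunarLander-v3', render_mode='rgb_array')
# Save a video of every episode under ./videos.
env = RecordVideo(env, video_folder='videos',
                  episode_trigger=lambda episode_id: True)
observation, info = env.reset()
for _ in range(500):
    observation, reward, terminated, truncated, info = env.step(
        env.action_space.sample())
    if terminated or truncated:
        observation, info = env.reset()
env.close()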
Announced at:
It would be cool if they maintained their own list!
github.com/DLR-RM/rl-baselines3-zoo seems to contain some implementations.
Suggested at: github.com/Farama-Foundation/Gymnasium/discussions/1331
Not-for profit that took up OpenAI Gym maintenance after OpenAI dropped it.