It doesn't need to be a bipedal robot. We can let Boston Dynamics worry about that walking balance crap.
It could very well instead be on wheels, like an arm on tracks.
Or something more like a factory with arms on rails as per:
- Transcendence (2014)
- youtu.be/MtVvzJIhTmc?t=112 from video "Rotrics DexArm is available NOW!" by Rotrics (2020), where they have a sliding rail
An arm with a hand and a camera are, however, indispensable of course!
Algovivo demo: github.com/juniorrojas/algovivo, a JavaScript + WebAssembly implementation of an energy-based formulation for soft-bodied virtual creatures.

Ciro Santilli wonders how far AI could go from a room with a bank account and an Internet connection.
It would have to understand that it must keep its bank account high to buy power.
And it would start to learn about the world and interact with it to get more money.
It would likely become a hacker and steal a bunch of money, as that is probably the easiest approach.
In that scenario, Internet bandwidth would likely be its most precious resource, as that is how it would interact with the world to learn from it and make money.
Compute power and storage would come next as resources.
And of course, once it got onto cloud computing, which might happen immediately and thus invalidate this experiment, things would just go more and more nuts.
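To make the setup concrete, here is a toy Python sketch of that survival loop; every number and the `earn` function are invented for illustration, not claims about real costs:

```python
# Toy model of the thought experiment: the agent must keep its balance
# above zero while paying a recurring power bill, and it reinvests any
# surplus in bandwidth first, then compute, per the priorities above.
# All numbers and the earn() model are made up for illustration.

balance = 100.0   # USD in the bank account
POWER_BILL = 1.0  # USD per tick just to stay powered on
bandwidth = 1.0   # relative units of Internet bandwidth
compute = 1.0     # relative units of compute + storage

def earn(bandwidth: float, compute: float) -> float:
    """Stand-in for however the agent makes money online."""
    return 0.3 * bandwidth + 0.2 * compute

for tick in range(100):
    balance += earn(bandwidth, compute) - POWER_BILL
    if balance <= 0:
        print(f"powered off at tick {tick}")
        break
    if balance > 20:      # reinvest surplus: bandwidth is most precious...
        bandwidth += 1.0
        balance -= 10.0
    elif balance > 10:    # ...then compute and storage
        compute += 1.0
        balance -= 10.0
```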
Terrible name, but very interesting dataset:
GitHub describes the input quite well:
The model takes as input a RGB image from the robot workspace camera and a task string describing the task that the robot is supposed to perform.

What task the model should perform is communicated to the model purely through the task string. The image communicates to the model the current state of the world, i.e. assuming the model runs at three hertz, every 333 milliseconds, we feed the latest RGB image from a robot workspace camera into the model to obtain the next action to take.
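So deployment is just a fixed-rate closed loop. A minimal Python sketch of that loop, where `model`, `get_camera_image` and `execute_action` are hypothetical stand-ins rather than the repository's actual API:

```python
import time

RATE_HZ = 3             # model runs at three hertz
PERIOD = 1.0 / RATE_HZ  # i.e. one action every ~333 ms

def control_loop(model, get_camera_image, execute_action,
                 task="pick up the coke can"):
    """Closed loop: latest RGB workspace image + task string in,
    next robot action out, repeated at a fixed rate."""
    while True:
        start = time.monotonic()
        image = get_camera_image()   # current state of the world
        action = model(image, task)  # conditioned purely on image + text
        execute_action(action)
        # sleep away whatever remains of the 333 ms budget
        time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))
```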
TODO: how is the scenario specified?
TODO: any simulation integration to it?
Homepage: behavior.stanford.edu/behavior-1k
Quite impressive.
Focuses on daily human tasks around the house.
Models soft-body dynamics, fluid dynamics and object states such as heat/wetness.
TODO: are there any sample solutions with their scores? Sample videos would be especially nice. Funny to see how they put so much effort into setting up the benchmark, but there's not a single solution example.
Comparison table of BEHAVIOR-1K with other benchmarks by BEHAVIOR Benchmark. Source. This can serve as a nice list of robot AI benchmarks.

Paper: arxiv.org/abs/2403.09227
Two screenshots of BEHAVIOR-1K.

Reference implementation of the BEHAVIOR Benchmark.
Built on Nvidia Omniverse unfortunately, which appears to be closed source software. Why do these academics do it?
"Gibson" seems to be related to an older project: github.com/StanfordVL/GibsonEnv which explains the name choice:
Gibson environment is named after James J. Gibson, the author of "Ecological Approach to Visual Perception", 1979. "We must perceive in order to move, but we must also move in order to perceive"
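For a feel of the programming interface, a minimal random-agent sketch loosely based on OmniGibson's quickstart examples; the config keys and the `step` return signature are assumptions that may differ between versions:

```python
import omnigibson as og

# Assumed config layout from OmniGibson's quickstart examples:
# load a prebuilt interactive scene plus one mobile manipulator.
cfg = {
    "scene": {"type": "InteractiveTraversableScene", "scene_model": "Rs_int"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb"]}],
}
env = og.Environment(configs=cfg)

env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random policy
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        env.reset()
env.close()
```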
Homepage: aihabitat.org/
Couldn't get it to work on Ubuntu 24.10... github.com/facebookresearch/habitat-lab/issues/2152
The thing was definitely built by researchers: "how to cite" comes first, actually working comes later! And the docs are just generally awkward.
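For the record, the basic loop the docs are driving at looks roughly like this, following the PointNav test example from the README; treat the config path as version-dependent:

```python
import habitat

# Config path taken from habitat-lab's README PointNav example;
# it tends to move around between releases, so treat it as an assumption.
config = habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
env = habitat.Env(config=config)

observations = env.reset()
while not env.episode_over:
    # random agent: sample an action and step the simulator
    observations = env.step(env.action_space.sample())
env.close()
```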
Habitat 2.0: Training home assistants to rearrange their habitat by AI at Meta. Source. Quick teaser video.

Has anybody done this seriously? Given a supercomputer, what amazing human-like robot behavior can we achieve?