Term invented by Ciro Santilli, similar to "nuclear blues", and used to describe the feeling that every little shitty job you are doing (that does not considerably help achieve AGI) is completely pointless given that we are likely close to AGI as of 2023.
Due to the failures of earlier generations, which believed they would quickly achieve AGI and thus led to the AI winters, 21st century researchers have been very afraid of even trying it, at the risk of being considered cranks, and have rather gone only for smaller subset problems like better neural network designs.
While there is fundamental value in such subset problems, keeping the final goal in view is also very important: we will likely never reach AGI without it.
This is voiced for example in Superintelligence by Nick Bostrom (2014), section "Opinions about the future of machine intelligence", which in turn quotes Nils Nilsson:
There may, however, be a residual cultural effect on the AI community of its earlier history that makes many mainstream researchers reluctant to align themselves with over-grand ambition. Thus Nils Nilsson, one of the old-timers in the field, complains that his present-day colleagues lack the boldness of spirit that propelled the pioneers of his own generation:
Concern for "respectability" has had, I think, a stultifying effect on some AI researchers. I hear them saying things like, "AI used to be criticized for its flossiness. Now that we have made solid progress, let us not risk losing our respectability." One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanize human-level intelligence
Nilsson’s sentiment has been echoed by several others of the founders, including Marvin Minsky, John McCarthy, and Patrick Winston.
Don't be a pussy, AI researchers!!!
It is hard to overstate how low the level of this conference seems to be at first sight. Truly sad.
By the rich founder of Mt. Gox and Ripple, Jed McCaleb.
Obelisk is the Artificial General Intelligence laboratory at Astera. We are focused on the following problems: How does an agent continuously adapt to a changing environment and incorporate new information? In a complicated stochastic environment with sparse rewards, how does an agent associate rewards with the correct set of actions that led to those rewards? How does higher level planning arise?
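The second of those questions is the classic credit assignment problem under sparse rewards. As a minimal Python illustration of the naive baseline (not anything Obelisk has published), a single end-of-episode reward can be spread backwards with a discounted return, which is roughly what simple policy gradient methods do:

```python
# Minimal illustration of sparse-reward credit assignment: an episode
# yields one reward at the very end, and each earlier action gets credit
# via a discounted return, REINFORCE-style. Purely illustrative.

def discounted_returns(rewards, gamma=0.99):
    """Return G_t = sum_{k>=t} gamma^(k-t) * r_k for every step t."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Sparse reward: nothing until the final step of a 10-step episode.
rewards = [0.0] * 9 + [1.0]
print(discounted_returns(rewards))
# Earlier actions get exponentially less credit, which is exactly the
# crude heuristic that "associate rewards with the correct set of
# actions" asks us to do better than.
```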
Marek Rosa's plaything.
Video 1. AI Game - LLM-driven NPCs that can talk by Marek Rosa (2023) Source. Not the most amazing demo, but the idea is there. Seems to be a preview for AI People. The previous working title seems to have been AI Odyssey.
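The core of the "LLM-driven NPC" idea is tiny: keep a persona plus a rolling dialogue history and feed them to whatever chat model you have access to. The Python sketch below uses a placeholder send_to_llm function rather than any real game or vendor API, so it only shows the shape of the prompt loop:

```python
# Sketch of an LLM-driven NPC: persona + dialogue history -> prompt.
# send_to_llm is a stub; plug in any chat completion backend.

def build_npc_prompt(persona, history, player_line):
    lines = [f"You are {persona}. Stay in character and answer briefly."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Player: {player_line}")
    lines.append("NPC:")
    return "\n".join(lines)

def send_to_llm(prompt):
    # Placeholder: no real model is called here.
    return "(NPC reply would come from the model)"

history = []
persona = "a grumpy blacksmith in a medieval village"
player_line = "Can you repair my sword?"
prompt = build_npc_prompt(persona, history, player_line)
reply = send_to_llm(prompt)
history.append(("Player", player_line))
history.append(("NPC", reply))
print(prompt)
```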
It is a bit hard to decide if those people are serious or not. Sometimes it feels scammy, but sometimes it feels fun and right!
Particularly concerning is the fact that they are not a not-for-profit entity, and it is hard to understand how they might make money.
Charles Simon, the founder, is pretty focused on how natural neurons work vs artificial neural network models. He has some good explanations of that, and one major focus of the project is their semi open source spiking neuron simulator BrainSimII. While Ciro Santilli believes that there might be insight in that, he also doubts whether certain modules of the brain wouldn't be more easily and performantly coded directly in regular programming languages.
FutureAI appears to be Charles' retirement-for-fun project; he is likely independently wealthy. Well done.
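As a rough idea of what a spiking neuron simulator has to track per neuron (membrane potential, threshold, spike times), here is a toy leaky integrate-and-fire model in Python. This is a generic textbook model with made-up parameters, not BrainSimII's actual implementation:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, accumulates input current, and fires/resets at threshold.
# Contrast with the single weighted sum of a standard ANN unit.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak toward rest, plus injected current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # fire and reset
    return spikes

# Constant drive: the neuron spikes periodically.
print(simulate_lif([0.08] * 100))
```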
Video 1. Creativity and AGI by Charles Simon's at AGI-22 (2022) Source. Sounds OK!
Video 2. Machine Learning Is Not Like Your Brain by Future AI (2022) Source. Contains some BrainSimII demos.
The video from futureai.guru/technologies/brian-simulator-ii-open-source-agi-toolkit/ shows a demo of the possibly non-open-source version. They have a GUI neuron viewer and editor, which is kind of cool.
Video 1. Machine Learning Is Not Like Your Brain by Charles Simon (2022) Source.
Not having a manipulator claw is a major issue with this one.
But they also have a co-simulation focus, which is a bit of a win.
Basically it looks like the dude got enough money after selling some companies, and now he's doing cooler stuff without much need for money. Not bad.