- www.quora.com/Will-Google-open-source-AlphaGo Will Google open source AlphaGo?
- www.nature.com/articles/nature16961 Mastering the game of Go with deep neural networks and tree search by Silver et al. (2016), published without source code
AlphaGo Zero cheat sheet by David Foster (2017). Source.
Generalization of AlphaGo Zero that plays Go, chess and shogi.
- www.science.org/doi/10.1126/science.aar6404 A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play by Silver et al. (2018), published without source code
- www.quora.com/Is-there-an-Open-Source-version-of-AlphaZero-specifically-the-generic-game-learning-tool-distinct-from-AlphaGo
www.quora.com/Which-chess-engine-would-be-stronger-Alpha-Zero-or-Stockfish-12/answer/Felix-Zaslavskiy explains that AlphaZero beat Stockfish 8. But then Stockfish was developed further and would start to beat it. We know this because, although AlphaZero was closed source, the trained artificial neural network was released, so it was possible to replay AlphaZero at that particular stage of its training.
The project seems to have kind of died circa 2020, a shame. Likely their funding ran out. The domain is dead as of 2023; the last archive, from 2022 (web.archive.org/web/20220331022932/http://gvgai.net/), is marked as funded by DeepMind. Researchers really should use university/GitHub domain names!
Similar goals to Ciro's 2D reinforcement learning games, but they were focusing mostly on discrete games.
They have some source at: github.com/GAIGResearch/GVGAI TODO review
From QMUL Game AI Research Group:
- Simon M. Lucas: gaigresearch.github.io/members/Simon-Lucas, principal investigator
- Diego Perez Liebana: www.linkedin.com/in/diegoperezliebana/
- Raluca D. Gaina: www.linkedin.com/in/raluca-gaina-347518114/ from Queen Mary
From other universities, TODO check:
- Ahmed Khalifa
- Jialin Liu
so what's the point of "Open" in the name anymore??
- www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ "The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism."
- archive.ph/wXBtB How OpenAI Sold its Soul for $1 Billion
- www.reddit.com/r/GPT3/comments/n2eo86/is_gpt3_open_source/
www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of
The rise of powerful AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which.
The key takeaway is that setting an explicit value function for an AGI entity is a good way to destroy the world due to poor AI alignment. We are more likely to not destroy the world by creating an AI whose goal is to "do what humans want it to do", but in a way that it does not know beforehand what it is that humans want, and has to learn that from them. This approach appears to be known as reward modeling.
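As a rough illustration of the reward modeling idea (a toy sketch only, with all features and numbers made up, not how production systems implement it): the agent never sees the human's reward function directly, it only sees pairwise preferences between outcomes and fits a reward model to them, here with a Bradley-Terry style logistic model.
```python
import numpy as np

rng = np.random.default_rng(0)
true_human_reward = np.array([2.0, -1.0, 0.5])  # hidden from the agent

# Each possible outcome is described by a feature vector.
outcomes = rng.normal(size=(200, 3))

# Simulated human feedback: compare random pairs, prefer the higher true reward.
pairs = rng.integers(0, len(outcomes), size=(500, 2))
prefer_first = (outcomes[pairs[:, 0]] @ true_human_reward
                > outcomes[pairs[:, 1]] @ true_human_reward).astype(float)

# Fit a linear reward model by gradient ascent on the preference likelihood.
diff = outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-diff @ w))  # model's P(first outcome preferred)
    w += 0.01 * diff.T @ (prefer_first - p) / len(pairs)

# The learned reward should point roughly in the same direction as the hidden one.
print(w / np.linalg.norm(w))
print(true_human_reward / np.linalg.norm(true_human_reward))
```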
Some other cool ideas:
- a big thing that is missing for AGI in the 2010s is some kind of more hierarchical representation of the continuous input data of the world, e.g.:
- game theory can be seen as part of artificial intelligence that deals with scenarios where multiple intelligent agents are involved
- probability plays a crucial role in our everyday living, even though we don't think too much about it explicitly. He gives a very good example of the cost/risk tradeoffs of planning how to get to the airport to catch a plane. E.g.:
- economics, and notably the study of utility, is intrinsically linked to AI alignment
Good points:
- Post mortem connectome extraction with microtome
- the idea of a singleton, i.e. one centralized power, possibly AGI-based, that decisively takes over the planet/reachable universe
- the section "Opinions about the future of machine intelligence", which illustrates how AGI research has become a taboo in the early 21st century
The number of target dimensions D is a hyperparameter, and D = 2 and D = 3 are common choices when doing dataset exploration, as they can be easily visualized on a planar plot.
The mapping is done by projecting all points to a D-dimensional hyperplane. PCA is an algorithm for choosing this hyperplane and the coordinate system within this hyperplane.
The hyperplane choice is done as follows:
- the hyperplane will have origin at the mean point
- the first axis is picked along the direction of greatest variance, i.e. where points are the most spread out. Intuitively, if we pick an axis of small variation, that would be bad, because all the points are very close to one another on that axis, so it doesn't contain much information to help us differentiate the points.
- then we pick a second axis, orthogonal to the first one, and on the direction of second largest variance
- and so on until D orthogonal axes have been taken (see the code sketch just below)
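To make the procedure above concrete, here is a minimal NumPy sketch (a didactic toy that follows the mean-centering plus eigenvector route described above; real implementations such as scikit-learn's PCA typically go through the SVD instead, and all the data here is made up):
```python
import numpy as np

def pca(points, n_components):
    """Project points of shape (n_samples, n_features) onto the
    n_components orthogonal directions of greatest variance."""
    # The hyperplane has its origin at the mean point.
    centered = points - points.mean(axis=0)
    # The directions of greatest variance are the eigenvectors of the
    # covariance matrix with the largest eigenvalues.
    eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(eigenvalues)[::-1]         # sort by decreasing variance
    axes = eigenvectors[:, order[:n_components]]  # mutually orthogonal axes
    # Coordinates of each point in the new coordinate system.
    return centered @ axes

# Usage: 3D points that mostly vary along a single direction.
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 1)) * np.array([1.0, 2.0, 0.5]) \
         + 0.01 * rng.normal(size=(100, 3))
print(pca(points, 2).shape)  # (100, 2)
```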
www.sartorius.com/en/knowledge/science-snippets/what-is-principal-component-analysis-pca-and-how-it-is-used-507186 provides an OK-ish example with a concrete context. In there, each point is a country, and the input data is the consumption of different kinds of foods per year, e.g.:
- flour
- dry codfish
- olive oil
- sausage
so in this example, we would have input points in 4D.
Suppose that every country consumes the same amount of flour every year. Then, that number doesn't tell us much about which country each point represents (has the least variance), and the first PCA axes would basically never point anywhere near that direction.
Another cool thing is that PCA seems to automatically account for linear dependencies in the data, so it skips selecting highly correlated axes multiple times. For example, suppose that dry codfish and olive oil consumption are very high in Portugal and Spain, but very low in Germany and Poland. Therefore, the variation is very high in those two parameters, and contains a lot of information.
However, suppose that dry codfish consumption is also directly proportional to olive oil consumption. Because of this, it would be kind of wasteful if we selected both the dry codfish axis and the olive oil axis, since the information about codfish consumption already gives us the olive oil consumption. PCA apparently recognizes this, and instead picks the first axis at a 45 degree angle to both dry codfish and olive oil, and then moves on to something else for the second axis.
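To see this concretely, here is a tiny sketch with made-up numbers (the countries and consumption values are purely illustrative, not taken from the Sartorius article; scikit-learn is assumed to be available): the two perfectly correlated columns collapse into a single principal component, and the second component carries essentially no variance.
```python
import numpy as np
from sklearn.decomposition import PCA

# One row per country, columns = yearly consumption of [dry codfish, olive oil].
# Olive oil is exactly proportional to codfish by construction.
codfish = np.array([10.0, 9.0, 1.0, 2.0])  # Portugal, Spain, Germany, Poland
olive_oil = 2.0 * codfish
X = np.column_stack([codfish, olive_oil])

pca = PCA(n_components=2).fit(X)
print(pca.components_[0])             # first axis mixes both features, along (1, 2)
print(pca.explained_variance_ratio_)  # approximately [1.0, 0.0]
```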
In the case of machine learning in particular, a hyperparameter is not part of the training data set.
Hyperparameters can also be considered in domains outside of machine learning, e.g. the step size in a partial differential equation solver is entirely independent from the problem itself and could be considered a hyperparameter. One difference from machine learning however is that step size hyperparameters in numerical analysis are clearly better the smaller they are, just at a higher computational cost. In machine learning on the other hand, there is often an optimum somewhere, beyond which overfitting becomes excessive.
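A minimal sketch of that contrast, using the step size of a forward Euler ODE solver (an ODE rather than a PDE just to keep the example tiny; the numbers only illustrate the monotonic accuracy/cost tradeoff):
```python
import math

def euler_error(step):
    """Integrate y' = y, y(0) = 1 up to t = 1 with forward Euler and
    return (error at t = 1, number of steps taken)."""
    n_steps = round(1.0 / step)
    y = 1.0
    for _ in range(n_steps):
        y += step * y
    return abs(y - math.e), n_steps

for step in (0.1, 0.01, 0.001):
    error, cost = euler_error(step)
    print(f"step={step}: error={error:.6f}, steps={cost}")
# A smaller step size always reduces the error here, it just costs more steps.
# A machine learning hyperparameter such as model size or number of training
# epochs has no such monotonic rule: past some optimum, overfitting kicks in.
```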
Pinned article: Introduction to the OurBigBook Project
Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want, it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
Video 1. Intro to OurBigBook. Source.
We have two killer features:
- topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus": ourbigbook.com/go/topic/fundamental-theorem-of-calculus. Articles of different users are sorted by upvote within each topic page. This feature is a bit like:
- a Wikipedia where each user can have their own version of each article
- a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
Video 2. OurBigBook Web topics demo. Source.
- local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either:
- to OurBigBook.com to get awesome multi-user features like topics and likes
- as HTML files to a static website, which you can host yourself for free on many external providers like GitHub Pages, and remain in full control
This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
Figure 3. Visual Studio Code extension installation.
Figure 4. Visual Studio Code extension tree navigation.
Figure 5. Web editor. You can also edit articles on the Web editor without installing anything locally.
Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
Video 4. OurBigBook Visual Studio Code extension editing and navigation demo. Source.
- Infinitely deep tables of contents:
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact






