Ubuntu HOWTO by Ciro Santilli
Supervised and unsupervised learning by Ciro Santilli
Supervised learning by Ciro Santilli
k-nearest neighbors algorithm by Ciro Santilli
One of the simplest classification algorithms one can think of: just see what kind of point your new point seems to be closest to, and say it is also of that type! Then it is just a question of defining "close".
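A minimal sketch in Python (our own illustration, with made-up function and variable names, not any particular library's API): classify a new point by majority vote among the k nearest training points, with "close" taken to mean Euclidean distance.
import math
from collections import Counter

def knn_classify(train, new_point, k=3):
    # train: list of (point, label) pairs.
    # Sort the training points by Euclidean distance to new_point,
    # keep the k closest, and return their most common label.
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], new_point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "red"), ((0, 1), "red"), ((5, 5), "blue"), ((5, 6), "blue")]
print(knn_classify(train, (1, 1)))  # red: the closest points are mostly red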
Solving quadratic equations using continued fractions is a method that approximates the solutions of these equations by expanding them as continued fractions. Quadratic equations typically take the form: \[ ax^2 + bx + c = 0 \] where \(a\), \(b\), and \(c\) are coefficients, and \(x\) is the variable we want to solve for.
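As an illustrative instance (our own example, not from the original text): take \(x^2 - x - 1 = 0\), i.e. \(a = 1\), \(b = -1\), \(c = -1\). Rearranging it as \(x = 1 + 1/x\) and substituting the right-hand side into itself repeatedly yields the continued fraction \(1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}\), whose value is the positive root, the golden ratio. Iterating this in Python:
x = 1.0
for _ in range(30):
    x = 1 + 1 / x  # evaluate one more level of 1 + 1/(1 + 1/(1 + ...))
print(x)                 # ~1.6180339887
print((1 + 5**0.5) / 2)  # the positive root of x^2 - x - 1 = 0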
A relation \( R \) on a set is called a transitive relation if, for all elements \( a, b, c \) in that set, whenever \( a \) is related to \( b \) (denoted \( aRb \)) and \( b \) is related to \( c \) (denoted \( bRc \)), then \( a \) must also be related to \( c \) (denoted \( aRc \)).
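A quick sketch in Python to make the definition concrete (our own example): represent a relation on a finite set as a set of ordered pairs and check every chain \(aRb\), \(bRc\).
def is_transitive(relation):
    # For every aRb and bRc in the relation, aRc must also be present.
    return all((a, d) in relation
               for (a, b) in relation
               for (c, d) in relation
               if b == c)

print(is_transitive({(1, 2), (2, 3), (1, 3)}))  # True, e.g. "less than" on {1, 2, 3}
print(is_transitive({(1, 2), (2, 3)}))          # False: (1, 3) is missing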
Validation data set by Ciro Santilli
Training and inference by Ciro Santilli
This is the first thing you have to know about supervised learning:
  • training: learning good values for the model's parameters from a dataset of example inputs paired with their known outputs
  • inference: applying the already-trained model to new inputs to produce predictions
Both of those already have hardware acceleration available as of the 2010s.
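A minimal sketch of the two phases, assuming PyTorch is installed (the toy model \(y = wx\) and all names are our own illustration):
import torch

# Training: learn the parameter w of y = w * x from labeled examples.
x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2.0 * x  # labels generated by the "true" w = 2
w = torch.randn(1, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)
for _ in range(100):
    loss = ((x * w - y) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: just apply the trained parameter to a new input.
with torch.no_grad():
    print(w * 4.0)  # ~8.0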
Training (ML) by Ciro Santilli
Symbolic artificial intelligence by Ciro Santilli
Neuro-symbolic AI by Ciro Santilli
A term made and pushed by IBM, but one that matches Ciro Santilli's general view of how we should move forward towards AGI.
Ciro's motivation/push for this can be seen e.g. at: Ciro's 2D reinforcement learning games.
Neural network by Ciro Santilli
Residual neural network by Ciro Santilli
Interesting layer-skip ("residual connection") architecture.
Apparently it destroyed the ImageNet 2015 competition and became very famous as a result.
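A minimal sketch of the skip connection, assuming PyTorch (a real ResNet block also has batch normalization, omitted here; all names are our own):
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + y)  # the skip: add the input back to the output

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 8, 8)).shape)  # torch.Size([1, 64, 8, 8])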
ResNet variant by Ciro Santilli
ResNet v1 vs v1.5 by Ciro Santilli
catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch explains:
The difference between v1 and v1.5 is that, in the bottleneck blocks which requires downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution.
This difference makes ResNet50 v1.5 slightly more accurate (~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec).
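A simplified sketch of where that stride sits, assuming PyTorch (real bottleneck blocks also carry batch normalization, ReLUs and a projection shortcut, all omitted here; the function is our own illustration, not NVIDIA's code):
import torch.nn as nn

def bottleneck_convs(cin, cmid, cout, v1_5=True):
    # v1: stride 2 in the first 1x1 convolution; v1.5: stride 2 in the 3x3.
    s1, s3 = (1, 2) if v1_5 else (2, 1)
    return nn.Sequential(
        nn.Conv2d(cin, cmid, kernel_size=1, stride=s1),
        nn.Conv2d(cmid, cmid, kernel_size=3, stride=s3, padding=1),
        nn.Conv2d(cmid, cout, kernel_size=1),
    )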
Convolutional neural network by Ciro Santilli
CNN convolution kernels are also learnt by Ciro Santilli
CNN convolution kernels are not hardcoded. They are learnt and optimized via backpropagation. You just specify their size! For example, in PyTorch you'd do just:
import torch.nn as nn
nn.Conv2d(1, 6, kernel_size=(5, 5))
as used for example at: activatedgeek/LeNet-5.
This can also be inferred from: stackoverflow.com/questions/55594969/how-to-visualise-filters-in-a-cnn-with-pytorch where we see that the kernels are not the perfectly regular patterns you'd expect from something hand-coded.
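A small sketch confirming this, assuming PyTorch: the kernels live in conv.weight as ordinary learnable parameters.
import torch.nn as nn

conv = nn.Conv2d(1, 6, kernel_size=(5, 5))
print(conv.weight.shape)          # torch.Size([6, 1, 5, 5]): 6 kernels of 5x5
print(conv.weight.requires_grad)  # True: updated by backpropagation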
LeNet by Ciro Santilli

Pinned article: ourbigbook/introduction-to-the-ourbigbook-project

Welcome to the OurBigBook Project! Our goal is to create the perfect publishing platform for STEM subjects, and get university-level students to write the best free STEM tutorials ever.
Everyone is welcome to create an account and play with the site: ourbigbook.com/go/register. We believe that students themselves can write amazing tutorials, but teachers are welcome too. You can write about anything you want; it doesn't have to be STEM or even educational. Silly test content is very welcome and you won't be penalized in any way. Just keep it legal!
We have a few killer features:
  1. topics: topics group articles by different users with the same title, e.g. here is the topic for the "Fundamental Theorem of Calculus" ourbigbook.com/go/topic/fundamental-theorem-of-calculus
    Articles of different users are sorted by upvote within each topic page. This feature is a bit like:
    • a Wikipedia where each user can have their own version of each article
    • a Q&A website like Stack Overflow, where multiple people can give their views on a given topic, and the best ones are sorted by upvote. Except you don't need to wait for someone to ask first, and any topic goes, no matter how narrow or broad
    This feature makes it possible for readers to find better explanations of any topic created by other writers. And it allows writers to create an explanation in a place that readers might actually find it.
    Figure 1. Screenshot of the "Derivative" topic page. View it live at: ourbigbook.com/go/topic/derivative
  2. local editing: you can store all your personal knowledge base content locally in a plaintext markup format that can be edited locally and published either to OurBigBook.com, to get the multi-user features, or as HTML files to a static website.
    This way you can be sure that even if OurBigBook.com were to go down one day (which we have no plans to do as it is quite cheap to host!), your content will still be perfectly readable as a static site.
    Figure 5. You can also edit articles on the Web editor without installing anything locally.
    Video 3. Edit locally and publish demo. Source. This shows editing OurBigBook Markup and publishing it using the Visual Studio Code extension.
  3. Infinitely deep tables of contents:
    Figure 6. Dynamic article tree with infinitely deep table of contents.
    Descendant pages can also show up as the toplevel page, e.g.: ourbigbook.com/cirosantilli/chordate-subclade
All our software is open source and hosted at: github.com/ourbigbook/ourbigbook
Further documentation can be found at: docs.ourbigbook.com
Feel free to reach out to us for any help or suggestions: docs.ourbigbook.com/#contact