FFmpeg filter graph Updated 2025-07-16
Filter graphs are a thing of great beauty. What an amazingly obscure domain-specific language, but one that can produce striking results with very little code!!!
ffplay -autoexit -nodisp -f lavfi -i '
sine=frequency=500[a];
sine=frequency=1000[b];
[a][b]amerge, atrim=end=2
'
which creates a graph:
                              +--------+
[sine=frequency=500]--->[a]-->|        |
                              | amerge |-->[atrim]-->[output]
[sine=frequency=1000]-->[b]-->|        |
                              +--------+
and plays 500 Hz on the left channel and 1000 Hz on the right channel for 2 seconds.
So we see the following syntax patterns:
  • sine, amerge and atrim are filters
  • sine=frequency=500: the first = says "arguments follow"
    • frequency=500 sets the frequency argument of the sine filter
    • multiple arguments are separated with colons, e.g. sine=frequency=500:duration=2, see the example after this list
  • ;: separates statements
  • [a], [b]: give a name to an edge
  • ,: creates unnamed edge between filters that have one input and one output
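Putting those patterns together, here is a minimal sketch that uses colon-separated arguments, named edges and an unnamed comma edge in a single graph (the concrete frequency, duration and volume values are arbitrary choices, not from the example above):
ffplay -autoexit -nodisp -f lavfi -i '
sine=frequency=500:duration=2[a];
sine=frequency=1000:duration=2[b];
[a][b]amerge, volume=0.5
'
Here duration is set directly on each sine source, so the atrim from the first example is no longer needed, and the volume filter just halves the amplitude to show the comma chaining.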
A list of all filters can be obtained with:
ffmpeg -filters
and parameters for a single filter can be obtained with:
ffmpeg --help filter=sine
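The filter list is long, so it is convenient to grep it, e.g. a trivial sketch to find every filter with "sine" in its name or description:
ffmpeg -hide_banner -filters | grep -i sine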
Related question: stackoverflow.com/questions/69251087/in-ffmpeg-command-line-how-to-show-all-filter-settings-and-their-parameters-bef
TODO dump graph to ASCII art? trac.ffmpeg.org/wiki/FilteringGuide#Visualizingfilters mentions a -dumpgraph option, but haven't managed to use it yet.
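Based on the lavfi input device documentation, which lists a dumpgraph option that "dumps graph to stderr", the invocation is presumably something like the following, but this is an untested guess:
ffplay -autoexit -nodisp -f lavfi -dumpgraph 1 -i 'sine=frequency=500'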
ffplay Updated 2025-07-16
Awesome tool to quickly preview stuff without generating output files. Unfortunately it doesn't support all options that the ffmpeg CLI supports, e.g. ffplay multiple input files. One day, one day.
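One partial workaround is to open the extra files from inside a lavfi graph with the movie source filter, e.g. to play two videos side by side, a sketch assuming hypothetical files left.mp4 and right.mp4 with matching heights and pixel formats:
ffplay -f lavfi -i 'movie=left.mp4[l];movie=right.mp4[r];[l][r]hstack'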
These come with pre-installed drivers, so e.g. nvidia-smi just works on them out of the box, tested on a g5.xlarge, which has an Nvidia A10G GPU. A good choice as a starting point for deep learning experiments.
Craig Steven Wright meme Updated 2025-07-16
TODO find the Shroud of Turin one.
Web portal Updated 2025-07-16
Open X-Embodiment Updated 2025-07-16
GitHub describes the input quite well:
The model takes as input a RGB image from the robot workspace camera and a task string describing the task that the robot is supposed to perform.
What task the model should perform is communicated to the model purely through the task string. The image communicates the current state of the world to the model: e.g. if the model runs at three hertz, then every 333 milliseconds we feed the latest RGB image from a robot workspace camera into the model to obtain the next action to take.
TODO: how is the scenario specified?
TODO: any simulation integration to it?
https://web.archive.org/web/20250209172539if_/https://raw.githubusercontent.com/google-deepmind/open_x_embodiment/main/imgs/teaser.png
useEffect Updated 2025-07-16
This should only be used for things that happen outside of the state that React tracks, e.g. window event handlers.
