Source: wikibot/bfloat16-floating-point-format

= Bfloat16 floating-point format
{wiki=Bfloat16_floating-point_format}

Bfloat16 (brain floating point) is a 16-bit floating-point format used primarily in machine learning for its efficiency in computation, memory, and bandwidth. It is a truncated version of the 32-bit IEEE 754 single-precision format (float32): it keeps the sign bit and all 8 exponent bits, but only the top 7 bits of the mantissa. Because the exponent width is unchanged, bfloat16 preserves float32's dynamic range while halving storage, at the cost of precision (roughly 2 to 3 significant decimal digits). This trade-off suits the training and inference workloads of neural networks, which tolerate reduced precision but benefit from wide dynamic range.
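The truncation relationship can be made concrete with a minimal sketch in Python. The function names are illustrative, not part of any library API, and this version simply drops the low 16 bits; hardware implementations typically round to nearest-even instead of truncating.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its upper 16 bits, giving a bfloat16 bit pattern."""
    # Reinterpret the float as its 32-bit IEEE 754 pattern.
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    # Keep the sign bit, the 8 exponent bits, and the top 7 mantissa bits.
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 by zero-padding the mantissa."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

b = float32_to_bfloat16_bits(3.14159265)
print(f"bfloat16 bits: 0x{b:04X}")   # 0x4049
print(bfloat16_bits_to_float32(b))   # 3.140625
```

Note how pi round-trips to 3.140625: the exponent survives exactly, and only the 7 remaining mantissa bits limit the result, illustrating the range-over-precision trade-off described above.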