OurBigBook About$ Donate
 Sign in+ Sign up
Bfloat16 floating-point format

Bfloat16 (Brain Floating Point Format) is a 16-bit floating-point representation used primarily in machine learning and deep learning for its efficiency in computation and memory usage. It consists of 1 sign bit, 8 exponent bits, and 7 explicit mantissa bits: it keeps the full 8-bit exponent of IEEE 754 single precision (float32) while truncating the mantissa, so it retains float32's dynamic range at half the storage and memory bandwidth, at the cost of precision. It is particularly popular in neural-network training and inference workloads.
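
Because bfloat16 is simply the upper half of a float32, conversion is cheap. As a minimal, self-contained Python sketch (my own illustration; the function names are made up and not from any library), here is the conversion using the round-to-nearest-even bias trick that hardware converters commonly use:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float to bfloat16, returned as its 16-bit integer encoding.

    bfloat16 is the top 16 bits of an IEEE 754 binary32 value:
    1 sign bit, 8 exponent bits, 7 mantissa bits. This sketch rounds the
    discarded low 16 bits to nearest-even; NaN inputs are not special-cased,
    which production converters would handle before rounding.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw float32 bits
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)          # nearest-even tie-break
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen a bfloat16 encoding to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

for x in [1.0, 3.14159265, 1.0e-3, 65504.0]:
    b = float32_to_bfloat16_bits(x)
    print(f"{x:>12.8g} -> 0x{b:04X} -> {bfloat16_bits_to_float32(b):.8g}")
```

Running the loop shows the effect of dropping mantissa bits: for example, 3.14159265 comes back as 3.140625, since only 8 significant bits of mantissa (7 explicit plus the implicit leading 1) survive the conversion.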
