= MLPerf
{c}
https://mlcommons.org/en/ Their homepage is not amazingly organized, but it does the job.
Benchmark focused on <deep learning>. It has two parts:
* <training (ML)>: produces a trained network
* <inference (ML)>: uses the trained network
Furthermore, in the closed category a specific network model is fixed for each benchmark: the benchmark goes beyond just specifying the dataset.
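To make the training/inference split concrete, here is a minimal sketch of the two phases, assuming PyTorch and a toy model; none of this comes from the MLPerf reference implementations:
``
# Minimal sketch of the training/inference split, assuming PyTorch.
# The model and data are toy placeholders, not an MLPerf benchmark.
import torch

# Training: produces a trained network from data.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(8, 4)           # toy inputs
y = torch.randint(0, 2, (8,))   # toy labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: uses the trained network on new inputs.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4)).argmax()
print(prediction)
``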
Results can be seen e.g. at:
* training: https://mlcommons.org/en/training-normal-21/
* inference: https://mlcommons.org/en/inference-datacenter-21/
And there are also separate repositories for each:
* https://github.com/mlcommons/inference
* https://github.com/mlcommons/training
E.g. on https://mlcommons.org/en/training-normal-21/ we can see what the benchmarks are:
|| Dataset
|| Model

| <ImageNet>
| <ResNet>

| KiTS19
| 3D U-Net

| <Open Images dataset>[OpenImages]
| RetinaNet

| <COCO dataset>
| Mask R-CNN

| LibriSpeech
| RNN-T

| Wikipedia
| BERT

| 1TB Clickthrough
| DLRM

| <Go (game)>
| <MiniGo>
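As an illustration of what fixing the model means in practice, here is a minimal inference sketch for the ImageNet/ResNet row above, assuming torchvision's ResNet-50 and a fake input image; the actual benchmark uses the official MLPerf reference implementation and real ImageNet data:
``
# Hypothetical sketch: inference with a ResNet-50, the model fixed by the
# ImageNet row of the table above. Uses torchvision, not the official
# MLPerf reference implementation.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # untrained, for illustration
model.eval()
with torch.no_grad():
    # One fake 224x224 RGB image instead of a real ImageNet sample.
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.argmax())  # predicted ImageNet class index
``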