Fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services.
- 7/29/20: MLPerf Training v0.7 results are available.
- 11/6/19: MLPerf Inference v0.5 results are available.
- 7/10/19: MLPerf Training v0.6 results are available.
- 6/24/19: MLPerf Inference v0.5 launched. Submissions due 10/11. Results public 11/6.
- 2/14/19: MLPerf Training v0.6 launched. Results due 5/24.
- 12/12/18: MLPerf Training v0.5 results are available.
- 5/2/18: MLPerf Training v0.5 launched. Results due 11/9.
The MLPerf training benchmark suite measures how fast a system can train ML models. To learn more about it, read the overview, read the training rules, or consult the reference implementation of each benchmark. If you intend to submit results, please read the submission rules carefully before you start work. The v0.7 training results are available.
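The headline metric of the training suite is time-to-train: the wall-clock time a system needs to train a model to a fixed quality target. The following is a minimal sketch of that idea (not the actual MLPerf harness; the function names and loop are illustrative):

```python
import time

def time_to_train(train_one_epoch, evaluate, target_quality, max_epochs=100):
    """Train until the model reaches the target quality metric and
    return the elapsed wall-clock seconds -- the time-to-train idea
    behind the MLPerf training benchmarks (illustrative only)."""
    start = time.monotonic()
    for _ in range(max_epochs):
        train_one_epoch()                      # one pass over the training data
        if evaluate() >= target_quality:       # e.g. accuracy on a validation set
            return time.monotonic() - start
    raise RuntimeError("target quality not reached within max_epochs")
```

In the real benchmarks, the quality target (for example, a top-1 accuracy for image classification) is fixed per benchmark by the training rules, so submissions compete purely on how quickly they reach it.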
The MLPerf inference benchmark measures how fast a system can perform ML inference using a trained model. The MLPerf inference benchmark is intended for a wide range of systems from mobile devices to servers. To learn more about it, read the overview, read the inference rules, or consult the reference implementation of each benchmark. If you intend to submit results, please read the submission rules carefully before you start work. The v0.5 inference results are available.
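Inference results are typically reported as throughput (queries per second) and tail latency. Below is a minimal sketch of measuring both for a single-stream-style workload; it is illustrative only, not the MLPerf LoadGen, and the `predict` callable is a stand-in for a real trained model:

```python
import time

def measure_inference(predict, samples, percentile=0.99):
    """Time each query and report throughput plus a tail-latency
    percentile -- the two headline numbers for inference systems
    (illustrative sketch, not the official load generator)."""
    latencies = []
    start = time.monotonic()
    for sample in samples:
        t0 = time.monotonic()
        predict(sample)                      # run one inference query
        latencies.append(time.monotonic() - t0)
    total = time.monotonic() - start
    latencies.sort()
    idx = min(int(percentile * len(latencies)), len(latencies) - 1)
    return {
        "throughput_qps": len(latencies) / total,
        "tail_latency_s": latencies[idx],
    }
```

The official benchmark defines distinct scenarios (such as single-stream, multi-stream, server, and offline) with precise rules for how queries are issued and timed.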
MLPerf welcomes everyone who is interested in the performance of ML systems! You can:
MLPerf's mission is to build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services. MLPerf was founded in February 2018 as a collaboration of companies and researchers from educational institutions. MLPerf is presently led by volunteer working group chairs. MLPerf could not exist without the open source code and publicly available datasets that others have generously contributed to the community.
- “AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software.” -- Andrew Ng, CEO of Landing AI
- “Good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate.” -- David Patterson, Author of Computer Architecture: A Quantitative Approach
- “We are glad to see MLPerf grow from just a concept to a major consortium supported by a wide variety of companies and academic institutions. The results released today will set a new precedent for the industry to improve upon to drive advances in AI.” -- Haifeng Wang, Senior Vice President of Baidu
- “Open standards such as MLPerf and Open Neural Network Exchange (ONNX) are key to driving innovation and collaboration in machine learning across the industry.” -- Bill Jia, VP, AI Infrastructure at Facebook
- “MLPerf can help people choose the right ML infrastructure for their applications. As machine learning continues to become more and more central to their business, enterprises are turning to the cloud for the high performance and low cost of training of ML models.” -- Urs Hölzle, Senior Vice President of Technical Infrastructure, Google
- “We believe that an open ecosystem enables AI developers to deliver innovation faster. In addition to existing efforts through ONNX, Microsoft is excited to participate in MLPerf to support an open and standard set of performance benchmarks to drive transparency and innovation in the industry.” -- Eric Boyd, CVP of AI Platform, Microsoft
- “MLPerf demonstrates the importance of innovating in scale-up computing as well as at all levels of the computing stack — from hardware architecture to software and optimizations across multiple frameworks.” -- Ian Buck, Vice President and General Manager of Accelerated Computing at NVIDIA
Hewlett Packard Enterprise
Universidad de Sonora
University of Arkansas, Little Rock
University of California, Berkeley
University of California, Santa Cruz
University of Illinois, Urbana-Champaign
University of Minnesota
University of Texas, Austin
University of Toronto