Fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services.
What’s New
MLPerf Training
The MLPerf training benchmark suite measures how fast a system can train ML models. To learn more about it, read the overview, read the training rules, or consult the reference implementation of each benchmark. If you intend to submit results, please read the submission rules carefully before you start work. The v0.6 training results are available.
MLPerf Inference
The MLPerf inference benchmark measures how fast a system can perform ML inference using a trained model. It is intended for a wide range of systems, from mobile devices to servers. To learn more about it, read the overview, read the inference rules, or consult the reference implementation of each benchmark. If you intend to submit results, please read the submission rules carefully before you start work. The v0.5 inference results are available.
Get Involved
About
MLPerf's mission is to build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services. MLPerf was founded in February 2018 as a collaboration of companies and researchers from educational institutions. MLPerf is presently led by volunteer working group chairs. MLPerf could not exist without the open source code and publicly available datasets that others have generously contributed to the community.
Support
  • “AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software.” -- Andrew Ng, CEO of Landing AI
  • “Good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate.” -- David Patterson, Author of Computer Architecture: A Quantitative Approach
  • “We are glad to see MLPerf grow from just a concept to a major consortium supported by a wide variety of companies and academic institutions. The results released today will set a new precedent for the industry to improve upon to drive advances in AI.” -- Haifeng Wang, Senior Vice President of Baidu
  • “Open standards such as MLPerf and Open Neural Network Exchange (ONNX) are key to driving innovation and collaboration in machine learning across the industry.” -- Bill Jia, VP, AI Infrastructure at Facebook
  • “MLPerf can help people choose the right ML infrastructure for their applications. As machine learning continues to become more and more central to their business, enterprises are turning to the cloud for the high performance and low cost of training of ML models.” -- Urs Hölzle, Senior Vice President of Technical Infrastructure, Google
  • “We believe that an open ecosystem enables AI developers to deliver innovation faster. In addition to existing efforts through ONNX, Microsoft is excited to participate in MLPerf to support an open and standard set of performance benchmarks to drive transparency and innovation in the industry.” -- Eric Boyd, CVP of AI Platform, Microsoft
  • “MLPerf demonstrates the importance of innovating in scale-up computing as well as at all levels of the computing stack — from hardware architecture to software and optimizations across multiple frameworks.” -- Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA
Companies
Researchers from
Contact
General questions: info@mlperf.org
Technical questions: please use GitHub issues
Join the announce list