About MLPerf


MLPerf's mission is to build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services. We believe that a widely accepted benchmark suite will benefit the entire community, including researchers, developers, hardware manufacturers, builders of machine learning frameworks, cloud service providers, application providers, and end users.

Our goals include:

  • Accelerate progress in ML via fair and useful measurement
  • Serve both the commercial and research communities
  • Enable fair comparison of competing systems while encouraging innovation that advances the state of the art of ML
  • Enforce replicability to ensure reliable results
  • Keep benchmarking effort affordable so all can participate

We are motivated in part by the Standard Performance Evaluation Corporation (SPEC) benchmarks for general-purpose computing and the Transaction Processing Performance Council (TPC) benchmarks for database systems, which drove rapid, measurable performance improvements in both fields for decades starting in the 1980s.

We aim to engage in “agile benchmarking.” Agile programming tells us that frequent feedback works better than heavyweight planning. Applying this approach to benchmarking, we will rapidly iterate based on feedback from users in the ML community rather than try to anticipate all potential issues in advance.


MLPerf began in February 2018 with a series of meetings between engineers and researchers from Baidu, Google, Harvard University, Stanford University, and the University of California, Berkeley. MLPerf launched the Training benchmark suite on May 2, 2018 and published the first Training results, including results from Google, Intel, and NVIDIA, on December 12, 2018. MLPerf launched the Inference benchmark suite on June 24, 2019.


MLPerf would not be possible without the efforts of many people.

General chair:

Google: Peter Mattson

Working group chairs:

ARM: Colin Osborne
Berkeley Lab: Steve Farrell
Brookhaven National Lab: Abid Muslim
Cerebras: Andy Hock
Cadence: Debajyoti Pal
Cisco: Debo Dutta, Xinyuan Huang
Cray: Jacob Balma
Facebook: Carole-Jean Wu
Google: Victor Bittorf, Peter Mattson
Harvard University: Vijay Janapa Reddi
HPE: Sergey Serebryakov
In-Q-Tel: Ankur Ankur
Intel: Christine Cheng, Hanlin Tang
Landing.AI: Greg Diamos
MediaTek: Bing Yu
Microsoft: Sarah Bird, Guenther Schmuelling
Myrtle.AI: Peter Baldwin, Sam Davis
NVIDIA: Jonah Alben, Jonathan Cohen
Real World Insights: David Kanter
Stanford University: Cody Coleman
Synopsys: Jeffery Liao
University of Toronto: Gennady Pekhimenko


Founders:

Baidu: Greg Diamos, Siddharth Goyal, Sharan Narang
Google: Peter Mattson, Karmel Allison, Kathy Wu, Cliff Young
Harvard University: Gu-Yeon Wei, Udit Gupta, Lillian Pentecost, Brandon Reagen
Stanford University: Peter Bailis, Matei Zaharia, Cody Coleman, Daniel Kang, Deepak Narayanan
University of California, Berkeley: David Patterson, Ion Stoica

Working group chair alumni:

Amazon: Nelis Franken
MediaTek: David Lee
NVIDIA: Paulius Micikevicius

Other contributors:

List forthcoming.