MLPerf Training v0.6 Results
July 10th, 2019
You may wish to read the Training Overview to better understand the results.
To see the earlier MLPerf Training v0.5 results, go here.
MLPerf v0.6 Results Table Explanation
The MLPerf results table is organized first by Division and then by Category. MLPerf has two divisions. The Closed division is intended to compare hardware platforms or software frameworks “apples-to-apples” and requires using the same model and optimizer as the reference implementation. The Open division is intended to foster faster models and optimizers and allows any ML approach that can reach the target quality.

MLPerf divides benchmark results into four Categories based on availability:
Available In Cloud systems are available for rent in the cloud.
Available On Premise systems contain only components that are available for purchase.
Preview systems must be submittable as Available In Cloud or Available On Premise in the next submission round.
Research systems contain either experimental hardware or software, or available components used at experimentally large scale.
Each row in the results table is a set of results produced by a single submitter using the same software stack and hardware platform. Each row contains the following information:
Submitter: The organization that submitted the results.
System: General system description.
Processor and count: The type and number of CPUs used, if CPUs perform the majority of ML compute.
Accelerator and count: The type and number of accelerators used, if accelerators perform the majority of ML compute.
Software: The ML framework and primary ML hardware library used.
Benchmark Results: The benchmark results as described above. By default, benchmark results are presented as speedups relative to a Pascal P100 (see the sketch after this list); the results page enables switching to absolute times.
Details: A link to the metadata for the submission.
Code: A link to the code for the submission.
Notes: Arbitrary notes from the submitter.
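Since the table can display either speedups or absolute times, the conversion between the two is a single division. Below is a minimal Python sketch of that arithmetic; the speedup helper and all numbers in it are hypothetical placeholders for illustration, not values from the results table or actual MLPerf code.

    # Minimal sketch (assumed, not MLPerf code) of how a speedup column can
    # be derived from absolute benchmark times. All numbers here are
    # hypothetical placeholders, not values from the results table.

    def speedup(reference_minutes: float, submission_minutes: float) -> float:
        # Speedup over the reference time: a value of 2.0 means twice as fast.
        return reference_minutes / submission_minutes

    # Hypothetical times for a single benchmark, in minutes.
    reference_time = 1000.0   # reference implementation on the baseline system
    submission_time = 12.5    # submitted system on the same benchmark

    print(f"{speedup(reference_time, submission_time):.1f}x")  # -> 80.0x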