SPEC’s CPU2006 benchmarks on Azure


The CPU2006 suite by SPEC is a complex set of tools and tests that allows the user to measure the performance and throughput of a machine’s CPU. This is done by executing a variety of “tests”, which are executables of real-world scenarios (e.g., compression/decompression of files, complex quantum-chemical computations, 3D ray-tracing operations, among others).

These tests are divided into 2 groups: integer calculations and floating-point calculations; in turn, each of these groups can be run in 2 different modes:

  • base (which rates the CPU performing tasks sequentially; SPEC calls this the “speed” metric)
  • rate or throughput (which rates the CPU performing several tasks in parallel)

This gives us a total of 4 different metrics we can obtain from a single CPU. More information on SPEC and CPU2006 can be found on their website: https://www.spec.org/cpu2006.

Below are some instructions that you may find useful for getting these metrics on Azure Virtual Machines.

Prerequisites

In order to run these benchmarks, you need the following:

  • The SPEC CPU2006 V1.2 benchmarking suite, which you will need to acquire directly from the SPEC website: https://www.spec.org/cpu2006.
  • A machine (virtual or not) to run the benchmarks on (also known as the SUT, or System Under Test)
    • In our case, we’ll run this on an Azure Virtual Machine
  • The SUT’s OS can be Windows or Linux; the instructions below are for Linux (CentOS 7.1)
  • Unless the test binaries are provided for you, the SUT will need to have C, C++, and Fortran compilers installed (such as gcc, Intel’s C/Fortran compilers, Visual Studio’s, etc.)
    • The compiler chosen can have a big impact on the results obtained, assuming the compilers are configured properly and the right flags are passed from the config file.
  • A SPEC configuration file. Aside from the actual hardware the benchmarks are run on, this is the thing that most impacts the obtained results; there are a lot of flags that can be set in it, on top of the flags that can be passed to the compilers from it (which allow fine-tuning of how the tests will be compiled and run at a lower level). The easiest way to create a config file is to base it on one for a similar architecture from SPEC’s published results page.

Steps

These steps assume a CentOS 7.1 Linux VM; they’ll use the compilers from the gcc, gcc-c++, and gcc-gfortran packages:

 

Create and sign into the VM
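One way to do this is with the Azure CLI; the resource group, VM name, and CentOS image URN below are illustrative, so adjust them to whatever is available in your subscription:

    az group create --name spec-rg --location westus
    az vm create --resource-group spec-rg --name spec-sut \
        --image OpenLogic:CentOS:7.1:latest \
        --admin-username azureuser --generate-ssh-keys
    ssh azureuser@<public-ip-of-the-vm>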

 

Sudo up
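The remaining steps assume a root shell:

    sudo su -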

 

Install the compilers
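On CentOS 7.1 the three GNU compilers come from these yum packages:

    yum install -y gcc gcc-c++ gcc-gfortran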

 

Get the CPU2006 ISO onto the machine. In our case we hosted it in an Azure blob, so we fetch it with curl
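Something along these lines, where the storage account, container, and file name are placeholders for wherever you host the ISO:

    curl -O https://<storage-account>.blob.core.windows.net/<container>/cpu2006-1.2.iso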

 

Mount the iso
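A read-only loopback mount is enough:

    mkdir -p /mnt/cpu2006
    mount -o loop,ro cpu2006-1.2.iso /mnt/cpu2006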

 

Install SPEC CPU2006
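The suite ships with an install script; here we install into /usr/cpu2006, the path assumed by the rest of this article:

    cd /mnt/cpu2006
    ./install.sh -d /usr/cpu2006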

 

From the installation path, source shrc to set up the environment for the SPEC tools
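Note that shrc must be sourced (not executed) so that the environment changes persist in your shell:

    cd /usr/cpu2006
    . ./shrc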

 

Copy or create the config file under ‘/usr/cpu2006/config’. In our case, we pre-created the config file, uploaded it to an Azure blob, and then copied it to the VM with curl
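For instance (again, the blob URL and the config file name are placeholders):

    cd /usr/cpu2006/config
    curl -O https://<storage-account>.blob.core.windows.net/<container>/myazure.cfg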

 

After the config file is in place, the benchmarks can be run with the following command; the ‘all’ argument specifies that we want to run both the INT and FP tests in the same execution. Whether the run is a base run or a rate run is specified inside the config file.
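Assuming the config file from the previous step is named myazure.cfg:

    runspec --config=myazure.cfg all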

 

After that, the benchmarks will take around 2 days to complete; this is highly influenced by the CPU’s processing power (it may be just a day, or it may take 4). When the run completes, the results can be found under /usr/cpu2006/result in the formats specified in the config file.
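A quick way to check on the generated reports while the run progresses:

    ls -l /usr/cpu2006/result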

 

About Config Files

As mentioned, in order to run any benchmarks, you need to create a configuration file that tells the software how the tests will be compiled and run, what output formats are desired, and so on. You can change the compiler or run flags to tweak the performance of the benchmarking tests; which settings you should use is highly dependent on the SUT’s hardware and the chosen compilers. It’s a good idea to look at the published results page and find several configuration files for similar systems for hints on what flags work best with yours.

With that said, here’s a brief overview of a sample file and what some lines within it mean:

 

[Note] This is just for personal tracking purposes; anything with a # as the first character is a comment and is ignored by the software
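For example, a header comment at the top of the file; the contents are entirely up to you:

    # CPU2006 config for an Azure CentOS 7.1 VM, gcc 4.8 -- <your name/date>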

[Note] These are the ‘runspec’ default settings. When running ‘runspec’ you would normally have to specify most of these settings to tell the suite how to run the tests
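As a sketch, this part of the file might look like the block below (the ext value is made up; the most important lines are explained in the notes that follow):

    ignore_errors = yes
    ext           = azure-gcc48
    output_format = asc,pdf
    tune          = base
    reportable    = yes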

 

[Note] Errors can only be ignored if we’re doing test runs; if a real (reportable) run is started, this defaults to ‘no’ regardless of the argument sent or the line in this file

 

[Note] This is the suffix that the result files will have so they can easily be distinguished from other/previous runs. Make sure to set this flag and customize it to something you’ll recognize

 

[Note] The desired output formats of the results. PDF is recommended, as it supports graphics and is easy to read on other operating systems

 

[Note] These values show up in the report as labels; customize them with your team or department’s name
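For example, with SPEC’s reporting fields (values are illustrative):

    test_sponsor = My Department
    tester       = My Team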

 

[Note] This line is required to run rate/throughput benchmarks; it tells the suite how many instances of CPU2006 to start up in parallel. In a base run, this line should be commented out or removed. You’ll typically want to set this value to match the number of cores in the SUT
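The line in question is SPEC’s ‘copies’ option; for example, on a 16-core SUT (the count is illustrative):

    copies = 16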

[Note] The paths to the compilers; this is how the suite will invoke them to compile the tests. Also, any optimization flags you wish to use with these compilers must be added here
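A sketch assuming the stock GNU compilers installed earlier (as the note says, optimization flags can be appended to these lines):

    CC  = /usr/bin/gcc
    CXX = /usr/bin/g++
    FC  = /usr/bin/gfortran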

 

[Note] This is just for labeling purposes in the report; you can fill it in with information about the system under test, but it does not affect the results (other than these values showing up in the report)
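Illustrative values only; the suite does not validate any of these:

    hw_model    = Azure Standard_D4 Virtual Machine
    hw_cpu_name = Intel Xeon (as reported by /proc/cpuinfo)
    hw_memory   = 28 GB
    sw_os       = CentOS Linux 7.1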

 

[Note] Also used for label purposes, these notes will show up in the report, but do not affect results.
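For example:

    notes_000 = Run on an Azure VM as part of internal capacity testing.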

 

[Note] Here you can set optimization flags for the suite to build with, or portability flags that allow tests to compile when they would otherwise fail. For more information on this, check the SPEC website
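A minimal sketch modeled on SPEC’s example gcc configs (the portability define shown for 400.perlbench is typical of 64-bit Linux/gcc builds, but verify the right flags for your platform against SPEC’s documentation):

    default=base=default=default:
    COPTIMIZE   = -O2
    CXXOPTIMIZE = -O2
    FOPTIMIZE   = -O2

    400.perlbench=default=default=default:
    CPORTABILITY = -DSPEC_CPU_LINUX_X64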

 

As you can see, there are a lot of flags that can be added to the config file; for a full list, and some help on how to write your own config file, refer to SPEC’s extensive documentation on config files at https://www.spec.org/cpu2006/Docs/config.html.

 

And that’s how you can get started running CPU2006 benchmarks on Azure! You can always check SPEC’s CPU2006 site at https://www.spec.org/cpu2006, which has an abundance of information on everything discussed in this article.
