January 7th, 2016

SPEC’s CPU2006 benchmarks on Azure

The CPU2006 suite by SPEC is a complex set of tools and tests that lets you measure the performance and throughput of a machine’s CPU. It does this by executing a variety of “tests”: executables built from real-world workloads (e.g., compression/decompression of files, complex quantum chemical computations, 3D ray tracing operations, amongst others).

These tests are divided into 2 groups: integer calculations and floating point calculations; in turn, each of these groups can be run in 2 different modes:

  • base (which rates the CPU performing tasks sequentially, one at a time)
  • rate or throughput (which rates the CPU performing several tasks in parallel)

This gives us a total of 4 different metrics we can obtain from a single CPU. More information on SPEC and CPU2006 can be found on their website: https://www.spec.org/cpu2006.
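
For reference, these combinations map to the headline metrics SPEC reports:

  • SPECint2006 / SPECint_rate2006 (integer, sequential and throughput)
  • SPECfp2006 / SPECfp_rate2006 (floating point, sequential and throughput)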

Below are some instructions you may find useful for getting these metrics on Azure Virtual Machines.

Prerequisites

In order to run these benchmarks, you need the following:

  • The SPEC CPU2006 V1.2 benchmarking suite, which you will need to acquire directly from the SPEC website: https://www.spec.org/cpu2006.
  • A machine (virtual or not) to run the benchmarks on (also known as the SUT, or System Under Test)
    • In our case, we’ll run this on an Azure Virtual Machine
  • The SUT’s OS can be Windows or Linux; the instructions below are for Linux (CentOS 7.1)
  • Unless the test binaries are provided for you, the SUT will need to have C, C++, and Fortran compilers installed (such as gcc, Intel’s C/Fortran compilers, Visual Studio’s, etc.)
    • The compiler chosen can have a big impact on the results obtained, assuming it’s configured properly and the right flags are passed from the config file.
  • A SPEC configuration file; aside from the actual hardware the benchmarks run on, this is one of the things that most impacts the results. There are a lot of flags that can be set in it, on top of the flags that can be passed to the compilers from it (which allow lower-level fine-tuning of how the tests are compiled and run). The easiest way to create a config file is to base it off one for a similar architecture from SPEC’s published results page.

Steps

These steps assume a CentOS 7.1 Linux VM; they’ll use the gcc, gcc-c++, and gcc-gfortran compiler packages:

 

Create and sign in to the VM

 

Sudo up

sudo -i 

 

Install the compilers

yum -y install gcc
yum -y install gcc-c++
yum -y install gcc-gfortran
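
Before moving on, it’s worth confirming that all three compilers installed correctly (the version reported here is also what you’d note in the config file’s labels later on):

gcc --version
g++ --version
gfortran --version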
 

 

Get the CPU2006 ISO onto the machine. In our case, we hosted it in an Azure blob, so we download it with:

curl https://<storageAccount>.blob.core.windows.net/<Blob>/cpu2006-1.2.iso > /tmp/cpu2006-1.2.iso 
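
Since the ISO is a large download, it’s worth a quick sanity check that it arrived intact, for example by comparing its size (and, if you have one, its checksum) against the original copy:

ls -lh /tmp/cpu2006-1.2.iso
md5sum /tmp/cpu2006-1.2.iso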

 

Mount the ISO

mkdir -p /mnt/disk
mount -o loop /tmp/cpu2006-1.2.iso /mnt/disk

cd /mnt/disk
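
A quick listing confirms the ISO mounted correctly; you should see install.sh at the top level:

ls /mnt/disk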
 

 

Install SPEC CPU2006

./install.sh -d /usr/cpu2006
 

 

From the installation path, source shrc to set up the environment for the SPEC tools

cd /usr/cpu2006
. ./shrc
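
To confirm the environment took effect, check that the SPEC tools are now on the PATH; this should print /usr/cpu2006/bin/runspec:

which runspec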
 

 

Copy/create the config file under ‘/usr/cpu2006/config’. In our case, we pre-created the config file, uploaded it to an Azure blob, and then copied it to the VM with curl:

curl https://<storageAccount>.blob.core.windows.net/<Blob>/AzureVm.cfg > /usr/cpu2006/config/AzureVm.cfg 

 

After the config file is in place, the benchmarks can be run with the following command; the ‘all’ specifies that we want to run both the INT and FP tests in the same execution. Whether the run is a base run or a rate run is specified inside the config file.

runspec --config=AzureVm.cfg all  
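
Before committing to a multi-day reportable run, you may want a quick smoke test that everything compiles and executes; runspec’s standard ‘--size’ and ‘--noreportable’ options run the much smaller test workload with the same config file:

runspec --config=AzureVm.cfg --size=test --noreportable all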

 

After that, the benchmarks will take an average of 2 days to complete; this is highly influenced by the CPU’s processing power (it may take just a day, or it may take 4). When the run completes, the results can be found in /usr/cpu2006/result in the formats specified in the config file.
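
For example, a quick listing shows the generated reports (file names include the benchmark set and a run number):

ls /usr/cpu2006/result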

 

About Config Files

As mentioned, in order to run any benchmarks you need to create a configuration file that tells the software how the tests will be compiled and run, what output formats are desired, etc. You can change the compiler or run flags to tweak the performance of the benchmarking tests; which settings you should use is highly dependent on the SUT’s hardware and your chosen compilers. It’s a good idea to look at the published results page and find several configuration files for similar systems for hints on what flags work best with your system.

With that said, here’s a brief overview of a sample file and what some lines within it mean:

 

[Note] This header is just for personal tracking purposes; any line with a # as the first character is a comment and is ignored by the software

#####################################################
#
# Compiler name/version: [gcc, g++, gfortran 4.8.3]
# Operating system version: [CentOS]
# Hardware: [G1]
#
#####################################################

[Note] These are the ‘runspec’ default settings. When running ‘runspec’ you would normally have to specify most of these settings to tell the suite how to run the tests.

tune = base
basepeak = yes

 

[Note] Errors can only be ignored if we’re doing test runs; if a reportable run is started, this defaults to ‘no’ regardless of the argument sent or the line in this file

ignore_errors = yes

 

[Note] This is the suffix that the result files will have so that they can easily be distinguished from other/previous runs; make sure to set this flag and customize it to something you’ll recognize

ext = AzureCentosVm

 

[Note] The desired output formats for the results; PDF is recommended, as it supports graphics and is easy to read on other operating systems

output_format = txt,html,csv,pdf,cfg

 

[Note] These values show up in the report as labels; customize them with your team or department’s name

test_sponsor = Azure
tester = Azure

 

[Note] This line is required to run rate/throughput benchmarks; it tells the suite how many instances of CPU2006 to start up in parallel. In a base run, this line should be commented out or removed. You are expected to set this value to match the number of cores in the SUT; a quick way to find that number is shown after the snippet below.

# System Under Test’s number of cores
rate = 2
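
To find the core count on the SUT itself, nproc (part of coreutils on CentOS 7) prints the number of available processing units to plug in here:

nproc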

[Note] The paths to the compilers; this is how the suite will invoke them to compile the tests. Any optimization flags you wish to use with these compilers must also be added here

default=default=default=default:
#######################################################
#
# Compiler selection
#
#######################################################
CC = /usr/bin/gcc
CXX = /usr/bin/g++
FC = /usr/bin/gfortran

 

[Note] This is just for labeling purposes in the report; you can fill it in with info about the system under test, but it does not affect the results (other than these values showing up in it)

## HW config
# default sysinfo is expected to write hw_cpu_name, hw_memory, hw_nchips,
hw_model = Azure VM
hw_cpu_char =
hw_cpu_mhz =
hw_fpu =
hw_ncores =
hw_ncoresperchip =
hw_nthreadspercore =
hw_ncpuorder =
hw_pcache =
hw_scache =
hw_tcache =
hw_ocache =
hw_vendor =
hw_other =
## SW config
# default sysinfo is expected to write prepared_by, sw_os, sw_file, sw_state
sw_compiler = gcc, g++ & gfortran 4.8.3
sw_avail = --
sw_other = None
sw_base_ptrsize = 64-bit
sw_peak_ptrsize = 64-bit

 

[Note] Also used for labeling purposes; these notes will show up in the report but do not affect results.

######################################################
# Notes
######################################################
notes_submit_000 = 'numactl' was used to bind copies to the cores.
notes_submit_005 = See the configuration file for details.
notes_os_000 = 'ulimit -s unlimited' was used to set environment stack size

 

[Note] Here you can set optimization flags for the suite to use, or set portability flags that allow tests to compile when they would otherwise fail. For more information on this, check the SPEC website

######################################################
# Optimization
######################################################
default=base=default=default:
COPTIMIZE = -O2 -fno-strict-aliasing
CXXOPTIMIZE = -O2 -fno-strict-aliasing
FOPTIMIZE = -O2 -fno-strict-aliasing
######################################################
# 32/64 bit Portability Flags - all
######################################################
default=base=default=default:
PORTABILITY = -DSPEC_CPU_LP64
######################################################
# Portability Flags
######################################################
400.perlbench=default=default=default:
CPORTABILITY = -DSPEC_CPU_LINUX_X64
462.libquantum=default=default=default:
CPORTABILITY = -DSPEC_CPU_LINUX
483.xalancbmk=default=default=default:
CXXPORTABILITY = -DSPEC_CPU_LINUX
481.wrf=default=default=default:
CPORTABILITY = -DSPEC_CPU_CASE_FLAG -DSPEC_CPU_LINUX

 

As you can see, there are a lot of flags that can be added to the config file; for a full list, and some help on how to write your own config file, refer to SPEC’s extensive documentation on config files at https://www.spec.org/cpu2006/Docs/config.html.

 

And that’s how you can get started running CPU2006 benchmarks on Azure! You can always check SPEC’s CPU2006 site (https://www.spec.org/cpu2006), which has an abundance of information on everything discussed in this article.
