Benchmarking Technologies for Storing and Querying Genomic Data


Ami

Background

Recently we worked with a startup that provides a big data platform for storing and managing genomic data. Using a dashboard, medical professionals can receive notifications about common mutations in their patients’ DNA, search for specific mutations, and perform family analysis queries. Since ~99.5% of the human genome is shared across individuals, the DNA sequencing process typically compares a sample sequence with a well-known reference genome. The result of this comparison is stored in a raw VCF (Variant Call Format) file, which contains only the differences between the sample and the reference genome. This file contains ~1M rows, each of which represents a gene mutation: a gene index and a value that differs from the one in the reference genome at that index.

After a VCF file is received, it goes through a normalization process that prepares it to support the different scenarios the platform offers. The normalization process is a pipeline of steps that adds metadata and annotations from external gene-related knowledge bases, and it produces a large file containing ~1M rows and ~50 columns.
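The comparison step can be pictured with a minimal sketch. This is not the actual sequencing pipeline; the function name and the single-letter base encoding are illustrative assumptions, showing only the idea that a VCF-style record stores just the positions where the sample deviates from the reference:

```python
# Toy illustration: record only the (position, value) pairs where the
# sample differs from the reference, as a VCF-style diff does.
# Real pipelines operate on far richer records than single bases.

def diff_against_reference(reference: str, sample: str):
    """Return (position, sample_base) pairs where the sample deviates."""
    return [
        (pos, base)
        for pos, (ref_base, base) in enumerate(zip(reference, sample))
        if ref_base != base
    ]

variants = diff_against_reference("ACGTACGT", "ACGAACGT")
print(variants)  # [(3, 'A')]
```

Because ~99.5% of the genome is shared, the resulting diff is tiny relative to the full sequence, which is exactly why the platform stores VCF files rather than whole genomes.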

The Problem

Clinicians and researchers need to run dynamic queries at runtime, via the dashboard, on their patients’ sequencing results. A basic query filters a sample by multiple columns. A more advanced query joins a sample with a reference DB (which contains 60M rows). An even more advanced scenario performs multiple joins in the same query to support family analysis, in addition to a join with the external reference DB repository. Since these queries are triggered from the dashboard, the requirement was to return results in less than 2 seconds; a longer response time is considered a bad user experience. Performing such queries at runtime, when each sample contains so many rows and columns, requires powerful computational resources and takes a relatively long time to complete.

The Engagement

We collaborated with this startup with the goal of performing a query performance benchmark to evaluate several scalable storage architectures:

Impala running on a Cloudera cluster

Impala is well-suited for executing SQL queries for interactive exploratory analytics on large data sets, whereas Hive and MapReduce are appropriate for very long running, batch-oriented tasks. Impala also uses columnar storage, which is a much better fit for the type of queries we used because of the way the data is physically laid out on disk: the values of the same column across different rows sit next to one another. As a result, when executing a SELECT query that projects only a few of the columns, only the blocks of the required columns are read from disk. Read more about columnar storage here.
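The layout difference can be shown with a toy example, using plain Python lists to stand in for on-disk blocks (the column names are made up):

```python
# Row-oriented layout: each record is stored together.
rows = [
    {"gene_id": 1, "position": 100, "value": "A"},
    {"gene_id": 2, "position": 250, "value": "G"},
    {"gene_id": 3, "position": 410, "value": "T"},
]

# Column-oriented layout: each column is stored together.
columns = {
    "gene_id": [1, 2, 3],
    "position": [100, 250, 410],
    "value": ["A", "G", "T"],
}

# SELECT position FROM table:
# a row store must visit every record and pick the field out of each...
positions_row_store = [r["position"] for r in rows]
# ...while a column store reads one contiguous block.
positions_col_store = columns["position"]

assert positions_row_store == positions_col_store == [100, 250, 410]
```

With ~50 columns per normalized sample, a dashboard query that projects only a handful of them skips the vast majority of the data in a columnar store, which is the effect described above.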

Spark running on HDInsight cluster

Spark is the first technology that makes sense to try when working with big data on top of HDFS, and it provides a good baseline for comparing other technologies such as Impala. Read more about Spark here.

SQL Data Warehouse

SQL Data Warehouse is a massively parallel processing (MPP) distributed database system. By dividing data and processing capability across multiple nodes, SQL Data Warehouse can offer huge scalability. Read more about SQL Data Warehouse here.

The queries

We agreed on a set of queries that covers common scenarios. We loaded the same sample files into each storage solution and measured how long each query took to run. The queries below are written in a “SQL-like” style for clarity; they were implemented in each storage solution according to its syntax.

The reference DB table contains 60M rows with the same structure as the VCF file, but it also contains additional metadata about each gene. The normalization process uses this data to annotate the rows in the sample VCF file.

The following table describes the queries and the benchmark test results:

All times are in seconds.

| # | Scenario | Impala (3 x D13) | Spark on HDInsight (3 x D12) | Spark on Cloudera (3 x DS13, 5 x P30 disks each) | SQL DW (400 DWU) | SQL DW (1000 DWU) |
|---|----------|------------------|------------------------------|--------------------------------------------------|------------------|-------------------|
| 1 | select from sample, order by gene position, take first 100 records | 2 | 5 | 2 | 1 | 1 |
| 2 | select from reference db table, filter by one field, order by another field, take first 100 records | 11 | 96 | 38 | 2 | 6 |
| 3 | join sample with reference db table on gene id, order by gene position, take first 100 records | 77 | 53 | 47 | 168 | 156 |
| 4 | join sample with reference db table on gene id, take first 100 records | 21 | 127 | 5 | 121 | 11 |
| 5 | join sample with reference db table on gene id, filter by one field, order by gene position, take first 100 records | 13 | 11 | 16 | 11 | 1 |
| 6 | join sample with reference db table on gene id, group by one field, order by another field, take first 100 records | 37 | 41 | 23 | 5 | 2 |
| 7 | select from reference db table, group by one field, order by another field, take first 100 records | 12 | 25 | 20 | 15 | 7 |
| 8 | select from sample rows that don’t exist in reference db table, take first 100 records | 37 | 36 | 74 | 32 | 52 |
| 9 | join 4 samples on gene id, filter by one field, order by another field, take first 100 records | 7 | 16 | 20 | 3 | 2 |
| 10 | add 1M records into a 60M records table | 7 | 35 | 33 | 4 | 3 |
| 11 | add 1M records into a new empty table | 4.5 | 35 | 33 | 6 | 21 |
| 12 | add 60M records into a new empty table | 140 | 500 | 280 | 214 | 124 |

Conclusions

Based on the results, none of the alternatives above satisfied the 2-second requirement.

However, we recommended the following:

  1. In the short term, use an RDBMS (e.g. SQL Server, MySQL, Postgres) to drive most of the real-time queries issued via the dashboard that must complete in under 2 seconds. Validate that the execution plan doesn’t perform a full-table scan.
  2. Deploy a small Hadoop cluster that includes Spark, Impala, HDFS and Parquet. Use the cluster to offload from the RDBMS to Impala some of the more complex dashboard queries that do not need a sub-second response. Over time, expand the number of queries handled outside the RDBMS by using Impala to do the “heavy lifting”.
  3. Expand ad-hoc query capabilities and optimize data preparation by using Impala and Spark. Pre-process raw data using Spark and Impala, and load relevant slices of the data into the RDBMS. Use Impala when building out new ad-hoc query capabilities in the web application, and include relevant UI tweaks (e.g. “wait… loading…”) to minimize the user-experience impact of the longer response times.
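The execution-plan check in recommendation 1 can be sketched as follows, using SQLite’s EXPLAIN QUERY PLAN for brevity (SQL Server, MySQL and Postgres each expose their own equivalent, e.g. execution plans or `EXPLAIN`; the table and index names here are illustrative):

```python
import sqlite3

# Inspect the plan for a dashboard-style filter query, before and after
# adding an index on the filtered column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (gene_id INTEGER, position INTEGER)")

query = "SELECT * FROM sample WHERE gene_id = 42"

# Without an index, the planner falls back to a full-table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][-1])  # e.g. "SCAN sample"

# With an index on gene_id, the scan becomes an index search.
conn.execute("CREATE INDEX idx_gene ON sample (gene_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after[0][-1])  # e.g. "SEARCH sample USING INDEX idx_gene (gene_id=?)"
```

Running this kind of check on every dashboard query is a cheap way to confirm the RDBMS can actually hit the sub-2-second target before taking on the Hadoop-side work in recommendations 2 and 3.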
