Introduction
I uploaded a new version of mytpcds, a wrapper around the TPC-DS benchmark that allows easy and quick roll-out of TPC-DS tests against leading RDBMS, including Hadoop SQL engines. Just deploy, configure using a provided template and run. The new version adds an implementation of the Query Validation Test.
Query Validation Test
The Query Validation Test verifies the accuracy of a SQL engine. The RDBMS runs a sequence of SQL statements, called Validation Queries, against the Qualification database, and the result data set is compared with the expected data set. During the Validation Test, the queries should always come back with the same result. The Validation Queries are standard TPC-DS query templates whose query parameters are substituted by predefined constants.
The substitution values for Validation Queries are defined in the "TPC-DS Specification" manual, chapter "Appendix B: Business Questions". Preparing Validation Queries manually across 99 query templates is a mundane and error-prone task, and I also wanted to avoid maintaining two different versions of the TPC-DS queries. So I decided to automate the process. First, I extracted all parameters and substitution values into separate configuration files. The naming convention is <n>.par, where <n> maps to the corresponding TPC-DS query. For instance, 1.par contains the substitution values for query1.tpl:
YEAR=2000
STATE=TN
AGG_FIELD=SR_RETURN_AMT
The run.sh launcher contains a separate task: ./tpc.sh queryqualification. This task replaces all parameter placeholders with the corresponding validation values and puts the resulting queries in the <TPC-DS root dir>/work/{dbtype}queries directory, ready to be picked up by other run.sh tasks.
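To illustrate the idea, here is a minimal Java sketch of such a substitution step. It assumes the placeholders appear in the template as [NAME] (e.g. [YEAR]) and that the .par files hold NAME=VALUE pairs as shown above; the class name and command-line arguments are hypothetical, and the real queryqualification task is implemented in tpc.sh, not in Java.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

// Hypothetical illustration of what the queryqualification task does.
public class Qualify {
    public static void main(String[] args) throws IOException {
        Path template = Paths.get(args[0]); // e.g. query_templates/query1.tpl
        Path parFile = Paths.get(args[1]);  // e.g. 1.par
        Path output = Paths.get(args[2]);   // target file in the work directory

        // Load NAME=VALUE pairs such as YEAR=2000, STATE=TN, AGG_FIELD=SR_RETURN_AMT.
        Properties params = new Properties();
        try (var in = Files.newInputStream(parFile)) {
            params.load(in);
        }

        String sql = Files.readString(template, StandardCharsets.UTF_8);
        // Assumes placeholders appear in the template as [NAME], e.g. [YEAR].
        for (String name : params.stringPropertyNames()) {
            sql = sql.replace("[" + name + "]", params.getProperty(name));
        }
        Files.writeString(output, sql, StandardCharsets.UTF_8);
    }
}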
The TPC-DS package comes with expected data sets for Query Validation, included in <TPC-DS root>/answer_sets. Unfortunately, the format of the answer sets is not consistent, which makes automated verification impossible. So I prepared my own version of the answer sets using DB2 output. Unfortunately, it does not match the output from other RDBMS, including Hive, so determining which output is invalid is still a pending task.
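The verification step itself can be as simple as a line-by-line comparison, once the output format is under control. A minimal sketch follows; it assumes one row per line in both files and only strips trailing whitespace, which is a guess at the simplest useful normalization, not the actual mytpcds comparison logic.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical sketch of an answer set comparison; mytpcds may normalize differently.
public class CompareAnswerSet {
    public static boolean matches(String resultFile, String answerFile) throws IOException {
        List<String> result = Files.readAllLines(Paths.get(resultFile));
        List<String> answer = Files.readAllLines(Paths.get(answerFile));
        if (result.size() != answer.size()) return false;
        for (int i = 0; i < result.size(); i++) {
            // Trailing whitespace differs between clients, so strip it before comparing.
            if (!result.get(i).stripTrailing().equals(answer.get(i).stripTrailing())) {
                return false;
            }
        }
        return true;
    }
}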
QueryRunner
The Query Validation Test requires comparing the current result sets against a reference result set. Unfortunately, the output produced by the command-line clients of different RDBMS varies significantly, which makes automated comparison impossible. So I decided to write my own Java QueryRunner using JDBC and take full control of how the result is produced. The target jar is built by the mvn package command. The only prerequisite for each database is its JDBC driver jar. So far, I have tested the QueryRunner against NPS/Netezza, DB2, IBM BigSql, SqlServer and Hadoop Hive.
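The core idea is small: run the query over JDBC and print every row in one fixed, client-independent format. Below is a minimal sketch of that idea; the class name and the column separator are illustrative assumptions, not the actual mytpcds QueryRunner code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only; the real QueryRunner in mytpcds is more elaborate.
public class QueryRunnerSketch {
    public static void main(String[] args) throws SQLException {
        String url = args[0];      // JDBC URL, e.g. jdbc:db2://host:50000/tpcds
        String user = args[1];
        String password = args[2];
        String sql = args[3];

        try (Connection con = DriverManager.getConnection(url, user, password);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            ResultSetMetaData meta = rs.getMetaData();
            int cols = meta.getColumnCount();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) row.append('|');   // one fixed separator for every database
                    row.append(rs.getString(i));  // nulls print uniformly as "null"
                }
                System.out.println(row);
            }
        }
    }
}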
QueryRunner, Hive and Kerberos
Because life is never an easy road free of stones, the real challenge was to execute the QueryRunner against Hadoop/Hive in a Kerberized environment. The Kerberos parameters cannot be included in the JDBC connection URL. Before executing DriverManager.getConnection(url, user, password), the client has to authenticate against the Hadoop cluster using Hadoop-specific libraries. Of course, I wanted to avoid keeping two versions of QueryRunner and keep the development consistent. So I developed a separate HadoopAuth package: in a Hadoop/Hive environment, Hadoop Kerberos authentication is performed by the HadoopAuth package, but it is invoked through the Java reflection feature. This way I was able to keep QueryRunner clean of any compile-time Hadoop dependency. How to configure QueryRunner for Hadoop/Hive is described here.
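A minimal sketch of the reflection trick follows. It resolves the standard Hadoop entry point for keytab-based Kerberos login at runtime, so no Hadoop jar is needed at compile time; the actual HadoopAuth package in mytpcds may be structured differently.

import java.lang.reflect.Method;

// Sketch: authenticate via reflection so the calling code has no
// compile-time dependency on Hadoop jars.
public class ReflectiveKerberosLogin {
    public static void login(String principal, String keytabPath) throws Exception {
        // Resolved at runtime, only when Hadoop jars are on the classpath.
        Class<?> ugi = Class.forName("org.apache.hadoop.security.UserGroupInformation");
        Method loginFromKeytab = ugi.getMethod("loginUserFromKeytab", String.class, String.class);
        loginFromKeytab.invoke(null, principal, keytabPath); // static method, so target is null
    }
}

After this login succeeds, DriverManager.getConnection can be called as usual.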
Next steps
- Further analysis of the reference Query Validation answer sets.
- Add Microsoft SqlServer to the RDBMS supported by myTPC-DS.
- Enable QueryRunner for all supported RDBMS.