Blog of the Open Source JavaHotel project

Tuesday, May 30, 2017

BigInsights, Docker

Problem
I've spent some time trying to dockerize BigInsights, IBM Open Platform. After resolving some issues, I was able to perform the installation. Everything ran smoothly except the Spark installation. Although the installation was reported as successful, the Spark History Server did not start.

 File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 424, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 265, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 243, in _assert_valid
    raise Fail(format("Source {source} doesn't exist"))
resource_management.core.exceptions.Fail: Source /usr/iop/current/spark-historyserver/lib/spark-assembly.jar doesn't exist
It turned out that the spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm package did not unpack all of the files it contained. Some directories, /usr/iop/4.2.0.0/spark/lib and /usr/iop/4.2.0.0/spark/sbin, were skipped. What is more interesting, when the package was installed with the rpm command directly, rpm -i spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm, all of its content was extracted correctly, while with the yum command, yum install spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm, some directories were excluded without any error being signaled. I spent a sleepless night trying to get a clue.
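A rough way to see the discrepancy is to diff the file list declared inside the rpm against what actually lands on disk after the yum install; a minimal sketch, assuming the package file from the log above is at hand:

# list every file the rpm archive claims to contain
rpm -qlp spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm | sort > /tmp/expected.txt

# after 'yum install', report the paths that never made it to the filesystem
while read -r f; do
    [ -e "$f" ] || echo "missing: $f"
done < /tmp/expected.txt

On the affected node the missing entries are exactly the lib/ and sbin/ files listed in the dump below.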
Solution
I found the explanation here. There was a mistake in the spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm package: some files in the rpm were marked as 'documentation'. It was revealed by running the rpm --dump command.
rpm -qp --dump spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm
/usr/iop/4.2.0.0/spark/sbin/start-shuffle-service.sh 1279 1466126392 dfe89bfa493c263e4daa8217a9f22db12d6e9a9e1b161c5733acddc5d6b6498c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-slave.sh 3151 1466126392 623bc623a3c92394cd4b44699ea3ab78b049149f10ee4b6f41d30ab2859f8395 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-slaves.sh 2061 1466126391 24f329f4cd7c48b8cbd52e87b33e1e17228b5ff97f1bcb5b403e1b538b17e32a 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-thriftserver.sh 1824 1466126392 fcef75ab00ef295ade0c926f584902291b3c06131dcb88786a5899e48de12bae 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-all.sh 1478 1466126392 efb2dc4fafed8d94d652c8cfd81f6ba59de6e9c6ae04da2e234e291f867f1d41 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-history-server.sh 1056 1466126393 8f74163405d9832f7f930ed00582dd89f3e6ffc1c6f3750e3a4a1639c63593ae 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-master.sh 1220 1466126391 ba5058a39699ae4d478dc1821fc999f032754b476193896991100761cd847710 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-mesos-dispatcher.sh 1112 1466126393 b30ce7366e5945f6c02494ce402bcebe5573c423d5eed646b0efc37a2dbc4a8c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-mesos-shuffle-service.sh 1084 1466126393 6da69a8927513ed32fdb2d8088e3971596201595a84c9617aa1bdeefd0ef8de7 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-shuffle-service.sh 1067 1466126391 817ef1a4679c22a9bc3f182ee3e0282001ab23c1c533c12db3d0597abad81d58 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-slave.sh 1557 1466126392 cd0e35cd11b3452e902e117226e1ee851fc2cb7e2fcce8549c1c4f4ef591173e 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-slaves.sh 1298 1466126392 a3366c8ab6b142eb7caf46129db2e73e610a3689e3c3005023755212eb5c008c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-thriftserver.sh 1066 1466126391 53b9e9a886c03701d7b1973d2c4448c484de2b5860959f7824e83c4c2a48170b 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/work 19 1466127922 0000000000000000000000000000000000000000000000000000000000000000 0120777 root root 0 1 0 /var/run/spark/work
/var/lib/spark 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/log/spark 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/run/spark 17 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/run/spark/work 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X

In the signature root root 0 1 0, the 1 is the flag that marks the file as "documentation". To shrink the space consumed by packages, the Docker "centos" image ships with the "tsflags=nodocs" option in its /etc/yum.conf configuration file, so yum silently skips every file flagged as documentation.
So the temporary workaround is to comment out this option. To avoid pulling in unnecessary documentation for everything else, one can install Spark separately and keep this patch in force only while this component is being installed.
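A minimal sketch of that workaround, assuming direct shell access to the node (the exact set of Spark packages to install may differ in a full BigInsights deployment):

# temporarily disable the nodocs transaction flag in /etc/yum.conf
sed -i 's/^tsflags=nodocs/#tsflags=nodocs/' /etc/yum.conf

# install the Spark package while the flag is off, so the "documentation" files are unpacked
yum install -y spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm

# restore the original setting so other packages stay slim
sed -i 's/^#tsflags=nodocs/tsflags=nodocs/' /etc/yum.conf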

Wednesday, May 10, 2017

Sqoop, Hive, load data incrementally

Introduction
Hive is a popular, SQL-like engine over HDFS data, and Sqoop is a tool for transferring data from external RDBMS tables into HDFS. Sqoop simply runs a SELECT query against the RDBMS table and stores the result in HDFS or directly as a Hive table. After the initial load, the effective way to keep the tables synchronized is to update the Hive table incrementally, to avoid moving all of the data again and again. Theoretically, the task is simple: assuming that the external table has a primary key and the source data are never updated or deleted, take the greatest key already inserted into the Hive table and transfer only the rows whose primary keys are greater than this threshold.
There is also an additional requirement. A very efficient data format for Hive tables is Parquet, but Sqoop can only create Hive tables in text format. There is an --as-parquetfile Sqoop parameter, but I failed to make it work for Hive tables.
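For reference, the "rows above a threshold" idea maps directly onto Sqoop's incremental append mode; a minimal sketch, where the JDBC URL, credentials, table and column names are placeholders, not the ones used in the solution below:

sqoop import \
  --connect jdbc:mysql://dbhost/source_db \
  --username sqoop --password-file /user/sqoop/.pwd \
  --table transactions \
  --incremental append \
  --check-column id \
  --last-value 100000 \
  --as-textfile \
  --target-dir /staging/transactions

Because of the Parquet limitation, the solution described next stages the delta in text format and lets Hive do the conversion.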
Solution
The solution is uploaded here.
I decided to implement a two-hop solution: first load the delta rows into a staging table in text format using Sqoop, and afterward insert the rows into the target Parquet Hive table. The whole workflow can be described as follows:
  • Check whether the target Hive table already exists. If it does, calculate the maximum value of the primary key.
  • Extract from the external RDBMS table all rows with a primary key greater than that maximum, or the whole table if the Hive table does not exist yet. Store the data in the staging table.
  • If the target Hive table does not exist, create it in Parquet format by executing the Hive command "CREATE .. TABLE AS SELECT * FROM stage.table".
  • If the target Hive table already exists, simply append the new rows with the command "INSERT INTO TABLE .. SELECT * FROM stage.table" (see the sketch after this list).
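A hedged sketch of the two Hive statements behind the last two steps, wrapped in shell the way a workflow script could call them (the stage and target database/table names are placeholders, not the ones used in the repository):

# first run: create the Parquet target directly from the text staging table
hive -e "CREATE TABLE target_db.mytable STORED AS PARQUET AS SELECT * FROM stage_db.mytable"

# subsequent runs: append only the freshly staged delta rows
hive -e "INSERT INTO TABLE target_db.mytable SELECT * FROM stage_db.mytable"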
The solution is implemented as an Oozie workflow. It can be launched as a single Oozie task or as an Oozie coordinator task. Sample shell scripts for both are available here. The common.properties file is used as a template for the job.properties and coordinator.properties files.
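Both variants are submitted with the standard Oozie client; a minimal sketch, assuming the Oozie server runs on its default port (the host name is a placeholder, and the property files are the ones generated from common.properties as described above):

# one-off workflow run
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run

# scheduled, recurring runs via the coordinator
oozie job -oozie http://oozie-host:11000/oozie -config coordinator.properties -run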