SparkOscope gives developers more complete insights into Apache Spark

During the last year, the High Performance Systems team at IBM Research, Ireland has been using Apache Spark to perform analytics on large volumes of sensor data. The Spark applications it has developed revolve around cleansing, filtering, and ingesting historical data. Because these applications must run daily, it was essential for the team to understand Spark resource utilization, so it could give customers faster time to insight and better cost control through more accurately sized infrastructure.

Presently, the conventional way of identifying bottlenecks in a Spark application is to inspect the Spark Web UI, either while the jobs and stages are executing or postmortem, and that view is limited to the job-level application metrics reported by Spark's built-in metrics system (for example, stage completion time). The current version of the Spark metrics system supports recording metric values to local CSV files as well as integrating with external monitoring systems such as Ganglia.
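For reference, that built-in metrics plumbing is typically configured through conf/metrics.properties. The sketch below is only a minimal illustration of enabling the CSV sink on all Spark components; the output directory and Ganglia host are placeholders, and the Ganglia sink additionally requires a Spark build that includes the LGPL Ganglia module.

```
# conf/metrics.properties -- illustrative sketch, not SparkOscope configuration

# Write every component's metrics to local CSV files every 10 seconds
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=10
*.sink.csv.unit=seconds
*.sink.csv.directory=/tmp/spark-metrics      # placeholder path on each worker

# Alternatively, push metrics to an external Ganglia gmond
# (requires Spark built with the spark-ganglia-lgpl profile)
*.sink.ganglia.class=org.apache.spark.metrics.sink.GangliaSink
*.sink.ganglia.host=ganglia.example.com      # placeholder host
*.sink.ganglia.port=8649
```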

The team found it cumbersome to manually collect and efficiently inspect the CSV files generated at the Spark worker nodes. Although an external monitoring system such as Ganglia would automate this collection, the team was still unable to derive temporal associations between system-level metrics such as CPU utilization and the job-level metrics reported by Spark (for example, job or stage ID). For instance, the team could not trace the root cause of a peak in HDFS reads or CPU usage back to the code in the Spark application causing the bottleneck.

To overcome these limitations, IBM developed SparkOscope. The tool takes advantage of the job-level information already available through the existing Spark Web UI, which keeps source-code pollution to a minimum, and uses that UI to monitor and visualize job-level metrics of a Spark application, such as completion time. More importantly, it extends the Web UI with a palette of system-level metrics about the server, virtual machine, or container on which each of the Spark job's executors ran.
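SparkOscope's own collector is not described in detail here, so the following is only a minimal sketch of the underlying idea rather than the tool's implementation: sample an OS-level metric on a worker node (CPU utilization read from /proc/stat on Linux), stamp each sample with the time, the host, and a Spark application ID supplied by the caller, and append it to a CSV file, so the samples can later be lined up against the job/stage timeline the Web UI already shows. The script name, output path, and app_id argument are all hypothetical.

```python
#!/usr/bin/env python3
"""Illustrative sketch only -- not SparkOscope's implementation.

Samples node-level CPU utilization and tags each sample with a timestamp,
the hostname, and a Spark application ID so that system-level readings can
be temporally associated with the job/stage timeline in the Spark Web UI.
"""

import csv
import socket
import sys
import time


def read_cpu_counters():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait
    total = sum(fields)
    return total - idle, total


def sample_cpu(interval_s=1.0):
    """CPU utilization (0-100%) over a short interval, from two counter snapshots."""
    busy1, total1 = read_cpu_counters()
    time.sleep(interval_s)
    busy2, total2 = read_cpu_counters()
    delta_total = total2 - total1
    return 100.0 * (busy2 - busy1) / delta_total if delta_total else 0.0


def main(app_id, out_path, period_s=5.0):
    """Append timestamped, host- and app-tagged CPU samples to a CSV file."""
    host = socket.gethostname()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "host", "app_id", "cpu_percent"])
        while True:
            writer.writerow([time.time(), host, app_id, round(sample_cpu(), 2)])
            f.flush()
            time.sleep(period_s)


if __name__ == "__main__":
    # e.g. python node_cpu_sampler.py app-20170402120000-0001 /tmp/node-metrics.csv
    main(sys.argv[1], sys.argv[2])
```

Running one such sampler per worker node, keyed by application ID, is enough to reproduce the kind of join between system-level and job-level metrics that SparkOscope surfaces directly in the Web UI.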