SparkOscope gives developers more complete insights into Apache Spark

Over the past year, the High Performance Systems team at IBM Research, Ireland has been using Apache Spark to perform analytics on large volumes of sensor data. The Spark applications it has developed revolve around cleansing, filtering, and ingesting historical data, and they must be executed daily. It was therefore essential for the team to understand Spark resource utilization, so that it could offer customers optimal time to insight and cost control through better-calculated infrastructure needs.

At present, the conventional way of identifying bottlenecks in a Spark application is to inspect the Spark Web UI, either while jobs and stages are executing or postmortem, and that view is limited to the job-level application metrics reported by Spark's built-in metric system (for example, stage completion time). The current version of the Spark metric system supports recording metric values to local CSV files and integrating with external metrics systems such as Ganglia.
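As a hedged illustration (not from the original article), both of those built-in sinks can be enabled through Spark's metrics configuration. The sink class names below ship with Spark; the application name, output directory, Ganglia host and port, and reporting periods are placeholder values, and the inline spark.metrics.conf.* syntax assumes a recent Spark release (the same keys can otherwise live in conf/metrics.properties):

```scala
import org.apache.spark.sql.SparkSession

object MetricsConfigExample {
  def main(args: Array[String]): Unit = {
    // Minimal sketch: enable Spark's built-in CSV and Ganglia metric sinks
    // through inline metrics configuration. Directory, host, port, and periods
    // are placeholders; the Ganglia sink additionally requires a Spark build
    // that includes the spark-ganglia-lgpl module.
    val spark = SparkSession.builder()
      .appName("sensor-ingest")
      // Dump every registered metric to CSV files on each node.
      .config("spark.metrics.conf.*.sink.csv.class",
              "org.apache.spark.metrics.sink.CsvSink")
      .config("spark.metrics.conf.*.sink.csv.period", "10")
      .config("spark.metrics.conf.*.sink.csv.unit", "seconds")
      .config("spark.metrics.conf.*.sink.csv.directory", "/tmp/spark-metrics")
      // Or report the same metrics to an external Ganglia daemon.
      .config("spark.metrics.conf.*.sink.ganglia.class",
              "org.apache.spark.metrics.sink.GangliaSink")
      .config("spark.metrics.conf.*.sink.ganglia.host", "ganglia.example.com")
      .config("spark.metrics.conf.*.sink.ganglia.port", "8649")
      .getOrCreate()

    spark.stop()
  }
}
```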

The team found it cumbersome to manually consume and efficiently inspect these CSV files generated at the Spark worker nodes. Although an external monitoring system such as Ganglia would automate this process, the team was still unable to derive temporal associations between system-level metrics such as CPU utilization and the job-level metrics reported by Spark (for example, job or stage ID). For instance, the team could not trace the root cause of a peak in HDFS reads or CPU usage back to the code in the Spark application causing the bottleneck.

To overcome these limitations, IBM developed SparkOscope. The tool takes advantage of the job-level information already available through the existing Spark Web UI, and it minimizes source-code pollution by using that same UI to monitor and visualize job-level metrics of a Spark application, such as completion time. More importantly, it extends the Web UI with a palette of system-level metrics about the server, virtual machine, or container that each of the Spark job's executors ran on.
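To make the desired temporal association concrete, here is a minimal sketch, not SparkOscope's actual implementation, of how stage boundaries can be captured with Spark's standard listener API so that each stage's time window can later be joined against node-level CPU or I/O samples. The listener class name and the log format are illustrative assumptions:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}

// Hypothetical listener: records each stage's id, name, and time window so the
// interval can later be matched against system-level metric samples
// (CPU utilization, HDFS reads) collected on the worker nodes.
class StageWindowListener extends SparkListener {
  override def onStageCompleted(event: SparkListenerStageCompleted): Unit = {
    val info  = event.stageInfo
    val start = info.submissionTime.getOrElse(-1L)
    val end   = info.completionTime.getOrElse(-1L)
    // A real system would write this to a metrics store rather than stdout.
    println(s"stage=${info.stageId} name=${info.name} start=$start end=$end")
  }
}

object StageWindowListener {
  // Attach the listener to a running application's SparkContext.
  def register(sc: SparkContext): Unit =
    sc.addSparkListener(new StageWindowListener)
}
```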

 


