Managing Thousands of Spark Workers in the Cloud: DataVisor Presents at SAIS 2018

DataVisor’s Yuhao Zheng, Tech Lead Manager, and Boduo Li, Senior Research Scientist, Infrastructure, discuss how DataVisor leverages Spark’s scalability and portability to protect over 4 billion accounts from fraud, abuse, and money laundering. Watch the video to learn about managing 2,000+ Spark workers across clusters, as well as DataVisor’s proprietary SparkGenerator, an automated Spark management platform that optimally balances cost and resource allocation.


Session Abstract:

At DataVisor, we fight online fraud, abuse, and money laundering using an unsupervised machine learning approach that clusters millions of users. To support this computationally intensive workload, DataVisor uses Spark as the mainstay of its computation infrastructure. The scalability and portability of our Spark infrastructure are critical to our company as we expand our business. In this talk, we will present the story of how we manage our Spark infrastructure at scale.

At peak time, we have 2,000+ Spark workers online, grouped into ~50 clusters of various sizes. One benefit of this arrangement is data isolation, which is critical to DataVisor because we process data from multiple customers. The other is cost and performance: we want to provide just enough resources to each Spark application. When under-provisioned, a Spark application will fail with out-of-memory or out-of-disk errors; however, we also want to avoid unnecessary over-provisioning, as it dramatically increases our cloud cost.
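The under- versus over-provisioning trade-off above can be sketched as a right-sizing calculation. This is an illustrative model only: the profile fields (`input_gb`, `shuffle_ratio`), the instance specs, and the headroom factor are all assumptions, not DataVisor's actual sizing logic.

```python
# Hypothetical sketch: pick the smallest worker count that covers both
# memory and disk demand (with headroom, to avoid OOM / out-of-disk),
# but no more than that (to avoid paying for idle capacity).
import math
from dataclasses import dataclass

@dataclass
class AppProfile:
    input_gb: float       # size of the input data in GB
    shuffle_ratio: float  # estimated shuffle output per GB of input

@dataclass
class InstanceType:
    name: str
    memory_gb: float
    disk_gb: float

def workers_needed(profile: AppProfile, instance: InstanceType,
                   headroom: float = 1.2) -> int:
    """Smallest worker count satisfying memory and disk demand with a
    safety headroom over the raw estimates."""
    shuffle_gb = profile.input_gb * profile.shuffle_ratio
    mem_workers = math.ceil(profile.input_gb * headroom / instance.memory_gb)
    disk_workers = math.ceil(shuffle_gb * headroom / instance.disk_gb)
    return max(mem_workers, disk_workers, 1)

# Example: 500 GB input with heavy shuffle on a 61 GB RAM / 160 GB disk node
r4 = InstanceType("r4.2xlarge", memory_gb=61, disk_gb=160)
print(workers_needed(AppProfile(input_gb=500, shuffle_ratio=1.5), r4))  # → 10
```

Here memory is the binding constraint (10 workers) rather than disk (6 workers); a profile with a higher shuffle ratio would flip that, which is why both estimates are taken together.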

Next, we will present our DataVisor SparkGenerator (DSG), which is designed to automatically manage our Spark infrastructure. The responsibilities of DSG include (a) launching and shutting down Spark clusters to maximize concurrency and minimize cost, (b) intelligently assigning Spark applications to the proper clusters according to each application's profile, (c) managing the dependencies among Spark applications so that our pipeline runs smoothly and efficiently, and (d) running all of the Spark workers on spot instances, reducing cloud computation cost by over 80% versus on-demand pricing.
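Responsibilities (b) and (c) above amount to dependency-aware placement. A minimal sketch, under stated assumptions: apps are ordered with Kahn's topological sort and then greedily placed on the smallest cluster that fits, launching a new cluster (responsibility (a)) only when none does. The app names, cluster pool, and greedy policy are hypothetical; the real DSG is proprietary.

```python
# Hypothetical sketch of dependency-aware assignment of Spark apps to clusters.
from collections import deque

def topo_order(deps: dict) -> list:
    """Run order for apps, where deps[app] lists the apps it waits on
    (Kahn's algorithm; raises on a dependency cycle)."""
    indegree = {app: len(parents) for app, parents in deps.items()}
    children = {app: [] for app in deps}
    for app, parents in deps.items():
        for p in parents:
            children[p].append(app)
    ready = deque(app for app, n in indegree.items() if n == 0)
    order = []
    while ready:
        app = ready.popleft()
        order.append(app)
        for child in children[app]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

def assign(order: list, required_workers: dict, clusters: dict) -> dict:
    """Place each app on the smallest existing cluster with enough workers;
    launch a new right-sized cluster when nothing fits."""
    placement = {}
    for app in order:
        need = required_workers[app]
        fits = [name for name, size in clusters.items() if size >= need]
        if fits:
            placement[app] = min(fits, key=lambda name: clusters[name])
        else:
            name = f"new-cluster-{app}"
            clusters[name] = need  # launch a cluster sized to the app
            placement[app] = name
    return placement

deps = {"ingest": [], "featurize": ["ingest"], "cluster-users": ["featurize"]}
order = topo_order(deps)
print(order)  # → ['ingest', 'featurize', 'cluster-users']
print(assign(order,
             {"ingest": 10, "featurize": 40, "cluster-users": 120},
             {"small": 16, "medium": 64}))
```

In this toy run, `ingest` lands on the small cluster, `featurize` on the medium one, and `cluster-users` triggers a fresh 120-worker cluster, mirroring the launch-on-demand behavior described in (a).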

June 18th, 2018 | Technical Post