
ZparkIO logo

License: MIT release-badge maven-central-badge CI BCH compliance Coverage Status Mutation testing badge

ZparkIO

Boilerplate framework to use Spark and ZIO together.

The goal of this framework is to blend Spark and ZIO into an easy-to-use system for data engineers, allowing them to use Spark in a new, faster, more reliable way by leveraging the power of ZIO.


What is this library for?

This library implements all the boilerplate needed to include Spark and ZIO in your ML project.

It can be tricky to use ZIO to hold a Spark instance and reuse it throughout your code; this library solves all of that boilerplate for you.

More About ZparkIO

Public Presentation

Feel free to look at the slides presented during the ScalaSF meetup on Thursday, March 26, 2020, available on Google Drive or on SlideShare. You can also watch the presentation on YouTube.

ZparkIO was on version 0.7.0 at the time, so some details might be out of date.

Migrate your Spark Project to ZparkIO

Migrate from Plain Spark to ZparkIO

Why would you want to use ZIO and Spark together?

From my experience, using ZIO (or Future) in combination with Spark can drastically speed up your jobs. The reason is that sources (BigQuery, PostgreSQL, S3 files, etc.) can be fetched in parallel instead of putting the computation on hold. Obviously ZIO is much better than Future, but it is harder to set up. Not anymore!

Another nice aspect of ZIO is its error/exception handling, as well as its built-in retry helpers, which make retrying a failed task a breeze within Spark.

How to use?

I hope that you are now convinced that ZIO and Spark are a perfect match. Let's see how to use ZparkIO.

One of the easiest ways to use ZparkIO is the giter8 template project:

sbt new leobenkel/zparkio.g8

Include dependencies

First include the library in your project:

libraryDependencies += "com.leobenkel" %% "zparkio" % "[SPARK_VERSION]_[VERSION]"

With the version being: maven-central-badge release-badge.

Check out the Spark Versions and the Version for the exact values to use.

This library depends on Spark, ZIO and Scallop.
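
For illustration, a pinned dependency following the [SPARK_VERSION]_[VERSION] format could look like the line below; the numbers are placeholders only, so check Maven Central for the real ones:

// build.sbt: "3.1.1_1.0.0" would mean Spark 3.1.1 and ZparkIO 1.0.0; pick the actual versions from Maven Central.
libraryDependencies += "com.leobenkel" %% "zparkio" % "3.1.1_1.0.0"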

Unit-test

You can also add

libraryDependencies += "com.leobenkel" %% "zparkio-test" % "[VERSION]"

with the version being maven-central-badge-test, to get access to helper functions that make writing unit tests easier.
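
If you only need these helpers at test time, the usual sbt Test scoping applies; a minimal sketch, keeping the same [VERSION] placeholder as above:

// build.sbt: scope the test helpers to the Test configuration only.
libraryDependencies += "com.leobenkel" %% "zparkio-test" % "[VERSION]" % Test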

How to use in your code?

There is an example project you can look at, but here are the details.

Main

The first thing you have to do is extend the ZparkioApp trait. For an example, you can look at the ProjectExample: Application.

Spark

By using this architecture, you will have access to the SparkSession anywhere in your ZIO code, via:

import com.leobenkel.zparkio.Services._

for {
  spark <- SparkModule() // retrieves the shared SparkSession from the ZIO environment
} yield {
  ???
}

For instance, you can see it used here.
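
As a quick illustration, once you have the SparkSession you can use the regular Spark API inside the same for-comprehension. The parquet path below is purely hypothetical:

import com.leobenkel.zparkio.Services._

for {
  spark   <- SparkModule()
  // Plain Spark code from here on; the path is a made-up example.
  rowCount = spark.read.parquet("s3://my-bucket/some-table").count()
} yield rowCount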

Command lines

You will also have access to all your command-line arguments, automatically parsed and generated, via:

CommandLineArguments; it is recommended to write this helper function to make the rest of your code easier to use.

Then using it, like here, is easy.
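
To give an idea of what such a helper can look like, here is a rough sketch. The Arguments object, the inputPath accessor and its environment type are hypothetical stand-ins for the pattern used in the example project, not part of the library API:

import zio.ZIO
import com.leobenkel.zparkio.Services._

object Arguments {
  // In a real project this would read the parsed value from the
  // CommandLineArguments service; the body and the Any environment
  // are left as placeholders in this sketch.
  def inputPath: ZIO[Any, Throwable, String] = ???
}

// A call site then stays short:
for {
  path  <- Arguments.inputPath
  spark <- SparkModule()
} yield spark.read.parquet(path)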

Helpers

The implicits object, which you can import anywhere, gives you helper functions that streamline your project.

Unit test

Using this architecture will literally allow you to run your main as a unit test.
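
As a sketch of what that can look like with plain ScalaTest (the test class name, the command-line flags, and the Application object are illustrative; the zparkio-test helpers and the example projects show the real pattern):

import org.scalatest.funsuite.AnyFunSuite

class ApplicationTest extends AnyFunSuite {
  test("the whole Spark job runs end to end") {
    // Application stands for your object extending ZparkioApp; the
    // command-line flags below are made up for this example.
    Application.main(Array("--input", "src/test/resources/sample"))
  }
}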

Examples

Simple example

Take a look at the simple example project to see working code using this library: SimpleProject.

More complex architecture

A full-fledged, production-ready project will obviously need more code than the simple example. For this purpose, and upon the suggestion of several awesome people, I added a more complex project: MoreComplexProject. It is a WIP and more will be added as I go.

Authors

Leo Benkel

  • leobenkel-github-badge
  • leobenkel-linkedin-badge
  • leobenkel-personal-badge
  • leobenkel-patreon-badge

Alternatives