Running a Spark Job in AWS Data Pipeline

If you want to run a Spark job in AWS Data Pipeline, add an EmrActivity and use command-runner.jar to submit the Spark job.

In the Step field of the EmrActivity node, enter the command as follows:
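A hedged example of what such a step can look like (the class name, jar location, and argument below are placeholders, not values from this post). Data Pipeline step arguments are comma-separated:

```
command-runner.jar,spark-submit,--deploy-mode,cluster,--class,com.example.MySparkApp,s3://my-bucket/jars/my-spark-app.jar,arg1
```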


Some useful resources

BLAS in MLlib

I was looking into the BLAS.scala routines for MLlib’s vectors and matrices. It looks like Spark uses F2jBLAS for level 1 routines.

// For level-1 routines, we use Java implementation.
private def f2jBLAS: NetlibBLAS = {
  if (_f2jBLAS == null) {
    _f2jBLAS = new F2jBLAS
  }
  _f2jBLAS
}

Here are some great resources about different BLAS implementations:

BLAS routines (Levels 1, 2, 3)

Render JSON using Jackson in Scala

If you use the Jackson JSON library in Scala, remember to register the DefaultScalaModule so that ObjectMapper can serialize Scala collections such as List and Array to JSON correctly. See below.

val objectMapper = new ObjectMapper()
objectMapper.registerModule(DefaultScalaModule)

Simple example:

import com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility
import com.fasterxml.jackson.annotation.{JsonProperty, PropertyAccessor}
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

object JsonExample {
  case class Car(@JsonProperty("id")  id: Long)
  case class Person(@JsonProperty("name") name: String = null,
                    @JsonProperty("cars") cars: Seq[Car] = null)

  def main(args: Array[String]): Unit = {
    val car1 = Car(12345)
    val car2 = Car(12346)
    val carsOwned = List(car1, car2)
    val person = Person(name = "wei", cars = carsOwned)

    val objectMapper = new ObjectMapper()
    objectMapper.registerModule(DefaultScalaModule)
    objectMapper.setVisibility(PropertyAccessor.ALL, Visibility.NONE)
    objectMapper.setVisibility(PropertyAccessor.FIELD, Visibility.ANY)
    println(s"person: ${objectMapper.writeValueAsString(person)}")
  }
}

person: {"name":"wei","cars":[{"id":12345},{"id":12346}]}

StackOverflowError when running ALS in Spark’s MLlib

If you ever encounter a StackOverflowError when running ALS in Spark’s MLlib, the solution is to turn on checkpointing as follows:
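A minimal sketch of enabling checkpointing (the checkpoint directory, column names, and ALS parameters below are illustrative assumptions, not values from this post). Checkpointing periodically materializes the intermediate RDDs, which truncates the lineage chain that grows with each ALS iteration and causes the deep recursion:

```scala
import org.apache.spark.ml.recommendation.ALS

// A checkpoint directory must be set, or ALS will not checkpoint at all.
// The HDFS path here is an assumed example.
spark.sparkContext.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

val als = new ALS()
  .setMaxIter(20)
  .setRank(10)
  .setCheckpointInterval(2) // checkpoint every 2 iterations (default is 10)
  .setUserCol("userId")
  .setItemCol("itemId")
  .setRatingCol("rating")
```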


Check out the Jira ticket regarding the issue and the pull request below.

Spark Dedup before Join

In Spark, as with any SQL left outer join, the result will contain more rows than the left table has if the right table contains duplicate join keys.

You could first drop the duplicates on the right table before performing the join, as follows:
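A minimal sketch, assuming two DataFrames `left` and `right` joined on a column named "id" (the DataFrame and column names are placeholders, not from this post):

```scala
import org.apache.spark.sql.DataFrame

// Deduplicate the right side on the join key so a single left row
// cannot fan out into multiple output rows.
def dedupThenJoin(left: DataFrame, right: DataFrame): DataFrame = {
  val rightDeduped = right.dropDuplicates("id")
  left.join(rightDeduped, Seq("id"), "left_outer")
}
```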


Or you could do a groupBy and aggregate:
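A sketch of the groupBy/aggregate variant under the same assumed column names; `first` keeps an arbitrary value per key, so this fits when any one of the duplicate rows is acceptable:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.first

// Collapse duplicates by keeping one "value" per "id" before joining.
def dedupByAgg(right: DataFrame): DataFrame =
  right.groupBy("id").agg(first("value").as("value"))
```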

Take a look at this dedup example.

Tuning Spark Jobs

Here are some useful resources on how to tune a Spark job in terms of the number of executors, executor memory, and number of cores.
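For reference, these knobs map onto standard spark-submit flags; the concrete values below are illustrative assumptions, not tuning recommendations:

```shell
# Submit with explicit executor sizing; class and jar names are placeholders.
spark-submit \
  --num-executors 10 \
  --executor-memory 4g \
  --executor-cores 4 \
  --class com.example.MySparkApp \
  my-spark-app.jar
```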