It could be argued that the most important component of any data analysis is the data itself. In Spark, there are a number of different data containers, primarily RDDs, Data Frames and Datasets. So, what are these containers, and when should we use each one?
A Resilient Distributed Dataset, aka RDD, is "a fault-tolerant collection of elements that can be operated on in parallel." In layman's terms, an RDD is a collection of "things", nothing more. These "things" can take many forms, including sentences from a book, records from a database or machine data represented in binary. To help us visualize, here's a basic RDD created from a CSV file. For reasons we'll touch on later, we'll be using Scala throughout this post.
val rdd_lines = sc.textFile("/databricks-datasets/adult/adult.data")
We see that creating an RDD can be done with one easy function. In this snippet, sc represents the default SparkContext. This is extremely important, but is better left for a later post. SparkContext offers the .textFile() function, which creates an RDD from a text file, parsing each line into its own element in the RDD. These lines happen to represent CSV records. However, there are many other common examples that use lines of free text.
It should also be noted that we can use the .collect() and .take() functions to view the contents of an RDD. The difference between .collect() and .take() is that .take() allows us to specify the number of elements we want to retrieve, whereas .collect() returns the entire RDD.
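As a quick sketch of the difference (using the RDD we just created):

```scala
// Retrieve only the first two elements -- safe even on very large RDDs
rdd_lines.take(2)

// Retrieve every element; this pulls the entire RDD back to the driver,
// so it should be avoided on large datasets
rdd_lines.collect()
```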
There's also an interesting quirk in the way that Scala displays arrays. Scala displays each element of the array, separating the elements with commas. However, it doesn't display quotes around string values. In our case, since our elements are actually CSV records, it ends up looking like each value is an individual element. This obviously isn't the case, because we used .take(2) to display only two elements. We added the red line in the picture to show where the elements split.
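A tiny pure-Scala illustration of the display quirk, using two hypothetical comma-filled strings:

```scala
val arr = Array("1,2,3", "4,5,6")

// The REPL renders this array as: Array(1,2,3, 4,5,6)
// The quotes around each string are dropped, so the commas inside a string
// are indistinguishable from the commas separating the two elements.
println(arr.mkString("Array(", ", ", ")"))
```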
While we could use this RDD for the rest of this post, it will probably make more sense later if we split the data along the commas to create something that more closely resembles a real-world dataset.
val rdd_adult = rdd_lines
.filter(f => f != "")
.map(l => l.split(", "))
The code to transform the raw lines of CSV text into their own arrays is a little more complex and deserves its own post. The official Spark programming guide is a good place to get started with Transformations and Actions. Fortunately, splitting the elements into their own sub-arrays also fixed the awkward display issue. Next, let's look at Data Frames.
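As a quick orientation: transformations like .filter() and .map() are lazy and simply describe a new RDD, while actions like .count() or .take() actually trigger a computation. A minimal sketch:

```scala
// Transformations: lazily build up a description of a new RDD; nothing runs yet
val rdd_adult = rdd_lines
  .filter(f => f != "")      // drop empty lines
  .map(l => l.split(", "))   // split each CSV record into an array of values

// Action: forces Spark to actually evaluate the pipeline above
val recordCount = rdd_adult.count()
```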
|Unnamed and Untyped Data Frame|
val df_adult_nonames = sqlContext.read.csv("/databricks-datasets/adult/adult.data")
We see that creating a Data Frame from a CSV is a good deal simpler than creating the RDD was. The .read.csv() function abstracts away what we had to do manually before. A major difference is that we create Data Frames using a SQLContext, which is a layer on top of the SparkContext that allows us to perform more SQL-like operations. Again, contexts are very important and deserve their own post.
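For reference, a SQLContext can be built directly on top of an existing SparkContext (in Databricks notebooks, both sc and sqlContext are pre-created for you):

```scala
import org.apache.spark.sql.SQLContext

// Wrap the existing SparkContext to gain access to Data Frame / SQL operations
val sqlContext = new SQLContext(sc)
```

In Spark 2.x and later, SparkSession wraps both contexts into a single entry point, but the sqlContext style used in this post still works.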
Storing the data as a Data Frame gives us the substantial advantage of storing each individual data point in a named column. This allows us to easily manipulate individual columns. We'll see more on this later. The previous screenshot showed that we can create Data Frames without supplying any column information at all. However, we also have the ability to supply the column names so we don't have to use the default ones.
|Named and Typed Data Frame|
val df_adult_names = sqlContext.read
  .option("inferSchema", "true")
  .csv("/databricks-datasets/adult/adult.data")
  // Column names follow the UCI Adult dataset's schema
  .toDF("age", "workclass", "fnlwgt", "education", "education_num",
    "marital_status", "occupation", "relationship", "race", "sex",
    "capital_gain", "capital_loss", "hours_per_week", "native_country", "income")
We see that the .option() and .toDF() functions allow us to infer the schema and pass the column names easily. Another difference between RDDs and Data Frames is that we display Data Frames using the .show() function instead of .take(). The .collect() and .take() functions still work on Data Frames, but they return local arrays of Rows rather than displaying a table. Data Frames and Datasets also have a .rdd attribute that converts them back into RDDs.
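A quick sketch of those display and conversion options:

```scala
// .show() pretty-prints the first rows as an ASCII table
df_adult_names.show(2)

// .take() and .collect() return a local Array[Row] on the driver
val firstRows = df_adult_names.take(2)

// .rdd converts the Data Frame into an RDD[Row]
val rdd_rows = df_adult_names.rdd
```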
|Take from Data Frame|
Speaking of Datasets, let's take a look at one. Basically, Data Frames are a special type of Dataset. Technically speaking, a Data Frame is an UNTYPED Dataset of Rows. A Row is designed to closely resemble a record from a relational table: it contains a fixed number of elements that represent individual values. We can also use STRONGLY-TYPED Datasets of custom classes other than Rows. For instance, we can create a Dataset from our named and typed Data Frame by explicitly defining a case class for it.
// The field names and types must match the Data Frame's columns
case class Adult (
  age: Int, workclass: String, fnlwgt: Int, education: String,
  education_num: Int, marital_status: String, occupation: String,
  relationship: String, race: String, sex: String, capital_gain: Int,
  capital_loss: Int, hours_per_week: Int, native_country: String,
  income: String
)

val ds_adult = df_adult_names.as[Adult]
Having a strongly-typed Dataset does offer a few advantages from a deep technical perspective. Simply put, if we can be certain that an object is of a certain type, then we can build more robust applications. Interestingly, a major difference between Data Frames and Datasets is that Datasets can be reverted back to the original objects, whereas Data Frames cannot. For instance, assume that we have an RDD of objects. If we cast this RDD to a Dataset and back to an RDD, then we will have our original RDD back. However, if we cast this RDD to a Data Frame and back to an RDD, then we will have a new RDD of Rows because the Data Frame can only hold Rows.
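A sketch of the round trip described above, using a hypothetical Person case class:

```scala
case class Person(name: String, age: Int)

import sqlContext.implicits._

val rdd_people = sc.parallelize(Seq(Person("Ana", 34), Person("Bo", 28)))

// RDD -> Dataset -> RDD: the element type survives the round trip
val rdd_back = rdd_people.toDS().rdd   // still an RDD[Person]

// RDD -> Data Frame -> RDD: the element type is lost
val rdd_rows = rdd_people.toDF().rdd   // now an RDD[Row]
```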
Now, let's try a few basic data manipulation operations. First, let's select two "columns" from our data, age and income.
val rdd_select = rdd_adult
.map(x => (x(0), x(14)))
|Select Data Frame|
val df_select = df_adult_names
  .select("age", "income")
val ds_select = ds_adult
  .select("age", "income")
We see that RDDs require the awkward .map() function, where we build an anonymous function and reference the "columns" (in RDD terms, they are actually elements within a tuple) by their indices. Data Frames and Datasets, on the other hand, allow us to use the much more natural .select() function and reference the columns by their names. However, if we look very closely at the output for the Dataset command, we see that the .select() function returned a Data Frame, not a Dataset. We need to remember that Datasets are simply strongly-typed Data Frames. Therefore, we can cast the result back into a Dataset by creating a new case class for the new object that we are creating.
|Select Dataset (Typed)|
case class Adult_select (age: Int, income: String)

val ds_select = ds_adult
  .select("age", "income")
  .as[Adult_select]
Obviously, this trend will repeat itself for every step, so we won't mention it again. Interestingly, it's also possible to operate on Datasets (but NOT Data Frames) using the .map() function.
|Select Dataset (map)|
case class Adult_map (age: Int, income: String)

val ds_map = ds_adult
  .map(x => (x.age, x.income))
  .toDF("age", "income")
  .as[Adult_map]
This is another advantage of Datasets. However, using the .map() function with an anonymous function removes the column names, resulting in even more code to add them back. All in all, the .select() approach is much simpler. Next, let's take this one step further and create a new column, ageinmonths.
|New Column RDD|
val rdd_col = rdd_adult
.map(x => (x(0), x(0).toInt * 12))
|New Column Data Frame|
val df_col = df_adult_names
.withColumn("ageinmonths", $"age" * 12)
|New Column Dataset|
case class Adult_col (age: Int, income: String, ageinmonths: Int)

val ds_col = ds_adult
  .withColumn("ageinmonths", $"age" * 12)
  .select("age", "income", "ageinmonths")
  .as[Adult_col]
We see that the RDD code leverages the .map() function again, while the Data Frame and Dataset code leverage the .withColumn() and .select() functions. Again, we see that the Data Frame and Dataset code sacrifices compactness for readability. We've heard that there's a performance difference as well. However, given the simplistic operations and small dataset, we were not able to verify this. We'll dig into this in a later post.
To back up to a comment at the beginning of this post, why did we choose Scala to showcase this functionality instead of Python, a language that we are much more comfortable with and that is honestly much more readable? Per the official Spark documentation,
Python does not have the support for the Dataset API. But due to Python's dynamic nature, many of the benefits of the Dataset API are already available (i.e. you can access the field of a row by name naturally row.columnName). The case for R is similar.

While this isn't a post about the differences between Spark programming languages, we do see an interesting dichotomy arising. Simply put, Java and Scala are great languages for developing robust, programmer-friendly applications, while Python and R are great languages for creating rich, analyst-friendly analyses. In fact, it's quite common for analysts to use PySpark or SparkR to munge through large amounts of data, then pull a much smaller portion of that data into more manageable objects, like Pandas or R Data Frames, for deeper analysis or visualizations. This allows them to leverage the power of Spark when it's needed and keep it simpler when it isn't.
This seems like a great place to stop for today. We hope that this post opened your eyes a little to the differences between RDDs, Data Frames and Datasets, as well as how you can leverage them in your analyses. Stay tuned for the next post, where we'll dig deeper and create even more complex transformations. Thanks for reading. We hope you found this informative.
Service Engineer - FastTrack for Azure