As part of this session we will see the basic transformations we can perform on top of Data Frames, such as filtering, joins, aggregations etc., using SQL. We will then build an end to end application by taking a simple problem statement.
Spark SQL – Overview
Problem Statement – Get daily product revenue
Relationship with Hive
Selection or Projection – select clause
Filtering data – where clause
Joining Data Sets
Aggregations using group by and functions
Sorting data
Development Life Cycle (Daily Product Revenue)
Spark SQL – Overview
Let us recap Data Frame Operations. They are one of the two ways we can process Data Frames, the other being Spark SQL, which we will use in this session.
Selection or Projection – select clause
Filtering data – where clause
Joins – join (supports outer join as well)
Aggregations – group by with support for functions such as sum, avg, min, max etc
Sorting – order by
Analytics Functions – aggregations, ranking and windowing functions
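As a quick illustration of the two approaches, here is a minimal sketch, assuming a SparkSession named spark and an orders Data Frame are already available:

// Data Frame Operations – APIs invoked directly on the Data Frame
orders.select("order_id", "order_status").
  where("order_status = 'COMPLETE'").
  show()

// Spark SQL – register the Data Frame as a temporary view and run SQL on it
orders.createOrReplaceTempView("orders")
spark.sql("select order_id, order_status from orders where order_status = 'COMPLETE'").show()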
Problem Statement – Get daily product revenue
Here is the problem statement for which we will be exploring Spark SQL to come up with the final solution: get the revenue for each product for each day (daily product revenue) using the orders and order_items data sets, considering only COMPLETE or CLOSED orders.
Relationship with Hive
Before getting into the solution, let us understand how Spark SQL relates to Hive.
All Hive databases, tables and even partitions are nothing but directories in HDFS
We can create tables in Hive with column names and data types
Table names, column names, data types, location, file format and delimiter information are considered metadata
This metadata is stored in the metastore, which is typically a relational database such as MySQL, Postgres, Oracle etc
Once a table is created, its data can be queried or processed using HiveQL
HiveQL will be compiled into a Spark or Map Reduce job based on the configured execution engine
If Hive is integrated with Spark on the cluster, we should be able to query and process data from Hive tables using the Spark engine via the SparkSession object's sql API
Query output will be returned as a Data Frame
The SparkSession object's sql API can execute standard Hive commands such as show tables, show functions etc
Show functions – spark.sql("show functions").show(300, false)
Describe function – spark.sql("describe function substring").show(false)
We can also create/drop tables and insert/load data into tables using Hive syntax as part of the sql function of the SparkSession object
As part of the SparkSession object's read interface, there is an API (such as spark.read.table) which facilitates reading data from a Hive table into a Data Frame
The write interface of a Data Frame provides APIs such as saveAsTable, insertInto etc to directly write a Data Frame into a Hive table
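Here is a minimal sketch tying these integration points together, assuming Hive support is enabled on the cluster (the table name orders_backup is illustrative):

import org.apache.spark.sql.SparkSession

// Hive integration requires enableHiveSupport while building the session
val spark = SparkSession.builder.
  appName("Spark SQL - Hive Integration").
  enableHiveSupport().
  getOrCreate()

spark.sql("show tables").show(false) // standard Hive commands via the sql API
val orders = spark.read.table("orders") // read a Hive table into a Data Frame
orders.write.mode("overwrite").saveAsTable("orders_backup") // write a Data Frame into a Hive table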
Selection or Projection – select clause
Now let us see how we can project data the way we want using select.
We can run queries directly against Hive tables or register Data Frames as temporary views/tables.
We can use select and fetch data from the fields we are looking for.
We can refer to columns by name, optionally qualified with the table or view name, in the select clause – e.g.: spark.sql("select order_id, order_date from orders").show()
We can apply necessary functions to manipulate data while it is being projected – spark.sql("select substring(order_date, 1, 7) from orders").show()
We can give aliases to the derived fields using alias function – spark.sql("select substring(order_date, 1, 7) as order_month from orders").show()
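For these queries to work against Data Frames (as opposed to Hive tables), the Data Frames have to be registered as temporary views first. A minimal sketch, assuming orders has already been read into a Data Frame:

// Register the Data Frame so it can be referenced by name in SQL
orders.createOrReplaceTempView("orders")
spark.sql("select order_id, order_date from orders").show()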
Filtering data – where clause
We can use the where clause to filter the data.
We can compare columns against literal values in the where clause – e.g.: spark.sql("select * from orders where order_status = 'COMPLETE'").show()
Make sure both the orders and order_items Data Frames are created and registered as temporary views
Let us see a few more examples (one way of writing them is sketched after the gist below)
Get orders which are either COMPLETE or CLOSED
Get orders which are either COMPLETE or CLOSED and placed in month of 2013 August
Get order items where order_item_subtotal is not equal to product of order_item_quantity and order_item_product_price
Get all the orders which are placed on first of every month
[gist]a7af220444e29bebaa023cb31caa5bb1[/gist]
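Here is one possible way of writing these filters, assuming orders and order_items are registered as temporary views:

// Orders which are either COMPLETE or CLOSED
spark.sql("select * from orders " +
  "where order_status in ('COMPLETE', 'CLOSED')").show()

// COMPLETE or CLOSED orders placed in August 2013
// (assumes order_date strings start with yyyy-MM)
spark.sql("select * from orders " +
  "where order_status in ('COMPLETE', 'CLOSED') and order_date like '2013-08%'").show()

// Order items where subtotal is not quantity times product price
spark.sql("select * from order_items " +
  "where order_item_subtotal != order_item_quantity * order_item_product_price").show()

// Orders placed on the first of every month
// (assumes order_date can be cast to a timestamp)
spark.sql("select * from orders " +
  "where date_format(order_date, 'dd') = '01'").show()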
Joining Data Sets
Quite often we need to deal with multiple data sets which are related to each other.
We first need to understand the relationships between the data sets
All our data sets have relationships defined between them.
orders and order_items are transaction tables. orders is parent and order_items is child. Relationship is established between the two using order_id (in order_items, it is represented as order_item_order_id)
We also have product catalog normalized into 3 tables – products, categories and departments (with relationships established in that order)
We also have customers table
There is relationship between customers and orders – customers is parent data set as one customer can place multiple orders.
There is relationship between product catalog and order_items via products – products is parent data set as one product can be ordered as part of multiple order_items.
Determine the type of join – inner or outer (left or right or full)
We can perform joins using ANSI syntax – join along with the on clause
We can also perform outer joins (left or right or full)
Let us see a few examples (one way of writing them is sketched after the gist below)
Get all the order items corresponding to COMPLETE or CLOSED orders
Get all the orders where there are no corresponding order_items
Check if there are any order_items where there is no corresponding order in orders data set
[gist]7892747a38ce1d03396d649fe5dabe93[/gist]
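One possible way of expressing these joins, assuming the same temporary views:

// Order items corresponding to COMPLETE or CLOSED orders (inner join)
spark.sql("select oi.* from orders o " +
  "join order_items oi on o.order_id = oi.order_item_order_id " +
  "where o.order_status in ('COMPLETE', 'CLOSED')").show()

// Orders with no corresponding order_items (left outer join, check for null)
spark.sql("select o.* from orders o " +
  "left outer join order_items oi on o.order_id = oi.order_item_order_id " +
  "where oi.order_item_order_id is null").show()

// Order items with no corresponding order (right outer join, check for null)
spark.sql("select oi.* from orders o " +
  "right outer join order_items oi on o.order_id = oi.order_item_order_id " +
  "where o.order_id is null").show()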
Aggregations using group by and functions
Many times we want to perform aggregations such as sum, average, minimum, maximum etc. within each group. We need to first group the data and then perform the aggregation.
group by is the clause which can be used to group the data by one or more columns
Once data is grouped we can perform all supported aggregations – sum, avg, min, max etc
Let us see a few examples (one way of writing them is sketched after the gist below)
Get count by status from orders
Get revenue for each order id from order items
Get daily product revenue (order_date and order_item_product_id are part of keys, order_item_subtotal is used for aggregation)
[gist]32aafb614e975890293a1f044e622a86[/gist]
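A sketch of these aggregations, assuming the same temporary views:

// Count by status from orders
spark.sql("select order_status, count(1) as status_count " +
  "from orders group by order_status").show()

// Revenue for each order id from order items
spark.sql("select order_item_order_id, sum(order_item_subtotal) as order_revenue " +
  "from order_items group by order_item_order_id").show()

// Daily product revenue from COMPLETE or CLOSED orders
spark.sql("select o.order_date, oi.order_item_product_id, " +
  "sum(oi.order_item_subtotal) as revenue " +
  "from orders o join order_items oi on o.order_id = oi.order_item_order_id " +
  "where o.order_status in ('COMPLETE', 'CLOSED') " +
  "group by o.order_date, oi.order_item_product_id").show()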
Sorting data
Now let us see how we can sort the data using order by.
order by can be used to sort the data
We can perform composite sorting by using multiple fields
By default data will be sorted in ascending order
We can change the order by using desc
Let us see a few examples (one way of writing them is sketched after the gist below)
Sort orders by status
Sort orders by date and then by status
Sort order items by order_item_order_id and order_item_subtotal descending
Take daily product revenue data and sort in ascending order by date and then descending order by revenue.
[gist]7c5b7b7d8de2c8ae718032e8fd8113a4[/gist]
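A sketch of these sorts; the last query assumes the daily product revenue result from the previous section has been registered as a temporary view named daily_product_revenue (an illustrative name):

// Sort orders by status
spark.sql("select * from orders order by order_status").show()

// Sort orders by date and then by status
spark.sql("select * from orders order by order_date, order_status").show()

// Sort order items by order id ascending and subtotal descending
spark.sql("select * from order_items " +
  "order by order_item_order_id, order_item_subtotal desc").show()

// Daily product revenue sorted by date ascending, then revenue descending
spark.sql("select * from daily_product_revenue " +
  "order by order_date, revenue desc").show()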
Development Life Cycle (Daily Product Revenue)
Let us develop the application using IntelliJ and run it on the cluster.
Make sure application.properties has the required input path and output path along with the execution mode
Create new package retail_db_sql and new object GetDailyProductRevenueSQL
Read orders and order_items data into data frames
Filter for complete and closed orders
Join with order_items
Aggregate to get revenue for each order_date and order_item_product_id
Sort in ascending order by date and then descending order by revenue
Save the output in CSV format
Validate using IntelliJ
Ship it to the cluster, run it on the cluster and validate.
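Putting it all together, here is a sketch of the application. The configuration keys, column names and file layout are assumptions for illustration – it presumes application.properties is loaded with Typesafe Config, has per-environment sections (e.g. dev, prod) with execution.mode, input.base.dir and output.base.dir, and that the input is comma-separated text without a header:

package retail_db_sql

import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.SparkSession

object GetDailyProductRevenueSQL {
  def main(args: Array[String]): Unit = {
    // Environment (e.g. dev or prod) is passed as the first argument – assumption
    val envConf = ConfigFactory.load().getConfig(args(0))

    val spark = SparkSession.builder.
      appName("Get Daily Product Revenue").
      master(envConf.getString("execution.mode")).
      getOrCreate()

    val inputBaseDir = envConf.getString("input.base.dir")
    val outputBaseDir = envConf.getString("output.base.dir")

    // Read orders and order_items into Data Frames and assign column names
    val orders = spark.read.option("inferSchema", "true").
      csv(inputBaseDir + "/orders").
      toDF("order_id", "order_date", "order_customer_id", "order_status")
    val orderItems = spark.read.option("inferSchema", "true").
      csv(inputBaseDir + "/order_items").
      toDF("order_item_id", "order_item_order_id", "order_item_product_id",
        "order_item_quantity", "order_item_subtotal", "order_item_product_price")

    orders.createOrReplaceTempView("orders")
    orderItems.createOrReplaceTempView("order_items")

    // Filter for COMPLETE/CLOSED orders, join with order_items,
    // aggregate by date and product, then sort
    val dailyProductRevenue = spark.sql(
      """select o.order_date, oi.order_item_product_id,
        |  round(sum(oi.order_item_subtotal), 2) as revenue
        |from orders o join order_items oi
        |  on o.order_id = oi.order_item_order_id
        |where o.order_status in ('COMPLETE', 'CLOSED')
        |group by o.order_date, oi.order_item_product_id
        |order by o.order_date, revenue desc""".stripMargin)

    // Save the output in CSV format
    dailyProductRevenue.write.mode("overwrite").csv(outputBaseDir + "/daily_product_revenue")
  }
}

Once validated locally, the application can be packaged into a jar and submitted on the cluster with spark-submit, passing the environment name as the argument.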