Data cleaning, pre-processing, and Analytics on a million movies using Spark and Scala.

Thomas-George-T/Movies-Analytics-in-Spark-and-Scala

Overview

This project answers analytical questions on the semi-structured MovieLens dataset of one million records using Spark and Scala. It demonstrates Spark RDDs, Spark SQL, and Spark DataFrames, executed on the Spark shell (REPL) through the Scala API. The aim is to draw useful insights about users and movies by leveraging these different Spark APIs.
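To illustrate the shape of such a query without requiring a Spark installation, here is a minimal plain-Scala sketch of an "average rating per movie" computation, mirroring the map / reduceByKey / mapValues pattern of an RDD version. The field layout follows the MovieLens ratings format (`userId::movieId::rating::timestamp`); the object and method names are illustrative, not taken from the repository.

```scala
// Plain-Scala sketch (no Spark needed) of the kind of query this project runs:
// average rating per movie, mirroring the RDD map -> reduceByKey -> mapValues shape.
object RatingsSketch {
  def avgRatingPerMovie(lines: Seq[String]): Map[String, Double] =
    lines
      .map(_.split("::"))               // parse userId::movieId::rating::timestamp
      .map(f => (f(1), f(2).toDouble))  // (movieId, rating), like an RDD pair
      .groupBy(_._1)                    // group by movieId, like reduceByKey's shuffle
      .map { case (movie, pairs) =>
        (movie, pairs.map(_._2).sum / pairs.size)  // mean rating per movie
      }

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      "1::10::4.0::978300760",
      "2::10::2.0::978302109",
      "3::20::5.0::978301968"
    )
    println(avgRatingPerMovie(sample))
  }
}
```

In Spark, the same logic runs distributed over partitions; here Scala collections stand in for the RDD so the transformation chain can be read in isolation.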

Major Components

Apache Spark, Scala

Environment

  • Linux (Ubuntu 15.04)
  • Hadoop 2.7.2
  • Spark 2.0.2
  • Scala 2.11

Installation steps

  1. Clone the repository:

    git clone https://github.com/Thomas-George-T/Movies-Analytics-in-Spark-and-Scala.git
    
  2. In the repo, navigate to the Spark RDD, Spark SQL, or Spark DataFrames location as needed.

  3. Run the execute script to view the results:

    sh execute.sh
    
  4. execute.sh passes the Scala code through spark-shell and then displays the findings from the results folder in the terminal.
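The wrapper script itself is not reproduced here; a minimal sketch of what such an execute.sh typically does might look like the following. The file names `query.scala` and `results/part-00000` are assumptions for illustration, not the repository's actual paths.

```shell
#!/bin/sh
# Hypothetical sketch: run the Scala query through spark-shell in batch,
# then print the collected output. File names are illustrative only.
spark-shell -i query.scala <<'EOF'
:quit
EOF
cat results/part-00000
```

The here-document feeds `:quit` to the REPL so the shell exits after the loaded script finishes, since `spark-shell -i` otherwise drops into an interactive prompt.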

Analytical Queries

Spark RDD

Spark SQL

Spark DataFrames

Miscellaneous

Note: The results were collected and repartitioned into a single text file. This is not recommended practice, since it significantly impacts performance, but it is done here for the sake of readability.
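As a sketch of the pattern the note describes (this assumes a Spark installation and uses an illustrative input path and query, not the repository's actual code): `repartition(1)` funnels every partition through a single task, so the output lands in one part file, which is easy to read but serializes the final write.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: count ratings per movie, then force a single output file.
val spark = SparkSession.builder.appName("RepartitionSketch").getOrCreate()
val ratingsPerMovie = spark.sparkContext
  .textFile("ml-1m/ratings.dat")        // userId::movieId::rating::timestamp
  .map(_.split("::")(1))                // extract movieId
  .map(m => (m, 1))
  .reduceByKey(_ + _)

// repartition(1) collapses all partitions into one, so saveAsTextFile writes a
// single part file at the cost of parallelism in the final stage.
ratingsPerMovie.repartition(1).saveAsTextFile("results/ratings_per_movie")
```

Without the `repartition(1)`, Spark would write one part file per partition, which is faster but scatters the results across many files.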

Mentions

This project was featured in Data Machina Issue #130, listed at number 3 under ScalaTOR. Thank you for the listing!

License

This repository is licensed under the Apache License 2.0. See the License file for more details.