Assignment on Spark

One of the most common uses of Spark is analyzing and processing log files. In this assignment, we will put Spark to good use for an OSS project that retrieves and downloads data from GitHub, called GHTorrent. GHTorrent works by following the GitHub event timeline and then recursively and exhaustively retrieving all items linked from each event. To make monitoring and debugging easier, the GHTorrent maintainers use extensive runtime logging in the downloader scripts.

Here is an extract of what the GHTorrent log looks like:

DEBUG, 2017-03-23T10:02:27+00:00, ghtorrent-40 -- ghtorrent.rb: Repo EFForg/https-everywhere exists
DEBUG, 2017-03-24T12:06:23+00:00, ghtorrent-49 -- ghtorrent.rb: Repo Shikanime/print exists
INFO, 2017-03-23T13:00:55+00:00, ghtorrent-42 -- api_client.rb: Successful request. URL: https://api.github.com/repos/CanonicalLtd/maas-docs/issues/365/events?per_page=100, Remaining: 4943, Total: 88 ms
WARN, 2017-03-23T20:04:28+00:00, ghtorrent-13 -- api_client.rb: Failed request. URL: https://api.github.com/repos/greatfakeman/Tabchi/commits?sha=Tabchi&per_page=100, Status code: 404, Status: Not Found, Access: ac6168f8776, IP: 0.0.0.0, Remaining: 3031
DEBUG, 2017-03-23T09:06:09+00:00, ghtorrent-2 -- ghtorrent.rb: Transaction committed (11 ms)

Each log line consists of a standard part (up to .rb:) and an operation-specific part. The standard part contains the following fields (a small parsing sketch follows the list):

  1. Logging level, one of DEBUG, INFO, WARN, ERROR (separated by ,)
  2. A timestamp (separated by ,)
  3. The downloader id, denoting the downloader instance (separated by --)
  4. The retrieval stage, denoted by the Ruby class name, one of:
    • event_processing
    • ght_data_retrieval
    • api_client
    • retriever
    • ghtorrent
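
For illustration, here is a minimal parsing sketch for the standard part. It is only a suggestion, not the required solution: the regular expression, the field names, and the file name in the comment are our own assumptions.

import re

# level, timestamp, downloader id, stage (Ruby file name without .rb), message
LINE_RE = re.compile(r'(\w+), ([^,]+), (\S+) -- (\w+)\.rb: (.*)')

def parse_line(line):
    m = LINE_RE.match(line)
    return m.groups() if m else None

# e.g. rdd = sc.textFile("ghtorrent-logs.txt").map(parse_line).filter(lambda x: x is not None)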

Grade: This assignment consists of 130 points. You need to collect 100 to get a 10!

Loading and parsing the file

For the remainder of the assignment, you need to use this file (~300MB compressed). Make sure you use the correct kernel in your notebook (either the PySpark kernel or the Apache Toree Scala kernel).

T (5 points): Download the log file and write a function to load it in an RDD. If you are doing this in Scala, make sure you use a case class to map the file fields.

In [ ]:
 

T (5 points): How many lines does the RDD contain?

In [ ]:
 

Basic counting and filtering

T (5 points): Count the number of WARNing messages

In [ ]:
 

T (10 points): How many repositories were processed in total? Use the api_client lines only.

In [ ]:
 

Analytics

T (5 points): Which client did most HTTP requests?

In [ ]:
 

T (5 points): Which client did most FAILED HTTP requests?

In [ ]:
 

T (5 points): What is the most active hour of day?

In [ ]:
 

T (5 points): What is the most active repository?

Hint: use messages from the ghtorrent.rb layer only

In [ ]:
 

T (5 points): Which access keys are failing most often?

Hint: extract the Access: ... part from failing requests
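
A possible extraction sketch (the regular expression is an assumption based on the sample WARN line above, not part of the assignment):

import re

ACCESS_RE = re.compile(r'Access: (\S+),')

def access_key(message):
    m = ACCESS_RE.search(message)
    return m.group(1) if m else None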

In [ ]:
 

Indexing

Typical operations on RDDs require grouping on a specific part of each record and then calculating counts per group. While this can be achieved with the group_by family of functions, it is often useful to create a structure called an inverted index. An inverted index creates a 1..n mapping from the record part to all occurrences of records with that value in the dataset. For example, if the dataset looks like the following:

col1,col2,col3
A,1,foo
B,1,bar
C,2,foo
D,3,baz
E,1,foobar

an inverted index on col2 would look like

1 -> [(A,1,foo), (B,1,bar), (E,1,foobar)]
2 -> [(C,2,foo)]
3 -> [(D,3,baz)]
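
As a plain-Python illustration of the structure (this is not the Spark implementation asked for below; the row tuples simply mirror the toy dataset above):

from collections import defaultdict

rows = [("A", 1, "foo"), ("B", 1, "bar"), ("C", 2, "foo"),
        ("D", 3, "baz"), ("E", 1, "foobar")]

index = defaultdict(list)
for row in rows:
    index[row[1]].append(row)   # group each row under its col2 value

# index[1] == [("A", 1, "foo"), ("B", 1, "bar"), ("E", 1, "foobar")]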

Inverted indexes enable us to quickly access precalculated partitions of the dataset. To see their effect on large datasets, let's compute an inverted index on the downloader id field.

T (10 points): Create a function that, given an RDD[Seq[T]] and an index position (denoting which field to index on), computes an inverted index on the RDD.

def inverted_index(rdd, idx_id):
    """Return an RDD keyed by field idx_id, mapping each key to all matching records."""
    pass
In [ ]:
 

T (5 points): Compute the number of different repositories accessed by the client ghtorrent-22 (without using the inverted index).

In [ ]:
 

T (5 points): Compute the number of different repositories accessed by the client ghtorrent-22 (using the inverted index you calculated above). Remember that Spark computations are lazy, so you need to run the inverted index generation before you actually use the index.
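
A minimal materialization sketch, assuming the parsed log RDD is called log_rdd and the index is built with the inverted_index function above (both names, and the field position, are placeholders):

idx = inverted_index(log_rdd, 2).cache()  # field 2 is the downloader id in our parsing sketch
idx.count()                               # an action forces the cached index to be computed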

In [ ]:
 

T (5 points): You should have noticed some difference in performance. Why is the indexed version faster?

In [ ]:
 

T (5 points): Read up about groupByKey. Explain in 3 lines why it is the worst function in the Spark API and what you can use instead.

In [ ]:
 

Joining

We now need to monitor the behaviour of interesting repositories. Use this link to download a list of the repos we are interested in. This list was generated on Oct 10, 2017, more than 7 months after the log file was created. The format of the file is CSV, and the meaning of the fields can be found in the documentation on the GHTorrent project web site.

T (5 points): Read in the CSV file to an RDD (let's call it interesting). How many records are there?

In [ ]:
 

T (10 points): How many records in the log file refer to entries in the interesting file?

Hint: Yes, you need to join :) First, you need to key both RDDs by the repository name to do such a join.
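
A sketch of the key-then-join pattern (repo_name_of, interesting_repo_name, log_rdd and interesting are hypothetical names; use whatever your own parsing produced):

# key both RDDs by the repository name, then join
keyed_logs        = log_rdd.map(lambda rec: (repo_name_of(rec), rec))
keyed_interesting = interesting.map(lambda rec: (interesting_repo_name(rec), rec))
joined            = keyed_logs.join(keyed_interesting)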

In [ ]:
 

T (5 points): Which of the interesting repositories has the most failed API calls?

In [ ]:
 

DataFrames

T (10 points) Read in the interesting repos file using Spark's CSV parser. Convert the log RDD to a DataFrame.

In [ ]:
 

T (15 points) Repeat all 3 queries in the "Joining" section above using either SQL or the DataFrame API. Measure the time it takes to execute them.

In [ ]:
 

T (5 points) Select one of the queries and compare the execution plans between the RDD version and your DataFrame version (you can see them by going to localhost:4040 in your VM). What differences do you see?

In [ ]: