Lambda architecture: how to build a Big data pipeline, part 1

With a large number of smart devices generating a huge amount of data, it would be ideal to have a Big data system that stores the full history of that data.

However, processing such large data sets is too slow to keep device information updated in real time.

Both requirements, real-time tracking and accurately up-to-date results, can be satisfied by building a lambda architecture.

Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch and stream-processing methods.

This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data.

Bootstrapping a Lambda Project

The idea of this project is to provide you with a bootstrap for your next Lambda architecture.

We are addressing some of the main challenges that everyone faces when starting with Big data.

This project will help you get an understanding of the data processing world and save you a lot of time in setting up your initial Lambda architecture.

In this blog post, I will walk through some concepts and technologies that we have placed in our bootstrap Lambda project.

I’m not planning to go deep into the concepts and tools; there are plenty of posts about those out there. The intention here is to present an application example containing the patterns, tools, and technologies used to develop Big data processing.

In this project, we’ll use the Lambda architecture to analyse and process data from IoT-connected vehicles and send the processed data to a real-time traffic monitoring dashboard.

Some of the patterns, tools, and technologies you will see in this system: Spark, Spark Streaming, Docker, Kafka, WebSockets, Cassandra, Hadoop Distributed File System (HDFS), Spring Boot, and Spring Data, with everything developed using Java 8.

Infrastructure Management

In our project, all components are dynamically managed using Docker, which means you don’t need to worry about setting up your local environment; the only thing you need is to have Docker installed.

With separate components, you would have to manage the infrastructure for each of them.

Infrastructure as Code (IaC) was born as a solution to this challenge.

Everything that our application needs is described in a file (a Dockerfile).

Along with a docker-compose file, we orchestrate the multi-container application, and the entire service configuration is versioned, making the process of building and deploying the whole project easy.
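To give a feel for what that orchestration looks like, here is a minimal docker-compose sketch; the service names, images, and ports below are illustrative, not the project's actual configuration:

```yaml
# Hypothetical docker-compose sketch; images and ports are illustrative.
version: "3"
services:
  zookeeper:
    image: zookeeper
    ports: ["2181:2181"]
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on: [zookeeper]
  cassandra:
    image: cassandra
    ports: ["9042:9042"]
  producer:
    build: ./producer   # built from the subproject's own Dockerfile
    depends_on: [kafka]
```

Because the compose file lives in the repository, the whole topology is versioned together with the code.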

Data producing

In our project, we simulate a system of connected vehicles providing real-time information.

Those connected vehicles generate a huge amount of data that is extremely random and time-sensitive.

Obviously, there are no real IoT devices connected to our project, so we produce fake random data and send it to Kafka.

See the producer subproject.
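As a rough sketch of what generating such fake events can look like in plain Java 8 (the field names such as routeId and vehicleType are illustrative, not necessarily the project's actual schema; in the real producer subproject the resulting JSON is sent to a Kafka topic via a KafkaProducer):

```java
import java.util.Random;

public class FakeEventGenerator {
    private static final String[] ROUTES = {"Route-37", "Route-82", "Route-43"};
    private static final String[] TYPES = {"Car", "Bus", "Truck"};

    private final Random random;

    public FakeEventGenerator(long seed) {
        this.random = new Random(seed); // seeded for reproducible fake data
    }

    /** Builds one random vehicle event as a JSON string. */
    public String nextEvent() {
        String routeId = ROUTES[random.nextInt(ROUTES.length)];
        String type = TYPES[random.nextInt(TYPES.length)];
        int speed = 20 + random.nextInt(80); // fake speed in km/h
        // In the real producer this JSON would be pushed to Kafka, e.g.
        // producer.send(new ProducerRecord<>("iot-data-event", routeId, json));
        return String.format(
            "{\"routeId\":\"%s\",\"vehicleType\":\"%s\",\"speed\":%d,\"timestamp\":%d}",
            routeId, type, speed, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        FakeEventGenerator gen = new FakeEventGenerator(42);
        for (int i = 0; i < 3; i++) {
            System.out.println(gen.nextEvent());
        }
    }
}
```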

Stream processing

Stream processing allows us to process data in real time as it arrives and to quickly detect conditions within a short time of receiving it.

In terms of performance, the latency of batch processing is measured in minutes to hours, while the latency of stream processing is in seconds or milliseconds.

In our speed layer, we process the streaming data from Kafka using Spark Streaming, and two main tasks are done in this layer: first, the stream data is appended to HDFS for later batch processing; second, the data from the IoT-connected vehicles is analysed and processed.
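The heart of that analysis, keeping a running vehicle count per route and vehicle type, can be sketched in plain Java. In the project itself this is expressed as a stateful Spark Streaming transformation over the Kafka stream; the class and key names below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

/** Running vehicle counts keyed by (routeId, vehicleType). */
public class TrafficCounter {
    private final Map<String, Long> counts = new HashMap<>();

    /** Analogous to one stateful update step: fold one event into the state. */
    public void add(String routeId, String vehicleType) {
        counts.merge(routeId + "|" + vehicleType, 1L, Long::sum);
    }

    public long count(String routeId, String vehicleType) {
        return counts.getOrDefault(routeId + "|" + vehicleType, 0L);
    }

    public static void main(String[] args) {
        TrafficCounter counter = new TrafficCounter();
        counter.add("Route-37", "Bus");
        counter.add("Route-37", "Bus");
        counter.add("Route-37", "Car");
        System.out.println(counter.count("Route-37", "Bus")); // prints 2
    }
}
```

In Spark Streaming the same fold would run per key across micro-batches, with the framework keeping the state for you.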

Batch processing

The batch layer is responsible for creating the batch view from the master data set stored in the Hadoop Distributed File System (HDFS).

It might take a large amount of time to process that data; for this reason, we also have the real-time processing layer.

We process the batch data using Spark and store the pre-computed views in Cassandra.
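Conceptually, the batch view is just a full recomputation over the master data set. A toy sketch of that aggregation in plain Java 8 (the project does this with Spark over HDFS and writes the result to Cassandra; the names here are illustrative, and each record is reduced to just a routeId for brevity):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Recomputes the batch view (total count per route) from the full history. */
public class BatchViewBuilder {
    public static Map<String, Long> totalsByRoute(List<String> masterDataset) {
        return masterDataset.stream()
                .collect(Collectors.groupingBy(r -> r, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> view =
                totalsByRoute(Arrays.asList("Route-37", "Route-37", "Route-82"));
        System.out.println(view.get("Route-37")); // prints 2
    }
}
```

Because the whole history is re-read each run, the batch view is accurate but slow to refresh, which is exactly why the speed layer exists alongside it.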

Serving layer

Once the computed views from the batch and speed layers are stored in the Cassandra database, a Spring Boot application responds to ad-hoc queries by returning the pre-computed views in a dashboard that is automatically updated, using a WebSocket to push the most up-to-date report to the UI.
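The key serving-layer step is combining the two views before they reach the dashboard. A minimal sketch of that merge, assuming both views are simple per-route counts (in the project the views come from Cassandra tables and the merged result is pushed over the WebSocket; the names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** Merges the pre-computed batch view with the recent speed-layer view. */
public class ViewMerger {
    public static Map<String, Long> merge(Map<String, Long> batchView,
                                          Map<String, Long> speedView) {
        Map<String, Long> merged = new HashMap<>(batchView);
        // Add the recent (not yet batch-processed) counts on top of the batch totals.
        speedView.forEach((route, count) -> merged.merge(route, count, Long::sum));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Long> batch = new HashMap<>();
        batch.put("Route-37", 100L);
        Map<String, Long> speed = new HashMap<>();
        speed.put("Route-37", 5L);
        speed.put("Route-82", 2L);
        System.out.println(merge(batch, speed).get("Route-37")); // prints 105
    }
}
```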

Summary

Our Lambda project receives real-time IoT data events from connected vehicles, which are ingested into Spark through Kafka.

Using the Spark Streaming API, we processed and analysed the IoT data events and transformed them into vehicle counts for different types of vehicles on different routes.

Simultaneously, the data is also stored in HDFS for batch processing.

We performed a series of stateless and stateful transformations on the streams using the Spark Streaming API and persisted them to Cassandra database tables.

To get accurate views, we also run a batch process that creates a batch view in Cassandra.

We developed a responsive web traffic monitoring dashboard using Spring Boot, SockJS, and Bootstrap, which merges the two views from the Cassandra database before pushing the result to the UI over a WebSocket.

GitHub project

Traffic Data Monitoring Using IoT, Kafka and Spark Streaming
