Apache Spark is one of the most popular big data processing frameworks: a flexible engine that handles both batch and real-time data, and its unified design has made it a common choice for big data use cases. This course will help you get started with Apache Spark 2.0 and write big data applications for a variety of use cases. Although it is intended as an introduction, it also focuses on explaining the core concepts.

This practical guide provides a quick start to the Spark 2.0 architecture and its components, and teaches you how to set up Spark on your local machine. As we move ahead, you will be introduced to resilient distributed datasets (RDDs) and the DataFrame API, along with their corresponding transformations and actions. We then move on to the life cycle of a Spark application and the techniques used to debug slow-running applications. You will also go through Spark's built-in modules for SQL, streaming, machine learning, and graph analysis. Finally, the course lays out the best practices and optimization techniques that are key to writing efficient Spark applications.

By the end of this course, you will have a sound fundamental understanding of the Apache Spark framework and will be able to write and optimize Spark applications.