Apache Spark Fundamentals Training Course
Apache Spark is an analytics engine designed to distribute data across a cluster in order to process it in parallel. It contains modules for streaming, SQL, machine learning and graph processing.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to deploy an Apache Spark system for processing very large amounts of data.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Quickly read in and analyze very large data sets.
- Integrate Apache Spark with other machine learning tools.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction
- Apache Spark vs Hadoop MapReduce
Overview of Apache Spark Features and Architecture
Choosing a Programming Language
Setting up Apache Spark
Creating a Sample Application
Choosing the Data Set
Running Data Analysis on the Data
Processing of Structured Data with Spark SQL
Processing Streaming Data with Spark Streaming
Integrating Apache Spark with 3rd Party Machine Learning Tools
Using Apache Spark for Graph Processing
Optimizing Apache Spark
Troubleshooting
Summary and Conclusion
Requirements
- Experience with the Linux command line
- A general understanding of data processing
- Programming experience with Java, Scala, Python, or R
Audience
- Developers
Open Training Courses require 5+ participants.
Testimonials (5)
Many practical examples, different ways to approach the same problem, and sometimes not-so-obvious tricks to improve the current solution
Rafal - Nordea
Course - Apache Spark MLlib
very interactive...
Richard Langford
Course - SMACK Stack for Data Science
Sufficient hands-on, trainer is knowledgeable
Chris Tan
Course - A Practical Introduction to Stream Processing
Got to learn Spark Streaming, Databricks and AWS Redshift
Lim Meng Tee - Jobstreet.com Shared Services Sdn. Bhd.
Course - Apache Spark in the Cloud
practice tasks
Pawel Kozikowski - GE Medical Systems Polska Sp. z o.o.
Course - Python and Spark for Big Data (PySpark)
Upcoming Courses (minimum 5 participants)
Related Courses
Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
21 Hours
This course is aimed at developers and data scientists who wish to understand and implement artificial intelligence in their applications. Special focus is given to data analysis, distributed AI, and natural language processing.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at intermediate-level data scientists and engineers who wish to use Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark.
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.
The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice.
Note
- To request a customized training for this course, please contact us to arrange.
Introduction to Graph Computing
28 Hours
In this instructor-led, live training in Indonesia, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.
By the end of this training, participants will be able to:
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
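As an illustrative sketch of the kind of storage-related configuration covered above (property names are from the standard Hadoop s3a connector; all values are placeholders, not recommendations):

```
# conf/spark-defaults.conf -- illustrative entries only
spark.master                    yarn
# Keep event logs on the on-premise HDFS storage engine
spark.eventLog.enabled          true
spark.eventLog.dir              hdfs:///spark-logs
# s3a connector credentials for an alternative S3 store (placeholders)
spark.hadoop.fs.s3a.access.key  <ACCESS_KEY>
spark.hadoop.fs.s3a.secret.key  <SECRET_KEY>
```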
Hortonworks Data Platform (HDP) for Administrators
21 Hours
This instructor-led, live training in Indonesia (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a Spark + Hadoop solution.
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify, and support each component in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest data.
A Practical Introduction to Stream Processing
21 Hours
In this instructor-led, live training in Indonesia (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
Python and Spark for Big Data for Banking (PySpark)
14 Hours
Python is a high-level programming language known for its clear syntax and code readability. Spark is a data processing engine used for querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.
Target audience: intermediate-level professionals in the banking industry who are familiar with Python and Spark and wish to deepen their skills in big data processing and machine learning.
SMACK Stack for Data Science
14 Hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at data scientists who wish to use the SMACK stack to build data processing platforms for big data solutions.
By the end of this training, participants will be able to:
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
Administration of Apache Spark
35 Hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at beginner-level to intermediate-level system administrators who wish to deploy, maintain, and optimize Spark clusters.
By the end of this training, participants will be able to:
- Install and configure Apache Spark in various environments.
- Manage cluster resources and monitor Spark applications.
- Optimize the performance of Spark clusters.
- Implement security measures and ensure high availability.
- Debug and troubleshoot common Spark issues.
Apache Spark in the Cloud
21 Hours
Apache Spark's learning curve is steep at the beginning: it takes a lot of effort to get the first return. This course aims to jump through that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate RDDs from DataFrames, learn the Python and Scala APIs, understand executors and tasks, and more. Following best practices, this course also focuses strongly on cloud deployment, Databricks, and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS.
AUDIENCE:
Data Engineer, DevOps, Data Scientist
Spark for Developers
21 Hours
OBJECTIVE:
This course will introduce Apache Spark. The students will learn how Spark fits into the Big Data ecosystem, and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, Spark APIs, Spark SQL, Spark Streaming, machine learning, and GraphX.
AUDIENCE:
Developers / Data Analysts
Scaling Data Pipelines with Spark NLP
14 Hours
This instructor-led, live training in Indonesia (online or onsite) is aimed at data scientists and developers who wish to use Spark NLP, built on top of Apache Spark, to develop, implement, and scale natural language text processing models and pipelines.
By the end of this training, participants will be able to:
- Set up the necessary development environment to start building NLP pipelines with Spark NLP.
- Understand the features, architecture, and benefits of using Spark NLP.
- Use the pre-trained models available in Spark NLP to implement text processing.
- Learn how to build, train, and scale Spark NLP models for production-grade projects.
- Apply classification, inference, and sentiment analysis to real-world use cases (clinical data, customer behavior insights, etc.).
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in Indonesia, participants will learn how to use Python and Spark together to analyze big data as they work through hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real-world cases.
- Use different tools and techniques for big data analysis using PySpark.
Apache Spark MLlib
35 Hours
MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs.
It is divided into two packages:
- spark.mllib contains the original API built on top of RDDs.
- spark.ml provides a higher-level API built on top of DataFrames for constructing ML pipelines.
Audience
This course is directed at engineers and developers seeking to utilize a built-in machine learning library for Apache Spark.