
Apache Spark™ - Unified Engine for large-scale data analytics
Apache Spark™ is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
Overview - Spark 3.5.5 Documentation
It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for incremental computation and stream processing.
Documentation | Apache Spark
The documentation linked to above covers getting started with Spark, as well as the built-in components MLlib, Spark Streaming, and GraphX. In addition, this page lists other resources for learning Spark.
Quick Start - Spark 3.5.5 Documentation
This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark’s interactive shell (in Python or Scala), then show how to write applications in Java, Scala, and Python. To follow along with this guide, first download a packaged release of Spark from the Spark website.
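As a hedged illustration of that Quick Start flow, the short PySpark sketch below creates a SparkSession, loads a text file as a DataFrame of lines, and runs two simple actions; it assumes a local PySpark installation, and the file name README.md is only illustrative.

from pyspark.sql import SparkSession

# Entry point for a standalone application (the interactive shells create this for you).
spark = SparkSession.builder.appName("QuickStartSketch").getOrCreate()

# Load a text file as a DataFrame with a single "value" column of lines.
lines = spark.read.text("README.md")
print("line count:", lines.count())
print("first line:", lines.first())

spark.stop()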
Downloads - Apache Spark
As new Spark releases come out for each development stream, previous ones will be archived, but they are still available at Spark release archives. NOTE: Previous releases of Spark may be affected by security issues.
Examples | Apache Spark
Spark allows you to perform DataFrame operations with programmatic APIs, write SQL, perform streaming analyses, and do machine learning. Spark saves you from learning multiple frameworks and patching together various libraries to perform an analysis.
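As a rough sketch of the programmatic DataFrame operations described above, the snippet below builds a small in-memory DataFrame and aggregates it; the column names and data are made up for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("DataFrameExampleSketch").getOrCreate()

# Toy data: a category column and a numeric amount column.
sales = spark.createDataFrame(
    [("books", 12.0), ("books", 5.0), ("games", 20.0)],
    ["category", "amount"],
)

# Programmatic aggregation: total and average amount per category.
sales.groupBy("category").agg(
    F.sum("amount").alias("total"),
    F.avg("amount").alias("average"),
).show()

spark.stop()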
PySpark Overview — PySpark 3.5.5 documentation - Apache Spark
February 23, 2025 · PySpark combines Python’s learnability and ease of use with the power of Apache Spark to enable processing and analysis of data at any size for everyone familiar with Python. PySpark supports all of Spark’s features such as Spark SQL, DataFrames, Structured Streaming, Machine Learning (MLlib) and Spark Core.
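To give a concrete, if simplified, feel for the MLlib support mentioned above, here is a small sketch that fits a logistic regression model on toy data; the feature and label columns are invented for the example and a local PySpark installation is assumed.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("MLlibSketch").getOrCreate()

# Toy training data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.1, 3.3, 1.0), (0.1, 0.2, 0.0)],
    ["x1", "x2", "label"],
)

# Assemble the feature columns into a single vector column, then fit the model.
assembled = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(assembled)
model.transform(assembled).select("x1", "x2", "label", "prediction").show()

spark.stop()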
Spark SQL & DataFrames | Apache Spark
Integrated: Seamlessly mix SQL queries with Spark programs. Spark SQL lets you query structured data inside Spark programs, using either SQL or a familiar DataFrame API. Usable …
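As a hedged sketch of that SQL/DataFrame interoperability, the snippet below queries the same in-memory data twice, once through a temporary view with SQL and once with the DataFrame API; the table and column names are illustrative.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("SqlAndDataFrames").getOrCreate()

people = spark.createDataFrame(
    [("Alice", 34), ("Bob", 45), ("Carol", 29)],
    ["name", "age"],
)

# Query with SQL against a temporary view ...
people.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()

# ... or express the same query with the DataFrame API.
people.filter(F.col("age") > 30).select("name").show()

spark.stop()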
RDD Programming Guide - Spark 3.5.5 Documentation
This guide shows each of these features in each of Spark’s supported languages. It is easiest to follow along if you launch Spark’s interactive shell – either bin/spark-shell for the Scala shell or bin/pyspark for the Python one.
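The following short sketch mirrors that shell-based flow in Python: inside bin/pyspark the SparkContext is already available as sc, while a standalone script obtains it from a SparkSession as shown; the numbers are arbitrary.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("RDDSketch").getOrCreate()
sc = spark.sparkContext  # already defined as `sc` in the interactive shells

# Build an RDD from a local collection, transform it, and reduce it to one value.
nums = sc.parallelize([1, 2, 3, 4, 5])
squares = nums.map(lambda x: x * x)
print("sum of squares:", squares.reduce(lambda a, b: a + b))

spark.stop()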
Getting Started — PySpark 3.5.5 documentation - Apache Spark
This page summarizes the basic steps required to set up and get started with PySpark. There are more guides shared with other languages, such as the Quick Start in the Programming Guides section of the Spark documentation. There are live notebooks where you can try PySpark out without any other steps, for example Live Notebook: DataFrame.
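As a minimal getting-started sketch, assuming PySpark has already been installed (for example with pip install pyspark), the snippet below creates the SparkSession entry point and runs one tiny DataFrame operation, roughly what the live notebooks let you try in the browser.

from pyspark.sql import SparkSession

# The SparkSession is the entry point for DataFrame and SQL functionality.
spark = SparkSession.builder.appName("GettingStartedSketch").getOrCreate()
print("Spark version:", spark.version)

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()

spark.stop()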