MLlib is Spark's machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. At a high level, it provides tools such as:
- ML Algorithms: common learning algorithms such as classification, regression, clustering, and collaborative filtering
- Featurization: feature extraction, transformation, dimensionality reduction, and selection
- Pipelines: tools for constructing, evaluating, and tuning ML Pipelines
- Persistence: saving and loading algorithms, models, and Pipelines
- Utilities: linear algebra, statistics, data handling, etc.
The MLlib RDD-based API is now in maintenance mode.
What are the implications?
- MLlib will still support the RDD-based API in `spark.mllib` with bug fixes.
- MLlib will not add new features to the RDD-based API.
- In the Spark 2.x releases, MLlib will add features to the DataFrame-based API to reach feature parity with the RDD-based API.
- After reaching feature parity (roughly estimated for Spark 2.3), the RDD-based API will be deprecated.
- The RDD-based API is expected to be removed in Spark 3.0.
Why is MLlib switching to the DataFrame-based API?
- DataFrames provide a more user-friendly API than RDDs. The many benefits of DataFrames include Spark Datasources, SQL/DataFrame queries, Tungsten and Catalyst optimizations, and uniform APIs across languages.
- The DataFrame-based API for MLlib provides a uniform API across ML algorithms and across multiple languages.
- DataFrames facilitate practical ML Pipelines, particularly feature transformations. See the Pipelines guide for details.
What is "Spark ML"?
- "Spark ML" is not an official name but is occasionally used to refer to the MLlib DataFrame-based API. This is mostly due to the `org.apache.spark.ml` Scala package name used by the DataFrame-based API, and the "Spark ML Pipelines" term we used initially to emphasize the pipeline concept.
Is MLlib deprecated?
- No. MLlib includes both the RDD-based API and the DataFrame-based API. The RDD-based API is now in maintenance mode. But neither API is deprecated, nor MLlib as a whole.
MLlib uses the linear algebra package Breeze, which depends on netlib-java for optimised numerical processing. If native libraries[^1] are not available at runtime, you will see a warning message and a pure JVM implementation will be used instead.
Due to licensing issues with runtime proprietary binaries, we do not include `netlib-java`'s native proxies by default. To configure `netlib-java` / Breeze to use system optimised binaries, include `com.github.fommil.netlib:all:1.1.2` (or build Spark with `-Pnetlib-lgpl`) as a dependency of your project and read the netlib-java documentation for your platform's additional installation instructions.
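For example, in an sbt build the dependency above can be declared as follows (a sketch for sbt projects; Maven users would add the equivalent `<dependency>` entry):

```scala
// build.sbt fragment: pull in netlib-java's native proxies.
// The "all" artifact is a POM-only aggregator, hence pomOnly().
libraryDependencies += "com.github.fommil.netlib" % "all" % "1.1.2" pomOnly()
```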
Configuring these BLAS implementations to use a single thread for operations may actually improve performance (see SPARK-21305). It is usually optimal to match this to the number of cores each Spark task is configured to use, which is 1 by default and typically left at 1.
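One common way to pin the native BLAS to a single thread per operation is through the libraries' own environment variables (an illustrative sketch; these are the standard OpenBLAS/MKL knobs, not Spark configuration, and would be set in the environment of the Spark executors):

```shell
# Limit native BLAS libraries to one thread per operation (see SPARK-21305),
# matching the default of one core per Spark task.
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
```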
To use MLlib in Python, you will need NumPy version 1.4 or newer.
[^1]: To learn more about the benefits and background of system optimised natives, you may wish to watch Sam Halliday's ScalaX talk on High Performance Linear Algebra in Scala.
The list below highlights some of the new features and enhancements added to MLlib in the 2.3 release of Spark:
- Built-in support for reading images into a `DataFrame` was added (SPARK-21866).
- `OneHotEncoderEstimator` was added, and should be used instead of the existing `OneHotEncoder` transformer. The new estimator supports transforming multiple columns.
- Multiple column support was also added to `Bucketizer` (SPARK-22397 and SPARK-20542).
- A new `FeatureHasher` transformer was added (SPARK-13969).
- Added support for evaluating multiple models in parallel when performing cross-validation.
- Improved support for custom pipeline components in Python (see SPARK-21633 and SPARK-21542).
- `DataFrame` functions for descriptive summary statistics over vector columns (SPARK-19634).
- Robust linear regression with Huber loss (SPARK-3181).
The migration guide is now archived on this page.