[SPARK-9575] [MESOS] Add documentation around Mesos shuffle service.

Author: Timothy Chen <[email protected]>

Closes apache#7907 from tnachen/mesos_shuffle.
tnachen authored and Andrew Or committed Aug 12, 2015
1 parent 5c99d8b commit 741a29f
Showing 1 changed file with 14 additions and 0 deletions.
14 changes: 14 additions & 0 deletions docs/running-on-mesos.md
@@ -216,6 +216,20 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).

In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.

# Dynamic Resource Allocation with Mesos

Mesos supports dynamic allocation only in coarse-grained mode, which can resize the number of executors based on
application statistics. While dynamic allocation supports both scaling the number of executors up and down, the
coarse-grained scheduler only supports scaling down, since it is already designed to run one executor per slave with
the configured amount of resources. However, after scaling down, the coarse-grained scheduler can scale back up to
the original number of executors when Spark signals that more executors are needed.
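
As a minimal sketch, a submission might enable coarse-grained mode, the external shuffle service, and dynamic
allocation together. The master URL, application class, and jar below are placeholders, and the shuffle service
described in the next paragraph must already be running on each slave.

```bash
# Enable coarse-grained mode, the external shuffle service, and dynamic
# allocation for a Mesos application (placeholder master URL, class, and jar).
./bin/spark-submit \
  --master mesos://mesos-master.example.com:5050 \
  --conf spark.mesos.coarse=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --class org.example.MyApp \
  my-app.jar
```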

Users who want to use this feature should launch the Mesos Shuffle Service, which adds shuffle data cleanup
functionality on top of the regular Shuffle Service, because Mesos does not yet support notifying other frameworks
of a framework's termination. To launch or stop the Mesos Shuffle Service, use the provided
`sbin/start-mesos-shuffle-service.sh` and `sbin/stop-mesos-shuffle-service.sh` scripts.
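
For instance, the service could be started and stopped on a node as follows; the installation path is an assumption
and should match wherever the Spark distribution is unpacked.

```bash
# Start the Mesos Shuffle Service on this node
# (assumes Spark is unpacked at /opt/spark).
/opt/spark/sbin/start-mesos-shuffle-service.sh

# Stop it again later on the same node.
/opt/spark/sbin/stop-mesos-shuffle-service.sh
```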

The Shuffle Service is expected to be running on each slave node that will run Spark executors. One easy way to
achieve this with Mesos is to launch the Shuffle Service through Marathon with a unique host constraint, as sketched below.
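
As a rough sketch, an app definition with a `["hostname", "UNIQUE"]` constraint could be submitted to Marathon's REST
API as below. The Marathon URL, Spark path, instance count, and shuffle service class name are assumptions; check the
class shipped with your Spark version.

```bash
# Submit a Marathon app that runs at most one shuffle service per host.
# Marathon expects a long-running foreground process, so the service class is
# invoked directly via spark-class rather than the daemonizing start script.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "spark-mesos-shuffle-service",
    "cmd": "/opt/spark/bin/spark-class org.apache.spark.deploy.mesos.MesosExternalShuffleService",
    "cpus": 1,
    "mem": 1024,
    "instances": 10,
    "constraints": [["hostname", "UNIQUE"]]
  }'
```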

# Configuration

