Spark applications run as independent sets of processes on a cluster. The driver program and its SparkContext take care of job execution within the cluster. A job is split into multiple tasks that are distributed across the worker nodes. When an RDD is created in the SparkContext, its partitions can be distributed across various nodes. Worker nodes are the slaves that execute the individual tasks assigned by the driver.
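
For concreteness, here is a minimal sketch of this flow, assuming a hypothetical WordLengthApp run against a local master. It shows the driver creating a SparkContext, distributing data as an RDD across partitions, and triggering tasks that execute on the workers.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordLengthApp {
  def main(args: Array[String]): Unit = {
    // The driver program creates the SparkContext, which coordinates job execution
    val conf = new SparkConf().setAppName("WordLengthApp").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // parallelize() creates an RDD whose partitions can be spread across worker nodes
    val words = sc.parallelize(Seq("spark", "driver", "executor", "task"), numSlices = 4)

    // The action below is split into tasks, one per partition,
    // and those tasks run on the workers
    val totalLength = words.map(_.length).reduce(_ + _)

    println(s"Total length of all words: $totalLength")
    sc.stop()
  }
}
```

In a real deployment the master URL would point at a cluster manager (e.g. YARN or standalone) rather than `local[*]`, but the driver/worker division of labour stays the same.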