List and kill jobs in Shell / Hadoop


The following code snippets show how to list and kill Hadoop jobs, including both MapReduce and YARN jobs.

Remember to replace $jobId and $applicationId with your own job or application ID. You can also use the older commands hadoop job -list and hadoop job -kill $jobId, but they are deprecated in favor of mapred job.

Code snippet

# MapReduce jobs
# Replace $jobId with your own job ID
mapred job -list
mapred job -kill $jobId
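
The two commands above can be combined into a small script that finds a running job's ID and kills it. This is a sketch only: it assumes that mapred job -list prints each running job on its own row with the job ID (job_...) in the first column, which may vary by Hadoop version.

```shell
#!/bin/sh
# Sketch: grab the first running MapReduce job ID and kill it.
# Assumes the job ID appears in the first whitespace-separated
# column of each data row and starts with "job_".
jobId=$(mapred job -list 2>/dev/null | awk '$1 ~ /^job_/ { print $1; exit }')

if [ -n "$jobId" ]; then
  mapred job -kill "$jobId"
else
  echo "No running MapReduce jobs found."
fi
```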

# YARN jobs
# Replace $applicationId with your own application ID

yarn application -list
yarn application -kill $applicationId
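
For YARN, you may want to kill every application in a particular state rather than one at a time. The sketch below uses the -appStates filter of yarn application -list; it assumes the application ID (application_...) appears in the first tab-separated column of each data row, which may differ across Hadoop versions.

```shell
#!/bin/sh
# Sketch: kill all YARN applications in a given state (e.g. ACCEPTED).
kill_apps_in_state() {
  state="$1"
  # Extract IDs that look like "application_..." from the listing,
  # then kill each one.
  yarn application -list -appStates "$state" 2>/dev/null \
    | awk -F '\t' '$1 ~ /^application_/ { print $1 }' \
    | while read -r appId; do
        yarn application -kill "$appId"
      done
}

# Usage: kill_apps_in_state ACCEPTED
```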
Last modified by Raymond 2 years ago.