Read Hadoop Credential in PySpark


In one of my previous articles, Password Security Solution for Sqoop, I mentioned creating credentials using the hadoop credential command. The credentials are stored in a JavaKeyStoreProvider. Credential providers separate the use of sensitive tokens, secrets and passwords from the details of their storage and management.

The following commands create a credential named mydatabase.password, one in a local JCEKS file and one in HDFS.

# Store the password in HDFS
hadoop credential create mydatabase.password -provider jceks://hdfs/user/hue/mypwd.jceks

# Store the password locally
hadoop credential create mydatabase.password -provider jceks://file/home/user/mypwd.jceks

When running jobs on a cluster manager such as YARN, create the credential in HDFS so that all worker nodes in the cluster can access it.

Once the credential is created, you can use it in Sqoop simply by passing the credential alias as a parameter. But what if you want to access the credential in Spark? In Scala you can reference the Hadoop Java credential API directly; in Python it is not as straightforward.
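For reference, this is roughly how Sqoop consumes the alias via its --password-alias option (the JDBC connection string, user name and table name below are hypothetical placeholders):

```shell
# Point Sqoop at the credential provider and pass the alias instead of a password
sqoop import \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/user/hue/mypwd.jceks \
  --connect jdbc:mysql://dbserver/mydb \
  --username dbuser \
  --password-alias mydatabase.password \
  --table mytable
```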

Sample code to retrieve Hadoop credential in PySpark

from pyspark.sql import SparkSession

appName = "PySpark Hadoop Credential Example"
master = "local"

# Create Spark session
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .getOrCreate()

# Replace the credential provider path accordingly
credential_provider_path = 'jceks://hdfs/user/hue/mypwd.jceks'
credential_name = 'mydatabase.password'

# Retrieve the password from the Hadoop credential provider
conf = spark.sparkContext._jsc.hadoopConfiguration()
conf.set('hadoop.security.credential.provider.path', credential_provider_path)

# getPassword returns a Java char[] (None if the alias is not found)
credential_raw = conf.getPassword(credential_name)
credential_str = ''.join(str(credential_raw[i]) for i in range(len(credential_raw)))

# Now you can use credential_str, for example, use it as database password in JDBC to load data from databases into Spark data frame.
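The char-by-char concatenation above can be illustrated in plain Python. Py4J exposes each element of the Java char[] as a one-character string, so a list of one-character strings stands in for the array here ("s3cr3t" is a made-up sample value):

```python
# Simulate the Java char[] returned by getPassword as a list of
# one-character strings, which is how py4j presents char elements.
credential_raw = list("s3cr3t")

# Same concatenation logic as in the PySpark snippet above
credential_str = ''.join(str(credential_raw[i]) for i in range(len(credential_raw)))
print(credential_str)  # -> s3cr3t
```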

Access to the credential provider file

Anyone who can read your credential provider file can use the same approach to retrieve the credential value. It is therefore important to restrict access to the file so that only authorized users can read it.
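One way to do this, assuming the provider paths created earlier, is to lock the files down to the owning user with file permissions:

```shell
# Restrict the HDFS provider file to its owner (read-only)
hadoop fs -chmod 400 /user/hue/mypwd.jceks

# Do the same for the local copy
chmod 400 /home/user/mypwd.jceks
```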

More details about Hadoop credential API

Refer to the official page to learn more about Hadoop credential APIs: CredentialProvider API Guide.

Last modified by Raymond 2 years ago. This page is subject to Site terms.