Create, Drop, Alter and Use Database - Hive SQL


This page shows how to create, drop, alter and use Hive databases via Hive SQL (HQL).

Create database

CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]
  [LOCATION hdfs_path]
  [MANAGEDLOCATION hdfs_path]
  [WITH DBPROPERTIES (property_name=property_value, ...)];

LOCATION specifies the default HDFS location for external tables, while MANAGEDLOCATION specifies the default HDFS location for managed tables.

Example:

CREATE DATABASE IF NOT EXISTS hql;
CREATE SCHEMA IF NOT EXISTS hql;

Output:

OK
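All the optional clauses can be combined in a single statement. The paths and property below are illustrative placeholders, not required values; note that MANAGEDLOCATION may not be recognized by older Hive releases:

```sql
CREATE DATABASE IF NOT EXISTS hql
COMMENT 'Database for Hive SQL tutorials'
LOCATION '/user/hive/external/hql.db'          -- hypothetical external location
MANAGEDLOCATION '/user/hive/managed/hql.db'    -- hypothetical managed location
WITH DBPROPERTIES ('creator'='hive');
```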

Alter database

ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role;
ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path;
ALTER (DATABASE|SCHEMA) database_name SET MANAGEDLOCATION hdfs_path;
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...); 

Example:

ALTER DATABASE hql SET DBPROPERTIES('database usage'='Hive SQL tutorials.'); 

Output:

OK
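To confirm the property change, the database metadata can be inspected; the exact output format varies by Hive version:

```sql
DESCRIBE DATABASE EXTENDED hql;
```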

Use database

USE database_name;

Example:

USE hql;

Output:

OK

The above command sets the current database to hql; subsequent statements run against it until another USE statement is issued.
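To check which database is current, or to list all available databases:

```sql
SELECT current_database();
SHOW DATABASES;
```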

Drop database

DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];

The default drop option is RESTRICT, which errors out if the database is not empty. Use the CASCADE option to also drop all objects (tables, views, etc.) in the database.

Example:

DROP SCHEMA hql CASCADE;

Output:

OK
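When it is uncertain whether the database exists, IF EXISTS suppresses the error that would otherwise be raised:

```sql
DROP DATABASE IF EXISTS hql CASCADE;
```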

Install Hive database

If you don't have a Hive environment available to practice Hive SQL, follow the article below to install Hive on Windows 10 via WSL:

Examples on this page are based on Hive 3.* syntax.

Run query

All these SQL statements can be run using beeline CLI:

$HIVE_HOME/bin/beeline --silent=true

The above command line starts beeline against the default HiveServer2 service. Once beeline is loaded, issue the !connect command and enter your credentials when prompted:

0: jdbc:hive2://localhost:10000> !connect jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: hive
Enter password for jdbc:hive2://localhost:10000:
1: jdbc:hive2://localhost:10000>
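Alternatively, the connection details can be passed on the command line so beeline connects immediately; the -u flag takes the JDBC URL and -n the user name (the host, port and user here match the interactive example above):

$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n hive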
