Create, Drop, and Truncate Table - Hive SQL


This page shows how to create, drop, and truncate Hive tables via Hive SQL (HQL).

External and internal tables

Refer to Differences between Hive External and Internal (Managed) Tables to understand how internal (managed) and external (unmanaged) tables differ in Hive.
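For illustration, an external (unmanaged) table is created with the EXTERNAL keyword, typically together with a LOCATION clause; the table name and HDFS path below are hypothetical:

```sql
-- Hypothetical external table: dropping it removes only the metadata,
-- not the files under the LOCATION path.
CREATE EXTERNAL TABLE IF NOT EXISTS hql.customer_ext(cust_id INT, name STRING, created_date DATE)
COMMENT 'An external (unmanaged) customer table.'
LOCATION '/data/customer_ext';
```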

Create table

Example:

CREATE TABLE IF NOT EXISTS hql.customer(cust_id INT, name STRING, created_date DATE)
COMMENT 'A table to store customer records.';

Output:

OK

Drop table

Syntax:

DROP TABLE [IF EXISTS] table_name [PURGE]; 

Example:

DROP TABLE IF EXISTS hql.customer;

The underlying data of this internal table will be moved to the HDFS Trash folder, from which it can still be recovered.

DROP TABLE IF EXISTS hql.customer PURGE;

With PURGE, the underlying data in HDFS is deleted directly, bypassing the Trash folder, so the table cannot be restored.
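As a related option, the auto.purge table property makes DROP TABLE (and TRUNCATE TABLE) skip the Trash folder for that table; treat the exact Hive versions that support it as an assumption to verify against your release:

```sql
-- With auto.purge set, dropping or truncating this table deletes
-- its data directly instead of moving it to Trash.
ALTER TABLE hql.customer SET TBLPROPERTIES ('auto.purge'='true');
```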

Truncate table

Syntax:

TRUNCATE [TABLE] table_name [PARTITION partition_spec];
 
partition_spec:
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)

The TRUNCATE statement removes all rows from a table, or from specific partitions of a table, while keeping the table definition. 

Example:

TRUNCATE TABLE hql.customer;
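The PARTITION clause in the syntax above limits truncation to the matching partitions; the partitioned table and partition column below are hypothetical:

```sql
-- Assumes a (hypothetical) table partitioned by created_date;
-- only rows in that partition are removed.
TRUNCATE TABLE hql.transactions PARTITION (created_date = '2020-01-01');
```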

Install Hive database

Follow the article below to install Hive on Windows 10 via WSL if you don't have an available Hive database to practice Hive SQL:

Examples on this page are based on Hive 3.* syntax.

Run query

All these SQL statements can be run using beeline CLI:

$HIVE_HOME/bin/beeline --silent=true

The above command starts the beeline CLI. Once beeline is loaded, type the following command to connect to the default HiveServer2 service:

0: jdbc:hive2://localhost:10000> !connect jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: hive
Enter password for jdbc:hive2://localhost:10000:
1: jdbc:hive2://localhost:10000>
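Alternatively, the connection URL, user name, and a query can be passed directly on the command line (assuming the same HiveServer2 endpoint and user as above):

```shell
# Runs a single statement non-interactively against HiveServer2
$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n hive -e "SHOW TABLES IN hql;"
```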
