Data import
1. Load data into the table (load)
1) Syntax
hive> load data [local] inpath 'data path' [overwrite] into table student [partition (partcol1=val1, ...)];
(1) load data: loads data into the table
(2) local: load from the local file system; if omitted, the data is loaded from HDFS
(3) inpath: the path of the data to load
(4) overwrite: replace the data already in the table; if omitted, the data is appended
(5) into table: load into a table
(6) student: the specific table name
(7) partition: load into the specified partition
Note: when a file already on HDFS is loaded into Hive, the file is moved into the table's directory; only NameNode metadata is changed, and no data blocks are copied.
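This is easy to observe (a sketch; the paths follow the examples below and assume student.txt was uploaded to /user/atguigu/hive and the table uses the default warehouse location): after the load, the file disappears from the upload directory and reappears under the table's directory.

```sql
-- before the load: the file sits in the upload directory
hive (default)> dfs -ls /user/atguigu/hive;

hive (default)> load data inpath '/user/atguigu/hive/student.txt' into table default.student;

-- after the load: student.txt is gone from the upload directory;
-- it has been moved under the table's warehouse location
hive (default)> dfs -ls /user/hive/warehouse/student;
```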
2) Practical cases
Create a table first
hive (default)> create table student(id string, name string) row format delimited fields terminated by '\t';
(1) Load a local file into Hive
hive (default)> load data local inpath '/opt/module/hive/datas/student.txt' into table default.student;
(2) Load an HDFS file into Hive
Upload files to HDFS
hive (default)> dfs -put /opt/module/hive/datas/student.txt /user/atguigu/hive;
Load data on HDFS
hive (default)> load data inpath '/user/atguigu/hive/student.txt' into table default.student;
(3) Load data, overwriting the existing data in the table
Upload files to HDFS
hive (default)> dfs -put /opt/module/datas/student.txt /user/atguigu/hive;
Loading data overwrites the existing data in the table
hive (default)> load data inpath '/user/atguigu/hive/student.txt' overwrite into table default.student;
2. Insert data into the table through query statements (insert)
The more common scenario for insert is writing the results of a query on one table into a result table.
1) Practical cases
Create a table first
hive (default)> create table student_par(id int, name string) row format delimited fields terminated by '\t';
(1) Insert basic data with insert ... values
hive (default)> insert into table student_par values(1,'wangwu'),(2,'zhaoliu');
(2) Basic insert mode (based on the query result of a single table)
hive (default)> insert overwrite table student_par select id, name from student where month='201709';
insert into: appends data to a table or partition; the original data is kept
insert overwrite: replaces the data already in the table or partition
Note: insert does not support inserting only a subset of the columns
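Concretely, a statement that names only some of the columns is rejected in the Hive versions these notes target (a sketch; student_par is the table created above):

```sql
-- rejected here: a value must be supplied for every column
hive (default)> insert into table student_par(id) values(3);

-- accepted: all columns are supplied
hive (default)> insert into table student_par values(3, 'tianqi');
```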
(3) Multi-insert mode (one query feeding multiple tables or partitions)
hive (default)> from student
              insert overwrite table student partition(month='201707')
              select id, name where month='201709'
              insert overwrite table student partition(month='201706')
              select id, name where month='201709';
3. Create tables and load data in query statements (As Select)
Create a table based on the query results (the query results are added to the newly created table)
hive (default)> create table if not exists student3 as select id, name from student;
4. Specify the loading data path through Location when creating a table
1) Upload data to hdfs
hive (default)> dfs -mkdir /student;
hive (default)> dfs -put /opt/module/datas/student.txt /student;
2) Create a table and specify the location on hdfs
hive (default)> create external table if not exists student5(id int, name string) row format delimited fields terminated by '\t' location '/student';
3) Query data
hive (default)> select * from student5;
5. Import data into the specified Hive table
Note: the data must first be written out with export; only then can it be imported.
hive (default)> import table student2 from '/user/hive/warehouse/export/student';
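The full round trip (a sketch; student2 must not already contain data, and the export path must be empty before exporting) looks like:

```sql
-- on the source cluster: write the table's data and metadata to an HDFS directory
hive (default)> export table default.student to '/user/hive/warehouse/export/student';

-- on the target cluster: recreate the table from that directory
hive (default)> import table student2 from '/user/hive/warehouse/export/student';
```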
Data export
1. Insert export
1) Export query results to local
hive (default)> insert overwrite local directory '/opt/module/hive/datas/export/student' select * from student;
2) Format and export query results to local
hive(default)>insert overwrite local directory '/opt/module/hive/datas/export/student1' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' select * from student;
3) Export the query results to HDFS (without the local keyword)
hive (default)> insert overwrite directory '/user/atguigu/student2' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' select * from student;
2. Export to local using Hadoop commands
hive (default)> dfs -get /user/hive/warehouse/student/student.txt /opt/module/datas/export/student3.txt;
3. Hive Shell command export
Basic syntax: (hive -f/-e execute statement or script > file)
[atguigu@hadoop102 hive]$ bin/hive -e 'select * from default.student;' > /opt/module/hive/datas/export/student4.txt
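The -f form runs a SQL script file instead of an inline statement; a sketch (the script name export.sql and the output file student5.txt are made up for illustration):

```shell
# write the query to a script file (hypothetical path)
[atguigu@hadoop102 hive]$ echo 'select * from default.student;' > /opt/module/hive/datas/export.sql

# run the script and redirect the result to a file
[atguigu@hadoop102 hive]$ bin/hive -f /opt/module/hive/datas/export.sql > /opt/module/hive/datas/export/student5.txt
```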
4. Export to HDFS
hive (default)> export table default.student to '/user/hive/warehouse/export/student';
export and import are mainly used for migrating Hive tables between two Hadoop clusters.
5. Sqoop export
To be added
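A minimal Sqoop export sketch (the MySQL host, database, credentials, and table names below are placeholders, not values from these notes):

```shell
# export a Hive table's HDFS directory into a MySQL table
# (connection details and table names are placeholders)
[atguigu@hadoop102 sqoop]$ bin/sqoop export \
--connect jdbc:mysql://hadoop102:3306/company \
--username root \
--password 000000 \
--table staff \
--num-mappers 1 \
--export-dir /user/hive/warehouse/staff_hive \
--input-fields-terminated-by "\t"
```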
Clear data in table (Truncate)
Note: truncate can only be used on managed (internal) tables; it cannot delete data in external tables
hive (default)> truncate table student;