Common Sqoop commands

1. Basic commands

List the databases in MySQL

sqoop list-databases \
--connect jdbc:mysql://hadoop01:3306/ \
--username hadoop \
--password root

List the tables in a given MySQL database

sqoop list-tables \
--connect jdbc:mysql://hadoop01:3306/mysql \
--username hadoop \
--password root

Create a Hive table with the same schema as a MySQL table

sqoop create-hive-table \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--hive-table test_hk

2. Sqoop data import

Syntax: sqoop import (generic-args) (import-args)

Common parameters

--connect                  JDBC connection string
--connection-manager       connection manager class to use
--driver                   JDBC driver class to use
--hadoop-mapred-home       override $HADOOP_MAPRED_HOME
--help                     print usage information
-P                         read the password from the console
-m                         number of map tasks, i.e. the parallelism of the import
--target-dir               HDFS directory in which the imported data is stored
--fields-terminated-by     separator between the fields of each record
--where                    WHERE condition applied to the query
--query                    SQL query whose result is imported
--columns                  columns to import (all columns are imported by default)
--password                 password
--username                 username
--verbose                  print verbose progress information
--connection-param-file    file providing optional connection parameters
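For instance, the credential and column options can be combined as in the following minimal sketch, where -P prompts for the password instead of putting it on the command line (the column list "bid,bname" is illustrative: bid appears elsewhere in this article, bname is assumed):

# Import only selected columns, reading the password from the console
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
-P \
--table book \
--columns "bid,bname" \
--target-dir /user/hadoop/book_cols \
-m 1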

MySQL -> HDFS

# No delimiter or path specified.
# --connect gives the JDBC connection string, --username and --password the
# MySQL credentials, --table the MySQL table to import, -m 1 starts one map task.
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
-m 1
# If no storage directory is specified, the files are saved under
# /user/<user>/book on HDFS, and the default field delimiter is a comma.

# Specify the import path and the separator
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--target-dir /user/hadoop/book \
--fields-terminated-by '\t' \
-m 1

# Import the result of a where condition
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--where "bid > 2" \
--table book \
--target-dir /user/hadoop/book1 \
-m 1
# The condition after --where is wrapped in double quotation marks; if the
# condition field is a string, wrap the value in single quotation marks.

# Import the result of a query.
# --split-by names the column used to split the work among map tasks: rows are
# distributed by the hash of the column value modulo the number of map tasks.
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--target-dir /user/hadoop/book3 \
--query 'select * from book where 1=1 and $CONDITIONS' \
--split-by bid \
--fields-terminated-by '\t' \
-m 1

Notes:
1. The --query value is wrapped in single quotation marks, and string conditions inside the SQL use double quotation marks; otherwise an error is reported.
2. When --query is used, the statement must contain a WHERE clause with "and $CONDITIONS" appended; otherwise an error is reported.
3. --query cannot be combined with --where or --columns.
4. If the --query statement is wrapped in double quotation marks, $CONDITIONS must be escaped (see the sketch below).
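As an illustration of note 4, here is the same kind of query wrapped in double quotation marks, with $CONDITIONS escaped so the shell does not expand it (a minimal sketch; the target directory /user/hadoop/book4 is illustrative):

# Double-quoted query: $CONDITIONS must be escaped as \$CONDITIONS
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--target-dir /user/hadoop/book4 \
--query "select * from book where bid > 2 and \$CONDITIONS" \
--split-by bid \
-m 1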

MySQL -> Hive

Sqoop imports relational data into Hive by first importing it into HDFS and then loading it into the Hive table.
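Under the hood this is roughly equivalent to the following two manual steps (a sketch, assuming a staging directory /user/hadoop/book_stage and an existing Hive table test.book whose schema and field delimiter match the imported files; both names are illustrative):

# Step 1: import from MySQL into a staging directory on HDFS
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--target-dir /user/hadoop/book_stage \
--fields-terminated-by '\t' \
-m 1

# Step 2: load (move) the staged files into the Hive table
hive -e "LOAD DATA INPATH '/user/hadoop/book_stage' INTO TABLE test.book;"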

# General import
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--hive-import \
-m 1
# By default the data lands in the default Hive database, the table name is the
# same as the MySQL table name, and fields are separated by '\u0001'.

# Full set of options
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--fields-terminated-by '\t' \
--lines-terminated-by '\n' \
--hive-import \
--hive-overwrite \
--create-hive-table \
--hive-table test.book \
--delete-target-dir
# --fields-terminated-by sets the column separator and --lines-terminated-by the
# row separator (the default is also \n). --hive-overwrite makes the import
# overwrite existing data, --create-hive-table creates the table (it is also
# created automatically by default), --hive-table names the Hive table, and
# --delete-target-dir removes the intermediate staging directory.
# The table is created automatically, but the database is not: make sure the
# Hive database exists before running the command, otherwise an error is reported.

# Incremental import: only newly added rows are imported.
# It requires the --incremental, --check-column and --last-value options.
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--target-dir /user/hadoop/book \
--incremental append \
--check-column bid \
--last-value 500 \
-m 1
# --incremental append turns on incremental import, --check-column names the
# incremental column, and --last-value 500 imports every row with bid > 500.

Notes:
1. --incremental takes one of two values: append, which requires an incremental column specified with --check-column (usually the MySQL primary key); and lastmodified, which tracks the time the table was last modified.
2. To bring an incremental backup into Hive, you can only specify --target-dir and import into the directory where the table is stored; Hive does not support incremental import directly (that is, you cannot add --hive-import).
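For the lastmodified mode mentioned in note 1, a minimal sketch might look like this (assuming the book table has a timestamp column named update_time, which is not part of the original example; lastmodified also needs --append or --merge-key when the target directory already exists):

# Incremental import driven by a last-modified timestamp column
sqoop import \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--target-dir /user/hadoop/book \
--incremental lastmodified \
--check-column update_time \
--last-value "2019-01-01 00:00:00" \
--append \
-m 1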

MySQL -> HBase

# Import a MySQL table into HBase
sqoop import \
--connect jdbc:mysql://hadoop02:3306/mysql \
--username root \
--password root \
--table help_keyword \
--hbase-table new_help_keyword \
--column-family person \
--hbase-row-key help_keyword_id

Field explanations:
--connect jdbc:mysql://hadoop02:3306/mysql is the URI of the remote or local MySQL service.
--hbase-create-table means the target table is created in HBase.
--hbase-table new_help_keyword means a table named new_help_keyword is created in HBase.
--hbase-row-key help_keyword_id means the row key of the HBase table is the help_keyword_id column of the MySQL table.
--column-family person means the column family person is created in the table new_help_keyword.
--username root means the connection to MySQL uses the user root.
--password root is the password of that MySQL user.
--table help_keyword names the MySQL table to export.

3. Sqoop data export

Syntax: sqoop export (generic-args) (export-args)

Common parameters:

--direct                   use the direct (fast) export path
--export-dir               HDFS directory containing the data to export
-m                         number of map tasks used for the export
--table                    MySQL table to export into
--call                     stored procedure to call for each record
--update-key               column used to decide which rows to update
--update-mode              update mode; the default updateonly only updates existing rows, allowinsert also inserts new ones
--input-null-string        value interpreted as null for string columns
--input-null-non-string    value interpreted as null for non-string columns
--staging-table            temporary staging table
--clear-staging-table      empty the staging table before exporting
--batch                    use batch mode for the underlying statements
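As an illustration of --update-key and --update-mode, the following sketch performs an upsert-style export against the book table used throughout this article (assuming bid is a primary or unique key in MySQL, which allowinsert relies on):

# Upsert export: rows whose bid already exists are updated, new ones are inserted
sqoop export \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--export-dir /sqoopdata \
--update-key bid \
--update-mode allowinsert \
--fields-terminated-by ','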

HDFS -> MySQL

sqoop export \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--export-dir /sqoopdata \
--fields-terminated-by ','

Hive -> MySQL

sqoop export \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table book \
--export-dir /user/hive/warehouse/uv/dt=2011-08-03 \
--input-fields-terminated-by '\t'

HBase -> MySQL

By default there is no command that moves data from HBase directly into MySQL, because HBase tables are usually large; exporting one into MySQL in a single pass could crash MySQL outright.

But there are other ways to get the data across:

1. Flatten the HBase data into a file on HDFS, then export that file to MySQL with Sqoop.
2. Import the HBase data into a Hive table, then export it from Hive to MySQL with Sqoop (see the sketch after this list).
3. Use the HBase Java API to read the table data and write it to MySQL directly, without Sqoop at all.
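The second approach can be sketched as follows, reusing the new_help_keyword HBase table created earlier (the Hive table names, the person:name column mapping, and the warehouse path are illustrative assumptions):

# Map the HBase table into Hive as an external table, materialize it as a plain
# Hive table, then export that table to MySQL with sqoop.
hive -e "
CREATE EXTERNAL TABLE hbase_help_keyword (keyword_id STRING, name STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,person:name')
TBLPROPERTIES ('hbase.table.name' = 'new_help_keyword');
CREATE TABLE help_keyword_flat
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
AS SELECT * FROM hbase_help_keyword;
"

sqoop export \
--connect jdbc:mysql://hadoop01:3306/test \
--username hadoop \
--password root \
--table help_keyword \
--export-dir /user/hive/warehouse/help_keyword_flat \
--input-fields-terminated-by '\t'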
