2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Brief introduction: This article introduces the basic syntax and simple usage of Sqoop.
1. View the command help

```shell
[hadoop@hadoop000 ~]$ sqoop help
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
```

As the output suggests, run `sqoop help COMMAND` to query a specific command in detail.

2. list-databases

```shell
# View the list-databases command help
[hadoop@hadoop000 ~]$ sqoop help list-databases
usage: sqoop list-databases [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect              Specify JDBC connect string
   --connection-manager   Specify connection manager class name
   --connection-param-file Specify connection parameters file
   --driver               Manually specify JDBC driver class to use
   --hadoop-home          Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home   Override $HADOOP_MAPRED_HOME_ARG
   --help                 Print usage instructions
   -P                     Read password from console
   --password             Set authentication password
   --password-alias       Credential provider password alias
   --password-file        Set authentication password file path
   --relaxed-isolation    Use read-uncommitted isolation for imports
   --skip-dist-cache      Skip copying jars to distributed cache
   --username             Set authentication username
   --verbose              Print more information while working

# Simple usage
[hadoop@oradb3 ~]$ sqoop list-databases \
> --connect jdbc:mysql://localhost:3306 \
> --username root \
> --password 12345

# Results
information_schema
mysql
performance_schema
slow_query_log
sys
test
```

3. list-tables

```shell
# View the list-tables command help
[hadoop@hadoop000 ~]$ sqoop help list-tables
usage: sqoop list-tables [GENERIC-ARGS] [TOOL-ARGS]
# (the common arguments are the same as for list-databases above)

# Simple usage
[hadoop@hadoop000 ~]$ sqoop list-tables \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 12345

# Results
t_order
test0001
test_1013
test_dyc
test_tb
```

4. Import a MySQL table into HDFS (import)
(by default, import writes to the current user's directory on HDFS, i.e. /user/<username>/<table name>)
Speaking of which, here is a small digression:

hadoop fs -ls lists the current user's home directory, i.e. /user/hadoop
hadoop fs -ls / lists the HDFS root directory

```shell
# View the command help
[hadoop@hadoop000 ~]$ sqoop help import

# Execute the import
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students
```
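As a small illustration of the default layout described above, the target path is simply /user/<username>/<table>; the snippet below only assembles that path (the user and table names are taken from this article's example, not produced by Sqoop itself):

```shell
# Illustration only: Sqoop's default import target is
# /user/<current user>/<table name> on HDFS.
user=hadoop        # HDFS user from the example above
table=students     # imported table name
echo "/user/${user}/${table}"
```

This prints `/user/hadoop/students`, which matches the directory listed by `hadoop fs -ls` after the import below.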
This error is likely to occur at this point:

```
Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
```

To fix it, download the java-json.jar package and add java-json.jar to the ../sqoop/lib directory.
```shell
# Execute the import again
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students
18/07/04 13:28:35 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh6.7.0
18/07/04 13:28:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/07/04 13:28:35 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/07/04 13:28:35 INFO tool.CodeGenTool: Beginning code generation
18/07/04 13:28:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students` AS t LIMIT 1
18/07/04 13:28:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students` AS t LIMIT 1
18/07/04 13:28:35 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/app/hadoop-2.6.0-cdh6.7.0
18/07/04 13:28:37 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/3024b8df04f623e8c79ed9b5b30ace75/students.jar
18/07/04 13:28:37 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/07/04 13:28:37 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/07/04 13:28:37 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/07/04 13:28:37 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/07/04 13:28:37 INFO mapreduce.ImportJobBase: Beginning import of students
18/07/04 13:28:38 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/07/04 13:28:39 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/07/04 13:28:39 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/07/04 13:28:41 INFO db.DBInputFormat: Using read commited transaction isolation
18/07/04 13:28:41 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `students`
18/07/04 13:28:41 INFO db.IntegerSplitter: Split size: 0; Num splits: 4 from: 1001 to: 1003
18/07/04 13:28:41 INFO mapreduce.JobSubmitter: number of splits:3
18/07/04 13:28:42 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1530598609758_0015
18/07/04 13:28:42 INFO impl.YarnClientImpl: Submitted application application_1530598609758_0015
18/07/04 13:28:42 INFO mapreduce.Job: The url to track the job: http://oradb3:8088/proxy/application_1530598609758_0015/
18/07/04 13:28:42 INFO mapreduce.Job: Running job: job_1530598609758_0015
18/07/04 13:28:52 INFO mapreduce.Job: Job job_1530598609758_0015 running in uber mode : false
18/07/04 13:28:52 INFO mapreduce.Job:  map 0% reduce 0%
18/07/04 13:28:58 INFO mapreduce.Job:  map 33% reduce 0%
18/07/04 13:28:59 INFO mapreduce.Job:  map 67% reduce 0%
18/07/04 13:29:00 INFO mapreduce.Job:  map 100% reduce 0%
18/07/04 13:29:00 INFO mapreduce.Job: Job job_1530598609758_0015 completed successfully
18/07/04 13:29:00 INFO mapreduce.Job: Counters: 30
...
18/07/04 13:29:00 INFO mapreduce.ImportJobBase: Transferred 40 bytes in 21.3156 seconds (1.8766 bytes/sec)
18/07/04 13:29:00 INFO mapreduce.ImportJobBase: Retrieved 3 records.
# You should understand the log information generated above.

# View the files on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -ls /user/hadoop/students
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-04 13:28 /user/hadoop/students/_SUCCESS
-rw-r--r--   1 hadoop supergroup         13 2018-07-04 13:28 /user/hadoop/students/part-m-00000
-rw-r--r--   1 hadoop supergroup         13 2018-07-04 13:28 /user/hadoop/students/part-m-00001
-rw-r--r--   1 hadoop supergroup         14 2018-07-04 13:28 /user/hadoop/students/part-m-00002
[hadoop@hadoop000 ~]$ hadoop fs -cat /user/hadoop/students/"part*"
1001,Lodd,23
1002,sdfs,21
1003,sdfsa,24
```
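As a quick sanity check on the log's summary line, the reported throughput is simply bytes transferred divided by elapsed seconds (the numbers are taken from the log above):

```shell
# 40 bytes transferred in 21.3156 seconds, as reported by ImportJobBase
awk 'BEGIN { printf "%.4f bytes/sec\n", 40 / 21.3156 }'
# → 1.8766 bytes/sec
```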
We can also add other parameters to make the import process more controllable:
-m specifies the number of map tasks to launch; the default is 4
--delete-target-dir deletes the target directory if it already exists
--mapreduce-job-name specifies the name of the MapReduce job
--target-dir imports into the specified directory
--fields-terminated-by specifies the delimiter between fields
--null-string for a string-type column, replaces a NULL value with the specified character
--null-non-string for a non-string column, replaces a NULL value with the specified character
--columns imports only some of the table's columns
--where imports rows that match a condition
--query imports the result set of a SQL statement; when using --query you cannot also use --table or --columns
--options-file reads the command options from a file
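As a sketch of the --query parameter (this example is not from the original article; the connection details reuse the ones above, and STU_QUERY is a hypothetical target directory): Sqoop requires the literal token $CONDITIONS in the WHERE clause of a free-form query, and a --split-by column when more than one mapper is used.

```shell
# Free-form query import; single quotes keep the shell from
# expanding $CONDITIONS, which Sqoop substitutes per split.
sqoop import \
  --connect jdbc:mysql://localhost:3306/test \
  --username root \
  --password 123456 \
  --query 'SELECT id, name FROM students WHERE id >= 1001 AND $CONDITIONS' \
  --split-by id \
  --target-dir STU_QUERY \
  --delete-target-dir \
  -m 2
```

Note that --table and --columns are omitted, since they cannot be combined with --query.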
```shell
# Execute the import
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --table students \
> -m 1

# View on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -ls /user/hadoop/students
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-04 13:53 /user/hadoop/students/_SUCCESS
-rw-r--r--   1 hadoop supergroup         40 2018-07-04 13:53 /user/hadoop/students/part-m-00000
[hadoop@oradb3 ~]$ hadoop fs -cat /user/hadoop/students/"part*"
1001,Lodd,23
1002,sdfs,21
1003,sdfsa,24

# Use the where parameter
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --table students \
> --mapreduce-job-name FromMySQL2HDFS2 \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0 \
> --columns "name" \
> --target-dir STU_COLUMN_WHERE \
> --where 'id
```