This article introduces how to use Sqoop to test the connection to an Oracle database, import tables into HDFS, and export data back to Oracle and into HBase. Each step below shows the command, the log output you can expect, and the pitfalls to watch for.
Test the connection to the Oracle database
① Connect to the Oracle database and list all databases
[hadoop@eb179 sqoop]$ sqoop list-databases --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq -P
or: sqoop list-databases --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq --password 123456
or, for MySQL: sqoop list-databases --connect jdbc:mysql://172.19.17.119:3306/ --username hadoop --password hadoop
Warning: /home/hadoop/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.
14-08-17 11:59:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
Enter password:
14-08-17 11:59:27 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
14-08-17 11:59:27 INFO manager.SqlManager: Using default fetchSize of 1000
14-08-17 11:59:51 INFO manager.OracleManager: Time zone has been set to GMT
MRDRP
MKFOW_QH
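Once the connection works, you can take it a step further and list the tables visible to that user. This is a minimal sketch using Sqoop's list-tables tool with the same connect string and username assumed above:

sqoop list-tables --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq -P

The -P flag prompts for the password interactively, which keeps it out of the shell history and the process list.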
② Import tables from the Oracle database into HDFS
Note:
By default, Sqoop uses four map tasks; each task writes its imported data to a separate file, and all of the files land in the same target directory. In this example, -m 1 means only one map task is used. A text file cannot store binary fields, and it cannot distinguish a null value from the string "null". Executing the command below generates an ORD_UV.java file (check with ls ORD_UV.java). Code generation is a necessary part of the Sqoop import process: before writing the data from the source database to HDFS, Sqoop uses the generated class to deserialize the rows it reads. (A sketch of the null-handling options follows the log output below.)
[hadoop@eb179 ~]$ sqoop import --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq --password 123456 --table ORD_UV -m 1 --target-dir /user/sqoop/test --direct-split-size 67108864
Warning: /home/hadoop/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.
14-08-17 15:21:34 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
14-08-17 15:21:34 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14-08-17 15:21:34 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
14-08-17 15:21:34 INFO manager.SqlManager: Using default fetchSize of 1000
14-08-17 15:21:34 INFO tool.CodeGenTool: Beginning code generation
14-08-17 15:21:46 INFO manager.OracleManager: Time zone has been set to GMT
14-08-17 15:21:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM ORD_UV t WHERE 1=0
14-08-17 15:21:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop
Note: /tmp/sqoop-hadoop/compile/328657d577512bd2c61e07d66aaa9bb7/ORD_UV.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14-08-17 15:21:47 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/328657d577512bd2c61e07d66aaa9bb7/ORD_UV.jar
14-08-17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14-08-17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14-08-17 15:21:47 INFO mapreduce.ImportJobBase: Beginning import of ORD_UV
14-08-17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14-08-17 15:21:49 INFO db.DBInputFormat: Using read commited transaction isolation
14-08-17 15:21:49 INFO mapred.JobClient: Running job: job_201408151734_0027
14-08-17 15:21:50 INFO mapred.JobClient: map 0% reduce 0%
14-08-17 15:22:12 INFO mapred.JobClient: map 100% reduce 0%
14-08-17 15:22:17 INFO mapred.JobClient: Job complete: job_201408151734_0027
14-08-17 15:22:17 INFO mapred.JobClient: Counters: 18
14-08-17 15:22:17 INFO mapred.JobClient: Job Counters
14-08-17 15:22:17 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=15862
14-08-17 15:22:17 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14-08-17 15:22:17 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14-08-17 15:22:17 INFO mapred.JobClient: Launched map tasks=1
14-08-17 15:22:17 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
14-08-17 15:22:17 INFO mapred.JobClient: File Output Format Counters
14-08-17 15:22:17 INFO mapred.JobClient: Bytes Written=1472
14-08-17 15:22:17 INFO mapred.JobClient: FileSystemCounters
14-08-17 15:22:17 INFO mapred.JobClient: HDFS_BYTES_READ=87
14-08-17 15:22:17 INFO mapred.JobClient: FILE_BYTES_WRITTEN=33755
14-08-17 15:22:17 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=1472
14-08-17 15:22:17 INFO mapred.JobClient: File Input Format Counters
14-08-17 15:22:17 INFO mapred.JobClient: Bytes Read=0
14-08-17 15:22:17 INFO mapred.JobClient: Map-Reduce Framework
14-08-17 15:22:17 INFO mapred.JobClient: Map input records=81
14-08-17 15:22:17 INFO mapred.JobClient: Physical memory (bytes) snapshot=192405504
14-08-17 15:22:17 INFO mapred.JobClient: Spilled Records=0
14-08-17 15:22:17 INFO mapred.JobClient: CPU time spent (ms)=1540
14-08-17 15:22:17 INFO mapred.JobClient: Total committed heap usage (bytes)=503775232
14-08-17 15:22:17 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2699571200
14-08-17 15:22:17 INFO mapred.JobClient: Map output records=81
14-08-17 15:22:17 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
14-08-17 15:22:17 INFO mapreduce.ImportJobBase: Transferred 1.4375 KB in 29.3443 seconds (50.1631 bytes/sec)
14-08-17 15:22:17 INFO mapreduce.ImportJobBase: Retrieved 81 records.
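As the note above mentioned, a plain-text import cannot distinguish a database NULL from the string "null". Sqoop's --null-string and --null-non-string import options control what gets written for null columns. The following is a minimal sketch reusing the connect string from the example above; the target directory and the \N marker are illustrative choices, not part of the original example:

sqoop import --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq -P --table ORD_UV -m 1 --target-dir /user/sqoop/test_nulls --null-string '\\N' --null-non-string '\\N'

Hive treats \N as NULL by default, so choosing it as the marker makes the resulting files easier to query later.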
③ Export data to Oracle and import into HBase
Use the export tool to write data from HDFS into a remote database:
sqoop export --connect jdbc:oracle:thin:@192.168.**.**:**:** --username ** --password=** -m 1 --table VEHICLE --export-dir /user/root/VEHICLE
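The starred values above are masked in the original write-up and are left as-is. For reference only, a fully spelled-out command might look like the following, where the host, port, SID, and credentials are placeholders rather than recovered values:

sqoop export --connect jdbc:oracle:thin:@192.168.1.10:1521:ORCL --username scott -P --table VEHICLE --export-dir /user/root/VEHICLE -m 1

export reads the files under --export-dir, parses each record, and inserts the resulting rows into the target table, so VEHICLE must already exist in Oracle with a compatible column layout.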
Import data into HBase:
sqoop import --connect jdbc:oracle:thin:@192.168.**.**:**:** --username ** --password=** -m 1 --table VEHICLE --hbase-create-table --hbase-table VEHICLE --hbase-row-key ID --column-family VEHICLEINFO --split-by ID
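To confirm the HBase import, you can inspect the table from the HBase shell. This assumes the VEHICLE table and VEHICLEINFO column family created by the command above:

hbase shell
scan 'VEHICLE', {LIMIT => 5}

Each row key is the ID value taken from Oracle, and the imported columns appear under the VEHICLEINFO column family.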
That concludes this walkthrough of using Sqoop to test connections to an Oracle database and to move data between Oracle, HDFS, and HBase. Thank you for reading.