-- case 1: order_created
/* sample data (orderNumber <TAB> event_time), e.g.:
   10703007267488    2014-05-01 06:01:12.334+01
   01101010435096    2014-05-01 07:03:12.324+01
*/

CREATE EXTERNAL TABLE order_created (
    orderNumber STRING,
    event_time  STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/tmp/db_case1/order_created';

CREATE EXTERNAL TABLE order_created_partition (
    orderNumber STRING,
    event_time  STRING
)
PARTITIONED BY (event_month STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/tmp/db_case1/order_created_partition';

CREATE TABLE order_created_dynamic_partition (
    orderNumber STRING,
    event_time  STRING
)
PARTITIONED BY (event_month STRING);

INSERT INTO TABLE order_created_dynamic_partition PARTITION (event_month)
SELECT orderNumber, event_time, substr(event_time, 1, 7) AS event_month
FROM order_created;

SET hive.exec.dynamic.partition.mode=nonstrict;
/* related settings (defaults):
   hive.exec.dynamic.partition=false
   hive.exec.dynamic.partition.mode=strict
   hive.exec.max.dynamic.partitions.pernode=100   -- maximum number of dynamic partitions allowed to be created in each mapper/reducer node
   hive.exec.max.dynamic.partitions=1000          -- maximum number of dynamic partitions allowed to be created in total
   hive.exec.max.created.files=100000             -- maximum number of HDFS files created by all mappers/reducers in a MapReduce job
   hive.error.on.empty.partition=false
*/

SELECT INPUT__FILE__NAME, orderNumber, event_time,
       BLOCK__OFFSET__INSIDE__FILE / (length(orderNumber) + length(event_time) + 2) + 1
FROM order_created_dynamic_partition;

SELECT INPUT__FILE__NAME, orderNumber, event_time,
       round(BLOCK__OFFSET__INSIDE__FILE / (length(orderNumber) + length(event_time) + 2) + 1)
FROM order_created_dynamic_partition;

DESC FORMATTED order_created_dynamic_partition;
DESC FORMATTED order_created_dynamic_partition PARTITION (event_month='2014-05');

CREATE TABLE order_created_dynamic_partition_parquet (
    orderNumber STRING,
    event_time  STRING
)
PARTITIONED BY (event_month STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS parquet;

MSCK REPAIR TABLE order_created_dynamic_partition_parquet;

-- set to textfile format, bug in hive
ALTER TABLE order_created_dynamic_partition_parquet PARTITION (event_month='2014-06')
  SET SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe';
ALTER TABLE order_created_dynamic_partition_parquet PARTITION (event_month='2014-06')
  SET FILEFORMAT textfile;

-- impala
ALTER TABLE order_created_dynamic_partition_parquet PARTITION (event_month='2014-06')
  SET FILEFORMAT textfile;

-- set to parquet file format, hive

SELECT orderNumber, order_created_ts, order_picked_ts, order_shipped_ts, order_received_ts, order_cancelled_ts
FROM order_tracking_join
WHERE order_created_ts IS NOT NULL
  AND order_cancelled_ts IS NULL
  AND (   COALESCE(unix_timestamp(order_picked_ts,  'yyyy-MM-dd HH:mm:ss.S'), 0) - unix_timestamp(order_created_ts, 'yyyy-MM-dd HH:mm:ss.S') >  2 * 60 * 60
       OR COALESCE(unix_timestamp(order_shipped_ts, 'yyyy-MM-dd HH:mm:ss.S'), 0) - unix_timestamp(order_created_ts, 'yyyy-MM-dd HH:mm:ss.S') >  4 * 60 * 60
       OR COALESCE(unix_timestamp(order_shipped_ts, 'yyyy-MM-dd HH:mm:ss.S'), 0) - unix_timestamp(order_created_ts, 'yyyy-MM-dd HH:mm:ss.S') > 48 * 60 * 60);
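The listing creates order_created_partition but stops before showing how its static partitions are filled. A minimal sketch, assuming the same tab-delimited order data sits in a hypothetical local file /tmp/order_created.txt (that path and the partition value below are illustrative, not from the original), could look like this:

-- hedged sketch: populating a static partition of order_created_partition
-- NOTE: the local file path is an assumed placeholder
LOAD DATA LOCAL INPATH '/tmp/order_created.txt'
  OVERWRITE INTO TABLE order_created_partition
  PARTITION (event_month='2014-05');

-- or register an existing HDFS directory as a partition of the external table
ALTER TABLE order_created_partition
  ADD IF NOT EXISTS PARTITION (event_month='2014-05')
  LOCATION '/tmp/db_case1/order_created_partition/event_month=2014-05';

Note that LOAD DATA only moves files under the partition directory; it does not check that every event_time actually falls in 2014-05.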
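The recovered text breaks off right after the comment about switching the partition back to the parquet format. Assuming it simply mirrored the textfile statements above, the restore would plausibly be the following; the SerDe class and the short FILEFORMAT spelling are assumptions, and older Hive releases may require the explicit INPUTFORMAT/OUTPUTFORMAT form instead:

-- hedged sketch: switching the partition back to parquet (mirrors the textfile statements)
ALTER TABLE order_created_dynamic_partition_parquet PARTITION (event_month='2014-06')
  SET SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe';
ALTER TABLE order_created_dynamic_partition_parquet PARTITION (event_month='2014-06')
  SET FILEFORMAT parquet;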
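The final query filters a table called order_tracking_join that is not defined in the recovered text. One plausible shape, assuming per-event tables order_picked, order_shipped, order_received and order_cancelled with the same (orderNumber, event_time) layout as order_created (all of those table names are assumptions), is a chain of outer joins:

-- hedged sketch: how order_tracking_join might be assembled
-- order_picked / order_shipped / order_received / order_cancelled are assumed tables,
-- each with (orderNumber STRING, event_time STRING) like order_created
CREATE TABLE order_tracking_join AS
SELECT c.orderNumber,
       c.event_time AS order_created_ts,
       p.event_time AS order_picked_ts,
       s.event_time AS order_shipped_ts,
       r.event_time AS order_received_ts,
       x.event_time AS order_cancelled_ts
FROM order_created c
LEFT OUTER JOIN order_picked    p ON c.orderNumber = p.orderNumber
LEFT OUTER JOIN order_shipped   s ON c.orderNumber = s.orderNumber
LEFT OUTER JOIN order_received  r ON c.orderNumber = r.orderNumber
LEFT OUTER JOIN order_cancelled x ON c.orderNumber = x.orderNumber;

As written in the filtering query, COALESCE(unix_timestamp(...), 0) turns a missing event timestamp into 0; the subtraction then yields a large negative number, so a stage that has not happened yet simply fails its branch of the OR rather than making the whole predicate NULL.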