The editor would like to share with you an analysis of batch data insertion in ClickHouse and examples of commonly used ClickHouse commands. Most people are not very familiar with these topics, so this article is offered for your reference; I hope you learn a lot from it. Let's dive in!
I. Installation and use
ClickHouse is an open-source columnar database management system from Yandex, used mostly in online analytical processing (OLAP) scenarios. It can store and analyze massive volumes of data, and its data compression and vectorized execution engine enable fast queries.
Ⅰ). Install

sudo yum install yum-utils
sudo rpm --import https://repo.yandex.ru/clickhouse/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.yandex.ru/clickhouse/rpm/stable/x86_64
sudo yum install clickhouse-server clickhouse-client
sudo /etc/init.d/clickhouse-server start
clickhouse-client
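To confirm the installation worked, a quick sanity check can be run against the local server (a minimal sketch, assuming the server is listening on the default native port 9000):

# verify the server answers and report its version
clickhouse-client --query "SELECT version()"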
Ⅱ). Configuration

a). clickhouse-server
CLICKHOUSE_USER=username
CLICKHOUSE_LOGDIR=${CLICKHOUSE_HOME}/log/clickhouse-server
CLICKHOUSE_LOGDIR_USER=username
CLICKHOUSE_DATADIR_OLD=${CLICKHOUSE_HOME}/data/old
CLICKHOUSE_DATADIR=${CLICKHOUSE_HOME}/data
b). config.xml
......
<logger>
    <level>info</level>
    <log>${CLICKHOUSE_HOME}/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>${CLICKHOUSE_HOME}/log/clickhouse-server/clickhouse-server-error.log</errorlog>
    <size>100M</size>
    <count>5</count>
</logger>
<path>${CLICKHOUSE_HOME}</path>
<tmp_path>${CLICKHOUSE_HOME}/tmp</tmp_path>
<user_files_path>${CLICKHOUSE_HOME}/user_files</user_files_path>
<listen_host>::</listen_host>
<timezone>Asia/Shanghai</timezone>
......

Ⅲ). Start and stop the service

a). Start the service

sudo service clickhouse-server start

b). Stop the service

sudo service clickhouse-server stop
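After starting, it is worth confirming the service is actually up before connecting (a minimal check, assuming the stock init script and the default port):

# service status plus a trivial query round-trip
sudo service clickhouse-server status
clickhouse-client --query "SELECT 1"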
Ⅳ). Client access
clickhouse-client

II. Common commands

Ⅰ). Create a table

CREATE TABLE IF NOT EXISTS database.table_name ON CLUSTER cluster_shardNum_replicasNum
(
    id UInt64,
    name String,
    time UInt64,
    age UInt8,
    flag UInt8
)
ENGINE = MergeTree
PARTITION BY toDate(time / 1000)
ORDER BY (id, name)
SETTINGS index_granularity = 8192

Ⅱ). Create a materialized view

CREATE MATERIALIZED VIEW database.view_name ON CLUSTER cluster_shardNum_replicasNum
ENGINE = AggregatingMergeTree
PARTITION BY toYYYYMMDD(time)
ORDER BY (id, name)
AS SELECT
    toStartOfHour(toDateTime(time / 1000)) AS time,
    id,
    name,
    sumState(if(flag = 1, 1, 0)) AS successCount,
    sumState(if(flag = 0, 1, 0)) AS failedCount,
    sumState(if(age < 10, 1, 0)) AS rang1Age,
    sumState(if((age > 10) AND (age < 20), 2, 0)) AS rang2Age,
    sumState(if(age > 20, 3, 0)) AS rang3Age,
    maxState(age) AS maxAge,
    minState(age) AS minAge
FROM datasource.table_name
GROUP BY time, id, name
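Because AggregatingMergeTree stores partial aggregation states rather than final numbers, the view must be read with the matching -Merge combinators (see the query in section Ⅳ below). A minimal sketch, assuming some rows have already flowed in from datasource.table_name:

-- success totals per name for the last day; sumMerge finalizes the stored states
SELECT name, sumMerge(successCount) AS success_total
FROM database.view_name
WHERE time >= now() - INTERVAL 1 DAY
GROUP BY name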
Ⅲ). Insert data

a). Ordinary data insertion
INSERT INTO database.table_name (id, name, age, flag) VALUES (1, 'test', 15, 0)
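Batch insertion is simply the multi-row form of VALUES, and ClickHouse performs far better with one large insert than with many single-row inserts. An illustrative sketch (the ids, names, and millisecond timestamps are made-up sample values; the time column is included so the rows land in the right partition):

-- illustrative batch: all values are sample data
INSERT INTO database.table_name (id, name, time, age, flag) VALUES
    (1, 'test1', 1611111111000, 15, 0),
    (2, 'test2', 1611111112000, 23, 1),
    (3, 'test3', 1611111113000, 31, 1)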
b). JSON data insertion
INSERT INTO database.table_name FORMAT JSONEachRow {"id": "1", "name": "test", "age": "11", "flag": "1"}
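The same format handles bulk loads from a file with one JSON object per line (a sketch; /path/data.json is a hypothetical file whose fields match the table's columns):

# stream newline-delimited JSON into the table
clickhouse-client --query="INSERT INTO database.table_name FORMAT JSONEachRow" < /path/data.json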
Ⅳ). Query data

a). Table data query
SELECT * FROM database.table_name WHERE id=1
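Since the table is partitioned by toDate(time / 1000), filtering on that same expression lets ClickHouse prune whole partitions instead of scanning them (a minimal sketch against the table defined above):

-- partition pruning: only today's partition is read
SELECT count()
FROM database.table_name
WHERE toDate(time / 1000) = today() AND id = 1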
b). Materialized view query

SELECT
    id,
    name,
    sumMerge(successCount),
    sumMerge(failedCount),
    sumMerge(rang1Age),
    sumMerge(rang2Age),
    maxMerge(maxAge),
    minMerge(minAge)
FROM database.view_name
WHERE id = 1
GROUP BY id, name

Ⅴ). Create a table with a Nested column

CREATE TABLE IF NOT EXISTS database.table_name
(
    id UInt64,
    name String,
    time UInt64,
    age UInt8,
    flag UInt8,
    nested_table_name Nested
    (
        sequence UInt32,
        id UInt64,
        name String,
        time UInt64,
        age UInt8,
        flag UInt8,
        socketAddr String,
        socketRemotePort UInt32,
        socketLocalPort UInt32,
        eventTime UInt64,
        exceptionClassName String,
        hashCode Int32,
        nextSpanId UInt64
    )
)
ENGINE = MergeTree
PARTITION BY toDate(time / 1000)
ORDER BY (id, name, time)
SETTINGS index_granularity = 8192

Ⅵ). Nested table data query

SELECT table1.*, table2.id
FROM nest.table_name AS table1
ARRAY JOIN nested_table_name AS table2

Ⅶ). Configure dictionaries

The dictionary configuration defines two dictionaries:

- url: sourced from the ClickHouse table dict.url_dict on hostname:9000 (user default), lifetime 30-36 seconds, numeric key id, attributes hash_code String and url String
- url_hash: sourced from the ClickHouse table dict.url_hash on hostname:9000 (user default), lifetime 30-36 seconds, string key hash_code, attribute url String

Ⅷ). Dictionary query

SELECT
    id,
    dictGet('name', 'name', toUInt64(name)) AS name,
    dictGetString('url', 'url', tuple(url)) AS url
FROM table_name
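Whether the dictionaries loaded successfully can be checked from the system table (a sketch; the status and last_exception columns are available on recent ClickHouse versions):

-- list configured dictionaries and their load state
SELECT name, status, last_exception
FROM system.dictionaries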
Ⅸ). Import data

clickhouse-client --query="INSERT INTO database.table_name FORMAT CSVWithNames" < /path/import_filename.csv

Ⅹ). Export data

clickhouse-client --query="SELECT * FROM database.table_name FORMAT CSV" | sed 's/"//g' > /path/export_filename.csv

Ⅺ). Check partition status

SELECT table, name, partition, active
FROM system.parts
WHERE database = 'database_name'

Ⅻ). Clean up a partition

ALTER TABLE database.table_name ON CLUSTER cluster_shardNum_replicasNum DETACH PARTITION 'partition_id'

XIII). Check the compression ratio of columns

SELECT
    database,
    table,
    name,
    formatReadableSize(sum(data_compressed_bytes) AS c) AS comp,
    formatReadableSize(sum(data_uncompressed_bytes) AS r) AS raw,
    c / r AS comp_ratio
FROM system.columns
WHERE database = 'database_name' AND table = 'table_name'
GROUP BY database, table, name

XIV). Check the disk usage of a materialized view

clickhouse-client --query="SELECT partition, count(*) AS partition_num, formatReadableSize(sum(bytes)) AS disk_size FROM system.parts WHERE database = 'database_name' GROUP BY partition" --external --file=***.sql --name=parts --structure='table String, name String, partition UInt64, engine String' -h hostname
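In the same vein, the size of each active partition of a single table can be read from system.parts (a sketch using standard system.parts columns; the database and table names are placeholders):

-- active partition sizes for one table
SELECT
    partition,
    sum(rows) AS rows,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active AND database = 'database_name' AND table = 'table_name'
GROUP BY partition
ORDER BY partition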
The above is the full content of "Sample analysis of batch data insertion in ClickHouse and commonly used ClickHouse commands". Thank you for reading! I hope what has been shared here helps you.