This article introduces "InfluxDB research and practice in Docker". Many readers have questions about running InfluxDB in Docker, so the editor has consulted various materials and put together simple, easy-to-follow steps in the hope of answering those doubts. Follow along and study!
InfluxDB research and practice:
InfluxDB introduction:
Uses the TSM (Time-Structured Merge) storage engine, which allows high ingest rates and effective data compression
Written in Go, with no external dependencies
Simple, high-performance HTTP API for writes and queries
Plugins support other data-ingestion protocols, such as Graphite, collectd, and OpenTSDB
Relay can be used to build high availability: https://docs.influxdata.com/influxdb/v1.0/high_availability/relay/
An extended SQL-like language makes it easy to query aggregated data
Tag support makes queries more efficient and fast
Retention policies automatically expire stale data
Continuous queries automatically pre-compute aggregated data, making frequent queries more efficient (see the sketch after this list)
Built-in web management page
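As a quick illustration of the last two bullets, here is a minimal InfluxQL sketch to run in the influx shell; the database name mydb, the policy name one_week, and the measurement names cpu and cpu_load_10m are hypothetical:
Create a retention policy that keeps data for 7 days and makes it the default:
# CREATE RETENTION POLICY "one_week" ON "mydb" DURATION 7d REPLICATION 1 DEFAULT
Create a continuous query that pre-computes 10-minute averages into a separate measurement:
# CREATE CONTINUOUS QUERY "cq_load_10m" ON "mydb" BEGIN SELECT mean(load) INTO "cpu_load_10m" FROM "cpu" GROUP BY time(10m) END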
Download and install:
Github (compile from source): https://github.com/influxdata/influxdb
Download from the official website:
CentOS series: wget https://dl.influxdata.com/influxdb/releases/influxdb-1.0.0.x86_64.rpm && sudo yum localinstall influxdb-1.0.0.x86_64.rpm
Source tarball: wget https://dl.influxdata.com/influxdb/releases/influxdb-1.0.0_linux_amd64.tar.gz && tar xvfz influxdb-1.0.0_linux_amd64.tar.gz
Docker: docker pull influxdb
Installation manual: https://docs.influxdata.com/influxdb/v0.9/introduction/installation/
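Since this article is about running InfluxDB in Docker, here is a minimal sketch of starting the pulled image; the container name, the host paths, and the published ports are assumptions that mirror the configuration shown below:
# docker run -d --name influxdb \
    -p 8083:8083 -p 8086:8086 -p 8088:8088 \
    -v /var/lib/influxdb:/var/lib/influxdb \
    -v /etc/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
    influxdb -config /etc/influxdb/influxdb.conf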
Configuration:
# cat /etc/influxdb/influxdb.conf
reporting-disabled = false

[registration]

[meta]
dir = "/var/lib/influxdb/meta"
hostname = "10.0.0.2" # must be the local host's address, otherwise the API cannot be connected to
bind-address = ":8088"
retention-autocreate = true
election-timeout = "1s"
heartbeat-timeout = "1s"
leader-lease-timeout = "500ms"
commit-timeout = "50ms"
cluster-tracing = false

[data]
dir = "/var/lib/influxdb/data"
max-wal-size = 104857600 # Maximum size the WAL can reach before a flush. Defaults to 100MB.
wal-flush-interval = "10m" # Maximum time data can sit in WAL before a flush.
wal-partition-flush-delay = "2s" # The delay time between each WAL partition being flushed.
wal-dir = "/var/lib/influxdb/wal"
wal-logging-enabled = true

[hinted-handoff]
enabled = true
dir = "/var/lib/influxdb/hh"
max-size = 1073741824
max-age = "168h"
retry-rate-limit = 0
retry-interval = "1s"
retry-max-interval = "1m"
purge-interval = "1h"

[admin]
enabled = true
bind-address = ":8083"
https-enabled = false
https-certificate = "/etc/ssl/influxdb.pem"

[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
write-tracing = false
pprof-enabled = false
https-enabled = false
https-certificate = "/etc/ssl/influxdb.pem"

[opentsdb]
enabled = false

[udp]
enabled = false
Note: the influxdb service listens on three ports. 8086 is the default HTTP API port, used for database operations; 8083 serves the web administration interface, which gives users a friendly, visual way to query and manage data; 8088 is mainly used for metadata management. Note also that influxdb runs as the influxdb user by default and stores its data under /var/lib/influxdb/, both of which deserve attention in production environments.
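A quick way to verify that the HTTP API port is reachable is the /ping endpoint, which returns HTTP 204 when the service is healthy (the address 10.0.0.2 comes from the configuration above):
# curl -i http://10.0.0.2:8086/ping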
Start:
As with telegraf, you can use init.d or systemd to manage influxdb.
Note that after startup you should check that the relevant ports are listening, and inspect the log to make sure the service started normally (see the commands below).
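A minimal sketch of those checks, assuming a systemd-based host and that the service unit is named influxdb:
Start the service and enable it at boot:
# systemctl start influxdb && systemctl enable influxdb
Confirm the three ports are listening:
# netstat -ntlp | grep -E ':(8083|8086|8088)'
Inspect the most recent log output:
# journalctl -u influxdb -n 50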
Use:
If the core of using telegraf is configuration, then the core of influxdb is its SQL-like query language. InfluxDB supports three modes of operation by default: the command-line shell, the HTTP API, and the web interface.
Log in to the shell of influxdb:
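A minimal sketch of connecting, assuming the influx command-line client shipped with the packages above and the address from the configuration (for the Docker case, docker exec reaches the same shell):
# influx -host 10.0.0.2 -port 8086
# docker exec -it influxdb influx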
Create a database:
# create database mydb
Create a user:
# create user "bigdata" with password 'bigdata' with all privileges
View databases:
# show databases
Insert data:
# insert bigdata,host=server001,region=HC load=88
Switch database:
# use mydb
See which measurements (similar to tables in a relational database) exist in the database:
# show measurements
Query:
# select * from cpu limit 2
Query from one hour ago up to now:
# select load from cpu where time > now() - 1h
Query from the beginning of the epoch up to 1000 days from now:
# select load from cpu where time < now() + 1000d
Query a time interval:
# select load from cpu where time > '2016-08-18' and time < '2016-09-19'
Query data after a small offset, e.g. 6 minutes after September 18, 2016 21:24:00:
# select load from cpu where time > '2016-09-18T21:24:00Z' + 6m
Use regular expressions to query data from all matching measurements:
# select * from /.*/ limit 1
# select * from /^docker/ limit 3
# select * from /.*mem.*/ limit 3
Regular-expression matching combined with a tag condition (operators =~ and !~):
# select * from cpu where "host" !~ /.*HC.*/ limit 4
# SELECT * FROM "h2o_feet" WHERE ("location" =~ /.*y.*/ OR "location" =~ /.*m.*/) AND "water_level" > 0 LIMIT 4
Grouping: group by must be used together with an aggregate function; here type is a tag:
# select count(type) from events group by time(10s)
# select count(type) as number_of_types from events group by time(10m)
# select count(type) from events where time > now() - 3h group by time(1h)
Use fill to substitute a value for empty intervals, e.g. fill(0), fill(-1), or fill(null):
# select count(type) from events where time > now() - 3h group by time(1h) fill(0)
Data aggregation across measurements with merge:
# select count(type) from user_events merge admin_events group by time(10m)
Use the HTTP API to manipulate data:
Create a database:
# curl -G "http://localhost:8086/query" --data-urlencode "q=create database mydb"
Insert data:
# curl -XPOST 'http://localhost:8086/write?db=mydb' -d 'biaoge,name=xxbandy,xingqu=coding age=2'
# curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
Insert several points at once (one point per line):
# curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02 value=0.67
cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257'
Write the data points to a file and insert them through the API:
# cat cpu_data.txt
cpu_load_short,host=server02 value=0.67
cpu_load_short,host=server02,region=us-west value=0.55 1422568543702900257
cpu_load_short,direction=in,host=server01,region=us-west value=2.0 1422568543702900257
# curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary @cpu_data.txt
Query data (--data-urlencode "epoch=s" specifies the timestamp precision; "chunk_size=20000" specifies the query chunk size; see the sketch below):
# curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select * from biaoge where xingqu='coding'"
Data analysis:
# curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select mean(load) from cpu"
# curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select load from cpu"
Here the values of load are 42, 78, and 15.4 respectively, so mean(load) returns roughly 45.13. The same aggregation restricted to a single host:
# curl -G http://localhost:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=select mean(load) from cpu where host='server01'"
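A minimal sketch of the epoch and chunk_size parameters mentioned above; note that in InfluxDB 1.x chunk_size only takes effect together with chunked=true, and the measurement name cpu is an assumption:
# curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=mydb" --data-urlencode "epoch=s" --data-urlencode "chunked=true" --data-urlencode "chunk_size=20000" --data-urlencode "q=select * from cpu"
With epoch=s, timestamps in the response are returned as epoch seconds instead of RFC3339 strings.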
Use the web interface provided by influxdb (port 8083, enabled in the [admin] section above) to operate:
This is only a brief introduction to using InfluxDB. Later, if you want to aggregate data and display it nicely in Grafana, you will need to be familiar with InfluxDB's query syntax in more depth (in practice, this comes down to skill with SQL-like statements: aggregate functions, subqueries, and so on).
At this point, the study of "InfluxDB research and practice in Docker" is over. Hopefully it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it out!