2025-01-19 Update. From: SLTechnology News&Howtos (shulou)
This article explains how to use InfluxDB: what it is, its core concepts, how to install it, and how to work with it from the command line and from Java.
1. Introduction to InfluxDB
InfluxDB is an open-source database for time series, events, and metrics, written in Go with no external dependencies. In spirit it is similar to Elasticsearch.
Functions:
- Time-series based: supports time-related functions (maximum, minimum, sum, and so on)
- Metrics: can compute aggregates over large volumes of data in real time
- Event based: supports arbitrary event data
Main features:
- Schemaless (unstructured): a measurement can have any number of columns
- Scalable, with convenient statistics: supports a series of functions such as min, max, sum, count, mean, and median
- Native HTTP support with a built-in HTTP API
- Powerful SQL-like query syntax
- Built-in management interface that is easy to use
Comparison of InfluxDB with traditional databases (the comparison table in the original post is an image and is not reproduced here; roughly, a database maps to a database, a measurement to a table, a point to a row, and tags and fields to indexed and non-indexed columns, respectively):
2. Concepts unique to InfluxDB
Let's introduce InfluxDB's unique concepts through an insert operation. In InfluxDB, a piece of data to be stored can be roughly viewed as a virtual key plus its corresponding value (the field value), written in the following format:

```
insert cpu_usage,host=server01,region=us-west value=0.64 1434055562000000000
```
The virtual key consists of the following parts: database, retention policy, measurement, tag sets, field name, and timestamp.
- Database: the database name. Multiple databases can be created in InfluxDB, and the data files of different databases are stored in isolation from each other.
- Retention policy: the storage policy, used to set how long data is kept. When a database is created, a default policy named autogen is created automatically, which keeps data forever; users can then define their own policies.
- Measurement: similar to a table in a relational database.
- Tag sets: tags are sorted in lexicographic order in InfluxDB. For example, host=server01,region=us-west and host=server02,region=us-west are two different tag sets.
- Tag: tags are a very important part of InfluxDB. The measurement name plus the tags together form the database index; each tag is a key-value pair.
- Field name: for example, value in the data above is a field name. InfluxDB supports inserting multiple fields in a single piece of data.
- Timestamp: every piece of data needs a timestamp, which is treated specially in the TSM storage engine to optimize later queries.
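To make the structure of a point concrete, here is a small Python sketch (a hypothetical helper, not part of any InfluxDB client library) that assembles a line-protocol string from a measurement, tags, fields, and timestamp, sorting the tags lexicographically as InfluxDB does:

```python
def to_line_protocol(measurement, tags, fields, timestamp=None):
    """Build an InfluxDB-style line-protocol string.

    Tags are sorted lexicographically by key, matching how InfluxDB
    canonicalizes tag sets; integer fields get the 'i' suffix.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_parts = []
    for k, v in fields.items():
        if isinstance(v, bool):
            field_parts.append(f"{k}={str(v).lower()}")
        elif isinstance(v, int):
            field_parts.append(f"{k}={v}i")
        elif isinstance(v, str):
            field_parts.append(f'{k}="{v}"')
        else:
            field_parts.append(f"{k}={v}")
    line = f"{measurement},{tag_part} {','.join(field_parts)}"
    if timestamp is not None:
        line += f" {timestamp}"
    return line

print(to_line_protocol("cpu_usage",
                       {"region": "us-west", "host": "server01"},
                       {"value": 0.64},
                       1434055562000000000))
# cpu_usage,host=server01,region=us-west value=0.64 1434055562000000000
```

Note that even though the tags were passed with region first, host comes first in the output because the tag set is sorted.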
(1) Point
A point consists of a timestamp (time), data (fields), and labels (tags). A point is the equivalent of a row in a traditional database, as shown in the table in the original post (not reproduced here).
(2) Series
A series is a collection of data in InfluxDB. Within the same database, data with identical retention policy, measurement, and tag set belong to the same series, and data of the same series are physically stored together in chronological order.
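The series grouping rule can be sketched in a few lines of Python (a hypothetical helper for illustration, not an InfluxDB API): the series key is the measurement plus the lexicographically sorted tag set, so points sharing both belong to the same series:

```python
def series_key(measurement, tags):
    # measurement + sorted tag set uniquely identifies a series
    return measurement + "," + ",".join(
        f"{k}={v}" for k, v in sorted(tags.items()))

a = series_key("cpu_usage", {"host": "server01", "region": "us-west"})
b = series_key("cpu_usage", {"region": "us-west", "host": "server01"})
c = series_key("cpu_usage", {"host": "server02", "region": "us-west"})
print(a == b)  # True: same tag set, same series
print(a == c)  # False: different host tag, different series
```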
(3) Shard
A shard is an important concept in InfluxDB, associated with the retention policy. Each retention policy has many shards, and each shard stores the data for one specific, non-overlapping time range; for example, data from 7:00 to 8:00 falls into shard0, and data from 8:00 to 9:00 falls into shard1. Each shard corresponds to an underlying TSM storage engine with its own cache, WAL, and TSM files.
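The time-bucketing idea can be sketched as follows (illustrative only; real shard-group boundaries are computed by InfluxDB from the retention policy's shardGroupDuration): a timestamp maps to a shard by integer-dividing it by the shard's duration:

```python
def shard_index(ts_seconds, shard_duration_seconds=3600):
    # Each shard covers one non-overlapping window of
    # shard_duration_seconds; timestamps in the same window
    # land in the same shard.
    return ts_seconds // shard_duration_seconds

# With 1-hour shards, data between 7:00 and 8:00 lands in one
# shard and data between 8:00 and 9:00 in the next.
print(shard_index(7 * 3600 + 600))   # 7
print(shard_index(8 * 3600 + 600))   # 8
```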
(4) Components
The TSM storage engine is mainly composed of four parts: cache, WAL, TSM files, and the compactor.
1) Cache: the cache is the equivalent of the memtable in an LSM tree. When data is inserted, it is actually written to both the cache and the WAL; you can think of the cache as the in-memory view of the data in the WAL files. When InfluxDB starts, it replays all WAL files to rebuild the cache, so even a system crash does not cause data loss.
The data in the cache does not grow without bound: a maxSize parameter controls how much memory the cached data may occupy before being written to a TSM file; if not configured, the default limit is 25MB. Each time the cache reaches this threshold, a snapshot of the current cache is taken, the cache is cleared, and a new WAL file is created for subsequent writes; the remaining WAL files are eventually deleted, and the data in the snapshot is sorted and written to a new TSM file.
2) WAL: the WAL files contain the same data as the in-memory cache; their purpose is to persist the data. After a crash, data that has not yet been written to TSM files can be recovered from the WAL.
3) TSM files: the data files; a single TSM file is at most 2GB.
4) Compactor: the compactor runs continuously in the background, checking every second whether there is data that needs to be compacted or merged.
It performs two main operations:
- One is snapshotting the cache once it reaches the threshold size and writing its contents to a new TSM file.
- The other is merging TSM files: combining multiple small TSM files into one, so that each file approaches the maximum single-file size and the total number of files is reduced; some deletion operations are also carried out during this compaction.
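The write path described above (write to cache and WAL, snapshot on threshold, flush sorted data to a TSM file) can be sketched as a toy model; this is purely illustrative Python, not the actual TSM engine code, and the size threshold counts points rather than bytes:

```python
class ToyTSMEngine:
    """Toy model of the cache/WAL/snapshot flow (illustrative only)."""

    def __init__(self, max_size=3):
        self.max_size = max_size  # stands in for the 25MB cache limit
        self.cache = []           # in-memory view of recent writes
        self.wal = []             # write-ahead log for crash recovery
        self.tsm_files = []       # snapshots flushed to "disk"

    def write(self, point):
        # A write goes to both the WAL and the cache.
        self.wal.append(point)
        self.cache.append(point)
        if len(self.cache) >= self.max_size:
            self._snapshot()

    def _snapshot(self):
        # Sort and flush the cache to a new TSM file, then start a
        # fresh cache and WAL (the old WAL is no longer needed).
        self.tsm_files.append(sorted(self.cache))
        self.cache = []
        self.wal = []

engine = ToyTSMEngine(max_size=3)
for p in [5, 1, 4, 2]:
    engine.write(p)
print(engine.tsm_files)  # [[1, 4, 5]]
print(engine.cache)      # [2]
```

After the third write the cache hits the threshold, is sorted and flushed, and the fourth write starts filling a fresh cache and WAL.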
2. InfluxDB Deployment and Installation
1. Downloading InfluxDB
Official website address: https://dl.influxdata.com
```
# 1. Download
$ wget https://dl.influxdata.com/influxdb/releases/influxdb-1.1.0.x86_64.rpm
$ yum localinstall influxdb-1.1.0.x86_64.rpm

# 2. Online yum installation
# 2.1 Configure the yum source
$ cat ...    # the repo configuration is truncated in the source text
```

Basic operations in the influx CLI:

```
> create database test            # create a database
> drop database test              # delete a database
> use test                        # switch to the database
> insert disk_free,hostname=server01 value=442221834240i    # create a measurement & insert data
> select * from disk_free         # query data
> show measurements               # list all measurements in the database
> drop measurement disk_free      # delete a measurement
```
Notice that in the flurry of operations above there is no command resembling create table. Why?
Because InfluxDB has no explicit statement for creating a table: a new measurement can only be created implicitly by inserting data into it.
```
insert disk_free,hostname=server01 value=442221834240i
```

Dissecting the command above: disk_free is the measurement (table) name, hostname is a tag, and value=xx is the field value; a point can carry multiple field values, and the system appends a timestamp automatically. You can also supply the timestamp manually:

```
insert disk_free,hostname=server01 value=442221834240i 1435362189575692182
```

2. Data Retention Policies (Retention Policies)
Introduction: InfluxDB provides no way to delete individual data records directly; instead it offers retention policies, which specify how long data is kept. Data older than the specified duration is deleted automatically.
```
# View the retention policies of the current database
> show retention policies on test
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        false
```
Explanation:
- name: the policy name; in this example, autogen.
- duration: how long data is kept; 0 means forever.
- shardGroupDuration: the time range covered by each shard group. The shard group is a basic storage structure of InfluxDB; queries spanning more than this window are less efficient.
- replicaN: short for replication, the number of copies.
- default: whether this is the default policy.
```
# Create a new retention policy
> create retention policy "rp_name" on "test" duration 3w replication 1 default

# Modify a retention policy
> alter retention policy "rp_name" on "test" duration 30d default
> show retention policies on test
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        false
rp_name 720h0m0s 24h0m0s            1        true

# Delete a retention policy
> drop retention policy "rp_name" on "test"
```
Anatomy of the create statement:

```
create retention policy "rp_name" on "test" duration 3w replication 1 default
```

- rp_name: the name of the retention policy.
- test: the database it applies to.
- 3w: keep data for 3 weeks; data older than 3 weeks is deleted. InfluxDB supports several duration units, e.g. h (hour), d (day), w (week), and the duration must be at least 1 hour.
- replication: the number of copies, usually 1.
- default: make this the default policy.
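As a quick check on the duration units, here is a tiny converter (a hypothetical helper, not part of the influx CLI) that turns an InfluxQL-style duration into hours; it explains why 30d shows up as 720h0m0s in the policy listing:

```python
UNIT_HOURS = {"h": 1, "d": 24, "w": 7 * 24}

def duration_to_hours(s):
    # e.g. "3w" -> 504, "30d" -> 720, "168h" -> 168
    value, unit = int(s[:-1]), s[-1]
    return value * UNIT_HOURS[unit]

print(duration_to_hours("3w"))    # 504
print(duration_to_hours("30d"))   # 720
print(duration_to_hours("168h"))  # 168
```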
3. Continuous Queries (Continuous Queries)
Introduction: a continuous query is a statement that runs automatically and periodically inside the database; it must contain the select keyword and a group by time() clause. InfluxDB writes the query results into a specified measurement.
Purpose: continuous queries are the best way to downsample data. Combined with retention policies, they greatly reduce InfluxDB's resource usage. Because the results are stored in a dedicated measurement, they also make it convenient to query the data at different precisions later.
Creation syntax:

```
CREATE CONTINUOUS QUERY <cq_name> ON <database>
[RESAMPLE [EVERY <interval>] [FOR <interval>]]
BEGIN
  SELECT <function>(<column>) [, <function>(<column>)]
  INTO <target_measurement>
  FROM <measurement>
  [WHERE <condition>]
  GROUP BY time(<interval>) [, <tag_key>]
END
```
For example:

```
CREATE CONTINUOUS QUERY wj_30m ON test
BEGIN
  SELECT mean(connected_clients), median(connected_clients),
         max(connected_clients), min(connected_clients)
  INTO redis_clients_30m
  FROM redis_clients
  GROUP BY ip, port, time(30m)
END
```

Explanation: this creates a continuous query named wj_30m in the test database. Every 30 minutes it takes the mean, median, maximum, and minimum of the connected_clients field from the redis_clients measurement and inserts them into the redis_clients_30m measurement, using the default retention policy.
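What the continuous query computes every 30 minutes can be sketched offline in plain Python (no InfluxDB involved; the function and data below are illustrative): group points into 30-minute buckets and take aggregates per bucket:

```python
from collections import defaultdict

def downsample(points, bucket_seconds=1800):
    """points: list of (timestamp_seconds, value) pairs.
    Returns per-bucket aggregates, mimicking
    SELECT mean(v), max(v), min(v) ... GROUP BY time(30m)."""
    buckets = defaultdict(list)
    for ts, v in points:
        # Align each timestamp to the start of its 30-minute bucket.
        buckets[ts // bucket_seconds * bucket_seconds].append(v)
    return {
        start: {"mean": sum(vs) / len(vs), "max": max(vs), "min": min(vs)}
        for start, vs in sorted(buckets.items())
    }

pts = [(0, 10), (600, 20), (1200, 30), (1800, 5), (2400, 15)]
print(downsample(pts))
# {0: {'mean': 20.0, 'max': 30, 'min': 10}, 1800: {'mean': 10.0, 'max': 15, 'min': 5}}
```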
Other operations on continuous queries:

```
# View the continuous queries in each database
> show continuous queries

# Delete a continuous query
> drop continuous query <cq_name> on <database>
```

4. User Management and Privileges
User management:

```
# Log in as a given user
$ influx -username user -password abcd

# Show all users
> show users
user admin
---- -----
zy   true

# Create an ordinary user
> CREATE USER "username" WITH PASSWORD 'password'

# Create an administrator user
> CREATE USER "admin" WITH PASSWORD 'admin' WITH ALL PRIVILEGES

# Set a user's password
> SET PASSWORD FOR <username> = '<password>'

# Delete a user
> DROP USER "username"
```
Permission settings:

```
# Grant administrator privileges to an existing user
> GRANT ALL PRIVILEGES TO <username>

# Revoke a user's privileges
> REVOKE ALL PRIVILEGES FROM <username>

# Show a user's privileges on each database
> SHOW GRANTS FOR <username>
```

5. Querying InfluxDB
InfluxDB supports two query styles: SQL-like queries and HTTP API queries.

```
# SQL-like query (fetch the latest three points)
> SELECT * FROM weather ORDER BY time DESC LIMIT 3

# HTTP API query
$ curl -G 'http://localhost:8086/query?pretty=true' --data-urlencode "db=test" --data-urlencode "q=SELECT * FROM weather ORDER BY time DESC LIMIT 3"
```

4. The Java API of InfluxDB
1. CRUD operations on InfluxDB
Here we use a Maven project to test create, read, update, and delete operations against an InfluxDB database.
Maven dependency:

```
<dependency>
    <groupId>org.influxdb</groupId>
    <artifactId>influxdb-java</artifactId>
    <version>2.5</version>
</dependency>
```
InfluxDBUtils:
```java
import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;
import org.influxdb.dto.Query;
import org.influxdb.dto.QueryResult;
import java.util.Map;

/**
 * Created with IntelliJ IDEA.
 * User: ZZY  Date: 2019-11-15  Time: 10:10
 */
public class InfluxDBConnect {
    private String username;   // username
    private String password;   // password
    private String openurl;    // connection address
    private String database;   // database
    private InfluxDB influxDB;

    public InfluxDBConnect(String username, String password, String openurl, String database) {
        this.username = username;
        this.password = password;
        this.openurl = openurl;
        this.database = database;
    }

    /** Connect to the time series database and get an InfluxDB instance. */
    public InfluxDB getConnect() {
        if (influxDB == null) {
            influxDB = InfluxDBFactory.connect(openurl, username, password);
            influxDB.createDatabase(database);
        }
        return influxDB;
    }

    /**
     * Set the retention policy:
     * "defalut"  policy name; database  database name; 30d  keep data 30 days;
     * 1  replica count; DEFAULT  make this the default policy.
     */
    public void setRetentionPolicy() {
        String command = String.format(
                "CREATE RETENTION POLICY \"%s\" ON \"%s\" DURATION %s REPLICATION %s DEFAULT",
                "defalut", database, "30d", 1);
        this.query(command);
    }

    /** Query. */
    public QueryResult query(String command) {
        return influxDB.query(new Query(command, database));
    }

    /**
     * Insert a point.
     * @param measurement measurement (table)
     * @param tags        tags
     * @param fields      fields
     */
    public void insert(String measurement, Map<String, String> tags, Map<String, Object> fields) {
        Point.Builder builder = Point.measurement(measurement);
        builder.tag(tags);
        builder.fields(fields);
        influxDB.write(database, "", builder.build());
    }

    /**
     * Delete.
     * @param command delete statement
     * @return the error message, if any
     */
    public String deleteMeasurementData(String command) {
        QueryResult query = influxDB.query(new Query(command, database));
        return query.getError();
    }

    /** Create a database. */
    public void createDB(String dbName) {
        influxDB.createDatabase(dbName);
    }

    /** Delete a database. */
    public void deleteDB(String dbName) {
        influxDB.deleteDatabase(dbName);
    }
}
```
Pojo:
```java
import java.io.Serializable;

/**
 * Created with IntelliJ IDEA.
 * User: ZZY  Date: 2019-11-15  Time: 10:07
 */
public class CodeInfo implements Serializable {
    private static final long serialVersionUID = 1L;

    private Long id;
    private String name;
    private String code;
    private String descr;
    private String descrE;
    private String createdBy;
    private Long createdAt;
    private String time;
    private String tagCode;
    private String tagName;

    public static long getSerialVersionUID() {
        return serialVersionUID;
    }

    // getters and setters omitted...
}
```
Test:
```java
import org.influxdb.dto.QueryResult;
import java.util.*;

/**
 * Created with IntelliJ IDEA.
 * User: ZZY  Date: 2019-11-15  Time: 11:45
 * Description: test insert, delete, and query operations against InfluxDB.
 */
public class Client {
    public static void main(String[] args) {
        String username = "admin";                       // username
        String password = "admin";                       // password
        String openurl = "http://192.168.254.100:8086";  // connection address
        String database = "test";                        // database
        InfluxDBConnect influxDBConnect = new InfluxDBConnect(username, password, openurl, database);
        influxDBConnect.getConnect();
        // insertInfluxDB(influxDBConnect);
        testQuery(influxDBConnect);
    }

    // Insert data into a measurement.
    public static void insertInfluxDB(InfluxDBConnect influxDB) {
        Map<String, String> tags = new HashMap<>();
        Map<String, Object> fields = new HashMap<>();
        List<CodeInfo> list = new ArrayList<>();

        CodeInfo info1 = new CodeInfo();
        info1.setId(1L);
        info1.setName("BANKS");
        info1.setCode("ABC");
        info1.setDescr("Agricultural Bank of China");
        info1.setDescrE("ABC");
        info1.setCreatedBy("system");
        info1.setCreatedAt(new Date().getTime());

        CodeInfo info2 = new CodeInfo();
        info2.setId(2L);
        info2.setName("BANKS");
        info2.setCode("CCB");
        info2.setDescr("China Construction Bank");
        info2.setDescrE("CCB");
        info2.setCreatedBy("system");
        info2.setCreatedAt(new Date().getTime());

        list.add(info1);
        list.add(info2);

        String measurement = "sys_code";
        for (CodeInfo info : list) {
            tags.put("TAG_CODE", info.getCode());
            tags.put("TAG_NAME", info.getName());
            fields.put("ID", info.getId());
            fields.put("NAME", info.getName());
            fields.put("CODE", info.getCode());
            fields.put("DESCR", info.getDescr());
            fields.put("DESCR_E", info.getDescrE());
            fields.put("CREATED_BY", info.getCreatedBy());
            fields.put("CREATED_AT", info.getCreatedAt());
            influxDB.insert(measurement, tags, fields);
        }
    }

    // Query data from a measurement.
    public static void testQuery(InfluxDBConnect influxDB) {
        String command = "select * from sys_code";
        QueryResult results = influxDB.query(command);
        if (results == null) {
            return;
        }
        for (QueryResult.Result result : results.getResults()) {
            List<QueryResult.Series> series = result.getSeries();
            for (QueryResult.Series serie : series) {
                System.out.println("serie: " + serie.getName());   // measurement name
                Map<String, String> tags = serie.getTags();
                if (tags != null) {
                    System.out.println("tags: ------------------------");
                    tags.forEach((key, value) -> System.out.println(key + ": " + value));
                }
                System.out.println("values: ----------------------");
                List<List<Object>> values = serie.getValues();  // all rows of this series
                List<String> columns = serie.getColumns();      // all column names (upper case)
                for (List<Object> list : values) {
                    for (int i = 0; i < list.size(); i++) {
                        String propertyName = setColumns(columns.get(i));  // field name in camelCase
                        Object value = list.get(i);
                        System.out.println(value.toString());
                    }
                }
                System.out.println("columns: ");
                for (String column : columns) {
                    System.out.println(column);
                }
            }
        }
    }

    // Delete data from a measurement.
    public static void deletMeasurementData(InfluxDBConnect influxDB) {
        String command = "delete from sys_code where TAG_CODE='ABC'";
        String err = influxDB.deleteMeasurementData(command);
        System.out.println(err);
    }

    // Convert an UPPER_SNAKE_CASE column name to camelCase.
    private static String setColumns(String column) {
        String[] cols = column.split("_");
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < cols.length; i++) {
            String col = cols[i].toLowerCase();
            if (i != 0) {
                String start = col.substring(0, 1).toUpperCase();
                String end = col.substring(1).toLowerCase();
                col = start + end;
            }
            sb.append(col);
        }
        return sb.toString();
    }
}
```

5. Importing and Exporting Data in InfluxDB
1. Exporting data
(1) Plain export

```
$ influx_inspect export -datadir "/var/lib/influxdb/data" -waldir "/var/lib/influxdb/wal" -out "test_sys" -database "test" -start 2019-07-21T08:00:01Z

# Explanation
influx_inspect export
  -datadir "/data/influxdb/data"   # do not change: InfluxDB's default data directory
  -waldir "/data/influxdb/wal"     # do not change: InfluxDB's default WAL directory
  -out "telemetry_vcdu_time"       # name of the exported data file
  -database telemetry_vcdu_time    # the database to export
  -start 2019-07-21T08:00:01Z      # start time of the data to export
```

A file named test_sys now appears in the current directory; view its contents.

(2) Export to a CSV file

```
$ influx -database 'test' -execute 'select * from sys_code' -format='csv' > sys_code.csv
```
At this point an extra sys_code.csv file appears in the current directory; view its contents.
2. Importing data

```
$ influx -import -path=telemetry_sat_time -precision=ns

# Explanation
influx -import               # import mode, no extra value needed
  -path=telemetry_sat_time   # the file to import
  -precision=ns              # time precision of the imported data
```

That is all of "how to use InfluxDB". Thanks for reading; hopefully this walkthrough helps you get started with InfluxDB.