Shulou (shulou.com), SLTechnology News & Howtos, updated 2025-01-20
This article explains how to build data visualization with TDengine and Grafana. It should be a useful reference; I hope you learn a lot from it.
After learning the basic functions, I launched a TDengine-backed version of our monitoring system, but found that "group by" was not usable from Grafana at first. Instead, the status data for multiple SMS types could only be put on one dashboard by writing a separate "where" clause per series, as shown in the figure:
With more "where" conditions, this approach becomes clumsy and inflexible.
So I studied the official documentation carefully to understand "super tables", "continuous queries", and so on. I ran into quite a few problems along the way and record them here (test environment; the data is generated by simulation).
I. Installation and operation
Test environment
cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
The installation is simple:
rpm -ivh tdengine-1.6.3.1-3.x86_64.rpm
All configuration is left at the defaults (the profile is unchanged).
Start TDengine:
systemctl start taosd
II. Building databases and tables
Enter "taos" on the command line to get the prompt:
taos>
The database and table operations below are all carried out at this prompt.
1. Create a database
create database jk keep 365 precision 'us';
Description:
keep 365 means the database retains 365 days of data; anything older is cleared automatically (since this is a time-series database, you cannot delete the tables' data manually).
precision 'us' sets the database's timestamp precision to microseconds (the default is milliseconds, which can also be written explicitly as precision 'ms'). Testing showed both single and double quotation marks work, but omitting the quotes does not.
Time precision can no longer be set in the configuration file; it must be specified when the database is created. (Several configuration options did not take effect in testing; this was confirmed officially after I opened an issue.)
TDengine was originally designed for the Internet of Things, where millisecond precision is enough for device data collection. Our SMS platform, however, produces bursts of data, so to avoid possible data loss the precision is set to microseconds. Testing showed this works very well. The simulated inserts take their timestamps from "now": at millisecond precision (left side of the figure below) some statements return "0 row(s)", meaning some rows were not inserted, while at microsecond precision (right side) no data was lost.
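To make the collision risk concrete, here is a small back-of-the-envelope sketch (plain Python, not TDengine code): it quantizes a simulated 2 ms burst of 1,000 events to millisecond and microsecond ticks and counts how many distinct timestamp keys survive. Since TDengine uses the timestamp as the primary key, events that land on the same tick collide.

```python
# Sketch (not TDengine code): why timestamp precision matters for bursty
# inserts. Rows whose "now" timestamps quantize to the same tick collide
# on the primary key and some of them are dropped.

def distinct_timestamps(events_ns, tick_ns):
    """Quantize event times (in ns) to the given tick and count unique keys."""
    return len({t // tick_ns for t in events_ns})

# Simulate a burst: 1000 events spread over 2 milliseconds,
# one event every 2 microseconds.
burst = [i * 2_000 for i in range(1000)]

ms_keys = distinct_timestamps(burst, 1_000_000)  # millisecond ticks
us_keys = distinct_timestamps(burst, 1_000)      # microsecond ticks

print(ms_keys)  # 2    -> only 2 distinct keys, 998 rows would collide
print(us_keys)  # 1000 -> every event keeps its own key, no collisions
```

The numbers are invented, but the effect matches what the figure shows: at millisecond precision a burst collapses onto a handful of primary keys.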
2. Create a super table
Enter the database "jk":
taos> use jk;
Database changed.
taos> create table jiankong (ts timestamp, gatewayid binary(6), companyid binary(20), provinceid binary(10), cityid binary(10), value int, timestr binary(30)) tags (type binary(10), subtype binary(10));
The SMS system has 3 kinds of type and hundreds of kinds of subtype, so these static attributes are used as tags (put simply: any field you want to "group by" needs to be defined as a tag).
A few notes on the super table (STable):
A STable is an abstraction over data collection points of the same type, that is, a set of instances of the same kind, and it contains multiple child tables that share one data structure.
Each STable defines a table structure and a set of tags for its child tables: the table structure is the set of data columns recorded in the table together with their data types; the tag names and their data types are defined by the STable, while the tag values record the static information of each child table and are used to group and filter the child tables.
A child table is essentially an ordinary table, made up of a timestamp primary key and several data columns, with each row recording concrete data; querying it works exactly like querying an ordinary table. The difference is that a child table belongs to a super table and carries a set of tag values defined by the STable.
Multi-table aggregate queries run across all child tables created from the STable. Conditional filtering on any tag values ("where") is supported, and results can be aggregated by tag values ("group by"). Fuzzy-match filtering on binary-type tags is not supported.
Tag data (tag values) are associated directly with each child table: one combination of tag values (one or more tags, up to 6) identifies one child table (with the "auto-create table on write" mechanism, the same tag values can be mapped to multiple child tables).
Tag values support Chinese characters, in which case the tag type needs to be NCHAR (verified in other tests).
3. Create a child table (the child table is a normal table, and the table structure is completely defined by the super table):
create table jiankong_sub_send using jiankong tags ('send', 'send');
create table jiankong_sub_delivrd using jiankong tags ('delivrd', 'delivrd');
create table jiankong_sub_undeliv_db_0108 using jiankong tags ('undeliv', 'DB:0108');
create table jiankong_sub_undeliv_db_0107 using jiankong tags ('undeliv', 'DB:0107');
create table jiankong_sub_undeliv_balance using jiankong tags ('undeliv', 'BALANCE');
create table jiankong_sub_undeliv_id_0076 using jiankong tags ('undeliv', 'ID:0076');
create table jiankong_sub_undeliv_ib_0008 using jiankong tags ('undeliv', 'IB:0008');
A child table is created for each combination of type and subtype (this is just a test, so not all of the hundreds of subtypes get their own table).
4. Insert data
INSERT INTO jiankong_sub_send VALUES (now, '3034', '1564', '109', '1272', 2, '201909231530');
INSERT INTO jiankong_sub_delivrd VALUES (now, '3034', '1564', '109', '1272', 2, '201909231530');
INSERT INTO jiankong_sub_undeliv_balance VALUES (now, '1179', '152', '106', '1000', 1, '201910071113');
INSERT INTO jiankong_sub_undeliv_db_0108 VALUES (now, '90', '548', '123', '1237', 11415, '201910061127');
INSERT INTO jiankong_sub_undeliv_db_0107 VALUES (now, '2261', '808', '116', '1314', 2, '201910032106');
(the statements for the remaining child tables follow the same pattern)
These statements insert simulated data into the seven child tables created above. To simulate a large volume of data, you need a shell script (other tooling works too) to inject rows.
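For reference, a data-injection script along these lines could look as follows. This is a hedged sketch in Python rather than the shell script actually used; the table names match the child tables above, but the random value ranges and the fixed timestr are made up for illustration. Its output could be piped into the taos CLI.

```python
# Hypothetical generator of INSERT statements for the simulated load.
# Column order matches the super table: ts, gatewayid, companyid,
# provinceid, cityid, value, timestr.
import random

SUBTABLES = [
    "jiankong_sub_send",
    "jiankong_sub_delivrd",
    "jiankong_sub_undeliv_balance",
]

def make_insert(table):
    """Build one INSERT statement with random (illustrative) field values."""
    return (
        f"INSERT INTO {table} VALUES (now, "
        f"'{random.randint(1000, 9999)}', '{random.randint(100, 999)}', "
        f"'{random.randint(100, 999)}', '{random.randint(1000, 9999)}', "
        f"{random.randint(1, 5)}, '201911180000');"
    )

if __name__ == "__main__":
    for _ in range(3):
        print(make_insert(random.choice(SUBTABLES)))
```

Taking timestamps from "now" on the server side is what makes the microsecond precision discussed earlier matter for bursts.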
When writing data, you cannot write to the STable directly; writes go to the individual child tables.
5. Querying databases, table structures, etc.
Query database information:
taos> show databases;
name | created time          | ntables | vgroups | replica | days | keep1,keep2,keep(D) | ... | precision | status |
log  | 19-11-18 16:37:14.025 |       4 |       1 |       1 |   10 | 30,30,30            | ... | us        | ready  |
jk   | 19-11-18 16:48:19.867 |      10 |       1 |       1 |   10 | 365,365,365         | ... | us        | ready  |
Query OK, 2 row(s) in set (0.002487s)
Query the super table:
taos> show stables;
name     | created_time          | columns | tags | tables |
jiankong | 19-11-18 16:48:41.540 |       7 |    2 |      7 |
Query OK, 1 row(s) in set (0.002140s)
Query the table structure of the super table:
taos> describe jiankong;
Field      | Type      | Length | Note |
ts         | TIMESTAMP |      8 |      |
gatewayid  | BINARY    |      6 |      |
companyid  | BINARY    |     20 |      |
provinceid | BINARY    |     10 |      |
cityid     | BINARY    |     10 |      |
value      | INT       |      4 |      |
timestr    | BINARY    |     30 |      |
type       | BINARY    |     10 | tag  |
subtype    | BINARY    |     10 | tag  |
Query OK, 9 row(s) in set (0.001301s)
You can see "tag" in the Note column, indicating that this column is a label
Query subtables:
taos> show tables;
table_name                   | created_time          | columns | stable   |
jiankong_sub_delivrd         | 19-11-18 16:49:17.009 |       7 | jiankong |
jiankong_sub_undeliv_ib_0008 | 19-11-18 16:49:17.025 |       7 | jiankong |
jiankong_sub_undeliv_db_0108 | 19-11-18 16:49:17.016 |       7 | jiankong |
jiankong_sub_undeliv_db_0107 | 19-11-18 16:49:17.018 |       7 | jiankong |
jiankong_sub_undeliv_id_0076 | 19-11-18 16:49:17.023 |       7 | jiankong |
jiankong_sub_send            | 19-11-18 16:49:17.003 |       7 | jiankong |
jiankong_sub_undeliv_balance | 19-11-18 16:49:17.021 |       7 | jiankong |
Query OK, 7 row(s) in set (0.007001s)
Query the table structure of a specific child table:
taos> describe jiankong_sub_undeliv_db_0108;
Field      | Type      | Length | Note    |
ts         | TIMESTAMP |      8 |         |
gatewayid  | BINARY    |      6 |         |
companyid  | BINARY    |     20 |         |
provinceid | BINARY    |     10 |         |
cityid     | BINARY    |     10 |         |
value      | INT       |      4 |         |
timestr    | BINARY    |     30 |         |
type       | BINARY    |     10 | undeliv |
subtype    | BINARY    |     10 | DB:0108 |
Query OK, 9 row(s) in set (0.001195s)
You can see "undeliv" (the type field in the super table) and "DB:0108" (the subtype field in the super table) in the Note column. These two static label values determine the child table.
6. Data query
Perform a grouped aggregate query on type:
taos> select sum(value) from jk.jiankong group by type;
sum(value) | type    |
  11827688 | delivrd |
  55566578 | send    |
  46687487 | undeliv |
Query OK, 3 row(s) in set (0.018251s)
Perform a grouped aggregate query on subtype:
taos> select sum(value) from jk.jiankong group by subtype;
sum(value) | subtype |
      9317 | BALANCE |
     65219 | DB:0107 |
   2077691 | DB:0108 |
   2804417 | IB:0008 |
  41730843 | ID:0076 |
  11827688 | delivrd |
  55566578 | send    |
Query OK, 7 row(s) in set (0.013978s)
Group aggregate queries for type and subtype:
taos> select sum(value) from jk.jiankong group by type, subtype;
sum(value) | type    | subtype |
  11827688 | delivrd | delivrd |
  55566578 | send    | send    |
      9317 | undeliv | BALANCE |
     65219 | undeliv | DB:0107 |
   2077691 | undeliv | DB:0108 |
   2804417 | undeliv | IB:0008 |
  41730843 | undeliv | ID:0076 |
Query OK, 7 row(s) in set (0.732830s)
Group and aggregate queries for type and subtype by day:
taos> select sum(value) from jk.jiankong interval(1d) group by type, subtype;
ts                       | sum(value) | type    | subtype |
19-11-18 00:00:00.000000 |    1760800 | delivrd | delivrd |
19-11-19 00:00:00.000000 |      14768 | delivrd | delivrd |
19-11-20 00:00:00.000000 |    3290720 | delivrd | delivrd |
19-11-21 00:00:00.000000 |    4973640 | delivrd | delivrd |
19-11-22 00:00:00.000000 |    1787760 | delivrd | delivrd |
... (the per-day rows for send and the undeliv subtypes follow the same pattern) ...
Query OK, 35 row(s) in set (0.023865s)
Here interval is the length of the aggregation window; the minimum is 10 milliseconds (10a).
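To make the windowing behaviour concrete, here is a small Python sketch of what interval(1d) with group by type, subtype computes: each row's timestamp is floored to the start of its day bucket, and value is summed per (day, type, subtype). The sample rows are invented, not taken from the tables above.

```python
# Sketch of interval(1d) group by type, subtype: floor each timestamp to
# its day bucket and sum `value` per (bucket, type, subtype) key.
from collections import defaultdict

DAY_US = 24 * 3600 * 1_000_000  # one day in microseconds

rows = [  # (ts_us, value, type, subtype) -- invented sample data
    (0 * DAY_US + 5, 10, "send", "send"),
    (0 * DAY_US + 9, 20, "send", "send"),
    (1 * DAY_US + 1, 7, "undeliv", "DB:0108"),
]

buckets = defaultdict(int)
for ts, value, typ, subtype in rows:
    day = ts - ts % DAY_US          # floor to the interval start
    buckets[(day, typ, subtype)] += value

for key in sorted(buckets):
    print(key, buckets[key])
# the two 'send' rows fall in day 0 and sum to 30; the 'undeliv' row
# lands alone in day 1 with sum 7
```

The same flooring logic, with a smaller bucket width, is what the 20-second windows in the continuous-query section later produce.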
If no super table has been created, a group-by aggregate query on an ordinary table reports an error:
taos> select sum(value) from jk.jiankong group by type;
TSDB error: invalid SQL: group by only available for STable query
7. Automatically create a child table when writing data
We had another requirement: there are hundreds of static categories to monitor, and the set is not fixed, so it is impossible to create every child table up front when building the database. This feature solved the problem completely.
The following is an excerpt from the official website:
In some special scenarios the writer does not know whether a device's table already exists, so the auto-create syntax can be used: when writing data, a non-existent child table is created automatically with the structure defined by its super table; if the table already exists, no new table is created.
Note: the auto-create statement can only create child tables, not super tables, so the super table must already be defined. The syntax is very similar to insert/import, with the super table and tag values added. Specifically:
INSERT INTO <tb_name> USING <stb_name> TAGS (tag_value1, ...) VALUES (field_value1, ...) (field_value1, ...) ...
For comparison, creating a child table explicitly with create:
create table jiankong_sub_send using jiankong tags ('send', 'send');
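As a small illustration of the difference, the hypothetical helper below builds the auto-create form of the INSERT for a given tag pair. The jiankong_sub_* naming rule here is my own convention for the example, not something TDengine imposes; it just happens to reproduce names like jiankong_sub_undeliv_db_0108 from the tag values.

```python
# Hypothetical helper: emit the auto-create form of INSERT. If the child
# table for this (type, subtype) pair does not exist, TDengine creates it
# from the super table on first write; otherwise it just inserts.
import re

def auto_insert(typ, subtype, values_sql):
    # Derive a child-table name from the tag pair, e.g. DB:0108 -> db_0108.
    suffix = re.sub(r"[^a-z0-9]+", "_", f"{typ}_{subtype}".lower()).strip("_")
    table = f"jiankong_sub_{suffix}"
    return (
        f"INSERT INTO {table} USING jiankong TAGS ('{typ}', '{subtype}') "
        f"VALUES ({values_sql});"
    )

print(auto_insert("undeliv", "DB:0108",
                  "now, '90', '548', '123', '1237', 2, '201910061127'"))
```

With hundreds of drifting subtypes, a writer built this way never needs a pre-created table list.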
III. Installing and configuring Grafana
1. Installation
Download Grafana's rpm package from the official site (https://grafana.com/grafana/download), then install it:
rpm -ivh grafana-6.4.4-1.x86_64.rpm
2. Copy TDengine's Grafana plug-in to Grafana's plug-in directory
The Grafana plug-in for TDengine ships with the installation package, in /usr/local/taos/connector/grafana.
cp -r /usr/local/taos/connector/grafana/tdengine/ /var/lib/grafana/plugins
3. Start Grafana
systemctl start grafana-server
4. Log in to the Grafana server at host:3000 in a browser
The default username and password are admin
5. Add TDengine data source
Find "TDengine" at the bottom
Name: "TDengine" (any name works)
Host: the test server address, http://192.168.8.66:6020
User: defaults to "root"
Password: defaults to "taosdata"
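These are the same credentials the plugin uses against taosd's REST interface. As a sanity check outside Grafana, you can construct the equivalent HTTP request yourself; the /rest/sql endpoint and port 6020 follow the TDengine 1.6 setup described here and may differ in other versions. The sketch below only builds the request (nothing is sent), so it can be inspected offline.

```python
# Build (but do not send) the REST request the data source would issue:
# POST the SQL text to /rest/sql with HTTP Basic auth.
import base64
import urllib.request

def build_request(host, sql, user="root", password="taosdata"):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"http://{host}/rest/sql",
        data=sql.encode(),
        headers={"Authorization": f"Basic {token}"},
        method="POST",
    )

req = build_request("192.168.8.66:6020",
                    "select sum(value) from jk.jiankong group by type")
print(req.full_url)                     # http://192.168.8.66:6020/rest/sql
print(req.get_header("Authorization"))  # Basic cm9vdDp0YW9zZGF0YQ==
```

To actually execute it, pass the request to urllib.request.urlopen against a reachable taosd.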
Test it:
6. Add Folder
Put the dashboards for the same type of monitoring into one Folder.
7. Add Dashboard
After entering the Folder you just created, create a Dashboard
In the SQL entered at INPUT, pay attention to the position of "fill": it must come before "group by" (right after the interval clause), otherwise an error is reported.
Configure graphic display
You can choose graphs, tables, dashboards, etc., according to your needs.
A "graph" is configured here; at the bottom of the panel are the detailed display options, such as markers, fill, and custom curve colors for the displayed fields.
Give this dashboard a name you can understand at a glance:
8. Six dashboards are configured
9. Examples without and with group by
Without group by, you need multiple SQL statements differentiated by "where" conditions; with many categories this is troublesome and inflexible.
With group by, a sql will solve the problem:
The figure below shows the curve trends with and without group by; you can see they are exactly the same.
IV. A first look at advanced features, and some thoughts
1. Continuous query (Continuous Query): user-level precomputation
I think TDengine's concept of "precomputation" is very good. From the official docs:
To improve query performance, and taking advantage of the fact that IoT data is not modified once written, TDengine stores the maximum, minimum, and other statistics for each saved data block. If a query covers all of the data in a block, the precomputed results are used directly and the block contents are not read. Since the precomputed index is much smaller than the data stored on disk, using precomputed results greatly reduces I/O and speeds up queries that are bottlenecked on disk I/O.
The official website documents for continuous queries are excerpted as follows:
Stream Computing based on sliding window (Stream)
A continuous query is a query that TDengine executes automatically on a schedule, computed over sliding windows; it is a simplified, time-driven form of stream computing. For a table or super table, TDengine can run the query periodically and either push the results to the user or write them back into TDengine. Each execution covers a time window that slides forward as time passes. When defining a continuous query you specify the window size (parameter interval) and the forward step (parameter sliding).
Writing the results back into TDengine is effectively user-level precomputation: TDengine aggregates in the background according to the user-defined window and step, so when the user later queries the written-back table, reads are very fast.
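The interval/sliding semantics can be sketched in a few lines of Python: windows are "interval" wide and start every "sliding" units, so with interval(20s) sliding(10s) consecutive windows overlap and one event can be counted in up to two of them. This is an illustration of the semantics only, not TDengine's implementation.

```python
# Sketch of interval/sliding window aggregation: windows of width
# `interval` start every `sliding` units; overlapping windows both
# count an event that falls into them.
def sliding_windows(events, interval, sliding):
    """events: list of (ts, value); returns {window_start: sum(value)}."""
    out = {}
    if not events:
        return out
    last_ts = max(ts for ts, _ in events)
    start = 0
    while start <= last_ts:
        total = sum(v for ts, v in events if start <= ts < start + interval)
        if total:
            out[start] = total
        start += sliding
    return out

events = [(5, 1), (12, 2), (25, 4)]
print(sliding_windows(events, interval=20, sliding=10))
# {0: 3, 10: 6, 20: 4} -- the event at ts=12 is counted in the
# [0, 20) window and again in the overlapping [10, 30) window
```

Note the event at ts=12 appears in two window sums; that overlap is exactly what sliding smaller than interval buys you.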
Create a continuous query:
taos> create table test_stream_sum as select sum(value) from jiankong interval(20s) sliding(10s) group by type, subtype;
Query OK, 1 row(s) affected (0.000983s)
For comparison, the select part of that continuous query, run on its own, outputs:
taos> select sum(value) from jiankong interval(20s) group by type, subtype;
ts                       | sum(value) | type    | subtype |
19-11-18 16:50:40.000000 |       9088 | delivrd | delivrd |
19-11-18 16:51:00.000000 |      31808 | delivrd | delivrd |
19-11-18 16:51:20.000000 |      15904 | delivrd | delivrd |
... (further 20-second windows for delivrd, send and undeliv omitted) ...
The actual data in the table of the automatically created continuous query is:
taos> select * from test_stream_sum;
ts              | sum_value_ |
19-11-18 17:... |      18460 |
19-11-18 17:... |      15620 |
...
Query OK, 1730 row(s) in set (0.00041s)
This is not the expected result: the aggregates are not broken out by the group by fields I defined.
So I opened a GitHub issue. A TDengine (taos) engineer replied that "continuous query does not support group by very well at present; this problem is already on our plan list and the function will be improved later", because "after writing back this way, the timestamp primary keys in the new table could collide (different group-by values can share the same time), so it is not supported for now".
Thank you for reading. I hope this walk-through of data visualization with TDengine and Grafana is helpful.