How to configure and use Loki

2025-04-12 Update From: SLTechnology News&Howtos


Shulou (Shulou.com), 05/31 report

This article explains how to configure and use Loki. The approach described here is simple, fast, and practical, so let's walk through it.

Background: why Loki was created

Kubernetes has become the de facto standard for container orchestration, and Prometheus has become the standard for monitoring on the Kubernetes platform. Prometheus collects business metrics, Grafana displays them, and AlertManager sends alerts, so a one-stop monitoring stack was born. With this stack, the running state of a service can be monitored online; when something is abnormal, the relevant people are notified through various channels, look at the alert information, and analyze the root cause of the problem through the logs.

How do I view the log?

We can query logs inside the Pod, but if the Pod's process has crashed we can no longer enter the container. For log files mounted on the host, we have to find the node where the crashed Pod was scheduled and then query the logs on that host by hand. If multiple replicas of a service run on the same node, their logs may be interleaved, and the crash still isn't resolved by the time we've finished digging. The real root of this pain is Kubernetes' powerful horizontal autoscaling: we cannot accurately predict the number of service replicas or which nodes they land on. Most companies therefore build a log collection and query platform on ELK, but such a platform is resource-hungry and forces frequent switching between Kibana and Grafana, hurting efficiency. Loki was created to solve this problem.

With Loki, monitoring, alerting, and log analysis live in one stack, and we no longer have to switch between systems.

Design ideas of the Loki architecture

A complete Loki-based log collection stack consists of three parts:

Promtail: the log collection client. It runs on each compute node as a DaemonSet, and can also run inside a Pod in sidecar mode. Promtail itself can be replaced with fluent-bit or fluentd.

Loki: the log collection server, which receives the logs pushed by Promtail.

Grafana: log display
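As a sketch, the three parts above can be wired together with docker-compose for local experimentation. This is an illustrative assumption, not a deployment from the article; image tags, ports, and config file paths would need adjusting to your environment:

```yaml
# Hypothetical docker-compose sketch of the Promtail -> Loki -> Grafana stack
version: "3"
services:
  loki:
    image: grafana/loki:1.5.0
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - "3100:3100"          # Loki's HTTP push/query port
  promtail:
    image: grafana/promtail:1.5.0
    command: -config.file=/etc/promtail/config.yml
    volumes:
      - /var/log:/var/log:ro # host logs to collect (assumed path)
  grafana:
    image: grafana/grafana:6.7.0
    ports:
      - "3000:3000"          # Grafana UI
```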

Loki is a highly available, scalable, multi-tenant log collection system inspired by Prometheus, but Loki focuses on logs and receives log data pushed by clients, whereas Prometheus focuses on monitoring metrics and pulls metric data. Compared with other log systems, Loki has the following advantages:

It saves resources and compresses stored logs.

Instead of indexing the full text, it indexes only labels, which feels very natural to anyone who has used Prometheus.

It is ideal for storing and searching Kubernetes Pod logs, because it can index the node the Pod runs on, the container, the namespace, and the Pod's labels.

It is natively supported by Grafana 6.0 and above.

Loki internal components

Distributor

Its main job is to receive logs from clients. After receiving logs, the Distributor first validates their correctness, then splits them into batches and sends them to Ingesters. Each incoming stream maps to one Ingester: when logs arrive, the Distributor computes a hash over the stream's metadata to decide which Ingester the stream should be routed to.

The Distributor communicates with the Ingesters over gRPC, and Distributors are stateless applications that can be scaled out.
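The hash-based routing above can be sketched with a toy consistent-hash ring. This is an illustrative assumption, not Loki's real implementation (Loki's ring uses virtual tokens stored in a key-value store such as Consul or etcd); all names here are hypothetical:

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy consistent-hash ring mapping a log stream (tenant + labels) to one ingester."""

    def __init__(self, ingesters, vnodes=64):
        # Place several virtual tokens per ingester for a more even spread.
        self._ring = []
        for ing in ingesters:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{ing}-{i}"), ing))
        self._ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def route(self, tenant: str, labels: dict) -> str:
        # A stream is identified by its tenant plus its sorted label set.
        stream_key = tenant + "/" + ",".join(
            f"{k}={v}" for k, v in sorted(labels.items()))
        token = self._hash(stream_key)
        # First token clockwise from the stream's hash owns the stream.
        idx = bisect_right(self._ring, (token,)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["ingester-0", "ingester-1", "ingester-2"])
target = ring.route("tenant-a", {"app": "api", "namespace": "prod"})
# The same stream always routes to the same ingester, regardless of label order:
assert target == ring.route("tenant-a", {"namespace": "prod", "app": "api"})
```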

Ingester

Its main job is to receive logs from the Distributor and write them to back-end storage, which can be DynamoDB, S3, Cassandra, the local filesystem (FS), and so on. Note that the Ingester strictly verifies that log lines arrive in ascending timestamp order (that is, each log's timestamp is later than the previous one's).

When the Ingester receives logs that violate this order, the log lines are rejected and an error ("entry out of order") is returned.
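The ordering rule above can be sketched as follows. This is only an illustration of the rule, not Loki's code path (the real Ingester enforces it per stream and returns the error over gRPC):

```python
class OutOfOrderError(Exception):
    """Raised for an entry whose timestamp precedes the stream's newest entry."""

def append_entry(stream, ts, line):
    """Append (ts, line) to a per-stream buffer, rejecting out-of-order entries.

    `stream` is a dict like {"last_ts": None, "entries": []}; `ts` is any
    comparable timestamp (e.g. nanoseconds since the epoch).
    """
    if stream["last_ts"] is not None and ts < stream["last_ts"]:
        raise OutOfOrderError(f"entry out of order: {ts} < {stream['last_ts']}")
    stream["entries"].append((ts, line))
    stream["last_ts"] = ts

stream = {"last_ts": None, "entries": []}
append_entry(stream, 100, "starting up")
append_entry(stream, 200, "ready")
try:
    append_entry(stream, 150, "late line")  # rejected: 150 < 200
except OutOfOrderError as e:
    print(e)
```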

To sum up: the Distributor accepts external data streams, each with its own consistent hash; it routes each stream to the correct Ingester by computing that hash; the Ingester then creates a chunk or appends to an existing one (chunks are unique per tenant and label set), finally completing the data storage.

Chunks and index

Chunks are Loki's long-term data store, designed for both writes and queries; supported backends are DynamoDB, Bigtable, Cassandra, S3, and FS (single-node). The index is generated from the metadata in chunks and supports DynamoDB, Bigtable, Apache Cassandra, and BoltDB (single-node). By default, chunks are stored on the local filesystem (FS), which is limited to roughly 5.5 million chunks; exceeding that limit may cause problems.

The index uses BoltDB by default. BoltDB is a well-known key-value storage engine written in Go, whose users include etcd. If you need a highly available deployment, you have to bring in big-data storage components.

Querier

The Querier is responsible for serving front-end query requests: it first queries the data the Ingesters hold in memory, then falls back to the back-end storage, and it supports parallel queries and data caching.

Loki configuration

Loki has many configuration options, kept in /etc/loki/loki.yaml. If you need to tune storage, or if log ingestion misbehaves, you may need to modify the configuration. For example, when clients send logs faster than the configured rate limit, ingestion_rate_mb may need to be raised.
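For example, raising the ingest rate limit might look like the excerpt below; the values are illustrative assumptions, not recommended defaults:

```yaml
# Excerpt of /etc/loki/loki.yaml (values are illustrative)
limits_config:
  ingestion_rate_mb: 16        # per-tenant ingest rate limit, MB/s
  ingestion_burst_size_mb: 32  # burst allowance above the rate limit
```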

Recommendations for using Loki

While using Loki you may wonder whether you should attach as many labels as possible to speed up queries, since Loki's indexes are generated from labels. With other log systems, adding as many indexes as possible is the usual fix for slow queries. Loki's storage is designed the other way around: use as few labels as possible, because Loki stores data as many blocks and uses the label index only to locate the right blocks. If queries feel slow, you can reconfigure the shard size and interval, or run as many queries in parallel as you can. This trade-off, a smaller index plus parallel brute-force scanning instead of a larger and faster full-text index, is what lets Loki save money compared with other systems: a large index is costly and complex to operate, and once built it is largely fixed, so you pay for it around the clock whether you query or not. The advantage of Loki's design is that you can decide what your query requirements are and change them as needed, while the data is massively compressed and stored in low-cost object storage, minimizing fixed operating costs. It still has impressively fast query capability, and the design fits cloud-native thinking.
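In practice this means a query selects a small label set and then narrows with line filters rather than relying on a full-text index. A hypothetical LogQL example (the label names are assumptions):

```logql
{namespace="prod", app="api"} |= "error" != "timeout"
```

The label matcher `{namespace="prod", app="api"}` uses the index to pick the blocks; the `|=` and `!=` line filters are then applied to the block contents by brute force, in parallel.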

Loki installation

There are roughly four ways to install Loki: Tanka (officially recommended), Helm, Docker, and binary deployment. I run it as a Kubernetes StatefulSet. For details, see:

https://github.com/grafana/loki/blob/v1.5.0/docs/installation/README.md

Promtail

The name immediately brings Prometheus to mind, and indeed the design ideas are the same. Promtail runs as a client agent on each compute node, and can also run inside a Pod in sidecar mode. Its main jobs are collecting logs, labeling log streams, and pushing logs.

Main configuration options

clients: configures the Loki server address.

positions: records how far each log file has been read. In Kubernetes, services run as Pods whose life cycle can end at any time, so the collection position must be recorded and mounted on the host to allow collection to resume next time.

scrape_configs: the log collection configuration. It supports collecting from syslog, journal, Docker, Kubernetes, and plain log files; configure it according to your collection needs.
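A minimal Promtail config sketch tying the three options together; the Loki URL, the positions path, and the log path are assumptions for illustration:

```yaml
# Sketch of a promtail config file
clients:
  - url: http://loki:3100/loki/api/v1/push  # Loki push endpoint (assumed address)

positions:
  filename: /run/promtail/positions.yaml    # read offsets survive restarts

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs                      # label attached to the stream
          __path__: /var/log/*.log          # files to tail (assumed path)
```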

Installation and deployment

It is recommended to run Promtail as a DaemonSet. For more information, see the official YAML orchestration example:

https://github.com/grafana/loki/blob/v1.5.0/docs/clients/promtail/installation.md

I won't repeat it.

Grafana configuration

The Grafana version should be 6.0 or above.

Log in to the Grafana instance with an admin account, then in the left menu click Configuration > Data Sources, click the "Add data source" button, and enter the Loki service address, for example http://localhost:3100 or the Loki Service address https://loki:3100. Then click Explore on the left; a "Log labels" dropdown appears, and you can pick labels to search.
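Instead of clicking through the UI, the same data source can be declared with Grafana's file-based provisioning. The file path and URL below are assumptions for illustration:

```yaml
# e.g. /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy            # Grafana backend proxies the requests
    url: http://loki:3100    # in-cluster Loki Service address (assumed)
```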

At this point you should have a deeper understanding of how to configure Loki. Try it out in practice.



