How to integrate SLF4J+logback for logging in SpringBoot3

2025-02-23


This article shows how to integrate SLF4J and logback for logging in SpringBoot3. The content is straightforward and clearly organized; I hope it helps resolve your doubts. Let's study "How to integrate SLF4J+logback for logging in SpringBoot3" together.

SpringBoot uses logback as its default logging framework; when generating a SpringBoot project you can check logback directly, so it works out of the box. If you add the dependencies manually, slf4j + logback is recommended, as it makes the project easier to maintain later:

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.21</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>
```

SLF4J is a simple facade for logging systems that allows end users to choose the logging system they want when deploying their applications. In other words, you only need to write logging code in a uniform way, without caring which logging system actually outputs the logs or in what style, because that depends on the logging implementation bound when the project is deployed.

For example, if you use SLF4J to log in the project and bind Log4j (that is, import the corresponding dependencies), the logs will be output in Log4j's style. If you later want logs output in Logback's style, you only need to replace Log4j with Logback; no code in the project has to change.

1 Fast implementation

Suppose we need to implement the following requirement: record interface-call events and their parameters in a file and also display them in the console. This is easy to implement in just three steps.

The first step is to create a logback.xml file in the resources directory with the following content:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="logFile" value="logs/mutest"/>
    <property name="maxFileSize" value="10MB"/>

    <appender name="consoleLog" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d [%thread] %-5level %logger{50} - [%file:%line] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <appender name="fileLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${logFile}.log</file>
        <encoder>
            <pattern>%d [%thread] %-5level - [%file:%line] - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${logFile}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>${maxFileSize}</maxFileSize>
        </rollingPolicy>
    </appender>

    <root level="info">
        <appender-ref ref="consoleLog"/>
        <appender-ref ref="fileLog"/>
    </root>
</configuration>
```

Section 2 will explain the details of this xml.

The second step is to configure, in the application.yml file, the path of the log configuration file the project should use:

```yaml
logging:
  config: classpath:logback.xml
```

The third step is to add logging to the interface:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {

    // The argument to getLogger() is the current class; otherwise the class name in the log output will be wrong
    private final Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public String logTest(String name, String age) {
        logger.info("logTest, name: {}, age: {}", name, age);
        return "success";
    }
}
```

Of course, it's even simpler if you use the Lombok plugin (see the separate article on how to use Lombok):

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class TestController {

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public String logTest(String name, String age) {
        log.info("logTest, name: {}, age: {}", name, age);
        return "success";
    }
}
```

Call the API after starting the project, and the console output is as expected:

At the same time, a log directory appears in the project and a mutest.log file is generated, which records the log:

The function is implemented, but we still have plenty of small question marks in our heads. No hurry; let's go through it in detail.

2 Configuring the xml

First, create a file under the resource directory, named logback.xml. Now write some fixed content into it, which looks like this:
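As a minimal sketch, the fixed skeleton amounts to little more than the root node (the child nodes are filled in through the rest of this section):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- property, appender, logger, and root nodes go here -->
</configuration>
```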

2.1 configuration

The configuration node is the root node of the logback.xml file. It has the following attributes:

scan: when this attribute is set to true, the configuration file is reloaded if it changes. Default: true.

scanPeriod: sets the interval for checking whether the configuration file has been modified. If no time unit is given, the unit defaults to milliseconds. It only takes effect when scan is true. The default interval is 1 minute.

debug: when this attribute is set to true, logback's internal status messages are printed so you can watch logback's running state in real time. Default: false.

For example, the following configuration:
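A sketch of how the three attributes might be combined (the 60-second scanPeriod is an illustrative value):

```xml
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- appenders, loggers, root -->
</configuration>
```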

2.2 property and springProperty

These two nodes can set global variables.

property assigns the value directly, for example:
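A sketch, assuming the value logs/mutest used throughout this article:

```xml
<property name="logFile" value="logs/mutest"/>
```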

This defines a variable named logFile; later, ${logFile} references its value logs/mutest.

springProperty, on the other hand, works together with the Spring configuration file, for example:

It also defines a variable named logFile, but instead of assigning a value directly, it points through source to a key in the configuration file, which looks like this:
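A sketch of the springProperty declaration, assuming the key log.file from the application.yml fragment shown next:

```xml
<springProperty scope="context" name="logFile" source="log.file"/>
```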

```yaml
log:
  file: logs/mutest
```

2.3 root

The root node is a required node that specifies the base log output level; it can be understood as the root logger.

A typical root node is as follows:
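A sketch, assuming appenders named consoleLog and fileLog (fileLog is the appender name referenced later in this article):

```xml
<root level="info">
    <appender-ref ref="consoleLog"/>
    <appender-ref ref="fileLog"/>
</root>
```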

2.4 appender

The appender node is a key node responsible for formatting a log output destination (that is, it describes the log storage type, location, rolling rules, and so on). My personal understanding: an appender is like a log template, while the logger is the actual log writer, using an appender as its template.

There are three types of appender, namely ConsoleAppender (console log), FileAppender (file log), and RollingFileAppender (scrolling file log).

2.4.1 ConsoleAppender

ConsoleAppender is used to output logs to the console, which is generally used when debugging locally. Its configuration is very simple. A typical ConsoleAppender is as follows:

```xml
<appender name="consoleLog" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%d [%thread] %-5level %logger{50} - [%file:%line] - %msg%n</pattern>
        <charset>UTF-8</charset>
    </encoder>
</appender>
```

Appender has two attributes, name and class:

name: the name of the appender node, referenced later by logger nodes. Appender names must be unique within a logback configuration file.

class: which output strategy to use: ConsoleAppender (console log), FileAppender (file log), or RollingFileAppender (rolling file log).

2.4.2 FileAppender

FileAppender is used to append logs to a file. A typical FileAppender is as follows:

```xml
<appender name="fileLog" class="ch.qos.logback.core.FileAppender">
    <file>testFile.log</file>
    <append>true</append>
    <encoder>
        <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
</appender>
```

Compared to ConsoleAppender, it has a few more child nodes, so let's look at them one by one:

file: the name of the file to write to, as a relative or absolute path. Parent directories that do not exist are created automatically. There is no default value.

append: if true, logs are appended to the end of the file; if false, the existing file is truncated. Default: true.

encoder: formats the logged events (the specific parameters are explained later).

prudent: if true, logs are written to the file safely even while other FileAppenders are also writing to it, which is inefficient. Default: false.

pattern: the output format of the log.

pattern defines the output format of the log. Let's take %d [%thread] %-5level - [%file:%line] - %msg%n as an example and break it down:

%d: the date

%thread: the thread name

%-5level: the level, left-aligned and padded to 5 characters wide

%logger{50}: the logger name, at most 50 characters long

%msg: the log message

%n: a newline

2.4.3 RollingFileAppender

RollingFileAppender is used to roll files: it first logs to a specified file and then, once certain conditions are met, switches to logging to another file. A typical RollingFileAppender node is as follows:

```xml
<appender name="fileLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${logFile}.log</file>
    <encoder>
        <pattern>%d [%thread] %-5level - [%file:%line] - %msg%n</pattern>
        <charset>UTF-8</charset>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>${logFile}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>${maxFileSize}</maxFileSize>
        <maxHistory>30</maxHistory>
        <totalSizeCap>1GB</totalSizeCap>
    </rollingPolicy>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>error</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
</appender>
```

In addition, there are some common child nodes under the RollingFileAppender node:

rollingPolicy: determines the behavior of the RollingFileAppender when rollover occurs, which involves moving and renaming files.

filter: a log output interceptor; you can write a custom interceptor or use one of the interceptors defined by the system.

The class attribute of rollingPolicy selects a concrete rolling policy:

SizeAndTimeBasedRollingPolicy: uses both log file size and the time period as splitting conditions; if either is satisfied, the file is rolled. The value of maxFileSize sets the upper limit on the size of a day's log file. Beyond this limit, multiple log files exist for the same day, which is why there is a %i in ${logFile}.%d{yyyy-MM-dd}.%i: it handles the case of multiple log files on the same day. When the log volume is very large, files like mutest.2020-07-28.0.log and mutest.2020-07-28.1.log will appear.

TimeBasedRollingPolicy: uses only the time period as the splitting condition; under this policy, the archive log name pattern is set like ${logFile}.%d{yyyy-MM-dd}.log.

SizeBasedTriggeringPolicy: triggers log rollover based solely on file size.

fileNamePattern: a required node. Taking ${logFile}.%d{yyyy-MM-dd}.%i.log (producing names like mutest.2019-07-28.0.log) as an example, it has several parts:

${logFile}: the fixed file-name prefix, here referencing the variable defined by property.

%d{yyyy-MM}: the format of the date in the log name. A bare %d defaults to the yyyy-MM-dd format.

%i: when the log volume is large enough that two or more log files are generated on the same day, this adds an index suffix to the log name to distinguish them.

.log.zip: ending the pattern with a compression suffix makes logback compress the archived log files in that format.

Depending on the rolling policy, several child nodes can also be added:

maxFileSize: the size limit of the active file; required under the SizeAndTimeBasedRollingPolicy and SizeBasedTriggeringPolicy policies. The default value is 10MB. When the active file exceeds this size, a new active file is started.

maxHistory: an optional node controlling the maximum number of archive files to retain; older files are deleted when the number is exceeded. For example, with monthly rollover and maxHistory set to 6, only the files for the last 6 months are kept and earlier files are deleted. Note that directories created for archiving are deleted along with their old files.

totalSizeCap: an optional node; when the total size of the log files exceeds its value (for example 1GB), the oldest archive log files are deleted.
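Putting these child nodes together, a rolling-policy stanza might look like the following sketch (the values are illustrative; the .zip suffix enables compression as described above):

```xml
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>${logFile}.%d{yyyy-MM-dd}.%i.log.zip</fileNamePattern>
    <maxFileSize>10MB</maxFileSize>
    <maxHistory>30</maxHistory>
    <totalSizeCap>1GB</totalSizeCap>
</rollingPolicy>
```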

2.5 logger

The logger node is an optional node used to set the log output level of a specific package or class, as well as which appender to use (the appender can be understood as a log template).

A typical logger node is as follows:
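A sketch, assuming the package com.mutest.demo and the appender name fileLog used in the explanation below:

```xml
<logger name="com.mutest.demo" level="info" additivity="false">
    <appender-ref ref="fileLog"/>
</logger>
```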

name: a required attribute that specifies a package or class; log output in that package or class follows this logger's configuration.

level: an optional attribute that specifies the log output level; it overrides the level configured on root.

additivity: an optional attribute controlling whether log events are also passed up to the parent logger. Default: true.

appender-ref: the referenced appender, whose configured behavior is applied. For example, if a logger references the fileLog appender, logs printed in com.mutest.demo are recorded according to fileLog's configuration. A logger can hold multiple references, which do not affect each other.

3 More scenarios

3.1 Log level

There are five levels of logback, which are TRACE < DEBUG < INFO < WARN < ERROR, defined in the ch.qos.logback.classic.Level class.

trace: tracing; each step the program takes can emit a trace message, so there would be a great many of them. This level is generally not enabled.

debug: fine-grained informational events that are very helpful for debugging an application.

info: coarse-grained messages that highlight the progress of the application.

warn: warning messages; warn and higher-level logs are output.

error: error message logs.

In addition, OFF means to disable all logs, and ALL means to enable all logs.

So, how do you set the logging level in logback?

First, you can set the log level on the root node; if it is not set, the default root logger level is DEBUG.

Second, you can set the log level on a logger node; once set, it overrides the level inherited from root, and if it is not set, the logger inherits the root level.

In addition, you can set a more specific log level in the Spring configuration file, for example setting the output level of everything under the com.mutest.controller package to info. Then even if the logger is set to error level in logback.xml, those logs are still output.

```yaml
logging:
  config: classpath:logback.xml
  level:
    com.mutest.controller: info
```

3.2 Log rolling

If you do not set a rolling policy, logs are appended to a single file forever. The log file grows larger and larger, finding useful information becomes slow, and there is a risk of filling up the disk. Therefore, we need a rolling strategy: when certain conditions are met, a new file is started and the old log file is archived.

3.2.1 time Policy

If the time period is the splitting condition, class should be set to ch.qos.logback.core.rolling.TimeBasedRollingPolicy. A typical example (one log file generated per day, kept for 30 days) is as follows:

```xml
<appender name="fileLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logFile.%d{yyyy-MM-dd}.log</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
</appender>
```

fileNamePattern: a required node containing the file name and a "%d" conversion word. "%d" may contain a date format as specified by java.text.SimpleDateFormat, for example %d{yyyy-MM}. A bare %d defaults to the yyyy-MM-dd format.

maxHistory: the maximum number of periods for which logs are kept. For example, if the pattern rolls daily, setting maxHistory to 30 automatically deletes logs older than 30 days.

totalSizeCap: the maximum total size of saved logs. When this value is exceeded, the oldest log files are deleted automatically. It must be used together with maxHistory: maxHistory is applied first, and then totalSizeCap is checked.

cleanHistoryOnStart: default false. If set to true, archived log files are deleted automatically when the project starts.

3.2.2 File size Policy

If file size is the splitting condition, class should be set to ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy. A typical example (a new active log file is started once the active file exceeds 30MB) is as follows:

```xml
<appender name="fileLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>logFile.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>30MB</maxFileSize>
    </rollingPolicy>
    <encoder>
        <pattern>%-4relative [%thread] %-5level %logger{35} - %msg%n</pattern>
    </encoder>
</appender>
```

It is important to note that %i is mandatory; if multiple archive log files are generated on the same day, %i produces an index suffix to distinguish them, for example mutest.2019-07-28.0.log and mutest.2019-07-28.1.log.

3.2.3 time and file size policy

This uses both log file size and the time period as splitting conditions; if either is satisfied, the file is rolled. Set class to ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy. A typical example is as follows:

```xml
<appender name="fileLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>mutest.log</file>
    <encoder>
        <pattern>%d [%thread] %-5level - [%file:%line] - %msg%n</pattern>
        <charset>UTF-8</charset>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <fileNamePattern>${logFile}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <maxFileSize>${maxFileSize}</maxFileSize>
    </rollingPolicy>
</appender>
```

Similarly, %i must appear in fileNamePattern.

3.3 Log filtering

First of all, recall the log levels from low to high:

TRACE < DEBUG < INFO < WARN < ERROR

Sometimes we need to filter the logs, and logback provides implementations of a variety of filtering rules.

3.3.1 LevelFilter

For example, if the log level is info or above but we do not want to print warn logs, use the following configuration:

```xml
<filter class="ch.qos.logback.classic.filter.LevelFilter">
    <level>warn</level>
    <onMatch>DENY</onMatch>
    <onMismatch>ACCEPT</onMismatch>
</filter>
```

The meaning of several parameters:

ch.qos.logback.classic.filter.LevelFilter: the filter rule; here the match is based on the log level.

level: the log level to match.

onMatch DENY: matching logs are rejected.

onMismatch ACCEPT: non-matching logs are printed.

3.3.2 ThresholdFilter

In addition to ch.qos.logback.classic.filter.LevelFilter, there is another filtering strategy: ThresholdFilter. This threshold filter filters out logs below the specified threshold: when the log level is equal to or higher than the threshold, the filter returns NEUTRAL; when the log level is below the threshold, the log is rejected.

For example, set up to print only logs above the info level:

```xml
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>info</level>
</filter>
```

3.3.3 EvaluatorFilter

EvaluatorFilter is an evaluation filter that evaluates and verifies whether the log meets the specified conditions.

```xml
<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
    <evaluator>
        <expression>return message.contains("success");</expression>
    </evaluator>
    <onMatch>ACCEPT</onMatch>
    <onMismatch>DENY</onMismatch>
</filter>
```

Attribute interpretation:

evaluator: the discriminator. The commonly used (and default) discriminator is JaninoEventEvaluator, which takes any Java boolean expression as the evaluation condition; the expression is compiled dynamically after being read from the configuration file. If the boolean expression returns true, the filter condition is met. evaluator has an expression subtag used to configure the evaluation condition.

onMatch: the action for logs that meet the filter condition, ACCEPT or DENY.

onMismatch: the action for logs that do not meet the filter condition, ACCEPT or DENY.

3.4 combined with profile

A Spring Boot project has two kinds of configuration files: application.properties and application.yml. I personally prefer the yml file.

Configuration of logs in the application.yml file:

```yaml
logging:
  config: logback.xml
  level:
    com.example.demo.dao: trace
```

logging.config specifies which configuration file to read when the project starts; here it points to the logback.xml file under the root path. All log-related configuration lives in that logback.xml file.

logging.level specifies the output level of logs in a specific package. The configuration above sets the output level of all Mapper logs under the com.example.demo.dao package to trace, so the SQL statements executed against the database are printed. During development, setting it to trace makes it easy to locate problems; in a production environment, you can raise this log level to error.

The commonly used log levels are ERROR, WARN, INFO, and DEBUG from highest to lowest.

That is the whole content of "How to integrate SLF4J+logback for logging in SpringBoot3". Thank you for reading! I hope the shared content helps; if you want to learn more, welcome to follow this channel!
