This article mainly explains how to configure Log logging. The content is simple, clear, and easy to learn; follow along to study how to configure logging with Slf4j, Log4j, Logback, and ELK.
1.Slf4j
The full name of Slf4j is Simple Logging Facade for Java. It is only a unified interface that provides log output for Java programs, not a concrete logging implementation; like JDBC, it is just a specification.
Therefore, Slf4j alone does not work; it must be paired with a concrete logging implementation, such as:
Apache's org.apache.log4j.Logger.
The JDK's built-in java.util.logging.Logger, and so on.
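For illustration, a minimal Maven setup pairs the slf4j-api facade with one concrete binding, here logback-classic (the version numbers below are placeholders; use whatever matches your project):

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.36</version>
</dependency>
<!-- one concrete binding, for example Logback -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.11</version>
</dependency>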
Simple syntax
SLF4J is not as widely used as Log4J; many developers are familiar with Log4J but do not know SLF4J, or pay no attention to it and stick with Log4J.
Let's first take a look at the Log4J example:
logger.debug("Hello " + name);
Because of the string concatenation, this statement first builds the full string and only then decides, based on the current log level, whether the message should be output. Even if the message is never output, the concatenation has already been performed.
For this reason, many companies mandate the following form, so that the concatenation is performed only when the DEBUG level is enabled:
if (logger.isDebugEnabled()) {
    logger.debug("Hello " + name);
}
This avoids the concatenation problem, but it is rather cumbersome, isn't it? By contrast, SLF4J provides the following concise syntax:
logger.debug("Hello {}", name);
Its form is close to the first example, yet it has neither the concatenation problem nor the verbosity of the second.
Log levels
Slf4j has four log levels to choose from; listed from top to bottom below, they run from low to high, and messages at or above the configured level are printed:
Debug: to put it simply, any information that helps with debugging the program can be output at debug.
Info: information that is useful to the user.
Warn: information about something that may lead to an error.
Error: as the name implies, the place where an error occurred.
Use
Because using LoggerFactory is the mandated convention, create the logger directly with it:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {
    private static final Logger logger = LoggerFactory.getLogger(Test.class);
    // ...
}
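Building on that declaration, here is a minimal sketch of logging at each of the four levels (the name variable and its value are made up for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {
    private static final Logger logger = LoggerFactory.getLogger(Test.class);

    public static void main(String[] args) {
        String name = "world"; // hypothetical value
        logger.debug("Hello {}", name); // debugging detail
        logger.info("Hello {}", name);  // useful to the user
        logger.warn("Hello {}", name);  // may lead to an error
        logger.error("Hello {}", name); // an error occurred
    }
}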
Configuration mode
Spring Boot supports Slf4j very well; Slf4j is already integrated internally, so normally we only need to configure it when using it.
The application.yml file is the main file to configure in Spring Boot (when a project is first created it is an application.properties file). Personally, I prefer the yml file, because its clear hierarchy makes it more intuitive to read.
However, the yml format is strict about syntax; for example, a colon must be followed by a space, otherwise the project may fail to start without reporting an error. Use properties or yml according to your own habits.
Let's take a look at the configuration of logs in the application.yml file:
logging:
  config: classpath:logback.xml
  level:
    com.bowen.dao: trace
logging.config specifies which log configuration file to read when the project starts. Here it is classpath:logback.xml, and the detailed log configuration is placed in that logback.xml file.
logging.level specifies the log output level for a particular package. The configuration above sets the output level of all mappers under the com.bowen.dao package to trace, so the SQL issued against the database will be printed.
During development it is set to trace to make it easy to locate problems; in a production environment you can raise this log level to error.
The commonly used log levels are ERROR, WARN, INFO, and DEBUG from highest to lowest.
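For reference, a properties-style equivalent of the yml configuration above (same package name assumed) would be:

logging.config=classpath:logback.xml
logging.level.com.bowen.dao=trace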
2.Log4j
Log4j is an Apache open source project. With Log4j, we can direct log output to the console, files, GUI components, and even socket servers, NT event loggers, or UNIX Syslog daemons.
We can also control the output format of each log entry, and by assigning a level to every message we can control the logging process in finer detail.
Component architecture
Log4j consists of three important components:
Logger: controls which logging statements are enabled or disabled, and places level restrictions on log information.
Appenders: specifies whether the log will be printed to the console or to a file.
Layout: controls the display format of log information.
Log4j has five levels of log output: DEBUG, INFO, WARN, ERROR, and FATAL.
When logging, only messages whose level is at or above the level specified in the configuration are actually output, so it is easy to adjust what gets logged in different situations without changing the code.
Log levels
The main Log4j log levels are as follows:
Off: turns logging off; the highest level, no logs are output.
Fatal: catastrophic errors; the highest of all levels that can output logs.
Error: errors; generally used for exception information.
Warn: warnings; generally used for improper usage and similar information.
Info: general information.
Debug: debugging information; generally used during program execution.
Trace: stack information; rarely used.
All: turns all logging on; the lowest level, all logs are output.
In the Logger core class, apart from off and all, each log level corresponds to a set of overloaded methods for logging at that level. A message is recorded if and only if the level of the method called is greater than or equal to the configured log level.
Use
To use Log4j, you only need to import a jar package:
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.9</version>
</dependency>
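As a sketch of the overloaded level methods described above (the class name is illustrative):

import org.apache.log4j.Logger;

public class Log4jDemo {
    private static final Logger logger = Logger.getLogger(Log4jDemo.class);

    public static void main(String[] args) {
        // With rootLogger set to debug, all of the following are printed;
        // raise the configured level to warn and only the last three appear.
        logger.debug("debugging detail");
        logger.info("general information");
        logger.warn("warning");
        logger.error("an error", new RuntimeException("sample exception"));
        logger.fatal("catastrophic error");
    }
}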
Configuration mode
Create a log4j.properties configuration file in the resources root directory; note that the location and name of this file must be exactly right. Then add the configuration information to the properties file:
log4j.rootLogger=debug,cons
log4j.appender.cons=org.apache.log4j.ConsoleAppender
log4j.appender.cons.target=System.out
log4j.appender.cons.layout=org.apache.log4j.PatternLayout
log4j.appender.cons.layout.ConversionPattern=%m%n
The properties file is the most common configuration format; in actual development you will basically always use properties files.
The general structure of a properties configuration file is as follows:
# Configure the log level and name the enabled appenders;
# AppenderA, AppenderB, ... are the names of appenders defined below.
log4j.rootLogger=<log level>,AppenderA,AppenderB,...

# ---- define an appender ----
# The appender name can be arbitrary, but for the appender to take effect
# it must be listed in the rootLogger line above.
# The value is the corresponding Appender class.
log4j.appender.<appender name>=org.apache.log4j.ConsoleAppender
log4j.appender.<appender name>.Target=System.out

# Define the layout of the appender.
log4j.appender.<appender name>.layout=org.apache.log4j.SimpleLayout
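As a concrete, hedged variant of this template, a configuration that also writes to a daily rolling file might look like the following (the file path and appender names are assumptions):

log4j.rootLogger=debug,cons,file

log4j.appender.cons=org.apache.log4j.ConsoleAppender
log4j.appender.cons.layout=org.apache.log4j.PatternLayout
log4j.appender.cons.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%p] %c - %m%n

# roll the file over once per day
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=logs/app.log
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} [%p] %c - %m%n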
3.Logback
In a nutshell, Logback is a logging framework for the Java world and is regarded as the successor to Log4J. Logback is an upgrade of Log4j, so it naturally has many advantages over Log4j.
Module composition
Logback consists of three main modules:
Logback-core
Logback-classic
Logback-access
Logback-core is the infrastructure for the other modules, which are built on top of it; it provides some key general-purpose mechanisms.
Logback-classic has the same status and role as Log4J and is considered an improved version of it; it also natively implements the simple logging facade SLF4J.
Logback-access mainly serves as the module that integrates with Servlet containers such as Tomcat or Jetty, providing functionality related to HTTP access.
Logback components
The main components of Logback are as follows:
Logger: the log recorder; it is associated with the corresponding context of the application, is mainly used to store log objects, and supports custom log type levels.
Appender: specifies the destination of log output; the destination can be the console, a file, a database, and so on.
Layout: responsible for converting events into strings and formatting the log output; in logback, Layout objects are wrapped inside an encoder.
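A minimal, hedged logback.xml showing how the three components line up (the pattern and level here are arbitrary examples):

<configuration>
    <!-- Appender: the destination is the console -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <!-- Layout, wrapped in an encoder in logback -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- Logger: the root logger at INFO level -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>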
Advantages of Logback
The main advantages of Logback are as follows:
Logback executes faster with the same code path.
More thorough testing.
The SLF4J API is implemented natively (Log4J needs an intermediate adaptation layer).
Richer documentation.
XML or Groovy configuration is supported.
Configuration files are automatically hot loaded.
Graceful recovery from IO errors.
Automatically delete log archives.
Automatically compress the log into an archive file.
Prudent mode is supported to enable multiple JVM processes to record the same log file.
Conditional processing can be added to the configuration file to adapt to different environments.
A more powerful filter.
Support for SiftingAppender (an Appender that can filter and separate logs).
Exception stack traces include package information.
Tag attributes
The main tag attributes of Logback are as follows:
configuration: the root node of the configuration.
scan: when true, the configuration file is re-scanned and reloaded if its properties change. The default is true.
scanPeriod: the interval at which the configuration file is checked for modifications. If no time unit is given, the unit defaults to milliseconds; the default interval is 1 minute. It takes effect only when scan="true".
debug: when true, logback's internal status messages are printed so you can observe logback's behavior in real time. The default is false.
contextName: the context name, which defaults to "default". This tag can be set to another name to distinguish the records of different applications; once set, it cannot be changed.
appender: a child node of configuration and the component responsible for writing logs; it has two required attributes, name and class.
name: the name of the appender.
class: the fully qualified class name of the appender, i.e. the concrete Appender class, such as ConsoleAppender or FileAppender.
append: when true, logs are appended to the end of the file; if false, the existing file is truncated. The default is true.
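A brief sketch of how these attributes sit on the root node (the context name is an arbitrary example):

<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <contextName>myAppName</contextName>
    <!-- appenders and loggers go here -->
</configuration>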
Configuration mode
By default, the logback framework loads a configuration file named logback-spring.xml or logback.xml from the classpath. A typical configuration along the lines used here, with console output plus separate rolling files for info and error logs, looks like this:
<configuration>
    <!-- LOG_INFO_HOME and LOG_ERROR_HOME are expected to be defined as properties -->

    <!-- console output -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
    </appender>

    <!-- info file: ERROR events are denied, everything else is accepted -->
    <appender name="FILE_INFO" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>ACCEPT</onMismatch>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_INFO_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- error file: only ERROR and above -->
    <appender name="FILE_ERROR" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <encoder>
            <pattern>[%d{yyyy-MM-dd' 'HH:mm:ss.sss}] [%C] [%t] [%L] [%-5p] %m%n</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_ERROR_HOME}//%d.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <root level="info">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE_INFO"/>
        <appender-ref ref="FILE_ERROR"/>
    </root>
</configuration>
4.ELK
ELK is the abbreviation of the software collection Elasticsearch, Logstash, and Kibana. With these three pieces of software and their related components, a large-scale real-time log processing system can be built.
A newer addition is Filebeat, a lightweight log collection and processing tool (agent). Filebeat consumes few resources and is well suited to collecting logs on servers and shipping them to Logstash; it is also the officially recommended tool.
Architecture
Elasticsearch: a Lucene-based distributed storage and indexing engine that supports full-text search. It is mainly responsible for indexing and storing logs so that the business side can retrieve and query them.
Logstash: a middleware for log collection, filtering, and forwarding. It is mainly responsible for collecting and filtering the various logs from all business lines and forwarding them to Elasticsearch for further processing.
Kibana: a visualization tool, mainly responsible for querying Elasticsearch data and presenting it to the business side in visual forms such as pie charts, histograms, and area charts.
Filebeat: part of the Beats family; a lightweight log collection and processing tool.
Beats currently includes four tools: Packetbeat (collects network traffic data), Topbeat (collects CPU and memory usage at the system, process, and file-system level), Filebeat (collects file data), and Winlogbeat (collects Windows event log data).
Main features
A complete centralized logging system needs to include the following main features:
Collection: the ability to collect log data from multiple sources.
Transmission: stable transmission of log data to the central system.
Storage: how log data is stored.
Analysis: support for UI-based analysis.
Alerting: the ability to provide error reporting and monitoring mechanisms.
ELK provides a complete set of solutions; all of its components are open source software that work together and connect seamlessly, efficiently serving many scenarios. It is currently a mainstream logging system.
Application scenarios
In the operation and maintenance of a massive log system, the following aspects are essential:
Centralized query and management of distributed log data.
System monitoring, including the monitoring of system hardware and application components.
Troubleshooting.
Security information and event management.
Report function.
ELK runs on distributed systems. Through the collection, filtering, transmission, storage, centralized management, and near-real-time search and analysis of massive system and component logs, it offers simple, easy-to-use functions such as search, monitoring, event messages, and reports.
This helps operations staff monitor online business in near real time, locate the causes of business anomalies promptly, troubleshoot, track and analyze bugs during development, analyze business trends, audit security and compliance, and mine the big-data value buried deep in logs.
At the same time, Elasticsearch provides a variety of APIs (REST, Java, Python, and more) that users can build on to meet their different needs.
Configuration mode
To configure Filebeat, open filebeat.yml and edit it as follows:
# input sources; more than one can be defined
filebeat.inputs:
  - type: log
    enabled: true
    # paths of the source files
    paths:
      - /data/logs/tomcat/*.log
    # multi-line regex matching: lines that do not match the pattern
    # (e.g. continuation lines after a date such as 2020-09-29) are merged
    # into the previous event
    multiline:
      pattern: '\s*\['
      negate: true
      match: after
    # attach a tag
    tags: ["tomcat"]

# output target; logstash can be changed to es
output.logstash:
  hosts: ["172.29.12.35:5044"]
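Assuming Filebeat is installed in the current directory, it can then be started in the foreground with something like:

./filebeat -e -c filebeat.yml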
To configure Logstash, create a file with the .conf suffix, or open the .conf file under the config folder; multiple configuration files can be started at the same time. Logstash also has a powerful filter feature that can filter the raw data. For example:
# input sources (required)
input {
  # console input
  stdin {}
  # read from a file
  file {
    # a label, similar to a given name
    type => "info"
    # file path; * can be used as a wildcard
    path => ['/usr/local/logstash-7.9.1/config/data-my.log']
    # read from the beginning the first time, then continue from the last position
    start_position => "beginning"
  }
  file {
    type => "error"
    path => ['/usr/local/logstash-7.9.1/config/data-my2.log']
    start_position => "beginning"
    codec => multiline {
      pattern => "\s*\["
      negate => true
      what => "previous"
    }
  }
  # receive events from Filebeat
  beats {
    port => 5044
  }
}
# output targets (required)
output {
  # check whether the type matches
  if [type] == "error" {
    # if so, write to this Elasticsearch; Kibana queries by index name,
    # and YYYY.MM.dd picks up the current date dynamically
    elasticsearch {
      hosts => "172.29.12.35:9200"
      index => "log-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "info" {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      # Kibana queries by index name
      index => "log-info-%{+YYYY.MM.dd}"
    }
  }
  # check whether the tags set in Filebeat contain tomcat
  if "tomcat" in [tags] {
    elasticsearch {
      hosts => "172.29.12.35:9200"
      index => "tomcat"
    }
  }
  # also print the message to the console
  stdout {
    codec => rubydebug {}
  }
}
Thank you for reading. The above is the content of "how to configure Log logging". After studying this article, I believe you have a deeper understanding of how to configure logging; the specific usage still needs to be verified in practice.