What are the top ten problems affecting the performance of Java EE?


Many newcomers are not clear about the ten major problems that affect Java EE performance. To help, this article explains each of them in detail; readers who need this can work through it, and I hope you find it worthwhile.

Here are 10 common problems that affect Java EE performance.

1. Lack of correct capacity planning

Capacity planning is a comprehensive, evolving process for forecasting the current and future capacity requirements of an IT environment. Sound capacity planning not only ensures and tracks the capacity and stability of the current production environment, but also ensures that new projects can be deployed into it with minimal risk. Hardware, middleware, JVM settings, tuning, and so on should all be in place before a project is deployed.

2. Insufficient specification of the Java EE middleware environment

Without standards, nothing holds together. The second common cause is that the Java EE middleware or infrastructure is not standardized: no sensible specifications are established for the new platform at the start of the project, which leads to poor system stability and increases the customer's costs. It is worth taking the time to develop a sound Java EE middleware environment specification, and this work should be combined with the initial capacity-planning iterations.

3. Excessive Java virtual machine garbage collection

Are you familiar with the error message "java.lang.OutOfMemoryError"? It is thrown when one of the JVM's memory spaces (Java heap, native heap, and so on) is exhausted.
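As a contrived illustration (not from the original article) of how the Java heap flavour of this error arises, the snippet below simply keeps allocating and retaining memory until the heap fills; run with a small heap (for example -Xmx64m) it ends with java.lang.OutOfMemoryError: Java heap space.

import java.util.ArrayList;
import java.util.List;

// Contrived sketch: retain strong references to allocations until the heap is exhausted.
public class HeapExhaustion {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            retained.add(new byte[1024 * 1024]); // 1 MB per iteration, never released
        }
    }
}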

Garbage collection problems do not necessarily show up as an OOM condition. Excessive garbage collection means the JVM GC threads run too often or for too long within a short period, causing long JVM pauses and degraded performance. There may be several causes:

The Java heap may be sized too small for the JVM load and the application's memory footprint.

An inappropriate JVM GC policy is being used.

The application's static or dynamic memory footprint is too large to fit on a 32-bit JVM.

The JVM OldGen is leaking; the leak grows more serious over time, and the excessive GC it causes only becomes visible hours or days later.

The JVM PermGen space (HotSpot VM only) or the native heap leaks over time, which is a very common problem; the OOM error typically appears only after the application has been running, or being redeployed dynamically, for a while.

The ratio of YoungGen to OldGen space does not suit your application.

On a 32-bit VM, a Java heap that is too large leaves too little room for the native heap, which then overflows; this shows up as an OOM when trying to deploy a new Java EE application, create a new Java thread, or perform a native memory allocation.

Recommendations:

Observe and understand your JVM garbage collection in depth. Enable verbose GC logging; it provides the data needed for a sound health assessment (a small monitoring sketch follows these recommendations).

Keep in mind that GC-related problems are rarely caught in development or functional testing; they need a multi-user, high-load test environment to surface.
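As an illustration of the first recommendation, here is a minimal sketch of mine (not from the original article) that reads collection counts and accumulated pause times through the standard java.lang.management API; in a real system these numbers would be fed into a monitoring tool rather than printed.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: periodically poll the GC counters the JVM already exposes.
public class GcWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: collections=%d, totalPauseMs=%d%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}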

4. Too much or too little integration with external systems

The fourth cause of poor Java EE performance is a highly distributed system; the typical case is a telecom IT environment. In such an environment, a middleware domain (a service bus, for example) rarely does all the work itself; it simply "delegates" parts of the business, such as product data, customer data, and order management, to other Java EE middleware platforms or to legacy systems such as mainframes, each supporting different load types and communication protocols.

Each such external call means the client Java EE application creates or reuses a socket connection to read and write data from the external system. The invocation can be synchronous or asynchronous, depending on the implementation and the requirements of the business process. Note that response times vary with the stability of the external system, so it is important to protect the Java EE application and middleware with appropriately configured timeouts.

Here are three situations where problems and performance degradation often occur:

Calling too many external systems synchronously, one after another.

Connection or read timeouts between the Java EE client application and the external system are missing or set too high, causing client threads to get stuck and triggering a domino effect.

A timeout fires but the program carries on as if the call had succeeded, and the middleware does not handle this unusual path.

More negative testing is recommended: create "artificial" versions of these problem conditions to test how the application and the middleware handle external system failures.
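To illustrate the timeout point above, here is a minimal sketch of my own (the endpoint URL is hypothetical) that calls an external HTTP system with explicit connect and read timeouts via the standard java.net API, so a slow or hung remote service cannot pin the calling thread indefinitely.

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: bound both connection establishment and response reads.
public class ExternalSystemClient {
    public static byte[] fetchOrder(String orderId) throws IOException {
        URL url = new URL("https://legacy.example.com/orders/" + orderId); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2_000);  // fail fast if the socket cannot be established
        conn.setReadTimeout(5_000);     // fail if the response stalls mid-stream
        try (InputStream in = conn.getInputStream()) {
            return in.readAllBytes();   // Java 9+
        } finally {
            conn.disconnect();
        }
    }
}

Choosing the actual timeout values is itself part of capacity planning: set them too high and threads pile up behind a slow dependency; set them too low and healthy calls get cut off.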

5. Lack of proper database SQL tuning and capacity planning

You may be surprised by this one: database problems. Most Java EE enterprise systems rely on relational databases to handle complex business processes. A solid database environment allows the IT environment to scale with the growing business.

In practice, database-related performance problems are common. Because most database transactions go through JDBC data sources (including relational persistence APIs such as Hibernate), these problems usually first show up as blocked threads.

Here are the database problems I have seen most often in 10 years of work (using the Oracle database as the example):

Isolated, long-running SQL. Typical symptoms are blocked threads, unoptimized SQL, missing indexes, non-optimal execution plans, statements returning very large result sets, and so on.

Table or row-level data locking, typically when using a two-phase commit transaction model (for example, the infamous Oracle in-doubt transactions). The Java EE container may leave transactions in an unresolved state, waiting for the final commit or rollback; the data locks they hold can trigger performance problems until they are removed. Middleware outages or server crashes, for example, can leave things in this state.

Lack of sound, standardized database housekeeping, for example for Oracle REDO logs, database data files, and so on. Running out of disk space or failing to rotate log files can trigger major performance problems and outages.

Recommendations:

Proper capacity planning, including load and performance testing, is essential for optimizing the database environment and catching problems early.

If you are using an Oracle database, make sure the DBA team reviews AWR reports regularly, especially during incident and root-cause analysis.

Use JVM thread dumps and AWR reports, or a monitoring tool, to find out why SQL is slow.

Add proper monitoring and alerting for the database "housekeeping" side (disk space, data files, redo logs, tablespaces, and so on). If you do not, the client IT environment will suffer more outages and spend a lot of time troubleshooting.
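As a small illustration of guarding against long-running SQL from the application side, here is a sketch of mine (with a hypothetical table and column) that sets a JDBC query timeout so one slow statement cannot block a thread indefinitely.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Minimal sketch: bound how long a single SQL statement may run.
public class OrderDao {
    private final DataSource dataSource;

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOpenOrders() throws SQLException {
        String sql = "SELECT COUNT(*) FROM orders WHERE status = ?"; // hypothetical table
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setQueryTimeout(5);          // seconds; the driver cancels the statement after this
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}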

6. Specific application performance issues

The focus now shifts to more serious, application-level problems. The most common specific application performance issues can be summarized as follows:

Thread-safety problems in application code (see the sketch after this list)

Communication APIs used without timeout settings

Poor resource management with JDBC or relational persistence APIs

Lack of proper data caching

Excessive data caching

Excessive logging
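One classic example of the thread-safety item above (my own illustration, not from the original article): sharing a java.text.SimpleDateFormat instance across request threads corrupts its internal state under load, whereas java.time.format.DateTimeFormatter is immutable and safe to share.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Thread-safety sketch: SimpleDateFormat is mutable and must not be shared
// across threads; DateTimeFormatter (Java 8+) is immutable and safe to share.
public class DateFormats {
    // Unsafe pattern often seen in servlets/EJBs:
    // private static final java.text.SimpleDateFormat FMT =
    //         new java.text.SimpleDateFormat("yyyy-MM-dd"); // shared mutable state

    // Safe replacement:
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static String today() {
        return LocalDate.now().format(FMT);
    }
}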

7. Java EE middleware tuning problems

Generally speaking, the Java EE middleware itself is adequate but lacks the necessary tuning. Most Java EE containers offer a wide range of tuning options for your applications and business processes.

Without proper tuning and sound practices, the Java EE container can end up working against you.

(Figure from the original article: an example checklist of container settings to review and verify.)
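To give a flavour of the kinds of knobs involved, here is a JDK-only sketch of my own (an analogy, not any container's API): a plain executor with the same kinds of parameters that container request and work-manager thread pools expose in some form, namely core size, maximum size, queue length, and keep-alive time.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Analogy only: the tuning parameters mirror what Java EE containers expose
// for their request/work-manager thread pools.
public class WorkerPoolConfig {
    public static ThreadPoolExecutor buildPool() {
        return new ThreadPoolExecutor(
                16,                                   // core threads kept alive
                64,                                   // hard upper bound under load
                60, TimeUnit.SECONDS,                 // idle threads above core are reclaimed
                new ArrayBlockingQueue<>(500),        // bounded queue: back-pressure instead of unbounded growth
                new ThreadPoolExecutor.CallerRunsPolicy()); // degrade gracefully when saturated
    }
}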

8. Insufficient proactive monitoring

A lack of monitoring does not itself cause performance problems, but it limits your understanding of the performance and health of the Java EE platform. Eventually the environment reaches a breaking point that exposes the underlying flaws (JVM memory leaks and so on).

In my experience, a platform that runs for months or years without monitoring from the start ends up far less stable.

In other words, it is never too late to improve the existing environment. Here are some suggestions:

Review the existing Java EE environment's monitoring capabilities and identify areas for improvement.

The monitoring solution should cover as much of the environment as possible (see the sketch below).

The monitoring solution should align with the capacity-planning process.
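As a tiny illustration of what that coverage can include (my own sketch, not from the original article), the JVM already exposes heap and thread metrics through java.lang.management that any monitoring agent or health-check endpoint can read.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

// Minimal sketch: read heap usage and live thread count for a health check.
public class JvmHealth {
    public static String snapshot() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        return String.format("heapUsed=%dMB heapMax=%dMB liveThreads=%d",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024),
                threads.getThreadCount());
    }
}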

9. Shared infrastructure hardware saturation

This problem typically appears when too many Java EE middleware environments, along with their JVM processes, are deployed onto the same existing hardware. Too many JVM processes competing for a limited number of physical CPU cores is a real performance killer. In addition, as the client's business grows, the hardware needs to be re-evaluated.

10. Network latency

The last problem affecting performance is the network. Network problems occur from time to time, such as router, switch, and DNS server failures. More common, especially in a highly distributed IT environment, are periodic or intermittent delays; a typical example is latency between WebLogic cluster members and the Oracle database server within the same zone.

Intermittent or periodic delays can trigger significant performance issues that affect Java EE applications in different ways.

Applications that query large data sets are hit hardest by network latency because of the large number of fetch iterations (round trips over the network).

Applications that exchange large payloads with external systems (XML data, for example) are also affected, with long intervals when sending requests and receiving responses.

The Java EE container replication process (clustering) is also affected; multicast or unicast packet loss puts failover capabilities at risk.

JDBC row "prefetching" (fetch size), XML data compression, and data caching can all reduce the impact of network latency. Network latency should be examined carefully when designing a new network topology.
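To illustrate the row-prefetching point, here is a sketch of mine (the query is hypothetical): the JDBC fetch size controls how many rows the driver pulls per round trip, so a large result set needs far fewer network round trips.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: raise the JDBC fetch size so a large result set is
// transferred in fewer network round trips (driver defaults are often small).
public class ReportExporter {
    public static long streamRows(Connection con) throws SQLException {
        long rows = 0;
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, payload FROM audit_log")) {           // hypothetical query
            ps.setFetchSize(500);                                  // rows fetched per round trip
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows++;                                        // process each row here
                }
            }
        }
        return rows;
    }
}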

Was the content above helpful to you? If you would like to learn more about this topic or read more related articles, please follow the industry information channel. Thank you for your support.
