
How to optimize performance in the cloud


Many IT professionals assume there is only one way to optimize the cloud. In reality there are many, including approaches that reduce cost, improve performance, increase reliability, and even improve environmental sustainability.

Because different optimization objectives tend to reinforce one another, it pays to think about optimization broadly so that the cloud strategy delivers its full value. This article walks through the main optimization methods and how they complement each other to make a cloud environment more efficient.

The value and strategy of performance optimization

1. Performance optimization value

Performance is one of the most important indicators of an application system: unless they have no other choice, users will not put up with a slow application or website. Industry data suggests that every extra 0.1 seconds of response time on a core user experience can cost roughly 1% of revenue.

Once a system is running in production, its response time usually degrades as data volume and traffic grow. At peak times it often cannot meet business demand, and services may even be interrupted, causing serious brand and financial damage to the enterprise. That is why performance optimization matters so much.

Performance optimization lets fewer hardware resources support more business, saving hardware cost; within a fixed resource budget, it also improves the system's responsiveness, delivering a better user experience and supporting business growth.

2. Performance optimization strategy

For an application system, a transaction passes through many links on its way from the browser request to the database. If the system responds slowly, each link along the request path must be analyzed to find where the bottleneck might lie and to locate the problem.

The usual way to track down a bottleneck is to examine the logs of each stage of request processing, work out where the response time is unreasonable or exceeds expectations, and then check the monitoring data to determine whether the dominant factor is infrastructure resources such as CPU, memory, disk, or network, the architecture design, or slow SQL statements.
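As a minimal illustration of this kind of per-stage timing, the sketch below wraps each major step of a request with a timer so the log shows directly where the time went; the stage names and the plain `System.out` reporting are assumptions for illustration, not any particular framework's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch: time each stage of request handling and log the breakdown,
// so slow stages (cache, database, external API, ...) stand out immediately.
public class StageTimer {
    private final Map<String, Long> stageMillis = new LinkedHashMap<>();

    public <T> T time(String stage, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            stageMillis.put(stage, (System.nanoTime() - start) / 1_000_000);
        }
    }

    public void report(String requestId) {
        // In a real system this would go to the application log or an APM agent.
        System.out.println("request=" + requestId + " timings(ms)=" + stageMillis);
    }
}
```

A request handler would call something like `timer.time("db-query", () -> dao.load(id))` for each stage (where `dao.load` is a hypothetical data-access call) and emit one summary line per request, which is usually enough to tell whether the time is going to the application itself, the database, or a downstream call.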

Once the specific cause of the performance problem has been located, targeted optimization can begin.

Cloud performance optimization system

1. Performance optimization system

Performance optimization, in short, means making the system run faster and complete a given function in less time without affecting its correctness.

Performance optimization has many dimensions and can be carried out at five layers: the resource layer, the architecture layer, the application layer, the database layer, and the middleware layer. The sections below look at each layer in turn.

2. Resource layer optimization

Optimization at the cloud resource layer consists of scaling cloud resources horizontally and vertically, and those decisions can be driven by the quantitative metrics provided by cloud monitoring.

Cloud Monitoring tracks the dynamic metrics of cloud resources in real time and is the general entry point for monitoring and managing all cloud products. It provides the most complete and detailed monitoring data, covering products such as CVM instances, cloud databases, and load balancers, and presents their key metrics as monitoring charts. Through Cloud Monitoring you can get a comprehensive picture of resource utilization, application performance, and the health of your cloud products.

Horizontal scaling adds more instances, such as CVM or cloud database instances, while vertical scaling upgrades the specification of existing resources, for example CPU, memory, disk, or bandwidth. Both relieve resource bottlenecks and improve the system's access performance.

3. Architecture layer optimization

Performance problems can also be caused by an unreasonable system architecture, for example a design that omits read-write separation, database sharding, separation of static and dynamic content, CDN acceleration, caching, or elastic scaling.

Read-write separation and database sharding address database access performance. Read-write separation is easy to achieve in the cloud: after creating a read-only instance, configure a read-write splitting address in the application, and write requests are automatically forwarded to the primary instance while read requests are distributed across the read-only instances.
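When the application routes the traffic itself rather than relying on a managed splitting address, one common approach is Spring's `AbstractRoutingDataSource`; the sketch below is only an illustration under that assumption, with the lookup keys and the thread-local flag chosen arbitrarily.

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Sketch: route read-only work to a replica and writes to the primary.
// The thread-local flag would typically be set by an AOP aspect or around
// transactions marked as read-only.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {
    private static final ThreadLocal<Boolean> READ_ONLY = ThreadLocal.withInitial(() -> false);

    public static void markReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // Key returned here selects which underlying DataSource is used.
        return READ_ONLY.get() ? "replica" : "primary";
    }
}
```

The actual connection pools for the `primary` and `replica` keys are registered via `setTargetDataSources(...)` and `setDefaultTargetDataSource(...)`; with a managed read-write splitting endpoint, the application instead points a single data source at that one address and the cloud service does the routing.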

Separation of static and dynamic content, CDN acceleration, and caching address the fast reading of static files and hot data, such as images, videos, popular items, and inventory. Enterprises should rely on mature cloud-native solutions wherever possible to optimize access performance at the architecture level.
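A cache-aside pattern is the usual shape of the hot-data optimization mentioned above. The sketch below is a deliberately simple in-process version, with the 60-second TTL and the `loadFromDatabase` placeholder as assumptions; a production system would normally use Redis or a local caching library instead of a hand-rolled map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: read from the cache first, fall back to the database on a miss,
// and expire entries after a short TTL so hot data stays reasonably fresh.
public class HotItemCache {
    private record Entry(String value, long expiresAtMillis) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis = 60_000; // hypothetical 60-second TTL

    public String get(String key) {
        Entry e = cache.get(key);
        if (e != null && e.expiresAtMillis() > System.currentTimeMillis()) {
            return e.value();                       // cache hit
        }
        String fresh = loadFromDatabase(key);       // cache miss: go to the source
        cache.put(key, new Entry(fresh, System.currentTimeMillis() + ttlMillis));
        return fresh;
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key;                  // placeholder for the real query
    }
}
```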

Auto scaling addresses automatic expansion of the application servers. With scaling rules and policies configured in advance, CVM instances are added automatically when business demand rises, guaranteeing computing capacity while avoiding access latency and resource overload.
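At its core a scaling rule is just a threshold or target check on a monitored metric. The sketch below is a simplified, hypothetical rule; the 70% CPU target and the instance bounds are assumptions, not any specific cloud provider's policy format.

```java
// Hypothetical scaling rule: keep average CPU utilization near a target by
// adjusting the desired instance count within fixed bounds.
public class ScalingRule {
    private static final double TARGET_CPU = 0.70;  // assumed target utilization
    private static final int MIN_INSTANCES = 2;
    private static final int MAX_INSTANCES = 20;

    public int desiredInstances(int currentInstances, double avgCpuUtilization) {
        // Proportional rule: size the fleet so utilization lands near the target.
        int desired = (int) Math.ceil(currentInstances * avgCpuUtilization / TARGET_CPU);
        return Math.max(MIN_INSTANCES, Math.min(MAX_INSTANCES, desired));
    }
}
```

For example, with 4 instances at 90% average CPU the rule asks for ceil(4 × 0.9 / 0.7) = 6 instances; real auto scaling services layer cooldown periods and step policies on top of this basic idea.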

4. Application layer optimization

The key to application layer optimization is being able to quickly diagnose the application's bottlenecks.

The rapid growth of Internet business brings ever-increasing traffic and ever more complex business logic, and traditional single-machine applications can no longer keep up. More and more websites and systems are therefore adopting distributed deployment architectures.

At the same time, as foundational development frameworks such as Spring Cloud and Dubbo have matured, more and more enterprises have begun splitting their application architecture vertically by business module, forming microservice architectures that better suit team collaboration and rapid iteration.

A distributed microservice architecture improves development efficiency, but it poses major challenges to traditional monitoring, operations, and diagnostics. The main challenges include:

Problems are difficult to locate:

In a distributed microservice architecture, a business request usually passes through multiple services and nodes before a result is returned. When a request fails, engineers often have to check logs repeatedly across several machines just to narrow down the problem, and even simple issues can involve multiple teams.

Bottlenecks are difficult to find:

When users report that the system is stuttering, it is hard to determine quickly where the bottleneck lies: a network problem between the user and the server, slow responses caused by high server load, or excessive pressure on the database? Even after the slow link has been identified, pinpointing the root cause at the code level is still difficult.

The architecture is difficult to map out:

Once the business logic becomes complex, it is hard to work out from the code alone which downstream services (databases, HTTP APIs, caches) an application depends on and which external calls it makes. Untangling the business logic, governing the architecture, and planning capacity all become harder.

These problems are usually addressed with tools such as performance stress-testing services (for example PTS) and application real-time monitoring services (for example ARMS), which trace links across the front end, the application, and custom business dimensions to reconstruct complete call chains, aggregate request statistics, and analyze link topology and application dependencies. Link tracing makes it possible to quickly analyze and diagnose performance bottlenecks in a distributed application architecture and improves development and diagnostic efficiency in the microservice era.
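The essential mechanism behind link tracing is that every request carries a trace ID which each service logs and forwards downstream. The sketch below is not the API of PTS or ARMS; it is a minimal illustration using SLF4J's MDC, with the header name chosen arbitrarily.

```java
import java.util.UUID;
import org.slf4j.MDC;

// Minimal trace-context sketch: reuse an incoming trace ID or create one,
// keep it in the logging context, and forward it on outgoing calls.
public class TraceContext {
    public static final String HEADER = "X-Trace-Id"; // hypothetical header name

    public static String ensureTraceId(String incomingHeaderValue) {
        String traceId = (incomingHeaderValue != null && !incomingHeaderValue.isEmpty())
                ? incomingHeaderValue
                : UUID.randomUUID().toString();
        MDC.put("traceId", traceId);   // every log line on this thread now carries the ID
        return traceId;                // callers attach it to outgoing HTTP headers
    }

    public static void clear() {
        MDC.remove("traceId");
    }
}
```

When the same trace ID shows up in the logs of every service a request touched, reconstructing the call chain and spotting the slow hop no longer requires guessing across machines; dedicated tracing products automate exactly this correlation plus timing and topology analysis.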

Once the bottleneck has been located, carry out targeted optimization, such as tuning slow SQL statements, fixing code paths that report errors, or correcting calls to failing APIs. After optimization, the system's performance and capacity headroom can be measured again with the stress-testing tool; the results expose the next bottleneck, and the application is optimized iteratively.

5. Database optimization

The main factors that affect database performance are the hardware configuration of the system, the physical layout of the database files, the database instance parameters, the physical design of the database, and the application's SQL statements.

To optimize database performance, first collect the following data (a small SQL inspection sketch follows the list):

System software and hardware environment: including server operating system settings, hardware configuration, network configuration, software environment, startup options, process information, performance counters, disk usage, and so on.

Hardware utilization: CPU, memory, disk, and network usage data.

Database instance configuration: instance configuration parameters.

Database configuration: including the recovery model, automatic shrink, file growth settings, and other options.

Database disk usage: including database size, table sizes, record counts, index sizes, and space consumed.

Indexes and fragmentation: including the indexes on each table, their fragmentation, and the index maintenance plan.

SQL statement execution: including execution time, start time, target database, statement text, deadlocks, blocking, and so on.

Application running profile: including the system's peak hours, overnight database maintenance tasks, operations that users report as slow, and other operating characteristics.
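For the SQL execution item in particular, the quickest check on a suspected slow statement is its execution plan. The sketch below assumes a MySQL database reachable over JDBC; the connection details and the query are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch (assuming MySQL): run EXPLAIN on a suspected slow statement to see
// whether it uses an index ("key" column) and how many rows it scans ("rows").
public class ExplainSlowQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/shop";   // hypothetical connection details
        String slowSql = "SELECT * FROM orders WHERE customer_id = 42 AND status = 'PAID'";

        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN " + slowSql)) {
            while (rs.next()) {
                System.out.printf("table=%s type=%s key=%s rows=%s%n",
                        rs.getString("table"), rs.getString("type"),
                        rs.getString("key"), rs.getString("rows"));
            }
        }
    }
}
```

A full table scan (`type=ALL` with no `key`) on a large table is a typical candidate for adding an index; after the change, the same EXPLAIN should show the index in use and a much smaller row estimate.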

The main optimization items for database performance correspond to the factors above: instance configuration, the physical design of the database, index maintenance, and SQL statement tuning.

6. Middleware optimization

In information systems, many performance problems are caused by inconspicuous application middleware. Middleware exists to relieve application programmers of frequent chores that have little to do with business logic, such as managing the connection between the application and the database, setting how many sessions are opened to handle client requests, and configuring session timeouts.

But along with that convenience, middleware can itself become the source of performance problems, and developers and testers often overlook its impact, which shows up as limits on transaction throughput, longer response times, and lower transaction success rates.

The goal of middleware optimization is to shorten the time spent inside the middleware (improving the user experience) and to raise the throughput of the whole application server. Before adjusting any parameter, understand its meaning, the principle behind it, and the benefits and risks of changing it; ideally the many parameters should fit together as a coherent whole in your mind.

High priority: the JDBC connection pool size, the thread pool size, and the JVM heap size.

Medium priority: the number of sessions and the garbage collection (GC) policy.

Also worth considering: caches and the data source statement cache size.

Improper configuration can also make the middleware appear to hang. For example, if a certain type of resource (sessions or JDBC connections) is fully occupied by the application and not released for a while, new requests cannot be executed and the middleware looks dead. In such cases, timeout-and-abandon parameters must be configured.
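As one concrete example of the high-priority items and the timeout handling just described, a JDBC connection pool such as HikariCP exposes exactly these sizing and timeout knobs; the specific values below are illustrative assumptions that would have to be tuned against the real workload.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Sketch: a HikariCP pool with explicit size and timeout limits, so requests fail
// fast instead of piling up when the database cannot keep pace (values are
// illustrative, not recommendations).
public class PooledDataSourceFactory {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/shop"); // hypothetical database
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(20);        // cap concurrent connections to the database
        config.setMinimumIdle(5);             // keep a few warm connections ready
        config.setConnectionTimeout(3_000);   // ms to wait for a free connection before failing
        config.setIdleTimeout(600_000);       // ms before an idle connection is retired
        config.setMaxLifetime(1_800_000);     // ms before any connection is recycled
        return new HikariDataSource(config);
    }
}
```

The connection timeout is what guards against the apparent-hang pattern described above: a request that cannot obtain a connection within a few seconds fails quickly and visibly instead of waiting indefinitely while new requests stack up behind it.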

Performance optimization is a complex, systematic effort: first locate the bottleneck, then analyze and optimize comprehensively across cloud resources, system architecture, the application, the database, and the middleware. The ultimate goal of performance optimization is to improve the user experience.

As data volumes, user traffic, and system functionality keep growing and evolving, performance optimization has to be continuous; it is a long campaign. Only by keeping at it can users enjoy a good experience and the business keep growing.
