2025-04-08 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
Many newcomers are not sure what to do when access to a Web server site is slow. To help solve this problem, the following explains it in detail; readers who need it can follow along and hopefully come away with something useful.
A Brief Analysis of Optimization Ideas
To optimize Web server performance, let's first look at the steps a Web server goes through when processing a page:
1. The Web browser sends a page request to a specific server;
2. The Web server receives the request, locates the requested page, and transmits it back to the browser;
3. The browser receives the content and displays it.

All three steps involve the Web server, but performance matters most in step 2, where the server must find the content the browser asked for. Page content is either static or dynamic. Static content the Web server can send straight back to the browser; dynamic content usually has to be processed by an application server first, which then returns the result. Some Web servers can handle dynamic content themselves: IIS, for example, can interpret Microsoft's two dynamic scripting languages, ASP and ASP.NET.

From this brief analysis we can identify the factors that affect page access time:

1. the time the Web server takes to read static page content from disk;
2. the time it takes to determine whether the requested content is static or dynamic;
3. the time it takes to forward a request to the application server;
4. the time the application server takes to process (interpret) the dynamic content;
5. the time the Web server takes to return the response content to the browser;
6. the Web server's performance in accepting requests from the browser;
7. the network transmission time of the request data, in both directions: browser to server, and server back to browser;
8. the time the browser takes to compute and render the content it receives.
The eight items above are straightforward, but the following factors also affect the perceived speed of a page; consider whether each of them costs time or performance. First, the time the Web server spends running security policy checks. Second, access logging: the server opens the log file, writes the log entry, and closes the file again, so these steps also involve disk access performance. Third, the number of clients connected to the Web server at once, that is, the volume of concurrent visits.
That makes eleven factors in total, which can be grouped into the following categories: Web server disk performance; interaction performance between the Web server and the application server; the application server's dynamic-content processing performance; connection speed between client and Web server, that is, network transmission performance; the browser's performance in interpreting and rendering Web content; and Web access concurrency performance.
Reflected in performance optimization, we can therefore start from the following directions:

1. increase bandwidth, on both the server side and the client side of the Internet connection;
2. speed up the processing of dynamic content;
3. use static content as much as possible, so the Web server can send content straight to the browser without involving the application server; related techniques are dynamic-content caching and converting dynamic content to static pages;
4. load-balance across multiple servers to handle large numbers of concurrent visits;
5. improve the server's disk access, that is, disk I/O performance;
6. reduce the number of HTTP requests per page;
7. switch to a Web server with better performance;
8. deploy servers sensibly, placing them closer to the clients, which has been shown to improve access performance significantly.

Performance optimization in practice

With the brief analysis above in mind, you should have some ideas for optimizing a Web server; you can work at three levels: hardware, software, and Web code. Below we practice on a concrete example. The example is a small Web site, and some of the figures are hypothetical (any resemblance is pure coincidence); it is intended only to start the discussion. In real work, if you encounter a large site, you can apply the same analysis and adjust the optimization scheme.

1. Site introduction
A community forum site built with the Discuz! forum program, on the mainstream PHP + MySQL stack.
The site currently has nearly 50,000 registered users, the vast majority of them domestic, and about half of them active. Daily page views average 150,000 to 200,000, with roughly 8,000 unique visiting IPs.
2. Web server performance optimization requirements
The site is currently deployed abroad on a rented virtual (shared) host, and because its traffic is relatively high, it frequently receives notices from the hosting provider demanding that traffic be kept in check.
The shared host's server is in the United States. A domestic host was ruled out because the filing process for domestic websites is very cumbersome. Early in the site's operation, data volume and traffic were both small, so performance requirements were modest: with little data, queries were fast and access did not feel slow. Now, as data and traffic keep growing, access has slowed noticeably, and it is time to improve performance.
Given the community site's current state, the optimization requirement is to double domestic access speed: the home page currently takes about 40 seconds to load, and after optimization it should load within 20 seconds.
In addition, the site owner asks that the website data be backed up automatically once a day, with backups retained for one month so the site can be restored at any time.
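The daily-backup-with-one-month-retention requirement can be sketched as a small cron-driven shell script. Everything concrete here is an assumption for illustration: the backup directory is a placeholder (a real deployment would use a durable path), the database name `discuz` is assumed, and credential handling is omitted.

```shell
#!/bin/sh
# Daily backup sketch for the forum database. Paths, database name, and
# credential handling are assumptions for illustration only.
BACKUP_DIR="${BACKUP_DIR:-/tmp/discuz-backups}"   # placeholder; use a durable path in production
DB_NAME="${DB_NAME:-discuz}"
STAMP=$(date +%Y%m%d)
FILE="$BACKUP_DIR/${DB_NAME}-${STAMP}.sql.gz"

mkdir -p "$BACKUP_DIR"

# Dump the database if mysqldump is available (the guard keeps this
# sketch runnable on machines without MySQL installed)
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump "$DB_NAME" 2>/dev/null | gzip > "$FILE"
fi

# Enforce the one-month retention window: delete backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "$FILE"
```

Scheduled once a day from cron, for example `0 3 * * * /bin/sh /path/to/backup.sh` (path assumed).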
Of the above two requirements, the first is the performance optimization requirement, and the second is the additional requirement.
3. Performance optimization scheme
Based on the site's current state and the optimization requirements, combined with our own experience, some searching, and repeated discussion with the site owner, we arrived at the following performance optimization plan:
Move from shared hosting to a dedicated server
A shared host leaves too little room to configure the Web server and PHP dynamic caching. A dedicated server has its own memory and processor resources, no longer shared with the host's other customers, and those resources directly improve its ability to accept more concurrent visits.
Move the site's PHP + MySQL stack from Windows to Linux

Under Windows, PHP performance is limited by the fact that IIS has to invoke PHP through ISAPI, so it falls short of Apache on Linux interpreting PHP directly through the PHP module, let alone Nginx with PHP-FPM. Since we now use a dedicated server, we can choose the operating system ourselves, and we picked the familiar Ubuntu Linux Server 10.04 (a year ago there was no 12.04 yet).

Use Nginx rather than Apache as the Web server

The reason for choosing Nginx over Apache is straightforward: the site serves many static attachment files, and Nginx handles static content roughly ten times more efficiently than Apache. In PHP interpretation and pseudo-static rewrite rules Apache is the better of the two, but that did not stop us from giving it up; to compensate on the PHP side, we later added dynamic caching.

Use eAccelerator to cache PHP dynamic content

A PHP accelerator improves PHP execution efficiency by caching PHP opcodes: later executions skip the parse-and-compile step and run the cached opcode directly, which speeds things up considerably. eAccelerator is an open-source PHP accelerator that optimizes and caches dynamic content, improving the caching performance of PHP scripts and almost eliminating the server overhead of compiling them; it also optimizes scripts to speed up their execution. PHP code can run 1 to 10 times faster, which is a very noticeable gain.
Specifically, we planned the following eAccelerator settings:
Cache in physical memory instead of on disk. Memory read/write performance is many times that of a hard disk, so whenever memory can be spared, it is strongly recommended to keep eAccelerator's cache contents in memory. Set the cache size to 32 MB, the maximum shared-memory segment the operating system supports by default; we could raise that limit by modifying the system configuration, but did not think it necessary, so we left it alone.

Nginx performance optimization
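The two settings above map onto eAccelerator's php.ini directives roughly as follows. This is a sketch: the directive names come from eAccelerator's documented options, and the extension and cache paths are placeholders.

```ini
extension = "eaccelerator.so"
eaccelerator.enable   = "1"
; 32 MB of shared memory for the opcode cache
eaccelerator.shm_size = "32"
; keep the cache in shared memory only, never spilling to disk
eaccelerator.shm_only = "1"
; disk cache directory (idle when shm_only is set)
eaccelerator.cache_dir = "/tmp/eaccelerator"
```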
Although we chose Nginx for its performance, it still benefits from tuning. In this case we made the following optimizations:
Use eight worker processes. Each consumes about 20 MB of memory, roughly 160 MB in total.
Make full use of the host server's CPU cores: the machine is quad-core, so we use Nginx's CPU-affinity option (worker_cpu_affinity) to pin two worker processes to each core.
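In nginx.conf, eight workers pinned two per core on a quad-core machine would look like this (a sketch; each affinity mask selects one core, with core 0 as the rightmost bit):

```nginx
worker_processes  8;
# two workers per core on a quad-core CPU
worker_cpu_affinity 0001 0001 0010 0010 0100 0100 1000 1000;
```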
Enable gzip compression: gzip compresses JS, CSS, and XML very well, typically to half the size or less, which roughly halves their transmission time.
Image files are not compressed: JPEG is already compressed, so gzip gains little there.
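The gzip rules above might be expressed in nginx.conf like this (a sketch; the MIME types and thresholds are reasonable-looking assumptions, not values from the article):

```nginx
gzip              on;
# compress text-like content; JPEG and other images are excluded simply
# by not listing their MIME types
gzip_types        text/css application/x-javascript text/xml application/xml;
# skip tiny responses where gzip overhead outweighs the saving
gzip_min_length   1k;
gzip_comp_level   5;
```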
Cache images locally for one day: the site carries many images, and once an image is uploaded it is rarely modified but frequently accessed, so letting browsers cache images for a day reduces repeat load requests to the server and improves access speed.
Cache JS and CSS files locally for seven days: these two kinds of files are rarely modified either, so caching them reduces the number of loads and improves access speed. They are not given the same cache lifetime as images because different file types are modified at different frequencies.
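The two cache lifetimes can be set with Nginx `expires` directives, roughly as follows (a sketch; the file-extension lists are assumptions):

```nginx
# images: browsers may cache for one day
location ~* \.(jpg|jpeg|gif|png)$ {
    expires 1d;
}

# JS and CSS change even less often: cache for a week
location ~* \.(js|css)$ {
    expires 7d;
}
```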
Rotate Nginx logs once a day: this greatly reduces the size of each log file. After a week of observation, the daily log file is about 50 MB; rotated monthly instead, a single log would grow to several gigabytes, and asking the Web server to work with a file that large in memory would exhaust RAM, fall back to the disk cache, and hurt performance.
At about 50 MB a day, the log loads comfortably in memory, so Nginx can record access entries quickly while handling requests.
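Daily rotation is commonly driven by logrotate; a sketch of such a configuration follows. The log and PID paths are assumptions for a typical Nginx install, and `rotate 30` keeps a month of logs.

```
/var/log/nginx/*.log {
    daily
    rotate 30
    compress
    missingok
    postrotate
        # ask the running Nginx to reopen its log files
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```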
After the optimizations above, the Nginx side needs about 200 MB of memory in total.
Use PHP-FPM, connected over a Unix socket

Nginx has no built-in PHP module, so its PHP support goes through PHP-FPM, which runs a pool of processes to handle concurrent requests. In this case we configured 20 processes, each taking about 20 MB of memory, roughly 400 MB in total. For the interaction between Nginx and PHP-FPM we chose a Unix domain socket rather than a TCP port: a Unix socket is handled at the system level as a file-based connection, while a TCP port has to pass through the network protocol stack, so the TCP route performs worse.

MySQL database performance optimization

Because the site's main program is an open-source program developed by others, we cannot optimize its database queries in the code; we can only look for improvements in MySQL itself. For a forum site, viewing and searching posts far outnumber creating and replying to posts, which at the MySQL level means far more connections that read and query tables than write to them.
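The Nginx side of that Unix-socket handoff might look like the following sketch; the socket path is an assumption, and the pool size would be matched by `pm.max_children = 20` (or the equivalent) in the PHP-FPM pool configuration.

```nginx
location ~ \.php$ {
    # hand PHP requests to PHP-FPM over a Unix domain socket,
    # bypassing the TCP/IP stack entirely
    fastcgi_pass   unix:/var/run/php-fpm.sock;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```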
We therefore want a storage engine that performs best at reading and querying tables. From prior experience, MySQL's default MyISAM engine is designed precisely for environments where reads far outnumber writes, with good query efficiency and a low memory footprint, which suits our low-memory server.
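For a read-heavy MyISAM workload on a low-memory machine, the relevant my.cnf knobs resemble the defaults in MySQL's bundled medium-environment sample file. The values below are in that spirit and should be treated as illustrative assumptions, not measurements from this site:

```ini
# key_buffer_size caches MyISAM index blocks; it is the single most
# important MyISAM tuning knob for read-heavy workloads
key_buffer_size         = 16M
table_open_cache        = 64
sort_buffer_size        = 512K
read_buffer_size        = 256K
myisam_sort_buffer_size = 8M
```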
As for MySQL configuration parameters, given the limited memory on the server we simply used the bundled medium-environment configuration file.

Use a content distribution network for static content

The site gets more than 100,000 visits and tens of thousands of unique IPs a day, and past access statistics show visits from every region of the country over a variety of network connections. To guarantee access speed for users on every network, and to reduce requests to the origin server, we use a CDN to distribute static content, so users everywhere can access the files cached on CDN nodes. The CDN service caches the static content on its servers across the country the first time it is accessed; on subsequent visits, users fetch the files directly from a CDN server instead of the origin, which significantly improves site performance.