2025-03-30 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 11/24 report
This article comes from the WeChat official account ByteCode (ID: ByteCode1024); author: programmer DHL.
Today's article focuses on memory fundamentals, the factors that can cause OOM crashes, and the corresponding solutions. Through this article you will learn:
What virtual memory and physical memory are
How much virtual memory is available on 32-bit and 64-bit devices
Why insufficient virtual memory mainly occurs on 32-bit devices
How to solve the problem of insufficient virtual memory
The distribution of virtual memory after App startup completes
How to solve the problem of insufficient memory in the Java heap
Why OOM can still occur when there is plenty of memory available on the Java heap
Which metric data you need to care about when doing performance optimization
You may have seen the same optimization scheme behave differently: after it ships, App A's crash rate drops a lot, while App B gains only a little. The same optimization may not yield the same benefit in different Apps, because each App faces different user groups in different countries and regions, and therefore different device models. That is why we need to understand memory fundamentals, combine online and offline data, attribute problems to our own App, and apply the right remedy to achieve greater benefits.
Memory is an extremely scarce resource. Unreasonable use leaves less and less memory available and may cause jank, ANR, OOM crashes, Native crashes, and so on, seriously affecting the user experience. So memory optimization is a very important part of performance optimization.
In the early days of memory optimization, we all had a subconscious rule that "the less memory, the better", which is wrong in some cases. On high-end devices we can afford to allocate more memory to improve the user experience, but on low-end devices, where memory is already scarce, we should reduce allocations as much as possible. For example, performance-heavy animations and effects can be turned off on low-end devices, or hardware acceleration can be disabled in favor of other solutions; this not only reduces crashes but also reduces jank and improves the user experience.
Because Java has automatic garbage collection, few developers pay attention to memory during development, assuming that GC will reclaim everything automatically. As a result, resources such as Bitmaps, animations, and players are not released proactively after use; we just wait for GC. In real projects, relying on GC is unreliable. First, when GC runs is nondeterministic, and GC comes in different types: a Full GC triggers a stop-the-world event, which makes the App stutter more severely.
In addition, GC decides whether an object can be reclaimed using reachability analysis. If there is a memory leak, GC will not reclaim those resources, and they accumulate gradually; when the heap limit is reached, the App crashes with OOM. So you would have to guarantee that neither you nor anyone else on the team ever writes leaking code, which in reality is impossible. Relying solely on GC's automatic collection is therefore unreliable. Even though Java reclaims memory automatically, we should keep memory management in mind: when we request memory, we should release it promptly on exit or when it is no longer in use. Truly allocate on demand and release in time.
As available memory shrinks, the App eventually crashes with OOM. Anyone who has done OOM optimization will find that most OOM crash stacks captured online are merely the last straw that broke the camel's back, not the root cause. So we need to attribute OOM crashes and find the biggest consumers of memory; reducing the memory already in use reduces OOM crashes. I roughly break this down into the following areas:
Virtual memory and physical memory
Heap memory
A heap memory leak means that memory allocated to objects at run time is not released, or cannot be released for other reasons, after the program exits or leaves a screen.
Resource leaks: FDs, sockets, threads, and so on are all limited on every phone; if they are not released, the App crashes from resource exhaustion. We once had an FD leak online that tripled our crash rate.
Allocated memory reaches the upper limit of the Java heap
There is still plenty of memory available, but it is fragmented and no contiguous block is large enough
The cumulative size of one or many object allocations is too large, for example creating a new Bitmap on every iteration of a looping animation
Java heap memory overflow
Memory leak
The number of FDs exceeds the current phone's threshold.
The number of threads exceeds the current phone's threshold.
FD and thread crashes account for a very low percentage, so they were not the focus of our early optimization. In this article we focus on virtual memory and physical memory; the next article will cover heap memory, the area where the program allocates memory for objects, which itself belongs to virtual memory.
Virtual memory and physical memory
Before introducing virtual memory, we need to introduce physical memory. Physical memory is real memory (that is, the RAM chips). If applications operated directly on physical memory, there would be many problems:
Security: the memory space between applications is not isolated, so application A could modify application B's memory data, which is very insecure.
Low memory utilization: application use of memory causes fragmentation; even with plenty of free memory, there may be no contiguous block large enough to allocate, resulting in a crash.
Low efficiency: when multiple applications read and write physical memory at the same time, efficiency suffers.
To solve these problems, each application is given an "intermediate" layer of memory that is eventually mapped to physical memory: the virtual memory we discuss next.
The operating system allocates independent virtual memory to each application, isolating applications from each other and preventing application A from modifying application B's memory data. Virtual memory is eventually mapped to physical memory: when an application requests memory it receives virtual memory, and physical memory is only committed when a write is actually performed. The benefit is that applications can use contiguous address spaces to access non-contiguous physical memory.
The amount of virtual memory available to each application is limited by the CPU bit width and the kernel. The 16-bit, 32-bit, and 64-bit CPUs we often speak of refer to the CPU's bit width: the width of data it can process at once, that is, the number of binary digits the CPU handles (16, 32, or 64 bits respectively). Currently, 32-bit and 64-bit devices dominate the market.
Virtual memory available on 32-bit and 64-bit devices
Virtual memory available to 32-bit devices: 3 GB. The address space of a 32-bit CPU architecture is 2^32 = 4 GB. Virtual memory is divided into kernel space and user space, and the kernel provides three parameters for splitting the virtual address space, which determine how much of it user space can access:
VMSPLIT_3G: the default, meaning user space can use the low 3 GB of addresses, and the remaining high 1 GB is assigned to the kernel
VMSPLIT_2G: user space can use the low 2 GB of addresses
VMSPLIT_1G: user space can use the low 1 GB of addresses
Virtual memory available to 64-bit applications: 512 GB. 64-bit CPU devices have a 64-bit address space, but not all of it is usable; only part of the addresses are used, leaving room for later expansion.
Android's default virtual address length is configured as CONFIG_ARM64_VA_BITS=39, so the address space available to Android's 64-bit applications is 2^39 = 512 GB.
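The address-space arithmetic above can be sketched as a one-liner: total addressable bytes are 2 raised to the virtual-address bit width.

```kotlin
// A minimal sketch of the arithmetic above: usable address space as a
// function of virtual-address bit width.
fun addressSpaceGiB(vaBits: Int): Long = (1L shl vaBits) / (1L shl 30)

fun main() {
    println("32-bit CPU:        ${addressSpaceGiB(32)} GiB") // 4
    println("39-bit VA (ARM64): ${addressSpaceGiB(39)} GiB") // 512
}
```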
A 32-bit application running on a 64-bit device can use a 4 GB virtual address space, while a 64-bit application can use 512 GB, so virtual address space is not scarce on 64-bit machines. This is why, in 2019, Google Play began requiring a 64-bit version in addition to the 32-bit version.
Among our OOM-crash devices, 32-bit devices account for more than 50%, and insufficient virtual memory mainly occurs on 32-bit devices.
Why insufficient virtual memory mainly occurs on 32-bit devices
Limited by the address space, the maximum is 4 GB, of which kernel space occupies 1 GB, leaving 3 GB of user space. We can inspect the current virtual memory allocation by parsing the /proc/<pid>/smaps file.
https://android.googlesource.com/platform/frameworks/base/+/3025ef332c29e255388f74b2afefe05f64bce07c/core/jni/android_os_Debug.cpp
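As a rough illustration of parsing smaps, the sketch below sums the mapped size ("Size:", i.e. Vss) per region name. The sample text is invented for illustration; on a device you would read File("/proc/self/smaps") instead.

```kotlin
// Invented sample in /proc/<pid>/smaps format, for illustration only.
val sampleSmaps = """
    7f0000000000-7f0000001000 rw-p 00000000 00:00 0  [anon:libc_malloc]
    Size:                  4 kB
    Rss:                   4 kB
    7f0000001000-7f0000003000 rw-p 00000000 00:00 0  [anon:libc_malloc]
    Size:                  8 kB
    7f0000003000-7f0000004000 r-xp 00000000 fd:00 123 /system/lib/libc.so
    Size:                  4 kB
""".trimIndent()

fun vssByRegion(smaps: String): Map<String, Long> {
    val totals = mutableMapOf<String, Long>()
    var region = "[anonymous]"
    for (line in smaps.lineSequence()) {
        val first = line.firstOrNull() ?: continue
        if (first.isDigit() || first in 'a'..'f') {
            // Region header: "start-end perms offset dev inode  name"
            region = line.split(Regex("\\s+"), limit = 6).getOrNull(5) ?: "[anonymous]"
        } else if (line.startsWith("Size:")) {
            val kb = line.removePrefix("Size:").trim().removeSuffix("kB").trim().toLong()
            totals.merge(region, kb, Long::plus)
        }
    }
    return totals
}

fun main() {
    vssByRegion(sampleSmaps).forEach { (name, kb) -> println("$name: $kb kB") }
}
```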
System resource pre-allocation: when the Zygote process initializes, it loads the Framework layer's code and resources, which child processes forked from Zygote can then use directly. Framework resources include the Framework-layer Java code, .so libraries, the ART virtual machine, and various static resources such as fonts and files.
In the system's pre-allocated area, the [anon:libwebview reservation] region occupies 130 MB of memory.
App's own resources, including the App's code and resources, stack space consumed by threads the App creates directly or indirectly, memory the App requests, memory-mapped files, and so on.
The Java heap is where objects created from Java / Kotlin are allocated; it is managed and reclaimed by GC. During collection, GC copies objects from From Space to To Space; these two areas are dalvik-main space and dalvik-main space 1, and each is the same size as the Java heap, which on my current test device is 512 MB, as shown in the figure below.
According to the Android source code, the Java heap size is set according to the RAM size; it is an empirical value that manufacturers can change, and on a rooted phone you can change it yourself. No matter how large the RAM is, the default upper limit of the Java heap so far is 512 MB.
https://android.googlesource.com/platform/frameworks/native/+/master/build
RAM (MB) | heap config (...-dalvik-heap.mk) | heapsize (MB)
-        | phone-hdpi-dalvik-heap.mk        | 32
512      | ...512-dalvik-heap.mk            | 128
1024     | ...1024-dalvik-heap.mk           | 256
2048     | ...2048-dalvik-heap.mk           | 512
4096     | ...4096-dalvik-heap.mk           | 512
No matter how large the RAM is, the upper limit of the heap so far defaults to 512 MB.
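The table above can be read as a simple lookup from RAM size to default heap size. The sketch below is hypothetical (values copied from the table, and vendors are free to change them):

```kotlin
// Hypothetical lookup mirroring the RAM -> dalvik.vm.heapsize table above.
// Real values come from the device's dalvik-heap.mk and may differ per vendor.
fun defaultHeapSizeMb(ramMb: Int): Int = when {
    ramMb <= 512 -> 128
    ramMb <= 1024 -> 256
    else -> 512 // capped at 512 MB no matter how large the RAM is
}

fun main() {
    for (ram in listOf(512, 1024, 2048, 4096, 8192)) {
        println("RAM $ram MB -> heapsize ${defaultHeapSizeMb(ram)} MB")
    }
}
```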
Memory-mapped files: mmap is a way of mapping files into memory. Our APK, dex, .so files and so on are all read through mmap, which increases virtual memory usage; how much memory a mapping occupies depends on how it is read and written.
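The mmap behaviour described above can be sketched with Java NIO's FileChannel.map (which uses mmap underneath): mapping reserves virtual address space for the whole region at once, while physical pages are only committed when the mapped region is actually touched.

```kotlin
import java.io.File
import java.io.RandomAccessFile
import java.nio.channels.FileChannel

// Sketch: memory-map a temporary file, the way APK/dex/so files are read.
fun mapTempFile(sizeBytes: Long): Int {
    val file = File.createTempFile("mmap-demo", ".bin").apply { deleteOnExit() }
    RandomAccessFile(file, "rw").use { raf ->
        raf.setLength(sizeBytes) // sets file length; nothing is resident yet
        val buf = raf.channel.map(FileChannel.MapMode.READ_WRITE, 0, sizeBytes)
        buf.put(0, 42.toByte())  // touching a page commits physical memory for it
        return buf.capacity()
    }
}

fun main() {
    println("mapped ${mapTempFile(4L * 1024 * 1024)} bytes")
}
```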
After the kernel, system resources, and each App's own usage are accounted for, not much memory is left for us, so we should use system resources rationally: truly allocate on demand and release promptly.
How to solve the problem of insufficient virtual memory
At present the industry has many clever techniques for relieving insufficient virtual memory, roughly along the following lines.
The default stack space of a Native thread is about 1 MB. Testing showed that the logic executed in these threads does not need that much space, so halving the Native thread stack can reduce pthread_create OOM crashes.
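On the JVM side, a smaller per-thread stack can be requested through the four-argument Thread constructor, whose stackSize parameter is a hint the VM/OS may round up or ignore. The 512 KB below is an illustrative value, not necessarily what any particular App ships with.

```kotlin
// Sketch: start a worker thread with an explicitly reduced stack size.
fun startWithSmallStack(stackBytes: Long, task: Runnable): Thread =
    Thread(null, task, "small-stack-worker", stackBytes).also { it.start() }

fun main() {
    var sum = 0L
    val t = startWithSmallStack(512L * 1024, Runnable { for (i in 1..1000) sum += i })
    t.join()
    println("sum = $sum") // 500500
}
```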
The [anon:libwebview reservation] region in the system's pre-allocated area occupies 130 MB. You can try releasing WebView's pre-allocated memory to reclaim part of the virtual memory.
Halve the virtual machine heap space: as mentioned above, dalvik-main space and dalvik-main space 1 are two areas of the same size, and halving the VM heap space actually means reducing the memory occupied by one of them.
Kuaishou optimized the memory allocator jemalloc, freeing up the virtual memory occupied by [anon:libc_malloc]
The statistics below show the virtual memory consumed by libc_malloc at first App start, about 156 MB:
Vss 159744 kB | Pss 81789 kB | Rss 82320 kB | name [anon:libc_malloc]
The default allocator before Android 11 was jemalloc; since Android 11 it is scudo.
Distribution of virtual memory after App startup completes
The figure below shows the virtual memory (Vss) occupied by an App after starting on Android 7.0. The distribution differs across systems and Apps; you can inspect your own App's virtual memory allocation by parsing the /proc/<pid>/smaps file.
As shown in the figure above, it is mainly divided into three parts:
Dalvik (that is, the Java heap): the area where the program allocates memory for objects at run time
Program files: dex, so, oat
Native
For the problems above, we optimized the project by the following means, focusing on the memory occupied by Dalvik. Due to space constraints, a detailed analysis will come in later articles:
From Android 3.0 to 7.0, Bitmap objects and their pixel data live mainly on the Java heap, whose upper limit is 512 MB, while Native allocations only consume virtual memory (3 GB usable on 32-bit devices, far more on 64-bit devices). So we can try allocating Bitmap pixel data in Native memory to relieve Java-heap pressure and reduce OOM crashes.
When using a third-party image library, set different cache sizes for high-end and low-end devices; this preserves the experience on high-end devices while reducing the OOM crash rate on low-end devices.
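Tiered cache sizing can be sketched by deriving the cache from the VM's max heap, with a smaller fraction on low-end devices. The 1/8 and 1/16 ratios below are illustrative assumptions, not values from the article.

```kotlin
// Sketch: size an image cache from the max heap, smaller on low-end devices.
// The 1/8 vs 1/16 split is an assumed, illustrative policy.
fun imageCacheBytes(maxHeapBytes: Long, isLowEnd: Boolean): Long =
    if (isLowEnd) maxHeapBytes / 16 else maxHeapBytes / 8

fun main() {
    val max = Runtime.getRuntime().maxMemory()
    println("high-end cache: ${imageCacheBytes(max, false) / (1024 * 1024)} MB")
    println("low-end cache:  ${imageCacheBytes(max, true) / (1024 * 1024)} MB")
}
```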
Converge Bitmap usage: avoid creating Bitmaps repeatedly, and release resources (Bitmap, animations, players, etc.) promptly when exiting a screen.
Memory recovery strategy: when an Activity or Fragment leaks, its associated animations, Bitmaps, DrawingCache, backgrounds, listeners, etc. cannot be released. On exiting the screen, we recursively traverse all child views and release the related resources, reducing the memory a leak pins.
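The recursive release walk can be sketched with a hypothetical ViewNode stand-in for Android's View/ViewGroup (the real calls would be View.clearAnimation(), setBackground(null), setOnClickListener(null), and so on):

```kotlin
// Hypothetical stand-in for a view tree; not an Android class.
class ViewNode(val children: List<ViewNode> = emptyList()) {
    var animation: Any? = Any()
    var background: Any? = Any()
    var clickListener: Any? = Any()
}

// Recursively drop references so a leaked root pins as little as possible.
fun releaseTree(node: ViewNode) {
    node.animation = null      // clearAnimation()
    node.background = null     // setBackground(null)
    node.clickListener = null  // setOnClickListener(null)
    node.children.forEach(::releaseTree)
}

fun main() {
    val root = ViewNode(listOf(ViewNode(), ViewNode(listOf(ViewNode()))))
    releaseTree(root)
    println("released, root.animation = ${root.animation}") // null
}
```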
Converge threads: legacy code creates threads with new Thread, AsyncTask, and ad-hoc thread pools in many places in the project. Using a unified thread pool reduces the number of threads the App creates and the system overhead.
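A unified pool can be sketched as a single shared executor replacing scattered new Thread / AsyncTask usages. Daemon threads are used here only so this demo JVM can exit without an explicit shutdown, and sizing by CPU count is an illustrative choice, not a value from the article.

```kotlin
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

// Sketch: one shared pool for the whole App instead of ad-hoc threads.
object AppExecutors {
    val io: ExecutorService = Executors.newFixedThreadPool(
        maxOf(2, Runtime.getRuntime().availableProcessors())
    ) { r -> Thread(r, "app-io").apply { isDaemon = true } }
}

fun main() {
    val squares = (1..4).map { n -> AppExecutors.io.submit<Int> { n * n } }.map { it.get() }
    println(squares) // [1, 4, 9, 16]
}
```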
Adopt different strategies for low-end and high-end devices to reduce memory consumption on low-end devices.
Memory leaks can never be eliminated completely, so sort out the top leaks, focusing on those that pin the most memory and those produced by the most frequently used scenarios
Frequent creation of small objects whose cumulative heap allocation grows too large; these usually show an obvious stack and can be solved from the stack information, for example a Bitmap created on every iteration of a looping animation
Large objects: a single heap allocation that is too large
Delete dead code to reduce the memory occupied by dex files
Reduce the number of dex files in the App; non-essential features can be delivered dynamically
Load .so files on demand; do not load them all in advance, load each only when it is needed
There is still a lot of memory available on the Java heap; why is there still OOM?
Many friends have asked me this question; it is probably due to the following reasons:
Memory fragmentation: no contiguous block large enough to allocate
Insufficient virtual memory
The number of threads or FDs exceeds the current phone's threshold
Finally, when doing performance optimization we need to watch not only performance metric data but also the impact on business metric data, such as how much the work improves usage duration, retention, and so on.
Why care about business metrics data?
Performance metrics such as OOM crash rate, Native crash rate, and ANR may mean something only to client-side engineers; others (product managers, bosses, etc.) neither know nor care what OOM, Native crash, and ANR mean, but they are sensitive to business metrics such as usage duration and retention, which better reflect the value of the work. This is just my own view; everyone stands at a different angle and sees things differently.
This is the end of the article. It only sorts out memory-related knowledge, the factors that can lead to OOM crashes, and the corresponding solutions. The next article will cover heap memory, the area where the program allocates memory for objects at run time.