Reprinted from https://www.cnblogs.com/hucn/p/3572384.html
While deploying locally I ran into a service failure caused by the exception java.lang.OutOfMemoryError: GC overhead limit exceeded. The log showed that too many resources had been loaded into memory; the local machine was underpowered, so an excessive share of time was being spent in GC. There are two ways to work around the problem: add the parameter -XX:-UseGCOverheadLimit to turn the check off, or increase the heap size, e.g. -Xmx1024m. The hole is patched, but why did it happen?
Everyone knows that OOM means the JVM has run out of memory, but what is GC overhead limit exceeded?
The GC overhead limit check is a policy defined by HotSpot since JDK 1.6. By tracking how much time is spent in GC it predicts that an OOM is imminent and throws the exception ahead of time, before the real OOM happens. Sun's official definition is roughly: "The parallel/concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in GC and less than 2% of the heap is recovered." The feature exists to keep an application from limping along indefinitely because the heap is too small for it to make progress. (The two thresholds correspond to the HotSpot flags -XX:GCTimeLimit=98 and -XX:GCHeapFreeLimit=2.)
It sounds useless at first: what good is predicting an OOM? At best you can catch it, release some memory, and keep the application from hanging. In practice this policy usually won't save your application, but it does give you a chance for a last-ditch struggle before the application goes down, such as taking a heap dump.
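A minimal sketch of that last-ditch pattern: since OutOfMemoryError is an Error it can be caught, and HotSpot exposes a diagnostic MXBean (com.sun.management.HotSpotDiagnosticMXBean) that can write a heap dump programmatically. The error is simulated below with an explicit throw so the example runs deterministically; class and method names such as LastDitch and doWork are my own.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class LastDitch {
    public static void main(String[] args) throws Exception {
        try {
            doWork();
        } catch (OutOfMemoryError e) {
            // Last-ditch struggle: record what we can before going down.
            System.out.println("caught: " + e.getMessage());
            Path dump = Files.createTempDirectory("oom").resolve("heap.hprof");
            HotSpotDiagnosticMXBean bean = ManagementFactory
                    .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(dump.toString(), true); // true = live objects only
            System.out.println("dumped: " + Files.exists(dump));
        }
    }

    // Simulated failure; a real one would come from allocation pressure.
    static void doWork() {
        throw new OutOfMemoryError("GC overhead limit exceeded");
    }
}
```

In a real service the catch block would also flush logs and metrics; the dump can then be opened in MAT.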
And sometimes this policy itself causes trouble, for example frequent spurious OOMs while loading a large dataset into memory.
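A sketch of how such a load trips the check: keep allocating small long-lived objects so that nearly every collection recovers almost nothing. The cap below keeps the example safe to run as-is; to actually reproduce the error, remove the cap and run with a tiny heap such as -Xmx16m (the class name and CAP constant are my own).

```java
import java.util.HashMap;
import java.util.Map;

public class OverheadDemo {
    // Safety cap so the demo terminates on a normal heap.
    static final long CAP = 200_000;

    public static void main(String[] args) {
        Map<Long, Long> retained = new HashMap<>();
        try {
            for (long i = 0; ; i++) {
                if (i >= CAP) {          // remove this check (and run with a
                    System.out.println("reached cap"); // small -Xmx) to reproduce
                    return;
                }
                // Every entry stays reachable, so GC frees almost nothing:
                // exactly the >98% time / <2% recovered pattern the check detects.
                retained.put(i, i);
            }
        } catch (OutOfMemoryError e) {
            System.out.println("OOM: " + e.getMessage());
        }
    }
}
```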
If you hit this problem in production, don't simply guess or paper over it without knowing why. Run with -verbose:gc -XX:+PrintGCDetails to see what is actually causing the exception. Usually the cause is that the old generation is too full, leading to frequent Full GCs and ultimately to GC overhead limit exceeded. If the GC log is not enough, use a tool such as JProfiler to inspect memory usage and check whether the old generation is leaking. Another way to analyze a leak is -XX:+HeapDumpOnOutOfMemoryError, which makes the JVM take a heap dump automatically on OOM; the dump can then be examined with MAT. Also keep an eye on the young generation: allocating too many short-lived objects can trigger the exception as well.
The log itself is not hard to read: each entry records the type, sizes and duration of a GC. For instance:
33.125: [GC [DefNew: 16000K->16000K(16192K), 0.0000574 secs][Tenured: 2973K->2704K(16384K), 0.1012650 secs] 18973K->2704K(32576K), 0.1015066 secs]
100.667:[Full GC [Tenured: 0K->210K(10240K), 0.0149142 secs] 4603K->210K(19456K), [Perm : 2999K->2999K(21248K)], 0.0150007 secs]
GC and Full GC indicate the pause type; Full GC means a stop-the-world collection. On the two sides of each arrow are the sizes of the region before and after the GC — the young generation, the tenured generation and the permanent generation, respectively — with the total size of the region in parentheses. The number before the colon is the time at which the GC occurred, in seconds since the JVM started. DefNew (short for Default New Generation) indicates the Serial collector, while PSYoungGen indicates the Parallel Scavenge collector. By analyzing the log we can find the cause of GC overhead limit exceeded and solve the problem by adjusting the corresponding parameters.
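To illustrate reading such a line, the sketch below extracts the before/after/total sizes with a regular expression. The pattern only covers the simple `A->B(C)` shape shown above — it is an illustration, not a general GC-log parser.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogFields {
    // Matches the "16000K->16000K(16192K)" shape: before -> after (total).
    static final Pattern SIZES =
            Pattern.compile("(\\d+)K->(\\d+)K\\((\\d+)K\\)");

    public static void main(String[] args) {
        String line = "33.125: [GC [DefNew: 16000K->16000K(16192K), 0.0000574 secs]"
                + "[Tenured: 2973K->2704K(16384K), 0.1012650 secs] "
                + "18973K->2704K(32576K), 0.1015066 secs]";
        Matcher m = SIZES.matcher(line);
        while (m.find()) {
            long before = Long.parseLong(m.group(1));
            long after  = Long.parseLong(m.group(2));
            long total  = Long.parseLong(m.group(3));
            System.out.println("freed " + (before - after) + "K of " + total + "K");
        }
    }
}
```

For the first example line this reports that the young generation freed nothing (0K of 16192K), the tenured generation freed 269K, and the heap as a whole went from 18973K to 2704K — a shape worth watching for when diagnosing the overhead limit.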
Explanations of the terms involved:
Eden Space: the heap memory pool in which most objects are initially allocated.
Survivor Space: a heap memory pool holding objects that survived a GC of Eden Space.
Tenured Generation: a heap memory pool holding objects that have survived several GCs in Survivor Space.
Permanent Generation: non-heap space that stores class and method objects.
Code Cache: non-heap space the JVM uses to store compiled native code (JIT output).
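The same pools can be inspected at runtime through the standard java.lang.management API. Which names appear depends on the collector in use, and on modern JVMs the permanent generation has been replaced by Metaspace:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // e.g. "G1 Eden Space (Heap memory)", "Metaspace (Non-heap memory)"
            System.out.println(pool.getName() + " (" + pool.getType() + ")");
        }
    }
}
```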
Finally, the HotSpot implementation of the GC overhead limit check is attached.
bool print_gc_overhead_limit_would_be_exceeded = false;
if (is_full_gc) {
  if (gc_cost() > gc_cost_limit &&
      free_in_old_gen < (size_t) mem_free_old_limit &&
      free_in_eden < (size_t) mem_free_eden_limit) {
    // Collections, on average, are taking too much time, and
    //   gc_cost() > gc_cost_limit
    // we have too little space available after a full gc.
    //   total_free_limit < mem_free_limit
    // where
    //   total_free_limit is the free space available in
    //     both generations
    //   total_mem is the total space available for allocation
    //     in both generations (survivor spaces are not included
    //     just as they are not included in eden_limit).
    //   mem_free_limit is a fraction of total_mem judged to be an
    //     acceptable amount that is still unused.
    // The heap can ask for the value of this variable when deciding
    // whether to thrown an OutOfMemory error.
    // Note that the gc time limit test only works for the collections
    // of the young gen + tenured gen and not for collections of the
    // permanent gen.  That is because the calculation of the space
    // freed by the collection is the free space in the young gen +
    // tenured gen.
    // At this point the GC overhead limit is being exceeded.
    inc_gc_overhead_limit_count();
    if (UseGCOverheadLimit) {
      if (gc_overhead_limit_count() >= AdaptiveSizePolicyGCTimeLimitThreshold) {
        // All conditions have been met for throwing an out-of-memory
        set_gc_overhead_limit_exceeded(true);
        // Avoid consecutive OOM due to the gc time limit by resetting
        // the counter.
        reset_gc_overhead_limit_count();
      } else {
        // The required consecutive collections which exceed the
        // GC time limit may or may not have been reached. We
        // are approaching that condition and so as not to
        // throw an out-of-memory before all SoftRef's have been
        // cleared, set _should_clear_all_soft_refs in CollectorPolicy.
        // The clearing will be done on the next GC.
        bool near_limit = gc_overhead_limit_near();
        if (near_limit) {
          collector_policy->set_should_clear_all_soft_refs(true);
          if (PrintGCDetails && Verbose) {
            gclog_or_tty->print_cr("  Nearing GC overhead limit, "
                                   "will be clearing all SoftReference");
          }
        }
      }
    }
    // Set this even when the overhead limit will not
    // cause an out-of-memory.  Diagnostic message indicating
    // that the overhead limit is being exceeded is sometimes
    // printed.
    print_gc_overhead_limit_would_be_exceeded = true;
  } else {
    // Did not exceed overhead limits
    reset_gc_overhead_limit_count();
  }
}