Deferring and combining write barriers for a garbage-collected heap

Active Publication Date: 2008-07-22
ORACLE INT CORP

AI Technical Summary

Benefits of technology

[0044] The present invention provides a technique for reducing the number of write barriers without compromising garbage-collector performance or correctness. Conventionally, a compiler emits write-barrier code at a location immediately following a reference-modifying instruction in the mutator code. In contrast, a compiler embodying the invention may defer emitting the instruction's write-barrier code until a subsequent location in the mutator code, i.e., not immediately following the reference-modifying instruction. By deferring write barriers in this manner, the compiler can analyze the deferred write barriers and combine those that, when executed, provide the same information to the garbage collector. Preferably, the compiler emits the remaining, uncombined deferred write barriers at consecutive locations in the mutator code. Because redundant or unnecessary write-barrier code is removed from the mutator code, the inventive technique can minimize the amount of write-barrier overhead in the mutator, thereby enabling the mutator to execute faster and more efficiently.
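
To make the combining idea concrete, the following Java-flavored sketch (illustrative only, not code from the patent) contrasts conventional per-store barrier emission with deferred-and-combined emission under an imprecise card-marking scheme, in which a barrier merely dirties the card containing the modified object. Because Java exposes no object addresses, the address is passed in explicitly as a stand-in.

    // Illustrative sketch only. Assumes imprecise card marking: the
    // barrier records which card holds the modified object, so two
    // stores into the same object yield barriers carrying identical
    // information for the collector.
    class Node {
        Node left;
        Node right;
    }

    class CardTable {
        static final int CARD_SHIFT = 9;               // assumed 512-byte cards
        static final byte[] cards = new byte[1 << 16]; // power-of-two table size

        // The write barrier: dirty the card covering the modified object.
        static void dirty(long objectAddress) {
            cards[(int) ((objectAddress >>> CARD_SHIFT) & (cards.length - 1))] = 1;
        }
    }

    class Mutator {
        // Conventional emission: a barrier immediately follows each store.
        static void conventional(Node n, Node a, Node b, long addrOfN) {
            n.left = a;
            CardTable.dirty(addrOfN);   // barrier for the first store
            n.right = b;
            CardTable.dirty(addrOfN);   // redundant: dirties the same card again
        }

        // Deferred-and-combined emission: both stores complete first; one
        // barrier then supplies the same information to the collector.
        static void deferredAndCombined(Node n, Node a, Node b, long addrOfN) {
            n.left = a;
            n.right = b;
            CardTable.dirty(addrOfN);   // one barrier covers both stores
        }
    }

In the conventional version the second barrier dirties a card that is already dirty; deferring both barriers past the second store lets the compiler recognize the redundancy and emit a single barrier in its place.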

Problems solved by technology

In the field of computer systems, considerable effort has been expended on the task of allocating memory to data objects.
The use of static memory allocation in writing certain long-lived applications makes it difficult to restrict storage requirements to the available memory space.
If the application fails to reclaim unused memory—or, worse, loses track of the address of a dynamically allocated segment of memory—its memory requirements will grow over time to exceed the system's available memory.
This kind of error is known as a “memory leak.” Another kind of error occurs when an application reclaims memory for reuse even though it still maintains a reference to that memory.
If the reclaimed memory is reallocated for a different purpose, the application may inadvertently manipulate the same memory in multiple inconsistent ways.
Specifically, a programmer dealing with a given sequence of code tends to know whether some portion of memory is still in use for that sequence, but it is considerably more difficult to know what the rest of the application is doing with that memory.
Even in this simple case, though, there is a sense in which the application does not itself provide the entire garbage collector.
Now, some of the functionality that source-language constructs specify can be quite complicated, requiring many machine-language instructions for their implementation.
But inlining runs the risk that “code bloat” will result if the operation is invoked at many source-code locations.
Although the FIG. 3 arrangement is a popular one, it is by no means universal, and many further implementation types can be expected.
But it can also have significant adverse performance effects if it is not implemented carefully.
Obviously, such an approach can slow mutator operation significantly.
For many interactive and real-time applications, though, this approach is not acceptable.
The delay in mutator operation that the collection cycle's execution causes can be annoying to a user and can prevent a real-time application from responding to its environment with the required speed.
In an interactive system, for instance, a user may never notice hundred-millisecond interruptions for garbage collection, whereas most users would find interruptions lasting for two seconds to be annoying.
But it takes too long in other situations, so workers in this field have employed a number of approaches to expediting reference tracing.
On the other hand, laboriously scanning the entire mature generation for references to young-generation (or mature-generation) objects would ordinarily take too long, so write barriers are typically used to set card-table entries associated with the mature generation, thereby limiting the amount of memory the collector must search for modified mature-generation references.
Clearly, this added code overhead may significantly increase the mutator's execution time, especially when the mutator code contains a relatively large number of reference-modifying instructions.
So adding write barriers to increase the garbage collector's efficiency tends to compromise the mutator's.
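
The card-table arrangement mentioned above can be sketched from the collector's side as follows; this is an illustrative Java fragment with an assumed card size and a hypothetical scanCardForModifiedReferences helper, not an excerpt from any particular collector. Its point is that the collector's search is bounded by the number of dirtied cards rather than by the size of the mature generation.

    // Illustrative sketch only. The collector visits just the cards that
    // mutator write barriers have dirtied, instead of scanning the whole
    // mature generation for modified references.
    class CardScanner {
        static final int CARD_SIZE = 512;   // assumed; must match the mutator's card size

        static void scanDirtyCards(byte[] cards, long heapBase) {
            for (int i = 0; i < cards.length; i++) {
                if (cards[i] != 0) {
                    long start = heapBase + (long) i * CARD_SIZE;
                    scanCardForModifiedReferences(start, start + CARD_SIZE);
                    cards[i] = 0;           // clean the card once processed
                }
            }
        }

        // Hypothetical stand-in for the collector's reference-tracing code.
        static void scanCardForModifiedReferences(long from, long to) {
            // ... examine each reference field in [from, to) ...
        }
    }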



Examples


I. Deferring and Combining Write Barriers in Compiled Code

A. Deferring Write Barriers in Compiled Code

[0093] Conventionally, a compiler emits a write barrier (i.e., write-barrier code, as shown in FIG. 7) at a location in the mutator code immediately following a reference-modifying mutator instruction. In contrast, a compiler embodying the present invention will not always emit a write barrier immediately following its corresponding reference-modifying instruction. Instead, the compiler may defer emitting the write-barrier code until a subsequent location in the mutator code. To that end, the compiler may maintain a list of reference-modifying instructions whose write barriers have been deferred. At some later point in the mutator code, the compiler may emit write barriers based on information stored in the list. Notably, the deferred write-barrier code may be emitted in accordance with various write-barrier implementations, such as sequential store buffers, precise card-marking, or imprecise card-marking.
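
A minimal sketch of such a deferral list appears below; the class and method names are hypothetical rather than the patent's, and the emitter simply prints symbolic mark_card instructions. The point is that deferred records conveying the same information to the collector (here, equal card indices) combine automatically before emission, and the survivors are emitted at consecutive locations.

    import java.util.LinkedHashSet;
    import java.util.Set;

    // Hypothetical compiler-side sketch of deferring and combining write
    // barriers. Each reference-modifying instruction appends a record to
    // a deferral list; at the chosen emission point, duplicate records
    // collapse and the remaining barriers are emitted consecutively.
    class DeferredBarrierEmitter {
        // Equality on the card index makes duplicate barriers combine.
        record BarrierRecord(int cardIndex) { }

        private final Set<BarrierRecord> deferred = new LinkedHashSet<>();

        // Called while compiling a reference-modifying instruction:
        // record the barrier instead of emitting it immediately.
        void deferBarrier(int cardIndexOfModifiedObject) {
            deferred.add(new BarrierRecord(cardIndexOfModifiedObject));
        }

        // Called at the subsequent emission point: emit the combined
        // barriers back to back, then reset for the next region.
        void emitDeferredBarriers(StringBuilder out) {
            for (BarrierRecord r : deferred) {
                out.append("mark_card ").append(r.cardIndex).append('\n');
            }
            deferred.clear();
        }
    }

Deferring barriers for two stores into the same object records two equal BarrierRecord values, so only one mark_card instruction survives to be emitted at the combination point.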



Abstract

The present invention provides a technique for reducing the number of write barriers without compromising garbage-collector performance or correctness. To that end, a compiler defers emitting write barriers until it reaches a subsequent instruction in the mutator code. At this point, the compiler may elide repeated or unnecessary write-barrier code so as to emit only those write barriers that provide useful information to the garbage collector. By eliminating write-barrier code in this manner, the amount of write-barrier overhead in the mutator can be minimized, consequently enabling the mutator to execute faster and more efficiently. Further, collocating write barriers after the predetermined instruction enables the compiler to generate object code having better cache performance and more efficient use of guard code than is possible using conventional write-barrier implementations.

