Revision as of 07:35, 28 November 2014
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.[1]
Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance.
Dynamic memory allocation
[Diagram: external fragmentation — free memory split into small, unusable gaps between allocated blocks]
Details
The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.
Several issues complicate the implementation, such as external fragmentation, which arises when many small gaps are left between allocated memory blocks, making them unusable for an allocation request. The allocator's metadata can also inflate the size of (individually) small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" as a memory leak.
Efficiency
The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software).[2]
Implementations
Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:
Fixed-size blocks allocation
Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead this method can substantially improve performance for objects that need frequent allocation / de-allocation and is often used in video games.
Buddy blocks
In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size. All blocks of a particular size are kept in a sorted linked list or tree and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and halved. One of the resulting halves is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the next-largest size buddy-block list.
Systems with virtual memory
Virtual memory is a method of decoupling the memory organization from the physical hardware. The applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual address is translated to a physical address (typically by the memory management unit, using structures such as page tables). In this way, virtual memory enables granular control over memory systems and methods of access.
Protection
In virtual memory systems the operating system limits how a process can access the memory. This feature can be used to disallow a process to read or write to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.
Sharing
Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.
Physical organization
Memory is usually classed by access rate as with primary storage and secondary storage. Memory management systems handle moving information between these two levels of memory.
Dynamic memory management in userspace
In programming languages, memory management is a form of resource management. It entails ensuring memory safety and the absence of memory leaks. Common paradigms include manual memory management, tracing garbage collection, reference counting, and compile-time analysis.
Static memory only
Early languages had no heap allocation at all; memory was allocated statically or on the stack. These include ALGOL before ALGOL 68, COBOL before COBOL 2002, and Fortran before Fortran 90.
Manual memory management
These languages allow heap allocation but require the programmer to manually track and free all allocated memory. They include assembly, Pascal, C, C++, COBOL 2002 and later, and Fortran 90 and later.
Runtime garbage collection
A large class of programming languages does not require the programmer to free allocated memory; instead, a runtime facility executes additional code to locate and free inaccessible memory.
Tracing garbage collection involves periodically determining whether any references to each heap object still exist, and freeing the objects that have none. This is often expensive, and can result in "sawtooth" memory profiles and pauses in slower systems when the garbage collector runs. These languages include Lisp, Perl (sometimes), COBOL 2002 (object oriented), Java and other JVM languages, JavaScript, C# and other .NET languages, Ruby, and Go.
Garbage collection via reference counting increments and decrements the number of references to a heap allocation, and frees it as soon as the count reaches zero. In some languages, the reference count must be incremented manually (e.g. COM/C++, the original Objective-C); in others, the count is incremented automatically (e.g. COM/VBA, Objective-C with ARC, Python). Languages with reference counting include Perl (sometimes), COM, Objective-C, Python.
Compile-time analysis
Certain newer languages do not require the programmer to manage or deallocate any memory, yet lack the overhead of runtime garbage collection and still avoid leaking memory. Instead, the compiler determines the appropriate lifetimes of dynamically allocated variables and inserts calls to free at the appropriate points. This is done in such a way as to eliminate both dangling pointers and the need for null pointers. Complete implementations often rely on affine types.
This method has inconsistent naming, and is known as borrow checking in Rust, compile-time garbage collection in Mercury, and compile-time reference counting in ATS.
Unsafe or partial support
Certain languages have only partial or permanently broken support for compile-time analysis. In these systems, programming errors in compilable code can subvert the memory management system and leak memory, dereference null pointers, or cause other memory errors. These include RAII in C++, and regions in Cyclone.
See also
Notes
- ^ Gibson, Steve (August 15, 1988). "Tech Talk: Placing the IBM/Microsoft XMS Spec Into Perspective". InfoWorld.
- ^ doi:10.1002/spe.4380240602
References
- Donald Knuth. Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.5: Dynamic Storage Allocation, pp. 435–456.
- Simple Memory Allocation Algorithms (originally published on OSDEV Community)
- doi:10.1007/3-540-60368-9_19
- doi:10.1145/378795.378821
- doi:10.1145/582419.582421
- memorymanagement.org — a small site dedicated to memory management.
Further reading
- "Dynamic Storage Allocation: A Survey and Critical Review", Department of Computer Sciences University of Texas at Austin