Cygwin: mmap: allow remapping part of an existing anonymous mapping
[newlib-cygwin.git] / newlib / libc / stdlib / _mallocr.c
blob: 56968cbd1d0ee3c7742194b1d1d1c47abb949fd6
1 #include <newlib.h>
2 #ifdef MALLOC_PROVIDED
3 int _dummy_mallocr = 1;
4 #elif defined(_NANO_MALLOC)
5 # include "nano-mallocr.c"
6 #else
7 /* ---------- To make a malloc.h, start cutting here ------------ */
9 /*
10 A version of malloc/free/realloc written by Doug Lea and released to the
11 public domain. Send questions/comments/complaints/performance data
12 to dl@cs.oswego.edu
14 * VERSION 2.6.5 Wed Jun 17 15:55:16 1998 Doug Lea (dl at gee)
16 Note: There may be an updated version of this malloc obtainable at
17 ftp://g.oswego.edu/pub/misc/malloc.c
18 Check before installing!
20 Note: This version differs from 2.6.4 only by correcting a
21 statement ordering error that could cause failures only
22 when calls to this malloc are interposed with calls to
23 other memory allocators.
25 * Why use this malloc?
27 This is not the fastest, most space-conserving, most portable, or
28 most tunable malloc ever written. However it is among the fastest
29 while also being among the most space-conserving, portable and tunable.
30 Consistent balance across these factors results in a good general-purpose
31 allocator. For a high-level description, see
32 http://g.oswego.edu/dl/html/malloc.html
34 * Synopsis of public routines
36 (Much fuller descriptions are contained in the program documentation below.)
38 malloc(size_t n);
39 Return a pointer to a newly allocated chunk of at least n bytes, or null
40 if no space is available.
41 free(Void_t* p);
42 Release the chunk of memory pointed to by p, or no effect if p is null.
43 realloc(Void_t* p, size_t n);
44 Return a pointer to a chunk of size n that contains the same data
45 as does chunk p up to the minimum of (n, p's size) bytes, or null
46 if no space is available. The returned pointer may or may not be
47 the same as p. If p is null, equivalent to malloc. Unless the
48 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
49 size argument of zero (re)allocates a minimum-sized chunk.
50 memalign(size_t alignment, size_t n);
51 Return a pointer to a newly allocated chunk of n bytes, aligned
52 in accord with the alignment argument, which must be a power of
53 two.
54 valloc(size_t n);
55 Equivalent to memalign(pagesize, n), where pagesize is the page
56 size of the system (or as near to this as can be figured out from
57 all the includes/defines below.)
58 pvalloc(size_t n);
59 Equivalent to valloc(minimum-page-that-holds(n)), that is,
60 round up n to nearest pagesize.
61 calloc(size_t unit, size_t quantity);
62 Returns a pointer to quantity * unit bytes, with all locations
63 set to zero.
64 cfree(Void_t* p);
65 Equivalent to free(p).
66 malloc_trim(size_t pad);
67 Release all but pad bytes of freed top-most memory back
68 to the system. Return 1 if successful, else 0.
69 malloc_usable_size(Void_t* p);
70 Report the number usable allocated bytes associated with allocated
71 chunk p. This may or may not report more bytes than were requested,
72 due to alignment and minimum size constraints.
73 malloc_stats();
74 Prints brief summary statistics on stderr.
75 mallinfo()
76 Returns (by copy) a struct containing various summary statistics.
77 mallopt(int parameter_number, int parameter_value)
78 Changes one of the tunable parameters described below. Returns
79 1 if successful in changing the parameter, else 0.
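As a quick illustration of how these routines fit together, here is a minimal
standalone sketch (not part of this file; it assumes the plain public names
used when this malloc is not built into newlib, and that <malloc.h> declares
memalign and malloc_usable_size, as it does on many systems):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
  char* p;
  char* q;
  void* a;

  p = malloc(100);                  /* at least 100 usable bytes, or NULL  */
  if (p == 0)
    return 1;
  q = realloc(p, 200);              /* may move; first 100 bytes preserved */
  if (q != 0)
    p = q;
  a = memalign(64, 256);            /* 256 bytes on a 64-byte boundary     */
  printf("usable: %lu\n", (unsigned long) malloc_usable_size(p));
  free(a);
  free(p);
  return 0;
}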
81 * Vital statistics:
83 Alignment: 8-byte
84 8 byte alignment is currently hardwired into the design. This
85 seems to suffice for all current machines and C compilers.
87 Assumed pointer representation: 4 or 8 bytes
88 Code for 8-byte pointers is untested by me but has been reported
89 to work reliably by Wolfram Gloger, who contributed most of the
90 changes supporting this.
92 Assumed size_t representation: 4 or 8 bytes
93 Note that size_t is allowed to be 4 bytes even if pointers are 8.
95 Minimum overhead per allocated chunk: 4 or 8 bytes
96 Each malloced chunk has a hidden overhead of 4 bytes holding size
97 and status information.
99 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
100 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
102 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
103 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
104 needed; 4 (8) for a trailing size field
105 and 8 (16) bytes for free list pointers. Thus, the minimum
106 allocatable size is 16/24/32 bytes.
108 Even a request for zero bytes (i.e., malloc(0)) returns a
109 pointer to something of the minimum allocatable size.
111 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
112 8-byte size_t: 2^63 - 16 bytes
114 It is assumed that (possibly signed) size_t bit values suffice to
115 represent chunk sizes. `Possibly signed' is due to the fact
116 that `size_t' may be defined on a system as either a signed or
117 an unsigned type. To be conservative, values that would appear
118 as negative numbers are avoided.
119 Requests for sizes with a negative sign bit will return a
120 minimum-sized chunk.
122 Maximum overhead wastage per allocated chunk: normally 15 bytes
124 Alignment demands, plus the minimum allocatable size restriction
125 make the normal worst-case wastage 15 bytes (i.e., up to 15
126 more bytes will be allocated than were requested in malloc), with
127 two exceptions:
128 1. Because requests for zero bytes allocate non-zero space,
129 the worst case wastage for a request of zero bytes is 24 bytes.
130 2. For requests >= mmap_threshold that are serviced via
131 mmap(), the worst case wastage is 8 bytes plus the remainder
132 from a system page (the minimal mmap unit); typically 4096 bytes.
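A worked example of these figures, assuming 4-byte size_t/pointers and the
default 8-byte alignment (so the minimum chunk is 16 bytes with a 4-byte
header):

/* request  1 byte  -> minimum 16-byte chunk, 12 usable -> 11 bytes extra
   request 25 bytes -> 25 + 4 header = 29, rounded to 32 -> 3 bytes extra
   request  0 bytes -> still a minimum-size chunk (16 here; up to 24 with
                       8-byte pointers, as noted above)
   request >= mmap_threshold -> rounded up to a whole number of pages     */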
134 * Limitations
136 Here are some features that are NOT currently supported
138 * No user-definable hooks for callbacks and the like.
139 * No automated mechanism for fully checking that all accesses
140 to malloced memory stay within their bounds.
141 * No support for compaction.
143 * Synopsis of compile-time options:
145 People have reported using previous versions of this malloc on all
146 versions of Unix, sometimes by tweaking some of the defines
147 below. It has been tested most extensively on Solaris and
148 Linux. It is also reported to work on WIN32 platforms.
149 People have also reported adapting this malloc for use in
150 stand-alone embedded systems.
152 The implementation is in straight, hand-tuned ANSI C. Among other
153 consequences, it uses a lot of macros. Because of this, to be at
154 all usable, this code should be compiled using an optimizing compiler
155 (for example gcc -O2) that can simplify expressions and control
156 paths.
158 __STD_C (default: derived from C compiler defines)
159 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
160 a C compiler sufficiently close to ANSI to get away with it.
161 DEBUG (default: NOT defined)
162 Define to enable debugging. Adds fairly extensive assertion-based
163 checking to help track down memory errors, but noticeably slows down
164 execution.
165 SEPARATE_OBJECTS (default: NOT defined)
166 Define this to compile into separate .o files. You must then
167 compile malloc.c several times, defining a DEFINE_* macro each
168 time. The list of DEFINE_* macros appears below.
169 MALLOC_LOCK (default: NOT defined)
170 MALLOC_UNLOCK (default: NOT defined)
171 Define these to C expressions which are run to lock and unlock
172 the malloc data structures. Calls may be nested; that is,
173 MALLOC_LOCK may be called more than once before the corresponding
174 MALLOC_UNLOCK calls. MALLOC_LOCK must avoid waiting for a lock
175 that it already holds.
176 MALLOC_ALIGNMENT (default: NOT defined)
177 Define this to 16 if you need 16 byte alignment instead of 8 byte alignment
178 which is the normal default.
179 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
180 Define this if you think that realloc(p, 0) should be equivalent
181 to free(p). Otherwise, since malloc returns a unique pointer for
182 malloc(0), so does realloc(p, 0).
183 HAVE_MEMCPY (default: defined)
184 Define if you are not otherwise using ANSI STD C, but still
185 have memcpy and memset in your C library and want to use them.
186 Otherwise, simple internal versions are supplied.
187 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
188 Define as 1 if you want the C library versions of memset and
189 memcpy called in realloc and calloc (otherwise macro versions are used).
190 At least on some platforms, the simple macro versions usually
191 outperform libc versions.
192 HAVE_MMAP (default: defined as 1)
193 Define to non-zero to optionally make malloc() use mmap() to
194 allocate very large blocks.
195 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
196 Define to non-zero to optionally make realloc() use mremap() to
197 reallocate very large blocks.
198 malloc_getpagesize (default: derived from system #includes)
199 Either a constant or routine call returning the system page size.
200 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
201 Optionally define if you are on a system with a /usr/include/malloc.h
202 that declares struct mallinfo. It is not at all necessary to
203 define this even if you do, but will ensure consistency.
204 INTERNAL_SIZE_T (default: size_t)
205 Define to a 32-bit type (probably `unsigned int') if you are on a
206 64-bit machine, yet do not want or need to allow malloc requests of
207 greater than 2^31 to be handled. This saves space, especially for
208 very small chunks.
209 INTERNAL_LINUX_C_LIB (default: NOT defined)
210 Defined only when compiled as part of Linux libc.
211 Also note that there is some odd internal name-mangling via defines
212 (for example, internally, `malloc' is named `mALLOc') needed
213 when compiling in this case. These look funny but don't otherwise
214 affect anything.
215 _LIBC (default: NOT defined)
216 Defined only when compiled as part of the Cygnus newlib
217 distribution.
218 WIN32 (default: undefined)
219 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
220 LACKS_UNISTD_H (default: undefined)
221 Define this if your system does not have a <unistd.h>.
222 MORECORE (default: sbrk)
223 The name of the routine to call to obtain more memory from the system.
224 MORECORE_FAILURE (default: -1)
225 The value returned upon failure of MORECORE.
226 MORECORE_CLEARS (default 1)
227 True (1) if the routine mapped to MORECORE zeroes out memory (which
228 holds for sbrk).
229 DEFAULT_TRIM_THRESHOLD
230 DEFAULT_TOP_PAD
231 DEFAULT_MMAP_THRESHOLD
232 DEFAULT_MMAP_MAX
233 Default values of tunable parameters (described in detail below)
234 controlling interaction with host system routines (sbrk, mmap, etc).
235 These values may also be changed dynamically via mallopt(). The
236 preset defaults are those that give best performance for typical
237 programs/systems.
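For instance, an embedded port that tweaks several of these options before
compiling this file might use a block like the following (values are purely
illustrative, and my_sbrk is a hypothetical core-memory routine):

#define MALLOC_ALIGNMENT 16           /* need 16-byte instead of 8-byte alignment */
#define INTERNAL_SIZE_T unsigned int  /* 32-bit chunk sizes on a 64-bit target    */
#define MORECORE my_sbrk              /* routine that obtains more memory         */
#define MORECORE_CLEARS 0             /* my_sbrk does not zero the memory         */
#define LACKS_UNISTD_H                /* no <unistd.h> on this system             */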
245 /* Preliminaries */
247 #ifndef __STD_C
248 #ifdef __STDC__
249 #define __STD_C 1
250 #else
251 #if __cplusplus
252 #define __STD_C 1
253 #else
254 #define __STD_C 0
255 #endif /*__cplusplus*/
256 #endif /*__STDC__*/
257 #endif /*__STD_C*/
259 #ifndef Void_t
260 #if __STD_C
261 #define Void_t void
262 #else
263 #define Void_t char
264 #endif
265 #endif /*Void_t*/
267 #if __STD_C
268 #include <stddef.h> /* for size_t */
269 #else
270 #include <sys/types.h>
271 #endif
273 #ifdef __cplusplus
274 extern "C" {
275 #endif
277 #include <stdio.h> /* needed for malloc_stats */
278 #include <limits.h> /* needed for overflow checks */
279 #include <errno.h> /* needed to set errno to ENOMEM */
281 #ifdef WIN32
282 #define WIN32_LEAN_AND_MEAN
283 #include <windows.h>
284 #endif
287 Compile-time options
293 Special defines for Cygnus newlib distribution.
297 #ifdef _LIBC
299 #include <sys/config.h>
302 In newlib, all the publicly visible routines take a reentrancy
303 pointer. We don't currently do anything much with it, but we do
304 pass it to the lock routine.
307 #include <reent.h>
309 #define POINTER_UINT unsigned _POINTER_INT
310 #define SEPARATE_OBJECTS
311 #define HAVE_MMAP 0
312 #define MORECORE(size) _sbrk_r(reent_ptr, (size))
313 #define MORECORE_CLEARS 0
314 #define MALLOC_LOCK __malloc_lock(reent_ptr)
315 #define MALLOC_UNLOCK __malloc_unlock(reent_ptr)
317 #ifdef __CYGWIN__
318 # undef _WIN32
319 # undef WIN32
320 #endif
322 #ifndef _WIN32
323 #ifndef HAVE_SYSCONF_PAGESIZE
324 #ifdef SMALL_MEMORY
325 #define malloc_getpagesize (128)
326 #else
327 #define malloc_getpagesize (4096)
328 #endif
329 #endif
330 #endif
332 #if __STD_C
333 extern void __malloc_lock(struct _reent *);
334 extern void __malloc_unlock(struct _reent *);
335 #else
336 extern void __malloc_lock();
337 extern void __malloc_unlock();
338 #endif
340 #if __STD_C
341 #define RARG struct _reent *reent_ptr,
342 #define RONEARG struct _reent *reent_ptr
343 #else
344 #define RARG reent_ptr
345 #define RONEARG reent_ptr
346 #define RDECL struct _reent *reent_ptr;
347 #endif
349 #define RERRNO _REENT_ERRNO(reent_ptr)
350 #define RCALL reent_ptr,
351 #define RONECALL reent_ptr
353 #else /* ! _LIBC */
355 #define POINTER_UINT unsigned long
356 #define RARG
357 #define RONEARG
358 #define RDECL
359 #define RERRNO errno
360 #define RCALL
361 #define RONECALL
363 #endif /* ! _LIBC */
366 Debugging:
368 Because freed chunks may be overwritten with link fields, this
369 malloc will often die when freed memory is overwritten by user
370 programs. This can be very effective (albeit in an annoying way)
371 in helping track down dangling pointers.
373 If you compile with -DDEBUG, a number of assertion checks are
374 enabled that will catch more memory errors. You probably won't be
375 able to make much sense of the actual assertion errors, but they
376 should help you locate incorrectly overwritten memory. The
377 checking is fairly extensive, and will slow down execution
378 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
379 attempt to check every non-mmapped allocated and free chunk in the
380 course of computing the summaries. (By nature, mmapped regions
381 cannot be checked very much automatically.)
383 Setting DEBUG may also be helpful if you are trying to modify
384 this code. The assertions in the check routines spell out in more
385 detail the assumptions and invariants underlying the algorithms.
389 #if DEBUG
390 #include <assert.h>
391 #else
392 #define assert(x) ((void)0)
393 #endif
397 SEPARATE_OBJECTS should be defined if you want each function to go
398 into a separate .o file. You must then compile malloc.c once per
399 function, defining the appropriate DEFINE_ macro. See below for the
400 list of macros.
403 #ifndef SEPARATE_OBJECTS
404 #define DEFINE_MALLOC
405 #define DEFINE_FREE
406 #define DEFINE_REALLOC
407 #define DEFINE_CALLOC
408 #define DEFINE_CFREE
409 #define DEFINE_MEMALIGN
410 #define DEFINE_VALLOC
411 #define DEFINE_PVALLOC
412 #define DEFINE_MALLINFO
413 #define DEFINE_MALLOC_STATS
414 #define DEFINE_MALLOC_USABLE_SIZE
415 #define DEFINE_MALLOPT
417 #define STATIC static
418 #else
419 #define STATIC
420 #endif
423 Define MALLOC_LOCK and MALLOC_UNLOCK to C expressions to run to
424 lock and unlock the malloc data structures. MALLOC_LOCK may be
425 called recursively.
428 #ifndef MALLOC_LOCK
429 #define MALLOC_LOCK
430 #endif
432 #ifndef MALLOC_UNLOCK
433 #define MALLOC_UNLOCK
434 #endif
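A port that needs thread safety might plug in something like the following
(hypothetical helper names; whatever is used must tolerate the nested locking
described above, e.g. a recursive mutex):

#define MALLOC_LOCK    my_recursive_mutex_lock (&my_malloc_mutex)
#define MALLOC_UNLOCK  my_recursive_mutex_unlock (&my_malloc_mutex)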
437 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
438 of chunk sizes. On a 64-bit machine, you can reduce malloc
439 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
440 at the expense of not being able to handle requests greater than
441 2^31. This limitation is hardly ever a concern; you are encouraged
442 to set this. However, the default version is the same as size_t.
445 #ifndef INTERNAL_SIZE_T
446 #define INTERNAL_SIZE_T size_t
447 #endif
450 Following is needed on implementations whereby long > size_t.
451 The problem is caused because the code performs subtractions of
452 size_t values and stores the result in long values. In the case
453 where long > size_t and the first value is actually less than
454 the second value, the resultant value is positive. For example,
455 (long)(x - y) where x = 0 and y is 1 ends up being 0x00000000FFFFFFFF
456 which is 2^32 - 1 instead of 0xFFFFFFFFFFFFFFFF. This is due to the
457 fact that assignment from unsigned to signed won't sign extend.
460 #define long_sub_size_t(x, y) \
461 (sizeof (long) > sizeof (INTERNAL_SIZE_T) && x < y \
462 ? -(long) (y - x) \
463 : (long) (x - y))
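A small standalone sketch of the pitfall this macro works around, assuming an
LP64 target (8-byte long) with a 4-byte unsigned int standing in for
INTERNAL_SIZE_T:

#include <stdio.h>

int main(void)
{
  unsigned int x = 0, y = 1;
  long bad  = (long)(x - y);                 /* 0x00000000FFFFFFFF: positive  */
  long good = (sizeof (long) > sizeof (unsigned int) && x < y)
                ? -(long)(y - x)
                : (long)(x - y);             /* same logic as long_sub_size_t */
  printf("%ld %ld\n", bad, good);            /* prints 4294967295 -1          */
  return 0;
}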
466 REALLOC_ZERO_BYTES_FREES should be set if a call to
467 realloc with zero bytes should be the same as a call to free.
468 Some people think it should. Otherwise, since this malloc
469 returns a unique pointer for malloc(0), so does realloc(p, 0).
473 /* #define REALLOC_ZERO_BYTES_FREES */
477 WIN32 causes an emulation of sbrk to be compiled in
478 mmap-based options are not currently supported in WIN32.
481 /* #define WIN32 */
482 #ifdef WIN32
483 #define MORECORE wsbrk
484 #define HAVE_MMAP 0
485 #endif
489 HAVE_MEMCPY should be defined if you are not otherwise using
490 ANSI STD C, but still have memcpy and memset in your C library
491 and want to use them in calloc and realloc. Otherwise simple
492 macro versions are defined here.
494 USE_MEMCPY should be defined as 1 if you actually want to
495 have memset and memcpy called. People report that the macro
496 versions are often enough faster than libc versions on many
497 systems that it is better to use them.
501 #define HAVE_MEMCPY
503 /* Although the original macro is called USE_MEMCPY, newlib actually
504 uses memmove to handle cases whereby a platform's memcpy implementation
505 copies backwards and thus destructive overlap may occur in realloc
506 whereby we are reclaiming free memory prior to the old allocation. */
507 #ifndef USE_MEMCPY
508 #ifdef HAVE_MEMCPY
509 #define USE_MEMCPY 1
510 #else
511 #define USE_MEMCPY 0
512 #endif
513 #endif
515 #if (__STD_C || defined(HAVE_MEMCPY))
517 #if __STD_C
518 void* memset(void*, int, size_t);
519 void* memcpy(void*, const void*, size_t);
520 void* memmove(void*, const void*, size_t);
521 #else
522 Void_t* memset();
523 Void_t* memcpy();
524 Void_t* memmove();
525 #endif
526 #endif
528 #if USE_MEMCPY
530 /* The following macros are only invoked with (2n+1)-multiples of
531 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
532 for fast inline execution when n is small. */
534 #define MALLOC_ZERO(charp, nbytes) \
535 do { \
536 INTERNAL_SIZE_T mzsz = (nbytes); \
537 if(mzsz <= 9*sizeof(mzsz)) { \
538 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
539 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
540 *mz++ = 0; \
541 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
542 *mz++ = 0; \
543 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
544 *mz++ = 0; }}} \
545 *mz++ = 0; \
546 *mz++ = 0; \
547 *mz = 0; \
548 } else memset((charp), 0, mzsz); \
549 } while(0)
551 #define MALLOC_COPY(dest,src,nbytes) \
552 do { \
553 INTERNAL_SIZE_T mcsz = (nbytes); \
554 if(mcsz <= 9*sizeof(mcsz)) { \
555 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
556 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
557 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
558 *mcdst++ = *mcsrc++; \
559 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
560 *mcdst++ = *mcsrc++; \
561 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
562 *mcdst++ = *mcsrc++; }}} \
563 *mcdst++ = *mcsrc++; \
564 *mcdst++ = *mcsrc++; \
565 *mcdst = *mcsrc ; \
566 } else memmove(dest, src, mcsz); \
567 } while(0)
569 #else /* !USE_MEMCPY */
571 /* Use Duff's device for good zeroing/copying performance. */
573 #define MALLOC_ZERO(charp, nbytes) \
574 do { \
575 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
576 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
577 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
578 switch (mctmp) { \
579 case 0: for(;;) { *mzp++ = 0; \
580 case 7: *mzp++ = 0; \
581 case 6: *mzp++ = 0; \
582 case 5: *mzp++ = 0; \
583 case 4: *mzp++ = 0; \
584 case 3: *mzp++ = 0; \
585 case 2: *mzp++ = 0; \
586 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
588 } while(0)
590 #define MALLOC_COPY(dest,src,nbytes) \
591 do { \
592 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
593 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
594 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
595 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
596 switch (mctmp) { \
597 case 0: for(;;) { *mcdst++ = *mcsrc++; \
598 case 7: *mcdst++ = *mcsrc++; \
599 case 6: *mcdst++ = *mcsrc++; \
600 case 5: *mcdst++ = *mcsrc++; \
601 case 4: *mcdst++ = *mcsrc++; \
602 case 3: *mcdst++ = *mcsrc++; \
603 case 2: *mcdst++ = *mcsrc++; \
604 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
606 } while(0)
608 #endif
612 Define HAVE_MMAP to optionally make malloc() use mmap() to
613 allocate very large blocks. These will be returned to the
614 operating system immediately after a free().
617 #ifndef HAVE_MMAP
618 #define HAVE_MMAP 1
619 #endif
622 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
623 large blocks. This is currently only possible on Linux with
624 kernel versions newer than 1.3.77.
627 #ifndef HAVE_MREMAP
628 #ifdef INTERNAL_LINUX_C_LIB
629 #define HAVE_MREMAP 1
630 #else
631 #define HAVE_MREMAP 0
632 #endif
633 #endif
635 #if HAVE_MMAP
637 #include <unistd.h>
638 #include <fcntl.h>
639 #include <sys/mman.h>
641 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
642 #define MAP_ANONYMOUS MAP_ANON
643 #endif
645 #endif /* HAVE_MMAP */
648 Access to system page size. To the extent possible, this malloc
649 manages memory from the system in page-size units.
651 The following mechanics for getpagesize were adapted from
652 bsd/gnu getpagesize.h
655 #ifndef LACKS_UNISTD_H
656 # include <unistd.h>
657 #endif
659 #ifndef malloc_getpagesize
660 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
661 # ifndef _SC_PAGE_SIZE
662 # define _SC_PAGE_SIZE _SC_PAGESIZE
663 # endif
664 # endif
665 # ifdef _SC_PAGE_SIZE
666 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
667 # else
668 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
669 extern size_t getpagesize();
670 # define malloc_getpagesize getpagesize()
671 # else
672 # include <sys/param.h>
673 # ifdef EXEC_PAGESIZE
674 # define malloc_getpagesize EXEC_PAGESIZE
675 # else
676 # ifdef NBPG
677 # ifndef CLSIZE
678 # define malloc_getpagesize NBPG
679 # else
680 # define malloc_getpagesize (NBPG * CLSIZE)
681 # endif
682 # else
683 # ifdef NBPC
684 # define malloc_getpagesize NBPC
685 # else
686 # ifdef PAGESIZE
687 # define malloc_getpagesize PAGESIZE
688 # else
689 # define malloc_getpagesize (4096) /* just guess */
690 # endif
691 # endif
692 # endif
693 # endif
694 # endif
695 # endif
696 #endif
702 This version of malloc supports the standard SVID/XPG mallinfo
703 routine that returns a struct containing the same kind of
704 information you can get from malloc_stats. It should work on
705 any SVID/XPG compliant system that has a /usr/include/malloc.h
706 defining struct mallinfo. (If you'd like to install such a thing
707 yourself, cut out the preliminary declarations as described above
708 and below and save them in a malloc.h file. But there's no
709 compelling reason to bother to do this.)
711 The main declaration needed is the mallinfo struct that is returned
712 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
713 bunch of fields, most of which are not even meaningful in this
714 version of malloc. Some of these fields are instead filled by
715 mallinfo() with other numbers that might possibly be of interest.
717 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
718 /usr/include/malloc.h file that includes a declaration of struct
719 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
720 version is declared below. These must be precisely the same for
721 mallinfo() to work.
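A minimal sketch of calling mallinfo() and reading a few of the meaningful
fields (assuming the struct declared just below, whose fields are size_t
rather than the traditional int):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
  void* p = malloc(10000);
  struct mallinfo mi = mallinfo();
  printf("arena=%lu inuse=%lu free=%lu mmapped=%lu\n",
         (unsigned long) mi.arena,     /* total space obtained from system */
         (unsigned long) mi.uordblks,  /* total allocated space            */
         (unsigned long) mi.fordblks,  /* total non-inuse space            */
         (unsigned long) mi.hblkhd);   /* space in mmapped regions         */
  free(p);
  return 0;
}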
725 /* #define HAVE_USR_INCLUDE_MALLOC_H */
727 #if HAVE_USR_INCLUDE_MALLOC_H
728 #include "/usr/include/malloc.h"
729 #else
731 /* SVID2/XPG mallinfo structure */
733 struct mallinfo {
734 size_t arena; /* total space allocated from system */
735 size_t ordblks; /* number of non-inuse chunks */
736 size_t smblks; /* unused -- always zero */
737 size_t hblks; /* number of mmapped regions */
738 size_t hblkhd; /* total space in mmapped regions */
739 size_t usmblks; /* unused -- always zero */
740 size_t fsmblks; /* unused -- always zero */
741 size_t uordblks; /* total allocated space */
742 size_t fordblks; /* total non-inuse space */
743 size_t keepcost; /* top-most, releasable (via malloc_trim) space */
744 };
746 /* SVID2/XPG mallopt options */
748 #define M_MXFAST 1 /* UNUSED in this malloc */
749 #define M_NLBLKS 2 /* UNUSED in this malloc */
750 #define M_GRAIN 3 /* UNUSED in this malloc */
751 #define M_KEEP 4 /* UNUSED in this malloc */
753 #endif
755 /* mallopt options that actually do something */
757 #define M_TRIM_THRESHOLD -1
758 #define M_TOP_PAD -2
759 #define M_MMAP_THRESHOLD -3
760 #define M_MMAP_MAX -4
764 #ifndef DEFAULT_TRIM_THRESHOLD
765 #define DEFAULT_TRIM_THRESHOLD (128L * 1024L)
766 #endif
769 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
770 to keep before releasing via malloc_trim in free().
772 Automatic trimming is mainly useful in long-lived programs.
773 Because trimming via sbrk can be slow on some systems, and can
774 sometimes be wasteful (in cases where programs immediately
775 afterward allocate more large chunks) the value should be high
776 enough so that your overall system performance would improve by
777 releasing.
779 The trim threshold and the mmap control parameters (see below)
780 can be traded off with one another. Trimming and mmapping are
781 two different ways of releasing unused memory back to the
782 system. Between these two, it is often possible to keep
783 system-level demands of a long-lived program down to a bare
784 minimum. For example, in one test suite of sessions measuring
785 the XF86 X server on Linux, using a trim threshold of 128K and a
786 mmap threshold of 192K led to near-minimal long term resource
787 consumption.
789 If you are using this malloc in a long-lived program, it should
790 pay to experiment with these values. As a rough guide, you
791 might set to a value close to the average size of a process
792 (program) running on your system. Releasing this much memory
793 would allow such a process to run in memory. Generally, it's
794 worth it to tune for trimming rather than memory mapping when a
795 program undergoes phases where several large chunks are
796 allocated and released in ways that can reuse each other's
797 storage, perhaps mixed with phases where there are no such
798 chunks at all. And in well-behaved long-lived programs,
799 controlling release of large blocks via trimming versus mapping
800 is usually faster.
802 However, in most programs, these parameters serve mainly as
803 protection against the system-level effects of carrying around
804 massive amounts of unneeded memory. Since frequent calls to
805 sbrk, mmap, and munmap otherwise degrade performance, the default
806 parameters are set to relatively high values that serve only as
807 safeguards.
809 The default trim value is high enough to cause trimming only in
810 fairly extreme (by current memory consumption standards) cases.
811 It must be greater than page size to have any useful effect. To
812 disable trimming completely, you can set to (unsigned long)(-1);
818 #ifndef DEFAULT_TOP_PAD
819 #define DEFAULT_TOP_PAD (0)
820 #endif
823 M_TOP_PAD is the amount of extra `padding' space to allocate or
824 retain whenever sbrk is called. It is used in two ways internally:
826 * When sbrk is called to extend the top of the arena to satisfy
827 a new malloc request, this much padding is added to the sbrk
828 request.
830 * When malloc_trim is called automatically from free(),
831 it is used as the `pad' argument.
833 In both cases, the actual amount of padding is rounded
834 so that the end of the arena is always a system page boundary.
836 The main reason for using padding is to avoid calling sbrk so
837 often. Having even a small pad greatly reduces the likelihood
838 that nearly every malloc request during program start-up (or
839 after trimming) will invoke sbrk, which needlessly wastes
840 time.
842 Automatic rounding-up to page-size units is normally sufficient
843 to avoid measurable overhead, so the default is 0. However, in
844 systems where sbrk is relatively slow, it can pay to increase
845 this value, at the expense of carrying around more memory than
846 the program needs.
851 #ifndef DEFAULT_MMAP_THRESHOLD
852 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
853 #endif
857 M_MMAP_THRESHOLD is the request size threshold for using mmap()
858 to service a request. Requests of at least this size that cannot
859 be allocated using already-existing space will be serviced via mmap.
860 (If enough normal freed space already exists it is used instead.)
862 Using mmap segregates relatively large chunks of memory so that
863 they can be individually obtained and released from the host
864 system. A request serviced through mmap is never reused by any
865 other request (at least not directly; the system may just so
866 happen to remap successive requests to the same locations).
868 Segregating space in this way has the benefit that mmapped space
869 can ALWAYS be individually released back to the system, which
870 helps keep the system level memory demands of a long-lived
871 program low. Mapped memory can never become `locked' between
872 other chunks, as can happen with normally allocated chunks, which
873 means that even trimming via malloc_trim would not release them.
875 However, it has the disadvantages that:
877 1. The space cannot be reclaimed, consolidated, and then
878 used to service later requests, as happens with normal chunks.
879 2. It can lead to more wastage because of mmap page alignment
880 requirements
881 3. It causes malloc performance to be more dependent on host
882 system memory management support routines which may vary in
883 implementation quality and may impose arbitrary
884 limitations. Generally, servicing a request via normal
885 malloc steps is faster than going through a system's mmap.
887 All together, these considerations should lead you to use mmap
888 only for relatively large requests.
895 #ifndef DEFAULT_MMAP_MAX
896 #if HAVE_MMAP
897 #define DEFAULT_MMAP_MAX (64)
898 #else
899 #define DEFAULT_MMAP_MAX (0)
900 #endif
901 #endif
904 M_MMAP_MAX is the maximum number of requests to simultaneously
905 service using mmap. This parameter exists because:
907 1. Some systems have a limited number of internal tables for
908 use by mmap.
909 2. In most systems, overreliance on mmap can degrade overall
910 performance.
911 3. If a program allocates many large regions, it is probably
912 better off using normal sbrk-based allocation routines that
913 can reclaim and reallocate normal heap memory. Using a
914 small value allows transition into this mode after the
915 first few allocations.
917 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
918 the default value is 0, and attempts to set it to non-zero values
919 in mallopt will fail.
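A sketch of adjusting these parameters at run time through mallopt(), using
the option names defined above (the values shown are just one example of the
trade-offs discussed in this section):

#include <malloc.h>

void tune_allocator (void)
{
  mallopt (M_TRIM_THRESHOLD, 128 * 1024); /* trim when >128K is free on top */
  mallopt (M_TOP_PAD,         64 * 1024); /* pad each sbrk request by 64K   */
  mallopt (M_MMAP_THRESHOLD, 192 * 1024); /* mmap requests of 192K and up   */
  mallopt (M_MMAP_MAX,       32);         /* at most 32 concurrent mmaps    */
}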
927 Special defines for linux libc
929 Except when compiled using these special defines for Linux libc
930 using weak aliases, this malloc is NOT designed to work in
931 multithreaded applications. No semaphores or other concurrency
932 control are provided to ensure that multiple malloc or free calls
933 don't run at the same time, which could be disastrous. A single
934 semaphore could be used across malloc, realloc, and free (which is
935 essentially the effect of the linux weak alias approach). It would
936 be hard to obtain finer granularity.
941 #ifdef INTERNAL_LINUX_C_LIB
943 #if __STD_C
945 Void_t * __default_morecore_init (ptrdiff_t);
946 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
948 #else
950 Void_t * __default_morecore_init ();
951 Void_t *(*__morecore)() = __default_morecore_init;
953 #endif
955 #define MORECORE (*__morecore)
956 #define MORECORE_FAILURE 0
957 #define MORECORE_CLEARS 1
959 #else /* INTERNAL_LINUX_C_LIB */
961 #ifndef _LIBC
962 #if __STD_C
963 extern Void_t* sbrk(ptrdiff_t);
964 #else
965 extern Void_t* sbrk();
966 #endif
967 #endif
969 #ifndef MORECORE
970 #define MORECORE sbrk
971 #endif
973 #ifndef MORECORE_FAILURE
974 #define MORECORE_FAILURE -1
975 #endif
977 #ifndef MORECORE_CLEARS
978 #define MORECORE_CLEARS 1
979 #endif
981 #endif /* INTERNAL_LINUX_C_LIB */
983 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
985 #define cALLOc __libc_calloc
986 #define fREe __libc_free
987 #define mALLOc __libc_malloc
988 #define mEMALIGn __libc_memalign
989 #define rEALLOc __libc_realloc
990 #define vALLOc __libc_valloc
991 #define pvALLOc __libc_pvalloc
992 #define mALLINFo __libc_mallinfo
993 #define mALLOPt __libc_mallopt
995 #pragma weak calloc = __libc_calloc
996 #pragma weak free = __libc_free
997 #pragma weak cfree = __libc_free
998 #pragma weak malloc = __libc_malloc
999 #pragma weak memalign = __libc_memalign
1000 #pragma weak realloc = __libc_realloc
1001 #pragma weak valloc = __libc_valloc
1002 #pragma weak pvalloc = __libc_pvalloc
1003 #pragma weak mallinfo = __libc_mallinfo
1004 #pragma weak mallopt = __libc_mallopt
1006 #else
1008 #ifdef _LIBC
1010 #define cALLOc _calloc_r
1011 #define fREe _free_r
1012 #define mALLOc _malloc_r
1013 #define mEMALIGn _memalign_r
1014 #define rEALLOc _realloc_r
1015 #define vALLOc _valloc_r
1016 #define pvALLOc _pvalloc_r
1017 #define mALLINFo _mallinfo_r
1018 #define mALLOPt _mallopt_r
1020 #define malloc_stats _malloc_stats_r
1021 #define malloc_trim _malloc_trim_r
1022 #define malloc_usable_size _malloc_usable_size_r
1024 #define malloc_update_mallinfo __malloc_update_mallinfo
1026 #define malloc_av_ __malloc_av_
1027 #define malloc_current_mallinfo __malloc_current_mallinfo
1028 #define malloc_max_sbrked_mem __malloc_max_sbrked_mem
1029 #define malloc_max_total_mem __malloc_max_total_mem
1030 #define malloc_sbrk_base __malloc_sbrk_base
1031 #define malloc_top_pad __malloc_top_pad
1032 #define malloc_trim_threshold __malloc_trim_threshold
1034 #else /* ! _LIBC */
1036 #define cALLOc calloc
1037 #define fREe free
1038 #define mALLOc malloc
1039 #define mEMALIGn memalign
1040 #define rEALLOc realloc
1041 #define vALLOc valloc
1042 #define pvALLOc pvalloc
1043 #define mALLINFo mallinfo
1044 #define mALLOPt mallopt
1046 #endif /* ! _LIBC */
1047 #endif
1049 /* Public routines */
1051 #if __STD_C
1053 Void_t* mALLOc(RARG size_t);
1054 void fREe(RARG Void_t*);
1055 Void_t* rEALLOc(RARG Void_t*, size_t);
1056 Void_t* mEMALIGn(RARG size_t, size_t);
1057 Void_t* vALLOc(RARG size_t);
1058 Void_t* pvALLOc(RARG size_t);
1059 Void_t* cALLOc(RARG size_t, size_t);
1060 void cfree(Void_t*);
1061 int malloc_trim(RARG size_t);
1062 size_t malloc_usable_size(RARG Void_t*);
1063 void malloc_stats(RONEARG);
1064 int mALLOPt(RARG int, int);
1065 struct mallinfo mALLINFo(RONEARG);
1066 #else
1067 Void_t* mALLOc();
1068 void fREe();
1069 Void_t* rEALLOc();
1070 Void_t* mEMALIGn();
1071 Void_t* vALLOc();
1072 Void_t* pvALLOc();
1073 Void_t* cALLOc();
1074 void cfree();
1075 int malloc_trim();
1076 size_t malloc_usable_size();
1077 void malloc_stats();
1078 int mALLOPt();
1079 struct mallinfo mALLINFo();
1080 #endif
1083 #ifdef __cplusplus
1084 }; /* end of extern "C" */
1085 #endif
1087 /* ---------- To make a malloc.h, end cutting here ------------ */
1091 Emulation of sbrk for WIN32
1092 All code within the ifdef WIN32 is untested by me.
1096 #ifdef WIN32
1098 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
1099 ~(malloc_getpagesize-1))
1101 /* reserve 64MB to ensure large contiguous space */
1102 #define RESERVED_SIZE (1024*1024*64)
1103 #define NEXT_SIZE (2048*1024)
1104 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
1106 struct GmListElement;
1107 typedef struct GmListElement GmListElement;
1109 struct GmListElement
1110 {
1111 GmListElement* next;
1112 void* base;
1113 };
1115 static GmListElement* head = 0;
1116 static unsigned int gNextAddress = 0;
1117 static unsigned int gAddressBase = 0;
1118 static unsigned int gAllocatedSize = 0;
1120 static
1121 GmListElement* makeGmListElement (void* bas)
1123 GmListElement* this;
1124 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
1125 ASSERT (this);
1126 if (this)
1128 this->base = bas;
1129 this->next = head;
1130 head = this;
1132 return this;
1135 void gcleanup ()
1137 BOOL rval;
1138 ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
1139 if (gAddressBase && (gNextAddress - gAddressBase))
1141 rval = VirtualFree ((void*)gAddressBase,
1142 gNextAddress - gAddressBase,
1143 MEM_DECOMMIT);
1144 ASSERT (rval);
1146 while (head)
1148 GmListElement* next = head->next;
1149 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1150 ASSERT (rval);
1151 LocalFree (head);
1152 head = next;
1156 static
1157 void* findRegion (void* start_address, unsigned long size)
1159 MEMORY_BASIC_INFORMATION info;
1160 while ((unsigned long)start_address < TOP_MEMORY)
1162 VirtualQuery (start_address, &info, sizeof (info));
1163 if (info.State != MEM_FREE)
1164 start_address = (char*)info.BaseAddress + info.RegionSize;
1165 else if (info.RegionSize >= size)
1166 return start_address;
1167 else
1168 start_address = (char*)info.BaseAddress + info.RegionSize;
1170 return NULL;
1175 void* wsbrk (long size)
1177 void* tmp;
1178 if (size > 0)
1180 if (gAddressBase == 0)
1182 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1183 gNextAddress = gAddressBase =
1184 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1185 MEM_RESERVE, PAGE_NOACCESS);
1186 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1187 gAllocatedSize))
1189 long new_size = max (NEXT_SIZE, AlignPage (size));
1190 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1193 new_address = findRegion (new_address, new_size);
1195 if (new_address == 0)
1196 return (void*)-1;
1198 gAddressBase = gNextAddress =
1199 (unsigned int)VirtualAlloc (new_address, new_size,
1200 MEM_RESERVE, PAGE_NOACCESS);
1201 // repeat in case of race condition
1202 // The region that we found has been snagged
1203 // by another thread
1205 while (gAddressBase == 0);
1207 ASSERT (new_address == (void*)gAddressBase);
1209 gAllocatedSize = new_size;
1211 if (!makeGmListElement ((void*)gAddressBase))
1212 return (void*)-1;
1214 if ((size + gNextAddress) > AlignPage (gNextAddress))
1216 void* res;
1217 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1218 (size + gNextAddress -
1219 AlignPage (gNextAddress)),
1220 MEM_COMMIT, PAGE_READWRITE);
1221 if (res == 0)
1222 return (void*)-1;
1224 tmp = (void*)gNextAddress;
1225 gNextAddress = (unsigned int)tmp + size;
1226 return tmp;
1228 else if (size < 0)
1230 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1231 /* Trim by releasing the virtual memory */
1232 if (alignedGoal >= gAddressBase)
1234 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1235 MEM_DECOMMIT);
1236 gNextAddress = gNextAddress + size;
1237 return (void*)gNextAddress;
1239 else
1241 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1242 MEM_DECOMMIT);
1243 gNextAddress = gAddressBase;
1244 return (void*)-1;
1247 else
1249 return (void*)gNextAddress;
1253 #endif
1258 Type declarations
1262 struct malloc_chunk
1263 {
1264 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1265 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1266 struct malloc_chunk* fd; /* double links -- used only if free. */
1267 struct malloc_chunk* bk;
1268 };
1270 typedef struct malloc_chunk* mchunkptr;
1274 malloc_chunk details:
1276 (The following includes lightly edited explanations by Colin Plumb.)
1278 Chunks of memory are maintained using a `boundary tag' method as
1279 described in e.g., Knuth or Standish. (See the paper by Paul
1280 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1281 survey of such techniques.) Sizes of free chunks are stored both
1282 in the front of each chunk and at the end. This makes
1283 consolidating fragmented chunks into bigger chunks very fast. The
1284 size fields also hold bits representing whether chunks are free or
1285 in use.
1287 An allocated chunk looks like this:
1290 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1291 | Size of previous chunk, if allocated | |
1292 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1293 | Size of chunk, in bytes |P|
1294 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1295 | User data starts here... .
1297 . (malloc_usable_size() bytes) .
1299 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1300 | Size of chunk |
1301 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1304 Where "chunk" is the front of the chunk for the purpose of most of
1305 the malloc code, but "mem" is the pointer that is returned to the
1306 user. "Nextchunk" is the beginning of the next contiguous chunk.
1308 Chunks always begin on even word boundaries, so the mem portion
1309 (which is returned to the user) is also on an even word boundary, and
1310 thus double-word aligned.
1312 Free chunks are stored in circular doubly-linked lists, and look like this:
1314 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1315 | Size of previous chunk |
1316 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1317 `head:' | Size of chunk, in bytes |P|
1318 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1319 | Forward pointer to next chunk in list |
1320 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1321 | Back pointer to previous chunk in list |
1322 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1323 | Unused space (may be 0 bytes long) .
1326 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1327 `foot:' | Size of chunk, in bytes |
1328 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1330 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1331 chunk size (which is always a multiple of two words), is an in-use
1332 bit for the *previous* chunk. If that bit is *clear*, then the
1333 word before the current chunk size contains the previous chunk
1334 size, and can be used to find the front of the previous chunk.
1335 (The very first chunk allocated always has this bit set,
1336 preventing access to non-existent (or non-owned) memory.)
1338 Note that the `foot' of the current chunk is actually represented
1339 as the prev_size of the NEXT chunk. (This makes it easier to
1340 deal with alignments etc).
1342 The two exceptions to all this are
1344 1. The special chunk `top', which doesn't bother using the
1345 trailing size field since there is no
1346 next contiguous chunk that would have to index off it. (After
1347 initialization, `top' is forced to always exist. If it would
1348 become less than MINSIZE bytes long, it is replenished via
1349 malloc_extend_top.)
1351 2. Chunks allocated via mmap, which have the second-lowest-order
1352 bit (IS_MMAPPED) set in their size fields. Because they are
1353 never merged or traversed from any other chunk, they have no
1354 foot size or inuse information.
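In terms of the macros defined further below, a debugging sketch that walks
from a user pointer back to its chunk and neighbors might look like this
(illustrative only, not part of this file; valid only for non-mmapped chunks,
as noted above):

static void show_chunk (Void_t* mem)
{
  mchunkptr p = mem2chunk (mem);           /* back up from user pointer to header   */
  INTERNAL_SIZE_T sz = chunksize (p);      /* size with PREV_INUSE/IS_MMAPPED off   */
  mchunkptr nxt = chunk_at_offset (p, sz); /* next physical chunk starts sz bytes on */

  if (!prev_inuse (p))                     /* P bit clear: preceding chunk is free  */
    {
      mchunkptr prv = prev_chunk (p);      /* so p->prev_size (our `foot') is valid */
      assert (next_chunk (prv) == p);
    }
  assert (nxt == next_chunk (p));
}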
1356 Available chunks are kept in any of several places (all declared below):
1358 * `av': An array of chunks serving as bin headers for consolidated
1359 chunks. Each bin is doubly linked. The bins are approximately
1360 proportionally (log) spaced. There are a lot of these bins
1361 (128). This may look excessive, but works very well in
1362 practice. All procedures maintain the invariant that no
1363 consolidated chunk physically borders another one. Chunks in
1364 bins are kept in size order, with ties going to the
1365 approximately least recently used chunk.
1367 The chunks in each bin are maintained in decreasing sorted order by
1368 size. This is irrelevant for the small bins, which all contain
1369 the same-sized chunks, but facilitates best-fit allocation for
1370 larger chunks. (These lists are just sequential. Keeping them in
1371 order almost never requires enough traversal to warrant using
1372 fancier ordered data structures.) Chunks of the same size are
1373 linked with the most recently freed at the front, and allocations
1374 are taken from the back. This results in LRU or FIFO allocation
1375 order, which tends to give each chunk an equal opportunity to be
1376 consolidated with adjacent freed chunks, resulting in larger free
1377 chunks and less fragmentation.
1379 * `top': The top-most available chunk (i.e., the one bordering the
1380 end of available memory) is treated specially. It is never
1381 included in any bin, is used only if no other chunk is
1382 available, and is released back to the system if it is very
1383 large (see M_TRIM_THRESHOLD).
1385 * `last_remainder': A bin holding only the remainder of the
1386 most recently split (non-top) chunk. This bin is checked
1387 before other non-fitting chunks, so as to provide better
1388 locality for runs of sequentially allocated chunks.
1390 * Implicitly, through the host system's memory mapping tables.
1391 If supported, requests greater than a threshold are usually
1392 serviced via calls to mmap, and then later released via munmap.
1401 /* sizes, alignments */
1403 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1404 #ifndef MALLOC_ALIGNMENT
1405 #define MALLOC_ALIGN 8
1406 #define MALLOC_ALIGNMENT (SIZE_SZ < 4 ? 8 : (SIZE_SZ + SIZE_SZ))
1407 #else
1408 #define MALLOC_ALIGN MALLOC_ALIGNMENT
1409 #endif
1410 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1411 #define MINSIZE (sizeof(struct malloc_chunk))
1413 /* conversion from malloc headers to user pointers, and back */
1415 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1416 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1418 /* pad request bytes into a usable size */
1420 #define request2size(req) \
1421 (((unsigned long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1422 (unsigned long)(MINSIZE + MALLOC_ALIGN_MASK)) ? ((MINSIZE + MALLOC_ALIGN_MASK) & ~(MALLOC_ALIGN_MASK)) : \
1423 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
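For example, with 4-byte SIZE_SZ, 8-byte alignment and a 16-byte MINSIZE,
this expands as follows:

/* request2size( 1): 1 + 4 + 7 = 12 < 23, so the 16-byte minimum is used
   request2size(13): (13 + 4 + 7) & ~7 = 24
   request2size(25): (25 + 4 + 7) & ~7 = 32                               */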
1425 /* Check if m has acceptable alignment */
1427 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1433 Physical chunk operations
1437 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1439 #define PREV_INUSE 0x1
1441 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1443 #define IS_MMAPPED 0x2
1445 /* Bits to mask off when extracting size */
1447 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1450 /* Ptr to next physical malloc_chunk. */
1452 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1454 /* Ptr to previous physical malloc_chunk */
1456 #define prev_chunk(p)\
1457 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1460 /* Treat space at ptr + offset as a chunk */
1462 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1468 Dealing with use bits
1471 /* extract p's inuse bit */
1473 #define inuse(p)\
1474 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1476 /* extract inuse bit of previous chunk */
1478 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1480 /* check for mmap()'ed chunk */
1482 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1484 /* set/clear chunk as in use without otherwise disturbing */
1486 #define set_inuse(p)\
1487 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1489 #define clear_inuse(p)\
1490 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1492 /* check/set/clear inuse bits in known places */
1494 #define inuse_bit_at_offset(p, s)\
1495 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1497 #define set_inuse_bit_at_offset(p, s)\
1498 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1500 #define clear_inuse_bit_at_offset(p, s)\
1501 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1507 Dealing with size fields
1510 /* Get size, ignoring use bits */
1512 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1514 /* Set size at head, without disturbing its use bit */
1516 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1518 /* Set size/use ignoring previous bits in header */
1520 #define set_head(p, s) ((p)->size = (s))
1522 /* Set size at footer (only when chunk is not in use) */
1524 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1531 Bins
1533 The bins, `av_' are an array of pairs of pointers serving as the
1534 heads of (initially empty) doubly-linked lists of chunks, laid out
1535 in a way so that each pair can be treated as if it were in a
1536 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1537 and chunks are the same).
1539 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1540 8 bytes apart. Larger bins are approximately logarithmically
1541 spaced. (See the table below.) The `av_' array is never mentioned
1542 directly in the code, but instead via bin access macros.
1544 Bin layout:
1546 64 bins of size 8
1547 32 bins of size 64
1548 16 bins of size 512
1549 8 bins of size 4096
1550 4 bins of size 32768
1551 2 bins of size 262144
1552 1 bin of size what's left
1554 There is actually a little bit of slop in the numbers in bin_index
1555 for the sake of speed. This makes no difference elsewhere.
1557 The special chunks `top' and `last_remainder' get their own bins,
1558 (this is implemented via yet more trickery with the av_ array),
1559 although `top' is never properly linked to its bin since it is
1560 always handled specially.
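A few worked examples of this layout, in terms of the bin_index macro
defined below:

/* chunk size   40 -> below 512, so bin 40>>3  = bin   5 (exact-fit small bin)
   chunk size  600 -> bin 56 + (600>>6)        = bin  65
   chunk size 5000 -> bin 91 + (5000>>9)       = bin 100                       */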
1564 #ifdef SEPARATE_OBJECTS
1565 #define av_ malloc_av_
1566 #endif
1568 #define NAV 128 /* number of bins */
1570 typedef struct malloc_chunk* mbinptr;
1572 /* access macros */
1574 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1575 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1576 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1579 The first 2 bins are never indexed. The corresponding av_ cells are instead
1580 used for bookkeeping. This is not to save space, but to simplify
1581 indexing, maintain locality, and avoid some initialization tests.
1584 #define top (bin_at(0)->fd) /* The topmost chunk */
1585 #define last_remainder (bin_at(1)) /* remainder from last split */
1589 Because top initially points to its own bin with initial
1590 zero size, thus forcing extension on the first malloc request,
1591 we avoid having any special code in malloc to check whether
1592 it even exists yet. But we still need to do so in malloc_extend_top.
1595 #define initial_top ((mchunkptr)(bin_at(0)))
1597 /* Helper macro to initialize bins */
1599 #define IAV(i) bin_at(i), bin_at(i)
1601 #ifdef DEFINE_MALLOC
1602 STATIC mbinptr av_[NAV * 2 + 2] = {
1603 0, 0,
1604 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1605 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1606 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1607 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1608 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1609 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1610 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1611 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1612 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1613 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1614 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1615 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1616 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1617 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1618 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1619 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1620 };
1621 #else
1622 extern mbinptr av_[NAV * 2 + 2];
1623 #endif
1627 /* field-extraction macros */
1629 #define first(b) ((b)->fd)
1630 #define last(b) ((b)->bk)
1633 Indexing into bins
1636 #define bin_index(sz) \
1637 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1638 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1639 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1640 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1641 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1642 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1643 126)
1645 bins for chunks < 512 are all spaced SMALLBIN_WIDTH bytes apart, and hold
1646 identically sized chunks. This is exploited in malloc.
1649 #define MAX_SMALLBIN_SIZE 512
1650 #define SMALLBIN_WIDTH 8
1651 #define SMALLBIN_WIDTH_BITS 3
1652 #define MAX_SMALLBIN (MAX_SMALLBIN_SIZE / SMALLBIN_WIDTH) - 1
1654 #define smallbin_index(sz) (((unsigned long)(sz)) >> SMALLBIN_WIDTH_BITS)
1657 Requests are `small' if both the corresponding and the next bin are small
1660 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1665 To help compensate for the large number of bins, a one-level index
1666 structure is used for bin-by-bin searching. `binblocks' is a
1667 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1668 have any (possibly) non-empty bins, so they can be skipped over
1669 all at once during traversals. The bits are NOT always
1670 cleared as soon as all bins in a block are empty, but instead only
1671 when all are noticed to be empty during traversal in malloc.
1674 #define BINBLOCKWIDTH 4 /* bins per block */
1676 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1678 /* bin<->block macros */
1680 #define idx2binblock(ix) ((unsigned long)1 << (ix / BINBLOCKWIDTH))
1681 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1682 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
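For instance, with BINBLOCKWIDTH of 4, bin 65 lives in block 65/4 = 16, so:

/* mark_binblock(65)  sets bit (1UL << 16) in binblocks
   clear_binblock(65) clears that same bit, which covers bins 64..67 at once */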
1688 /* Other static bookkeeping data */
1690 #ifdef SEPARATE_OBJECTS
1691 #define trim_threshold malloc_trim_threshold
1692 #define top_pad malloc_top_pad
1693 #define n_mmaps_max malloc_n_mmaps_max
1694 #define mmap_threshold malloc_mmap_threshold
1695 #define sbrk_base malloc_sbrk_base
1696 #define max_sbrked_mem malloc_max_sbrked_mem
1697 #define max_total_mem malloc_max_total_mem
1698 #define current_mallinfo malloc_current_mallinfo
1699 #define n_mmaps malloc_n_mmaps
1700 #define max_n_mmaps malloc_max_n_mmaps
1701 #define mmapped_mem malloc_mmapped_mem
1702 #define max_mmapped_mem malloc_max_mmapped_mem
1703 #endif
1705 /* variables holding tunable values */
1707 #ifdef DEFINE_MALLOC
1709 STATIC unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1710 STATIC unsigned long top_pad = DEFAULT_TOP_PAD;
1711 #if HAVE_MMAP
1712 STATIC unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1713 STATIC unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1714 #endif
1716 /* The first value returned from sbrk */
1717 STATIC char* sbrk_base = (char*)(-1);
1719 /* The maximum memory obtained from system via sbrk */
1720 STATIC unsigned long max_sbrked_mem = 0;
1722 /* The maximum via either sbrk or mmap */
1723 STATIC unsigned long max_total_mem = 0;
1725 /* internal working copy of mallinfo */
1726 STATIC struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1728 #if HAVE_MMAP
1730 /* Tracking mmaps */
1732 STATIC unsigned int n_mmaps = 0;
1733 STATIC unsigned int max_n_mmaps = 0;
1734 STATIC unsigned long mmapped_mem = 0;
1735 STATIC unsigned long max_mmapped_mem = 0;
1737 #endif
1739 #else /* ! DEFINE_MALLOC */
1741 extern unsigned long trim_threshold;
1742 extern unsigned long top_pad;
1743 #if HAVE_MMAP
1744 extern unsigned int n_mmaps_max;
1745 extern unsigned long mmap_threshold;
1746 #endif
1747 extern char* sbrk_base;
1748 extern unsigned long max_sbrked_mem;
1749 extern unsigned long max_total_mem;
1750 extern struct mallinfo current_mallinfo;
1751 #if HAVE_MMAP
1752 extern unsigned int n_mmaps;
1753 extern unsigned int max_n_mmaps;
1754 extern unsigned long mmapped_mem;
1755 extern unsigned long max_mmapped_mem;
1756 #endif
1758 #endif /* ! DEFINE_MALLOC */
1760 /* The total memory obtained from system via sbrk */
1761 #define sbrked_mem (current_mallinfo.arena)
1766 Debugging support
1769 #if DEBUG
1773 These routines make a number of assertions about the states
1774 of data structures that should be true at all times. If any
1775 are not true, it's very likely that a user program has somehow
1776 trashed memory. (It's also possible that there is a coding error
1777 in malloc. In which case, please report it!)
1780 #if __STD_C
1781 static void do_check_chunk(mchunkptr p)
1782 #else
1783 static void do_check_chunk(p) mchunkptr p;
1784 #endif
1786 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1788 /* No checkable chunk is mmapped */
1789 assert(!chunk_is_mmapped(p));
1791 /* Check for legal address ... */
1792 assert((char*)p >= sbrk_base);
1793 if (p != top)
1794 assert((char*)p + sz <= (char*)top);
1795 else
1796 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1801 #if __STD_C
1802 static void do_check_free_chunk(mchunkptr p)
1803 #else
1804 static void do_check_free_chunk(p) mchunkptr p;
1805 #endif
1807 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1808 mchunkptr next = chunk_at_offset(p, sz);
1810 do_check_chunk(p);
1812 /* Check whether it claims to be free ... */
1813 assert(!inuse(p));
1815 /* Unless a special marker, must have OK fields */
1816 if ((long)sz >= (long)MINSIZE)
1818 assert((sz & MALLOC_ALIGN_MASK) == 0);
1819 assert(aligned_OK(chunk2mem(p)));
1820 /* ... matching footer field */
1821 assert(next->prev_size == sz);
1822 /* ... and is fully consolidated */
1823 assert(prev_inuse(p));
1824 assert (next == top || inuse(next));
1826 /* ... and has minimally sane links */
1827 assert(p->fd->bk == p);
1828 assert(p->bk->fd == p);
1830 else /* markers are always of size SIZE_SZ */
1831 assert(sz == SIZE_SZ);
1834 #if __STD_C
1835 static void do_check_inuse_chunk(mchunkptr p)
1836 #else
1837 static void do_check_inuse_chunk(p) mchunkptr p;
1838 #endif
1840 mchunkptr next = next_chunk(p);
1841 do_check_chunk(p);
1843 /* Check whether it claims to be in use ... */
1844 assert(inuse(p));
1846 /* ... and is surrounded by OK chunks.
1847 Since more things can be checked with free chunks than inuse ones,
1848 if an inuse chunk borders them and debug is on, it's worth doing them.
1850 if (!prev_inuse(p))
1852 mchunkptr prv = prev_chunk(p);
1853 assert(next_chunk(prv) == p);
1854 do_check_free_chunk(prv);
1856 if (next == top)
1858 assert(prev_inuse(next));
1859 assert(chunksize(next) >= MINSIZE);
1861 else if (!inuse(next))
1862 do_check_free_chunk(next);
1866 #if __STD_C
1867 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1868 #else
1869 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1870 #endif
1872 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1873 long room = long_sub_size_t(sz, s);
1875 do_check_inuse_chunk(p);
1877 /* Legal size ... */
1878 assert((long)sz >= (long)MINSIZE);
1879 assert((sz & MALLOC_ALIGN_MASK) == 0);
1880 assert(room >= 0);
1881 assert(room < (long)MINSIZE);
1883 /* ... and alignment */
1884 assert(aligned_OK(chunk2mem(p)));
1887 /* ... and was allocated at front of an available chunk */
1888 assert(prev_inuse(p));
1893 #define check_free_chunk(P) do_check_free_chunk(P)
1894 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1895 #define check_chunk(P) do_check_chunk(P)
1896 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1897 #else
1898 #define check_free_chunk(P)
1899 #define check_inuse_chunk(P)
1900 #define check_chunk(P)
1901 #define check_malloced_chunk(P,N)
1902 #endif
1907 Macro-based internal utilities
1912 Linking chunks in bin lists.
1913 Call these only with variables, not arbitrary expressions, as arguments.
1917 Place chunk p of size s in its bin, in size order,
1918 putting it ahead of others of same size.
1922 #define frontlink(P, S, IDX, BK, FD) \
1924 if (S < MAX_SMALLBIN_SIZE) \
1926 IDX = smallbin_index(S); \
1927 mark_binblock(IDX); \
1928 BK = bin_at(IDX); \
1929 FD = BK->fd; \
1930 P->bk = BK; \
1931 P->fd = FD; \
1932 FD->bk = BK->fd = P; \
1934 else \
1936 IDX = bin_index(S); \
1937 BK = bin_at(IDX); \
1938 FD = BK->fd; \
1939 if (FD == BK) mark_binblock(IDX); \
1940 else \
1942 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1943 BK = FD->bk; \
1945 P->bk = BK; \
1946 P->fd = FD; \
1947 FD->bk = BK->fd = P; \
1952 /* take a chunk off a list */
1954 #define unlink(P, BK, FD) \
1956 BK = P->bk; \
1957 FD = P->fd; \
1958 FD->bk = BK; \
1959 BK->fd = FD; \
1962 /* Place p as the last remainder */
1964 #define link_last_remainder(P) \
1966 last_remainder->fd = last_remainder->bk = P; \
1967 P->fd = P->bk = last_remainder; \
1970 /* Clear the last_remainder bin */
1972 #define clear_last_remainder \
1973 (last_remainder->fd = last_remainder->bk = last_remainder)
1980 /* Routines dealing with mmap(). */
1982 #if HAVE_MMAP
1984 #ifdef DEFINE_MALLOC
1986 #if __STD_C
1987 static mchunkptr mmap_chunk(size_t size)
1988 #else
1989 static mchunkptr mmap_chunk(size) size_t size;
1990 #endif
1992 size_t page_mask = malloc_getpagesize - 1;
1993 mchunkptr p;
1995 #ifndef MAP_ANONYMOUS
1996 static int fd = -1;
1997 #endif
1999 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
2001 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
2002 * there is no following chunk whose prev_size field could be used.
2004 size = (size + SIZE_SZ + page_mask) & ~page_mask;
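  /* For example, assuming SIZE_SZ == 4 and a 4096-byte page size, a padded
     request of 100000 bytes becomes 100004 and is rounded up to 102400,
     i.e. 25 whole pages. */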
2006 #ifdef MAP_ANONYMOUS
2007 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
2008 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
2009 #else /* !MAP_ANONYMOUS */
2010 if (fd < 0)
2012 fd = open("/dev/zero", O_RDWR);
2013 if(fd < 0) return 0;
2015 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
2016 #endif
2018 if(p == (mchunkptr)-1) return 0;
2020 n_mmaps++;
2021 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
2023 /* We demand that eight bytes into a page must be 8-byte aligned. */
2024 assert(aligned_OK(chunk2mem(p)));
2026 /* The offset to the start of the mmapped region is stored
2027 * in the prev_size field of the chunk; normally it is zero,
2028 * but that can be changed in memalign().
2030 p->prev_size = 0;
2031 set_head(p, size|IS_MMAPPED);
2033 mmapped_mem += size;
2034 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2035 max_mmapped_mem = mmapped_mem;
2036 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2037 max_total_mem = mmapped_mem + sbrked_mem;
2038 return p;
2041 #endif /* DEFINE_MALLOC */
2043 #ifdef SEPARATE_OBJECTS
2044 #define munmap_chunk malloc_munmap_chunk
2045 #endif
2047 #ifdef DEFINE_FREE
2049 #if __STD_C
2050 STATIC void munmap_chunk(mchunkptr p)
2051 #else
2052 STATIC void munmap_chunk(p) mchunkptr p;
2053 #endif
2055 INTERNAL_SIZE_T size = chunksize(p);
2056 int ret;
2058 assert (chunk_is_mmapped(p));
2059 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2060 assert((n_mmaps > 0));
2061 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
2063 n_mmaps--;
2064 mmapped_mem -= (size + p->prev_size);
2066 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
2068 /* munmap returns non-zero on failure */
2069 assert(ret == 0);
2072 #else /* ! DEFINE_FREE */
2074 #if __STD_C
2075 extern void munmap_chunk(mchunkptr);
2076 #else
2077 extern void munmap_chunk();
2078 #endif
2080 #endif /* ! DEFINE_FREE */
2082 #if HAVE_MREMAP
2084 #ifdef DEFINE_REALLOC
2086 #if __STD_C
2087 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
2088 #else
2089 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
2090 #endif
2092 size_t page_mask = malloc_getpagesize - 1;
2093 INTERNAL_SIZE_T offset = p->prev_size;
2094 INTERNAL_SIZE_T size = chunksize(p);
2095 char *cp;
2097 assert (chunk_is_mmapped(p));
2098 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
2099 assert((n_mmaps > 0));
2100 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
2102 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
2103 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
2105 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
2107 if (cp == (char *)-1) return 0;
2109 p = (mchunkptr)(cp + offset);
2111 assert(aligned_OK(chunk2mem(p)));
2113 assert((p->prev_size == offset));
2114 set_head(p, (new_size - offset)|IS_MMAPPED);
2116 mmapped_mem -= size + offset;
2117 mmapped_mem += new_size;
2118 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
2119 max_mmapped_mem = mmapped_mem;
2120 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2121 max_total_mem = mmapped_mem + sbrked_mem;
2122 return p;
2125 #endif /* DEFINE_REALLOC */
2127 #endif /* HAVE_MREMAP */
2129 #endif /* HAVE_MMAP */
2134 #ifdef DEFINE_MALLOC
2137 Extend the top-most chunk by obtaining memory from system.
2138 Main interface to sbrk (but see also malloc_trim).
2141 #if __STD_C
2142 static void malloc_extend_top(RARG INTERNAL_SIZE_T nb)
2143 #else
2144 static void malloc_extend_top(RARG nb) RDECL INTERNAL_SIZE_T nb;
2145 #endif
2147 char* brk; /* return value from sbrk */
2148 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
2149 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2150 int correction_failed = 0; /* whether we should relax the assertion */
2151 char* new_brk; /* return of 2nd sbrk call */
2152 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2154 mchunkptr old_top = top; /* Record state of old top */
2155 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2156 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2158 /* Pad request with top_pad plus minimal overhead */
2160 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2161 unsigned long pagesz = malloc_getpagesize;
2163 /* If not the first time through, round to preserve page boundary */
2164 /* Otherwise, we need to correct to a page size below anyway. */
2165 /* (We also correct below if an intervening foreign sbrk call occurred.) */
2167 if (sbrk_base != (char*)(-1))
2168 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2170 brk = (char*)(MORECORE (sbrk_size));
2172 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2173 if (brk == (char*)(MORECORE_FAILURE) ||
2174 (brk < old_end && old_top != initial_top))
2175 return;
2177 sbrked_mem += sbrk_size;
2179 if (brk == old_end /* can just add bytes to current top, unless
2180 previous correction failed */
2181 && ((POINTER_UINT)old_end & (pagesz - 1)) == 0)
2183 top_size = sbrk_size + old_top_size;
2184 set_head(top, top_size | PREV_INUSE);
2186 else
2188 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2189 sbrk_base = brk;
2190 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2191 sbrked_mem += brk - (char*)old_end;
2193 /* Guarantee alignment of first new chunk made from this space */
2194 front_misalign = (POINTER_UINT)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2195 if (front_misalign > 0)
2197 correction = (MALLOC_ALIGNMENT) - front_misalign;
2198 brk += correction;
2200 else
2201 correction = 0;
2203 /* Guarantee the next brk will be at a page boundary */
2204 correction += pagesz - ((POINTER_UINT)(brk + sbrk_size) & (pagesz - 1));
2206 /* To guarantee page boundary, correction should be less than pagesz */
2207 correction &= (pagesz - 1);
2209 /* Allocate correction */
2210 new_brk = (char*)(MORECORE (correction));
2211 if (new_brk == (char*)(MORECORE_FAILURE))
2213 correction = 0;
2214 correction_failed = 1;
2215 new_brk = brk + sbrk_size;
2216 if (front_misalign > 0)
2217 new_brk -= (MALLOC_ALIGNMENT) - front_misalign;
2220 sbrked_mem += correction;
2222 top = (mchunkptr)brk;
2223 top_size = new_brk - brk + correction;
2224 set_head(top, top_size | PREV_INUSE);
2226 if (old_top != initial_top)
2229 /* There must have been an intervening foreign sbrk call. */
2230 /* A double fencepost is necessary to prevent consolidation */
2232 /* If not enough space to do this, then user did something very wrong */
2233 if (old_top_size < MINSIZE)
2235 set_head(top, PREV_INUSE); /* will force null return from malloc */
2236 return;
2239 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2240 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2241 set_head_size(old_top, old_top_size);
2242 chunk_at_offset(old_top, old_top_size )->size =
2243 SIZE_SZ|PREV_INUSE;
2244 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2245 SIZE_SZ|PREV_INUSE;
2246 /* If possible, release the rest. */
2247 if (old_top_size >= MINSIZE)
2248 fREe(RCALL chunk2mem(old_top));
2252 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2253 max_sbrked_mem = sbrked_mem;
2254 #if HAVE_MMAP
2255 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2256 max_total_mem = mmapped_mem + sbrked_mem;
2257 #else
2258 if ((unsigned long)(sbrked_mem) > (unsigned long)max_total_mem)
2259 max_total_mem = sbrked_mem;
2260 #endif
2262 /* We always land on a page boundary */
2263 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0
2264 || correction_failed);
2267 #endif /* DEFINE_MALLOC */
2270 /* Main public routines */
2272 #ifdef DEFINE_MALLOC
2275 Malloc Algorithm:
2277 The requested size is first converted into a usable form, `nb'.
2278 This currently means to add 4 bytes overhead plus possibly more to
2279 obtain 8-byte alignment and/or to obtain a size of at least
2280 MINSIZE (currently 16 bytes), the smallest allocatable size.
2281 (All fits are considered `exact' if they are within MINSIZE bytes.)
2283 From there, the first of the following steps that succeeds is taken:
2285 1. The bin corresponding to the request size is scanned, and if
2286 a chunk of exactly the right size is found, it is taken.
2288 2. The most recently remaindered chunk is used if it is big
2289 enough. This is a form of (roving) first fit, used only in
2290 the absence of exact fits. Runs of consecutive requests use
2291 the remainder of the chunk used for the previous such request
2292 whenever possible. This limited use of a first-fit style
2293 allocation strategy tends to give contiguous chunks
2294 coextensive lifetimes, which improves locality and can reduce
2295 fragmentation in the long run.
2297 3. Other bins are scanned in increasing size order, using a
2298 chunk big enough to fulfill the request, and splitting off
2299 any remainder. This search is strictly by best-fit; i.e.,
2300 the smallest (with ties going to approximately the least
2301 recently used) chunk that fits is selected.
2303 4. If large enough, the chunk bordering the end of memory
2304 (`top') is split off. (This use of `top' is in accord with
2305 the best-fit search rule. In effect, `top' is treated as
2306 larger (and thus less well fitting) than any other available
2307 chunk since it can be extended to be as large as necessary
2308 (up to system limitations).)
2310 5. If the request size meets the mmap threshold and the
2311 system supports mmap, and there are few enough currently
2312 allocated mmapped regions, and a call to mmap succeeds,
2313 the request is allocated via direct memory mapping.
2315 6. Otherwise, the top of memory is extended by
2316 obtaining more space from the system (normally using sbrk,
2317 but definable to anything else via the MORECORE macro).
2318 Memory is gathered from the system (in system page-sized
2319 units) in a way that allows chunks obtained across different
2320 sbrk calls to be consolidated, but does not require
2321 contiguous memory. Thus, it should be safe to intersperse
2322 mallocs with other sbrk calls.
2325 All allocations are made from the `lowest' part of any found
2326 chunk. (The implementation invariant is that prev_inuse is
2327 always true of any allocated chunk; i.e., that each allocated
2328 chunk borders either a previously allocated and still in-use chunk,
2329 or the base of its memory arena.)
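/*
  A quick worked example of the size normalization described above, assuming
  the 4-byte overhead, 8-byte alignment and MINSIZE == 16 stated there:

      malloc(1)   ->  nb = 16    (bumped up to MINSIZE)
      malloc(20)  ->  nb = 24    (20 + 4 overhead, already 8-byte aligned)
      malloc(25)  ->  nb = 32    (25 + 4 = 29, rounded up to a multiple of 8)

  Since fits within MINSIZE of nb count as exact, a free chunk of 40 bytes
  satisfies a request with nb == 32 without being split.
*/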
2333 #if __STD_C
2334 Void_t* mALLOc(RARG size_t bytes)
2335 #else
2336 Void_t* mALLOc(RARG bytes) RDECL size_t bytes;
2337 #endif
2339 #ifdef MALLOC_PROVIDED
2341 return malloc (bytes); // Make sure that the pointer returned by malloc is returned back.
2343 #else
2345 mchunkptr victim; /* inspected/selected chunk */
2346 INTERNAL_SIZE_T victim_size; /* its size */
2347 int idx; /* index for bin traversal */
2348 mbinptr bin; /* associated bin */
2349 mchunkptr remainder; /* remainder from a split */
2350 long remainder_size; /* its size */
2351 int remainder_index; /* its bin index */
2352 unsigned long block; /* block traverser bit */
2353 int startidx; /* first bin of a traversed block */
2354 mchunkptr fwd; /* misc temp for linking */
2355 mchunkptr bck; /* misc temp for linking */
2356 mbinptr q; /* misc temp */
2358 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2360 /* Check for overflow and just fail, if so. */
2361 if (nb > INT_MAX || nb < bytes)
2363 RERRNO = ENOMEM;
2364 return 0;
2367 MALLOC_LOCK;
2369 /* Check for exact match in a bin */
2371 if (is_small_request(nb)) /* Faster version for small requests */
2373 idx = smallbin_index(nb);
2375 /* No traversal or size check necessary for small bins. */
2377 q = bin_at(idx);
2378 victim = last(q);
2380 #if MALLOC_ALIGN != 16
2381 /* Also scan the next one, since it would have a remainder < MINSIZE */
2382 if (victim == q)
2384 q = next_bin(q);
2385 victim = last(q);
2387 #endif
2388 if (victim != q)
2390 victim_size = chunksize(victim);
2391 unlink(victim, bck, fwd);
2392 set_inuse_bit_at_offset(victim, victim_size);
2393 check_malloced_chunk(victim, nb);
2394 MALLOC_UNLOCK;
2395 return chunk2mem(victim);
2398 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2401 else
2403 idx = bin_index(nb);
2404 bin = bin_at(idx);
2406 for (victim = last(bin); victim != bin; victim = victim->bk)
2408 victim_size = chunksize(victim);
2409 remainder_size = long_sub_size_t(victim_size, nb);
2411 if (remainder_size >= (long)MINSIZE) /* too big */
2413 --idx; /* adjust to rescan below after checking last remainder */
2414 break;
2417 else if (remainder_size >= 0) /* exact fit */
2419 unlink(victim, bck, fwd);
2420 set_inuse_bit_at_offset(victim, victim_size);
2421 check_malloced_chunk(victim, nb);
2422 MALLOC_UNLOCK;
2423 return chunk2mem(victim);
2427 ++idx;
2431 /* Try to use the last split-off remainder */
2433 if ( (victim = last_remainder->fd) != last_remainder)
2435 victim_size = chunksize(victim);
2436 remainder_size = long_sub_size_t(victim_size, nb);
2438 if (remainder_size >= (long)MINSIZE) /* re-split */
2440 remainder = chunk_at_offset(victim, nb);
2441 set_head(victim, nb | PREV_INUSE);
2442 link_last_remainder(remainder);
2443 set_head(remainder, remainder_size | PREV_INUSE);
2444 set_foot(remainder, remainder_size);
2445 check_malloced_chunk(victim, nb);
2446 MALLOC_UNLOCK;
2447 return chunk2mem(victim);
2450 clear_last_remainder;
2452 if (remainder_size >= 0) /* exhaust */
2454 set_inuse_bit_at_offset(victim, victim_size);
2455 check_malloced_chunk(victim, nb);
2456 MALLOC_UNLOCK;
2457 return chunk2mem(victim);
2460 /* Else place in bin */
2462 frontlink(victim, victim_size, remainder_index, bck, fwd);
2466 If there are any possibly nonempty big-enough blocks,
2467 search for best fitting chunk by scanning bins in blockwidth units.
2470 if ( (block = idx2binblock(idx)) <= binblocks)
2473 /* Get to the first marked block */
2475 if ( (block & binblocks) == 0)
2477 /* force to an even block boundary */
2478 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2479 block <<= 1;
2480 while ((block & binblocks) == 0)
2482 idx += BINBLOCKWIDTH;
2483 block <<= 1;
2487 /* For each possibly nonempty block ... */
2488 for (;;)
2490 startidx = idx; /* (track incomplete blocks) */
2491 q = bin = bin_at(idx);
2493 /* For each bin in this block ... */
2496 /* Find and use first big enough chunk ... */
2498 for (victim = last(bin); victim != bin; victim = victim->bk)
2500 victim_size = chunksize(victim);
2501 remainder_size = long_sub_size_t(victim_size, nb);
2503 if (remainder_size >= (long)MINSIZE) /* split */
2505 remainder = chunk_at_offset(victim, nb);
2506 set_head(victim, nb | PREV_INUSE);
2507 unlink(victim, bck, fwd);
2508 link_last_remainder(remainder);
2509 set_head(remainder, remainder_size | PREV_INUSE);
2510 set_foot(remainder, remainder_size);
2511 check_malloced_chunk(victim, nb);
2512 MALLOC_UNLOCK;
2513 return chunk2mem(victim);
2516 else if (remainder_size >= 0) /* take */
2518 set_inuse_bit_at_offset(victim, victim_size);
2519 unlink(victim, bck, fwd);
2520 check_malloced_chunk(victim, nb);
2521 MALLOC_UNLOCK;
2522 return chunk2mem(victim);
2527 bin = next_bin(bin);
2529 #if MALLOC_ALIGN == 16
2530 if (idx < MAX_SMALLBIN)
2532 bin = next_bin(bin);
2533 ++idx;
2535 #endif
2536 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2538 /* Clear out the block bit. */
2540 do /* Possibly backtrack to try to clear a partial block */
2542 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2544 binblocks &= ~block;
2545 break;
2547 --startidx;
2548 q = prev_bin(q);
2549 } while (first(q) == q);
2551 /* Get to the next possibly nonempty block */
2553 if ( (block <<= 1) <= binblocks && (block != 0) )
2555 while ((block & binblocks) == 0)
2557 idx += BINBLOCKWIDTH;
2558 block <<= 1;
2561 else
2562 break;
2567 /* Try to use top chunk */
2569 /* Require that there be a remainder, ensuring top always exists */
2570 remainder_size = long_sub_size_t(chunksize(top), nb);
2571 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2574 #if HAVE_MMAP
2575 /* If big and would otherwise need to extend, try to use mmap instead */
2576 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2577 (victim = mmap_chunk(nb)) != 0)
2579 MALLOC_UNLOCK;
2580 return chunk2mem(victim);
2582 #endif
2584 /* Try to extend */
2585 malloc_extend_top(RCALL nb);
2586 remainder_size = long_sub_size_t(chunksize(top), nb);
2587 if (chunksize(top) < nb || remainder_size < (long)MINSIZE)
2589 MALLOC_UNLOCK;
2590 return 0; /* propagate failure */
2594 victim = top;
2595 set_head(victim, nb | PREV_INUSE);
2596 top = chunk_at_offset(victim, nb);
2597 set_head(top, remainder_size | PREV_INUSE);
2598 check_malloced_chunk(victim, nb);
2599 MALLOC_UNLOCK;
2600 return chunk2mem(victim);
2602 #endif /* MALLOC_PROVIDED */
2605 #endif /* DEFINE_MALLOC */
2607 #ifdef DEFINE_FREE
2611 free() algorithm :
2613 cases:
2615 1. free(0) has no effect.
2617 2. If the chunk was allocated via mmap, it is released via munmap().
2619 3. If a returned chunk borders the current high end of memory,
2620 it is consolidated into the top, and if the total unused
2621 topmost memory exceeds the trim threshold, malloc_trim is
2622 called.
2624 4. Other chunks are consolidated as they arrive, and
2625 placed in corresponding bins. (This includes the case of
2626 consolidating with the current `last_remainder').
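/*
  Illustrative usage (a sketch, not part of the allocator):

      void *p = malloc (100);
      free (p);          cases 3/4: merged into top or placed in a bin
      free (NULL);       case 1: no effect
*/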
2631 #if __STD_C
2632 void fREe(RARG Void_t* mem)
2633 #else
2634 void fREe(RARG mem) RDECL Void_t* mem;
2635 #endif
2637 #ifdef MALLOC_PROVIDED
2639 free (mem);
2641 #else
2643 mchunkptr p; /* chunk corresponding to mem */
2644 INTERNAL_SIZE_T hd; /* its head field */
2645 INTERNAL_SIZE_T sz; /* its size */
2646 int idx; /* its bin index */
2647 mchunkptr next; /* next contiguous chunk */
2648 INTERNAL_SIZE_T nextsz; /* its size */
2649 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2650 mchunkptr bck; /* misc temp for linking */
2651 mchunkptr fwd; /* misc temp for linking */
2652 int islr; /* track whether merging with last_remainder */
2654 if (mem == 0) /* free(0) has no effect */
2655 return;
2657 MALLOC_LOCK;
2659 p = mem2chunk(mem);
2660 hd = p->size;
2662 #if HAVE_MMAP
2663 if (hd & IS_MMAPPED) /* release mmapped memory. */
2665 munmap_chunk(p);
2666 MALLOC_UNLOCK;
2667 return;
2669 #endif
2671 check_inuse_chunk(p);
2673 sz = hd & ~PREV_INUSE;
2674 next = chunk_at_offset(p, sz);
2675 nextsz = chunksize(next);
2677 if (next == top) /* merge with top */
2679 sz += nextsz;
2681 if (!(hd & PREV_INUSE)) /* consolidate backward */
2683 prevsz = p->prev_size;
2684 p = chunk_at_offset(p, -prevsz);
2685 sz += prevsz;
2686 unlink(p, bck, fwd);
2689 set_head(p, sz | PREV_INUSE);
2690 top = p;
2691 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2692 malloc_trim(RCALL top_pad);
2693 MALLOC_UNLOCK;
2694 return;
2697 set_head(next, nextsz); /* clear inuse bit */
2699 islr = 0;
2701 if (!(hd & PREV_INUSE)) /* consolidate backward */
2703 prevsz = p->prev_size;
2704 p = chunk_at_offset(p, -prevsz);
2705 sz += prevsz;
2707 if (p->fd == last_remainder) /* keep as last_remainder */
2708 islr = 1;
2709 else
2710 unlink(p, bck, fwd);
2713 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2715 sz += nextsz;
2717 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2719 islr = 1;
2720 link_last_remainder(p);
2722 else
2723 unlink(next, bck, fwd);
2727 set_head(p, sz | PREV_INUSE);
2728 set_foot(p, sz);
2729 if (!islr)
2730 frontlink(p, sz, idx, bck, fwd);
2732 MALLOC_UNLOCK;
2734 #endif /* MALLOC_PROVIDED */
2737 #endif /* DEFINE_FREE */
2739 #ifdef DEFINE_REALLOC
2743 Realloc algorithm:
2745 Chunks that were obtained via mmap cannot be extended or shrunk
2746 unless HAVE_MREMAP is defined, in which case mremap is used.
2747 Otherwise, if their reallocation is for additional space, they are
2748 copied. If for less, they are just left alone.
2750 Otherwise, if the reallocation is for additional space, and the
2751 chunk can be extended, it is, else a malloc-copy-free sequence is
2752 taken. There are several different ways that a chunk could be
2753 extended. All are tried:
2755 * Extending forward into following adjacent free chunk.
2756 * Shifting backwards, joining preceding adjacent space
2757 * Both shifting backwards and extending forward.
2758 * Extending into newly sbrked space
2760 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2761 size argument of zero (re)allocates a minimum-sized chunk.
2763 If the reallocation is for less space, and the new request is for
2764 a `small' (<512 bytes) size, then the newly unused space is lopped
2765 off and freed.
2767 The old unix realloc convention of allowing the last-free'd chunk
2768 to be used as an argument to realloc is no longer supported.
2769 I don't know of any programs still relying on this feature,
2770 and allowing it would also allow too many other incorrect
2771 usages of realloc to be sensible.
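/*
  Illustrative usage (a sketch): on failure realloc returns 0 and leaves the
  original block untouched, so keep the old pointer until the call succeeds.

      char *buf = malloc (64);
      char *tmp = realloc (buf, 4096);
      if (tmp == NULL)
          free (buf);        grow failed; the 64-byte block is still valid
      else
          buf = tmp;         may or may not equal the old pointer
*/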
2777 #if __STD_C
2778 Void_t* rEALLOc(RARG Void_t* oldmem, size_t bytes)
2779 #else
2780 Void_t* rEALLOc(RARG oldmem, bytes) RDECL Void_t* oldmem; size_t bytes;
2781 #endif
2783 #ifdef MALLOC_PROVIDED
2785 return realloc (oldmem, bytes); // Make sure that the pointer returned by realloc is returned back.
2787 #else
2789 INTERNAL_SIZE_T nb; /* padded request size */
2791 mchunkptr oldp; /* chunk corresponding to oldmem */
2792 INTERNAL_SIZE_T oldsize; /* its size */
2794 mchunkptr newp; /* chunk to return */
2795 INTERNAL_SIZE_T newsize; /* its size */
2796 Void_t* newmem; /* corresponding user mem */
2798 mchunkptr next; /* next contiguous chunk after oldp */
2799 INTERNAL_SIZE_T nextsize; /* its size */
2801 mchunkptr prev; /* previous contiguous chunk before oldp */
2802 INTERNAL_SIZE_T prevsize; /* its size */
2804 mchunkptr remainder; /* holds split off extra space from newp */
2805 INTERNAL_SIZE_T remainder_size; /* its size */
2807 mchunkptr bck; /* misc temp for linking */
2808 mchunkptr fwd; /* misc temp for linking */
2810 #ifdef REALLOC_ZERO_BYTES_FREES
2811 if (bytes == 0) { fREe(RCALL oldmem); return 0; }
2812 #endif
2815 /* realloc of null is supposed to be same as malloc */
2816 if (oldmem == 0) return mALLOc(RCALL bytes);
2818 MALLOC_LOCK;
2820 newp = oldp = mem2chunk(oldmem);
2821 newsize = oldsize = chunksize(oldp);
2824 nb = request2size(bytes);
2826 /* Check for overflow and just fail, if so. */
2827 if (nb > INT_MAX || nb < bytes)
2829 RERRNO = ENOMEM;
2830 return 0;
2833 #if HAVE_MMAP
2834 if (chunk_is_mmapped(oldp))
2836 #if HAVE_MREMAP
2837 newp = mremap_chunk(oldp, nb);
2838 if(newp)
2840 MALLOC_UNLOCK;
2841 return chunk2mem(newp);
2843 #endif
2844 /* Note the extra SIZE_SZ overhead. */
2845 if(oldsize - SIZE_SZ >= nb)
2847 MALLOC_UNLOCK;
2848 return oldmem; /* do nothing */
2850 /* Must alloc, copy, free. */
2851 newmem = mALLOc(RCALL bytes);
2852 if (newmem == 0)
2854 MALLOC_UNLOCK;
2855 return 0; /* propagate failure */
2857 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2858 munmap_chunk(oldp);
2859 MALLOC_UNLOCK;
2860 return newmem;
2862 #endif
2864 check_inuse_chunk(oldp);
2866 if ((long)(oldsize) < (long)(nb))
2869 /* Try expanding forward */
2871 next = chunk_at_offset(oldp, oldsize);
2872 if (next == top || !inuse(next))
2874 nextsize = chunksize(next);
2876 /* Forward into top only if a remainder */
2877 if (next == top)
2879 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2881 newsize += nextsize;
2882 top = chunk_at_offset(oldp, nb);
2883 set_head(top, (newsize - nb) | PREV_INUSE);
2884 set_head_size(oldp, nb);
2885 MALLOC_UNLOCK;
2886 return chunk2mem(oldp);
2890 /* Forward into next chunk */
2891 else if (((long)(nextsize + newsize) >= (long)(nb)))
2893 unlink(next, bck, fwd);
2894 newsize += nextsize;
2895 goto split;
2898 else
2900 next = 0;
2901 nextsize = 0;
2904 /* Try shifting backwards. */
2906 if (!prev_inuse(oldp))
2908 prev = prev_chunk(oldp);
2909 prevsize = chunksize(prev);
2911 /* try forward + backward first to save a later consolidation */
2913 if (next != 0)
2915 /* into top */
2916 if (next == top)
2918 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2920 unlink(prev, bck, fwd);
2921 newp = prev;
2922 newsize += prevsize + nextsize;
2923 newmem = chunk2mem(newp);
2924 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2925 top = chunk_at_offset(newp, nb);
2926 set_head(top, (newsize - nb) | PREV_INUSE);
2927 set_head_size(newp, nb);
2928 MALLOC_UNLOCK;
2929 return newmem;
2933 /* into next chunk */
2934 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2936 unlink(next, bck, fwd);
2937 unlink(prev, bck, fwd);
2938 newp = prev;
2939 newsize += nextsize + prevsize;
2940 newmem = chunk2mem(newp);
2941 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2942 goto split;
2946 /* backward only */
2947 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2949 unlink(prev, bck, fwd);
2950 newp = prev;
2951 newsize += prevsize;
2952 newmem = chunk2mem(newp);
2953 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2954 goto split;
2958 /* Must allocate */
2960 newmem = mALLOc (RCALL bytes);
2962 if (newmem == 0) /* propagate failure */
2964 MALLOC_UNLOCK;
2965 return 0;
2968 /* Avoid copy if newp is next chunk after oldp. */
2969 /* (This can only happen when new chunk is sbrk'ed.) */
2971 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2973 newsize += chunksize(newp);
2974 newp = oldp;
2975 goto split;
2978 /* Otherwise copy, free, and exit */
2979 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2980 fREe(RCALL oldmem);
2981 MALLOC_UNLOCK;
2982 return newmem;
2986 split: /* split off extra room in old or expanded chunk */
2988 remainder_size = long_sub_size_t(newsize, nb);
2990 if (remainder_size >= (long)MINSIZE) /* split off remainder */
2992 remainder = chunk_at_offset(newp, nb);
2993 set_head_size(newp, nb);
2994 set_head(remainder, remainder_size | PREV_INUSE);
2995 set_inuse_bit_at_offset(remainder, remainder_size);
2996 fREe(RCALL chunk2mem(remainder)); /* let free() deal with it */
2998 else
3000 set_head_size(newp, newsize);
3001 set_inuse_bit_at_offset(newp, newsize);
3004 check_inuse_chunk(newp);
3005 MALLOC_UNLOCK;
3006 return chunk2mem(newp);
3008 #endif /* MALLOC_PROVIDED */
3011 #endif /* DEFINE_REALLOC */
3013 #ifdef DEFINE_MEMALIGN
3017 memalign algorithm:
3019 memalign requests more than enough space from malloc, finds a spot
3020 within that chunk that meets the alignment request, and then
3021 possibly frees the leading and trailing space.
3023 The alignment argument must be a power of two. This property is not
3024 checked by memalign, so misuse may result in random runtime errors.
3026 8-byte alignment is guaranteed by normal malloc calls, so don't
3027 bother calling memalign with an argument of 8 or less.
3029 Overreliance on memalign is a sure way to fragment space.
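/*
  Illustrative usage (a sketch): the alignment must be a power of two and
  need only be used when it exceeds the default 8-byte alignment.

      void *dma = memalign (64, 1000);    64-byte aligned, >= 1000 usable bytes
      void *bad = memalign (48, 1000);    48 is not a power of two: not
                                          checked, misuse risks runtime errors

  Release the result with free() as usual.
*/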
3034 #if __STD_C
3035 Void_t* mEMALIGn(RARG size_t alignment, size_t bytes)
3036 #else
3037 Void_t* mEMALIGn(RARG alignment, bytes) RDECL size_t alignment; size_t bytes;
3038 #endif
3040 INTERNAL_SIZE_T nb; /* padded request size */
3041 char* m; /* memory returned by malloc call */
3042 mchunkptr p; /* corresponding chunk */
3043 char* brk; /* alignment point within p */
3044 mchunkptr newp; /* chunk to return */
3045 INTERNAL_SIZE_T newsize; /* its size */
3046 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */
3047 mchunkptr remainder; /* spare room at end to split off */
3048 long remainder_size; /* its size */
3050 /* If need less alignment than we give anyway, just relay to malloc */
3052 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(RCALL bytes);
3054 /* Otherwise, ensure that it is at least a minimum chunk size */
3056 if (alignment < MINSIZE) alignment = MINSIZE;
3058 /* Call malloc with worst case padding to hit alignment. */
3060 nb = request2size(bytes);
3062 /* Check for overflow. */
3063 if (nb > __SIZE_MAX__ - (alignment + MINSIZE) || nb < bytes)
3065 RERRNO = ENOMEM;
3066 return 0;
3069 m = (char*)(mALLOc(RCALL nb + alignment + MINSIZE));
3071 if (m == 0) return 0; /* propagate failure */
3073 MALLOC_LOCK;
3075 p = mem2chunk(m);
3077 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
3079 #if HAVE_MMAP
3080 if(chunk_is_mmapped(p))
3082 MALLOC_UNLOCK;
3083 return chunk2mem(p); /* nothing more to do */
3085 #endif
3087 else /* misaligned */
3090 Find an aligned spot inside chunk.
3091 Since we need to give back leading space in a chunk of at
3092 least MINSIZE, if the first calculation places us at
3093 a spot with less than MINSIZE leader, we can move to the
3094 next aligned spot -- we've allocated enough total room so that
3095 this is always possible.
3098 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
3099 if ((long)(brk - (char*)(p)) < (long)MINSIZE) brk = brk + alignment;
3101 newp = (mchunkptr)brk;
3102 leadsize = brk - (char*)(p);
3103 newsize = chunksize(p) - leadsize;
3105 #if HAVE_MMAP
3106 if(chunk_is_mmapped(p))
3108 newp->prev_size = p->prev_size + leadsize;
3109 set_head(newp, newsize|IS_MMAPPED);
3110 MALLOC_UNLOCK;
3111 return chunk2mem(newp);
3113 #endif
3115 /* give back leader, use the rest */
3117 set_head(newp, newsize | PREV_INUSE);
3118 set_inuse_bit_at_offset(newp, newsize);
3119 set_head_size(p, leadsize);
3120 fREe(RCALL chunk2mem(p));
3121 p = newp;
3123 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
3126 /* Also give back spare room at the end */
3128 remainder_size = long_sub_size_t(chunksize(p), nb);
3130 if (remainder_size >= (long)MINSIZE)
3132 remainder = chunk_at_offset(p, nb);
3133 set_head(remainder, remainder_size | PREV_INUSE);
3134 set_head_size(p, nb);
3135 fREe(RCALL chunk2mem(remainder));
3138 check_inuse_chunk(p);
3139 MALLOC_UNLOCK;
3140 return chunk2mem(p);
3144 #endif /* DEFINE_MEMALIGN */
3146 #ifdef DEFINE_VALLOC
3149 valloc just invokes memalign with alignment argument equal
3150 to the page size of the system (or as near to this as can
3151 be figured out from all the includes/defines above.)
3154 #if __STD_C
3155 Void_t* vALLOc(RARG size_t bytes)
3156 #else
3157 Void_t* vALLOc(RARG bytes) RDECL size_t bytes;
3158 #endif
3160 return mEMALIGn (RCALL malloc_getpagesize, bytes);
3163 #endif /* DEFINE_VALLOC */
3165 #ifdef DEFINE_PVALLOC
3168 pvalloc just invokes valloc for the nearest pagesize
3169 that will accommodate the request
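/*
  For example, with a 4096-byte page size, pvalloc(100) behaves like
  valloc(4096) and pvalloc(5000) like valloc(8192): the request is rounded
  up to a whole number of pages and the result is page-aligned.
*/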
3173 #if __STD_C
3174 Void_t* pvALLOc(RARG size_t bytes)
3175 #else
3176 Void_t* pvALLOc(RARG bytes) RDECL size_t bytes;
3177 #endif
3179 size_t pagesize = malloc_getpagesize;
3180 if (bytes > __SIZE_MAX__ - pagesize)
3182 RERRNO = ENOMEM;
3183 return 0;
3185 return mEMALIGn (RCALL pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
3188 #endif /* DEFINE_PVALLOC */
3190 #ifdef DEFINE_CALLOC
3194 calloc calls malloc, then zeroes out the allocated chunk.
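/*
  Illustrative usage (a sketch): the element count and size are multiplied
  with an overflow check, so an overflowing product fails cleanly.

      int  *v    = calloc (1000, sizeof (int));   1000 zero-initialized ints
      void *huge = calloc ((size_t)-1, 16);       product overflows:
                                                   returns 0, errno == ENOMEM
*/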
3198 #if __STD_C
3199 Void_t* cALLOc(RARG size_t n, size_t elem_size)
3200 #else
3201 Void_t* cALLOc(RARG n, elem_size) RDECL size_t n; size_t elem_size;
3202 #endif
3204 mchunkptr p;
3205 INTERNAL_SIZE_T csz;
3207 INTERNAL_SIZE_T sz;
3209 #if MORECORE_CLEARS
3210 mchunkptr oldtop;
3211 INTERNAL_SIZE_T oldtopsize;
3212 #endif
3213 Void_t* mem;
3215 if (__builtin_mul_overflow((INTERNAL_SIZE_T) n, (INTERNAL_SIZE_T) elem_size, &sz))
3217 errno = ENOMEM;
3218 return 0;
3221 /* check if expand_top called, in which case don't need to clear */
3222 #if MORECORE_CLEARS
3223 MALLOC_LOCK;
3224 oldtop = top;
3225 oldtopsize = chunksize(top);
3226 #endif
3228 mem = mALLOc (RCALL sz);
3230 if (mem == 0)
3232 #if MORECORE_CLEARS
3233 MALLOC_UNLOCK;
3234 #endif
3235 return 0;
3237 else
3239 p = mem2chunk(mem);
3241 /* Two optional cases in which clearing not necessary */
3244 #if HAVE_MMAP
3245 if (chunk_is_mmapped(p))
3247 #if MORECORE_CLEARS
3248 MALLOC_UNLOCK;
3249 #endif
3250 return mem;
3252 #endif
3254 csz = chunksize(p);
3256 #if MORECORE_CLEARS
3257 if (p == oldtop && csz > oldtopsize)
3259 /* clear only the bytes from non-freshly-sbrked memory */
3260 csz = oldtopsize;
3262 MALLOC_UNLOCK;
3263 #endif
3265 MALLOC_ZERO(mem, csz - SIZE_SZ);
3266 return mem;
3270 #endif /* DEFINE_CALLOC */
3272 #if defined(DEFINE_CFREE) && !defined(__CYGWIN__)
3276 cfree just calls free. It is needed/defined on some systems
3277 that pair it with calloc, presumably for odd historical reasons.
3281 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3282 #if !defined(_LIBC) || !defined(_REENT_ONLY)
3283 #if __STD_C
3284 void cfree(Void_t *mem)
3285 #else
3286 void cfree(mem) Void_t *mem;
3287 #endif
3289 #ifdef _LIBC
3290 fREe(_REENT, mem);
3291 #else
3292 fREe(mem);
3293 #endif
3295 #endif
3296 #endif
3298 #endif /* DEFINE_CFREE */
3300 #ifdef DEFINE_FREE
3304 Malloc_trim gives memory back to the system (via negative
3305 arguments to sbrk) if there is unused memory at the `high' end of
3306 the malloc pool. You can call this after freeing large blocks of
3307 memory to potentially reduce the system-level memory requirements
3308 of a program. However, it cannot guarantee to reduce memory. Under
3309 some allocation patterns, some large free blocks of memory will be
3310 locked between two used chunks, so they cannot be given back to
3311 the system.
3313 The `pad' argument to malloc_trim represents the amount of free
3314 trailing space to leave untrimmed. If this argument is zero,
3315 only the minimum amount of memory to maintain internal data
3316 structures will be left (one page or less). Non-zero arguments
3317 can be supplied to maintain enough trailing space to service
3318 future expected allocations without having to re-obtain memory
3319 from the system.
3321 Malloc_trim returns 1 if it actually released any memory, else 0.
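/*
  Illustrative usage (a sketch): after releasing a large buffer, ask the
  allocator to hand trailing free space back to the system.

      free (big_buffer);
      if (malloc_trim (0))
          ;    some pages were returned via a negative sbrk
*/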
3325 #if __STD_C
3326 int malloc_trim(RARG size_t pad)
3327 #else
3328 int malloc_trim(RARG pad) RDECL size_t pad;
3329 #endif
3331 long top_size; /* Amount of top-most memory */
3332 long extra; /* Amount to release */
3333 char* current_brk; /* address returned by pre-check sbrk call */
3334 char* new_brk; /* address returned by negative sbrk call */
3336 unsigned long pagesz = malloc_getpagesize;
3338 MALLOC_LOCK;
3340 top_size = chunksize(top);
3341 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3343 if (extra < (long)pagesz) /* Not enough memory to release */
3345 MALLOC_UNLOCK;
3346 return 0;
3349 else
3351 /* Test to make sure no one else called sbrk */
3352 current_brk = (char*)(MORECORE (0));
3353 if (current_brk != (char*)(top) + top_size)
3355 MALLOC_UNLOCK;
3356 return 0; /* Apparently we don't own memory; must fail */
3359 else
3361 new_brk = (char*)(MORECORE (-extra));
3363 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3365 /* Try to figure out what we have */
3366 current_brk = (char*)(MORECORE (0));
3367 top_size = current_brk - (char*)top;
3368 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3370 sbrked_mem = current_brk - sbrk_base;
3371 set_head(top, top_size | PREV_INUSE);
3373 check_chunk(top);
3374 MALLOC_UNLOCK;
3375 return 0;
3378 else
3380 /* Success. Adjust top accordingly. */
3381 set_head(top, (top_size - extra) | PREV_INUSE);
3382 sbrked_mem -= extra;
3383 check_chunk(top);
3384 MALLOC_UNLOCK;
3385 return 1;
3391 #endif /* DEFINE_FREE */
3393 #ifdef DEFINE_MALLOC_USABLE_SIZE
3396 malloc_usable_size:
3398 This routine tells you how many bytes you can actually use in an
3399 allocated chunk, which may be more than you requested (although
3400 often not). You can use this many bytes without worrying about
3401 overwriting other allocated objects. Not a particularly great
3402 programming practice, but still sometimes useful.
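/*
  Illustrative usage (a sketch): the usable size may exceed the request
  because of alignment and minimum-size padding.

      char  *p = malloc (10);
      size_t n = malloc_usable_size (p);    n >= 10; every byte up to n is
                                            safe to use
*/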
3406 #if __STD_C
3407 size_t malloc_usable_size(RARG Void_t* mem)
3408 #else
3409 size_t malloc_usable_size(RARG mem) RDECL Void_t* mem;
3410 #endif
3412 mchunkptr p;
3413 if (mem == 0)
3414 return 0;
3415 else
3417 p = mem2chunk(mem);
3418 if(!chunk_is_mmapped(p))
3420 if (!inuse(p)) return 0;
3421 #if DEBUG
3422 MALLOC_LOCK;
3423 check_inuse_chunk(p);
3424 MALLOC_UNLOCK;
3425 #endif
3426 return chunksize(p) - SIZE_SZ;
3428 return chunksize(p) - 2*SIZE_SZ;
3432 #endif /* DEFINE_MALLOC_USABLE_SIZE */
3434 #ifdef DEFINE_MALLINFO
3436 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3438 STATIC void malloc_update_mallinfo()
3440 int i;
3441 mbinptr b;
3442 mchunkptr p;
3443 #if DEBUG
3444 mchunkptr q;
3445 #endif
3447 INTERNAL_SIZE_T avail = chunksize(top);
3448 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3450 for (i = 1; i < NAV; ++i)
3452 b = bin_at(i);
3453 for (p = last(b); p != b; p = p->bk)
3455 #if DEBUG
3456 check_free_chunk(p);
3457 for (q = next_chunk(p);
3458 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3459 q = next_chunk(q))
3460 check_inuse_chunk(q);
3461 #endif
3462 avail += chunksize(p);
3463 navail++;
3467 current_mallinfo.ordblks = navail;
3468 current_mallinfo.uordblks = sbrked_mem - avail;
3469 current_mallinfo.fordblks = avail;
3470 #if HAVE_MMAP
3471 current_mallinfo.hblks = n_mmaps;
3472 current_mallinfo.hblkhd = mmapped_mem;
3473 #endif
3474 current_mallinfo.keepcost = chunksize(top);
3478 #else /* ! DEFINE_MALLINFO */
3480 #if __STD_C
3481 extern void malloc_update_mallinfo(void);
3482 #else
3483 extern void malloc_update_mallinfo();
3484 #endif
3486 #endif /* ! DEFINE_MALLINFO */
3488 #ifdef DEFINE_MALLOC_STATS
3492 malloc_stats:
3494 Prints on stderr the amount of space obtained from the system (both
3495 via sbrk and mmap), the maximum amount (which may be more than
3496 current if malloc_trim and/or munmap got called), the maximum
3497 number of simultaneous mmap regions used, and the current number
3498 of bytes allocated via malloc (or realloc, etc) but not yet
3499 freed. (Note that this is the number of bytes allocated, not the
3500 number requested. It will be larger than the number requested
3501 because of alignment and bookkeeping overhead.)
3505 #if __STD_C
3506 void malloc_stats(RONEARG)
3507 #else
3508 void malloc_stats(RONEARG) RDECL
3509 #endif
3511 unsigned long local_max_total_mem;
3512 int local_sbrked_mem;
3513 struct mallinfo local_mallinfo;
3514 #if HAVE_MMAP
3515 unsigned long local_mmapped_mem, local_max_n_mmaps;
3516 #endif
3517 FILE *fp;
3519 MALLOC_LOCK;
3520 malloc_update_mallinfo();
3521 local_max_total_mem = max_total_mem;
3522 local_sbrked_mem = sbrked_mem;
3523 local_mallinfo = current_mallinfo;
3524 #if HAVE_MMAP
3525 local_mmapped_mem = mmapped_mem;
3526 local_max_n_mmaps = max_n_mmaps;
3527 #endif
3528 MALLOC_UNLOCK;
3530 #ifdef _LIBC
3531 _REENT_SMALL_CHECK_INIT(reent_ptr);
3532 fp = _stderr_r(reent_ptr);
3533 #define fprintf fiprintf
3534 #else
3535 fp = stderr;
3536 #endif
3538 fprintf(fp, "max system bytes = %10u\n",
3539 (unsigned int)(local_max_total_mem));
3540 #if HAVE_MMAP
3541 fprintf(fp, "system bytes = %10u\n",
3542 (unsigned int)(local_sbrked_mem + local_mmapped_mem));
3543 fprintf(fp, "in use bytes = %10u\n",
3544 (unsigned int)(local_mallinfo.uordblks + local_mmapped_mem));
3545 #else
3546 fprintf(fp, "system bytes = %10u\n",
3547 (unsigned int)local_sbrked_mem);
3548 fprintf(fp, "in use bytes = %10u\n",
3549 (unsigned int)local_mallinfo.uordblks);
3550 #endif
3551 #if HAVE_MMAP
3552 fprintf(fp, "max mmap regions = %10u\n",
3553 (unsigned int)local_max_n_mmaps);
3554 #endif
3557 #endif /* DEFINE_MALLOC_STATS */
3559 #ifdef DEFINE_MALLINFO
3562 mallinfo returns a copy of updated current mallinfo.
3565 #if __STD_C
3566 struct mallinfo mALLINFo(RONEARG)
3567 #else
3568 struct mallinfo mALLINFo(RONEARG) RDECL
3569 #endif
3571 struct mallinfo ret;
3573 MALLOC_LOCK;
3574 malloc_update_mallinfo();
3575 ret = current_mallinfo;
3576 MALLOC_UNLOCK;
3577 return ret;
3580 #endif /* DEFINE_MALLINFO */
3582 #ifdef DEFINE_MALLOPT
3585 mallopt:
3587 mallopt is the general SVID/XPG interface to tunable parameters.
3588 The format is to provide a (parameter-number, parameter-value) pair.
3589 mallopt then sets the corresponding parameter to the argument
3590 value if it can (i.e., so long as the value is meaningful),
3591 and returns 1 if successful else 0.
3593 See descriptions of tunable parameters above.
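/*
  Illustrative usage (a sketch), using the parameters handled below:

      mallopt (M_TRIM_THRESHOLD, 256 * 1024);  trim only when >= 256K is free
      mallopt (M_TOP_PAD, 64 * 1024);          request 64K of sbrk slack
      mallopt (M_MMAP_MAX, 0);                 disable mmap-backed chunks
*/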
3597 #if __STD_C
3598 int mALLOPt(RARG int param_number, int value)
3599 #else
3600 int mALLOPt(RARG param_number, value) RDECL int param_number; int value;
3601 #endif
3603 MALLOC_LOCK;
3604 switch(param_number)
3606 case M_TRIM_THRESHOLD:
3607 trim_threshold = value; MALLOC_UNLOCK; return 1;
3608 case M_TOP_PAD:
3609 top_pad = value; MALLOC_UNLOCK; return 1;
3610 case M_MMAP_THRESHOLD:
3611 #if HAVE_MMAP
3612 mmap_threshold = value;
3613 #endif
3614 MALLOC_UNLOCK;
3615 return 1;
3616 case M_MMAP_MAX:
3617 #if HAVE_MMAP
3618 n_mmaps_max = value; MALLOC_UNLOCK; return 1;
3619 #else
3620 MALLOC_UNLOCK; return value == 0;
3621 #endif
3623 default:
3624 MALLOC_UNLOCK;
3625 return 0;
3629 #endif /* DEFINE_MALLOPT */
3633 History:
3635 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3636 * Fixed ordering problem with boundary-stamping
3638 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3639 * Added pvalloc, as recommended by H.J. Liu
3640 * Added 64bit pointer support mainly from Wolfram Gloger
3641 * Added anonymously donated WIN32 sbrk emulation
3642 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3643 * malloc_extend_top: fix mask error that caused wastage after
3644 foreign sbrks
3645 * Add linux mremap support code from HJ Liu
3647 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3648 * Integrated most documentation with the code.
3649 * Add support for mmap, with help from
3650 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3651 * Use last_remainder in more cases.
3652 * Pack bins using idea from colin@nyx10.cs.du.edu
3653 * Use ordered bins instead of best-fit threshold
3654 * Eliminate block-local decls to simplify tracing and debugging.
3655 * Support another case of realloc via move into top
3656 * Fix error occurring when initial sbrk_base not word-aligned.
3657 * Rely on page size for units instead of SBRK_UNIT to
3658 avoid surprises about sbrk alignment conventions.
3659 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3660 (raymond@es.ele.tue.nl) for the suggestion.
3661 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3662 * More precautions for cases where other routines call sbrk,
3663 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3664 * Added macros etc., allowing use in linux libc from
3665 H.J. Lu (hjl@gnu.ai.mit.edu)
3666 * Inverted this history list
3668 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3669 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3670 * Removed all preallocation code since under current scheme
3671 the work required to undo bad preallocations exceeds
3672 the work saved in good cases for most test programs.
3673 * No longer use return list or unconsolidated bins since
3674 no scheme using them consistently outperforms those that don't
3675 given above changes.
3676 * Use best fit for very large chunks to prevent some worst-cases.
3677 * Added some support for debugging
3679 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3680 * Removed footers when chunks are in use. Thanks to
3681 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3683 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3684 * Added malloc_trim, with help from Wolfram Gloger
3685 (wmglo@Dent.MED.Uni-Muenchen.DE).
3687 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3689 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3690 * realloc: try to expand in both directions
3691 * malloc: swap order of clean-bin strategy;
3692 * realloc: only conditionally expand backwards
3693 * Try not to scavenge used bins
3694 * Use bin counts as a guide to preallocation
3695 * Occasionally bin return list chunks in first scan
3696 * Add a few optimizations from colin@nyx10.cs.du.edu
3698 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3699 * faster bin computation & slightly different binning
3700 * merged all consolidations to one part of malloc proper
3701 (eliminating old malloc_find_space & malloc_clean_bin)
3702 * Scan 2 returns chunks (not just 1)
3703 * Propagate failure in realloc if malloc returns 0
3704 * Add stuff to allow compilation on non-ANSI compilers
3705 from kpv@research.att.com
3707 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3708 * removed potential for odd address access in prev_chunk
3709 * removed dependency on getpagesize.h
3710 * misc cosmetics and a bit more internal documentation
3711 * anticosmetics: mangled names in macros to evade debugger strangeness
3712 * tested on sparc, hp-700, dec-mips, rs6000
3713 with gcc & native cc (hp, dec only) allowing
3714 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3716 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3717 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3718 structure of old version, but most details differ.)
3721 #endif