2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
8 Note: There may be an updated version of this malloc obtainable at
9 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10 Check before installing!
14 This library is all in one file to simplify the most common usage:
15 ftp it, compile it (-O3), and link it into another program. All of
16 the compile-time options default to reasonable values for use on
17 most platforms. You might later want to step through various
18 compile-time and dynamic tuning options.
20 For convenience, an include file for code using this malloc is at:
21 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22 You don't really need this .h file unless you call functions not
23 defined in your system include files. The .h file contains only the
24 excerpts from this file needed for using this malloc on ANSI C/C++
25 systems, so long as you haven't changed compile-time options about
26 naming and tuning parameters. If you do, then you can create your
27 own malloc.h that does include all settings by cutting at the point
28 indicated below. Note that you may already by default be using a C
29 library containing a malloc that is based on some version of this
30 malloc (for example in linux). You might still want to use the one
31 in this file to customize settings or to avoid overheads associated
32 with library versions.
36 Supported pointer/size_t representation: 4 or 8 bytes
37 size_t MUST be an unsigned type of the same width as
38 pointers. (If you are using an ancient system that declares
39 size_t as a signed type, or need it to be a different width
40 than pointers, you can use a previous release of this malloc
41 (e.g. 2.7.2) supporting these.)
43 Alignment: 8 bytes (minimum)
44 This suffices for nearly all current machines and C compilers.
45 However, you can define MALLOC_ALIGNMENT to be wider than this
46 if necessary (up to 128 bytes), at the expense of using more space.
48 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4-byte sizes)
49 8 or 16 bytes (if 8-byte sizes)
50 Each malloced chunk has a hidden word of overhead holding size
51 and status information, and an additional cross-check word
52 if FOOTERS is defined.
54 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
55 8-byte ptrs: 32 bytes (including overhead)
57 Even a request for zero bytes (i.e., malloc(0)) returns a
58 pointer to something of the minimum allocatable size.
59 The maximum overhead wastage (i.e., the number of extra bytes
60 allocated beyond what was requested in malloc) is less than or equal
61 to the minimum size, except for requests >= mmap_threshold that
62 are serviced via mmap(), where the worst case wastage is about
63 32 bytes plus the remainder from a system page (the minimal
64 mmap unit); typically 4096 or 8192 bytes.
66 Security: static-safe; optionally more or less
67 The "security" of malloc refers to the ability of malicious
68 code to accentuate the effects of errors (for example, freeing
69 space that is not currently malloc'ed or overwriting past the
70 ends of chunks) in code that calls malloc. This malloc
71 guarantees not to modify any memory locations below the base of
72 heap, i.e., static variables, even in the presence of usage
73 errors. The routines additionally detect most improper frees
74 and reallocs. All this holds as long as the static bookkeeping
75 for malloc itself is not corrupted by some other means. This
76 is only one aspect of security -- these checks do not, and
77 cannot, detect all possible programming errors.
79 If FOOTERS is defined nonzero, then each allocated chunk
80 carries an additional check word to verify that it was malloced
81 from its space. These check words are the same within each
82 execution of a program using malloc, but differ across
83 executions, so externally crafted fake chunks cannot be
84 freed. This improves security by rejecting frees/reallocs that
85 could corrupt heap memory, in addition to the checks preventing
86 writes to statics that are always on. This may further improve
87 security at the expense of time and space overhead. (Note that
88 FOOTERS may also be worth using with MSPACES.)
90 By default detected errors cause the program to abort (calling
91 "abort()"). You can override this to instead proceed past
92 errors by defining PROCEED_ON_ERROR. In this case, a bad free
93 has no effect, and a malloc that encounters a bad address
94 caused by user overwrites will ignore the bad address by
95 dropping pointers and indices to all known memory. This may
96 be appropriate for programs that should continue if at all
97 possible in the face of programming errors, although they may
98 run out of memory because dropped memory is never reclaimed.
100 If you don't like either of these options, you can define
101 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102 else. And if you are sure that your program using malloc has
103 no errors or vulnerabilities, you can define INSECURE to 1,
104 which might (or might not) provide a small performance improvement.
106 It is also possible to limit the maximum total allocatable
107 space, using malloc_set_footprint_limit. This is not
108 designed as a security feature in itself (calls to set limits
109 are not screened or privileged), but may be useful as one
110 aspect of a secure implementation.
112 Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113 When USE_LOCKS is defined, each public call to malloc, free,
114 etc is surrounded with a lock. By default, this uses a plain
115 pthread mutex, win32 critical section, or a spin-lock if
116 available for the platform and not disabled by setting
117 USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
118 recursive versions are used instead (which are not required for
119 base functionality but may be needed in layered extensions).
120 Using a global lock is not especially fast, and can be a major
121 bottleneck. It is designed only to provide minimal protection
122 in concurrent environments, and to provide a basis for
123 extensions. If you are using malloc in a concurrent program,
124 consider instead using nedmalloc
125 (http://www.nedprod.com/programs/portable/nedmalloc/) or
126 ptmalloc (See http://www.malloc.de), which are derived from
127 versions of this malloc.
129 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130 This malloc can use unix sbrk or any emulation (invoked using
131 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133 memory. On most unix systems, it tends to work best if both
134 MORECORE and MMAP are enabled. On Win32, it uses emulations
135 based on VirtualAlloc. It also uses common C library functions like memset.
138 Compliance: I believe it is compliant with the Single Unix Specification
139 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
142 * Overview of algorithms
144 This is not the fastest, most space-conserving, most portable, or
145 most tunable malloc ever written. However it is among the fastest
146 while also being among the most space-conserving, portable and
147 tunable. Consistent balance across these factors results in a good
148 general-purpose allocator for malloc-intensive programs.
150 In most ways, this malloc is a best-fit allocator. Generally, it
151 chooses the best-fitting existing chunk for a request, with ties
152 broken in approximately least-recently-used order. (This strategy
153 normally maintains low fragmentation.) However, for requests less
154 than 256 bytes, it deviates from best-fit when there is not an
155 exactly fitting available chunk by preferring to use space adjacent
156 to that used for the previous small request, as well as by breaking
157 ties in approximately most-recently-used order. (These enhance
158 locality of series of small allocations.) And for very large requests
159 (>= 256Kb by default), it relies on system memory mapping
160 facilities, if supported. (This helps avoid carrying around and
161 possibly fragmenting memory used only for large chunks.)
163 All operations (except malloc_stats and mallinfo) have execution
164 times that are bounded by a constant factor of the number of bits in
165 a size_t, not counting any clearing in calloc or copying in realloc,
166 or actions surrounding MORECORE and MMAP that have times
167 proportional to the number of non-contiguous regions returned by
168 system allocation routines, which is often just 1. In real-time
169 applications, you can optionally suppress segment traversals using
170 NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171 system allocators return non-contiguous spaces, at the typical
172 expense of carrying around more memory and increased fragmentation.
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
177 but now, enough of them seem not to.
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
181 uses of negation on unsigned types. All known cases of each can be ignored.
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
207 return mspace_malloc(tlms, bytes);
208 }
209 void tlfree(void* mem) { mspace_free(tlms, mem); }
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
215 indicating its originating mspace, and frees are directed to their
216 originating spaces. Normally, this requires use of locks.
218 ------------------------- Compile-time options ---------------------------
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly cast. You can also
223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix. Beware that there seem to be some
228 cases where this malloc might not be a pure drop-in replacement for
229 Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
230 SetDIBits()) may be due to bugs in some video driver implementations
231 when pixel buffers are malloc()ed, and the region spans more than
232 one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233 default granularity, pixel buffers may straddle virtual allocation
234 regions more often than when using the Microsoft allocator. You can
235 avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236 buffers rather than using malloc(). If this is not possible,
237 recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238 in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239 conditions use _MSC_VER to distinguish them.
241 DLMALLOC_EXPORT default: extern
242 Defines how public APIs are declared. If you want to export via a
243 Windows DLL, you might define this as
244 #define DLMALLOC_EXPORT extern __declspec(dllexport)
245 If you want a POSIX ELF shared object, you might use
246 #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
248 MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *))
249 Controls the minimum alignment for malloc'ed chunks. It must be a
250 power of two and at least 8, even on machines for which smaller
251 alignments would suffice. It may be defined as larger than this
252 though. Note however that code and data structures are optimized for
253 the case of 8-byte alignment.
255 MSPACES default: 0 (false)
256 If true, compile in support for independent allocation spaces.
257 This is only supported if HAVE_MMAP is true.
259 ONLY_MSPACES default: 0 (false)
260 If true, only compile in mspace versions, not regular versions.
262 USE_LOCKS default: 0 (false)
263 Causes each call to each public routine to be surrounded with
264 pthread or WIN32 mutex lock/unlock. (If set true, this can be
265 overridden on a per-mspace basis for mspace versions.) If set to a
266 non-zero value other than 1, locks are used, but their
267 implementation is left out, so lock functions must be supplied manually, as described below.
270 USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available
271 If true, uses custom spin locks for locking. This is currently
272 supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
273 MS compilers. Otherwise, posix locks or win32 critical sections are used.
276 USE_RECURSIVE_LOCKS default: not defined
277 If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278 uses plain mutexes. This is not required for malloc proper, but may
279 be needed for layered allocators such as nedmalloc.
281 LOCK_AT_FORK default: not defined
282 If defined nonzero, performs pthread_atfork upon initialization
283 to initialize child lock while holding parent lock. The implementation
284 assumes that pthread locks (not custom locks) are being used. In other
285 cases, you may need to customize the implementation.
287 FOOTERS default: 0
288 If true, provide extra checking and dispatching by placing
289 information in the footers of allocated chunks. This adds
290 space and time overhead.
292 INSECURE default: 0
293 If true, omit checks for usage errors and heap space overwrites.
295 USE_DL_PREFIX default: NOT defined
296 Causes compiler to prefix all public routines with the string 'dl'.
297 This can be useful when you only want to use this malloc in one part
298 of a program, using your regular system malloc elsewhere.
300 MALLOC_INSPECT_ALL default: NOT defined
301 If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302 perform traversal of all heap space. Unless access to these
303 functions is otherwise restricted, you probably do not want to
304 include them in secure implementations.
306 ABORT default: defined as abort()
307 Defines how to abort on failed checks. On most systems, a failed
308 check cannot die with an "assert" or even print an informative
309 message, because the underlying print routines in turn call malloc,
310 which will fail again. Generally, the best policy is to simply call
311 abort(). It's not very useful to do more than this because many
312 errors due to overwriting will show up as address faults (null, odd
313 addresses etc) rather than malloc-triggered checks, so will also
314 abort. Also, most compilers know that abort() does not return, so
315 can better optimize code conditionally calling it.
317 PROCEED_ON_ERROR default: defined as 0 (false)
318 Controls whether detected bad addresses cause them to be bypassed
319 rather than aborting. If set, detected bad arguments to free and
320 realloc are ignored. And all bookkeeping information is zeroed out
321 upon a detected overwrite of freed heap space, thus losing the
322 ability to ever return it from malloc again, but enabling the
323 application to proceed. If PROCEED_ON_ERROR is defined, the
324 static variable malloc_corruption_error_count is compiled in
325 and can be examined to see if errors have occurred. This option
326 generates slower code than the default abort policy.
328 DEBUG default: NOT defined
329 The DEBUG setting is mainly intended for people trying to modify
330 this code or diagnose problems when porting to new platforms.
331 However, it may also be able to better isolate user errors than just
332 using runtime checks. The assertions in the check routines spell
333 out in more detail the assumptions and invariants underlying the
334 algorithms. The checking is fairly extensive, and will slow down
335 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336 set will attempt to check every non-mmapped allocated and free chunk
337 in the course of computing the summaries.
339 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
340 Debugging assertion failures can be nearly impossible if your
341 version of the assert macro causes malloc to be called, which will
342 lead to a cascade of further failures, blowing the runtime stack.
343 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
344 which will usually make debugging easier.
346 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
347 The action to take before "return 0" when malloc fails to be able to
348 return memory because there is none available.
350 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
351 True if this system supports sbrk or an emulation of it.
353 MORECORE default: sbrk
354 The name of the sbrk-style system routine to call to obtain more
355 memory. See below for guidance on writing custom MORECORE
356 functions. The type of the argument to sbrk/MORECORE varies across
357 systems. It cannot be size_t, because it supports negative
358 arguments, so it is normally the signed type of the same width as
359 size_t (sometimes declared as "intptr_t"). It doesn't much matter
360 though. Internally, we only call it with arguments less than half
361 the max value of a size_t, which should work across all reasonable
362 possibilities, although sometimes generating compiler warnings.
364 MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
365 If true, take advantage of fact that consecutive calls to MORECORE
366 with positive arguments always return contiguous increasing
367 addresses. This is true of unix sbrk. It does not hurt too much to
368 set it true anyway, since malloc copes with non-contiguities.
369 Setting it false when definitely non-contiguous saves time
370 and possibly wasted space it would take to discover this though.
372 MORECORE_CANNOT_TRIM default: NOT defined
373 True if MORECORE cannot release space back to the system when given
374 negative arguments. This is generally necessary only if you are
375 using a hand-crafted MORECORE function that cannot handle negative arguments.
378 NO_SEGMENT_TRAVERSAL default: 0
379 If non-zero, suppresses traversals of memory segments
380 returned by either MORECORE or CALL_MMAP. This disables
381 merging of segments that are contiguous, and selectively
382 releasing them to the OS if unused, but bounds execution times.
384 HAVE_MMAP default: 1 (true)
385 True if this system supports mmap or an emulation of it. If so, and
386 HAVE_MORECORE is not true, MMAP is used for all system
387 allocation. If set and HAVE_MORECORE is true as well, MMAP is
388 primarily used to directly allocate very large blocks. It is also
389 used as a backup strategy in cases where MORECORE fails to provide
390 space from system. Note: A single call to MUNMAP is assumed to be
391 able to unmap memory that may have been allocated using multiple calls
392 to MMAP, so long as they are adjacent.
394 HAVE_MREMAP default: 1 on linux, else 0
395 If true realloc() uses mremap() to re-allocate large blocks and
396 extend or shrink allocation spaces.
398 MMAP_CLEARS default: 1 except on WINCE.
399 True if mmap clears memory so calloc doesn't need to. This is true
400 for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
402 USE_BUILTIN_FFS default: 0 (i.e., not used)
403 Causes malloc to use the builtin ffs() function to compute indices.
404 Some compilers may recognize and intrinsify ffs to be faster than the
405 supplied C version. Also, the case of x86 using gcc is special-cased
406 to an asm instruction, so is already as fast as it can be, and so
407 this setting has no effect. Similarly for Win32 under recent MS compilers.
408 (On most x86s, the asm version is only slightly faster than the C version.)
410 malloc_getpagesize default: derive from system includes, or 4096.
411 The system page size. To the extent possible, this malloc manages
412 memory from the system in page-size units. This may be (and
413 usually is) a function rather than a constant. This is ignored
414 if WIN32, where page size is determined using GetSystemInfo during initialization.
417 USE_DEV_RANDOM default: 0 (i.e., not used)
418 Causes malloc to use /dev/random to initialize secure magic seed for
419 stamping footers. Otherwise, the current time is used.
421 NO_MALLINFO default: 0
422 If defined, don't compile "mallinfo". This can be a simple way
423 of dealing with mismatches between system declarations and your own.
426 MALLINFO_FIELD_TYPE default: size_t
427 The type of the fields in the mallinfo struct. This was originally
428 defined as "int" in SVID etc, but is more usefully defined as
429 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
431 NO_MALLOC_STATS default: 0
432 If defined, don't compile "malloc_stats". This avoids calls to
433 fprintf and bringing in stdio dependencies you might not want.
435 REALLOC_ZERO_BYTES_FREES default: not defined
436 This should be set if a call to realloc with zero bytes should
437 be the same as a call to free. Some people think it should. Otherwise,
438 since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
443 LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H default: NOT defined unless on WIN32
444 Define these if your system does not have these header files.
445 You might need to manually insert some of the declarations they provide.
447 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
448 system_info.dwAllocationGranularity in WIN32, otherwise 64K.
450 Also settable using mallopt(M_GRANULARITY, x)
451 The unit for allocating and deallocating memory from the system. On
452 most systems with contiguous MORECORE, there is no reason to
453 make this more than a page. However, systems with MMAP tend to
454 either require or encourage larger granularities. You can increase
455 this value to prevent system allocation functions from being called so
456 often, especially if they are slow. The value must be at least one
457 page and must be a power of two. Setting to 0 causes initialization
458 to either page size or win32 region size. (Note: In previous
459 versions of malloc, the equivalent of this option was called "TOP_PAD".)
462 DEFAULT_TRIM_THRESHOLD default: 2MB
463 Also settable using mallopt(M_TRIM_THRESHOLD, x)
464 The maximum amount of unused top-most memory to keep before
465 releasing via malloc_trim in free(). Automatic trimming is mainly
466 useful in long-lived programs using contiguous MORECORE. Because
467 trimming via sbrk can be slow on some systems, and can sometimes be
468 wasteful (in cases where programs immediately afterward allocate
469 more large chunks) the value should be high enough so that your
470 overall system performance would improve by releasing this much
471 memory. As a rough guide, you might set to a value close to the
472 average size of a process (program) running on your system.
473 Releasing this much memory would allow such a process to run in
474 memory. Generally, it is worth tuning trim thresholds when a
475 program undergoes phases where several large chunks are allocated
476 and released in ways that can reuse each other's storage, perhaps
477 mixed with phases where there are no such chunks at all. The trim
478 value must be greater than page size to have any useful effect. To
479 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480 some people use of mallocing a huge space and then freeing it at
481 program startup, in an attempt to reserve system memory, doesn't
482 have the intended effect under automatic trimming, since that memory
483 will immediately be returned to the system.
485 DEFAULT_MMAP_THRESHOLD default: 256K
486 Also settable using mallopt(M_MMAP_THRESHOLD, x)
487 The request size threshold for using MMAP to directly service a
488 request. Requests of at least this size that cannot be allocated
489 using already-existing space will be serviced via mmap. (If enough
490 normal freed space already exists it is used instead.) Using mmap
491 segregates relatively large chunks of memory so that they can be
492 individually obtained and released from the host system. A request
493 serviced through mmap is never reused by any other request (at least
494 not directly; the system may just so happen to remap successive
495 requests to the same locations). Segregating space in this way has
496 the benefits that: Mmapped space can always be individually released
497 back to the system, which helps keep the system level memory demands
498 of a long-lived program low. Also, mapped memory doesn't become
499 `locked' between other chunks, as can happen with normally allocated
500 chunks, which means that even trimming via malloc_trim would not
501 release them. However, it has the disadvantage that the space
502 cannot be reclaimed, consolidated, and then used to service later
503 requests, as happens with normal chunks. The advantages of mmap
504 nearly always outweigh disadvantages for "large" chunks, but the
505 value of "large" may vary across systems. The default is an
506 empirically derived value that works well in most systems. You can
507 disable mmap by setting to MAX_SIZE_T.
509 MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
510 The number of consolidated frees between checks to release
511 unused segments when freeing. When using non-contiguous segments,
512 especially with multiple mspaces, checking only for topmost space
513 doesn't always suffice to trigger trimming. To compensate for this,
514 free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515 current number of segments, if greater) try to release unused
516 segments to the OS when freeing chunks that result in
517 consolidation. The best value for this parameter is a compromise
518 between slowing down frees with relatively costly checks that
519 rarely trigger versus holding on to unused memory. To effectively
520 disable, set to MAX_SIZE_T. This may lead to a very slight speed
521 improvement at the expense of carrying around more memory.
526 #define LACKS_UNISTD_H
527 #define LACKS_FCNTL_H
528 #define LACKS_SYS_PARAM_H
529 #define LACKS_SYS_MMAN_H
530 #define LACKS_STRINGS_H
531 #define LACKS_SYS_TYPES_H
532 #define LACKS_SCHED_H
535 /* Version identifier to allow people to support multiple versions */
536 #ifndef DLMALLOC_VERSION
537 #define DLMALLOC_VERSION 20806
538 #endif /* DLMALLOC_VERSION */
540 #ifndef DLMALLOC_EXPORT
541 #define DLMALLOC_EXPORT extern
542 #endif /* DLMALLOC_EXPORT */
549 #define LACKS_FCNTL_H
551 #endif /* _WIN32_WCE */
554 #define WIN32_LEAN_AND_MEAN
558 #define HAVE_MORECORE 0
559 #define LACKS_UNISTD_H
560 #define LACKS_SYS_PARAM_H
561 #define LACKS_SYS_MMAN_H
562 #define LACKS_STRING_H
563 #define LACKS_STRINGS_H
564 #define LACKS_SYS_TYPES_H
565 #define LACKS_ERRNO_H
566 #define LACKS_SCHED_H
567 #ifndef MALLOC_FAILURE_ACTION
568 #define MALLOC_FAILURE_ACTION
569 #endif /* MALLOC_FAILURE_ACTION */
570 #ifndef MMAP_CLEARS
571 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
572 #define MMAP_CLEARS 0
573 #else
574 #define MMAP_CLEARS 1
575 #endif /* _WIN32_WCE */
576 #endif /* MMAP_CLEARS */
579 #if defined(DARWIN) || defined(_DARWIN)
580 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
581 #ifndef HAVE_MORECORE
582 #define HAVE_MORECORE 0
584 /* OSX allocators provide 16 byte alignment */
585 #ifndef MALLOC_ALIGNMENT
586 #define MALLOC_ALIGNMENT ((size_t)16U)
587 #endif /* MALLOC_ALIGNMENT */
588 #endif /* HAVE_MORECORE */
589 #endif /* DARWIN */
591 #ifndef LACKS_SYS_TYPES_H
592 #include <sys/types.h> /* For size_t */
593 #endif /* LACKS_SYS_TYPES_H */
595 /* The maximum possible size_t value has all bits set */
596 #define MAX_SIZE_T (~(size_t)0)
598 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
599 #define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
600 (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
601 #endif /* USE_LOCKS */
603 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
604 #if ((defined(__GNUC__) && \
605 ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \
606 defined(__i386__) || defined(__x86_64__))) || \
607 (defined(_MSC_VER) && _MSC_VER>=1310))
608 #ifndef USE_SPIN_LOCKS
609 #define USE_SPIN_LOCKS 1
610 #endif /* USE_SPIN_LOCKS */
611 #elif USE_SPIN_LOCKS
612 #error "USE_SPIN_LOCKS defined without implementation"
613 #endif /* ... locks available... */
614 #elif !defined(USE_SPIN_LOCKS)
615 #define USE_SPIN_LOCKS 0
616 #endif /* USE_LOCKS */
618 #ifndef ONLY_MSPACES
619 #define ONLY_MSPACES 0
620 #endif /* ONLY_MSPACES */
621 #ifndef MSPACES
622 #if ONLY_MSPACES
623 #define MSPACES 1
624 #else /* ONLY_MSPACES */
625 #define MSPACES 0
626 #endif /* ONLY_MSPACES */
627 #endif /* MSPACES */
628 #ifndef MALLOC_ALIGNMENT
629 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
630 #endif /* MALLOC_ALIGNMENT */
634 #ifndef ABORT
635 #define ABORT abort()
636 #endif /* ABORT */
637 #ifndef ABORT_ON_ASSERT_FAILURE
638 #define ABORT_ON_ASSERT_FAILURE 1
639 #endif /* ABORT_ON_ASSERT_FAILURE */
640 #ifndef PROCEED_ON_ERROR
641 #define PROCEED_ON_ERROR 0
642 #endif /* PROCEED_ON_ERROR */
644 #ifndef INSECURE
645 #define INSECURE 0
646 #endif /* INSECURE */
647 #ifndef MALLOC_INSPECT_ALL
648 #define MALLOC_INSPECT_ALL 0
649 #endif /* MALLOC_INSPECT_ALL */
650 #ifndef HAVE_MMAP
651 #define HAVE_MMAP 1
652 #endif /* HAVE_MMAP */
653 #ifndef MMAP_CLEARS
654 #define MMAP_CLEARS 1
655 #endif /* MMAP_CLEARS */
656 #ifndef HAVE_MREMAP
657 #ifdef linux
658 #define HAVE_MREMAP 1
659 #define _GNU_SOURCE /* Turns on mremap() definition */
660 #else /* linux */
661 #define HAVE_MREMAP 0
662 #endif /* linux */
663 #endif /* HAVE_MREMAP */
664 #ifndef MALLOC_FAILURE_ACTION
665 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
666 #endif /* MALLOC_FAILURE_ACTION */
667 #ifndef HAVE_MORECORE
668 #if ONLY_MSPACES
669 #define HAVE_MORECORE 0
670 #else /* ONLY_MSPACES */
671 #define HAVE_MORECORE 1
672 #endif /* ONLY_MSPACES */
673 #endif /* HAVE_MORECORE */
674 #if !HAVE_MORECORE
675 #define MORECORE_CONTIGUOUS 0
676 #else /* !HAVE_MORECORE */
677 #define MORECORE_DEFAULT sbrk
678 #ifndef MORECORE_CONTIGUOUS
679 #define MORECORE_CONTIGUOUS 1
680 #endif /* MORECORE_CONTIGUOUS */
681 #endif /* HAVE_MORECORE */
682 #ifndef DEFAULT_GRANULARITY
683 #if (MORECORE_CONTIGUOUS || defined(WIN32))
684 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
685 #else /* MORECORE_CONTIGUOUS */
686 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
687 #endif /* MORECORE_CONTIGUOUS */
688 #endif /* DEFAULT_GRANULARITY */
689 #ifndef DEFAULT_TRIM_THRESHOLD
690 #ifndef MORECORE_CANNOT_TRIM
691 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
692 #else /* MORECORE_CANNOT_TRIM */
693 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
694 #endif /* MORECORE_CANNOT_TRIM */
695 #endif /* DEFAULT_TRIM_THRESHOLD */
696 #ifndef DEFAULT_MMAP_THRESHOLD
697 #if HAVE_MMAP
698 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
699 #else /* HAVE_MMAP */
700 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
701 #endif /* HAVE_MMAP */
702 #endif /* DEFAULT_MMAP_THRESHOLD */
703 #ifndef MAX_RELEASE_CHECK_RATE
704 #if HAVE_MMAP
705 #define MAX_RELEASE_CHECK_RATE 4095
706 #else /* HAVE_MMAP */
707 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
708 #endif /* HAVE_MMAP */
709 #endif /* MAX_RELEASE_CHECK_RATE */
710 #ifndef USE_BUILTIN_FFS
711 #define USE_BUILTIN_FFS 0
712 #endif /* USE_BUILTIN_FFS */
713 #ifndef USE_DEV_RANDOM
714 #define USE_DEV_RANDOM 0
715 #endif /* USE_DEV_RANDOM */
716 #ifndef NO_MALLINFO
717 #define NO_MALLINFO 0
718 #endif /* NO_MALLINFO */
719 #ifndef MALLINFO_FIELD_TYPE
720 #define MALLINFO_FIELD_TYPE size_t
721 #endif /* MALLINFO_FIELD_TYPE */
722 #ifndef NO_MALLOC_STATS
723 #define NO_MALLOC_STATS 1
724 #endif /* NO_MALLOC_STATS */
725 #ifndef NO_SEGMENT_TRAVERSAL
726 #define NO_SEGMENT_TRAVERSAL 0
727 #endif /* NO_SEGMENT_TRAVERSAL */
730 mallopt tuning options. SVID/XPG defines four standard parameter
731 numbers for mallopt, normally defined in malloc.h. None of these
732 are used in this malloc, so setting them has no effect. But this
733 malloc does support the following options.
736 #define M_TRIM_THRESHOLD (-1)
737 #define M_GRANULARITY (-2)
738 #define M_MMAP_THRESHOLD (-3)
740 /* ------------------------ Mallinfo declarations ------------------------ */
744 This version of malloc supports the standard SVID/XPG mallinfo
745 routine that returns a struct containing usage properties and
746 statistics. It should work on any system that has a
747 /usr/include/malloc.h defining struct mallinfo. The main
748 declaration needed is the mallinfo struct that is returned (by-copy)
749 by mallinfo(). The mallinfo struct contains a bunch of fields that
750 are not even meaningful in this version of malloc. These fields are
751 instead filled by mallinfo() with other numbers that might be of
752 some interest.
754 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
755 /usr/include/malloc.h file that includes a declaration of struct
756 mallinfo. If so, it is included; else a compliant version is
757 declared below. These must be precisely the same for mallinfo() to
758 work. The original SVID version of this struct, defined on most
759 systems with mallinfo, declares all fields as ints. But some others
760 define as unsigned long. If your system defines the fields using a
761 type of different width than listed here, you MUST #include your
762 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
765 /* #define HAVE_USR_INCLUDE_MALLOC_H */
767 #ifdef HAVE_USR_INCLUDE_MALLOC_H
768 #include "/usr/include/malloc.h"
769 #else /* HAVE_USR_INCLUDE_MALLOC_H */
770 #ifndef STRUCT_MALLINFO_DECLARED
771 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
772 #define _STRUCT_MALLINFO
773 #define STRUCT_MALLINFO_DECLARED 1
774 struct mallinfo {
775   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
776   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
777   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
778   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
779   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
780   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
781   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
782   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
783   MALLINFO_FIELD_TYPE fordblks; /* total free space */
784   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
785 };
786 #endif /* STRUCT_MALLINFO_DECLARED */
787 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
788 #endif /* NO_MALLINFO */
791 Try to persuade compilers to inline. The most critical functions for
792 inlining are defined as macros, so these aren't used for them.
795 #ifndef FORCEINLINE
796 #if defined(__GNUC__)
797 #define FORCEINLINE __inline __attribute__ ((always_inline))
798 #elif defined(_MSC_VER)
799 #define FORCEINLINE __forceinline
800 #endif
801 #endif /* FORCEINLINE */
802 #ifndef NOINLINE
803 #if defined(__GNUC__)
804 #define NOINLINE __attribute__ ((noinline))
805 #elif defined(_MSC_VER)
806 #define NOINLINE __declspec(noinline)
807 #else
808 #define NOINLINE
809 #endif
810 #endif /* NOINLINE */
812 #ifdef __cplusplus
813 extern "C" {
814 #ifndef FORCEINLINE
815 #define FORCEINLINE inline
816 #endif
817 #endif /* __cplusplus */
824 /* ------------------- Declarations of public routines ------------------- */
826 #ifndef USE_DL_PREFIX
827 #define dlcalloc calloc
828 #define dlfree free
829 #define dlmalloc malloc
830 #define dlmemalign memalign
831 #define dlposix_memalign posix_memalign
832 #define dlrealloc realloc
833 #define dlrealloc_in_place realloc_in_place
834 #define dlvalloc valloc
835 #define dlpvalloc pvalloc
836 #define dlmallinfo mallinfo
837 #define dlmallopt mallopt
838 #define dlmalloc_trim malloc_trim
839 #define dlmalloc_stats malloc_stats
840 #define dlmalloc_usable_size malloc_usable_size
841 #define dlmalloc_footprint malloc_footprint
842 #define dlmalloc_max_footprint malloc_max_footprint
843 #define dlmalloc_footprint_limit malloc_footprint_limit
844 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
845 #define dlmalloc_inspect_all malloc_inspect_all
846 #define dlindependent_calloc independent_calloc
847 #define dlindependent_comalloc independent_comalloc
848 #define dlbulk_free bulk_free
849 #endif /* USE_DL_PREFIX */
852 malloc(size_t n)
853 Returns a pointer to a newly allocated chunk of at least n bytes, or
854 null if no space is available, in which case errno is set to ENOMEM
855 on ANSI C systems.
857 If n is zero, malloc returns a minimum-sized chunk. (The minimum
858 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
859 systems.) Note that size_t is an unsigned type, so calls with
860 arguments that would be negative if signed are interpreted as
861 requests for huge amounts of space, which will often fail. The
862 maximum supported value of n differs across systems, but is in all
863 cases less than the maximum representable value of a size_t.
865 DLMALLOC_EXPORT
void* dlmalloc(size_t);
868 free(void* p)
869 Releases the chunk of memory pointed to by p, that had been previously
870 allocated using malloc or a related routine such as realloc.
871 It has no effect if p is null. If p was not malloced or already
872 freed, free(p) will by default cause the current program to abort.
874 DLMALLOC_EXPORT
void dlfree(void*);
877 calloc(size_t n_elements, size_t element_size);
878 Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
881 DLMALLOC_EXPORT
void* dlcalloc(size_t, size_t);
884 realloc(void* p, size_t n)
885 Returns a pointer to a chunk of size n that contains the same data
886 as does chunk p up to the minimum of (n, p's size) bytes, or null
887 if no space is available.
889 The returned pointer may or may not be the same as p. The algorithm
890 prefers extending p in most cases when possible, otherwise it
891 employs the equivalent of a malloc-copy-free sequence.
893 If p is null, realloc is equivalent to malloc.
895 If space is not available, realloc returns null, errno is set (if on
896 ANSI) and p is NOT freed.
898 If n is for fewer bytes than already held by p, the newly unused
899 space is lopped off and freed if possible. realloc with a size
900 argument of zero (re)allocates a minimum-sized chunk.
902 The old unix realloc convention of allowing the last-free'd chunk
903 to be used as an argument to realloc is not supported.
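For example, a minimal sketch of the usual idiom for growing a buffer without losing it when realloc fails (grow_buffer and its parameters are illustrative, not part of this malloc's API):

      void* grow_buffer(void* buf, size_t new_size) {
        void* p = realloc(buf, new_size);
        if (p == 0) {          // failure: buf is still valid and unchanged
          free(buf);           // release it here (or keep using the old size)
          return 0;
        }
        return p;              // may or may not equal buf
      }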
905 DLMALLOC_EXPORT
void* dlrealloc(void*, size_t);
908 realloc_in_place(void* p, size_t n)
909 Resizes the space allocated for p to size n, only if this can be
910 done without moving p (i.e., only if there is adjacent space
911 available if n is greater than p's current allocated size, or n is
912 less than or equal to p's size). This may be used instead of plain
913 realloc if an alternative allocation strategy is needed upon failure
914 to expand space; for example, reallocation of a buffer that must be
915 memory-aligned or cleared. You can use realloc_in_place to trigger
916 these alternatives only when needed.
918 Returns p if successful; otherwise null.
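For example, a sketch of the fallback strategy described above: try to extend in place first, and only on failure obtain a new aligned block and copy (the helper grow_aligned and its arguments are hypothetical; assumes newsize >= oldsize):

      void* grow_aligned(void* p, size_t oldsize, size_t newsize, size_t align) {
        if (realloc_in_place(p, newsize) != 0)
          return p;                      // extended without moving
        void* q = memalign(align, newsize);
        if (q != 0) {
          memcpy(q, p, oldsize);         // preserve existing contents
          free(p);
        }
        return q;                        // null if no space was available
      }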
920 DLMALLOC_EXPORT
void* dlrealloc_in_place(void*, size_t);
923 memalign(size_t alignment, size_t n);
924 Returns a pointer to a newly allocated chunk of n bytes, aligned
925 in accord with the alignment argument.
927 The alignment argument should be a power of two. If the argument is
928 not a power of two, the nearest greater power is used.
929 8-byte alignment is guaranteed by normal malloc calls, so don't
930 bother calling memalign with an argument of 8 or less.
932 Overreliance on memalign is a sure way to fragment space.
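For example, a sketch that obtains a 64-byte-aligned block (the size and alignment values are arbitrary):

      void* p = memalign(64, 1024);   // 1024 bytes, aligned to 64
      if (p != 0) {
        // ((size_t)p % 64) == 0 holds here
        free(p);                      // released like any other chunk
      }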
934 DLMALLOC_EXPORT
void* dlmemalign(size_t, size_t);
937 int posix_memalign(void** pp, size_t alignment, size_t n);
938 Allocates a chunk of n bytes, aligned in accord with the alignment
939 argument. Differs from memalign only in that it (1) assigns the
940 allocated memory to *pp rather than returning it, (2) fails and
941 returns EINVAL if the alignment is not a power of two (3) fails and
942 returns ENOMEM if memory cannot be allocated.
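For example, a sketch of the error-code style of use (the alignment and size values are arbitrary):

      void* p = 0;
      int rc = posix_memalign(&p, 4096, 8192);  // page-aligned 8K block
      if (rc == 0) {
        // ... use p ...
        free(p);
      }
      // rc == EINVAL: alignment was not a power of two
      // rc == ENOMEM: memory could not be allocated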
944 DLMALLOC_EXPORT
int dlposix_memalign(void**, size_t, size_t);
947 valloc(size_t n);
948 Equivalent to memalign(pagesize, n), where pagesize is the page
949 size of the system. If the pagesize is unknown, 4096 is used.
951 DLMALLOC_EXPORT
void* dlvalloc(size_t);
954 mallopt(int parameter_number, int parameter_value)
955 Sets tunable parameters. The format is to provide a
956 (parameter-number, parameter-value) pair. mallopt then sets the
957 corresponding parameter to the argument value if it can (i.e., so
958 long as the value is meaningful), and returns 1 if successful else
959 0. To work around the fact that mallopt is specified to use int,
960 not size_t parameters, the value -1 is specially treated as the
961 maximum unsigned size_t value.
963 SVID/XPG/ANSI defines four standard param numbers for mallopt,
964 normally defined in malloc.h. None of these are used in this malloc,
965 so setting them has no effect. But this malloc also supports other
966 options in mallopt. See below for details. Briefly, supported
967 parameters are as follows (listed defaults are for "typical" configurations):
970 Symbol            param #  default      allowed param values
971 M_TRIM_THRESHOLD     -1    2*1024*1024  any (-1 disables)
972 M_GRANULARITY        -2    page size    any power of 2 >= page size
973 M_MMAP_THRESHOLD     -3    256*1024     any (or 0 if no MMAP support)
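For example, a sketch of dynamic tuning with these parameters (the values are arbitrary; -1 stands for the maximum size_t value as noted above):

      mallopt(M_MMAP_THRESHOLD, 1024*1024);   // mmap only requests >= 1MB
      mallopt(M_GRANULARITY,    128*1024);    // obtain system memory 128K at a time
      mallopt(M_TRIM_THRESHOLD, -1);          // never trim automatically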
975 DLMALLOC_EXPORT
int dlmallopt(int, int);
978 malloc_footprint();
979 Returns the number of bytes obtained from the system. The total
980 number of bytes allocated by malloc, realloc etc., is less than this
981 value. Unlike mallinfo, this function returns only a precomputed
982 result, so can be called frequently to monitor memory consumption.
983 Even if locks are otherwise defined, this function does not use them,
984 so results might not be up to date.
986 DLMALLOC_EXPORT
size_t dlmalloc_footprint(void);
989 malloc_max_footprint();
990 Returns the maximum number of bytes obtained from the system. This
991 value will be greater than current footprint if deallocated space
992 has been reclaimed by the system. The peak number of bytes allocated
993 by malloc, realloc etc., is less than this value. Unlike mallinfo,
994 this function returns only a precomputed result, so can be called
995 frequently to monitor memory consumption. Even if locks are
996 otherwise defined, this function does not use them, so results might not be up to date.
999 DLMALLOC_EXPORT
size_t dlmalloc_max_footprint(void);
1002 malloc_footprint_limit();
1003 Returns the number of bytes that the heap is allowed to obtain from
1004 the system, returning the last value returned by
1005 malloc_set_footprint_limit, or the maximum size_t value if
1006 never set. The returned value reflects a permission. There is no
1007 guarantee that this number of bytes can actually be obtained from the system.
1010 DLMALLOC_EXPORT
size_t dlmalloc_footprint_limit();
1013 malloc_set_footprint_limit();
1014 Sets the maximum number of bytes to obtain from the system, causing
1015 failure returns from malloc and related functions upon attempts to
1016 exceed this value. The argument value may be subject to page
1017 rounding to an enforceable limit; this actual value is returned.
1018 Using an argument of the maximum possible size_t effectively
1019 disables checks. If the argument is less than or equal to the
1020 current malloc_footprint, then all future allocations that require
1021 additional system memory will fail. However, invocation cannot
1022 retroactively deallocate existing used memory.
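For example, a sketch that caps total system memory use at roughly 64MB (the cap is arbitrary):

      size_t limit = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
      // limit now holds the page-rounded value actually enforced;
      // malloc_footprint() can be compared against it at any time.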
1024 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1026 #if MALLOC_INSPECT_ALL
1028 malloc_inspect_all(void(*handler)(void *start,
1029                                   void *end,
1030                                   size_t used_bytes,
1031                                   void* callback_arg),
1032                    void* arg);
1033 Traverses the heap and calls the given handler for each managed
1034 region, skipping all bytes that are (or may be) used for bookkeeping
1035 purposes. Traversal does not include chunks that have been
1036 directly memory mapped. Each reported region begins at the start
1037 address, and continues up to but not including the end address. The
1038 first used_bytes of the region contain allocated data. If
1039 used_bytes is zero, the region is unallocated. The handler is
1040 invoked with the given callback argument. If locks are defined, they
1041 are held during the entire traversal. It is a bad idea to invoke
1042 other malloc functions from within the handler.
1044 For example, to count the number of in-use chunks with size greater
1045 than 1000, you could write:
1046 static int count = 0;
1047 void count_chunks(void* start, void* end, size_t used, void* arg) {
1048   if (used >= 1000) ++count;
1049 }
1050 then:
1051   malloc_inspect_all(count_chunks, NULL);
1053 malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1055 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                               void* arg);
1058 #endif /* MALLOC_INSPECT_ALL */
1063 Returns (by copy) a struct containing various summary statistics:
1065 arena: current total non-mmapped bytes allocated from system
1066 ordblks: the number of free chunks
1067 smblks: always zero.
1068 hblks: current number of mmapped regions
1069 hblkhd: total bytes held in mmapped regions
1070 usmblks: the maximum total allocated space. This will be greater
1071 than current total if trimming has occurred.
1072 fsmblks: always zero
1073 uordblks: current total allocated space (normal or mmapped)
1074 fordblks: total free space
1075 keepcost: the maximum number of bytes that could ideally be released
1076 back to system via malloc_trim. ("ideally" means that
1077 it ignores page restrictions etc.)
1079 Because these fields are ints, but internal bookkeeping may
1080 be kept as longs, the reported values may wrap around zero and thus be inaccurate.
1083 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1084 #endif /* NO_MALLINFO */
1087 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1089 independent_calloc is similar to calloc, but instead of returning a
1090 single cleared space, it returns an array of pointers to n_elements
1091 independent elements that can hold contents of size elem_size, each
1092 of which starts out cleared, and can be independently freed,
1093 realloc'ed etc. The elements are guaranteed to be adjacently
1094 allocated (this is not guaranteed to occur with multiple callocs or
1095 mallocs), which may also improve cache locality in some applications.
1098 The "chunks" argument is optional (i.e., may be null, which is
1099 probably the most typical usage). If it is null, the returned array
1100 is itself dynamically allocated and should also be freed when it is
1101 no longer needed. Otherwise, the chunks array must be of at least
1102 n_elements in length. It is filled in with the pointers to the chunks.
1105 In either case, independent_calloc returns this pointer array, or
1106 null if the allocation failed. If n_elements is zero and "chunks"
1107 is null, it returns a chunk representing an array with zero elements
1108 (which should be freed if not wanted).
1110 Each element must be freed when it is no longer needed. This can be
1111 done all at once using bulk_free.
1113 independent_calloc simplifies and speeds up implementations of many
1114 kinds of pools. It may also be useful when constructing large data
1115 structures that initially have a fixed number of fixed-sized nodes,
1116 but the number is not known at compile time, and some of the nodes
1117 may later need to be freed. For example:
1119 struct Node { int item; struct Node* next; };
1121 struct Node* build_list() {
1122   struct Node** pool;
1123   int n = read_number_of_nodes_needed();
1124   if (n <= 0) return 0;
1125   pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
1126   if (pool == 0) die();
1127   // organize into a linked list...
1128   struct Node* first = pool[0];
1129   for (int i = 0; i < n-1; ++i)
1130     pool[i]->next = pool[i+1];
1131   free(pool); // Can now free the array (or not, if it is needed later)
1132   return first;
1133 }
1135 DLMALLOC_EXPORT
void** dlindependent_calloc(size_t, size_t, void**);
1138 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1140 independent_comalloc allocates, all at once, a set of n_elements
1141 chunks with sizes indicated in the "sizes" array. It returns
1142 an array of pointers to these elements, each of which can be
1143 independently freed, realloc'ed etc. The elements are guaranteed to
1144 be adjacently allocated (this is not guaranteed to occur with
1145 multiple callocs or mallocs), which may also improve cache locality
1146 in some applications.
1148 The "chunks" argument is optional (i.e., may be null). If it is null
1149 the returned array is itself dynamically allocated and should also
1150 be freed when it is no longer needed. Otherwise, the chunks array
1151 must be of at least n_elements in length. It is filled in with the
1152 pointers to the chunks.
1154 In either case, independent_comalloc returns this pointer array, or
1155 null if the allocation failed. If n_elements is zero and chunks is
1156 null, it returns a chunk representing an array with zero elements
1157 (which should be freed if not wanted).
1159 Each element must be freed when it is no longer needed. This can be
1160 done all at once using bulk_free.
1162 independent_comalloc differs from independent_calloc in that each
1163 element may have a different size, and also that it does not
1164 automatically clear elements.
1166 independent_comalloc can be used to speed up allocation in cases
1167 where several structs or objects must always be allocated at the
1168 same time. For example:
1173 void send_message(char* msg) {
1174 int msglen = strlen(msg);
1175   size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1176   void* chunks[3];
1177   if (independent_comalloc(3, sizes, chunks) == 0)
1178     die();
1179   struct Head* head = (struct Head*)(chunks[0]);
1180   char* body = (char*)(chunks[1]);
1181   struct Foot* foot = (struct Foot*)(chunks[2]);
1182   // ...
1183 }
1185 In general though, independent_comalloc is worth using only for
1186 larger values of n_elements. For small values, you probably won't
1187 detect enough difference from series of malloc calls to bother.
1189 Overuse of independent_comalloc can increase overall memory usage,
1190 since it cannot reuse existing noncontiguous small chunks that
1191 might be available for some of the elements.
1193 DLMALLOC_EXPORT
void** dlindependent_comalloc(size_t, size_t*, void**);
1196 bulk_free(void* array[], size_t n_elements)
1197 Frees and clears (sets to null) each non-null pointer in the given
1198 array. This is likely to be faster than freeing them one-by-one.
1199 If footers are used, pointers that have been allocated in different
1200 mspaces are not freed or cleared, and the count of all such pointers
1201 is returned. For large arrays of pointers with poor locality, it
1202 may be worthwhile to sort this array before calling bulk_free.
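For example, a sketch that allocates a batch of chunks and later releases them all in one call (the count and sizes are arbitrary):

      void* ptrs[100];
      for (int i = 0; i < 100; ++i)
        ptrs[i] = malloc(32);
      // ... use the allocations ...
      size_t unfreed = bulk_free(ptrs, 100);  // 0 unless some pointers came
                                              // from other mspaces (with FOOTERS)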
1204 DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements);
1207 pvalloc(size_t n);
1208 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1209 rounds up n to the nearest pagesize.
1211 DLMALLOC_EXPORT
void* dlpvalloc(size_t);
1214 malloc_trim(size_t pad);
1216 If possible, gives memory back to the system (via negative arguments
1217 to sbrk) if there is unused memory at the `high' end of the malloc
1218 pool or in unused MMAP segments. You can call this after freeing
1219 large blocks of memory to potentially reduce the system-level memory
1220 requirements of a program. However, it cannot guarantee to reduce
1221 memory. Under some allocation patterns, some large free blocks of
1222 memory will be locked between two used chunks, so they cannot be
1223 given back to the system.
1225 The `pad' argument to malloc_trim represents the amount of free
1226 trailing space to leave untrimmed. If this argument is zero, only
1227 the minimum amount of memory to maintain internal data structures
1228 will be left. Non-zero arguments can be supplied to maintain enough
1229 trailing space to service future expected allocations without having
1230 to re-obtain memory from the system.
1232 Malloc_trim returns 1 if it actually released any memory, else 0.
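For example, a sketch of trimming after releasing a large working set, keeping 1MB of slack for expected future allocations (big_buffer and the pad value are illustrative):

      free(big_buffer);                       // some large prior allocation
      int released = malloc_trim(1024*1024);  // keep 1MB of trailing space
      // released is 1 if any memory was actually given back, else 0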
1234 DLMALLOC_EXPORT
int dlmalloc_trim(size_t);
1237 malloc_stats();
1238 Prints on stderr the amount of space obtained from the system (both
1239 via sbrk and mmap), the maximum amount (which may be more than
1240 current if malloc_trim and/or munmap got called), and the current
1241 number of bytes allocated via malloc (or realloc, etc) but not yet
1242 freed. Note that this is the number of bytes allocated, not the
1243 number requested. It will be larger than the number requested
1244 because of alignment and bookkeeping overhead. Because it includes
1245 alignment wastage as being in use, this figure may be greater than
1246 zero even when no user-level chunks are allocated.
1248 The reported current and maximum system memory can be inaccurate if
1249 a program makes other calls to system memory allocation functions
1250 (normally sbrk) outside of malloc.
1252 malloc_stats prints only the most commonly interesting statistics.
1253 More information can be obtained by calling mallinfo.
1255 DLMALLOC_EXPORT
void dlmalloc_stats(void);
1258 malloc_usable_size(void* p);
1260 Returns the number of bytes you can actually use in
1261 an allocated chunk, which may be more than you requested (although
1262 often not) due to alignment and minimum size constraints.
1263 You can use this many bytes without worrying about
1264 overwriting other allocated objects. This is not a particularly great
1265 programming practice. malloc_usable_size can be more useful in
1266 debugging and assertions, for example:
1268   p = malloc(n);
1269   assert(malloc_usable_size(p) >= 256);
1271 size_t dlmalloc_usable_size(void*);
1273 #endif /* ONLY_MSPACES */
1278 mspace is an opaque type representing an independent
1279 region of space that supports mspace_malloc, etc.
1281 typedef void* mspace;
1284 create_mspace creates and returns a new independent space with the
1285 given initial capacity, or, if 0, the default granularity size. It
1286 returns null if there is no system memory available to create the
1287 space. If argument locked is non-zero, the space uses a separate
1288 lock to control access. The capacity of the space will grow
1289 dynamically as needed to service mspace_malloc requests. You can
1290 control the sizes of incremental increases of this space by
1291 compiling with a different DEFAULT_GRANULARITY or dynamically
1292 setting with mallopt(M_GRANULARITY, value).
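For example, a sketch that gives a subsystem its own space and then releases everything it allocated in one call (capacity 0 selects the default granularity; locked 0 assumes single-threaded use):

      mspace arena = create_mspace(0, 0);
      if (arena != 0) {
        void* a = mspace_malloc(arena, 128);
        void* b = mspace_malloc(arena, 4096);
        // ... use a and b ...
        destroy_mspace(arena);  // normally returns all of arena's memory
      }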
1294 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1297 destroy_mspace destroys the given space, and attempts to return all
1298 of its memory back to the system, returning the total number of
1299 bytes freed. After destruction, the results of access to all memory
1300 used by the space become undefined.
1302 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1305 create_mspace_with_base uses the memory supplied as the initial base
1306 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1307 space is used for bookkeeping, so the capacity must be at least this
1308 large. (Otherwise 0 is returned.) When this initial space is
1309 exhausted, additional memory will be obtained from the system.
1310 Destroying this space will deallocate all additionally allocated
1311 space (if possible) but not the initial base.
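For example, a sketch that carves an mspace out of a static buffer (the buffer size is arbitrary but must exceed the bookkeeping overhead noted above):

      static char buffer[1 << 20];   // 1MB initial region
      mspace ms = create_mspace_with_base(buffer, sizeof(buffer), 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 1000);
        // ... further requests draw on buffer first, then the system ...
        destroy_mspace(ms);          // does not free the static buffer itself
      }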
1313 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1316 mspace_track_large_chunks controls whether requests for large chunks
1317 are allocated in their own untracked mmapped regions, separate from
1318 others in this mspace. By default large chunks are not tracked,
1319 which reduces fragmentation. However, such chunks are not
1320 necessarily released to the system upon destroy_mspace. Enabling
1321 tracking by setting to true may increase fragmentation, but avoids
1322 leakage when relying on destroy_mspace to release all memory
1323 allocated using this space. The function returns the previous setting.
1326 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1330 mspace_malloc behaves as malloc, but operates within the given space.
1333 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1336 mspace_free behaves as free, but operates within the given space.
1339 If compiled with FOOTERS==1, mspace_free is not actually needed.
1340 free may be called instead of mspace_free because freed chunks from
1341 any space are handled by their originating spaces.
1343 DLMALLOC_EXPORT
void mspace_free(mspace msp
, void* mem
);
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed. realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.

DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
  mspace_calloc behaves as calloc, but operates within
  the given space.

DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
  mspace_memalign behaves as memalign, but operates within
  the given space.

DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
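
/*
  For instance (a sketch; the alignment argument is intended to be a
  power of two):

    void* p = mspace_memalign(msp, 64, 200);  // 200 usable bytes, 64-byte aligned
    mspace_free(msp, p);
*/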
1369 mspace_independent_calloc behaves as independent_calloc, but
1370 operates within the given space.
DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                                 size_t elem_size, void* chunks[]);
1376 mspace_independent_comalloc behaves as independent_comalloc, but
1377 operates within the given space.
DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                                   size_t sizes[], void* chunks[]);
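
/*
  A sketch of carving several co-resident regions in one call (the
  sizes here are arbitrary illustrations):

    size_t sizes[3] = { 16, 64, 256 };
    void*  chunks[3];
    if (mspace_independent_comalloc(msp, 3, sizes, chunks) != 0) {
      // chunks[0], chunks[1], chunks[2] now hold 16-, 64- and 256-byte
      // regions, each of which may later be passed to mspace_free.
    }
*/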
1383 mspace_footprint() returns the number of bytes obtained from the
1384 system for this space.
DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1389 mspace_max_footprint() returns the peak number of bytes obtained from the
1390 system for this space.
DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
  mspace_mallinfo behaves as mallinfo, but reports properties of
  the given space.

DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1401 #endif /* NO_MALLINFO */
  mspace_usable_size(void* p) behaves the same as malloc_usable_size;

DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1409 mspace_malloc_stats behaves as malloc_stats, but reports
1410 properties of the given space.
DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1415 mspace_trim behaves as malloc_trim, but
1416 operates within the given space.
DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
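
/*
  A sketch of monitoring and trimming a space:

    size_t now  = mspace_footprint(msp);      // bytes currently obtained from system
    size_t peak = mspace_max_footprint(msp);  // high-water mark of the above
    mspace_trim(msp, 0);                      // give back unused memory if possible
*/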
1421 An alias for mallopt.
DLMALLOC_EXPORT int mspace_mallopt(int, int);
1425 #endif /* MSPACES */
1428 } /* end of extern "C" */
1429 #endif /* __cplusplus */
1432 ========================================================================
1433 To make a fully customizable malloc.h header file, cut everything
1434 above this line, put into file malloc.h, edit to suit, and #include it
1435 on the next line, as well as in programs that use this malloc.
1436 ========================================================================
1439 /* #include "malloc.h" */
1441 /*------------------------------ internal #includes ---------------------- */
#ifdef _MSC_VER
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1445 #endif /* _MSC_VER */
1446 #if !NO_MALLOC_STATS
1447 #include <stdio.h> /* for printing in malloc_stats */
1448 #endif /* NO_MALLOC_STATS */
1449 #ifndef LACKS_ERRNO_H
1450 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1451 #endif /* LACKS_ERRNO_H */
#if ABORT_ON_ASSERT_FAILURE
#undef assert
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#if !defined(WIN32) && !defined(LACKS_TIME_H)
#include <time.h>        /* for magic initialization */
#endif /* WIN32 */
1468 #ifndef LACKS_STDLIB_H
1469 #include <stdlib.h> /* for abort() */
1470 #endif /* LACKS_STDLIB_H */
1471 #ifndef LACKS_STRING_H
1472 #include <string.h> /* for memset etc */
1473 #endif /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
#if (defined(linux) && !defined(__USE_GNU))
#define __USE_GNU 1
#include <sys/mman.h>    /* for mmap */
#undef __USE_GNU
#else /* linux */
#include <sys/mman.h>    /* for mmap */
#endif /* linux */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
1494 #ifndef LACKS_UNISTD_H
1495 #include <unistd.h> /* for sbrk, sysconf */
1496 #else /* LACKS_UNISTD_H */
1497 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1498 extern void* sbrk(ptrdiff_t);
1499 #endif /* FreeBSD etc */
1500 #endif /* LACKS_UNISTD_H */
/* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
#if defined (__SVR4) && defined (__sun)  /* solaris */
#include <thread.h>
#elif !defined(LACKS_SCHED_H)
#include <sched.h>
#endif /* solaris or LACKS_SCHED_H */
#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
#include <pthread.h>
#endif /* USE_RECURSIVE_LOCKS ... */
1513 #elif defined(_MSC_VER)
#ifndef _M_AMD64
/* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#ifdef __cplusplus
}
#endif /* __cplusplus */
1524 #endif /* _M_AMD64 */
1525 #pragma intrinsic (_InterlockedCompareExchange)
1526 #pragma intrinsic (_InterlockedExchange)
1527 #define interlockedcompareexchange _InterlockedCompareExchange
1528 #define interlockedexchange _InterlockedExchange
#elif defined(WIN32) && defined(__GNUC__)
#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
#define interlockedexchange __sync_lock_test_and_set
#endif /* Win32 */
#else /* USE_LOCKS */
#endif /* USE_LOCKS */
#ifndef LOCK_AT_FORK
#define LOCK_AT_FORK 0
#endif
1540 /* Declarations for bit scanning on win32 */
1541 #if defined(_MSC_VER) && _MSC_VER>=1300
1542 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
#endif /* __cplusplus */
1552 #define BitScanForward _BitScanForward
1553 #define BitScanReverse _BitScanReverse
1554 #pragma intrinsic(_BitScanForward)
1555 #pragma intrinsic(_BitScanReverse)
1556 #endif /* BitScanForward */
1557 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1560 #ifndef malloc_getpagesize
1561 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1562 # ifndef _SC_PAGE_SIZE
1563 # define _SC_PAGE_SIZE _SC_PAGESIZE
1566 # ifdef _SC_PAGE_SIZE
1567 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1569 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1570 extern size_t getpagesize();
1571 # define malloc_getpagesize getpagesize()
1573 # ifdef WIN32 /* use supplied emulation of getpagesize */
1574 # define malloc_getpagesize getpagesize()
1576 # ifndef LACKS_SYS_PARAM_H
1577 # include <sys/param.h>
1579 # ifdef EXEC_PAGESIZE
1580 # define malloc_getpagesize EXEC_PAGESIZE
1584 # define malloc_getpagesize NBPG
1586 # define malloc_getpagesize (NBPG * CLSIZE)
1590 # define malloc_getpagesize NBPC
1593 # define malloc_getpagesize PAGESIZE
1594 # else /* just guess */
1595 # define malloc_getpagesize ((size_t)4096U)
1606 /* ------------------- size_t and alignment properties -------------------- */
1608 /* The byte and bit size of a size_t */
1609 #define SIZE_T_SIZE (sizeof(size_t))
1610 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1612 /* Some constants coerced to size_t */
1613 /* Annoying but necessary to avoid errors on some platforms */
1614 #define SIZE_T_ZERO ((size_t)0)
1615 #define SIZE_T_ONE ((size_t)1)
1616 #define SIZE_T_TWO ((size_t)2)
1617 #define SIZE_T_FOUR ((size_t)4)
1618 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1619 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1620 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1621 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1623 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1624 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1626 /* True if address a has acceptable alignment */
1627 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1629 /* the number of bytes to offset an address to align it */
1630 #define align_offset(A)\
1631 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1632 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
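
/*
  For example, with MALLOC_ALIGNMENT == 8: align_offset(0x100c) == 4,
  so 0x100c + 4 == 0x1010 is correctly aligned, while an address that
  is already 8-byte aligned yields an offset of 0.
*/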
1634 /* -------------------------- MMAP preliminaries ------------------------- */
1637 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1638 checks to fail so compiler optimizer can delete code rather than
1639 using so many "#if"s.
1643 /* MORECORE and MMAP must return MFAIL on failure */
1644 #define MFAIL ((void*)(MAX_SIZE_T))
1645 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1650 #define MUNMAP_DEFAULT(a, s) munmap((a), (s))
1651 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1652 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1653 #define MAP_ANONYMOUS MAP_ANON
1654 #endif /* MAP_ANON */
1655 #ifdef MAP_ANONYMOUS
1656 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1657 #define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1658 #else /* MAP_ANONYMOUS */
1660 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1661 is unlikely to be needed, but is supplied just in case.
1663 #define MMAP_FLAGS (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1665 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1666 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1667 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1668 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1669 #endif /* MAP_ANONYMOUS */
1671 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1675 /* Win32 MMAP via VirtualAlloc */
static FORCEINLINE void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static FORCEINLINE void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}
/* This function supports releasing coalesced segments */
static FORCEINLINE int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = (char*)ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}
1706 #define MMAP_DEFAULT(s) win32mmap(s)
1707 #define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
1708 #define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
1710 #endif /* HAVE_MMAP */
1714 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1716 #endif /* HAVE_MREMAP */
1719 * Define CALL_MORECORE
1723 #define CALL_MORECORE(S) MORECORE(S)
1724 #else /* MORECORE */
1725 #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
1726 #endif /* MORECORE */
1727 #else /* HAVE_MORECORE */
1728 #define CALL_MORECORE(S) MFAIL
1729 #endif /* HAVE_MORECORE */
1732 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1735 #define USE_MMAP_BIT (SIZE_T_ONE)
1738 #define CALL_MMAP(s) MMAP(s)
1740 #define CALL_MMAP(s) MMAP_DEFAULT(s)
1743 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1745 #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
1748 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1749 #else /* DIRECT_MMAP */
1750 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1751 #endif /* DIRECT_MMAP */
1752 #else /* HAVE_MMAP */
1753 #define USE_MMAP_BIT (SIZE_T_ZERO)
1755 #define MMAP(s) MFAIL
1756 #define MUNMAP(a, s) (-1)
1757 #define DIRECT_MMAP(s) MFAIL
1758 #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1759 #define CALL_MMAP(s) MMAP(s)
1760 #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
1761 #endif /* HAVE_MMAP */
1764 * Define CALL_MREMAP
1766 #if HAVE_MMAP && HAVE_MREMAP
1768 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1770 #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1772 #else /* HAVE_MMAP && HAVE_MREMAP */
1773 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1774 #endif /* HAVE_MMAP && HAVE_MREMAP */
/* mstate bit set if contiguous morecore disabled or failed */
1777 #define USE_NONCONTIGUOUS_BIT (4U)
1779 /* segment bit set in create_mspace_with_base */
1780 #define EXTERN_BIT (8U)
1783 /* --------------------------- Lock preliminaries ------------------------ */
1786 When locks are defined, there is one global lock, plus
1787 one per-mspace lock.
  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once. It also protects
  sequences of calls to MORECORE.  In many cases sys_alloc requires
  two calls that should not be interleaved with calls by other
  threads.  This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.
1797 Per-mspace locks surround calls to malloc, free, etc.
1798 By default, locks are simple non-reentrant mutexes.
1800 Because lock-protected regions generally have bounded times, it is
1801 OK to use the supplied simple spinlocks. Spinlocks are likely to
1802 improve performance for lightly contended applications, but worsen
1803 performance under heavy contention.
1805 If USE_LOCKS is > 1, the definitions of lock routines here are
1806 bypassed, in which case you will need to define the type MLOCK_T,
1807 and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1808 and TRY_LOCK. You must also declare a
1809 static MLOCK_T malloc_global_mutex = { initialization values };.
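
/*
  For example, a build with USE_LOCKS > 1 might supply pthread-based
  definitions before including this file (a sketch only, mirroring the
  pthread case handled below):

    #define MLOCK_T            pthread_mutex_t
    #define INITIAL_LOCK(lk)   pthread_mutex_init(lk, NULL)
    #define DESTROY_LOCK(lk)   pthread_mutex_destroy(lk)
    #define ACQUIRE_LOCK(lk)   pthread_mutex_lock(lk)
    #define RELEASE_LOCK(lk)   pthread_mutex_unlock(lk)
    #define TRY_LOCK(lk)       (!pthread_mutex_trylock(lk))
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
*/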
1814 #define USE_LOCK_BIT (0U)
1815 #define INITIAL_LOCK(l) (0)
1816 #define DESTROY_LOCK(l) (0)
1817 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1818 #define RELEASE_MALLOC_GLOBAL_LOCK()
1822 /* ----------------------- User-defined locks ------------------------ */
1823 /* Define your own lock implementation here */
1824 /* #define INITIAL_LOCK(lk) ... */
1825 /* #define DESTROY_LOCK(lk) ... */
1826 /* #define ACQUIRE_LOCK(lk) ... */
1827 /* #define RELEASE_LOCK(lk) ... */
1828 /* #define TRY_LOCK(lk) ... */
1829 /* static MLOCK_T malloc_global_mutex = ... */
1831 #elif USE_SPIN_LOCKS
1833 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1834 /* Note CAS_LOCK defined to return 0 on success */
1836 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1837 #define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1)
1838 #define CLEAR_LOCK(sl) __sync_lock_release(sl)
1840 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1841 /* Custom spin locks for older gcc on x86 */
static FORCEINLINE int x86_cas_lock(int *sl) {
  int ret;
  int val = 1;
  int cmp = 0;
  __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
                         : "=a" (ret)
                         : "r" (val), "m" (*(sl)), "0"(cmp)
                         : "memory", "cc");
  return ret;
}
static FORCEINLINE void x86_clear_lock(int* sl) {
  assert(*sl != 0);
  int prev = 0;
  int ret;
  __asm__ __volatile__ ("lock; xchgl %0, %1"
                        : "=r" (ret)
                        : "m" (*(sl)), "0"(prev)
                        : "memory");
}
1863 #define CAS_LOCK(sl) x86_cas_lock(sl)
1864 #define CLEAR_LOCK(sl) x86_clear_lock(sl)
1866 #else /* Win32 MSC */
1867 #define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1)
1868 #define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0)
1870 #endif /* ... gcc spins locks ... */
1872 /* How to yield for a spin lock */
1873 #define SPINS_PER_YIELD 63
1874 #if defined(_MSC_VER)
1875 #define SLEEP_EX_DURATION 50 /* delay for yield/sleep */
1876 #define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE)
1877 #elif defined (__SVR4) && defined (__sun) /* solaris */
1878 #define SPIN_LOCK_YIELD thr_yield();
1879 #elif !defined(LACKS_SCHED_H)
1880 #define SPIN_LOCK_YIELD sched_yield();
1882 #define SPIN_LOCK_YIELD
1883 #endif /* ... yield ... */
1885 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1886 /* Plain spin locks use single word (embedded in malloc_states) */
static int spin_acquire_lock(int *sl) {
  int spins = 0;
  while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
  return 0;
}
1898 #define TRY_LOCK(sl) !CAS_LOCK(sl)
1899 #define RELEASE_LOCK(sl) CLEAR_LOCK(sl)
1900 #define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1901 #define INITIAL_LOCK(sl) (*sl = 0)
1902 #define DESTROY_LOCK(sl) (0)
static MLOCK_T malloc_global_mutex = 0;
1905 #else /* USE_RECURSIVE_LOCKS */
/* types for lock owners */
#ifdef WIN32
#define THREAD_ID_T           DWORD
#define CURRENT_THREAD        GetCurrentThreadId()
#define EQ_OWNER(X,Y)         ((X) == (Y))
#else
/*
  Note: the following assume that pthread_t is a type that can be
  initialized to (casted) zero. If this is not the case, you will need to
  somehow redefine these or not use spin locks.
*/
#define THREAD_ID_T           pthread_t
#define CURRENT_THREAD        pthread_self()
#define EQ_OWNER(X,Y)         pthread_equal(X, Y)
#endif
struct malloc_recursive_lock {
  int sl;
  unsigned int c;
  THREAD_ID_T threadid;
};

#define MLOCK_T  struct malloc_recursive_lock
static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
  assert(lk->sl != 0);
  if (--lk->c == 0) {
    CLEAR_LOCK(&lk->sl);
  }
}
static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  int spins = 0;
  for (;;) {
    if (*((volatile int *)(&lk->sl)) == 0) {
      if (!CAS_LOCK(&lk->sl)) {
        lk->threadid = mythreadid;
        lk->c = 1;
        return 0;
      }
    }
    else if (EQ_OWNER(lk->threadid, mythreadid)) {
      ++lk->c;
      return 0;
    }
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
}
static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  if (*((volatile int *)(&lk->sl)) == 0) {
    if (!CAS_LOCK(&lk->sl)) {
      lk->threadid = mythreadid;
      lk->c = 1;
      return 1;
    }
  }
  else if (EQ_OWNER(lk->threadid, mythreadid)) {
    ++lk->c;
    return 1;
  }
  return 0;
}
1975 #define RELEASE_LOCK(lk) recursive_release_lock(lk)
1976 #define TRY_LOCK(lk) recursive_try_lock(lk)
1977 #define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk)
1978 #define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1979 #define DESTROY_LOCK(lk) (0)
1980 #endif /* USE_RECURSIVE_LOCKS */
1982 #elif defined(WIN32) /* Win32 critical sections */
1983 #define MLOCK_T CRITICAL_SECTION
1984 #define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0)
1985 #define RELEASE_LOCK(lk) LeaveCriticalSection(lk)
1986 #define TRY_LOCK(lk) TryEnterCriticalSection(lk)
1987 #define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1988 #define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0)
1989 #define NEED_GLOBAL_LOCK_INIT
static MLOCK_T malloc_global_mutex;
static volatile LONG malloc_global_mutex_status;
/* Use spin loop to initialize global lock */
static void init_malloc_global_mutex() {
  for (;;) {
    long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
    /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
        interlockedcompareexchange(&malloc_global_mutex_status,
                                   (LONG)-1, (LONG)0) == 0) {
      InitializeCriticalSection(&malloc_global_mutex);
      interlockedexchange(&malloc_global_mutex_status, (LONG)1);
      return;
    }
    SleepEx(0, FALSE);
  }
}
2011 #else /* pthreads-based locks */
2012 #define MLOCK_T pthread_mutex_t
2013 #define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk)
2014 #define RELEASE_LOCK(lk) pthread_mutex_unlock(lk)
2015 #define TRY_LOCK(lk) (!pthread_mutex_trylock(lk))
2016 #define INITIAL_LOCK(lk) pthread_init_lock(lk)
2017 #define DESTROY_LOCK(lk) pthread_mutex_destroy(lk)
2019 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2020 /* Cope with old-style linux recursive lock initialization by adding */
2021 /* skipped internal declaration from pthread.h */
extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                              int __kind));
2024 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2025 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2026 #endif /* USE_RECURSIVE_LOCKS ... */
static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
static int pthread_init_lock (MLOCK_T *lk) {
  pthread_mutexattr_t attr;
  if (pthread_mutexattr_init(&attr)) return 1;
#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
#endif
  if (pthread_mutex_init(lk, &attr)) return 1;
  if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}
2041 #endif /* ... lock types ... */
2043 /* Common code for all lock types */
2044 #define USE_LOCK_BIT (2U)
#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
#endif

#ifndef RELEASE_MALLOC_GLOBAL_LOCK
#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
#endif
2054 #endif /* USE_LOCKS */
2056 /* ----------------------- Chunk representations ------------------------ */
2059 (The following includes lightly edited explanations by Colin Plumb.)
2061 The malloc_chunk declaration below is misleading (but accurate and
2062 necessary). It declares a "view" into memory allowing access to
2063 necessary fields at known offsets from a given base.
2065 Chunks of memory are maintained using a `boundary tag' method as
2066 originally described by Knuth. (See the paper by Paul Wilson
2067 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2068 techniques.) Sizes of free chunks are stored both in the front of
2069 each chunk and at the end. This makes consolidating fragmented
2070 chunks into bigger chunks fast. The head fields also hold bits
2071 representing whether chunks are free or in use.
2073 Here are some pictures to make it clearer. They are "exploded" to
2074 show that the state of a chunk can be thought of as extending from
2075 the high 31 bits of the head field of its header through the
2076 prev_foot and PINUSE_BIT bit of the following chunk header.
2078 A chunk that's in use looks like:
2080 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2081 | Size of previous chunk (if P = 0) |
2082 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2083 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2084 | Size of this chunk 1| +-+
2085 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2091 +- size - sizeof(size_t) available payload bytes -+
2095 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2097 | Size of next chunk (may or may not be in use) | +-+
2098 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2100 And if it's free, it looks like this:
2103 | User payload (must be in use, or we would have merged!) |
2104 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2105 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2106 | Size of this chunk 0| +-+
2107 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2109 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2111 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2113 +- size - sizeof(struct chunk) unused bytes -+
2115 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116 | Size of this chunk |
2117 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2118 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2119 | Size of next chunk (must be in use, or we would have merged)| +-+
2120 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2124 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2127 Note that since we always merge adjacent free chunks, the chunks
2128 adjacent to a free chunk must be in use.
2130 Given a pointer to a chunk (which can be derived trivially from the
2131 payload pointer) we can, in O(1) time, find out whether the adjacent
2132 chunks are free, and if so, unlink them from the lists that they
2133 are on and merge them with the current chunk.
2135 Chunks always begin on even word boundaries, so the mem portion
2136 (which is returned to the user) is also on an even word boundary, and
2137 thus at least double-word aligned.
2139 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2140 chunk size (which is always a multiple of two words), is an in-use
2141 bit for the *previous* chunk. If that bit is *clear*, then the
2142 word before the current chunk size contains the previous chunk
2143 size, and can be used to find the front of the previous chunk.
2144 The very first chunk allocated always has this bit set, preventing
2145 access to non-existent (or non-owned) memory. If pinuse is set for
2146 any given chunk, then you CANNOT determine the size of the
2147 previous chunk, and might even get a memory addressing fault when
2150 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2151 the chunk size redundantly records whether the current chunk is
2152 inuse (unless the chunk is mmapped). This redundancy enables usage
2153 checks within free and realloc, and reduces indirection when freeing
2154 and consolidating chunks.
2156 Each freshly allocated chunk must have both cinuse and pinuse set.
2157 That is, each allocated chunk borders either a previously allocated
2158 and still in-use chunk, or the base of its memory arena. This is
2159 ensured by making all allocations from the `lowest' part of any
2160 found chunk. Further, no free chunk physically borders another one,
2161 so each free chunk is known to be preceded and followed by either
2162 inuse chunks or the ends of memory.
2164 Note that the `foot' of the current chunk is actually represented
2165 as the prev_foot of the NEXT chunk. This makes it easier to
2166 deal with alignments etc but can be very confusing when trying
2167 to extend or adapt this code.
2169 The exceptions to all this are
2171 1. The special chunk `top' is the top-most available chunk (i.e.,
2172 the one bordering the end of available memory). It is treated
2173 specially. Top is never included in any bin, is used only if
2174 no other chunk is available, and is released back to the
2175 system if it is very large (see M_TRIM_THRESHOLD). In effect,
2176 the top chunk is treated as larger (and thus less well
2177 fitting) than any other available chunk. The top chunk
2178 doesn't update its trailing size field since there is no next
2179 contiguous chunk that would have to index off it. However,
2180 space is still allocated for it (TOP_FOOT_SIZE) to enable
2181 separation or merging when space is extended.
2183 3. Chunks allocated via mmap, have both cinuse and pinuse bits
2184 cleared in their head fields. Because they are allocated
2185 one-by-one, each must carry its own prev_foot field, which is
2186 also used to hold the offset this chunk has within its mmapped
2187 region, which is needed to preserve alignment. Each mmapped
2188 chunk is trailed by the first two fields of a fake next-chunk
2189 for sake of usage checks.
struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */
2207 /* ------------------- Chunks sizes and alignments ----------------------- */
2209 #define MCHUNK_SIZE (sizeof(mchunk))
2212 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2214 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
2215 #endif /* FOOTERS */
2217 /* MMapped chunks need a second word of overhead ... */
2218 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2219 /* ... and additional padding for fake next-chunk at foot */
2220 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
2222 /* The smallest size we can malloc is an aligned minimal chunk */
2223 #define MIN_CHUNK_SIZE\
2224 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2226 /* conversion from malloc headers to user pointers, and back */
2227 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
2228 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2229 /* chunk associated with aligned address A */
2230 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
2232 /* Bounds on request (not chunk) sizes. */
2233 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
2234 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2236 /* pad request bytes into a usable size */
2237 #define pad_request(req) \
2238 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2240 /* pad request, checking for minimum (but not maximum) */
2241 #define request2size(req) \
2242 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
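
/*
  Worked example (assuming 4-byte size_t, 8-byte MALLOC_ALIGNMENT, and
  FOOTERS disabled, so CHUNK_OVERHEAD == 4): pad_request(20) ==
  (20 + 4 + 7) & ~7 == 24, while request2size(1) == MIN_CHUNK_SIZE == 16,
  because tiny requests are padded up to the smallest legal chunk.
*/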
2245 /* ------------------ Operations on head and foot fields ----------------- */
2248 The head field of a chunk is or'ed with PINUSE_BIT when previous
2249 adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2250 use, unless mmapped, in which case both bits are cleared.
2252 FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2255 #define PINUSE_BIT (SIZE_T_ONE)
2256 #define CINUSE_BIT (SIZE_T_TWO)
2257 #define FLAG4_BIT (SIZE_T_FOUR)
2258 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
2259 #define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2261 /* Head value for fenceposts */
2262 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
2264 /* extraction of fields from head words */
2265 #define cinuse(p) ((p)->head & CINUSE_BIT)
2266 #define pinuse(p) ((p)->head & PINUSE_BIT)
2267 #define flag4inuse(p) ((p)->head & FLAG4_BIT)
2268 #define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT)
2269 #define is_mmapped(p) (((p)->head & INUSE_BITS) == 0)
2271 #define chunksize(p) ((p)->head & ~(FLAG_BITS))
2273 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
2274 #define set_flag4(p) ((p)->head |= FLAG4_BIT)
2275 #define clear_flag4(p) ((p)->head &= ~FLAG4_BIT)
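
/*
  For example (8-byte alignment assumed): a 48-byte chunk that is in use
  and whose previous neighbor is also in use has
  head == 48|CINUSE_BIT|PINUSE_BIT == 0x33, so chunksize(p) == 48 and
  both cinuse(p) and pinuse(p) are nonzero; only mmapped chunks have
  both inuse bits clear, which is what is_mmapped(p) tests.
*/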
2277 /* Treat space at ptr +/- offset as a chunk */
2278 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
2279 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2281 /* Ptr to next or previous physical malloc_chunk. */
2282 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2283 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2285 /* extract next chunk's pinuse bit */
2286 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
2288 /* Get/set size at footer */
2289 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2290 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2292 /* Set size, pinuse bit, and foot */
2293 #define set_size_and_pinuse_of_free_chunk(p, s)\
2294 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2296 /* Set size, pinuse bit, foot, and clear next pinuse */
2297 #define set_free_with_pinuse(p, s, n)\
2298 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2300 /* Get the internal overhead associated with chunk p */
2301 #define overhead_for(p)\
2302 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2304 /* Return true if malloced space is not necessarily cleared */
2306 #define calloc_must_clear(p) (!is_mmapped(p))
2307 #else /* MMAP_CLEARS */
2308 #define calloc_must_clear(p) (1)
2309 #endif /* MMAP_CLEARS */
2311 /* ---------------------- Overlaid data structures ----------------------- */
2314 When chunks are not in use, they are treated as nodes of either
2317 "Small" chunks are stored in circular doubly-linked lists, and look
2320 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2321 | Size of previous chunk |
2322 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2323 `head:' | Size of chunk, in bytes |P|
2324 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2325 | Forward pointer to next chunk in list |
2326 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2327 | Back pointer to previous chunk in list |
2328 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2329 | Unused space (may be 0 bytes long) .
2332 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333 `foot:' | Size of chunk, in bytes |
2334 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2336 Larger chunks are kept in a form of bitwise digital trees (aka
2337 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
2338 free chunks greater than 256 bytes, their size doesn't impose any
2339 constraints on user chunk sizes. Each node looks like:
2341 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2342 | Size of previous chunk |
2343 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2344 `head:' | Size of chunk, in bytes |P|
2345 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2346 | Forward pointer to next chunk of same size |
2347 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2348 | Back pointer to previous chunk of same size |
2349 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2350 | Pointer to left child (child[0]) |
2351 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352 | Pointer to right child (child[1]) |
2353 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 | Pointer to parent |
2355 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2356 | bin index of this chunk |
2357 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2360 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2361 `foot:' | Size of chunk, in bytes |
2362 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2364 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
2365 of the same size are arranged in a circularly-linked list, with only
2366 the oldest chunk (the next to be used, in our FIFO ordering)
2367 actually in the tree. (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
2369 is inserted, it is linked off the existing node using pointers that
2370 work in the same way as fd/bk pointers of small chunks.
2372 Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
2376 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
2377 done by inspecting individual bits.
2379 Using these rules, each node's left subtree contains all smaller
2380 sizes than its right subtree. However, the node at the root of each
2381 subtree has no particular ordering relationship to either. (The
2382 dividing line between the subtree sizes is based on trie relation.)
2383 If we remove the last chunk of a given size from the interior of the
2384 tree, we need to replace it with a leaf node. The tree ordering
2385 rules permit a node to be replaced by any leaf below it.
2387 The smallest chunk in a tree (a common operation in a best-fit
2388 allocator) can be found by walking a path to the leftmost leaf in
2389 the tree. Unlike a usual binary tree, where we follow left child
2390 pointers until we reach a null, here we follow the right child
2391 pointer any time the left one is null, until we reach a leaf with
2392 both child pointers null. The smallest chunk in the tree will be
2393 somewhere along that path.
2395 The worst case number of steps to add, find, or remove a node is
2396 bounded by the number of bits differentiating chunks within
2397 bins. Under current bin calculations, this ranges from 6 up to 21
2398 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2399 is of course much better.
struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2418 /* A little helper macro for trees */
2419 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
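
/*
  A sketch of the "leftmost path" walk described above (treebin_at is
  defined later in this file); the actual lookup routines walk the same
  path but also remember the best-fitting candidate seen so far:

    tchunkptr t = *treebin_at(m, i);                  // root for bin i
    while (t != 0 && (t->child[0] != 0 || t->child[1] != 0))
      t = leftmost_child(t);                          // prefer left, else right
*/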
2421 /* ----------------------------- Segments -------------------------------- */
2424 Each malloc space may include non-contiguous segments, held in a
2425 list headed by an embedded malloc_segment record representing the
2426 top-most space. Segments also include flags holding properties of
2427 the space. Large chunks that are directly allocated by mmap are not
2428 included in this list. They are instead independently created and
2429 destroyed without otherwise keeping track of them.
2431 Segment management mainly comes into play for spaces allocated by
2432 MMAP. Any call to MMAP might or might not return memory that is
2433 adjacent to an existing segment. MORECORE normally contiguously
2434 extends the current space, so this space is almost always adjacent,
2435 which is simpler and faster to deal with. (This is why MORECORE is
2436 used preferentially to MMAP when both are available -- see
2437 sys_alloc.) When allocating using MMAP, we don't use any of the
2438 hinting mechanisms (inconsistently) supported in various
2439 implementations of unix mmap, or distinguish reserving from
2440 committing memory. Instead, we just ask for space, and exploit
2441 contiguity when we get it. It is probably possible to do
2442 better than this on some systems, but no general scheme seems
2443 to be significantly better.
2445 Management entails a simpler variant of the consolidation scheme
2446 used for chunks to reduce fragmentation -- new adjacent memory is
2447 normally prepended or appended to an existing segment. However,
2448 there are limitations compared to chunk consolidation that mostly
2449 reflect the fact that segment processing is relatively infrequent
2450 (occurring only when getting memory from system) and that we
2451 don't expect to have huge numbers of segments:
2453 * Segments are not indexed, so traversal requires linear scans. (It
2454 would be possible to index these, but is not worth the extra
2455 overhead and complexity for most programs on most platforms.)
2456 * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    separate segments.
2460 Except for the top-most segment of an mstate, each segment record
2461 is kept at the tail of its segment. Segments are added by pushing
2462 segment records onto the list headed by &mstate.seg for the
2465 Segment flags control allocation/merge/deallocation policies:
2466 * If EXTERN_BIT set, then we did not allocate this segment,
2467 and so should not try to deallocate or merge with others.
2468 (This currently holds only for the initial segment passed
2469 into create_mspace_with_base.)
2470 * If USE_MMAP_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
2473 * If neither bit is set, then the segment was obtained using
2474 MORECORE so can be merged with surrounding MORECORE'd segments
2475 and deallocated/trimmed using MORECORE with negative arguments.
struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;
2491 /* ---------------------------- malloc_state ----------------------------- */
2494 A malloc_state holds all of the bookkeeping for a space.
2495 The main fields are:
2498 The topmost chunk of the currently active segment. Its size is
2499 cached in topsize. The actual size of topmost space is
2500 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2501 fenceposts and segment records if necessary when getting more
2502 space from the system. The size at which to autotrim top is
2503 cached from mparams in trim_check, except that it is disabled if
2506 Designated victim (dv)
2507 This is the preferred chunk for servicing small requests that
2508 don't have exact fits. It is normally the chunk split off most
2509 recently to service another small request. Its size is cached in
2510 dvsize. The link fields of this chunk are not maintained since it
2511 is not kept in a bin.
2514 An array of bin headers for free chunks. These bins hold chunks
2515 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2516 chunks of all the same size, spaced 8 bytes apart. To simplify
2517 use in double-linked lists, each bin header acts as a malloc_chunk
2518 pointing to the real first node, if it exists (else pointing to
2519 itself). This avoids special-casing for headers. But to avoid
2520 waste, we allocate only the fd/bk pointers of bins, and then use
2521 repositioning tricks to treat these as the fields of a chunk.
2524 Treebins are pointers to the roots of trees holding a range of
2525 sizes. There are 2 equally spaced treebins for each power of two
    from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything larger.

    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
2532 clears the bit when empty. Bit operations are then used to avoid
2533 bin-by-bin searching -- nearly all "search" is done without ever
2534 looking at bins that won't be selected. The bit maps
2535 conservatively use 32 bits per map word, even if on 64bit system.
2536 For a good description of some of the bit-based techniques used
2537 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2538 supplement at http://hackersdelight.org/). Many of these are
2539 intended to reduce the branchiness of paths through malloc etc, as
2540 well as to reduce the number of memory locations read or written.
2543 A list of segments headed by an embedded malloc_segment record
2544 representing the initial space.
2546 Address check support
2547 The least_addr field is the least address ever obtained from
2548 MORECORE or MMAP. Attempted frees and reallocs of any address less
2549 than this are trapped (unless INSECURE is defined).
    A cross-check field that should always hold the same value as mparams.magic.
2554 Max allowed footprint
2555 The maximum allowed bytes to allocate from system (zero means no limit)
2558 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2561 Each space keeps track of current and maximum system memory
2562 obtained via MORECORE or MMAP.
2565 Fields holding the amount of unused topmost memory that should trigger
2566 trimming, and a counter to force periodic scanning to release unused
2567 non-topmost segments.
2570 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2571 around every public call using this mspace.
2574 A void* pointer and a size_t field that can be used to help implement
2575 extensions to this malloc.
2578 /* Bin types, widths and sizes */
2579 #define NSMALLBINS (32U)
2580 #define NTREEBINS (32U)
2581 #define SMALLBIN_SHIFT (3U)
2582 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2583 #define TREEBIN_SHIFT (8U)
2584 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2585 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2586 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     release_checks;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  size_t     footprint_limit; /* zero means no limit */
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
  void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};

typedef struct malloc_state* mstate;
2615 /* ------------- Global malloc_state and malloc_params ------------------- */
2618 malloc_params holds global properties, including those that can be
2619 dynamically set using mallopt. There is a single instance, mparams,
2620 initialized in init_mparams. Note that the non-zeroness of "magic"
2621 also serves as an initialization flag.
struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;
2635 /* Ensure mparams initialized */
2636 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
#if !ONLY_MSPACES

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
2643 #define is_global(M) ((M) == &_gm_)
2645 #endif /* !ONLY_MSPACES */
2647 #define is_initialized(M) ((M)->top != 0)
2649 /* -------------------------- system alloc setup ------------------------- */
2651 /* Operations on mflags */
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#if USE_LOCKS
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
#else
#define disable_lock(M)
#endif

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#if HAVE_MMAP
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
#else
#define disable_mmap(M)
#endif
2669 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2670 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2672 #define set_lock(M,L)\
2673 ((M)->mflags = (L)?\
2674 ((M)->mflags | USE_LOCK_BIT) :\
2675 ((M)->mflags & ~USE_LOCK_BIT))
2677 /* page-align a size */
2678 #define page_align(S)\
2679 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2681 /* granularity-align a size */
2682 #define granularity_align(S)\
2683 (((S) + (mparams.granularity - SIZE_T_ONE))\
2684 & ~(mparams.granularity - SIZE_T_ONE))
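
/*
  For example, with a 4096-byte page size: page_align(5000) == 8192 and
  page_align(4096) == 4096; granularity_align works the same way using
  mparams.granularity instead of mparams.page_size.
*/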
2687 /* For mmap, use granularity alignment on windows, else page-align */
2689 #define mmap_align(S) granularity_align(S)
2691 #define mmap_align(S) page_align(S)
2694 /* For sys_alloc, enough padding to ensure can malloc request on success */
2695 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2697 #define is_page_aligned(S)\
2698 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2699 #define is_granularity_aligned(S)\
2700 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2702 /* True if segment S holds address A */
2703 #define segment_holds(S, A)\
2704 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2706 /* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}
/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}
2728 #ifndef MORECORE_CANNOT_TRIM
2729 #define should_trim(M,s) ((s) > (M)->trim_check)
2730 #else /* MORECORE_CANNOT_TRIM */
2731 #define should_trim(M,s) (0)
2732 #endif /* MORECORE_CANNOT_TRIM */
2735 TOP_FOOT_SIZE is padding at the end of a segment, including space
2736 that may be needed to place segment records and fenceposts when new
2737 noncontiguous segments are added.
2739 #define TOP_FOOT_SIZE\
2740 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2743 /* ------------------------------- Hooks -------------------------------- */
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.

#if USE_LOCKS
#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */
2767 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2768 USAGE_ERROR_ACTION is triggered on detected bad frees and
2769 reallocs. The argument p is an address that might have triggered the
2770 fault. It is ignored by the two predefined actions, but might be
2771 useful in custom actions that try to help diagnose errors.
2774 #if PROCEED_ON_ERROR
2776 /* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;
2779 /* default corruption action */
static void reset_on_error(mstate m);
2782 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2783 #define USAGE_ERROR_ACTION(m, p)
2785 #else /* PROCEED_ON_ERROR */
2787 #ifndef CORRUPTION_ERROR_ACTION
2788 #define CORRUPTION_ERROR_ACTION(m) ABORT
2789 #endif /* CORRUPTION_ERROR_ACTION */
2791 #ifndef USAGE_ERROR_ACTION
2792 #define USAGE_ERROR_ACTION(m,p) ABORT
2793 #endif /* USAGE_ERROR_ACTION */
2795 #endif /* PROCEED_ON_ERROR */
2798 /* -------------------------- Debugging setup ---------------------------- */
2802 #define check_free_chunk(M,P)
2803 #define check_inuse_chunk(M,P)
2804 #define check_malloced_chunk(M,P,N)
2805 #define check_mmapped_chunk(M,P)
2806 #define check_malloc_state(M)
2807 #define check_top_chunk(M,P)
2810 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2811 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2812 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2813 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2814 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2815 #define check_malloc_state(M) do_check_malloc_state(M)
static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
2831 /* ---------------------------- Indexing Bins ---------------------------- */
2833 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2834 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT)
2835 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2836 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2838 /* addressing by index. See above about smallbin repositioning */
2839 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2840 #define treebin_at(M,i) (&((M)->treebins[i]))
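
/*
  For example, with SMALLBIN_SHIFT == 3: a 40-byte chunk has
  small_index(40) == 5 and small_index2size(5) == 40, and is_small(40)
  holds because 40 >> 3 == 5 < NSMALLBINS.
*/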
2842 /* assign tree index for size S to variable I. Use x86 asm if possible */
2843 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2844 #define compute_tree_index(S, I)\
2846 unsigned int X = S >> TREEBIN_SHIFT;\
2849 else if (X > 0xFFFF)\
2852 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2853 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2857 #elif defined (__INTEL_COMPILER)
2858 #define compute_tree_index(S, I)\
2860 size_t X = S >> TREEBIN_SHIFT;\
2863 else if (X > 0xFFFF)\
2866 unsigned int K = _bit_scan_reverse (X); \
2867 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2871 #elif defined(_MSC_VER) && _MSC_VER>=1300
2872 #define compute_tree_index(S, I)\
2874 size_t X = S >> TREEBIN_SHIFT;\
2877 else if (X > 0xFFFF)\
2881 _BitScanReverse((DWORD *) &K, (DWORD) X);\
2882 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2887 #define compute_tree_index(S, I)\
2889 size_t X = S >> TREEBIN_SHIFT;\
2892 else if (X > 0xFFFF)\
2895 unsigned int Y = (unsigned int)X;\
2896 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2897 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2899 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2900 K = 14 - N + ((Y <<= K) >> 15);\
2901 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2906 /* Bit representing maximum resolved size in a treebin at i */
2907 #define bit_for_tree_index(i) \
2908 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2910 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2911 #define leftshift_for_tree_index(i) \
2912 ((i == NTREEBINS-1)? 0 : \
2913 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2915 /* The size of the smallest chunk held in bin with index i */
2916 #define minsize_for_tree_index(i) \
2917 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2918 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
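
/*
  For example, with TREEBIN_SHIFT == 8: treebin 0 holds sizes
  0x100..0x17F, treebin 1 holds 0x180..0x1FF, treebin 2 holds
  0x200..0x2FF, and minsize_for_tree_index(1) == 0x180 -- two equally
  spaced bins per power-of-two range, as described earlier.
*/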
2921 /* ------------------------ Operations on bin maps ----------------------- */
2923 /* bit corresponding to given index */
2924 #define idx2bit(i) ((binmap_t)(1) << (i))
2926 /* Mark/Clear bits with given index */
2927 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2928 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2929 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2931 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2932 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2933 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2935 /* isolate the least set bit of a bitmap */
2936 #define least_bit(x) ((x) & -(x))
2938 /* mask with all bits to left of least bit of x on */
2939 #define left_bits(x) ((x<<1) | -(x<<1))
2941 /* mask with all bits to left of or equal to least bit of x on */
2942 #define same_or_left_bits(x) ((x) | -(x))
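
/*
  For example, if smallmap == 0x58 (bins 3, 4 and 6 non-empty), then
  least_bit(0x58) == 0x08 picks out bin 3, and left_bits(0x08) ==
  0xFFFFFFF0 masks every bin above bin 3 -- which is how the allocator
  jumps straight to the next non-empty bin at least as large as a request.
*/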
2944 /* index corresponding to given bit. Use x86 asm if possible */
2946 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2947 #define compute_bit2idx(X, I)\
2950 J = __builtin_ctz(X); \
2954 #elif defined (__INTEL_COMPILER)
2955 #define compute_bit2idx(X, I)\
2958 J = _bit_scan_forward (X); \
2962 #elif defined(_MSC_VER) && _MSC_VER>=1300
2963 #define compute_bit2idx(X, I)\
2966 _BitScanForward((DWORD *) &J, X);\
2970 #elif USE_BUILTIN_FFS
2971 #define compute_bit2idx(X, I) I = ffs(X)-1
2974 #define compute_bit2idx(X, I)\
2976 unsigned int Y = X - 1;\
2977 unsigned int K = Y >> (16-4) & 16;\
2978 unsigned int N = K; Y >>= K;\
2979 N += K = Y >> (8-3) & 8; Y >>= K;\
2980 N += K = Y >> (4-2) & 4; Y >>= K;\
2981 N += K = Y >> (2-1) & 2; Y >>= K;\
2982 N += K = Y >> (1-0) & 1; Y >>= K;\
2983 I = (bindex_t)(N + Y);\
2988 /* ----------------------- Runtime Check Support ------------------------- */
2991 For security, the main invariant is that malloc/free/etc never
2992 writes to a static address other than malloc_state, unless static
2993 malloc_state itself has been corrupted, which cannot occur via
2994 malloc (because of these checks). In essence this means that we
2995 believe all pointers, sizes, maps etc held in malloc_state, but
2996 check all of those linked or offsetted from other embedded data
2997 structures. These checks are interspersed with main code in a way
2998 that tends to minimize their run-time cost.
3000 When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
3002 that the mstate controlling malloc/free is intact. This is a
3003 streamlined version of the approach described by William Robertson
3004 et al in "Run-time Detection of Heap-based Overflows" LISA'03
3005 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
3006 of an inuse chunk holds the xor of its mstate and a random seed,
3007 that is checked upon calls to free() and realloc(). This is
  (probabilistically) unguessable from outside the program, but can be
3009 computed by any code successfully malloc'ing any chunk, so does not
3010 itself provide protection against code that has already broken
3011 security through some other means. Unlike Robertson et al, we
3012 always dynamically check addresses of all offset chunks (previous,
3013 next, etc). This turns out to be cheaper than relying on hashes.
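   (Illustrative note, not part of the original text.)  The round trip is
   just a double xor with the random seed: mark_inuse_foot stores
   ((size_t)m ^ mparams.magic) in the footer, and get_mstate_for xors the
   footer with mparams.magic again, recovering m for a well-formed chunk.
   A corrupted footer instead yields a bogus mstate pointer, which then
   fails the ok_magic() check made before it is trusted.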
3017 /* Check if address a is at least as high as any from MORECORE or MMAP */
3018 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3019 /* Check if address of next chunk n is higher than base chunk p */
3020 #define ok_next(p, n) ((char*)(p) < (char*)(n))
3021 /* Check if p has inuse status */
3022 #define ok_inuse(p) is_inuse(p)
3023 /* Check if p has its pinuse bit on */
3024 #define ok_pinuse(p) pinuse(p)
3026 #else /* !INSECURE */
3027 #define ok_address(M, a) (1)
3028 #define ok_next(b, n) (1)
3029 #define ok_inuse(p) (1)
3030 #define ok_pinuse(p) (1)
3031 #endif /* !INSECURE */
3033 #if (FOOTERS && !INSECURE)
3034 /* Check if (alleged) mstate m has expected magic field */
3035 #define ok_magic(M) ((M)->magic == mparams.magic)
3036 #else /* (FOOTERS && !INSECURE) */
3037 #define ok_magic(M) (1)
3038 #endif /* (FOOTERS && !INSECURE) */
3040 /* In gcc, use __builtin_expect to minimize impact of checks */
3042 #if defined(__GNUC__) && __GNUC__ >= 3
3043 #define RTCHECK(e) __builtin_expect(e, 1)
3045 #define RTCHECK(e) (e)
3047 #else /* !INSECURE */
3048 #define RTCHECK(e) (1)
3049 #endif /* !INSECURE */
3051 /* macros to set up inuse chunks with or without footers */
3055 #define mark_inuse_foot(M,p,s)
3057 /* Macros for setting head/foot of non-mmapped chunks */
3059 /* Set cinuse bit and pinuse bit of next chunk */
3060 #define set_inuse(M,p,s)\
3061 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3062 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3064 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3065 #define set_inuse_and_pinuse(M,p,s)\
3066 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3067 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3069 /* Set size, cinuse and pinuse bit of this chunk */
3070 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3071 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3075 /* Set foot of inuse chunk to be xor of mstate and seed */
3076 #define mark_inuse_foot(M,p,s)\
3077 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3079 #define get_mstate_for(p)\
3080 ((mstate)(((mchunkptr)((char*)(p) +\
3081 (chunksize(p))))->prev_foot ^ mparams.magic))
3083 #define set_inuse(M,p,s)\
3084 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3085 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3086 mark_inuse_foot(M,p,s))
3088 #define set_inuse_and_pinuse(M,p,s)\
3089 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3090 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3091 mark_inuse_foot(M,p,s))
3093 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3094 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3095 mark_inuse_foot(M, p, s))
3097 #endif /* !FOOTERS */
3099 /* ---------------------------- setting mparams -------------------------- */
3102 static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
3103 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3104 static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
3105 #endif /* LOCK_AT_FORK */
3107 /* Initialize mparams */
3108 static int init_mparams(void) {
3109 #ifdef NEED_GLOBAL_LOCK_INIT
3110   if (malloc_global_mutex_status <= 0)
3111     init_malloc_global_mutex();
3114   ACQUIRE_MALLOC_GLOBAL_LOCK();
3115   if (mparams.magic == 0) {
3121     psize = malloc_getpagesize;
3122     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3125     SYSTEM_INFO system_info;
3126     GetSystemInfo(&system_info);
3127     psize = system_info.dwPageSize;
3128     gsize = ((DEFAULT_GRANULARITY != 0)?
3129              DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3133 /* Sanity-check configuration:
3134 size_t must be unsigned and as wide as pointer type.
3135 ints must be at least 4 bytes.
3136 alignment must be at least 8.
3137 Alignment, min chunk size, and page size must all be powers of 2.
3139 if ((sizeof(size_t) != sizeof(char*)) ||
3140         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3141         (sizeof(int) < 4)  ||
3142         (MALLOC_ALIGNMENT < (size_t)8U) ||
3143         ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3144         ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3145         ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3146         ((psize            & (psize-SIZE_T_ONE))            != 0))
3148     mparams.granularity = gsize;
3149     mparams.page_size = psize;
3150     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3151     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3152 #if MORECORE_CONTIGUOUS
3153     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3154 #else /* MORECORE_CONTIGUOUS */
3155     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3156 #endif /* MORECORE_CONTIGUOUS */
3159 /* Set up lock for main malloc area */
3160     gm->mflags = mparams.default_mflags;
3161     (void)INITIAL_LOCK(&gm->mutex);
3164     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3170       unsigned char buf[sizeof(size_t)];
3171       /* Try to use /dev/urandom, else fall back on using time */
3172       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3173           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3174         magic = *((size_t *) buf);
3178 #endif /* USE_DEV_RANDOM */
3180       magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3181 #elif defined(LACKS_TIME_H)
3182       magic = (size_t)&magic ^ (size_t)0x55555555U;
3184       magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3186     magic |= (size_t)8U;    /* ensure nonzero */
3187     magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3188     /* Until memory modes commonly available, use volatile-write */
3189     (*(volatile size_t *)(&(mparams.magic))) = magic;
3193 RELEASE_MALLOC_GLOBAL_LOCK();
3197 /* support for mallopt */
3198 static int change_mparam(int param_number, int value) {
3200   ensure_initialization();
3201   val = (value == -1)? MAX_SIZE_T : (size_t)value;
3202   switch(param_number) {
3203   case M_TRIM_THRESHOLD:
3204     mparams.trim_threshold = val;
3207     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3208       mparams.granularity = val;
3213   case M_MMAP_THRESHOLD:
3214     mparams.mmap_threshold = val;
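/* (Illustrative note, not in the original.)  Through the dlmallopt wrapper
   defined later, e.g. dlmallopt(M_TRIM_THRESHOLD, -1) stores MAX_SIZE_T and
   so effectively disables automatic trimming, while a granularity request
   such as dlmallopt(M_GRANULARITY, 64*1024) is accepted only when the value
   is a power of two no smaller than the configured page size. */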
3222 /* ------------------------- Debugging Support --------------------------- */
3224 /* Check properties of any chunk, whether free, inuse, mmapped etc */
3225 static void do_check_any_chunk(mstate m, mchunkptr p) {
3226   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3227   assert(ok_address(m, p));
3230 /* Check properties of top chunk */
3231 static void do_check_top_chunk(mstate m, mchunkptr p) {
3232   msegmentptr sp = segment_holding(m, (char*)p);
3233   size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3235   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3236   assert(ok_address(m, p));
3237   assert(sz == m->topsize);
3239   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3241   assert(!pinuse(chunk_plus_offset(p, sz)));
3244 /* Check properties of (inuse) mmapped chunks */
3245 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3246   size_t  sz = chunksize(p);
3247   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3248   assert(is_mmapped(p));
3249   assert(use_mmap(m));
3250   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3251   assert(ok_address(m, p));
3252   assert(!is_small(sz));
3253   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3254   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3255   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3258 /* Check properties of inuse chunks */
3259 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3260   do_check_any_chunk(m, p);
3261   assert(is_inuse(p));
3262   assert(next_pinuse(p));
3263   /* If not pinuse and not mmapped, previous chunk has OK offset */
3264   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3266     do_check_mmapped_chunk(m, p);
3269 /* Check properties of free chunks */
3270 static void do_check_free_chunk(mstate m, mchunkptr p) {
3271   size_t sz = chunksize(p);
3272   mchunkptr next = chunk_plus_offset(p, sz);
3273   do_check_any_chunk(m, p);
3274   assert(!is_inuse(p));
3275   assert(!next_pinuse(p));
3276   assert (!is_mmapped(p));
3277   if (p != m->dv && p != m->top) {
3278     if (sz >= MIN_CHUNK_SIZE) {
3279       assert((sz & CHUNK_ALIGN_MASK) == 0);
3280       assert(is_aligned(chunk2mem(p)));
3281       assert(next->prev_foot == sz);
3283       assert (next == m->top || is_inuse(next));
3284       assert(p->fd->bk == p);
3285       assert(p->bk->fd == p);
3287     else /* markers are always of size SIZE_T_SIZE */
3288       assert(sz == SIZE_T_SIZE);
3292 /* Check properties of malloced chunks at the point they are malloced */
3293 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3295     mchunkptr p = mem2chunk(mem);
3296     size_t sz = p->head & ~INUSE_BITS;
3297     do_check_inuse_chunk(m, p);
3298     assert((sz & CHUNK_ALIGN_MASK) == 0);
3299     assert(sz >= MIN_CHUNK_SIZE);
3301     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3302     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3306 /* Check a tree and its subtrees. */
3307 static void do_check_tree(mstate m
, tchunkptr t
) {
3310 bindex_t tindex
= t
->index
;
3311 size_t tsize
= chunksize(t
);
3313 compute_tree_index(tsize
, idx
);
3314 assert(tindex
== idx
);
3315 assert(tsize
>= MIN_LARGE_SIZE
);
3316 assert(tsize
>= minsize_for_tree_index(idx
));
3317 assert((idx
== NTREEBINS
-1) || (tsize
< minsize_for_tree_index((idx
+1))));
3319 do { /* traverse through chain of same-sized nodes */
3320 do_check_any_chunk(m
, ((mchunkptr
)u
));
3321 assert(u
->index
== tindex
);
3322 assert(chunksize(u
) == tsize
);
3323 assert(!is_inuse(u
));
3324 assert(!next_pinuse(u
));
3325 assert(u
->fd
->bk
== u
);
3326 assert(u
->bk
->fd
== u
);
3327 if (u
->parent
== 0) {
3328 assert(u
->child
[0] == 0);
3329 assert(u
->child
[1] == 0);
3332 assert(head
== 0); /* only one node on chain has parent */
3334 assert(u
->parent
!= u
);
3335 assert (u
->parent
->child
[0] == u
||
3336 u
->parent
->child
[1] == u
||
3337 *((tbinptr
*)(u
->parent
)) == u
);
3338 if (u
->child
[0] != 0) {
3339 assert(u
->child
[0]->parent
== u
);
3340 assert(u
->child
[0] != u
);
3341 do_check_tree(m
, u
->child
[0]);
3343 if (u
->child
[1] != 0) {
3344 assert(u
->child
[1]->parent
== u
);
3345 assert(u
->child
[1] != u
);
3346 do_check_tree(m
, u
->child
[1]);
3348 if (u
->child
[0] != 0 && u
->child
[1] != 0) {
3349 assert(chunksize(u
->child
[0]) < chunksize(u
->child
[1]));
3357 /* Check all the chunks in a treebin. */
3358 static void do_check_treebin(mstate m
, bindex_t i
) {
3359 tbinptr
* tb
= treebin_at(m
, i
);
3361 int empty
= (m
->treemap
& (1U << i
)) == 0;
3365 do_check_tree(m
, t
);
3368 /* Check all the chunks in a smallbin. */
3369 static void do_check_smallbin(mstate m
, bindex_t i
) {
3370 sbinptr b
= smallbin_at(m
, i
);
3371 mchunkptr p
= b
->bk
;
3372 unsigned int empty
= (m
->smallmap
& (1U << i
)) == 0;
3376 for (; p
!= b
; p
= p
->bk
) {
3377 size_t size
= chunksize(p
);
3379 /* each chunk claims to be free */
3380 do_check_free_chunk(m
, p
);
3381 /* chunk belongs in bin */
3382 assert(small_index(size
) == i
);
3383 assert(p
->bk
== b
|| chunksize(p
->bk
) == chunksize(p
));
3384 /* chunk is followed by an inuse chunk */
3386 if (q
->head
!= FENCEPOST_HEAD
)
3387 do_check_inuse_chunk(m
, q
);
3392 /* Find x in a bin. Used in other check functions. */
3393 static int bin_find(mstate m
, mchunkptr x
) {
3394 size_t size
= chunksize(x
);
3395 if (is_small(size
)) {
3396 bindex_t sidx
= small_index(size
);
3397 sbinptr b
= smallbin_at(m
, sidx
);
3398 if (smallmap_is_marked(m
, sidx
)) {
3403 } while ((p
= p
->fd
) != b
);
3408 compute_tree_index(size
, tidx
);
3409 if (treemap_is_marked(m
, tidx
)) {
3410 tchunkptr t
= *treebin_at(m
, tidx
);
3411 size_t sizebits
= size
<< leftshift_for_tree_index(tidx
);
3412 while (t
!= 0 && chunksize(t
) != size
) {
3413 t
= t
->child
[(sizebits
>> (SIZE_T_BITSIZE
-SIZE_T_ONE
)) & 1];
3419 if (u
== (tchunkptr
)x
)
3421 } while ((u
= u
->fd
) != t
);
3428 /* Traverse each chunk and check it; return total */
3429 static size_t traverse_and_check(mstate m
) {
3431 if (is_initialized(m
)) {
3432 msegmentptr s
= &m
->seg
;
3433 sum
+= m
->topsize
+ TOP_FOOT_SIZE
;
3435 mchunkptr q
= align_as_chunk(s
->base
);
3436 mchunkptr lastq
= 0;
3438 while (segment_holds(s
, q
) &&
3439 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3440 sum
+= chunksize(q
);
3442 assert(!bin_find(m
, q
));
3443 do_check_inuse_chunk(m
, q
);
3446 assert(q
== m
->dv
|| bin_find(m
, q
));
3447 assert(lastq
== 0 || is_inuse(lastq
)); /* Not 2 consecutive free */
3448 do_check_free_chunk(m
, q
);
3460 /* Check all properties of malloc_state. */
3461 static void do_check_malloc_state(mstate m
) {
3465 for (i
= 0; i
< NSMALLBINS
; ++i
)
3466 do_check_smallbin(m
, i
);
3467 for (i
= 0; i
< NTREEBINS
; ++i
)
3468 do_check_treebin(m
, i
);
3470 if (m
->dvsize
!= 0) { /* check dv chunk */
3471 do_check_any_chunk(m
, m
->dv
);
3472 assert(m
->dvsize
== chunksize(m
->dv
));
3473 assert(m
->dvsize
>= MIN_CHUNK_SIZE
);
3474 assert(bin_find(m
, m
->dv
) == 0);
3477 if (m
->top
!= 0) { /* check top chunk */
3478 do_check_top_chunk(m
, m
->top
);
3479 /*assert(m->topsize == chunksize(m->top)); redundant */
3480 assert(m
->topsize
> 0);
3481 assert(bin_find(m
, m
->top
) == 0);
3484 total
= traverse_and_check(m
);
3485 assert(total
<= m
->footprint
);
3486 assert(m
->footprint
<= m
->max_footprint
);
3490 /* ----------------------------- statistics ------------------------------ */
3493 static struct mallinfo
internal_mallinfo(mstate m
) {
3494 struct mallinfo nm
= { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3495 ensure_initialization();
3496 if (!PREACTION(m
)) {
3497 check_malloc_state(m
);
3498 if (is_initialized(m
)) {
3499 size_t nfree
= SIZE_T_ONE
; /* top always free */
3500 size_t mfree
= m
->topsize
+ TOP_FOOT_SIZE
;
3502 msegmentptr s
= &m
->seg
;
3504 mchunkptr q
= align_as_chunk(s
->base
);
3505 while (segment_holds(s
, q
) &&
3506 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3507 size_t sz
= chunksize(q
);
3520 nm
.hblkhd
= m
->footprint
- sum
;
3521 nm
.usmblks
= m
->max_footprint
;
3522 nm
.uordblks
= m
->footprint
- mfree
;
3523 nm
.fordblks
= mfree
;
3524 nm
.keepcost
= m
->topsize
;
3531 #endif /* !NO_MALLINFO */
3533 #if !NO_MALLOC_STATS
3534 static void internal_malloc_stats(mstate m
) {
3535 ensure_initialization();
3536 if (!PREACTION(m
)) {
3540 check_malloc_state(m
);
3541 if (is_initialized(m
)) {
3542 msegmentptr s
= &m
->seg
;
3543 maxfp
= m
->max_footprint
;
3545 used
= fp
- (m
->topsize
+ TOP_FOOT_SIZE
);
3548 mchunkptr q
= align_as_chunk(s
->base
);
3549 while (segment_holds(s
, q
) &&
3550 q
!= m
->top
&& q
->head
!= FENCEPOST_HEAD
) {
3552 used
-= chunksize(q
);
3558 POSTACTION(m
); /* drop lock */
3559 fprintf(stderr
, "max system bytes = %10lu\n", (unsigned long)(maxfp
));
3560 fprintf(stderr
, "system bytes = %10lu\n", (unsigned long)(fp
));
3561 fprintf(stderr
, "in use bytes = %10lu\n", (unsigned long)(used
));
3564 #endif /* NO_MALLOC_STATS */
3566 /* ----------------------- Operations on smallbins ----------------------- */
3569 Various forms of linking and unlinking are defined as macros. Even
3570 the ones for trees, which are very long but have very short typical
3571 paths. This is ugly but reduces reliance on inlining support of compilers.
3575 /* Link a free chunk into a smallbin */
3576 #define insert_small_chunk(M, P, S) {\
3577 bindex_t I = small_index(S);\
3578 mchunkptr B = smallbin_at(M, I);\
3580 assert(S >= MIN_CHUNK_SIZE);\
3581 if (!smallmap_is_marked(M, I))\
3582 mark_smallmap(M, I);\
3583 else if (RTCHECK(ok_address(M, B->fd)))\
3586 CORRUPTION_ERROR_ACTION(M);\
3594 /* Unlink a chunk from a smallbin */
3595 #define unlink_small_chunk(M, P, S) {\
3596 mchunkptr F = P->fd;\
3597 mchunkptr B = P->bk;\
3598 bindex_t I = small_index(S);\
3601 assert(chunksize(P) == small_index2size(I));\
3602 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3604 clear_smallmap(M, I);\
3606 else if (RTCHECK(B == smallbin_at(M,I) ||\
3607 (ok_address(M, B) && B->fd == P))) {\
3612 CORRUPTION_ERROR_ACTION(M);\
3616 CORRUPTION_ERROR_ACTION(M);\
3620 /* Unlink the first chunk from a smallbin */
3621 #define unlink_first_small_chunk(M, B, P, I) {\
3622 mchunkptr F = P->fd;\
3625 assert(chunksize(P) == small_index2size(I));\
3627 clear_smallmap(M, I);\
3629 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3634 CORRUPTION_ERROR_ACTION(M);\
3638 /* Replace dv node, binning the old one */
3639 /* Used only when dvsize known to be small */
3640 #define replace_dv(M, P, S) {\
3641 size_t DVS = M->dvsize;\
3642 assert(is_small(DVS));\
3644 mchunkptr DV = M->dv;\
3645 insert_small_chunk(M, DV, DVS);\
3651 /* ------------------------- Operations on trees ------------------------- */
3653 /* Insert chunk into tree */
3654 #define insert_large_chunk(M, X, S) {\
3657 compute_tree_index(S, I);\
3658 H = treebin_at(M, I);\
3660 X->child[0] = X->child[1] = 0;\
3661 if (!treemap_is_marked(M, I)) {\
3662 mark_treemap(M, I);\
3664 X->parent = (tchunkptr)H;\
3669 size_t K = S << leftshift_for_tree_index(I);\
3671 if (chunksize(T) != S) {\
3672 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3676 else if (RTCHECK(ok_address(M, C))) {\
3683 CORRUPTION_ERROR_ACTION(M);\
3688 tchunkptr F = T->fd;\
3689 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3697 CORRUPTION_ERROR_ACTION(M);\
3708 1. If x is a chained node, unlink it from its same-sized fd/bk links
3709 and choose its bk node as its replacement.
3710 2. If x was the last node of its size, but not a leaf node, it must
3711 be replaced with a leaf node (not merely one with an open left or
3712 right), to make sure that lefts and rights of descendants
3713 correspond properly to bit masks. We use the rightmost descendant
3714 of x. We could use any other leaf, but this is easy to locate and
3715 tends to counteract removal of leftmosts elsewhere, and so keeps
3716 paths shorter than minimally guaranteed. This doesn't loop much
3717 because on average a node in a tree is near the bottom.
3718 3. If x is the base of a chain (i.e., has parent links) relink
3719 x's parent and children to x's replacement (or null if none).
3722 #define unlink_large_chunk(M, X) {\
3723 tchunkptr XP = X->parent;\
3726 tchunkptr F = X->fd;\
3728 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3733 CORRUPTION_ERROR_ACTION(M);\
3738 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3739 ((R = *(RP = &(X->child[0]))) != 0)) {\
3741 while ((*(CP = &(R->child[1])) != 0) ||\
3742 (*(CP = &(R->child[0])) != 0)) {\
3745 if (RTCHECK(ok_address(M, RP)))\
3748 CORRUPTION_ERROR_ACTION(M);\
3753 tbinptr* H = treebin_at(M, X->index);\
3755 if ((*H = R) == 0) \
3756 clear_treemap(M, X->index);\
3758 else if (RTCHECK(ok_address(M, XP))) {\
3759 if (XP->child[0] == X) \
3765 CORRUPTION_ERROR_ACTION(M);\
3767 if (RTCHECK(ok_address(M, R))) {\
3770 if ((C0 = X->child[0]) != 0) {\
3771 if (RTCHECK(ok_address(M, C0))) {\
3776 CORRUPTION_ERROR_ACTION(M);\
3778 if ((C1 = X->child[1]) != 0) {\
3779 if (RTCHECK(ok_address(M, C1))) {\
3784 CORRUPTION_ERROR_ACTION(M);\
3788 CORRUPTION_ERROR_ACTION(M);\
3793 /* Relays to large vs small bin operations */
3795 #define insert_chunk(M, P, S)\
3796 if (is_small(S)) insert_small_chunk(M, P, S)\
3797 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3799 #define unlink_chunk(M, P, S)\
3800 if (is_small(S)) unlink_small_chunk(M, P, S)\
3801 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3804 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3807 #define internal_malloc(m, b) mspace_malloc(m, b)
3808 #define internal_free(m, mem) mspace_free(m,mem);
3809 #else /* ONLY_MSPACES */
3811 #define internal_malloc(m, b)\
3812 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3813 #define internal_free(m, mem)\
3814 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3816 #define internal_malloc(m, b) dlmalloc(b)
3817 #define internal_free(m, mem) dlfree(mem)
3818 #endif /* MSPACES */
3819 #endif /* ONLY_MSPACES */
3821 /* ----------------------- Direct-mmapping chunks ----------------------- */
3824 Directly mmapped chunks are set up with an offset to the start of
3825 the mmapped region stored in the prev_foot field of the chunk. This
3826 allows reconstruction of the required argument to MUNMAP when freed,
3827 and also allows adjustment of the returned chunk to meet alignment
3828 requirements (especially in memalign).
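  A rough picture of the arrangement (an illustration, not from the
  original text):

      mm (mmap base)        p = mm + offset
      |<----- offset ------>|<------------- psize ------------->| fenceposts
                            prev_foot = offset
                            head holds the chunk size psize

  so on release, (char*)p - p->prev_foot recovers the mapping base and
  psize + offset + MMAP_FOOT_PAD recovers the length handed back to MUNMAP,
  exactly as the mmap branches of free/dispose_chunk below compute it.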
3831 /* Malloc using mmap */
3832 static void* mmap_alloc(mstate m, size_t nb) {
3833   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3834   if (m->footprint_limit != 0) {
3835     size_t fp = m->footprint + mmsize;
3836     if (fp <= m->footprint || fp > m->footprint_limit)
3839   if (mmsize > nb) {     /* Check for wrap around 0 */
3840     char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3842       size_t offset = align_offset(chunk2mem(mm));
3843       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3844       mchunkptr p = (mchunkptr)(mm + offset);
3845       p->prev_foot = offset;
3847       mark_inuse_foot(m, p, psize);
3848       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3849       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3851       if (m->least_addr == 0 || mm < m->least_addr)
3853       if ((m->footprint += mmsize) > m->max_footprint)
3854         m->max_footprint = m->footprint;
3855       assert(is_aligned(chunk2mem(p)));
3856       check_mmapped_chunk(m, p);
3857       return chunk2mem(p);
3863 /* Realloc using mmap */
3864 static mchunkptr
mmap_resize(mstate m
, mchunkptr oldp
, size_t nb
, int flags
) {
3865 size_t oldsize
= chunksize(oldp
);
3866 (void)flags
; /* placate people compiling -Wunused */
3867 if (is_small(nb
)) /* Can't shrink mmap regions below small size */
3869 /* Keep old chunk if big enough but not too big */
3870 if (oldsize
>= nb
+ SIZE_T_SIZE
&&
3871 (oldsize
- nb
) <= (mparams
.granularity
<< 1))
3874 size_t offset
= oldp
->prev_foot
;
3875 size_t oldmmsize
= oldsize
+ offset
+ MMAP_FOOT_PAD
;
3876 size_t newmmsize
= mmap_align(nb
+ SIX_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3877 char* cp
= (char*)CALL_MREMAP((char*)oldp
- offset
,
3878 oldmmsize
, newmmsize
, flags
);
3880 mchunkptr newp
= (mchunkptr
)(cp
+ offset
);
3881 size_t psize
= newmmsize
- offset
- MMAP_FOOT_PAD
;
3883 mark_inuse_foot(m
, newp
, psize
);
3884 chunk_plus_offset(newp
, psize
)->head
= FENCEPOST_HEAD
;
3885 chunk_plus_offset(newp
, psize
+SIZE_T_SIZE
)->head
= 0;
3887 if (cp
< m
->least_addr
)
3889 if ((m
->footprint
+= newmmsize
- oldmmsize
) > m
->max_footprint
)
3890 m
->max_footprint
= m
->footprint
;
3891 check_mmapped_chunk(m
, newp
);
3899 /* -------------------------- mspace management -------------------------- */
3901 /* Initialize top chunk and its size */
3902 static void init_top(mstate m
, mchunkptr p
, size_t psize
) {
3903 /* Ensure alignment */
3904 size_t offset
= align_offset(chunk2mem(p
));
3905 p
= (mchunkptr
)((char*)p
+ offset
);
3910 p
->head
= psize
| PINUSE_BIT
;
3911 /* set size of fake trailing chunk holding overhead space only once */
3912 chunk_plus_offset(p
, psize
)->head
= TOP_FOOT_SIZE
;
3913 m
->trim_check
= mparams
.trim_threshold
; /* reset on each update */
3916 /* Initialize bins for a new mstate that is otherwise zeroed out */
3917 static void init_bins(mstate m
) {
3918 /* Establish circular links for smallbins */
3920 for (i
= 0; i
< NSMALLBINS
; ++i
) {
3921 sbinptr bin
= smallbin_at(m
,i
);
3922 bin
->fd
= bin
->bk
= bin
;
3926 #if PROCEED_ON_ERROR
3928 /* default corruption action */
3929 static void reset_on_error(mstate m
) {
3931 ++malloc_corruption_error_count
;
3932 /* Reinitialize fields to forget about all memory */
3933 m
->smallmap
= m
->treemap
= 0;
3934 m
->dvsize
= m
->topsize
= 0;
3939 for (i
= 0; i
< NTREEBINS
; ++i
)
3940 *treebin_at(m
, i
) = 0;
3943 #endif /* PROCEED_ON_ERROR */
3945 /* Allocate chunk and prepend remainder with chunk in successor base. */
3946 static void* prepend_alloc(mstate m
, char* newbase
, char* oldbase
,
3948 mchunkptr p
= align_as_chunk(newbase
);
3949 mchunkptr oldfirst
= align_as_chunk(oldbase
);
3950 size_t psize
= (char*)oldfirst
- (char*)p
;
3951 mchunkptr q
= chunk_plus_offset(p
, nb
);
3952 size_t qsize
= psize
- nb
;
3953 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
3955 assert((char*)oldfirst
> (char*)q
);
3956 assert(pinuse(oldfirst
));
3957 assert(qsize
>= MIN_CHUNK_SIZE
);
3959 /* consolidate remainder with first chunk of old base */
3960 if (oldfirst
== m
->top
) {
3961 size_t tsize
= m
->topsize
+= qsize
;
3963 q
->head
= tsize
| PINUSE_BIT
;
3964 check_top_chunk(m
, q
);
3966 else if (oldfirst
== m
->dv
) {
3967 size_t dsize
= m
->dvsize
+= qsize
;
3969 set_size_and_pinuse_of_free_chunk(q
, dsize
);
3972 if (!is_inuse(oldfirst
)) {
3973 size_t nsize
= chunksize(oldfirst
);
3974 unlink_chunk(m
, oldfirst
, nsize
);
3975 oldfirst
= chunk_plus_offset(oldfirst
, nsize
);
3978 set_free_with_pinuse(q
, qsize
, oldfirst
);
3979 insert_chunk(m
, q
, qsize
);
3980 check_free_chunk(m
, q
);
3983 check_malloced_chunk(m
, chunk2mem(p
), nb
);
3984 return chunk2mem(p
);
3987 /* Add a segment to hold a new noncontiguous region */
3988 static void add_segment(mstate m
, char* tbase
, size_t tsize
, flag_t mmapped
) {
3989 /* Determine locations and sizes of segment, fenceposts, old top */
3990 char* old_top
= (char*)m
->top
;
3991 msegmentptr oldsp
= segment_holding(m
, old_top
);
3992 char* old_end
= oldsp
->base
+ oldsp
->size
;
3993 size_t ssize
= pad_request(sizeof(struct malloc_segment
));
3994 char* rawsp
= old_end
- (ssize
+ FOUR_SIZE_T_SIZES
+ CHUNK_ALIGN_MASK
);
3995 size_t offset
= align_offset(chunk2mem(rawsp
));
3996 char* asp
= rawsp
+ offset
;
3997 char* csp
= (asp
< (old_top
+ MIN_CHUNK_SIZE
))? old_top
: asp
;
3998 mchunkptr sp
= (mchunkptr
)csp
;
3999 msegmentptr ss
= (msegmentptr
)(chunk2mem(sp
));
4000 mchunkptr tnext
= chunk_plus_offset(sp
, ssize
);
4001 mchunkptr p
= tnext
;
4004 /* reset top to new space */
4005 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
4007 /* Set up segment record */
4008 assert(is_aligned(ss
));
4009 set_size_and_pinuse_of_inuse_chunk(m
, sp
, ssize
);
4010 *ss
= m
->seg
; /* Push current record */
4011 m
->seg
.base
= tbase
;
4012 m
->seg
.size
= tsize
;
4013 m
->seg
.sflags
= mmapped
;
4016 /* Insert trailing fenceposts */
4018 mchunkptr nextp
= chunk_plus_offset(p
, SIZE_T_SIZE
);
4019 p
->head
= FENCEPOST_HEAD
;
4021 if ((char*)(&(nextp
->head
)) < old_end
)
4026 assert(nfences
>= 2);
4028 /* Insert the rest of old top into a bin as an ordinary free chunk */
4029 if (csp
!= old_top
) {
4030 mchunkptr q
= (mchunkptr
)old_top
;
4031 size_t psize
= csp
- old_top
;
4032 mchunkptr tn
= chunk_plus_offset(q
, psize
);
4033 set_free_with_pinuse(q
, psize
, tn
);
4034 insert_chunk(m
, q
, psize
);
4037 check_top_chunk(m
, m
->top
);
4040 /* -------------------------- System allocation -------------------------- */
4042 /* Get memory from system using MORECORE or MMAP */
4043 static void* sys_alloc(mstate m
, size_t nb
) {
4044 char* tbase
= CMFAIL
;
4046 flag_t mmap_flag
= 0;
4047 size_t asize
; /* allocation size */
4049 ensure_initialization();
4051 /* Directly map large chunks, but only if already initialized */
4052 if (use_mmap(m
) && nb
>= mparams
.mmap_threshold
&& m
->topsize
!= 0) {
4053 void* mem
= mmap_alloc(m
, nb
);
4058 asize
= granularity_align(nb
+ SYS_ALLOC_PADDING
);
4060 return 0; /* wraparound */
4061 if (m
->footprint_limit
!= 0) {
4062 size_t fp
= m
->footprint
+ asize
;
4063 if (fp
<= m
->footprint
|| fp
> m
->footprint_limit
)
4068 Try getting memory in any of three ways (in most-preferred to
4069 least-preferred order):
4070 1. A call to MORECORE that can normally contiguously extend memory.
4071 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4072 or main space is mmapped or a previous contiguous call failed)
4073 2. A call to MMAP new space (disabled if not HAVE_MMAP).
4074 Note that under the default settings, if MORECORE is unable to
4075 fulfill a request, and HAVE_MMAP is true, then mmap is
4076 used as a noncontiguous system allocator. This is a useful backup
4077 strategy for systems with holes in address spaces -- in this case
4078 sbrk cannot contiguously expand the heap, but mmap may be able to
4080 3. A call to MORECORE that cannot usually contiguously extend memory.
4081 (disabled if not HAVE_MORECORE)
4083 In all cases, we need to request enough bytes from system to ensure
4084 we can malloc nb bytes upon success, so pad with enough space for
4085 top_foot, plus alignment-pad to make sure we don't lose bytes if
4086 not on boundary, and round this up to a granularity unit.
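   (Illustrative figures, not part of the original text.)  With, say, a
   64 KiB granularity, a request whose padded size comes to roughly 70 000
   bytes is rounded up to 131 072 bytes; the surplus is not wasted, since
   whatever is not carved off for the request simply becomes part of the
   new top chunk.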
4089 if (MORECORE_CONTIGUOUS
&& !use_noncontiguous(m
)) {
4091 size_t ssize
= asize
; /* sbrk call size */
4092 msegmentptr ss
= (m
->top
== 0)? 0 : segment_holding(m
, (char*)m
->top
);
4093 ACQUIRE_MALLOC_GLOBAL_LOCK();
4095 if (ss
== 0) { /* First time through or recovery */
4096 char* base
= (char*)CALL_MORECORE(0);
4097 if (base
!= CMFAIL
) {
4099 /* Adjust to end on a page boundary */
4100 if (!is_page_aligned(base
))
4101 ssize
+= (page_align((size_t)base
) - (size_t)base
);
4102 fp
= m
->footprint
+ ssize
; /* recheck limits */
4103 if (ssize
> nb
&& ssize
< HALF_MAX_SIZE_T
&&
4104 (m
->footprint_limit
== 0 ||
4105 (fp
> m
->footprint
&& fp
<= m
->footprint_limit
)) &&
4106 (br
= (char*)(CALL_MORECORE(ssize
))) == base
) {
4113 /* Subtract out existing available top space from MORECORE request. */
4114 ssize
= granularity_align(nb
- m
->topsize
+ SYS_ALLOC_PADDING
);
4115 /* Use mem here only if it did continuously extend old space */
4116 if (ssize
< HALF_MAX_SIZE_T
&&
4117 (br
= (char*)(CALL_MORECORE(ssize
))) == ss
->base
+ss
->size
) {
4123 if (tbase
== CMFAIL
) { /* Cope with partial failure */
4124 if (br
!= CMFAIL
) { /* Try to use/extend the space we did get */
4125 if (ssize
< HALF_MAX_SIZE_T
&&
4126 ssize
< nb
+ SYS_ALLOC_PADDING
) {
4127 size_t esize
= granularity_align(nb
+ SYS_ALLOC_PADDING
- ssize
);
4128 if (esize
< HALF_MAX_SIZE_T
) {
4129 char* end
= (char*)CALL_MORECORE(esize
);
4132 else { /* Can't use; try to release */
4133 (void) CALL_MORECORE(-ssize
);
4139 if (br
!= CMFAIL
) { /* Use the space we did get */
4144 disable_contiguous(m
); /* Don't try contiguous path in the future */
4147 RELEASE_MALLOC_GLOBAL_LOCK();
4150 if (HAVE_MMAP
&& tbase
== CMFAIL
) { /* Try MMAP */
4151 char* mp
= (char*)(CALL_MMAP(asize
));
4155 mmap_flag
= USE_MMAP_BIT
;
4159 if (HAVE_MORECORE
&& tbase
== CMFAIL
) { /* Try noncontiguous MORECORE */
4160 if (asize
< HALF_MAX_SIZE_T
) {
4163 ACQUIRE_MALLOC_GLOBAL_LOCK();
4164 br
= (char*)(CALL_MORECORE(asize
));
4165 end
= (char*)(CALL_MORECORE(0));
4166 RELEASE_MALLOC_GLOBAL_LOCK();
4167 if (br
!= CMFAIL
&& end
!= CMFAIL
&& br
< end
) {
4168 size_t ssize
= end
- br
;
4169 if (ssize
> nb
+ TOP_FOOT_SIZE
) {
4177 if (tbase
!= CMFAIL
) {
4179 if ((m
->footprint
+= tsize
) > m
->max_footprint
)
4180 m
->max_footprint
= m
->footprint
;
4182 if (!is_initialized(m
)) { /* first-time initialization */
4183 if (m
->least_addr
== 0 || tbase
< m
->least_addr
)
4184 m
->least_addr
= tbase
;
4185 m
->seg
.base
= tbase
;
4186 m
->seg
.size
= tsize
;
4187 m
->seg
.sflags
= mmap_flag
;
4188 m
->magic
= mparams
.magic
;
4189 m
->release_checks
= MAX_RELEASE_CHECK_RATE
;
4193 init_top(m
, (mchunkptr
)tbase
, tsize
- TOP_FOOT_SIZE
);
4197 /* Offset top by embedded malloc_state */
4198 mchunkptr mn
= next_chunk(mem2chunk(m
));
4199 init_top(m
, mn
, (size_t)((tbase
+ tsize
) - (char*)mn
) -TOP_FOOT_SIZE
);
4204 /* Try to merge with an existing segment */
4205 msegmentptr sp
= &m
->seg
;
4206 /* Only consider most recent segment if traversal suppressed */
4207 while (sp
!= 0 && tbase
!= sp
->base
+ sp
->size
)
4208 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4210 !is_extern_segment(sp
) &&
4211 (sp
->sflags
& USE_MMAP_BIT
) == mmap_flag
&&
4212 segment_holds(sp
, m
->top
)) { /* append */
4214 init_top(m
, m
->top
, m
->topsize
+ tsize
);
4217 if (tbase
< m
->least_addr
)
4218 m
->least_addr
= tbase
;
4220 while (sp
!= 0 && sp
->base
!= tbase
+ tsize
)
4221 sp
= (NO_SEGMENT_TRAVERSAL
) ? 0 : sp
->next
;
4223 !is_extern_segment(sp
) &&
4224 (sp
->sflags
& USE_MMAP_BIT
) == mmap_flag
) {
4225 char* oldbase
= sp
->base
;
4228 return prepend_alloc(m
, tbase
, oldbase
, nb
);
4231 add_segment(m
, tbase
, tsize
, mmap_flag
);
4235 if (nb
< m
->topsize
) { /* Allocate from new or extended top space */
4236 size_t rsize
= m
->topsize
-= nb
;
4237 mchunkptr p
= m
->top
;
4238 mchunkptr r
= m
->top
= chunk_plus_offset(p
, nb
);
4239 r
->head
= rsize
| PINUSE_BIT
;
4240 set_size_and_pinuse_of_inuse_chunk(m
, p
, nb
);
4241 check_top_chunk(m
, m
->top
);
4242 check_malloced_chunk(m
, chunk2mem(p
), nb
);
4243 return chunk2mem(p
);
4247 MALLOC_FAILURE_ACTION
;
4251 /* ----------------------- system deallocation -------------------------- */
4253 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4254 static size_t release_unused_segments(mstate m
) {
4255 size_t released
= 0;
4257 msegmentptr pred
= &m
->seg
;
4258 msegmentptr sp
= pred
->next
;
4260 char* base
= sp
->base
;
4261 size_t size
= sp
->size
;
4262 msegmentptr next
= sp
->next
;
4264 if (is_mmapped_segment(sp
) && !is_extern_segment(sp
)) {
4265 mchunkptr p
= align_as_chunk(base
);
4266 size_t psize
= chunksize(p
);
4267 /* Can unmap if first chunk holds entire segment and not pinned */
4268 if (!is_inuse(p
) && (char*)p
+ psize
>= base
+ size
- TOP_FOOT_SIZE
) {
4269 tchunkptr tp
= (tchunkptr
)p
;
4270 assert(segment_holds(sp
, (char*)sp
));
4276 unlink_large_chunk(m
, tp
);
4278 if (CALL_MUNMAP(base
, size
) == 0) {
4280 m
->footprint
-= size
;
4281 /* unlink obsoleted record */
4285 else { /* back out if cannot unmap */
4286 insert_large_chunk(m
, tp
, psize
);
4290 if (NO_SEGMENT_TRAVERSAL
) /* scan only first segment */
4295 /* Reset check counter */
4296 m
->release_checks
= (((size_t) nsegs
> (size_t) MAX_RELEASE_CHECK_RATE
)?
4297 (size_t) nsegs
: (size_t) MAX_RELEASE_CHECK_RATE
);
4301 static int sys_trim(mstate m
, size_t pad
) {
4302 size_t released
= 0;
4303 ensure_initialization();
4304 if (pad
< MAX_REQUEST
&& is_initialized(m
)) {
4305 pad
+= TOP_FOOT_SIZE
; /* ensure enough room for segment overhead */
4307 if (m
->topsize
> pad
) {
4308 /* Shrink top space in granularity-size units, keeping at least one */
4309 size_t unit
= mparams
.granularity
;
4310 size_t extra
= ((m
->topsize
- pad
+ (unit
- SIZE_T_ONE
)) / unit
-
4312 msegmentptr sp
= segment_holding(m
, (char*)m
->top
);
4314 if (!is_extern_segment(sp
)) {
4315 if (is_mmapped_segment(sp
)) {
4317 sp
->size
>= extra
&&
4318 !has_segment_link(m
, sp
)) { /* can't shrink if pinned */
4319 size_t newsize
= sp
->size
- extra
;
4320 (void)newsize
; /* placate people compiling -Wunused-variable */
4321 /* Prefer mremap, fall back to munmap */
4322 if ((CALL_MREMAP(sp
->base
, sp
->size
, newsize
, 0) != MFAIL
) ||
4323 (CALL_MUNMAP(sp
->base
+ newsize
, extra
) == 0)) {
4328 else if (HAVE_MORECORE
) {
4329 if (extra
>= HALF_MAX_SIZE_T
) /* Avoid wrapping negative */
4330 extra
= (HALF_MAX_SIZE_T
) + SIZE_T_ONE
- unit
;
4331 ACQUIRE_MALLOC_GLOBAL_LOCK();
4333 /* Make sure end of memory is where we last set it. */
4334 char* old_br
= (char*)(CALL_MORECORE(0));
4335 if (old_br
== sp
->base
+ sp
->size
) {
4336 char* rel_br
= (char*)(CALL_MORECORE(-extra
));
4337 char* new_br
= (char*)(CALL_MORECORE(0));
4338 if (rel_br
!= CMFAIL
&& new_br
< old_br
)
4339 released
= old_br
- new_br
;
4342 RELEASE_MALLOC_GLOBAL_LOCK();
4346 if (released
!= 0) {
4347 sp
->size
-= released
;
4348 m
->footprint
-= released
;
4349 init_top(m
, m
->top
, m
->topsize
- released
);
4350 check_top_chunk(m
, m
->top
);
4354 /* Unmap any unused mmapped segments */
4356 released
+= release_unused_segments(m
);
4358 /* On failure, disable autotrim to avoid repeated failed future calls */
4359 if (released
== 0 && m
->topsize
> m
->trim_check
)
4360 m
->trim_check
= MAX_SIZE_T
;
4363 return (released
!= 0)? 1 : 0;
4366 /* Consolidate and bin a chunk. Differs from exported versions
4367 of free mainly in that the chunk need not be marked as inuse.
4369 static void dispose_chunk(mstate m
, mchunkptr p
, size_t psize
) {
4370 mchunkptr next
= chunk_plus_offset(p
, psize
);
4373 size_t prevsize
= p
->prev_foot
;
4374 if (is_mmapped(p
)) {
4375 psize
+= prevsize
+ MMAP_FOOT_PAD
;
4376 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
4377 m
->footprint
-= psize
;
4380 prev
= chunk_minus_offset(p
, prevsize
);
4383 if (RTCHECK(ok_address(m
, prev
))) { /* consolidate backward */
4385 unlink_chunk(m
, p
, prevsize
);
4387 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
4389 set_free_with_pinuse(p
, psize
, next
);
4394 CORRUPTION_ERROR_ACTION(m
);
4398 if (RTCHECK(ok_address(m
, next
))) {
4399 if (!cinuse(next
)) { /* consolidate forward */
4400 if (next
== m
->top
) {
4401 size_t tsize
= m
->topsize
+= psize
;
4403 p
->head
= tsize
| PINUSE_BIT
;
4410 else if (next
== m
->dv
) {
4411 size_t dsize
= m
->dvsize
+= psize
;
4413 set_size_and_pinuse_of_free_chunk(p
, dsize
);
4417 size_t nsize
= chunksize(next
);
4419 unlink_chunk(m
, next
, nsize
);
4420 set_size_and_pinuse_of_free_chunk(p
, psize
);
4428 set_free_with_pinuse(p
, psize
, next
);
4430 insert_chunk(m
, p
, psize
);
4433 CORRUPTION_ERROR_ACTION(m
);
4437 /* ---------------------------- malloc --------------------------- */
4439 /* allocate a large request from the best fitting chunk in a treebin */
4440 static void* tmalloc_large(mstate m
, size_t nb
) {
4442 size_t rsize
= -nb
; /* Unsigned negation */
4445 compute_tree_index(nb
, idx
);
4446 if ((t
= *treebin_at(m
, idx
)) != 0) {
4447 /* Traverse tree for this bin looking for node with size == nb */
4448 size_t sizebits
= nb
<< leftshift_for_tree_index(idx
);
4449 tchunkptr rst
= 0; /* The deepest untaken right subtree */
4452 size_t trem
= chunksize(t
) - nb
;
4455 if ((rsize
= trem
) == 0)
4459 t
= t
->child
[(sizebits
>> (SIZE_T_BITSIZE
-SIZE_T_ONE
)) & 1];
4460 if (rt
!= 0 && rt
!= t
)
4463 t
= rst
; /* set t to least subtree holding sizes > nb */
4469 if (t
== 0 && v
== 0) { /* set t to root of next non-empty treebin */
4470 binmap_t leftbits
= left_bits(idx2bit(idx
)) & m
->treemap
;
4471 if (leftbits
!= 0) {
4473 binmap_t leastbit
= least_bit(leftbits
);
4474 compute_bit2idx(leastbit
, i
);
4475 t
= *treebin_at(m
, i
);
4479 while (t
!= 0) { /* find smallest of tree or subtree */
4480 size_t trem
= chunksize(t
) - nb
;
4485 t
= leftmost_child(t
);
4488 /* If dv is a better fit, return 0 so malloc will use it */
4489 if (v
!= 0 && rsize
< (size_t)(m
->dvsize
- nb
)) {
4490 if (RTCHECK(ok_address(m
, v
))) { /* split */
4491 mchunkptr r
= chunk_plus_offset(v
, nb
);
4492 assert(chunksize(v
) == rsize
+ nb
);
4493 if (RTCHECK(ok_next(v
, r
))) {
4494 unlink_large_chunk(m
, v
);
4495 if (rsize
< MIN_CHUNK_SIZE
)
4496 set_inuse_and_pinuse(m
, v
, (rsize
+ nb
));
4498 set_size_and_pinuse_of_inuse_chunk(m
, v
, nb
);
4499 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4500 insert_chunk(m
, r
, rsize
);
4502 return chunk2mem(v
);
4505 CORRUPTION_ERROR_ACTION(m
);
4510 /* allocate a small request from the best fitting chunk in a treebin */
4511 static void* tmalloc_small(mstate m
, size_t nb
) {
4515 binmap_t leastbit
= least_bit(m
->treemap
);
4516 compute_bit2idx(leastbit
, i
);
4517 v
= t
= *treebin_at(m
, i
);
4518 rsize
= chunksize(t
) - nb
;
4520 while ((t
= leftmost_child(t
)) != 0) {
4521 size_t trem
= chunksize(t
) - nb
;
4528 if (RTCHECK(ok_address(m
, v
))) {
4529 mchunkptr r
= chunk_plus_offset(v
, nb
);
4530 assert(chunksize(v
) == rsize
+ nb
);
4531 if (RTCHECK(ok_next(v
, r
))) {
4532 unlink_large_chunk(m
, v
);
4533 if (rsize
< MIN_CHUNK_SIZE
)
4534 set_inuse_and_pinuse(m
, v
, (rsize
+ nb
));
4536 set_size_and_pinuse_of_inuse_chunk(m
, v
, nb
);
4537 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4538 replace_dv(m
, r
, rsize
);
4540 return chunk2mem(v
);
4544 CORRUPTION_ERROR_ACTION(m
);
4550 void* dlmalloc(size_t bytes
) {
4553 If a small request (< 256 bytes minus per-chunk overhead):
4554 1. If one exists, use a remainderless chunk in associated smallbin.
4555 (Remainderless means that there are too few excess bytes to
4556 represent as a chunk.)
4557 2. If it is big enough, use the dv chunk, which is normally the
4558 chunk adjacent to the one used for the most recent small request.
4559 3. If one exists, split the smallest available chunk in a bin,
4560 saving remainder in dv.
4561 4. If it is big enough, use the top chunk.
4562 5. If available, get memory from system and use it
4563 Otherwise, for a large request:
4564 1. Find the smallest available binned chunk that fits, and use it
4565 if it is better fitting than dv chunk, splitting if necessary.
4566 2. If better fitting than any binned chunk, use the dv chunk.
4567 3. If it is big enough, use the top chunk.
4568 4. If request size >= mmap threshold, try to directly mmap this chunk.
4569 5. If available, get memory from system and use it
4571 The ugly goto's here ensure that postaction occurs along all paths.
4575 ensure_initialization(); /* initialize in sys_alloc if not using locks */
4578 if (!PREACTION(gm
)) {
4581 if (bytes
<= MAX_SMALL_REQUEST
) {
4584 nb
= (bytes
< MIN_REQUEST
)? MIN_CHUNK_SIZE
: pad_request(bytes
);
4585 idx
= small_index(nb
);
4586 smallbits
= gm
->smallmap
>> idx
;
4588 if ((smallbits
& 0x3U
) != 0) { /* Remainderless fit to a smallbin. */
4590 idx
+= ~smallbits
& 1; /* Uses next bin if idx empty */
4591 b
= smallbin_at(gm
, idx
);
4593 assert(chunksize(p
) == small_index2size(idx
));
4594 unlink_first_small_chunk(gm
, b
, p
, idx
);
4595 set_inuse_and_pinuse(gm
, p
, small_index2size(idx
));
4597 check_malloced_chunk(gm
, mem
, nb
);
4601 else if (nb
> gm
->dvsize
) {
4602 if (smallbits
!= 0) { /* Use chunk in next nonempty smallbin */
4606 binmap_t leftbits
= (smallbits
<< idx
) & left_bits(idx2bit(idx
));
4607 binmap_t leastbit
= least_bit(leftbits
);
4608 compute_bit2idx(leastbit
, i
);
4609 b
= smallbin_at(gm
, i
);
4611 assert(chunksize(p
) == small_index2size(i
));
4612 unlink_first_small_chunk(gm
, b
, p
, i
);
4613 rsize
= small_index2size(i
) - nb
;
4614 /* Fit here cannot be remainderless if 4byte sizes */
4615 if (SIZE_T_SIZE
!= 4 && rsize
< MIN_CHUNK_SIZE
)
4616 set_inuse_and_pinuse(gm
, p
, small_index2size(i
));
4618 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4619 r
= chunk_plus_offset(p
, nb
);
4620 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4621 replace_dv(gm
, r
, rsize
);
4624 check_malloced_chunk(gm
, mem
, nb
);
4628 else if (gm
->treemap
!= 0 && (mem
= tmalloc_small(gm
, nb
)) != 0) {
4629 check_malloced_chunk(gm
, mem
, nb
);
4634 else if (bytes
>= MAX_REQUEST
)
4635 nb
= MAX_SIZE_T
; /* Too big to allocate. Force failure (in sys alloc) */
4637 nb
= pad_request(bytes
);
4638 if (gm
->treemap
!= 0 && (mem
= tmalloc_large(gm
, nb
)) != 0) {
4639 check_malloced_chunk(gm
, mem
, nb
);
4644 if (nb
<= gm
->dvsize
) {
4645 size_t rsize
= gm
->dvsize
- nb
;
4646 mchunkptr p
= gm
->dv
;
4647 if (rsize
>= MIN_CHUNK_SIZE
) { /* split dv */
4648 mchunkptr r
= gm
->dv
= chunk_plus_offset(p
, nb
);
4650 set_size_and_pinuse_of_free_chunk(r
, rsize
);
4651 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4653 else { /* exhaust dv */
4654 size_t dvs
= gm
->dvsize
;
4657 set_inuse_and_pinuse(gm
, p
, dvs
);
4660 check_malloced_chunk(gm
, mem
, nb
);
4664 else if (nb
< gm
->topsize
) { /* Split top */
4665 size_t rsize
= gm
->topsize
-= nb
;
4666 mchunkptr p
= gm
->top
;
4667 mchunkptr r
= gm
->top
= chunk_plus_offset(p
, nb
);
4668 r
->head
= rsize
| PINUSE_BIT
;
4669 set_size_and_pinuse_of_inuse_chunk(gm
, p
, nb
);
4671 check_top_chunk(gm
, gm
->top
);
4672 check_malloced_chunk(gm
, mem
, nb
);
4676 mem
= sys_alloc(gm
, nb
);
4686 /* ---------------------------- free --------------------------- */
4688 void dlfree(void* mem
) {
4690   Consolidate freed chunks with preceding or succeeding bordering
4691 free chunks, if they exist, and then place in a bin. Intermixed
4692 with special cases for top, dv, mmapped chunks, and usage errors.
4696 mchunkptr p
= mem2chunk(mem
);
4698 mstate fm
= get_mstate_for(p
);
4699 if (!ok_magic(fm
)) {
4700 USAGE_ERROR_ACTION(fm
, p
);
4705 #endif /* FOOTERS */
4706 if (!PREACTION(fm
)) {
4707 check_inuse_chunk(fm
, p
);
4708 if (RTCHECK(ok_address(fm
, p
) && ok_inuse(p
))) {
4709 size_t psize
= chunksize(p
);
4710 mchunkptr next
= chunk_plus_offset(p
, psize
);
4712 size_t prevsize
= p
->prev_foot
;
4713 if (is_mmapped(p
)) {
4714 psize
+= prevsize
+ MMAP_FOOT_PAD
;
4715 if (CALL_MUNMAP((char*)p
- prevsize
, psize
) == 0)
4716 fm
->footprint
-= psize
;
4720 mchunkptr prev
= chunk_minus_offset(p
, prevsize
);
4723 if (RTCHECK(ok_address(fm
, prev
))) { /* consolidate backward */
4725 unlink_chunk(fm
, p
, prevsize
);
4727 else if ((next
->head
& INUSE_BITS
) == INUSE_BITS
) {
4729 set_free_with_pinuse(p
, psize
, next
);
4738 if (RTCHECK(ok_next(p
, next
) && ok_pinuse(next
))) {
4739 if (!cinuse(next
)) { /* consolidate forward */
4740 if (next
== fm
->top
) {
4741 size_t tsize
= fm
->topsize
+= psize
;
4743 p
->head
= tsize
| PINUSE_BIT
;
4748 if (should_trim(fm
, tsize
))
4752 else if (next
== fm
->dv
) {
4753 size_t dsize
= fm
->dvsize
+= psize
;
4755 set_size_and_pinuse_of_free_chunk(p
, dsize
);
4759 size_t nsize
= chunksize(next
);
4761 unlink_chunk(fm
, next
, nsize
);
4762 set_size_and_pinuse_of_free_chunk(p
, psize
);
4770 set_free_with_pinuse(p
, psize
, next
);
4772 if (is_small(psize
)) {
4773 insert_small_chunk(fm
, p
, psize
);
4774 check_free_chunk(fm
, p
);
4777 tchunkptr tp
= (tchunkptr
)p
;
4778 insert_large_chunk(fm
, tp
, psize
);
4779 check_free_chunk(fm
, p
);
4780 if (--fm
->release_checks
== 0)
4781 release_unused_segments(fm
);
4787 USAGE_ERROR_ACTION(fm
, p
);
4794 #endif /* FOOTERS */
4797 void* dlcalloc(size_t n_elements, size_t elem_size) {
4800   if (n_elements != 0) {
4801     req = n_elements * elem_size;
4802     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4803         (req / n_elements != elem_size))
4804       req = MAX_SIZE_T; /* force downstream failure on overflow */
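    /* (Illustrative note, not in the original.)  The cheap mask test above
       skips the division whenever both n_elements and elem_size fit in 16
       bits: their product then fits in 32 bits, and size_t is at least that
       wide on supported configurations, so no overflow is possible in that
       case.  Only for larger operands is the exact division check run. */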
4806   mem = dlmalloc(req);
4807   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4808     memset(mem, 0, req);
4812 #endif /* !ONLY_MSPACES */
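/* A minimal usage sketch of the exported entry points (illustration only,
   not part of the allocator; assumes the default dl-prefixed names): */
#if 0
#include <stdio.h>
#include <string.h>
static void dlmalloc_example(void) {
  char* s = (char*)dlmalloc(16);              /* small request */
  if (s != 0) {
    char* t;
    strcpy(s, "hello");
    t = (char*)dlrealloc(s, 4096);            /* may grow in place or move */
    if (t != 0)
      s = t;
    printf("%s, usable size %lu\n", s, (unsigned long)dlmalloc_usable_size(s));
    dlfree(s);
  }
}
#endif /* illustration only */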
4814 /* ------------ Internal support for realloc, memalign, etc -------------- */
4816 /* Try to realloc; only in-place unless can_move true */
4817 static mchunkptr
try_realloc_chunk(mstate m
, mchunkptr p
, size_t nb
,
4820 size_t oldsize
= chunksize(p
);
4821 mchunkptr next
= chunk_plus_offset(p
, oldsize
);
4822 if (RTCHECK(ok_address(m
, p
) && ok_inuse(p
) &&
4823 ok_next(p
, next
) && ok_pinuse(next
))) {
4824 if (is_mmapped(p
)) {
4825 newp
= mmap_resize(m
, p
, nb
, can_move
);
4827 else if (oldsize
>= nb
) { /* already big enough */
4828 size_t rsize
= oldsize
- nb
;
4829 if (rsize
>= MIN_CHUNK_SIZE
) { /* split off remainder */
4830 mchunkptr r
= chunk_plus_offset(p
, nb
);
4831 set_inuse(m
, p
, nb
);
4832 set_inuse(m
, r
, rsize
);
4833 dispose_chunk(m
, r
, rsize
);
4837 else if (next
== m
->top
) { /* extend into top */
4838 if (oldsize
+ m
->topsize
> nb
) {
4839 size_t newsize
= oldsize
+ m
->topsize
;
4840 size_t newtopsize
= newsize
- nb
;
4841 mchunkptr newtop
= chunk_plus_offset(p
, nb
);
4842 set_inuse(m
, p
, nb
);
4843 newtop
->head
= newtopsize
|PINUSE_BIT
;
4845 m
->topsize
= newtopsize
;
4849 else if (next
== m
->dv
) { /* extend into dv */
4850 size_t dvs
= m
->dvsize
;
4851 if (oldsize
+ dvs
>= nb
) {
4852 size_t dsize
= oldsize
+ dvs
- nb
;
4853 if (dsize
>= MIN_CHUNK_SIZE
) {
4854 mchunkptr r
= chunk_plus_offset(p
, nb
);
4855 mchunkptr n
= chunk_plus_offset(r
, dsize
);
4856 set_inuse(m
, p
, nb
);
4857 set_size_and_pinuse_of_free_chunk(r
, dsize
);
4862 else { /* exhaust dv */
4863 size_t newsize
= oldsize
+ dvs
;
4864 set_inuse(m
, p
, newsize
);
4871 else if (!cinuse(next
)) { /* extend into next free chunk */
4872 size_t nextsize
= chunksize(next
);
4873 if (oldsize
+ nextsize
>= nb
) {
4874 size_t rsize
= oldsize
+ nextsize
- nb
;
4875 unlink_chunk(m
, next
, nextsize
);
4876 if (rsize
< MIN_CHUNK_SIZE
) {
4877 size_t newsize
= oldsize
+ nextsize
;
4878 set_inuse(m
, p
, newsize
);
4881 mchunkptr r
= chunk_plus_offset(p
, nb
);
4882 set_inuse(m
, p
, nb
);
4883 set_inuse(m
, r
, rsize
);
4884 dispose_chunk(m
, r
, rsize
);
4891 USAGE_ERROR_ACTION(m
, chunk2mem(p
));
4896 static void* internal_memalign(mstate m
, size_t alignment
, size_t bytes
) {
4898 if (alignment
< MIN_CHUNK_SIZE
) /* must be at least a minimum chunk size */
4899 alignment
= MIN_CHUNK_SIZE
;
4900 if ((alignment
& (alignment
-SIZE_T_ONE
)) != 0) {/* Ensure a power of 2 */
4901 size_t a
= MALLOC_ALIGNMENT
<< 1;
4902 while (a
< alignment
) a
<<= 1;
4905 if (bytes
>= MAX_REQUEST
- alignment
) {
4906 if (m
!= 0) { /* Test isn't needed but avoids compiler warning */
4907 MALLOC_FAILURE_ACTION
;
4911 size_t nb
= request2size(bytes
);
4912 size_t req
= nb
+ alignment
+ MIN_CHUNK_SIZE
- CHUNK_OVERHEAD
;
4913 mem
= internal_malloc(m
, req
);
4915 mchunkptr p
= mem2chunk(mem
);
4918 if ((((size_t)(mem
)) & (alignment
- 1)) != 0) { /* misaligned */
4920 Find an aligned spot inside chunk. Since we need to give
4921 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4922 the first calculation places us at a spot with less than
4923 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4924 We've allocated enough total room so that this is always
4927 char* br
= (char*)mem2chunk((size_t)(((size_t)((char*)mem
+ alignment
-
4930 char* pos
= ((size_t)(br
- (char*)(p
)) >= MIN_CHUNK_SIZE
)?
4932 mchunkptr newp
= (mchunkptr
)pos
;
4933 size_t leadsize
= pos
- (char*)(p
);
4934 size_t newsize
= chunksize(p
) - leadsize
;
4936 if (is_mmapped(p
)) { /* For mmapped chunks, just adjust offset */
4937 newp
->prev_foot
= p
->prev_foot
+ leadsize
;
4938 newp
->head
= newsize
;
4940 else { /* Otherwise, give back leader, use the rest */
4941 set_inuse(m
, newp
, newsize
);
4942 set_inuse(m
, p
, leadsize
);
4943 dispose_chunk(m
, p
, leadsize
);
4948 /* Give back spare room at the end */
4949 if (!is_mmapped(p
)) {
4950 size_t size
= chunksize(p
);
4951 if (size
> nb
+ MIN_CHUNK_SIZE
) {
4952 size_t remainder_size
= size
- nb
;
4953 mchunkptr remainder
= chunk_plus_offset(p
, nb
);
4954 set_inuse(m
, p
, nb
);
4955 set_inuse(m
, remainder
, remainder_size
);
4956 dispose_chunk(m
, remainder
, remainder_size
);
4961 assert (chunksize(p
) >= nb
);
4962 assert(((size_t)mem
& (alignment
- 1)) == 0);
4963 check_inuse_chunk(m
, p
);
4971 Common support for independent_X routines, handling
4972 all of the combinations that can result.
4974 bit 0 set if all elements are same size (using sizes[0])
4975 bit 1 set if elements should be zeroed
4977 static void** ialloc(mstate m
,
4983 size_t element_size
; /* chunksize of each element, if all same */
4984 size_t contents_size
; /* total size of elements */
4985 size_t array_size
; /* request size of pointer array */
4986 void* mem
; /* malloced aggregate space */
4987 mchunkptr p
; /* corresponding chunk */
4988 size_t remainder_size
; /* remaining bytes while splitting */
4989 void** marray
; /* either "chunks" or malloced ptr array */
4990 mchunkptr array_chunk
; /* chunk for malloced ptr array */
4991 flag_t was_enabled
; /* to disable mmap */
4995 ensure_initialization();
4996 /* compute array length, if needed */
4998 if (n_elements
== 0)
4999 return chunks
; /* nothing to do */
5004 /* if empty req, must still return chunk representing empty array */
5005 if (n_elements
== 0)
5006 return (void**)internal_malloc(m
, 0);
5008 array_size
= request2size(n_elements
* (sizeof(void*)));
5011 /* compute total element size */
5012 if (opts
& 0x1) { /* all-same-size */
5013 element_size
= request2size(*sizes
);
5014 contents_size
= n_elements
* element_size
;
5016 else { /* add up all the sizes */
5019 for (i
= 0; i
!= n_elements
; ++i
)
5020 contents_size
+= request2size(sizes
[i
]);
5023 size
= contents_size
+ array_size
;
5026 Allocate the aggregate chunk. First disable direct-mmapping so
5027 malloc won't use it, since we would not be able to later
5028 free/realloc space internal to a segregated mmap region.
5030 was_enabled
= use_mmap(m
);
5032 mem
= internal_malloc(m
, size
- CHUNK_OVERHEAD
);
5038 if (PREACTION(m
)) return 0;
5040 remainder_size
= chunksize(p
);
5042 assert(!is_mmapped(p
));
5044 if (opts
& 0x2) { /* optionally clear the elements */
5045 memset((size_t*)mem
, 0, remainder_size
- SIZE_T_SIZE
- array_size
);
5048 /* If not provided, allocate the pointer array as final part of chunk */
5050 size_t array_chunk_size
;
5051 array_chunk
= chunk_plus_offset(p
, contents_size
);
5052 array_chunk_size
= remainder_size
- contents_size
;
5053 marray
= (void**) (chunk2mem(array_chunk
));
5054 set_size_and_pinuse_of_inuse_chunk(m
, array_chunk
, array_chunk_size
);
5055 remainder_size
= contents_size
;
5058 /* split out elements */
5059 for (i
= 0; ; ++i
) {
5060 marray
[i
] = chunk2mem(p
);
5061 if (i
!= n_elements
-1) {
5062 if (element_size
!= 0)
5063 size
= element_size
;
5065 size
= request2size(sizes
[i
]);
5066 remainder_size
-= size
;
5067 set_size_and_pinuse_of_inuse_chunk(m
, p
, size
);
5068 p
= chunk_plus_offset(p
, size
);
5070 else { /* the final element absorbs any overallocation slop */
5071 set_size_and_pinuse_of_inuse_chunk(m
, p
, remainder_size
);
5077 if (marray
!= chunks
) {
5078 /* final element must have exactly exhausted chunk */
5079 if (element_size
!= 0) {
5080 assert(remainder_size
== element_size
);
5083 assert(remainder_size
== request2size(sizes
[i
]));
5085 check_inuse_chunk(m
, mem2chunk(marray
));
5087 for (i
= 0; i
!= n_elements
; ++i
)
5088 check_inuse_chunk(m
, mem2chunk(marray
[i
]));
5096 /* Try to free all pointers in the given array.
5097    Note: this could be made faster by delaying consolidation,
5098    at the price of disabling some user integrity checks. We
5099    still optimize some consolidations by combining adjacent
5100    chunks before freeing, which will occur often if allocated
5101    with ialloc or if the array is sorted.
5103 static size_t internal_bulk_free(mstate m
, void* array
[], size_t nelem
) {
5105 if (!PREACTION(m
)) {
5107 void** fence
= &(array
[nelem
]);
5108 for (a
= array
; a
!= fence
; ++a
) {
5111 mchunkptr p
= mem2chunk(mem
);
5112 size_t psize
= chunksize(p
);
5114 if (get_mstate_for(p
) != m
) {
5119 check_inuse_chunk(m
, p
);
5121 if (RTCHECK(ok_address(m
, p
) && ok_inuse(p
))) {
5122 void ** b
= a
+ 1; /* try to merge with next chunk */
5123 mchunkptr next
= next_chunk(p
);
5124 if (b
!= fence
&& *b
== chunk2mem(next
)) {
5125 size_t newsize
= chunksize(next
) + psize
;
5126 set_inuse(m
, p
, newsize
);
5130 dispose_chunk(m
, p
, psize
);
5133 CORRUPTION_ERROR_ACTION(m
);
5138 if (should_trim(m
, m
->topsize
))
5146 #if MALLOC_INSPECT_ALL
5147 static void internal_inspect_all(mstate m
,
5148 void(*handler
)(void *start
,
5151 void* callback_arg
),
5153 if (is_initialized(m
)) {
5154 mchunkptr top
= m
->top
;
5156 for (s
= &m
->seg
; s
!= 0; s
= s
->next
) {
5157 mchunkptr q
= align_as_chunk(s
->base
);
5158 while (segment_holds(s
, q
) && q
->head
!= FENCEPOST_HEAD
) {
5159 mchunkptr next
= next_chunk(q
);
5160 size_t sz
= chunksize(q
);
5164 used
= sz
- CHUNK_OVERHEAD
; /* must not be mmapped */
5165 start
= chunk2mem(q
);
5169 if (is_small(sz
)) { /* offset by possible bookkeeping */
5170 start
= (void*)((char*)q
+ sizeof(struct malloc_chunk
));
5173 start
= (void*)((char*)q
+ sizeof(struct malloc_tree_chunk
));
5176 if (start
< (void*)next
) /* skip if all space is bookkeeping */
5177 handler(start
, next
, used
, arg
);
5185 #endif /* MALLOC_INSPECT_ALL */
5187 /* ------------------ Exported realloc, memalign, etc -------------------- */
5191 void* dlrealloc(void* oldmem
, size_t bytes
) {
5194 mem
= dlmalloc(bytes
);
5196 else if (bytes
>= MAX_REQUEST
) {
5197 MALLOC_FAILURE_ACTION
;
5199 #ifdef REALLOC_ZERO_BYTES_FREES
5200 else if (bytes
== 0) {
5203 #endif /* REALLOC_ZERO_BYTES_FREES */
5205 size_t nb
= request2size(bytes
);
5206 mchunkptr oldp
= mem2chunk(oldmem
);
5210 mstate m
= get_mstate_for(oldp
);
5212 USAGE_ERROR_ACTION(m
, oldmem
);
5215 #endif /* FOOTERS */
5216 if (!PREACTION(m
)) {
5217 mchunkptr newp
= try_realloc_chunk(m
, oldp
, nb
, 1);
5220 check_inuse_chunk(m
, newp
);
5221 mem
= chunk2mem(newp
);
5224 mem
= internal_malloc(m
, bytes
);
5226 size_t oc
= chunksize(oldp
) - overhead_for(oldp
);
5227 memcpy(mem
, oldmem
, (oc
< bytes
)? oc
: bytes
);
5228 internal_free(m
, oldmem
);
void* dlrealloc_in_place(void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = gm;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
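/*
  Illustrative only: a hedged sketch of using dlrealloc_in_place, which
  either resizes the block without moving it and returns the original
  pointer, or returns 0 and leaves the original block untouched.

    char* buf = (char*)dlmalloc(100);
    if (dlrealloc_in_place(buf, 200) != 0) {
      // buf now has at least 200 usable bytes at the same address
    }
    else {
      // could not extend in place; fall back to a moving realloc
      buf = (char*)dlrealloc(buf, 200);
    }
*/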
void* dlmemalign(size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT) {
    return dlmalloc(bytes);
  }
  return internal_memalign(gm, alignment, bytes);
}
int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
  void* mem = 0;
  if (alignment == MALLOC_ALIGNMENT)
    mem = dlmalloc(bytes);
  else {
    size_t d = alignment / sizeof(void*);
    size_t r = alignment % sizeof(void*);
    if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
      return EINVAL;
    else if (bytes <= MAX_REQUEST - alignment) {
      if (alignment < MIN_CHUNK_SIZE)
        alignment = MIN_CHUNK_SIZE;
      mem = internal_memalign(gm, alignment, bytes);
    }
  }
  if (mem == 0)
    return ENOMEM;
  else {
    *pp = mem;
    return 0;
  }
}
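/*
  Illustrative only: dlposix_memalign follows the POSIX contract, so the
  alignment must be a power of two that is a multiple of sizeof(void*),
  and the result is returned through the out-parameter rather than as
  the return value.

    void* p = 0;
    if (dlposix_memalign(&p, 64, 1000) == 0) {
      // p is 64-byte aligned and holds at least 1000 bytes
      dlfree(p);
    }
*/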
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  ensure_initialization();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                            void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                              void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
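/*
  Illustrative only: a hedged sketch of dlindependent_comalloc, which
  allocates several independently freeable chunks of the requested sizes
  in one pass.  Passing 0 as the chunks argument asks the allocator to
  also allocate the pointer array that is returned.

    size_t sizes[3] = { 64, 128, 4096 };
    void** parts = dlindependent_comalloc(3, sizes, 0);
    if (parts != 0) {
      // parts[0..2] can be used and later released individually
      dlfree(parts[0]);
      dlfree(parts[1]);
      dlfree(parts[2]);
      dlfree(parts);   // the pointer array itself was allocated too
    }
*/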
size_t dlbulk_free(void* array[], size_t nelem) {
  return internal_bulk_free(gm, array, nelem);
}
#if MALLOC_INSPECT_ALL
void dlmalloc_inspect_all(void(*handler)(void *start,
                                         void *end,
                                         size_t used_bytes,
                                         void* callback_arg),
                          void* arg) {
  ensure_initialization();
  if (!PREACTION(gm)) {
    internal_inspect_all(gm, handler, arg);
    POSTACTION(gm);
  }
}
#endif /* MALLOC_INSPECT_ALL */
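/*
  Illustrative only: a hedged sketch of a dlmalloc_inspect_all handler,
  available when the library is built with MALLOC_INSPECT_ALL.  The
  callback receives the start and one-past-end addresses of each
  traversed region plus the number of bytes in use there (0 for free
  space); the last argument is the opaque pointer supplied by the
  caller.  Because traversal runs under the allocator lock, the handler
  should not itself call back into malloc/free.

    static void count_used(void* start, void* end, size_t used_bytes,
                           void* callback_arg) {
      (void)start; (void)end;
      *(size_t*)callback_arg += used_bytes;   // accumulate in-use bytes
    }

    size_t total_used = 0;
    dlmalloc_inspect_all(count_used, &total_used);
*/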
int dlmalloc_trim(size_t pad) {
  int result = 0;
  ensure_initialization();
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}
size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

size_t dlmalloc_footprint_limit(void) {
  size_t maf = gm->footprint_limit;
  return maf == 0 ? MAX_SIZE_T : maf;
}
size_t dlmalloc_set_footprint_limit(size_t bytes) {
  size_t result;  /* invert sense of 0 */
  if (bytes == 0)
    result = granularity_align(1); /* Use minimal size */
  if (bytes == MAX_SIZE_T)
    result = 0;                    /* disable */
  else
    result = granularity_align(bytes);
  return gm->footprint_limit = result;
}
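/*
  Illustrative only: a hedged sketch of capping the heap footprint.  The
  argument is rounded up to a granularity multiple; 0 requests the
  minimum possible limit and MAX_SIZE_T disables the limit again.

    size_t granted = dlmalloc_set_footprint_limit((size_t)16 * 1024 * 1024);
    // granted holds the aligned limit now in effect
    dlmalloc_set_footprint_limit(MAX_SIZE_T);   // remove the cap
*/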
#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

#if !NO_MALLOC_STATS
void dlmalloc_stats() {
  internal_malloc_stats(gm);
}
#endif /* NO_MALLOC_STATS */
int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

#endif /* !ONLY_MSPACES */
/* ----------------------------- user mspaces ---------------------------- */
static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  (void)INITIAL_LOCK(&m->mutex);
  msp->head = (msize|INUSE_BITS);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->release_checks = MAX_RELEASE_CHECK_RATE;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}
mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      m->seg.sflags = USE_MMAP_BIT;
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}
mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize;
  ensure_initialization();
  msize = pad_request(sizeof(struct malloc_state));
  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    m->seg.sflags = EXTERN_BIT;
    set_lock(m, locked);
  }
  return (mspace)m;
}
int mspace_track_large_chunks(mspace msp, int enable) {
  int ret = 0;
  mstate ms = (mstate)msp;
  if (!PREACTION(ms)) {
    if (!use_mmap(ms)) {
      ret = 1;
    }
    if (!enable) {
      enable_mmap(ms);
    } else {
      disable_mmap(ms);
    }
    POSTACTION(ms);
  }
  return ret;
}
size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = sp->sflags;
      (void)base; /* placate people compiling -Wunused-variable */
      sp = sp->next;
      if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms, ms);
  }
  return freed;
}
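/*
  Illustrative only: a hedged sketch of the typical mspace life cycle
  using the entry points in this section.  A capacity of 0 lets the
  space start at the default granularity and grow on demand.

    mspace arena = create_mspace(0, 1);           // locked, growable space
    if (arena != 0) {
      void* p = mspace_malloc(arena, 256);        // defined below
      mspace_free(arena, p);
      size_t released = destroy_mspace(arena);    // unmaps remaining segments
      (void)released;
    }
*/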
/*
  mspace versions of routines are near-clones of the global
  versions. This is not so nice but better than the alternatives.
*/
void* mspace_malloc(mspace msp, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (!PREACTION(ms)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = ms->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(ms, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}
void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    (void)msp; /* placate people compiling -Wunused */
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = mspace_malloc(msp, bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    mspace_free(msp, oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = (mstate)msp;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = mspace_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          mspace_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}
void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = (mstate)msp;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      (void)msp; /* placate people compiling -Wunused */
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (alignment <= MALLOC_ALIGNMENT)
    return mspace_malloc(msp, bytes);
  return internal_memalign(ms, alignment, bytes);
}
void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}
size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
  return internal_bulk_free((mstate)msp, array, nelem);
}
#if MALLOC_INSPECT_ALL
void mspace_inspect_all(mspace msp,
                        void(*handler)(void *start,
                                       void *end,
                                       size_t used_bytes,
                                       void* callback_arg),
                        void* arg) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      internal_inspect_all(ms, handler, arg);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* MALLOC_INSPECT_ALL */
int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
#if !NO_MALLOC_STATS
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* NO_MALLOC_STATS */
size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
size_t mspace_footprint_limit(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    size_t maf = ms->footprint_limit;
    result = (maf == 0) ? MAX_SIZE_T : maf;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (bytes == 0)
      result = granularity_align(1); /* Use minimal size */
    if (bytes == MAX_SIZE_T)
      result = 0;                    /* disable */
    else
      result = granularity_align(bytes);
    ms->footprint_limit = result;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}
#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */
size_t mspace_usable_size(const void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}
int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines.

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.
  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
        return (void *) MFAIL;
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver
  void osCleanupMem(void)
  {
    void **ptr;
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }
*/
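/*
  Illustrative only (not part of the original distribution): a minimal
  sketch of a MORECORE built over a fixed static arena, in the spirit of
  the guidelines above, as might be used on a target with neither sbrk
  nor mmap.  The names static_arena_morecore and STATIC_ARENA_SIZE, and
  the suggested option settings, are illustrative assumptions.

      #define MORECORE static_arena_morecore
      #define MORECORE_CANNOT_TRIM
      #define HAVE_MMAP 0

  #define STATIC_ARENA_SIZE (1024 * 1024)
  static char static_arena[STATIC_ARENA_SIZE];
  static size_t arena_brk = 0;

  void* static_arena_morecore(int size)
  {
    if (size == 0)                      // report current break address
      return static_arena + arena_brk;
    if (size < 0)                       // trimming not supported
      return (void*) MFAIL;
    if ((size_t) size > STATIC_ARENA_SIZE - arena_brk)
      return (void*) MFAIL;             // arena exhausted
    else {
      void* result = static_arena + arena_brk;
      arena_brk += (size_t) size;       // extend contiguously
      return result;
    }
  }
*/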
/* -----------------------------------------------------------------------
History:
    v2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
      * fix bad comparison in dlposix_memalign
      * don't reuse adjusted asize in sys_alloc
      * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
      * reduce compiler warnings -- thanks to all who reported/suggested these

    v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
      * Always perform unlink checks unless INSECURE
      * Add posix_memalign.
      * Improve realloc to expand in more cases; expose realloc_in_place.
        Thanks to Peter Buhr for the suggestion.
      * Add footprint_limit, inspect_all, bulk_free. Thanks
        to Barry Hayes and others for the suggestions.
      * Internal refactorings to avoid calls while holding locks
      * Use non-reentrant locks by default. Thanks to Roland McGrath
        for the suggestion.
      * Small fixes to mspace_destroy, reset_on_error.
      * Various configuration extensions/changes. Thanks
        to all who contributed these.

    V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
      * Update Creative Commons URL
    V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
      * Use zeros instead of prev foot for is_mmapped
      * Add mspace_track_large_chunks; thanks to Jean Brouwers
      * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc; thanks to Wolfram Gloger.
      * Reentrant spin locks; thanks to Earl Chew and others
      * Win32 improvements; thanks to Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
        to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun 8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.
    V2.6.6 Sun Dec 5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
      * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
        (e.g. WIN32 platforms)
      * Cleanup header file inclusion for WIN32 platforms
      * Cleanup code to avoid Microsoft Visual C++ compiler complaints
      * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
        memory allocation routines
      * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
      * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
        usage of 'assert' in non-WIN32 code
      * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
        avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu
    V2.6.2 Tue Dec 5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list
    V2.6.1 Sat Dec 2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov 4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov 1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr 5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug 7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/