Check-in [d67884f6]

Overview
Comment: Removed dlmalloc.c (CVS 1705)
SHA1: d67884f6278a130c81bc75f472170eb6c799ddfa
User & Date: drh 2000-10-12 13:29:49
Context
2000-10-16 22:06  Added an interrupt capability (CVS 153)  check-in: f7ea08b9 user: drh tags: trunk
2000-10-12 13:29  Removed dlmalloc.c (CVS 1705)  check-in: d67884f6 user: drh tags: trunk
2000-10-11 19:30  Version 1.0.10 (CVS 492)  check-in: e0c9e80b user: drh tags: trunk
Changes

Deleted contrib/dlmalloc.c.

     1         -/* ---------- To make a malloc.h, start cutting here ------------ */
     2         -
     3         -/* 
     4         -  A version of malloc/free/realloc written by Doug Lea and released to the 
     5         -  public domain.  Send questions/comments/complaints/performance data
     6         -  to dl@cs.oswego.edu
     7         -
     8         -* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)
     9         -  
    10         -   Note: There may be an updated version of this malloc obtainable at
    11         -           ftp://g.oswego.edu/pub/misc/malloc.c
    12         -         Check before installing!
    13         -
    14         -* Why use this malloc?
    15         -
    16         -  This is not the fastest, most space-conserving, most portable, or
    17         -  most tunable malloc ever written. However it is among the fastest
    18         -  while also being among the most space-conserving, portable and tunable.
    19         -  Consistent balance across these factors results in a good general-purpose 
    20         -  allocator. For a high-level description, see 
    21         -     http://g.oswego.edu/dl/html/malloc.html
    22         -
    23         -* Synopsis of public routines
    24         -
    25         -  (Much fuller descriptions are contained in the program documentation below.)
    26         -
    27         -  malloc(size_t n);
    28         -     Return a pointer to a newly allocated chunk of at least n bytes, or null
    29         -     if no space is available.
    30         -  free(Void_t* p);
    31         -     Release the chunk of memory pointed to by p, or no effect if p is null.
    32         -  realloc(Void_t* p, size_t n);
    33         -     Return a pointer to a chunk of size n that contains the same data
    34         -     as does chunk p up to the minimum of (n, p's size) bytes, or null
    35         -     if no space is available. The returned pointer may or may not be
    36         -     the same as p. If p is null, equivalent to malloc.  Unless the
    37         -     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
    38         -     size argument of zero (re)allocates a minimum-sized chunk.
    39         -  memalign(size_t alignment, size_t n);
    40         -     Return a pointer to a newly allocated chunk of n bytes, aligned
    41         -     in accord with the alignment argument, which must be a power of
    42         -     two.
    43         -  valloc(size_t n);
    44         -     Equivalent to memalign(pagesize, n), where pagesize is the page
    45         -     size of the system (or as near to this as can be figured out from
    46         -     all the includes/defines below.)
    47         -  pvalloc(size_t n);
    48         -     Equivalent to valloc(minimum-page-that-holds(n)), that is,
    49         -     round up n to nearest pagesize.
    50         -  calloc(size_t unit, size_t quantity);
    51         -     Returns a pointer to quantity * unit bytes, with all locations
    52         -     set to zero.
    53         -  cfree(Void_t* p);
    54         -     Equivalent to free(p).
    55         -  malloc_trim(size_t pad);
    56         -     Release all but pad bytes of freed top-most memory back 
    57         -     to the system. Return 1 if successful, else 0.
    58         -  malloc_usable_size(Void_t* p);
    59         -     Report the number of usable allocated bytes associated with allocated
    60         -     chunk p. This may or may not report more bytes than were requested,
    61         -     due to alignment and minimum size constraints.
    62         -  malloc_stats();
    63         -     Prints brief summary statistics on stderr.
    64         -  mallinfo()
    65         -     Returns (by copy) a struct containing various summary statistics.
    66         -  mallopt(int parameter_number, int parameter_value)
    67         -     Changes one of the tunable parameters described below. Returns
    68         -     1 if successful in changing the parameter, else 0.
    69         -
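
The synopsis above maps directly onto ordinary usage. A minimal sketch, assuming the default unprefixed names and a hosted C environment:

    #include <stdio.h>
    #include <stdlib.h>   /* malloc/free/realloc/calloc as documented above */

    int main(void) {
      int *a = malloc(100 * sizeof *a);      /* at least 400 bytes, or NULL  */
      if (a == NULL) return 1;
      int *b = realloc(a, 200 * sizeof *b);  /* may move; old contents kept  */
      if (b == NULL) { free(a); return 1; }  /* on failure, a is still valid */
      double *z = calloc(50, sizeof *z);     /* 50 zeroed doubles            */
      free(z);
      free(b);
      return 0;
    }
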
    70         -* Vital statistics:
    71         -
    72         -  Alignment:                            8-byte
    73         -       8 byte alignment is currently hardwired into the design.  This
    74         -       seems to suffice for all current machines and C compilers.
    75         -
    76         -  Assumed pointer representation:       4 or 8 bytes
    77         -       Code for 8-byte pointers is untested by me but has worked
    78         -       reliably by Wolfram Gloger, who contributed most of the
    79         -       changes supporting this.
    80         -
    81         -  Assumed size_t  representation:       4 or 8 bytes
    82         -       Note that size_t is allowed to be 4 bytes even if pointers are 8.        
    83         -
    84         -  Minimum overhead per allocated chunk: 4 or 8 bytes
    85         -       Each malloced chunk has a hidden overhead of 4 bytes holding size
    86         -       and status information.  
    87         -
    88         -  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
    89         -                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)
    90         -                                     
    91         -       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
    92         -       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
    93         -       needed; 4 (8) for a trailing size field
    94         -       and 8 (16) bytes for free list pointers. Thus, the minimum
    95         -       allocatable size is 16/24/32 bytes.
    96         -
    97         -       Even a request for zero bytes (i.e., malloc(0)) returns a
    98         -       pointer to something of the minimum allocatable size.
    99         -
   100         -  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
   101         -                          8-byte size_t: 2^63 - 16 bytes
   102         -
   103         -       It is assumed that (possibly signed) size_t bit values suffice to
   104         -       represent chunk sizes. `Possibly signed' is due to the fact
   105         -       that `size_t' may be defined on a system as either a signed or
   106         -       an unsigned type. To be conservative, values that would appear
   107         -       as negative numbers are avoided.  
   108         -       Requests for sizes with a negative sign bit when the request
   109         -       size is treated as a long will return null.
   110         -
   111         -  Maximum overhead wastage per allocated chunk: normally 15 bytes
   112         -
   113         -       Alignment demands, plus the minimum allocatable size restriction
   114         -       make the normal worst-case wastage 15 bytes (i.e., up to 15
   115         -       more bytes will be allocated than were requested in malloc), with 
   116         -       two exceptions:
   117         -         1. Because requests for zero bytes allocate non-zero space,
   118         -            the worst case wastage for a request of zero bytes is 24 bytes.
   119         -         2. For requests >= mmap_threshold that are serviced via
   120         -            mmap(), the worst case wastage is 8 bytes plus the remainder
   121         -            from a system page (the minimal mmap unit); typically 4096 bytes.
   122         -
   123         -* Limitations
   124         -
   125         -    Here are some features that are NOT currently supported
   126         -
   127         -    * No user-definable hooks for callbacks and the like.
   128         -    * No automated mechanism for fully checking that all accesses
   129         -      to malloced memory stay within their bounds.
   130         -    * No support for compaction.
   131         -
   132         -* Synopsis of compile-time options:
   133         -
   134         -    People have reported using previous versions of this malloc on all
   135         -    versions of Unix, sometimes by tweaking some of the defines
   136         -    below. It has been tested most extensively on Solaris and
   137         -    Linux. It is also reported to work on WIN32 platforms.
   138         -    People have also reported adapting this malloc for use in
   139         -    stand-alone embedded systems.
   140         -
   141         -    The implementation is in straight, hand-tuned ANSI C.  Among other
   142         -    consequences, it uses a lot of macros.  Because of this, to be at
   143         -    all usable, this code should be compiled using an optimizing compiler
   144         -    (for example gcc -O2) that can simplify expressions and control
   145         -    paths.
   146         -
   147         -  __STD_C                  (default: derived from C compiler defines)
   148         -     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
   149         -     a C compiler sufficiently close to ANSI to get away with it.
   150         -  DEBUG                    (default: NOT defined)
   151         -     Define to enable debugging. Adds fairly extensive assertion-based 
   152         -     checking to help track down memory errors, but noticeably slows down
   153         -     execution.
   154         -  REALLOC_ZERO_BYTES_FREES (default: NOT defined) 
   155         -     Define this if you think that realloc(p, 0) should be equivalent
   156         -     to free(p). Otherwise, since malloc returns a unique pointer for
   157         -     malloc(0), so does realloc(p, 0).
   158         -  HAVE_MEMCPY               (default: defined)
   159         -     Define if you are not otherwise using ANSI STD C, but still 
   160         -     have memcpy and memset in your C library and want to use them.
   161         -     Otherwise, simple internal versions are supplied.
   162         -  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
   163         -     Define as 1 if you want the C library versions of memset and
   164         -     memcpy called in realloc and calloc (otherwise macro versions are used). 
   165         -     At least on some platforms, the simple macro versions usually
   166         -     outperform libc versions.
   167         -  HAVE_MMAP                 (default: defined as 1)
   168         -     Define to non-zero to optionally make malloc() use mmap() to
   169         -     allocate very large blocks.  
   170         -  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
   171         -     Define to non-zero to optionally make realloc() use mremap() to
   172         -     reallocate very large blocks.  
   173         -  malloc_getpagesize        (default: derived from system #includes)
   174         -     Either a constant or routine call returning the system page size.
   175         -  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined) 
   176         -     Optionally define if you are on a system with a /usr/include/malloc.h
   177         -     that declares struct mallinfo. It is not at all necessary to
   178         -     define this even if you have such a file, but doing so will ensure consistency.
   179         -  INTERNAL_SIZE_T           (default: size_t)
   180         -     Define to a 32-bit type (probably `unsigned int') if you are on a 
   181         -     64-bit machine, yet do not want or need to allow malloc requests of 
   182         -     greater than 2^31 to be handled. This saves space, especially for
   183         -     very small chunks.
   184         -  INTERNAL_LINUX_C_LIB      (default: NOT defined)
   185         -     Defined only when compiled as part of Linux libc.
   186         -     Also note that there is some odd internal name-mangling via defines
   187         -     (for example, internally, `malloc' is named `mALLOc') needed
   188         -     when compiling in this case. These look funny but don't otherwise
   189         -     affect anything.
   190         -  WIN32                     (default: undefined)
   191         -     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
   192         -  LACKS_UNISTD_H            (default: undefined if not WIN32)
   193         -     Define this if your system does not have a <unistd.h>.
   194         -  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
   195         -     Define this if your system does not have a <sys/param.h>.
   196         -  MORECORE                  (default: sbrk)
   197         -     The name of the routine to call to obtain more memory from the system.
   198         -  MORECORE_FAILURE          (default: -1)
   199         -     The value returned upon failure of MORECORE.
   200         -  MORECORE_CLEARS           (default 1)
   201         -     True (1) if the routine mapped to MORECORE zeroes out memory (which
   202         -     holds for sbrk).
   203         -  DEFAULT_TRIM_THRESHOLD
   204         -  DEFAULT_TOP_PAD       
   205         -  DEFAULT_MMAP_THRESHOLD
   206         -  DEFAULT_MMAP_MAX      
   207         -     Default values of tunable parameters (described in detail below)
   208         -     controlling interaction with host system routines (sbrk, mmap, etc).
   209         -     These values may also be changed dynamically via mallopt(). The
   210         -     preset defaults are those that give best performance for typical
   211         -     programs/systems.
   212         -  USE_DL_PREFIX             (default: undefined)
   213         -     Prefix all public routines with the string 'dl'.  Useful to
   214         -     quickly avoid procedure declaration conflicts and linker symbol
   215         -     conflicts with existing memory allocation routines.
   216         -
   217         -
   218         -*/
   219         -
   220         -
   221         -
   222         -
   223         -/* Preliminaries */
   224         -
   225         -#ifndef __STD_C
   226         -#ifdef __STDC__
   227         -#define __STD_C     1
   228         -#else
   229         -#if __cplusplus
   230         -#define __STD_C     1
   231         -#else
   232         -#define __STD_C     0
   233         -#endif /*__cplusplus*/
   234         -#endif /*__STDC__*/
   235         -#endif /*__STD_C*/
   236         -
   237         -#ifndef Void_t
   238         -#if (__STD_C || defined(WIN32))
   239         -#define Void_t      void
   240         -#else
   241         -#define Void_t      char
   242         -#endif
   243         -#endif /*Void_t*/
   244         -
   245         -#if __STD_C
   246         -#include <stddef.h>   /* for size_t */
   247         -#else
   248         -#include <sys/types.h>
   249         -#endif
   250         -
   251         -#ifdef __cplusplus
   252         -extern "C" {
   253         -#endif
   254         -
   255         -#include <stdio.h>    /* needed for malloc_stats */
   256         -
   257         -
   258         -/*
   259         -  Compile-time options
   260         -*/
   261         -
   262         -
   263         -/*
   264         -    Debugging:
   265         -
   266         -    Because freed chunks may be overwritten with link fields, this
   267         -    malloc will often die when freed memory is overwritten by user
   268         -    programs.  This can be very effective (albeit in an annoying way)
   269         -    in helping track down dangling pointers.
   270         -
   271         -    If you compile with -DDEBUG, a number of assertion checks are
   272         -    enabled that will catch more memory errors. You probably won't be
   273         -    able to make much sense of the actual assertion errors, but they
   274         -    should help you locate incorrectly overwritten memory.  The
   275         -    checking is fairly extensive, and will slow down execution
   276         -    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
   277         -    attempt to check every non-mmapped allocated and free chunk in the
   278         -    course of computing the summaries. (By nature, mmapped regions
   279         -    cannot be checked very much automatically.)
   280         -
   281         -    Setting DEBUG may also be helpful if you are trying to modify 
   282         -    this code. The assertions in the check routines spell out in more 
   283         -    detail the assumptions and invariants underlying the algorithms.
   284         -
   285         -*/
   286         -
   287         -#if DEBUG 
   288         -#include <assert.h>
   289         -#else
   290         -#define assert(x) ((void)0)
   291         -#endif
   292         -
   293         -
   294         -/*
   295         -  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
   296         -  of chunk sizes. On a 64-bit machine, you can reduce malloc
   297         -  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
   298         -  at the expense of not being able to handle requests greater than
   299         -  2^31. This limitation is hardly ever a concern; you are encouraged
   300         -  to set this. However, the default version is the same as size_t.
   301         -*/
   302         -
   303         -#ifndef INTERNAL_SIZE_T
   304         -#define INTERNAL_SIZE_T size_t
   305         -#endif
   306         -
   307         -/*
   308         -  REALLOC_ZERO_BYTES_FREES should be set if a call to
   309         -  realloc with zero bytes should be the same as a call to free.
   310         -  Some people think it should. Otherwise, since this malloc
   311         -  returns a unique pointer for malloc(0), so does realloc(p, 0). 
   312         -*/
   313         -
   314         -
   315         -/*   #define REALLOC_ZERO_BYTES_FREES */
   316         -
   317         -
   318         -/* 
   319         -  WIN32 causes an emulation of sbrk to be compiled in;
   320         -  mmap-based options are not currently supported in WIN32.
   321         -*/
   322         -
   323         -/* #define WIN32 */
   324         -#ifdef WIN32
   325         -#define MORECORE wsbrk
   326         -#define HAVE_MMAP 0
   327         -
   328         -#define LACKS_UNISTD_H
   329         -#define LACKS_SYS_PARAM_H
   330         -
   331         -/*
   332         -  Include 'windows.h' to get the necessary declarations for the
   333         -  Microsoft Visual C++ data structures and routines used in the 'sbrk'
   334         -  emulation.
   335         -  
   336         -  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
   337         -  Visual C++ header files are included.
   338         -*/
   339         -#define WIN32_LEAN_AND_MEAN
   340         -#include <windows.h>
   341         -#endif
   342         -
   343         -
   344         -/*
   345         -  HAVE_MEMCPY should be defined if you are not otherwise using
   346         -  ANSI STD C, but still have memcpy and memset in your C library
   347         -  and want to use them in calloc and realloc. Otherwise simple
   348         -  macro versions are defined here.
   349         -
   350         -  USE_MEMCPY should be defined as 1 if you actually want to
   351         -  have memset and memcpy called. People report that the macro
   352         -  versions are often enough faster than libc versions on many
   353         -  systems that it is better to use them. 
   354         -
   355         -*/
   356         -
   357         -#define HAVE_MEMCPY 
   358         -
   359         -#ifndef USE_MEMCPY
   360         -#ifdef HAVE_MEMCPY
   361         -#define USE_MEMCPY 1
   362         -#else
   363         -#define USE_MEMCPY 0
   364         -#endif
   365         -#endif
   366         -
   367         -#if (__STD_C || defined(HAVE_MEMCPY)) 
   368         -
   369         -#if __STD_C
   370         -void* memset(void*, int, size_t);
   371         -void* memcpy(void*, const void*, size_t);
   372         -#else
   373         -#ifdef WIN32
   374         -// On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
   375         -// 'windows.h'
   376         -#else
   377         -Void_t* memset();
   378         -Void_t* memcpy();
   379         -#endif
   380         -#endif
   381         -#endif
   382         -
   383         -#if USE_MEMCPY
   384         -
   385         -/* The following macros are only invoked with (2n+1)-multiples of
   386         -   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   387         -   for fast inline execution when n is small. */
   388         -
   389         -#define MALLOC_ZERO(charp, nbytes)                                            \
   390         -do {                                                                          \
   391         -  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
   392         -  if(mzsz <= 9*sizeof(mzsz)) {                                                \
   393         -    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
   394         -    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
   395         -                                     *mz++ = 0;                               \
   396         -      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
   397         -                                     *mz++ = 0;                               \
   398         -        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
   399         -                                     *mz++ = 0; }}}                           \
   400         -                                     *mz++ = 0;                               \
   401         -                                     *mz++ = 0;                               \
   402         -                                     *mz   = 0;                               \
   403         -  } else memset((charp), 0, mzsz);                                            \
   404         -} while(0)
   405         -
   406         -#define MALLOC_COPY(dest,src,nbytes)                                          \
   407         -do {                                                                          \
   408         -  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
   409         -  if(mcsz <= 9*sizeof(mcsz)) {                                                \
   410         -    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
   411         -    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
   412         -    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
   413         -                                     *mcdst++ = *mcsrc++;                     \
   414         -      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
   415         -                                     *mcdst++ = *mcsrc++;                     \
   416         -        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
   417         -                                     *mcdst++ = *mcsrc++; }}}                 \
   418         -                                     *mcdst++ = *mcsrc++;                     \
   419         -                                     *mcdst++ = *mcsrc++;                     \
   420         -                                     *mcdst   = *mcsrc  ;                     \
   421         -  } else memcpy(dest, src, mcsz);                                             \
   422         -} while(0)
   423         -
   424         -#else /* !USE_MEMCPY */
   425         -
   426         -/* Use Duff's device for good zeroing/copying performance. */
   427         -
   428         -#define MALLOC_ZERO(charp, nbytes)                                            \
   429         -do {                                                                          \
   430         -  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
   431         -  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
   432         -  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
   433         -  switch (mctmp) {                                                            \
   434         -    case 0: for(;;) { *mzp++ = 0;                                             \
   435         -    case 7:           *mzp++ = 0;                                             \
   436         -    case 6:           *mzp++ = 0;                                             \
   437         -    case 5:           *mzp++ = 0;                                             \
   438         -    case 4:           *mzp++ = 0;                                             \
   439         -    case 3:           *mzp++ = 0;                                             \
   440         -    case 2:           *mzp++ = 0;                                             \
   441         -    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
   442         -  }                                                                           \
   443         -} while(0)
   444         -
   445         -#define MALLOC_COPY(dest,src,nbytes)                                          \
   446         -do {                                                                          \
   447         -  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
   448         -  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
   449         -  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
   450         -  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
   451         -  switch (mctmp) {                                                            \
   452         -    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
   453         -    case 7:           *mcdst++ = *mcsrc++;                                    \
   454         -    case 6:           *mcdst++ = *mcsrc++;                                    \
   455         -    case 5:           *mcdst++ = *mcsrc++;                                    \
   456         -    case 4:           *mcdst++ = *mcsrc++;                                    \
   457         -    case 3:           *mcdst++ = *mcsrc++;                                    \
   458         -    case 2:           *mcdst++ = *mcsrc++;                                    \
   459         -    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
   460         -  }                                                                           \
   461         -} while(0)
   462         -
   463         -#endif
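
Either variant can be exercised on its own; a small sketch, keeping to the (2n+1)-multiple invariant noted above for the USE_MEMCPY versions:

    #include <assert.h>

    static void zero_copy_demo(void) {
      INTERNAL_SIZE_T src[9], dst[9];        /* 9 units: 2n+1 with n == 4 */
      MALLOC_ZERO(src, sizeof src);          /* unrolled or Duff's device */
      MALLOC_COPY(dst, src, sizeof dst);     /* word-at-a-time copy       */
      assert(dst[0] == 0 && dst[8] == 0);
    }
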
   464         -
   465         -
   466         -/*
   467         -  Define HAVE_MMAP to optionally make malloc() use mmap() to
   468         -  allocate very large blocks.  These will be returned to the
   469         -  operating system immediately after a free().
   470         -*/
   471         -
   472         -#ifndef HAVE_MMAP
   473         -#define HAVE_MMAP 1
   474         -#endif
   475         -
   476         -/*
   477         -  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
   478         -  large blocks.  This is currently only possible on Linux with
   479         -  kernel versions newer than 1.3.77.
   480         -*/
   481         -
   482         -#ifndef HAVE_MREMAP
   483         -#ifdef INTERNAL_LINUX_C_LIB
   484         -#define HAVE_MREMAP 1
   485         -#else
   486         -#define HAVE_MREMAP 0
   487         -#endif
   488         -#endif
   489         -
   490         -#if HAVE_MMAP
   491         -
   492         -#include <unistd.h>
   493         -#include <fcntl.h>
   494         -#include <sys/mman.h>
   495         -
   496         -#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
   497         -#define MAP_ANONYMOUS MAP_ANON
   498         -#endif
   499         -
   500         -#endif /* HAVE_MMAP */
   501         -
   502         -/*
   503         -  Access to system page size. To the extent possible, this malloc
   504         -  manages memory from the system in page-size units.
   505         -  
   506         -  The following mechanics for getpagesize were adapted from 
   507         -  bsd/gnu getpagesize.h 
   508         -*/
   509         -
   510         -#ifndef LACKS_UNISTD_H
   511         -#  include <unistd.h>
   512         -#endif
   513         -
   514         -#ifndef malloc_getpagesize
   515         -#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
   516         -#    ifndef _SC_PAGE_SIZE
   517         -#      define _SC_PAGE_SIZE _SC_PAGESIZE
   518         -#    endif
   519         -#  endif
   520         -#  ifdef _SC_PAGE_SIZE
   521         -#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
   522         -#  else
   523         -#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
   524         -       extern size_t getpagesize();
   525         -#      define malloc_getpagesize getpagesize()
   526         -#    else
   527         -#      ifdef WIN32
   528         -#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
   529         -#      else
   530         -#        ifndef LACKS_SYS_PARAM_H
   531         -#          include <sys/param.h>
   532         -#        endif
   533         -#        ifdef EXEC_PAGESIZE
   534         -#          define malloc_getpagesize EXEC_PAGESIZE
   535         -#        else
   536         -#          ifdef NBPG
   537         -#            ifndef CLSIZE
   538         -#              define malloc_getpagesize NBPG
   539         -#            else
   540         -#              define malloc_getpagesize (NBPG * CLSIZE)
   541         -#            endif
   542         -#          else 
   543         -#            ifdef NBPC
   544         -#              define malloc_getpagesize NBPC
   545         -#            else
   546         -#              ifdef PAGESIZE
   547         -#                define malloc_getpagesize PAGESIZE
   548         -#              else
   549         -#                define malloc_getpagesize (4096) /* just guess */
   550         -#              endif
   551         -#            endif
   552         -#          endif 
   553         -#        endif
   554         -#      endif 
   555         -#    endif
   556         -#  endif
   557         -#endif
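
Whichever branch of the cascade wins, the result is used the same way; a sketch:

    #include <stdio.h>

    static void show_pagesize(void) {
      /* may expand to sysconf(_SC_PAGE_SIZE), getpagesize(), or a constant */
      printf("page size: %lu\n", (unsigned long) malloc_getpagesize);
    }
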
   558         -
   559         -
   560         -
   561         -/*
   562         -
   563         -  This version of malloc supports the standard SVID/XPG mallinfo
   564         -  routine that returns a struct containing the same kind of
   565         -  information you can get from malloc_stats. It should work on
   566         -  any SVID/XPG compliant system that has a /usr/include/malloc.h
   567         -  defining struct mallinfo. (If you'd like to install such a thing
   568         -  yourself, cut out the preliminary declarations as described above
   569         -  and below and save them in a malloc.h file. But there's no
   570         -  compelling reason to bother to do this.)
   571         -
   572         -  The main declaration needed is the mallinfo struct that is returned
   573         -  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
   574         -  bunch of fields, most of which are not even meaningful in this
   575         -  version of malloc. Some of these fields are instead filled by
   576         -  mallinfo() with other numbers that might possibly be of interest.
   577         -
   578         -  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
   579         -  /usr/include/malloc.h file that includes a declaration of struct
   580         -  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
   581         -  version is declared below.  These must be precisely the same for
   582         -  mallinfo() to work.
   583         -
   584         -*/
   585         -
   586         -/* #define HAVE_USR_INCLUDE_MALLOC_H */
   587         -
   588         -#if HAVE_USR_INCLUDE_MALLOC_H
   589         -#include "/usr/include/malloc.h"
   590         -#else
   591         -
   592         -/* SVID2/XPG mallinfo structure */
   593         -
   594         -struct mallinfo {
   595         -  int arena;    /* total space allocated from system */
   596         -  int ordblks;  /* number of non-inuse chunks */
   597         -  int smblks;   /* unused -- always zero */
   598         -  int hblks;    /* number of mmapped regions */
   599         -  int hblkhd;   /* total space in mmapped regions */
   600         -  int usmblks;  /* unused -- always zero */
   601         -  int fsmblks;  /* unused -- always zero */
   602         -  int uordblks; /* total allocated space */
   603         -  int fordblks; /* total non-inuse space */
   604         -  int keepcost; /* top-most, releasable (via malloc_trim) space */
   605         -};	
   606         -
   607         -/* SVID2/XPG mallopt options */
   608         -
   609         -#define M_MXFAST  1    /* UNUSED in this malloc */
   610         -#define M_NLBLKS  2    /* UNUSED in this malloc */
   611         -#define M_GRAIN   3    /* UNUSED in this malloc */
   612         -#define M_KEEP    4    /* UNUSED in this malloc */
   613         -
   614         -#endif
   615         -
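
A sketch of reading the struct (it is returned by copy, per the synopsis above, so nothing needs freeing):

    #include <stdio.h>

    static void print_mallinfo(void) {
      struct mallinfo mi = mallinfo();
      printf("arena=%d in-use=%d free=%d mmapped=%d bytes in %d regions\n",
             mi.arena, mi.uordblks, mi.fordblks, mi.hblkhd, mi.hblks);
    }
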
   616         -/* mallopt options that actually do something */
   617         -
   618         -#define M_TRIM_THRESHOLD    -1
   619         -#define M_TOP_PAD           -2
   620         -#define M_MMAP_THRESHOLD    -3
   621         -#define M_MMAP_MAX          -4
   622         -
   623         -
   624         -
   625         -#ifndef DEFAULT_TRIM_THRESHOLD
   626         -#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
   627         -#endif
   628         -
   629         -/*
   630         -    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory 
   631         -      to keep before releasing via malloc_trim in free().
   632         -
   633         -      Automatic trimming is mainly useful in long-lived programs.
   634         -      Because trimming via sbrk can be slow on some systems, and can
   635         -      sometimes be wasteful (in cases where programs immediately
   636         -      afterward allocate more large chunks) the value should be high
   637         -      enough so that your overall system performance would improve by
   638         -      releasing.  
   639         -
   640         -      The trim threshold and the mmap control parameters (see below)
   641         -      can be traded off with one another. Trimming and mmapping are
   642         -      two different ways of releasing unused memory back to the
   643         -      system. Between these two, it is often possible to keep
   644         -      system-level demands of a long-lived program down to a bare
   645         -      minimum. For example, in one test suite of sessions measuring
   646         -      the XF86 X server on Linux, using a trim threshold of 128K and a
   647         -      mmap threshold of 192K led to near-minimal long term resource
   648         -      consumption.  
   649         -
   650         -      If you are using this malloc in a long-lived program, it should
   651         -      pay to experiment with these values.  As a rough guide, you
   652         -      might set it to a value close to the average size of a process
   653         -      (program) running on your system.  Releasing this much memory
   654         -      would allow such a process to run in memory.  Generally, it's
   655         -      worth it to tune for trimming rather than memory mapping when a
   656         -      program undergoes phases where several large chunks are
   657         -      allocated and released in ways that can reuse each other's
   658         -      storage, perhaps mixed with phases where there are no such
   659         -      chunks at all.  And in well-behaved long-lived programs,
   660         -      controlling release of large blocks via trimming versus mapping
   661         -      is usually faster.
   662         -
   663         -      However, in most programs, these parameters serve mainly as
   664         -      protection against the system-level effects of carrying around
   665         -      massive amounts of unneeded memory. Since frequent calls to
   666         -      sbrk, mmap, and munmap otherwise degrade performance, the default
   667         -      parameters are set to relatively high values that serve only as
   668         -      safeguards.
   669         -
   670         -      The default trim value is high enough to cause trimming only in
   671         -      fairly extreme (by current memory consumption standards) cases.
   672         -      It must be greater than page size to have any useful effect.  To
   673         -      disable trimming completely, you can set it to (unsigned long)(-1).
   674         -
   675         -
   676         -*/
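
A sketch of tuning the threshold at runtime via mallopt (which returns 1 on success, per the synopsis above):

    #include <stdio.h>

    static void tune_trimming(void) {
      /* keep up to 256K of unused top-most memory before trimming in free() */
      if (mallopt(M_TRIM_THRESHOLD, 256 * 1024) != 1)
        fprintf(stderr, "could not set trim threshold\n");
      /* or disable trimming entirely, as noted above */
      mallopt(M_TRIM_THRESHOLD, -1);
    }
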
   677         -
   678         -
   679         -#ifndef DEFAULT_TOP_PAD
   680         -#define DEFAULT_TOP_PAD        (0)
   681         -#endif
   682         -
   683         -/*
   684         -    M_TOP_PAD is the amount of extra `padding' space to allocate or 
   685         -      retain whenever sbrk is called. It is used in two ways internally:
   686         -
   687         -      * When sbrk is called to extend the top of the arena to satisfy
   688         -        a new malloc request, this much padding is added to the sbrk
   689         -        request.
   690         -
   691         -      * When malloc_trim is called automatically from free(),
   692         -        it is used as the `pad' argument.
   693         -
   694         -      In both cases, the actual amount of padding is rounded 
   695         -      so that the end of the arena is always a system page boundary.
   696         -
   697         -      The main reason for using padding is to avoid calling sbrk so
   698         -      often. Having even a small pad greatly reduces the likelihood
   699         -      that nearly every malloc request during program start-up (or
   700         -      after trimming) will invoke sbrk, which needlessly wastes
   701         -      time. 
   702         -
   703         -      Automatic rounding-up to page-size units is normally sufficient
   704         -      to avoid measurable overhead, so the default is 0.  However, in
   705         -      systems where sbrk is relatively slow, it can pay to increase
   706         -      this value, at the expense of carrying around more memory than 
   707         -      the program needs.
   708         -
   709         -*/
   710         -
   711         -
   712         -#ifndef DEFAULT_MMAP_THRESHOLD
   713         -#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
   714         -#endif
   715         -
   716         -/*
   717         -
   718         -    M_MMAP_THRESHOLD is the request size threshold for using mmap() 
   719         -      to service a request. Requests of at least this size that cannot 
   720         -      be allocated using already-existing space will be serviced via mmap.  
   721         -      (If enough normal freed space already exists it is used instead.)
   722         -
   723         -      Using mmap segregates relatively large chunks of memory so that
   724         -      they can be individually obtained and released from the host
   725         -      system. A request serviced through mmap is never reused by any
   726         -      other request (at least not directly; the system may just so
   727         -      happen to remap successive requests to the same locations).
   728         -
   729         -      Segregating space in this way has the benefit that mmapped space
   730         -      can ALWAYS be individually released back to the system, which
   731         -      helps keep the system level memory demands of a long-lived
   732         -      program low. Mapped memory can never become `locked' between
   733         -      other chunks, as can happen with normally allocated chunks, which
   734         -      means that even trimming via malloc_trim would not release them.
   735         -
   736         -      However, it has the disadvantages that:
   737         -
   738         -         1. The space cannot be reclaimed, consolidated, and then
   739         -            used to service later requests, as happens with normal chunks. 
   740         -         2. It can lead to more wastage because of mmap page alignment
   741         -            requirements
   742         -         3. It causes malloc performance to be more dependent on host
   743         -            system memory management support routines which may vary in
   744         -            implementation quality and may impose arbitrary
   745         -            limitations. Generally, servicing a request via normal
   746         -            malloc steps is faster than going through a system's mmap.
   747         -
   748         -      All together, these considerations should lead you to use mmap
   749         -      only for relatively large requests.  
   750         -
   751         -
   752         -*/
   753         -
   754         -
   755         -
   756         -#ifndef DEFAULT_MMAP_MAX
   757         -#if HAVE_MMAP
   758         -#define DEFAULT_MMAP_MAX       (64)
   759         -#else
   760         -#define DEFAULT_MMAP_MAX       (0)
   761         -#endif
   762         -#endif
   763         -
   764         -/*
   765         -    M_MMAP_MAX is the maximum number of requests to simultaneously 
   766         -      service using mmap. This parameter exists because:
   767         -
   768         -         1. Some systems have a limited number of internal tables for
   769         -            use by mmap.
   770         -         2. In most systems, overreliance on mmap can degrade overall
   771         -            performance.
   772         -         3. If a program allocates many large regions, it is probably
   773         -            better off using normal sbrk-based allocation routines that
   774         -            can reclaim and reallocate normal heap memory. Using a
   775         -            small value allows transition into this mode after the
   776         -            first few allocations.
   777         -
   778         -      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
   779         -      the default value is 0, and attempts to set it to non-zero values
   780         -      in mallopt will fail.
   781         -*/
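
Together with M_MMAP_THRESHOLD above, a sketch of steering large requests toward or away from mmap:

    static void tune_mmap(void) {
      mallopt(M_MMAP_THRESHOLD, 1024 * 1024); /* mmap only requests >= 1MB      */
      mallopt(M_MMAP_MAX, 16);                /* at most 16 live mmapped chunks */
      /* mallopt(M_MMAP_MAX, 0) disables mmap use altogether */
    }
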
   782         -
   783         -
   784         -
   785         -
   786         -/* 
   787         -    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
   788         -      Useful to quickly avoid procedure declaration conflicts and linker
   789         -      symbol conflicts with existing memory allocation routines.
   790         -
   791         -*/
   792         -
   793         -/* #define USE_DL_PREFIX */
   794         -
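
With the prefix enabled, this allocator can coexist with the system one; a sketch, assuming the file was compiled with -DUSE_DL_PREFIX:

    #include <stdlib.h>           /* the system malloc/free */

    void *dlmalloc(size_t);       /* the prefixed routines from this file */
    void  dlfree(void*);

    static void side_by_side(void) {
      void *sys = malloc(32);     /* libc allocator */
      void *dl  = dlmalloc(32);   /* this allocator */
      dlfree(dl);
      free(sys);
    }
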
   795         -
   796         -
   797         -
   798         -/* 
   799         -
   800         -  Special defines for linux libc
   801         -
   802         -  Except when compiled using these special defines for Linux libc
   803         -  using weak aliases, this malloc is NOT designed to work in
   804         -  multithreaded applications.  No semaphores or other concurrency
   805         -  control are provided to ensure that multiple malloc or free calls
   806         -  don't run at the same time, which could be disastrous. A single
   807         -  semaphore could be used across malloc, realloc, and free (which is
   808         -  essentially the effect of the linux weak alias approach). It would
   809         -  be hard to obtain finer granularity.
   810         -
   811         -*/
   812         -
   813         -
   814         -#ifdef INTERNAL_LINUX_C_LIB
   815         -
   816         -#if __STD_C
   817         -
   818         -Void_t * __default_morecore_init (ptrdiff_t);
   819         -Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
   820         -
   821         -#else
   822         -
   823         -Void_t * __default_morecore_init ();
   824         -Void_t *(*__morecore)() = __default_morecore_init;
   825         -
   826         -#endif
   827         -
   828         -#define MORECORE (*__morecore)
   829         -#define MORECORE_FAILURE 0
   830         -#define MORECORE_CLEARS 1 
   831         -
   832         -#else /* INTERNAL_LINUX_C_LIB */
   833         -
   834         -#if __STD_C
   835         -extern Void_t*     sbrk(ptrdiff_t);
   836         -#else
   837         -extern Void_t*     sbrk();
   838         -#endif
   839         -
   840         -#ifndef MORECORE
   841         -#define MORECORE sbrk
   842         -#endif
   843         -
   844         -#ifndef MORECORE_FAILURE
   845         -#define MORECORE_FAILURE -1
   846         -#endif
   847         -
   848         -#ifndef MORECORE_CLEARS
   849         -#define MORECORE_CLEARS 1
   850         -#endif
   851         -
   852         -#endif /* INTERNAL_LINUX_C_LIB */
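
For the stand-alone embedded use mentioned earlier, MORECORE can be pointed at a static arena instead of sbrk. A minimal sketch under that assumption (static_sbrk and its arena are hypothetical names; shrinking is not supported):

    /* compile with, e.g., -DMORECORE=static_sbrk -DMORECORE_CLEARS=1 */
    #include <stddef.h>             /* ptrdiff_t, size_t */

    static char   arena[1 << 20];   /* 1MB heap; zero-initialized (BSS) */
    static size_t arena_used = 0;

    void* static_sbrk(ptrdiff_t n)
    {
      char* p;
      if (n < 0 || arena_used + (size_t)n > sizeof arena)
        return (void*)MORECORE_FAILURE;   /* -1, per the default above */
      p = arena + arena_used;             /* current "break" */
      arena_used += (size_t)n;
      return p;
    }
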
   853         -
   854         -#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
   855         -
   856         -#define cALLOc		__libc_calloc
   857         -#define fREe		__libc_free
   858         -#define mALLOc		__libc_malloc
   859         -#define mEMALIGn	__libc_memalign
   860         -#define rEALLOc		__libc_realloc
   861         -#define vALLOc		__libc_valloc
   862         -#define pvALLOc		__libc_pvalloc
   863         -#define mALLINFo	__libc_mallinfo
   864         -#define mALLOPt		__libc_mallopt
   865         -
   866         -#pragma weak calloc = __libc_calloc
   867         -#pragma weak free = __libc_free
   868         -#pragma weak cfree = __libc_free
   869         -#pragma weak malloc = __libc_malloc
   870         -#pragma weak memalign = __libc_memalign
   871         -#pragma weak realloc = __libc_realloc
   872         -#pragma weak valloc = __libc_valloc
   873         -#pragma weak pvalloc = __libc_pvalloc
   874         -#pragma weak mallinfo = __libc_mallinfo
   875         -#pragma weak mallopt = __libc_mallopt
   876         -
   877         -#else
   878         -
   879         -#ifdef USE_DL_PREFIX
   880         -#define cALLOc		dlcalloc
   881         -#define fREe		dlfree
   882         -#define mALLOc		dlmalloc
   883         -#define mEMALIGn	dlmemalign
   884         -#define rEALLOc		dlrealloc
   885         -#define vALLOc		dlvalloc
   886         -#define pvALLOc		dlpvalloc
   887         -#define mALLINFo	dlmallinfo
   888         -#define mALLOPt		dlmallopt
   889         -#else /* USE_DL_PREFIX */
   890         -#define cALLOc		calloc
   891         -#define fREe		free
   892         -#define mALLOc		malloc
   893         -#define mEMALIGn	memalign
   894         -#define rEALLOc		realloc
   895         -#define vALLOc		valloc
   896         -#define pvALLOc		pvalloc
   897         -#define mALLINFo	mallinfo
   898         -#define mALLOPt		mallopt
   899         -#endif /* USE_DL_PREFIX */
   900         -
   901         -#endif
   902         -
   903         -/* Public routines */
   904         -
   905         -#if __STD_C
   906         -
   907         -Void_t* mALLOc(size_t);
   908         -void    fREe(Void_t*);
   909         -Void_t* rEALLOc(Void_t*, size_t);
   910         -Void_t* mEMALIGn(size_t, size_t);
   911         -Void_t* vALLOc(size_t);
   912         -Void_t* pvALLOc(size_t);
   913         -Void_t* cALLOc(size_t, size_t);
   914         -void    cfree(Void_t*);
   915         -int     malloc_trim(size_t);
   916         -size_t  malloc_usable_size(Void_t*);
   917         -void    malloc_stats();
   918         -int     mALLOPt(int, int);
   919         -struct mallinfo mALLINFo(void);
   920         -#else
   921         -Void_t* mALLOc();
   922         -void    fREe();
   923         -Void_t* rEALLOc();
   924         -Void_t* mEMALIGn();
   925         -Void_t* vALLOc();
   926         -Void_t* pvALLOc();
   927         -Void_t* cALLOc();
   928         -void    cfree();
   929         -int     malloc_trim();
   930         -size_t  malloc_usable_size();
   931         -void    malloc_stats();
   932         -int     mALLOPt();
   933         -struct mallinfo mALLINFo();
   934         -#endif
   935         -
   936         -
   937         -#ifdef __cplusplus
   938         -};  /* end of extern "C" */
   939         -#endif
   940         -
   941         -/* ---------- To make a malloc.h, end cutting here ------------ */
   942         -
   943         -
   944         -/* 
   945         -  Emulation of sbrk for WIN32
   946         -  All code within the ifdef WIN32 is untested by me.
   947         -
   948         -  Thanks to Martin Fong and others for supplying this.
   949         -*/
   950         -
   951         -
   952         -#ifdef WIN32
   953         -
   954         -#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
   955         -~(malloc_getpagesize-1))
   956         -#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
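
Both macros round up with the usual add-and-mask trick; with the 4096-byte WIN32 malloc_getpagesize defined above:

    #include <assert.h>

    static void align_demo(void) {
      assert(AlignPage(1)      == 4096);    /* rounds up to one page      */
      assert(AlignPage(4096)   == 4096);    /* already aligned: unchanged */
      assert(AlignPage(4097)   == 8192);    /* spills into the next page  */
      assert(AlignPage64K(1)   == 0x10000); /* 64K granularity variant    */
    }
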
   957         -
   958         -/* reserve 64MB to ensure large contiguous space */
   959         -#define RESERVED_SIZE (1024*1024*64)
   960         -#define NEXT_SIZE (2048*1024)
   961         -#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
   962         -
   963         -struct GmListElement;
   964         -typedef struct GmListElement GmListElement;
   965         -
   966         -struct GmListElement 
   967         -{
   968         -	GmListElement* next;
   969         -	void* base;
   970         -};
   971         -
   972         -static GmListElement* head = 0;
   973         -static unsigned int gNextAddress = 0;
   974         -static unsigned int gAddressBase = 0;
   975         -static unsigned int gAllocatedSize = 0;
   976         -
   977         -static
   978         -GmListElement* makeGmListElement (void* bas)
   979         -{
   980         -	GmListElement* this;
   981         -	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
   982         -	assert (this);
   983         -	if (this)
   984         -	{
   985         -		this->base = bas;
   986         -		this->next = head;
   987         -		head = this;
   988         -	}
   989         -	return this;
   990         -}
   991         -
   992         -void gcleanup ()
   993         -{
   994         -	BOOL rval;
   995         -	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
   996         -	if (gAddressBase && (gNextAddress - gAddressBase))
   997         -	{
   998         -		rval = VirtualFree ((void*)gAddressBase, 
   999         -							gNextAddress - gAddressBase, 
  1000         -							MEM_DECOMMIT);
  1001         -        assert (rval);
  1002         -	}
  1003         -	while (head)
  1004         -	{
  1005         -		GmListElement* next = head->next;
  1006         -		rval = VirtualFree (head->base, 0, MEM_RELEASE);
  1007         -		assert (rval);
  1008         -		LocalFree (head);
  1009         -		head = next;
  1010         -	}
  1011         -}
  1012         -		
  1013         -static
  1014         -void* findRegion (void* start_address, unsigned long size)
  1015         -{
  1016         -	MEMORY_BASIC_INFORMATION info;
  1017         -	if (size >= TOP_MEMORY) return NULL;
  1018         -
  1019         -	while ((unsigned long)start_address + size < TOP_MEMORY)
  1020         -	{
  1021         -		VirtualQuery (start_address, &info, sizeof (info));
  1022         -		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
  1023         -			return start_address;
  1024         -		else
  1025         -		{
  1026         -			// Requested region is not available so see if the
  1027         -			// next region is available.  Set 'start_address'
  1028         -			// to the next region and call 'VirtualQuery()'
  1029         -			// again.
  1030         -
  1031         -			start_address = (char*)info.BaseAddress + info.RegionSize; 
  1032         -
  1033         -			// Make sure we start looking for the next region
  1034         -			// on the *next* 64K boundary.  Otherwise, even if
  1035         -			// the new region is free according to
  1036         -			// 'VirtualQuery()', the subsequent call to
  1037         -			// 'VirtualAlloc()' (which follows the call to
  1038         -			// this routine in 'wsbrk()') will round *down*
  1039         -			// the requested address to a 64K boundary which
  1040         -			// we already know is an address in the
  1041         -			// unavailable region.  Thus, the subsequent call
  1042         -			// to 'VirtualAlloc()' will fail and bring us back
  1043         -			// here, causing us to go into an infinite loop.
  1044         -
  1045         -			start_address =
  1046         -				(void *) AlignPage64K((unsigned long) start_address);
  1047         -		}
  1048         -	}
  1049         -	return NULL;
  1050         -	
  1051         -}
  1052         -
  1053         -
  1054         -void* wsbrk (long size)
  1055         -{
  1056         -	void* tmp;
  1057         -	if (size > 0)
  1058         -	{
  1059         -		if (gAddressBase == 0)
  1060         -		{
  1061         -			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
  1062         -			gNextAddress = gAddressBase = 
  1063         -				(unsigned int)VirtualAlloc (NULL, gAllocatedSize, 
  1064         -											MEM_RESERVE, PAGE_NOACCESS);
  1065         -		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
  1066         -gAllocatedSize))
  1067         -		{
  1068         -			long new_size = max (NEXT_SIZE, AlignPage (size));
  1069         -			void* new_address = (void*)(gAddressBase+gAllocatedSize);
  1070         -			do 
  1071         -			{
  1072         -				new_address = findRegion (new_address, new_size);
  1073         -				
  1074         -				if (new_address == 0)
  1075         -					return (void*)-1;
  1076         -
  1077         -				gAddressBase = gNextAddress =
  1078         -					(unsigned int)VirtualAlloc (new_address, new_size,
  1079         -												MEM_RESERVE, PAGE_NOACCESS);
  1080         -				// repeat in case of race condition
  1081         -				// The region that we found has been snagged 
  1082         -				// by another thread
  1083         -			}
  1084         -			while (gAddressBase == 0);
  1085         -
  1086         -			assert (new_address == (void*)gAddressBase);
  1087         -
  1088         -			gAllocatedSize = new_size;
  1089         -
  1090         -			if (!makeGmListElement ((void*)gAddressBase))
  1091         -				return (void*)-1;
  1092         -		}
  1093         -		if ((size + gNextAddress) > AlignPage (gNextAddress))
  1094         -		{
  1095         -			void* res;
  1096         -			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
  1097         -								(size + gNextAddress - 
  1098         -								 AlignPage (gNextAddress)), 
  1099         -								MEM_COMMIT, PAGE_READWRITE);
  1100         -			if (res == 0)
  1101         -				return (void*)-1;
  1102         -		}
  1103         -		tmp = (void*)gNextAddress;
  1104         -		gNextAddress = (unsigned int)tmp + size;
  1105         -		return tmp;
  1106         -	}
  1107         -	else if (size < 0)
  1108         -	{
  1109         -		unsigned int alignedGoal = AlignPage (gNextAddress + size);
  1110         -		/* Trim by releasing the virtual memory */
  1111         -		if (alignedGoal >= gAddressBase)
  1112         -		{
  1113         -			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,  
  1114         -						 MEM_DECOMMIT);
  1115         -			gNextAddress = gNextAddress + size;
  1116         -			return (void*)gNextAddress;
  1117         -		}
  1118         -		else 
  1119         -		{
  1120         -			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
  1121         -						 MEM_DECOMMIT);
  1122         -			gNextAddress = gAddressBase;
  1123         -			return (void*)-1;
  1124         -		}
  1125         -	}
  1126         -	else
  1127         -	{
  1128         -		return (void*)gNextAddress;
  1129         -	}
  1130         -}
  1131         -
  1132         -#endif
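
The wsbrk() routine above emulates the Unix sbrk() contract on top of VirtualAlloc: a positive size grows the break and returns the old one, a negative size trims, zero reports the current break, and (void*)-1 signals failure. The following is a minimal, hypothetical stand-in over a static arena (toy_sbrk and the arena size are inventions of this sketch, not part of dlmalloc) that illustrates the same interface:

    #include <stdio.h>

    static char arena[1 << 16];     /* assumed 64K arena replaces VirtualAlloc */
    static long brk_off = 0;

    static void *toy_sbrk(long size)
    {
        if (brk_off + size < 0 || brk_off + size > (long)sizeof arena)
            return (void *)-1;                  /* failure, as in wsbrk */
        void *old = arena + brk_off;
        brk_off += size;
        /* positive: return the old break; negative/zero: the new break */
        return size > 0 ? old : (void *)(arena + brk_off);
    }

    int main(void)
    {
        void *p = toy_sbrk(4096);               /* grow by one page */
        toy_sbrk(-1024);                        /* trim back 1K */
        printf("block at %p, break now %p\n", p, toy_sbrk(0));
        return 0;
    }
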
  1133         -
  1134         -
  1135         -
  1136         -/*
  1137         -  Type declarations
  1138         -*/
  1139         -
  1140         -
  1141         -struct malloc_chunk
  1142         -{
  1143         -  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  1144         -  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  1145         -  struct malloc_chunk* fd;   /* double links -- used only if free. */
  1146         -  struct malloc_chunk* bk;
  1147         -};
  1148         -
  1149         -typedef struct malloc_chunk* mchunkptr;
  1150         -
  1151         -/*
  1152         -
  1153         -   malloc_chunk details:
  1154         -
  1155         -    (The following includes lightly edited explanations by Colin Plumb.)
  1156         -
  1157         -    Chunks of memory are maintained using a `boundary tag' method as
  1158         -    described in e.g., Knuth or Standish.  (See the paper by Paul
  1159         -    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
  1160         -    survey of such techniques.)  Sizes of free chunks are stored both
  1161         -    in the front of each chunk and at the end.  This makes
  1162         -    consolidating fragmented chunks into bigger chunks very fast.  The
  1163         -    size fields also hold bits representing whether chunks are free or
  1164         -    in use.
  1165         -
  1166         -    An allocated chunk looks like this:  
  1167         -
  1168         -
  1169         -    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1170         -            |             Size of previous chunk, if allocated            | |
  1171         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1172         -            |             Size of chunk, in bytes                         |P|
  1173         -      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1174         -            |             User data starts here...                          .
  1175         -            .                                                               .
  1176         -            .             (malloc_usable_space() bytes)                     .
  1177         -            .                                                               |
  1178         -nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1179         -            |             Size of chunk                                     |
  1180         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1181         -
  1182         -
  1183         -    Where "chunk" is the front of the chunk for the purpose of most of
  1184         -    the malloc code, but "mem" is the pointer that is returned to the
  1185         -    user.  "Nextchunk" is the beginning of the next contiguous chunk.
  1186         -
  1187         -    Chunks always begin on even word boundaries, so the mem portion
  1188         -    (which is returned to the user) is also on an even word boundary, and
  1189         -    thus double-word aligned.
  1190         -
  1191         -    Free chunks are stored in circular doubly-linked lists, and look like this:
  1192         -
  1193         -    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1194         -            |             Size of previous chunk                            |
  1195         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1196         -    `head:' |             Size of chunk, in bytes                         |P|
  1197         -      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1198         -            |             Forward pointer to next chunk in list             |
  1199         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1200         -            |             Back pointer to previous chunk in list            |
  1201         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1202         -            |             Unused space (may be 0 bytes long)                .
  1203         -            .                                                               .
  1204         -            .                                                               |
  1205         -nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1206         -    `foot:' |             Size of chunk, in bytes                           |
  1207         -            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1208         -
  1209         -    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
  1210         -    chunk size (which is always a multiple of two words), is an in-use
  1211         -    bit for the *previous* chunk.  If that bit is *clear*, then the
  1212         -    word before the current chunk size contains the previous chunk
  1213         -    size, and can be used to find the front of the previous chunk.
  1214         -    (The very first chunk allocated always has this bit set,
  1215         -    preventing access to non-existent (or non-owned) memory.)
  1216         -
  1217         -    Note that the `foot' of the current chunk is actually represented
  1218         -    as the prev_size of the NEXT chunk. (This makes it easier to
  1219         -    deal with alignments etc).
  1220         -
  1221         -    The two exceptions to all this are 
  1222         -
  1223         -     1. The special chunk `top', which doesn't bother using the 
  1224         -        trailing size field since there is no
  1225         -        next contiguous chunk that would have to index off it. (After
  1226         -        initialization, `top' is forced to always exist.  If it would
  1227         -        become less than MINSIZE bytes long, it is replenished via
  1228         -        malloc_extend_top.)
  1229         -
  1230         -     2. Chunks allocated via mmap, which have the second-lowest-order
  1231         -        bit (IS_MMAPPED) set in their size fields.  Because they are
  1232         -        never merged or traversed from any other chunk, they have no
  1233         -        foot size or inuse information.
  1234         -
  1235         -    Available chunks are kept in any of several places (all declared below):
  1236         -
  1237         -    * `av': An array of chunks serving as bin headers for consolidated
  1238         -       chunks. Each bin is doubly linked.  The bins are approximately
  1239         -       proportionally (log) spaced.  There are a lot of these bins
  1240         -       (128). This may look excessive, but works very well in
  1241         -       practice.  All procedures maintain the invariant that no
  1242         -       consolidated chunk physically borders another one. Chunks in
  1243         -       bins are kept in size order, with ties going to the
  1244         -       approximately least recently used chunk.
  1245         -
  1246         -       The chunks in each bin are maintained in decreasing sorted order by
  1247         -       size.  This is irrelevant for the small bins, which all contain
  1248         -       the same-sized chunks, but facilitates best-fit allocation for
  1249         -       larger chunks. (These lists are just sequential. Keeping them in
  1250         -       order almost never requires enough traversal to warrant using
  1251         -       fancier ordered data structures.)  Chunks of the same size are
  1252         -       linked with the most recently freed at the front, and allocations
  1253         -       are taken from the back.  This results in LRU or FIFO allocation
  1254         -       order, which tends to give each chunk an equal opportunity to be
  1255         -       consolidated with adjacent freed chunks, resulting in larger free
  1256         -       chunks and less fragmentation. 
  1257         -
  1258         -    * `top': The top-most available chunk (i.e., the one bordering the
  1259         -       end of available memory) is treated specially. It is never
  1260         -       included in any bin, is used only if no other chunk is
  1261         -       available, and is released back to the system if it is very
  1262         -       large (see M_TRIM_THRESHOLD).
  1263         -
  1264         -    * `last_remainder': A bin holding only the remainder of the
  1265         -       most recently split (non-top) chunk. This bin is checked
  1266         -       before other non-fitting chunks, so as to provide better
  1267         -       locality for runs of sequentially allocated chunks. 
  1268         -
  1269         -    *  Implicitly, through the host system's memory mapping tables.
  1270         -       If supported, requests greater than a threshold are usually 
  1271         -       serviced via calls to mmap, and then later released via munmap.
  1272         -
  1273         -*/
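
A compact way to see the boundary-tag arithmetic described above in action: the low bit of the size field carries PREV_INUSE, and the next physical chunk is found by adding the masked size to the chunk address. This standalone sketch uses simplified stand-ins (struct chunk, PREV_INUSE_BIT, a toy heap array), not the real dlmalloc definitions:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    struct chunk { size_t prev_size, size; };    /* header; user data follows */

    #define PREV_INUSE_BIT 0x1
    #define CHUNK_SIZE(c)  ((c)->size & ~(size_t)PREV_INUSE_BIT)
    #define NEXT_CHUNK(c)  ((struct chunk *)((char *)(c) + CHUNK_SIZE(c)))
    #define MEM(c)         ((void *)((char *)(c) + 2 * sizeof(size_t)))

    int main(void)
    {
        static char heap[64];                    /* toy arena */
        struct chunk *a = (struct chunk *)heap;

        a->size = 32 | PREV_INUSE_BIT;           /* 32-byte chunk, prev in use */
        struct chunk *b = NEXT_CHUNK(a);         /* boundary-tag step */
        b->size = 32 | PREV_INUSE_BIT;           /* 'a' is in use: P bit set */

        assert((char *)b == heap + 32);
        printf("user pointer for a: %p, next chunk: %p\n", MEM(a), (void *)b);
        return 0;
    }
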
  1274         -
  1275         -
  1276         -
  1277         -
  1278         -
  1279         -
  1280         -/*  sizes, alignments */
  1281         -
  1282         -#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
  1283         -#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
  1284         -#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
  1285         -#define MINSIZE                (sizeof(struct malloc_chunk))
  1286         -
  1287         -/* conversion from malloc headers to user pointers, and back */
  1288         -
  1289         -#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
  1290         -#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
  1291         -
  1292         -/* pad request bytes into a usable size */
  1293         -
  1294         -#define request2size(req) \
  1295         - (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  1296         -  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
  1297         -   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
  1298         -
  1299         -/* Check if m has acceptable alignment */
  1300         -
  1301         -#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
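
To make request2size concrete, the sketch below evaluates the same padding rule with assumed parameters (SIZE_SZ of 4, hence 8-byte alignment and a 16-byte MINSIZE); for ordinary request sizes the simplified max() form used here agrees with the macro:

    #include <stdio.h>

    #define SZ    4UL            /* assumed SIZE_SZ */
    #define ALIGN (2 * SZ)       /* MALLOC_ALIGNMENT */
    #define MASK  (ALIGN - 1)    /* MALLOC_ALIGN_MASK */
    #define MINSZ 16UL           /* MINSIZE */

    static unsigned long req2size(unsigned long req)
    {
        unsigned long padded = (req + SZ + MASK) & ~MASK;
        return padded < MINSZ ? MINSZ : padded;
    }

    int main(void)
    {
        unsigned long reqs[] = { 0, 1, 12, 13, 100 };
        for (int i = 0; i < 5; i++)      /* e.g. 12 -> 16 but 13 -> 24 */
            printf("request %3lu -> chunk %3lu\n", reqs[i], req2size(reqs[i]));
        return 0;
    }
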
  1302         -
  1303         -
  1304         -
  1305         -
  1306         -/* 
  1307         -  Physical chunk operations  
  1308         -*/
  1309         -
  1310         -
  1311         -/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
  1312         -
  1313         -#define PREV_INUSE 0x1 
  1314         -
  1315         -/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
  1316         -
  1317         -#define IS_MMAPPED 0x2
  1318         -
  1319         -/* Bits to mask off when extracting size */
  1320         -
  1321         -#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
  1322         -
  1323         -
  1324         -/* Ptr to next physical malloc_chunk. */
  1325         -
  1326         -#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
  1327         -
  1328         -/* Ptr to previous physical malloc_chunk */
  1329         -
  1330         -#define prev_chunk(p)\
  1331         -   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
  1332         -
  1333         -
  1334         -/* Treat space at ptr + offset as a chunk */
  1335         -
  1336         -#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
  1337         -
  1338         -
  1339         -
  1340         -
  1341         -/* 
  1342         -  Dealing with use bits 
  1343         -*/
  1344         -
  1345         -/* extract p's inuse bit */
  1346         -
  1347         -#define inuse(p)\
  1348         -((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
  1349         -
  1350         -/* extract inuse bit of previous chunk */
  1351         -
  1352         -#define prev_inuse(p)  ((p)->size & PREV_INUSE)
  1353         -
  1354         -/* check for mmap()'ed chunk */
  1355         -
  1356         -#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
  1357         -
  1358         -/* set/clear chunk as in use without otherwise disturbing */
  1359         -
  1360         -#define set_inuse(p)\
  1361         -((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
  1362         -
  1363         -#define clear_inuse(p)\
  1364         -((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
  1365         -
  1366         -/* check/set/clear inuse bits in known places */
  1367         -
  1368         -#define inuse_bit_at_offset(p, s)\
  1369         - (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
  1370         -
  1371         -#define set_inuse_bit_at_offset(p, s)\
  1372         - (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
  1373         -
  1374         -#define clear_inuse_bit_at_offset(p, s)\
  1375         - (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
  1376         -
  1377         -
  1378         -
  1379         -
  1380         -/* 
  1381         -  Dealing with size fields 
  1382         -*/
  1383         -
  1384         -/* Get size, ignoring use bits */
  1385         -
  1386         -#define chunksize(p)          ((p)->size & ~(SIZE_BITS))
  1387         -
  1388         -/* Set size at head, without disturbing its use bit */
  1389         -
  1390         -#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))
  1391         -
  1392         -/* Set size/use ignoring previous bits in header */
  1393         -
  1394         -#define set_head(p, s)        ((p)->size = (s))
  1395         -
  1396         -/* Set size at footer (only when chunk is not in use) */
  1397         -
  1398         -#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
  1399         -
  1400         -
  1401         -
  1402         -
  1403         -
  1404         -/*
  1405         -   Bins
  1406         -
  1407         -    The bins, `av_' are an array of pairs of pointers serving as the
  1408         -    heads of (initially empty) doubly-linked lists of chunks, laid out
  1409         -    in a way so that each pair can be treated as if it were in a
  1410         -    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
  1411         -    and chunks are the same).
  1412         -
  1413         -    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
  1414         -    8 bytes apart. Larger bins are approximately logarithmically
  1415         -    spaced. (See the table below.) The `av_' array is never mentioned
  1416         -    directly in the code, but instead via bin access macros.
  1417         -
  1418         -    Bin layout:
  1419         -
  1420         -    64 bins of size       8
  1421         -    32 bins of size      64
  1422         -    16 bins of size     512
  1423         -     8 bins of size    4096
  1424         -     4 bins of size   32768
  1425         -     2 bins of size  262144
  1426         -     1 bin  of size what's left
  1427         -
  1428         -    There is actually a little bit of slop in the numbers in bin_index
  1429         -    for the sake of speed. This makes no difference elsewhere.
  1430         -
  1431         -    The special chunks `top' and `last_remainder' get their own bins,
  1432         -    (this is implemented via yet more trickery with the av_ array),
  1433         -    although `top' is never properly linked to its bin since it is
  1434         -    always handled specially.
  1435         -
  1436         -*/
  1437         -
  1438         -#define NAV             128   /* number of bins */
  1439         -
  1440         -typedef struct malloc_chunk* mbinptr;
  1441         -
  1442         -/* access macros */
  1443         -
  1444         -#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
  1445         -#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
  1446         -#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
  1447         -
  1448         -/*
  1449         -   The first 2 bins are never indexed. The corresponding av_ cells are instead
  1450         -   used for bookkeeping. This is not to save space, but to simplify
  1451         -   indexing, maintain locality, and avoid some initialization tests.
  1452         -*/
  1453         -
  1454         -#define top            (bin_at(0)->fd)   /* The topmost chunk */
  1455         -#define last_remainder (bin_at(1))       /* remainder from last split */
  1456         -
  1457         -
  1458         -/*
  1459         -   Because top initially points to its own bin with initial
  1460         -   zero size, thus forcing extension on the first malloc request, 
  1461         -   we avoid having any special code in malloc to check whether 
  1462         -   it even exists yet. But we still need to check in malloc_extend_top.
  1463         -*/
  1464         -
  1465         -#define initial_top    ((mchunkptr)(bin_at(0)))
  1466         -
  1467         -/* Helper macro to initialize bins */
  1468         -
  1469         -#define IAV(i)  bin_at(i), bin_at(i)
  1470         -
  1471         -static mbinptr av_[NAV * 2 + 2] = {
  1472         - 0, 0,
  1473         - IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
  1474         - IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
  1475         - IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
  1476         - IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
  1477         - IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
  1478         - IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
  1479         - IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
  1480         - IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
  1481         - IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
  1482         - IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
  1483         - IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
  1484         - IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
  1485         - IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
  1486         - IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
  1487         - IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
  1488         - IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
  1489         -};
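
The bin_at trick works because fd sits exactly 2*SIZE_SZ bytes into struct malloc_chunk, so backing a pointer-pair in av_ up by that amount yields something that can be treated as a chunk whose fd/bk are the pair. A standalone sanity check, assuming a typical flat-memory target where pointers and INTERNAL_SIZE_T have the same size:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef unsigned long isize_t;           /* stand-in for INTERNAL_SIZE_T */
    struct malloc_chunk {
        isize_t prev_size, size;
        struct malloc_chunk *fd, *bk;
    };

    int main(void)
    {
        /* fd must sit 2*SIZE_SZ bytes into the struct for bin_at to work */
        assert(offsetof(struct malloc_chunk, fd) == 2 * sizeof(isize_t));
        puts("av_[2*i + 2] lands on ->fd after bin_at backs up 2*SIZE_SZ bytes");
        return 0;
    }
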
  1490         -
  1491         -
  1492         -
  1493         -/* field-extraction macros */
  1494         -
  1495         -#define first(b) ((b)->fd)
  1496         -#define last(b)  ((b)->bk)
  1497         -
  1498         -/* 
  1499         -  Indexing into bins
  1500         -*/
  1501         -
  1502         -#define bin_index(sz)                                                          \
  1503         -(((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
  1504         - ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
  1505         - ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
  1506         - ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
  1507         - ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
  1508         - ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
  1509         -                                          126)                     
  1510         -/* 
  1511         -  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  1512         -  identically sized chunks. This is exploited in malloc.
  1513         -*/
  1514         -
  1515         -#define MAX_SMALLBIN         63
  1516         -#define MAX_SMALLBIN_SIZE   512
  1517         -#define SMALLBIN_WIDTH        8
  1518         -
  1519         -#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
  1520         -
  1521         -/* 
  1522         -   Requests are `small' if both the corresponding and the next bin are small
  1523         -*/
  1524         -
  1525         -#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
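
Here is the bin_index cascade evaluated for a few representative sizes, transcribed as a standalone function rather than the macro (the shift amounts and offsets are copied from the macro above):

    #include <stdio.h>

    static int bin_index(unsigned long sz)   /* transcription of the macro */
    {
        return (sz >> 9) ==    0 ?       (int)(sz >>  3) :
               (sz >> 9) <=    4 ?  56 + (int)(sz >>  6) :
               (sz >> 9) <=   20 ?  91 + (int)(sz >>  9) :
               (sz >> 9) <=   84 ? 110 + (int)(sz >> 12) :
               (sz >> 9) <=  340 ? 119 + (int)(sz >> 15) :
               (sz >> 9) <= 1364 ? 124 + (int)(sz >> 18) : 126;
    }

    int main(void)
    {
        unsigned long sizes[] = { 16, 504, 512, 1024, 4096, 1UL << 20 };
        for (int i = 0; i < 6; i++)   /* e.g. 16 -> bin 2, 512 -> bin 64 */
            printf("chunk size %7lu -> bin %3d\n", sizes[i], bin_index(sizes[i]));
        return 0;
    }
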
  1526         -
  1527         -
  1528         -
  1529         -/*
  1530         -    To help compensate for the large number of bins, a one-level index
  1531         -    structure is used for bin-by-bin searching.  `binblocks' is a
  1532         -    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
  1533         -    have any (possibly) non-empty bins, so they can be skipped over
  1534         -    all at once during during traversals. The bits are NOT always
  1535         -    all at once during traversals. The bits are NOT always
  1536         -    when all are noticed to be empty during traversal in malloc.
  1537         -*/
  1538         -
  1539         -#define BINBLOCKWIDTH     4   /* bins per block */
  1540         -
  1541         -#define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */
  1542         -
  1543         -/* bin<->block macros */
  1544         -
  1545         -#define idx2binblock(ix)    ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
  1546         -#define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
  1547         -#define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
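
The following sketch walks the binblock index the same way malloc does: mark a couple of bins, then stride forward BINBLOCKWIDTH bins at a time until a possibly nonempty block is found. A plain unsigned stands in for bin_at(0)->size:

    #include <stdio.h>

    #define BINBLOCKWIDTH 4
    #define IDX2BLOCK(ix) (1u << ((ix) / BINBLOCKWIDTH))

    int main(void)
    {
        unsigned binblocks = 0;            /* stands in for bin_at(0)->size */

        binblocks |= IDX2BLOCK(5);         /* bin 5 marks block 1 (bins 4..7) */
        binblocks |= IDX2BLOCK(42);        /* bin 42 marks block 10 (40..43) */

        /* scan: find the first possibly nonempty block at or above bin 8 */
        unsigned block = IDX2BLOCK(8);
        int idx = 8;
        while ((block & binblocks) == 0) {
            idx += BINBLOCKWIDTH;
            block <<= 1;
        }
        printf("first candidate block starts at bin %d\n", idx);  /* prints 40 */
        return 0;
    }
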
  1548         -
  1549         -
  1550         -
  1551         -
  1552         -
  1553         -/*  Other static bookkeeping data */
  1554         -
  1555         -/* variables holding tunable values */
  1556         -
  1557         -static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
  1558         -static unsigned long top_pad          = DEFAULT_TOP_PAD;
  1559         -static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
  1560         -static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
  1561         -
  1562         -/* The first value returned from sbrk */
  1563         -static char* sbrk_base = (char*)(-1);
  1564         -
  1565         -/* The maximum memory obtained from system via sbrk */
  1566         -static unsigned long max_sbrked_mem = 0; 
  1567         -
  1568         -/* The maximum via either sbrk or mmap */
  1569         -static unsigned long max_total_mem = 0; 
  1570         -
  1571         -/* internal working copy of mallinfo */
  1572         -static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  1573         -
  1574         -/* The total memory obtained from system via sbrk */
  1575         -#define sbrked_mem  (current_mallinfo.arena)
  1576         -
  1577         -/* Tracking mmaps */
  1578         -
  1579         -static unsigned int n_mmaps = 0;
  1580         -static unsigned int max_n_mmaps = 0;
  1581         -static unsigned long mmapped_mem = 0;
  1582         -static unsigned long max_mmapped_mem = 0;
  1583         -
  1584         -
  1585         -
  1586         -/* 
  1587         -  Debugging support 
  1588         -*/
  1589         -
  1590         -#if DEBUG
  1591         -
  1592         -
  1593         -/*
  1594         -  These routines make a number of assertions about the states
  1595         -  of data structures that should be true at all times. If any
  1596         -  are not true, it's very likely that a user program has somehow
  1597         -  trashed memory. (It's also possible that there is a coding error
  1598         -  in malloc. In which case, please report it!)
  1599         -*/
  1600         -
  1601         -#if __STD_C
  1602         -static void do_check_chunk(mchunkptr p) 
  1603         -#else
  1604         -static void do_check_chunk(p) mchunkptr p;
  1605         -#endif
  1606         -{ 
  1607         -  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  1608         -
  1609         -  /* No checkable chunk is mmapped */
  1610         -  assert(!chunk_is_mmapped(p));
  1611         -
  1612         -  /* Check for legal address ... */
  1613         -  assert((char*)p >= sbrk_base);
  1614         -  if (p != top) 
  1615         -    assert((char*)p + sz <= (char*)top);
  1616         -  else
  1617         -    assert((char*)p + sz <= sbrk_base + sbrked_mem);
  1618         -
  1619         -}
  1620         -
  1621         -
  1622         -#if __STD_C
  1623         -static void do_check_free_chunk(mchunkptr p) 
  1624         -#else
  1625         -static void do_check_free_chunk(p) mchunkptr p;
  1626         -#endif
  1627         -{ 
  1628         -  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  1629         -  mchunkptr next = chunk_at_offset(p, sz);
  1630         -
  1631         -  do_check_chunk(p);
  1632         -
  1633         -  /* Check whether it claims to be free ... */
  1634         -  assert(!inuse(p));
  1635         -
  1636         -  /* Unless a special marker, must have OK fields */
  1637         -  if ((long)sz >= (long)MINSIZE)
  1638         -  {
  1639         -    assert((sz & MALLOC_ALIGN_MASK) == 0);
  1640         -    assert(aligned_OK(chunk2mem(p)));
  1641         -    /* ... matching footer field */
  1642         -    assert(next->prev_size == sz);
  1643         -    /* ... and is fully consolidated */
  1644         -    assert(prev_inuse(p));
  1645         -    assert (next == top || inuse(next));
  1646         -    
  1647         -    /* ... and has minimally sane links */
  1648         -    assert(p->fd->bk == p);
  1649         -    assert(p->bk->fd == p);
  1650         -  }
  1651         -  else /* markers are always of size SIZE_SZ */
  1652         -    assert(sz == SIZE_SZ); 
  1653         -}
  1654         -
  1655         -#if __STD_C
  1656         -static void do_check_inuse_chunk(mchunkptr p) 
  1657         -#else
  1658         -static void do_check_inuse_chunk(p) mchunkptr p;
  1659         -#endif
  1660         -{ 
  1661         -  mchunkptr next = next_chunk(p);
  1662         -  do_check_chunk(p);
  1663         -
  1664         -  /* Check whether it claims to be in use ... */
  1665         -  assert(inuse(p));
  1666         -
  1667         -  /* ... and is surrounded by OK chunks.
  1668         -    Since more things can be checked with free chunks than inuse ones,
  1669         -    if an inuse chunk borders them and debug is on, it's worth doing them.
  1670         -  */
  1671         -  if (!prev_inuse(p)) 
  1672         -  {
  1673         -    mchunkptr prv = prev_chunk(p);
  1674         -    assert(next_chunk(prv) == p);
  1675         -    do_check_free_chunk(prv);
  1676         -  }
  1677         -  if (next == top)
  1678         -  {
  1679         -    assert(prev_inuse(next));
  1680         -    assert(chunksize(next) >= MINSIZE);
  1681         -  }
  1682         -  else if (!inuse(next))
  1683         -    do_check_free_chunk(next);
  1684         -
  1685         -}
  1686         -
  1687         -#if __STD_C
  1688         -static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 
  1689         -#else
  1690         -static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
  1691         -#endif
  1692         -{
  1693         -  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  1694         -  long room = sz - s;
  1695         -
  1696         -  do_check_inuse_chunk(p);
  1697         -
  1698         -  /* Legal size ... */
  1699         -  assert((long)sz >= (long)MINSIZE);
  1700         -  assert((sz & MALLOC_ALIGN_MASK) == 0);
  1701         -  assert(room >= 0);
  1702         -  assert(room < (long)MINSIZE);
  1703         -
  1704         -  /* ... and alignment */
  1705         -  assert(aligned_OK(chunk2mem(p)));
  1706         -
  1707         -
  1708         -  /* ... and was allocated at front of an available chunk */
  1709         -  assert(prev_inuse(p));
  1710         -
  1711         -}
  1712         -
  1713         -
  1714         -#define check_free_chunk(P)  do_check_free_chunk(P)
  1715         -#define check_inuse_chunk(P) do_check_inuse_chunk(P)
  1716         -#define check_chunk(P) do_check_chunk(P)
  1717         -#define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
  1718         -#else
  1719         -#define check_free_chunk(P) 
  1720         -#define check_inuse_chunk(P)
  1721         -#define check_chunk(P)
  1722         -#define check_malloced_chunk(P,N)
  1723         -#endif
  1724         -
  1725         -
  1726         -
  1727         -/* 
  1728         -  Macro-based internal utilities
  1729         -*/
  1730         -
  1731         -
  1732         -/*  
  1733         -  Linking chunks in bin lists.
  1734         -  Call these only with variables, not arbitrary expressions, as arguments.
  1735         -*/
  1736         -
  1737         -/* 
  1738         -  Place chunk p of size s in its bin, in size order,
  1739         -  putting it ahead of others of same size.
  1740         -*/
  1741         -
  1742         -
  1743         -#define frontlink(P, S, IDX, BK, FD)                                          \
  1744         -{                                                                             \
  1745         -  if (S < MAX_SMALLBIN_SIZE)                                                  \
  1746         -  {                                                                           \
  1747         -    IDX = smallbin_index(S);                                                  \
  1748         -    mark_binblock(IDX);                                                       \
  1749         -    BK = bin_at(IDX);                                                         \
  1750         -    FD = BK->fd;                                                              \
  1751         -    P->bk = BK;                                                               \
  1752         -    P->fd = FD;                                                               \
  1753         -    FD->bk = BK->fd = P;                                                      \
  1754         -  }                                                                           \
  1755         -  else                                                                        \
  1756         -  {                                                                           \
  1757         -    IDX = bin_index(S);                                                       \
  1758         -    BK = bin_at(IDX);                                                         \
  1759         -    FD = BK->fd;                                                              \
  1760         -    if (FD == BK) mark_binblock(IDX);                                         \
  1761         -    else                                                                      \
  1762         -    {                                                                         \
  1763         -      while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
  1764         -      BK = FD->bk;                                                            \
  1765         -    }                                                                         \
  1766         -    P->bk = BK;                                                               \
  1767         -    P->fd = FD;                                                               \
  1768         -    FD->bk = BK->fd = P;                                                      \
  1769         -  }                                                                           \
  1770         -}
  1771         -
  1772         -
  1773         -/* take a chunk off a list */
  1774         -
  1775         -#define unlink(P, BK, FD)                                                     \
  1776         -{                                                                             \
  1777         -  BK = P->bk;                                                                 \
  1778         -  FD = P->fd;                                                                 \
  1779         -  FD->bk = BK;                                                                \
  1780         -  BK->fd = FD;                                                                \
  1781         -}
  1782         -
  1783         -/* Place p as the last remainder */
  1784         -
  1785         -#define link_last_remainder(P)                                                \
  1786         -{                                                                             \
  1787         -  last_remainder->fd = last_remainder->bk =  P;                               \
  1788         -  P->fd = P->bk = last_remainder;                                             \
  1789         -}
  1790         -
  1791         -/* Clear the last_remainder bin */
  1792         -
  1793         -#define clear_last_remainder \
  1794         -  (last_remainder->fd = last_remainder->bk = last_remainder)
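
The bin discipline that frontlink/unlink implement is just a circular doubly-linked list with insertion at the front and allocation from the back, which is what produces the FIFO behavior described earlier. A standalone sketch with hypothetical front_insert/take_from_back helpers (not the macros themselves):

    #include <assert.h>
    #include <stdio.h>

    struct node { struct node *fd, *bk; int id; };

    static void init_bin(struct node *bin) { bin->fd = bin->bk = bin; }

    static void front_insert(struct node *bin, struct node *p)
    {
        p->bk = bin; p->fd = bin->fd;
        bin->fd->bk = p; bin->fd = p;
    }

    static struct node *take_from_back(struct node *bin)
    {
        struct node *p = bin->bk;
        if (p == bin) return NULL;            /* bin empty */
        p->bk->fd = bin; bin->bk = p->bk;     /* unlink */
        return p;
    }

    int main(void)
    {
        struct node bin, a = { .id = 1 }, b = { .id = 2 };
        init_bin(&bin);
        front_insert(&bin, &a);
        front_insert(&bin, &b);               /* front-to-back order: b, a */
        struct node *got = take_from_back(&bin);
        assert(got && got->id == 1);          /* oldest chunk comes out first */
        puts("FIFO order confirmed");
        return 0;
    }
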
  1795         -
  1796         -
  1797         -
  1798         -
  1799         -
  1800         -
  1801         -/* Routines dealing with mmap(). */
  1802         -
  1803         -#if HAVE_MMAP
  1804         -
  1805         -#if __STD_C
  1806         -static mchunkptr mmap_chunk(size_t size)
  1807         -#else
  1808         -static mchunkptr mmap_chunk(size) size_t size;
  1809         -#endif
  1810         -{
  1811         -  size_t page_mask = malloc_getpagesize - 1;
  1812         -  mchunkptr p;
  1813         -
  1814         -#ifndef MAP_ANONYMOUS
  1815         -  static int fd = -1;
  1816         -#endif
  1817         -
  1818         -  if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
  1819         -
  1820         -  /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
  1821         -   * there is no following chunk whose prev_size field could be used.
  1822         -   */
  1823         -  size = (size + SIZE_SZ + page_mask) & ~page_mask;
  1824         -
  1825         -#ifdef MAP_ANONYMOUS
  1826         -  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
  1827         -		      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
  1828         -#else /* !MAP_ANONYMOUS */
  1829         -  if (fd < 0) 
  1830         -  {
  1831         -    fd = open("/dev/zero", O_RDWR);
  1832         -    if(fd < 0) return 0;
  1833         -  }
  1834         -  p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
  1835         -#endif
  1836         -
  1837         -  if(p == (mchunkptr)-1) return 0;
  1838         -
  1839         -  n_mmaps++;
  1840         -  if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
  1841         -  
  1842         -  /* We demand that the address eight bytes into a page be 8-byte aligned. */
  1843         -  assert(aligned_OK(chunk2mem(p)));
  1844         -
  1845         -  /* The offset to the start of the mmapped region is stored
  1846         -   * in the prev_size field of the chunk; normally it is zero,
  1847         -   * but that can be changed in memalign().
  1848         -   */
  1849         -  p->prev_size = 0;
  1850         -  set_head(p, size|IS_MMAPPED);
  1851         -  
  1852         -  mmapped_mem += size;
  1853         -  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
  1854         -    max_mmapped_mem = mmapped_mem;
  1855         -  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
  1856         -    max_total_mem = mmapped_mem + sbrked_mem;
  1857         -  return p;
  1858         -}
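
The size computation at the top of mmap_chunk rounds the request plus one extra SIZE_SZ up to a whole number of pages. With an assumed 4096-byte page and 4-byte SIZE_SZ, the effect looks like this:

    #include <stdio.h>

    int main(void)
    {
        unsigned long page_mask = 4096 - 1;   /* assumed page size */
        unsigned long size_sz   = 4;          /* assumed SIZE_SZ */
        unsigned long reqs[] = { 1, 4092, 4093, 8192 };

        for (int i = 0; i < 4; i++) {
            unsigned long sz = (reqs[i] + size_sz + page_mask) & ~page_mask;
            printf("chunk of %5lu bytes -> mmap %5lu bytes\n", reqs[i], sz);
        }
        return 0;   /* 1 -> 4096, 4092 -> 4096, 4093 -> 8192, 8192 -> 12288 */
    }
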
  1859         -
  1860         -#if __STD_C
  1861         -static void munmap_chunk(mchunkptr p)
  1862         -#else
  1863         -static void munmap_chunk(p) mchunkptr p;
  1864         -#endif
  1865         -{
  1866         -  INTERNAL_SIZE_T size = chunksize(p);
  1867         -  int ret;
  1868         -
  1869         -  assert (chunk_is_mmapped(p));
  1870         -  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  1871         -  assert((n_mmaps > 0));
  1872         -  assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
  1873         -
  1874         -  n_mmaps--;
  1875         -  mmapped_mem -= (size + p->prev_size);
  1876         -
  1877         -  ret = munmap((char *)p - p->prev_size, size + p->prev_size);
  1878         -
  1879         -  /* munmap returns non-zero on failure */
  1880         -  assert(ret == 0);
  1881         -}
  1882         -
  1883         -#if HAVE_MREMAP
  1884         -
  1885         -#if __STD_C
  1886         -static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
  1887         -#else
  1888         -static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
  1889         -#endif
  1890         -{
  1891         -  size_t page_mask = malloc_getpagesize - 1;
  1892         -  INTERNAL_SIZE_T offset = p->prev_size;
  1893         -  INTERNAL_SIZE_T size = chunksize(p);
  1894         -  char *cp;
  1895         -
  1896         -  assert (chunk_is_mmapped(p));
  1897         -  assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
  1898         -  assert((n_mmaps > 0));
  1899         -  assert(((size + offset) & (malloc_getpagesize-1)) == 0);
  1900         -
  1901         -  /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
  1902         -  new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
  1903         -
  1904         -  cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
  1905         -
  1906         -  if (cp == (char *)-1) return 0;
  1907         -
  1908         -  p = (mchunkptr)(cp + offset);
  1909         -
  1910         -  assert(aligned_OK(chunk2mem(p)));
  1911         -
  1912         -  assert((p->prev_size == offset));
  1913         -  set_head(p, (new_size - offset)|IS_MMAPPED);
  1914         -
  1915         -  mmapped_mem -= size + offset;
  1916         -  mmapped_mem += new_size;
  1917         -  if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 
  1918         -    max_mmapped_mem = mmapped_mem;
  1919         -  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
  1920         -    max_total_mem = mmapped_mem + sbrked_mem;
  1921         -  return p;
  1922         -}
  1923         -
  1924         -#endif /* HAVE_MREMAP */
  1925         -
  1926         -#endif /* HAVE_MMAP */
  1927         -
  1928         -
  1929         -
  1930         -
  1931         -/* 
  1932         -  Extend the top-most chunk by obtaining memory from system.
  1933         -  Main interface to sbrk (but see also malloc_trim).
  1934         -*/
  1935         -
  1936         -#if __STD_C
  1937         -static void malloc_extend_top(INTERNAL_SIZE_T nb)
  1938         -#else
  1939         -static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
  1940         -#endif
  1941         -{
  1942         -  char*     brk;                  /* return value from sbrk */
  1943         -  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
  1944         -  INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
  1945         -  char*     new_brk;              /* return of 2nd sbrk call */
  1946         -  INTERNAL_SIZE_T top_size;       /* new size of top chunk */
  1947         -
  1948         -  mchunkptr old_top     = top;  /* Record state of old top */
  1949         -  INTERNAL_SIZE_T old_top_size = chunksize(old_top);
  1950         -  char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
  1951         -
  1952         -  /* Pad request with top_pad plus minimal overhead */
  1953         -  
  1954         -  INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
  1955         -  unsigned long pagesz    = malloc_getpagesize;
  1956         -
  1957         -  /* If not the first time through, round to preserve page boundary */
  1958         -  /* Otherwise, we need to correct to a page size below anyway. */
  1959         -  /* (We also correct below if there was an intervening foreign sbrk call.) */
  1960         -
  1961         -  if (sbrk_base != (char*)(-1))
  1962         -    sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
  1963         -
  1964         -  brk = (char*)(MORECORE (sbrk_size));
  1965         -
  1966         -  /* Fail if sbrk failed or if a foreign sbrk call killed our space */
  1967         -  if (brk == (char*)(MORECORE_FAILURE) || 
  1968         -      (brk < old_end && old_top != initial_top))
  1969         -    return;     
  1970         -
  1971         -  sbrked_mem += sbrk_size;
  1972         -
  1973         -  if (brk == old_end) /* can just add bytes to current top */
  1974         -  {
  1975         -    top_size = sbrk_size + old_top_size;
  1976         -    set_head(top, top_size | PREV_INUSE);
  1977         -  }
  1978         -  else
  1979         -  {
  1980         -    if (sbrk_base == (char*)(-1))  /* First time through. Record base */
  1981         -      sbrk_base = brk;
  1982         -    else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
  1983         -      sbrked_mem += brk - (char*)old_end;
  1984         -
  1985         -    /* Guarantee alignment of first new chunk made from this space */
  1986         -    front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
  1987         -    if (front_misalign > 0) 
  1988         -    {
  1989         -      correction = (MALLOC_ALIGNMENT) - front_misalign;
  1990         -      brk += correction;
  1991         -    }
  1992         -    else
  1993         -      correction = 0;
  1994         -
  1995         -    /* Guarantee the next brk will be at a page boundary */
  1996         -
  1997         -    correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
  1998         -                   ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
  1999         -
  2000         -    /* Allocate correction */
  2001         -    new_brk = (char*)(MORECORE (correction));
  2002         -    if (new_brk == (char*)(MORECORE_FAILURE)) return; 
  2003         -
  2004         -    sbrked_mem += correction;
  2005         -
  2006         -    top = (mchunkptr)brk;
  2007         -    top_size = new_brk - brk + correction;
  2008         -    set_head(top, top_size | PREV_INUSE);
  2009         -
  2010         -    if (old_top != initial_top)
  2011         -    {
  2012         -
  2013         -      /* There must have been an intervening foreign sbrk call. */
  2014         -      /* A double fencepost is necessary to prevent consolidation */
  2015         -
  2016         -      /* If not enough space to do this, then user did something very wrong */
  2017         -      if (old_top_size < MINSIZE) 
  2018         -      {
  2019         -        set_head(top, PREV_INUSE); /* will force null return from malloc */
  2020         -        return;
  2021         -      }
  2022         -
  2023         -      /* Also keep size a multiple of MALLOC_ALIGNMENT */
  2024         -      old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
  2025         -      set_head_size(old_top, old_top_size);
  2026         -      chunk_at_offset(old_top, old_top_size          )->size =
  2027         -        SIZE_SZ|PREV_INUSE;
  2028         -      chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
  2029         -        SIZE_SZ|PREV_INUSE;
  2030         -      /* If possible, release the rest. */
  2031         -      if (old_top_size >= MINSIZE) 
  2032         -        fREe(chunk2mem(old_top));
  2033         -    }
  2034         -  }
  2035         -
  2036         -  if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 
  2037         -    max_sbrked_mem = sbrked_mem;
  2038         -  if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 
  2039         -    max_total_mem = mmapped_mem + sbrked_mem;
  2040         -
  2041         -  /* We always land on a page boundary */
  2042         -  assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
  2043         -}
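
The two-part correction in malloc_extend_top can be followed with concrete numbers: first brk is bumped so that chunk2mem(brk) is aligned, then enough is added so the *next* break lands on a page boundary. A sketch with assumed values (8-byte alignment, 4096-byte pages, 4-byte SIZE_SZ; the starting break is hypothetical):

    #include <stdio.h>

    int main(void)
    {
        unsigned long pagesz = 4096;
        unsigned long brk = 0x20003;        /* hypothetical misaligned break */
        unsigned long sbrk_size = 8192;

        /* front_misalign: chunk2mem(brk) is brk + 2*SIZE_SZ = brk + 8 */
        unsigned long front = (brk + 8) & 7;
        unsigned long correction = front ? 8 - front : 0;
        brk += correction;                  /* now 0x20008, 8-byte aligned */

        /* pad so the next break falls on a page boundary */
        correction += ((brk + sbrk_size + pagesz - 1) & ~(pagesz - 1))
                      - (brk + sbrk_size);
        printf("aligned brk = %#lx, second sbrk = %lu bytes\n", brk, correction);
        return 0;
    }
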
  2044         -
  2045         -
  2046         -
  2047         -
  2048         -/* Main public routines */
  2049         -
  2050         -
  2051         -/*
  2052         -  Malloc Algorithm:
  2053         -
  2054         -    The requested size is first converted into a usable form, `nb'.
  2055         -    This currently means to add 4 bytes overhead plus possibly more to
  2056         -    obtain 8-byte alignment and/or to obtain a size of at least
  2057         -    MINSIZE (currently 16 bytes), the smallest allocatable size.
  2058         -    (All fits are considered `exact' if they are within MINSIZE bytes.)
  2059         -
  2060         -    From there, the first of the following steps that succeeds is taken:
  2061         -
  2062         -      1. The bin corresponding to the request size is scanned, and if
  2063         -         a chunk of exactly the right size is found, it is taken.
  2064         -
  2065         -      2. The most recently remaindered chunk is used if it is big
  2066         -         enough.  This is a form of (roving) first fit, used only in
  2067         -         the absence of exact fits. Runs of consecutive requests use
  2068         -         the remainder of the chunk used for the previous such request
  2069         -         whenever possible. This limited use of a first-fit style
  2070         -         allocation strategy tends to give contiguous chunks
  2071         -         coextensive lifetimes, which improves locality and can reduce
  2072         -         fragmentation in the long run.
  2073         -
  2074         -      3. Other bins are scanned in increasing size order, using a
  2075         -         chunk big enough to fulfill the request, and splitting off
  2076         -         any remainder.  This search is strictly by best-fit; i.e.,
  2077         -         the smallest (with ties going to approximately the least
  2078         -         recently used) chunk that fits is selected.
  2079         -
  2080         -      4. If large enough, the chunk bordering the end of memory
  2081         -         (`top') is split off. (This use of `top' is in accord with
  2082         -         the best-fit search rule.  In effect, `top' is treated as
  2083         -         larger (and thus less well fitting) than any other available
  2084         -         chunk since it can be extended to be as large as necessary
  2085         -         (up to system limitations).)
  2086         -
  2087         -      5. If the request size meets the mmap threshold and the
  2088         -         system supports mmap, and there are few enough currently
  2089         -         allocated mmapped regions, and a call to mmap succeeds,
  2090         -         the request is allocated via direct memory mapping.
  2091         -
  2092         -      6. Otherwise, the top of memory is extended by
  2093         -         obtaining more space from the system (normally using sbrk,
  2094         -         but definable to anything else via the MORECORE macro).
  2095         -         Memory is gathered from the system (in system page-sized
  2096         -         units) in a way that allows chunks obtained across different
  2097         -         sbrk calls to be consolidated, but does not require
  2098         -         contiguous memory. Thus, it should be safe to intersperse
  2099         -         mallocs with other sbrk calls.
  2100         -
  2101         -
  2102         -      All allocations are made from the `lowest' part of any found
  2103         -      chunk. (The implementation invariant is that prev_inuse is
  2104         -      always true of any allocated chunk; i.e., that each allocated
  2105         -      chunk borders either a previously allocated and still in-use chunk,
  2106         -      or the base of its memory arena.)
  2107         -
  2108         -*/
  2109         -
  2110         -#if __STD_C
  2111         -Void_t* mALLOc(size_t bytes)
  2112         -#else
  2113         -Void_t* mALLOc(bytes) size_t bytes;
  2114         -#endif
  2115         -{
  2116         -  mchunkptr victim;                  /* inspected/selected chunk */
  2117         -  INTERNAL_SIZE_T victim_size;       /* its size */
  2118         -  int       idx;                     /* index for bin traversal */
  2119         -  mbinptr   bin;                     /* associated bin */
  2120         -  mchunkptr remainder;               /* remainder from a split */
  2121         -  long      remainder_size;          /* its size */
  2122         -  int       remainder_index;         /* its bin index */
  2123         -  unsigned long block;               /* block traverser bit */
  2124         -  int       startidx;                /* first bin of a traversed block */
  2125         -  mchunkptr fwd;                     /* misc temp for linking */
  2126         -  mchunkptr bck;                     /* misc temp for linking */
  2127         -  mbinptr q;                         /* misc temp */
  2128         -
  2129         -  INTERNAL_SIZE_T nb;
  2130         -
  2131         -  if ((long)bytes < 0) return 0;
  2132         -
  2133         -  nb = request2size(bytes);  /* padded request size; */
  2134         -
  2135         -  /* Check for exact match in a bin */
  2136         -
  2137         -  if (is_small_request(nb))  /* Faster version for small requests */
  2138         -  {
  2139         -    idx = smallbin_index(nb); 
  2140         -
  2141         -    /* No traversal or size check necessary for small bins.  */
  2142         -
  2143         -    q = bin_at(idx);
  2144         -    victim = last(q);
  2145         -
  2146         -    /* Also scan the next one, since it would have a remainder < MINSIZE */
  2147         -    if (victim == q)
  2148         -    {
  2149         -      q = next_bin(q);
  2150         -      victim = last(q);
  2151         -    }
  2152         -    if (victim != q)
  2153         -    {
  2154         -      victim_size = chunksize(victim);
  2155         -      unlink(victim, bck, fwd);
  2156         -      set_inuse_bit_at_offset(victim, victim_size);
  2157         -      check_malloced_chunk(victim, nb);
  2158         -      return chunk2mem(victim);
  2159         -    }
  2160         -
  2161         -    idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
  2162         -
  2163         -  }
  2164         -  else
  2165         -  {
  2166         -    idx = bin_index(nb);
  2167         -    bin = bin_at(idx);
  2168         -
  2169         -    for (victim = last(bin); victim != bin; victim = victim->bk)
  2170         -    {
  2171         -      victim_size = chunksize(victim);
  2172         -      remainder_size = victim_size - nb;
  2173         -      
  2174         -      if (remainder_size >= (long)MINSIZE) /* too big */
  2175         -      {
  2176         -        --idx; /* adjust to rescan below after checking last remainder */
  2177         -        break;   
  2178         -      }
  2179         -
  2180         -      else if (remainder_size >= 0) /* exact fit */
  2181         -      {
  2182         -        unlink(victim, bck, fwd);
  2183         -        set_inuse_bit_at_offset(victim, victim_size);
  2184         -        check_malloced_chunk(victim, nb);
  2185         -        return chunk2mem(victim);
  2186         -      }
  2187         -    }
  2188         -
  2189         -    ++idx; 
  2190         -
  2191         -  }
  2192         -
  2193         -  /* Try to use the last split-off remainder */
  2194         -
  2195         -  if ( (victim = last_remainder->fd) != last_remainder)
  2196         -  {
  2197         -    victim_size = chunksize(victim);
  2198         -    remainder_size = victim_size - nb;
  2199         -
  2200         -    if (remainder_size >= (long)MINSIZE) /* re-split */
  2201         -    {
  2202         -      remainder = chunk_at_offset(victim, nb);
  2203         -      set_head(victim, nb | PREV_INUSE);
  2204         -      link_last_remainder(remainder);
  2205         -      set_head(remainder, remainder_size | PREV_INUSE);
  2206         -      set_foot(remainder, remainder_size);
  2207         -      check_malloced_chunk(victim, nb);
  2208         -      return chunk2mem(victim);
  2209         -    }
  2210         -
  2211         -    clear_last_remainder;
  2212         -
  2213         -    if (remainder_size >= 0)  /* exhaust */
  2214         -    {
  2215         -      set_inuse_bit_at_offset(victim, victim_size);
  2216         -      check_malloced_chunk(victim, nb);
  2217         -      return chunk2mem(victim);
  2218         -    }
  2219         -
  2220         -    /* Else place in bin */
  2221         -
  2222         -    frontlink(victim, victim_size, remainder_index, bck, fwd);
  2223         -  }
  2224         -
  2225         -  /* 
  2226         -     If there are any possibly nonempty big-enough blocks, 
  2227         -     search for best fitting chunk by scanning bins in blockwidth units.
  2228         -  */
  2229         -
  2230         -  if ( (block = idx2binblock(idx)) <= binblocks)  
  2231         -  {
  2232         -
  2233         -    /* Get to the first marked block */
  2234         -
  2235         -    if ( (block & binblocks) == 0) 
  2236         -    {
  2237         -      /* force to an even block boundary */
  2238         -      idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
  2239         -      block <<= 1;
  2240         -      while ((block & binblocks) == 0)
  2241         -      {
  2242         -        idx += BINBLOCKWIDTH;
  2243         -        block <<= 1;
  2244         -      }
  2245         -    }
  2246         -      
  2247         -    /* For each possibly nonempty block ... */
  2248         -    for (;;)  
  2249         -    {
  2250         -      startidx = idx;          /* (track incomplete blocks) */
  2251         -      q = bin = bin_at(idx);
  2252         -
  2253         -      /* For each bin in this block ... */
  2254         -      do
  2255         -      {
  2256         -        /* Find and use first big enough chunk ... */
  2257         -
  2258         -        for (victim = last(bin); victim != bin; victim = victim->bk)
  2259         -        {
  2260         -          victim_size = chunksize(victim);
  2261         -          remainder_size = victim_size - nb;
  2262         -
  2263         -          if (remainder_size >= (long)MINSIZE) /* split */
  2264         -          {
  2265         -            remainder = chunk_at_offset(victim, nb);
  2266         -            set_head(victim, nb | PREV_INUSE);
  2267         -            unlink(victim, bck, fwd);
  2268         -            link_last_remainder(remainder);
  2269         -            set_head(remainder, remainder_size | PREV_INUSE);
  2270         -            set_foot(remainder, remainder_size);
  2271         -            check_malloced_chunk(victim, nb);
  2272         -            return chunk2mem(victim);
  2273         -          }
  2274         -
  2275         -          else if (remainder_size >= 0)  /* take */
  2276         -          {
  2277         -            set_inuse_bit_at_offset(victim, victim_size);
  2278         -            unlink(victim, bck, fwd);
  2279         -            check_malloced_chunk(victim, nb);
  2280         -            return chunk2mem(victim);
  2281         -          }
  2282         -
  2283         -        }
  2284         -
  2285         -        bin = next_bin(bin);
  2286         -
  2287         -      } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
  2288         -
  2289         -      /* Clear out the block bit. */
  2290         -
  2291         -      do   /* Possibly backtrack to try to clear a partial block */
  2292         -      {
  2293         -        if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
  2294         -        {
  2295         -          binblocks &= ~block;
  2296         -          break;
  2297         -        }
  2298         -        --startidx;
  2299         -        q = prev_bin(q);
  2300         -      } while (first(q) == q);
  2301         -
  2302         -      /* Get to the next possibly nonempty block */
  2303         -
  2304         -      if ( (block <<= 1) <= binblocks && (block != 0) ) 
  2305         -      {
  2306         -        while ((block & binblocks) == 0)
  2307         -        {
  2308         -          idx += BINBLOCKWIDTH;
  2309         -          block <<= 1;
  2310         -        }
  2311         -      }
  2312         -      else
  2313         -        break;
  2314         -    }
  2315         -  }
  2316         -
  2317         -
  2318         -  /* Try to use top chunk */
  2319         -
  2320         -  /* Require that there be a remainder, ensuring top always exists  */
  2321         -  if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  2322         -  {
  2323         -
  2324         -#if HAVE_MMAP
  2325         -    /* If big and would otherwise need to extend, try to use mmap instead */
  2326         -    if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
  2327         -        (victim = mmap_chunk(nb)) != 0)
  2328         -      return chunk2mem(victim);
  2329         -#endif
  2330         -
  2331         -    /* Try to extend */
  2332         -    malloc_extend_top(nb);
  2333         -    if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
  2334         -      return 0; /* propagate failure */
  2335         -  }
  2336         -
  2337         -  victim = top;
  2338         -  set_head(victim, nb | PREV_INUSE);
  2339         -  top = chunk_at_offset(victim, nb);
  2340         -  set_head(top, remainder_size | PREV_INUSE);
  2341         -  check_malloced_chunk(victim, nb);
  2342         -  return chunk2mem(victim);
  2343         -
  2344         -}
  2345         -
  2346         -
  2347         -
  2348         -
  2349         -/*
  2350         -
  2351         -  free() algorithm :
  2352         -
  2353         -    cases:
  2354         -
  2355         -       1. free(0) has no effect.  
  2356         -
  2357         -       2. If the chunk was allocated via mmap, it is released via munmap().
  2358         -
  2359         -       3. If a returned chunk borders the current high end of memory,
  2360         -          it is consolidated into the top, and if the total unused
  2361         -          topmost memory exceeds the trim threshold, malloc_trim is
  2362         -          called.
  2363         -
  2364         -       4. Other chunks are consolidated as they arrive, and
  2365         -          placed in corresponding bins. (This includes the case of
  2366         -          consolidating with the current `last_remainder').
  2367         -
  2368         -*/
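
A minimal caller-side sketch of the contract above (assuming the standard
malloc/free names this allocator exports; example_free is illustrative only):

    #include <stdlib.h>

    void example_free(void)
    {
      char *p = malloc(100);
      free(p);    /* cases 3-4: merged into top or binned */
      free(0);    /* case 1: no effect */
    }
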
  2369         -
  2370         -
  2371         -#if __STD_C
  2372         -void fREe(Void_t* mem)
  2373         -#else
  2374         -void fREe(mem) Void_t* mem;
  2375         -#endif
  2376         -{
  2377         -  mchunkptr p;         /* chunk corresponding to mem */
  2378         -  INTERNAL_SIZE_T hd;  /* its head field */
  2379         -  INTERNAL_SIZE_T sz;  /* its size */
  2380         -  int       idx;       /* its bin index */
  2381         -  mchunkptr next;      /* next contiguous chunk */
  2382         -  INTERNAL_SIZE_T nextsz; /* its size */
  2383         -  INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
  2384         -  mchunkptr bck;       /* misc temp for linking */
  2385         -  mchunkptr fwd;       /* misc temp for linking */
  2386         -  int       islr;      /* track whether merging with last_remainder */
  2387         -
  2388         -  if (mem == 0)                              /* free(0) has no effect */
  2389         -    return;
  2390         -
  2391         -  p = mem2chunk(mem);
  2392         -  hd = p->size;
  2393         -
  2394         -#if HAVE_MMAP
  2395         -  if (hd & IS_MMAPPED)                       /* release mmapped memory. */
  2396         -  {
  2397         -    munmap_chunk(p);
  2398         -    return;
  2399         -  }
  2400         -#endif
  2401         -  
  2402         -  check_inuse_chunk(p);
  2403         -  
  2404         -  sz = hd & ~PREV_INUSE;
  2405         -  next = chunk_at_offset(p, sz);
  2406         -  nextsz = chunksize(next);
  2407         -  
  2408         -  if (next == top)                            /* merge with top */
  2409         -  {
  2410         -    sz += nextsz;
  2411         -
  2412         -    if (!(hd & PREV_INUSE))                    /* consolidate backward */
  2413         -    {
  2414         -      prevsz = p->prev_size;
  2415         -      p = chunk_at_offset(p, -((long) prevsz));
  2416         -      sz += prevsz;
  2417         -      unlink(p, bck, fwd);
  2418         -    }
  2419         -
  2420         -    set_head(p, sz | PREV_INUSE);
  2421         -    top = p;
  2422         -    if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 
  2423         -      malloc_trim(top_pad); 
  2424         -    return;
  2425         -  }
  2426         -
  2427         -  set_head(next, nextsz);                    /* clear inuse bit */
  2428         -
  2429         -  islr = 0;
  2430         -
  2431         -  if (!(hd & PREV_INUSE))                    /* consolidate backward */
  2432         -  {
  2433         -    prevsz = p->prev_size;
  2434         -    p = chunk_at_offset(p, -((long) prevsz));
  2435         -    sz += prevsz;
  2436         -    
  2437         -    if (p->fd == last_remainder)             /* keep as last_remainder */
  2438         -      islr = 1;
  2439         -    else
  2440         -      unlink(p, bck, fwd);
  2441         -  }
  2442         -  
  2443         -  if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
  2444         -  {
  2445         -    sz += nextsz;
  2446         -    
  2447         -    if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
  2448         -    {
  2449         -      islr = 1;
  2450         -      link_last_remainder(p);   
  2451         -    }
  2452         -    else
  2453         -      unlink(next, bck, fwd);
  2454         -  }
  2455         -
  2456         -
  2457         -  set_head(p, sz | PREV_INUSE);
  2458         -  set_foot(p, sz);
  2459         -  if (!islr)
  2460         -    frontlink(p, sz, idx, bck, fwd);  
  2461         -}
  2462         -
  2463         -
  2464         -
  2465         -
  2466         -
  2467         -/*
  2468         -
  2469         -  Realloc algorithm:
  2470         -
  2471         -    Chunks that were obtained via mmap cannot be extended or shrunk
  2472         -    unless HAVE_MREMAP is defined, in which case mremap is used.
  2473         -    Otherwise, if their reallocation is for additional space, they are
  2474         -    copied.  If for less, they are just left alone.
  2475         -
  2476         -    Otherwise, if the reallocation is for additional space, and the
  2477         -    chunk can be extended, it is, else a malloc-copy-free sequence is
  2478         -    taken.  There are several different ways that a chunk could be
  2479         -    extended. All are tried:
  2480         -
  2481         -       * Extending forward into following adjacent free chunk.
   2482         -       * Shifting backwards, joining preceding adjacent free space.
   2483         -       * Both shifting backwards and extending forward.
   2484         -       * Extending into newly sbrked space.
  2485         -
  2486         -    Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
  2487         -    size argument of zero (re)allocates a minimum-sized chunk.
  2488         -
  2489         -    If the reallocation is for less space, and the new request is for
  2490         -    a `small' (<512 bytes) size, then the newly unused space is lopped
  2491         -    off and freed.
  2492         -
  2493         -    The old unix realloc convention of allowing the last-free'd chunk
  2494         -    to be used as an argument to realloc is no longer supported.
   2495         -    I don't know of any programs still relying on this feature,
   2496         -    and allowing it would also make too many other incorrect
   2497         -    usages of realloc appear sensible.
  2498         -
  2499         -
  2500         -*/
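
Since the failure paths below return 0 without freeing oldmem, the old block
survives a failed realloc. A caller-side sketch of the usual safe idiom
(grow_buffer is a hypothetical helper; it uses the standard realloc name):

    #include <stdlib.h>

    int grow_buffer(char **buf, size_t newsize)
    {
      char *tmp = realloc(*buf, newsize);
      if (tmp == 0)
        return 0;   /* failure: *buf is still valid and still owned */
      *buf = tmp;   /* success: may or may not equal the old pointer */
      return 1;
    }
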
  2501         -
  2502         -
  2503         -#if __STD_C
  2504         -Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
  2505         -#else
  2506         -Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
  2507         -#endif
  2508         -{
  2509         -  INTERNAL_SIZE_T    nb;      /* padded request size */
  2510         -
  2511         -  mchunkptr oldp;             /* chunk corresponding to oldmem */
  2512         -  INTERNAL_SIZE_T    oldsize; /* its size */
  2513         -
  2514         -  mchunkptr newp;             /* chunk to return */
  2515         -  INTERNAL_SIZE_T    newsize; /* its size */
  2516         -  Void_t*   newmem;           /* corresponding user mem */
  2517         -
  2518         -  mchunkptr next;             /* next contiguous chunk after oldp */
  2519         -  INTERNAL_SIZE_T  nextsize;  /* its size */
  2520         -
  2521         -  mchunkptr prev;             /* previous contiguous chunk before oldp */
  2522         -  INTERNAL_SIZE_T  prevsize;  /* its size */
  2523         -
  2524         -  mchunkptr remainder;        /* holds split off extra space from newp */
  2525         -  INTERNAL_SIZE_T  remainder_size;   /* its size */
  2526         -
  2527         -  mchunkptr bck;              /* misc temp for linking */
  2528         -  mchunkptr fwd;              /* misc temp for linking */
  2529         -
  2530         -#ifdef REALLOC_ZERO_BYTES_FREES
  2531         -  if (bytes == 0) { fREe(oldmem); return 0; }
  2532         -#endif
  2533         -
  2534         -  if ((long)bytes < 0) return 0;
  2535         -
  2536         -  /* realloc of null is supposed to be same as malloc */
  2537         -  if (oldmem == 0) return mALLOc(bytes);
  2538         -
  2539         -  newp    = oldp    = mem2chunk(oldmem);
  2540         -  newsize = oldsize = chunksize(oldp);
  2541         -
  2542         -
  2543         -  nb = request2size(bytes);
  2544         -
  2545         -#if HAVE_MMAP
  2546         -  if (chunk_is_mmapped(oldp)) 
  2547         -  {
  2548         -#if HAVE_MREMAP
  2549         -    newp = mremap_chunk(oldp, nb);
  2550         -    if(newp) return chunk2mem(newp);
  2551         -#endif
  2552         -    /* Note the extra SIZE_SZ overhead. */
  2553         -    if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
  2554         -    /* Must alloc, copy, free. */
  2555         -    newmem = mALLOc(bytes);
  2556         -    if (newmem == 0) return 0; /* propagate failure */
  2557         -    MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
  2558         -    munmap_chunk(oldp);
  2559         -    return newmem;
  2560         -  }
  2561         -#endif
  2562         -
  2563         -  check_inuse_chunk(oldp);
  2564         -
  2565         -  if ((long)(oldsize) < (long)(nb))  
  2566         -  {
  2567         -
  2568         -    /* Try expanding forward */
  2569         -
  2570         -    next = chunk_at_offset(oldp, oldsize);
  2571         -    if (next == top || !inuse(next)) 
  2572         -    {
  2573         -      nextsize = chunksize(next);
  2574         -
  2575         -      /* Forward into top only if a remainder */
  2576         -      if (next == top)
  2577         -      {
  2578         -        if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
  2579         -        {
  2580         -          newsize += nextsize;
  2581         -          top = chunk_at_offset(oldp, nb);
  2582         -          set_head(top, (newsize - nb) | PREV_INUSE);
  2583         -          set_head_size(oldp, nb);
  2584         -          return chunk2mem(oldp);
  2585         -        }
  2586         -      }
  2587         -
  2588         -      /* Forward into next chunk */
  2589         -      else if (((long)(nextsize + newsize) >= (long)(nb)))
  2590         -      { 
  2591         -        unlink(next, bck, fwd);
  2592         -        newsize  += nextsize;
  2593         -        goto split;
  2594         -      }
  2595         -    }
  2596         -    else
  2597         -    {
  2598         -      next = 0;
  2599         -      nextsize = 0;
  2600         -    }
  2601         -
  2602         -    /* Try shifting backwards. */
  2603         -
  2604         -    if (!prev_inuse(oldp))
  2605         -    {
  2606         -      prev = prev_chunk(oldp);
  2607         -      prevsize = chunksize(prev);
  2608         -
  2609         -      /* try forward + backward first to save a later consolidation */
  2610         -
  2611         -      if (next != 0)
  2612         -      {
  2613         -        /* into top */
  2614         -        if (next == top)
  2615         -        {
  2616         -          if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
  2617         -          {
  2618         -            unlink(prev, bck, fwd);
  2619         -            newp = prev;
  2620         -            newsize += prevsize + nextsize;
  2621         -            newmem = chunk2mem(newp);
  2622         -            MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
  2623         -            top = chunk_at_offset(newp, nb);
  2624         -            set_head(top, (newsize - nb) | PREV_INUSE);
  2625         -            set_head_size(newp, nb);
  2626         -            return newmem;
  2627         -          }
  2628         -        }
  2629         -
  2630         -        /* into next chunk */
  2631         -        else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
  2632         -        {
  2633         -          unlink(next, bck, fwd);
  2634         -          unlink(prev, bck, fwd);
  2635         -          newp = prev;
  2636         -          newsize += nextsize + prevsize;
  2637         -          newmem = chunk2mem(newp);
  2638         -          MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
  2639         -          goto split;
  2640         -        }
  2641         -      }
  2642         -      
  2643         -      /* backward only */
  2644         -      if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)  
  2645         -      {
  2646         -        unlink(prev, bck, fwd);
  2647         -        newp = prev;
  2648         -        newsize += prevsize;
  2649         -        newmem = chunk2mem(newp);
  2650         -        MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
  2651         -        goto split;
  2652         -      }
  2653         -    }
  2654         -
  2655         -    /* Must allocate */
  2656         -
  2657         -    newmem = mALLOc (bytes);
  2658         -
  2659         -    if (newmem == 0)  /* propagate failure */
  2660         -      return 0; 
  2661         -
  2662         -    /* Avoid copy if newp is next chunk after oldp. */
  2663         -    /* (This can only happen when new chunk is sbrk'ed.) */
  2664         -
  2665         -    if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 
  2666         -    {
  2667         -      newsize += chunksize(newp);
  2668         -      newp = oldp;
  2669         -      goto split;
  2670         -    }
  2671         -
  2672         -    /* Otherwise copy, free, and exit */
  2673         -    MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
  2674         -    fREe(oldmem);
  2675         -    return newmem;
  2676         -  }
  2677         -
  2678         -
  2679         - split:  /* split off extra room in old or expanded chunk */
  2680         -
  2681         -  if (newsize - nb >= MINSIZE) /* split off remainder */
  2682         -  {
  2683         -    remainder = chunk_at_offset(newp, nb);
  2684         -    remainder_size = newsize - nb;
  2685         -    set_head_size(newp, nb);
  2686         -    set_head(remainder, remainder_size | PREV_INUSE);
  2687         -    set_inuse_bit_at_offset(remainder, remainder_size);
  2688         -    fREe(chunk2mem(remainder)); /* let free() deal with it */
  2689         -  }
  2690         -  else
  2691         -  {
  2692         -    set_head_size(newp, newsize);
  2693         -    set_inuse_bit_at_offset(newp, newsize);
  2694         -  }
  2695         -
  2696         -  check_inuse_chunk(newp);
  2697         -  return chunk2mem(newp);
  2698         -}
  2699         -
  2700         -
  2701         -
  2702         -
  2703         -/*
  2704         -
  2705         -  memalign algorithm:
  2706         -
  2707         -    memalign requests more than enough space from malloc, finds a spot
  2708         -    within that chunk that meets the alignment request, and then
  2709         -    possibly frees the leading and trailing space. 
  2710         -
  2711         -    The alignment argument must be a power of two. This property is not
  2712         -    checked by memalign, so misuse may result in random runtime errors.
  2713         -
  2714         -    8-byte alignment is guaranteed by normal malloc calls, so don't
  2715         -    bother calling memalign with an argument of 8 or less.
  2716         -
  2717         -    Overreliance on memalign is a sure way to fragment space.
  2718         -
  2719         -*/
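
A short usage sketch under the constraints above (the extern declaration is
spelled out because not every system header provides one; example_memalign
is illustrative only):

    #include <stdlib.h>

    extern void *memalign(size_t alignment, size_t n);

    void example_memalign(void)
    {
      void *p = memalign(64, 1000);   /* alignment must be a power of two */
      if (p)
        free(p);                      /* aligned chunks are freed normally */
    }
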
  2720         -
  2721         -
  2722         -#if __STD_C
  2723         -Void_t* mEMALIGn(size_t alignment, size_t bytes)
  2724         -#else
  2725         -Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
  2726         -#endif
  2727         -{
  2728         -  INTERNAL_SIZE_T    nb;      /* padded  request size */
  2729         -  char*     m;                /* memory returned by malloc call */
  2730         -  mchunkptr p;                /* corresponding chunk */
  2731         -  char*     brk;              /* alignment point within p */
  2732         -  mchunkptr newp;             /* chunk to return */
  2733         -  INTERNAL_SIZE_T  newsize;   /* its size */
   2734         -  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
  2735         -  mchunkptr remainder;        /* spare room at end to split off */
  2736         -  long      remainder_size;   /* its size */
  2737         -
  2738         -  if ((long)bytes < 0) return 0;
  2739         -
  2740         -  /* If need less alignment than we give anyway, just relay to malloc */
  2741         -
  2742         -  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
  2743         -
  2744         -  /* Otherwise, ensure that it is at least a minimum chunk size */
  2745         -  
  2746         -  if (alignment <  MINSIZE) alignment = MINSIZE;
  2747         -
  2748         -  /* Call malloc with worst case padding to hit alignment. */
  2749         -
  2750         -  nb = request2size(bytes);
  2751         -  m  = (char*)(mALLOc(nb + alignment + MINSIZE));
  2752         -
  2753         -  if (m == 0) return 0; /* propagate failure */
  2754         -
  2755         -  p = mem2chunk(m);
  2756         -
  2757         -  if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
  2758         -  {
  2759         -#if HAVE_MMAP
  2760         -    if(chunk_is_mmapped(p))
  2761         -      return chunk2mem(p); /* nothing more to do */
  2762         -#endif
  2763         -  }
  2764         -  else /* misaligned */
  2765         -  {
  2766         -    /* 
  2767         -      Find an aligned spot inside chunk.
  2768         -      Since we need to give back leading space in a chunk of at 
  2769         -      least MINSIZE, if the first calculation places us at
  2770         -      a spot with less than MINSIZE leader, we can move to the
  2771         -      next aligned spot -- we've allocated enough total room so that
  2772         -      this is always possible.
  2773         -    */
  2774         -
  2775         -    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
  2776         -    if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
  2777         -
  2778         -    newp = (mchunkptr)brk;
  2779         -    leadsize = brk - (char*)(p);
  2780         -    newsize = chunksize(p) - leadsize;
  2781         -
  2782         -#if HAVE_MMAP
  2783         -    if(chunk_is_mmapped(p)) 
  2784         -    {
  2785         -      newp->prev_size = p->prev_size + leadsize;
  2786         -      set_head(newp, newsize|IS_MMAPPED);
  2787         -      return chunk2mem(newp);
  2788         -    }
  2789         -#endif
  2790         -
  2791         -    /* give back leader, use the rest */
  2792         -
  2793         -    set_head(newp, newsize | PREV_INUSE);
  2794         -    set_inuse_bit_at_offset(newp, newsize);
  2795         -    set_head_size(p, leadsize);
  2796         -    fREe(chunk2mem(p));
  2797         -    p = newp;
  2798         -
  2799         -    assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  2800         -  }
  2801         -
  2802         -  /* Also give back spare room at the end */
  2803         -
  2804         -  remainder_size = chunksize(p) - nb;
  2805         -
  2806         -  if (remainder_size >= (long)MINSIZE)
  2807         -  {
  2808         -    remainder = chunk_at_offset(p, nb);
  2809         -    set_head(remainder, remainder_size | PREV_INUSE);
  2810         -    set_head_size(p, nb);
  2811         -    fREe(chunk2mem(remainder));
  2812         -  }
  2813         -
  2814         -  check_inuse_chunk(p);
  2815         -  return chunk2mem(p);
  2816         -
  2817         -}
  2818         -
  2819         -
  2820         -
  2821         -
  2822         -/*
  2823         -    valloc just invokes memalign with alignment argument equal
  2824         -    to the page size of the system (or as near to this as can
  2825         -    be figured out from all the includes/defines above.)
  2826         -*/
  2827         -
  2828         -#if __STD_C
  2829         -Void_t* vALLOc(size_t bytes)
  2830         -#else
  2831         -Void_t* vALLOc(bytes) size_t bytes;
  2832         -#endif
  2833         -{
  2834         -  return mEMALIGn (malloc_getpagesize, bytes);
  2835         -}
  2836         -
  2837         -/* 
   2838         -  pvalloc just invokes valloc with the request rounded up
   2839         -  to the nearest multiple of the page size.
  2840         -*/
  2841         -
  2842         -
  2843         -#if __STD_C
  2844         -Void_t* pvALLOc(size_t bytes)
  2845         -#else
  2846         -Void_t* pvALLOc(bytes) size_t bytes;
  2847         -#endif
  2848         -{
  2849         -  size_t pagesize = malloc_getpagesize;
  2850         -  return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
  2851         -}
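
The rounding expression above relies on the page size being a power of two:
adding pagesize-1 and masking with ~(pagesize-1) rounds up to the next page
multiple. A worked check with a hypothetical 4096-byte page:

    #include <assert.h>
    #include <stddef.h>

    void pvalloc_rounding_example(void)
    {
      size_t pagesize = 4096;
      assert(((5000 + pagesize - 1) & ~(pagesize - 1)) == 8192);  /* two pages */
      assert(((4096 + pagesize - 1) & ~(pagesize - 1)) == 4096);  /* already a multiple */
    }
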
  2852         -
  2853         -/*
  2854         -
  2855         -  calloc calls malloc, then zeroes out the allocated chunk.
  2856         -
  2857         -*/
  2858         -
  2859         -#if __STD_C
  2860         -Void_t* cALLOc(size_t n, size_t elem_size)
  2861         -#else
  2862         -Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
  2863         -#endif
  2864         -{
  2865         -  mchunkptr p;
  2866         -  INTERNAL_SIZE_T csz;
  2867         -
  2868         -  INTERNAL_SIZE_T sz = n * elem_size;
  2869         -
  2870         -
   2871         -  /* remember top so we can tell if malloc_extend_top was called
   2872         -     (memory obtained via MORECORE arrives already zeroed) */
  2872         -#if MORECORE_CLEARS
  2873         -  mchunkptr oldtop = top;
  2874         -  INTERNAL_SIZE_T oldtopsize = chunksize(top);
  2875         -#endif
   2876         -  Void_t* mem;
   2877         -
   2878         -  if ((long)n < 0) return 0;     /* reject bad counts before allocating */
   2879         -  mem = mALLOc (sz);
   2880         -  if (mem == 0)
  2881         -    return 0;
  2882         -  else
  2883         -  {
  2884         -    p = mem2chunk(mem);
  2885         -
   2886         -    /* Two optional cases in which clearing is not necessary */
  2887         -
  2888         -
  2889         -#if HAVE_MMAP
  2890         -    if (chunk_is_mmapped(p)) return mem;
  2891         -#endif
  2892         -
  2893         -    csz = chunksize(p);
  2894         -
  2895         -#if MORECORE_CLEARS
  2896         -    if (p == oldtop && csz > oldtopsize) 
  2897         -    {
  2898         -      /* clear only the bytes from non-freshly-sbrked memory */
  2899         -      csz = oldtopsize;
  2900         -    }
  2901         -#endif
  2902         -
  2903         -    MALLOC_ZERO(mem, csz - SIZE_SZ);
  2904         -    return mem;
  2905         -  }
  2906         -}
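
Note that the n * elem_size product above is not checked for overflow, as was
typical for allocators of this era; a cautious caller can add the guard itself.
A sketch (checked_calloc is a hypothetical wrapper over the standard calloc):

    #include <stdlib.h>

    void *checked_calloc(size_t n, size_t elem_size)
    {
      /* refuse requests where n * elem_size would wrap around */
      if (elem_size != 0 && n > (size_t)-1 / elem_size)
        return 0;
      return calloc(n, elem_size);
    }
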
  2907         -
  2908         -/*
  2909         - 
  2910         -  cfree just calls free. It is needed/defined on some systems
  2911         -  that pair it with calloc, presumably for odd historical reasons.
  2912         -
  2913         -*/
  2914         -
  2915         -#if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
  2916         -#if __STD_C
  2917         -void cfree(Void_t *mem)
  2918         -#else
  2919         -void cfree(mem) Void_t *mem;
  2920         -#endif
  2921         -{
  2922         -  fREe(mem);
  2923         -}
  2924         -#endif
  2925         -
  2926         -
  2927         -
  2928         -/*
  2929         -
  2930         -    Malloc_trim gives memory back to the system (via negative
  2931         -    arguments to sbrk) if there is unused memory at the `high' end of
  2932         -    the malloc pool. You can call this after freeing large blocks of
  2933         -    memory to potentially reduce the system-level memory requirements
  2934         -    of a program. However, it cannot guarantee to reduce memory. Under
  2935         -    some allocation patterns, some large free blocks of memory will be
  2936         -    locked between two used chunks, so they cannot be given back to
  2937         -    the system.
  2938         -
  2939         -    The `pad' argument to malloc_trim represents the amount of free
  2940         -    trailing space to leave untrimmed. If this argument is zero,
  2941         -    only the minimum amount of memory to maintain internal data
  2942         -    structures will be left (one page or less). Non-zero arguments
  2943         -    can be supplied to maintain enough trailing space to service
  2944         -    future expected allocations without having to re-obtain memory
  2945         -    from the system.
  2946         -
  2947         -    Malloc_trim returns 1 if it actually released any memory, else 0.
  2948         -
  2949         -*/
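
A usage sketch: after a phase that freed large blocks, request a trim while
keeping some headroom (release_after_phase is illustrative; the declaration
mirrors the definition below):

    #include <stdlib.h>

    extern int malloc_trim(size_t pad);

    void release_after_phase(void *big_block)
    {
      free(big_block);
      /* keep 64K of trailing free space; returns 1 only if pages
         were actually given back to the system */
      (void)malloc_trim(64 * 1024);
    }
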
  2950         -
  2951         -#if __STD_C
  2952         -int malloc_trim(size_t pad)
  2953         -#else
  2954         -int malloc_trim(pad) size_t pad;
  2955         -#endif
  2956         -{
  2957         -  long  top_size;        /* Amount of top-most memory */
  2958         -  long  extra;           /* Amount to release */
  2959         -  char* current_brk;     /* address returned by pre-check sbrk call */
  2960         -  char* new_brk;         /* address returned by negative sbrk call */
  2961         -
  2962         -  unsigned long pagesz = malloc_getpagesize;
  2963         -
  2964         -  top_size = chunksize(top);
  2965         -  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
  2966         -
  2967         -  if (extra < (long)pagesz)  /* Not enough memory to release */
  2968         -    return 0;
  2969         -
  2970         -  else
  2971         -  {
  2972         -    /* Test to make sure no one else called sbrk */
  2973         -    current_brk = (char*)(MORECORE (0));
  2974         -    if (current_brk != (char*)(top) + top_size)
  2975         -      return 0;     /* Apparently we don't own memory; must fail */
  2976         -
  2977         -    else
  2978         -    {
  2979         -      new_brk = (char*)(MORECORE (-extra));
  2980         -      
  2981         -      if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
  2982         -      {
  2983         -        /* Try to figure out what we have */
  2984         -        current_brk = (char*)(MORECORE (0));
  2985         -        top_size = current_brk - (char*)top;
  2986         -        if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
  2987         -        {
  2988         -          sbrked_mem = current_brk - sbrk_base;
  2989         -          set_head(top, top_size | PREV_INUSE);
  2990         -        }
  2991         -        check_chunk(top);
  2992         -        return 0; 
  2993         -      }
  2994         -
  2995         -      else
  2996         -      {
  2997         -        /* Success. Adjust top accordingly. */
  2998         -        set_head(top, (top_size - extra) | PREV_INUSE);
  2999         -        sbrked_mem -= extra;
  3000         -        check_chunk(top);
  3001         -        return 1;
  3002         -      }
  3003         -    }
  3004         -  }
  3005         -}
  3006         -
  3007         -
  3008         -
  3009         -/*
  3010         -  malloc_usable_size:
  3011         -
  3012         -    This routine tells you how many bytes you can actually use in an
  3013         -    allocated chunk, which may be more than you requested (although
  3014         -    often not). You can use this many bytes without worrying about
  3015         -    overwriting other allocated objects. Not a particularly great
  3016         -    programming practice, but still sometimes useful.
  3017         -
  3018         -*/
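
A small sketch (example_usable is illustrative; the declaration mirrors the
definition below):

    #include <stdio.h>
    #include <stdlib.h>

    extern size_t malloc_usable_size(void *mem);

    void example_usable(void)
    {
      char *p = malloc(100);
      if (p)
      {
        /* always at least the 100 bytes requested, usually a bit more
           because of padding to chunk granularity */
        printf("usable bytes: %lu\n", (unsigned long)malloc_usable_size(p));
        free(p);
      }
    }
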
  3019         -
  3020         -#if __STD_C
  3021         -size_t malloc_usable_size(Void_t* mem)
  3022         -#else
  3023         -size_t malloc_usable_size(mem) Void_t* mem;
  3024         -#endif
  3025         -{
  3026         -  mchunkptr p;
  3027         -  if (mem == 0)
  3028         -    return 0;
  3029         -  else
  3030         -  {
  3031         -    p = mem2chunk(mem);
  3032         -    if(!chunk_is_mmapped(p))
  3033         -    {
  3034         -      if (!inuse(p)) return 0;
  3035         -      check_inuse_chunk(p);
  3036         -      return chunksize(p) - SIZE_SZ;
  3037         -    }
  3038         -    return chunksize(p) - 2*SIZE_SZ;
  3039         -  }
  3040         -}
  3041         -
  3042         -
  3043         -
  3044         -
  3045         -/* Utility to update current_mallinfo for malloc_stats and mallinfo() */
  3046         -
  3047         -static void malloc_update_mallinfo() 
  3048         -{
  3049         -  int i;
  3050         -  mbinptr b;
  3051         -  mchunkptr p;
  3052         -#if DEBUG
  3053         -  mchunkptr q;
  3054         -#endif
  3055         -
  3056         -  INTERNAL_SIZE_T avail = chunksize(top);
  3057         -  int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
  3058         -
  3059         -  for (i = 1; i < NAV; ++i)
  3060         -  {
  3061         -    b = bin_at(i);
  3062         -    for (p = last(b); p != b; p = p->bk) 
  3063         -    {
  3064         -#if DEBUG
  3065         -      check_free_chunk(p);
  3066         -      for (q = next_chunk(p); 
  3067         -           q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 
  3068         -           q = next_chunk(q))
  3069         -        check_inuse_chunk(q);
  3070         -#endif
  3071         -      avail += chunksize(p);
  3072         -      navail++;
  3073         -    }
  3074         -  }
  3075         -
  3076         -  current_mallinfo.ordblks = navail;
  3077         -  current_mallinfo.uordblks = sbrked_mem - avail;
  3078         -  current_mallinfo.fordblks = avail;
  3079         -  current_mallinfo.hblks = n_mmaps;
  3080         -  current_mallinfo.hblkhd = mmapped_mem;
  3081         -  current_mallinfo.keepcost = chunksize(top);
  3082         -
  3083         -}
  3084         -
  3085         -
  3086         -
  3087         -/*
  3088         -
  3089         -  malloc_stats:
  3090         -
   3091         -    Prints on stderr the amount of space obtained from the system (both
  3092         -    via sbrk and mmap), the maximum amount (which may be more than
  3093         -    current if malloc_trim and/or munmap got called), the maximum
  3094         -    number of simultaneous mmap regions used, and the current number
  3095         -    of bytes allocated via malloc (or realloc, etc) but not yet
  3096         -    freed. (Note that this is the number of bytes allocated, not the
  3097         -    number requested. It will be larger than the number requested
  3098         -    because of alignment and bookkeeping overhead.)
  3099         -
  3100         -*/
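
A trivial usage sketch (run_with_stats is illustrative):

    extern void malloc_stats(void);

    void run_with_stats(void (*workload)(void))
    {
      workload();
      malloc_stats();   /* summary lines go to stderr */
    }
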
  3101         -
  3102         -void malloc_stats()
  3103         -{
  3104         -  malloc_update_mallinfo();
  3105         -  fprintf(stderr, "max system bytes = %10u\n", 
  3106         -          (unsigned int)(max_total_mem));
  3107         -  fprintf(stderr, "system bytes     = %10u\n", 
  3108         -          (unsigned int)(sbrked_mem + mmapped_mem));
  3109         -  fprintf(stderr, "in use bytes     = %10u\n", 
  3110         -          (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
  3111         -#if HAVE_MMAP
  3112         -  fprintf(stderr, "max mmap regions = %10u\n", 
  3113         -          (unsigned int)max_n_mmaps);
  3114         -#endif
  3115         -}
  3116         -
  3117         -/*
   3118         -  mallinfo returns a copy of the updated current mallinfo structure.
  3119         -*/
  3120         -
  3121         -struct mallinfo mALLINFo()
  3122         -{
  3123         -  malloc_update_mallinfo();
  3124         -  return current_mallinfo;
  3125         -}
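
A sketch that reads back the fields filled in by malloc_update_mallinfo above
(assuming the struct mallinfo definition from this file's header section, with
the classic int fields, is in scope; report_mallinfo is illustrative):

    #include <stdio.h>

    void report_mallinfo(void)
    {
      struct mallinfo mi = mALLINFo();
      printf("free chunks (ordblks):     %d\n", mi.ordblks);
      printf("free bytes (fordblks):     %d\n", mi.fordblks);
      printf("in-use bytes (uordblks):   %d\n", mi.uordblks);
      printf("mmapped bytes (hblkhd):    %d\n", mi.hblkhd);
      printf("top chunk size (keepcost): %d\n", mi.keepcost);
    }
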
  3126         -
  3127         -
  3128         -
  3129         -
  3130         -/*
  3131         -  mallopt:
  3132         -
  3133         -    mallopt is the general SVID/XPG interface to tunable parameters.
  3134         -    The format is to provide a (parameter-number, parameter-value) pair.
  3135         -    mallopt then sets the corresponding parameter to the argument
  3136         -    value if it can (i.e., so long as the value is meaningful),
  3137         -    and returns 1 if successful else 0.
  3138         -
  3139         -    See descriptions of tunable parameters above.
  3140         -
  3141         -*/
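
A usage sketch covering the parameters handled in the switch below
(tune_allocator is illustrative and the values are arbitrary examples; it
assumes the M_* parameter constants defined earlier in this file):

    void tune_allocator(void)
    {
      /* each call returns 1 on success, 0 for an unrecognized parameter */
      mALLOPt(M_TRIM_THRESHOLD, 1024 * 1024);  /* trim the heap less eagerly */
      mALLOPt(M_TOP_PAD,        64 * 1024);    /* keep headroom when extending */
      mALLOPt(M_MMAP_THRESHOLD, 256 * 1024);   /* mmap only very large requests */
      mALLOPt(M_MMAP_MAX,       64);           /* cap simultaneous mmap regions */
    }
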
  3142         -
  3143         -#if __STD_C
  3144         -int mALLOPt(int param_number, int value)
  3145         -#else
  3146         -int mALLOPt(param_number, value) int param_number; int value;
  3147         -#endif
  3148         -{
  3149         -  switch(param_number) 
  3150         -  {
  3151         -    case M_TRIM_THRESHOLD:
  3152         -      trim_threshold = value; return 1; 
  3153         -    case M_TOP_PAD:
  3154         -      top_pad = value; return 1; 
  3155         -    case M_MMAP_THRESHOLD:
  3156         -      mmap_threshold = value; return 1;
  3157         -    case M_MMAP_MAX:
  3158         -#if HAVE_MMAP
  3159         -      n_mmaps_max = value; return 1;
  3160         -#else
  3161         -      if (value != 0) return 0; else  n_mmaps_max = value; return 1;
  3162         -#endif
  3163         -
  3164         -    default:
  3165         -      return 0;
  3166         -  }
  3167         -}
  3168         -
  3169         -/*
  3170         -
  3171         -History:
  3172         -
  3173         -    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
  3174         -      * return null for negative arguments
  3175         -      * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
  3176         -         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
  3177         -          (e.g. WIN32 platforms)
   3178         -         * Clean up header file inclusion for WIN32 platforms
   3179         -         * Clean up code to avoid Microsoft Visual C++ compiler complaints
  3180         -         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
  3181         -           memory allocation routines
  3182         -         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
  3183         -         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
   3184         -           usage of 'assert' in non-WIN32 code
  3185         -         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
  3186         -           avoid infinite loop
  3187         -      * Always call 'fREe()' rather than 'free()'
  3188         -
  3189         -    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
  3190         -      * Fixed ordering problem with boundary-stamping
  3191         -
  3192         -    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
   3193         -      * Added pvalloc, as recommended by H.J. Lu
  3194         -      * Added 64bit pointer support mainly from Wolfram Gloger
  3195         -      * Added anonymously donated WIN32 sbrk emulation
  3196         -      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
  3197         -      * malloc_extend_top: fix mask error that caused wastage after
  3198         -        foreign sbrks
   3199         -      * Add linux mremap support code from HJ Lu
  3200         -   
  3201         -    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
  3202         -      * Integrated most documentation with the code.
  3203         -      * Add support for mmap, with help from 
  3204         -        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
  3205         -      * Use last_remainder in more cases.
  3206         -      * Pack bins using idea from  colin@nyx10.cs.du.edu
   3207         -      * Use ordered bins instead of best-fit threshold
  3208         -      * Eliminate block-local decls to simplify tracing and debugging.
  3209         -      * Support another case of realloc via move into top
   3210         -      * Fix error occurring when initial sbrk_base not word-aligned.
  3211         -      * Rely on page size for units instead of SBRK_UNIT to
  3212         -        avoid surprises about sbrk alignment conventions.
  3213         -      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
  3214         -        (raymond@es.ele.tue.nl) for the suggestion. 
  3215         -      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
  3216         -      * More precautions for cases where other routines call sbrk,
  3217         -        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
  3218         -      * Added macros etc., allowing use in linux libc from
  3219         -        H.J. Lu (hjl@gnu.ai.mit.edu)
  3220         -      * Inverted this history list
  3221         -
  3222         -    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
  3223         -      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
  3224         -      * Removed all preallocation code since under current scheme
  3225         -        the work required to undo bad preallocations exceeds
  3226         -        the work saved in good cases for most test programs.
  3227         -      * No longer use return list or unconsolidated bins since
  3228         -        no scheme using them consistently outperforms those that don't
  3229         -        given above changes.
  3230         -      * Use best fit for very large chunks to prevent some worst-cases.
  3231         -      * Added some support for debugging
  3232         -
  3233         -    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
  3234         -      * Removed footers when chunks are in use. Thanks to
  3235         -        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
  3236         -
  3237         -    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
  3238         -      * Added malloc_trim, with help from Wolfram Gloger 
  3239         -        (wmglo@Dent.MED.Uni-Muenchen.DE).
  3240         -
  3241         -    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
  3242         -
  3243         -    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
  3244         -      * realloc: try to expand in both directions
  3245         -      * malloc: swap order of clean-bin strategy;
  3246         -      * realloc: only conditionally expand backwards
  3247         -      * Try not to scavenge used bins
  3248         -      * Use bin counts as a guide to preallocation
  3249         -      * Occasionally bin return list chunks in first scan
  3250         -      * Add a few optimizations from colin@nyx10.cs.du.edu
  3251         -
  3252         -    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
  3253         -      * faster bin computation & slightly different binning
  3254         -      * merged all consolidations to one part of malloc proper
  3255         -         (eliminating old malloc_find_space & malloc_clean_bin)
  3256         -      * Scan 2 returns chunks (not just 1)
  3257         -      * Propagate failure in realloc if malloc returns 0
  3258         -      * Add stuff to allow compilation on non-ANSI compilers 
  3259         -          from kpv@research.att.com
  3260         -     
  3261         -    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
  3262         -      * removed potential for odd address access in prev_chunk
  3263         -      * removed dependency on getpagesize.h
  3264         -      * misc cosmetics and a bit more internal documentation
  3265         -      * anticosmetics: mangled names in macros to evade debugger strangeness
  3266         -      * tested on sparc, hp-700, dec-mips, rs6000 
  3267         -          with gcc & native cc (hp, dec only) allowing
  3268         -          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
  3269         -
  3270         -    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
   3271         -      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
   3272         -         structure of the old version, but most details differ.)
  3273         -
  3274         -*/
  3275         -
  3276         -