linux - Compile the kernel to increase the command line max length

2014-04-07
  • xpt

Following up on http://stackoverflow.com/questions/14176089/system-command-line-max-length-in-perl, which says the max length for exec/command-line arguments is controlled by ARG_MAX.

So I want to increase this command-line max length, and it seems recompiling the kernel is the only option. Fine. However, my question is about ARG_MAX, because everyone says that it is the one that needs to be increased, but I read from http://www.in-ulm.de/~mascheck/various/argmax/#linux that,

    ARG_MAX is not used in the kernel code itself pre-2.6.23. Post Linux 2.6.23, ARG_MAX is not hardcoded any more.

So is ARG_MAX still used or not? And how can I increase the max length for exec/command-line arguments? My command-line max length is currently capped at a smaller value than I'd prefer.

    Thanks

  • Answers
  • mr.spuratic

Since linux-2.6.23, ARG_MAX is not necessarily a pre-determined constant: the total size for arguments is permitted to be up to 1/4 of the stack size (see ulimit -s, stack size in kB; but /proc/1/limits is more definitive). Note that ARG_MAX is not just for the process arguments; it also covers the environment variables, which you may need to take into account.

    POSIX defines what ARG_MAX means, and an acceptable lower limit (_POSIX_ARG_MAX 4096). Its static value is (historically) available via a #define in the system headers, and it's also set in the linux kernel headers. Its effective value is available by sysconf() or getconf ARG_MAX from the command line.

    If you check the glibc headers (sys/param.h) you'll see this:

    /* The kernel headers defines ARG_MAX.  The value is wrong, though.  */
    #ifdef __undef_ARG_MAX
    # undef ARG_MAX
    # undef __undef_ARG_MAX
    #endif
    

That's from glibc-2.17; this appeared around 2.11 (2009), and first support for it dates to 2.8 (2008), but prior to 2.14 (2011) there was a bug in the above logic which prevented it from working as expected. The intention is to make sure ARG_MAX is undefined when it's not a constant, so that programs rely on sysconf() instead. (Even if it is defined, it might only be a guaranteed lower limit, and programs should use sysconf() to determine the variable upper limit, see sysconf(3).)

    You can check what your C compiler sees with (gcc, bash/zsh syntax only):

    $ gcc -E -dM -x c <(echo "#include <sys/param.h>") | fgrep ARG
    #define ARG_MAX 131072
    #define NCARGS ARG_MAX
    #define _POSIX_ARG_MAX 4096
    

The above output is from an old system (2.6.27), which has the kernel support but not the complete runtime (glibc) support. If you see no ARG_MAX line, then it is not a pre-determined limit, and you should use sysconf() (or getconf ARG_MAX from the shell):

    $ getconf ARG_MAX
    2097152
    

Another useful way to check support is:

    $ xargs --show-limits < /dev/null
    Your environment variables take up 2542 bytes
    POSIX upper limit on argument length (this system): 2092562
    POSIX smallest allowable upper limit on argument length (all systems): 4096
    Maximum length of command we could actually use: 2090020
    Size of command buffer we are actually using: 131072
    

That's from a linux-2.6.37/glibc-2.13 system with the higher limits. Note the final line of output: xargs defaults (at build time) to a "sensible" limit, probably in case any of the processes it starts cannot handle very large values. You can modify that at run time with the -s option. Also, if you have a ulimit -s in effect, those numbers may be lower. This should work correctly since findutils-4.3.9 (2007). See also: http://www.gnu.org/software/coreutils/faq/coreutils-faq.html#Argument-list-too-long

    To check perl:

    % perl -MPOSIX -e 'print ARG_MAX . "\n"';
    131072
    

The above is again from an old system; a newer system should show:

    % perl -MPOSIX -e 'print ARG_MAX . "\n"';
    Your vendor has not defined POSIX macro ARG_MAX, used at -e line 1
    

    To summarize:

    • If you're running a post-2.6.23 kernel, the kernel will permit larger sizes to be passed when creating a process. This is a necessary but not sufficient condition.
• The parent process must not enforce any incorrect runtime limit (e.g. with a hard-coded ARG_MAX); it should check for the exec() E2BIG error code instead, and use sysconf(_SC_ARG_MAX) if needed.
• The child process must not enforce any incorrect runtime limit; in particular its startup code, which processes the kernel-provided parameters, must not have incorrect hard-coded limits (e.g. when setting up argc, argv, and the environment area for run-time use). This is typically done in libc (glibc).
    • For parent and child you may also need configure and build time support from libc (or equivalent). For glibc this requires glibc-2.8 at least (though it should be possible to work around it, it might not be simple or clean)

A problematic combination is an updated (linux >= 2.6.23) kernel with missing or suspect glibc support (glibc <= 2.14).

    If you're running an older kernel, first make sure your vendor hasn't back-ported the feature. Otherwise you can in principle alter the kernel limit and recompile, but you may need to also modify at least some system headers or source code for working support.


    Programs should be able to handle arbitrary values, but this may not always be the case: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/limits.h.html

    Applications should not assume any particular value for a limit. [...] It should be noted, however, that many of the listed limits are not invariant, and at runtime, the value of the limit may differ from those given in this header, for the following reasons:

    • The limit is pathname-dependent.
    • The limit differs between the compile and runtime machines.

    For these reasons, an application may use the fpathconf(), pathconf(), and sysconf() functions to determine the actual value of a limit at runtime.


  • Related Question

    Compiling the Linux kernel, how much size is needed?
  • ant2009

I have downloaded the latest stable Linux kernel, 2.6.33.2.

I thought I would test this using VirtualBox, so I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages.

I ran make menuconfig with just the default settings.

    After that I ran make and got the following error:

    net/bluetooth/hci_sysfs.o: final close failed: No space left on device
    make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1
    make[1]: *** [net/bluetooth] Error 2
    make: *** [net] Error 2
    

    The amount of space I have left is:

    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00
                          3.3G  3.3G     0 100% /
    /dev/hda1              99M   12M   82M  13% /boot
    tmpfs                 125M     0  125M   0% /dev/shm
    

The disk's virtual size is 4 GB, but its actual size is 3.5 GB.

    $ ls -hl
    total 7.5G
    -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi
    

    How much size should I give when compiling and installing a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so just experimenting with this.


  • Related Answers
  • Pro Backup

An April 2010 Linux kernel is about a 60 MB bzip2 archive, which after unpacking and compiling takes about 400-500 MB.

You can check your directory size with du -hs:

    /mnt/storage/linux-2.6.33$ du -hs                               
    437M    .
    
  • ukanth

From the guide:

NOTE: If you do not have a lot of disk space in /usr/src, then you can unpack the kernel source package on any partition where you have free disk space (like /home), because a kernel compile needs a lot of disk space for object files (*.o). For this reason, /usr/src/linux MUST be a soft link pointing to your source directory.