Linux Application's Core Dump

  • What is it and why may you need it?

A core dump is a file containing a snapshot of a process's memory, taken when the process terminates unexpectedly. Core dumps can therefore be used for "offline" investigation of the reasons for a process's termination; the common term for this activity is "post mortem debugging".

Core dump creation is triggered by the Linux kernel in response to a process crash (the process performed an invalid operation). The kernel delivers a signal to the crashed process; the process can either handle the signal itself or keep the default disposition and let the kernel create the core dump file.

A core dump can also be produced on demand using a debugger, for example with the gcore utility that ships with GDB.

A properly configured Linux kernel and a properly compiled application will let you capture a core dump file and perform efficient post mortem debugging. You'll be able to review the source code, walk the call stacks of each of the application's threads, inspect the values of global and local variables, check CPU register values, etc.

Since core dump files contain a snapshot of the process's memory, sensitive data can be leaked and exposed; this is a security risk that should be considered.

  • What should be enabled in Linux Kernel build configuration?

CONFIG_COREDUMP

CONFIG_ELF_CORE

tip: If you are running on an existing Linux system, check that these options were compiled into the kernel, otherwise no core dump will be produced. On Ubuntu, for example, the kernel configuration can be found in /boot/config-$(uname -r)
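A quick check might look like this; the /boot path is an assumption that holds on Ubuntu, while other distributions may expose the configuration as /proc/config.gz instead:

```shell
# Look for core dump support in the running kernel's build configuration.
KCONF="/boot/config-$(uname -r)"
if [ -r "$KCONF" ]; then
    grep -E '^CONFIG_(COREDUMP|ELF_CORE)=' "$KCONF"
else
    echo "kernel build configuration not found at $KCONF"
fi
```

Both options should be reported as =y; if they are missing, the kernel must be rebuilt with them enabled.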

  • What should be configured in Linux Kernel at runtime?

Linux kernel runtime configuration is exposed to user space through dedicated files in the virtual file system under /proc/sys/kernel. We'll use the sysctl utility, which is the more common way to change these settings.


/proc/sys/kernel/core_pattern

By default, the core dump file is named core, but the system can be told to embed useful information in the core dump file name, or even to invoke a dedicated script/program for additional flexibility.

The /proc/sys/kernel/core_pattern file is used for this purpose; there are a few possible configurations:

Option A

With this option, the system produces the core dump itself and names it according to the specified pattern.

sysctl -w kernel.core_pattern="/mnt/flash/core.%e.%p.%i.%s.%t"

Where:

           %%  a single % character
           %c  core file size soft resource limit of crashing process (since
               Linux 2.6.24)
           %d  dump mode—same as value returned by prctl(2) PR_GET_DUMPABLE
               (since Linux 3.7)
           %e  executable filename (without path prefix)
           %E  pathname of executable, with slashes ('/') replaced by
               exclamation marks ('!') (since Linux 3.0).
           %g  (numeric) real GID of dumped process
           %h  hostname (same as nodename returned by uname(2))
           %i  TID of thread that triggered core dump, as seen in the PID
               namespace in which the thread resides (since Linux 3.18)
           %I  TID of thread that triggered core dump, as seen in the
               initial PID namespace (since Linux 3.18)
           %p  PID of dumped process, as seen in the PID namespace in which
               the process resides
           %P  PID of dumped process, as seen in the initial PID namespace
               (since Linux 3.12)
           %s  number of signal causing dump
           %t  time of dump, expressed as seconds since the Epoch,
               1970-01-01 00:00:00 +0000 (UTC)
           %u  (numeric) real UID of dumped process
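As an illustration, with the pattern configured above, a crash of an executable named myapp (PID and TID 1234) on SIGSEGV (signal 11) would produce a file name of the form /mnt/flash/core.myapp.1234.1234.11.1650000000 (all values here are made up). The active pattern can be read back at any time:

```shell
sysctl kernel.core_pattern    # print the currently active pattern
```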

tip: In our setup the core dump file could not be written directly to the attached DiskOnKey; to overcome this limitation, use Option B below.

Option B

This option uses a feature available since kernel 2.6.19: piping the core dump to a program.

With this option, upon an application crash the system invokes a dedicated script (or program) that handles core dump creation itself, for example compressing the data with the gzip utility to save storage space.

sysctl -w kernel.core_pattern="|/bin/coredumper.sh %e %p %i %s %t"

Content of the /bin/coredumper.sh file (don't forget to make it executable):

#!/bin/sh

exec /bin/gzip -f - >"/mnt/flash/core.$1.$2.$3.$4.$5.dump.gz"
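The handler receives the raw core image on its standard input, so everything after the pipe is ordinary stream processing. A quick sanity check of the same compress-and-store pipeline using dummy data (the /tmp file name is just for the test):

```shell
# Feed dummy data through the same gzip pipeline used by coredumper.sh.
printf 'dummy core data' | gzip -f - > /tmp/core.sanity.gz
gunzip -c /tmp/core.sanity.gz    # prints: dummy core data
```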


/proc/sys/kernel/suid_dumpable

If core dumps should also be generated for processes running with suid (set owner user ID upon execution), you need to configure the suid_dumpable setting as well.

A few words regarding suid...

suid is a special permission bit on executable files: it enables other users to execute the file with the effective permissions of the file's owner. In the file's mode, instead of the normal "x" mark that represents execute permission you will find an "s" mark, indicating the suid special permission. As you can guess, this is a security risk, because core dumps of such processes may contain privileged information.

The /proc/sys/kernel/suid_dumpable file is used for configuration, and there are a few possible options:

0 - (default) - traditional behaviour. Any process which has changed
  privilege levels or is execute only will not be dumped.
1 - (debug) - all processes dump core when possible. The core dump is
  owned by the current user and no security is applied. This is
  intended for system debugging situations only. Ptrace is unchecked.
  This is insecure as it allows regular users to examine the memory
  contents of privileged processes.
2 - (suidsafe) - any binary which normally would not be dumped is dumped
  anyway, but only if the "core_pattern" kernel sysctl is set to
  either a pipe handler or a fully qualified path. (For more details
  on this limitation, see CVE-2006-2451.) This mode is appropriate
  when administrators are attempting to debug problems in a normal
  environment, and either have a core dump pipe handler that knows
  to treat privileged core dumps with care, or specific directory
  defined for catching core dumps. If a core dump happens without
  a pipe handler or fully qualified path, a message will be emitted
  to syslog warning about the lack of a correct setting.

We'll choose option "1"

sysctl -w kernel.suid_dumpable=1

  • What should be configured before Application execution?

Each process running on Linux has "max" resource limit definitions, such as max CPU time, max file size, max stack size, max core file size, etc.

Limits are managed using two boundary types:

Soft limit - the value can be modified by the owner of the process, up to the hard limit.

Hard limit - the value can only be raised by root (unprivileged processes may lower it).

You can view the limits of the current shell session (an application executed from this shell session will inherit them):

ulimit -a

To allow core dump file creation for an application, the "Max core file size" value must be larger than 0 (zero). We'll set it to "unlimited", meaning the core dump file can be as large as required.

ulimit -c unlimited
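Once the limit is raised, the whole chain can be verified end to end by crashing a throwaway process and checking the shell-reported exit status; where the resulting core file lands depends on the kernel.core_pattern configured earlier:

```shell
ulimit -c unlimited      # allow core files in this shell session
sleep 30 &               # start a throwaway background process
kill -SEGV $!            # send it a core-producing signal
wait $!                  # collect its termination status
echo "exit status: $?"   # prints: exit status: 139 (128 + 11, SIGSEGV)
```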

tip: You can view the limits of an already running process as follows:

cat /proc/[PID]/limits

tip: A process can programmatically query and modify its own "Max core file size" limit using the getrlimit() and setrlimit() functions with the RLIMIT_CORE resource.
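From the shell side, the prlimit utility from util-linux offers similar control over an already running process; this is a sketch using the current shell's PID:

```shell
prlimit --pid $$ --core                        # show soft/hard core limits
prlimit --pid $$ --core=unlimited:unlimited    # raise both (raising the hard
                                               # limit may require root)
```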


/proc/[PID]/coredump_filter

Another setting worth discussing is coredump_filter. This is a per-process setting that controls which memory mappings/segments will be written to the core dump file.

The /proc/[PID]/coredump_filter file is used for configuration. The value in the file is a bitmask: if a bit is set, the corresponding type of memory is dumped, otherwise it is skipped. The possible bits are below:

           bit 0  Dump anonymous private mappings.
           bit 1  Dump anonymous shared mappings.
           bit 2  Dump file-backed private mappings.
           bit 3  Dump file-backed shared mappings.
           bit 4 (since Linux 2.6.24)
                  Dump ELF headers.
           bit 5 (since Linux 2.6.28)
                  Dump private huge pages.
           bit 6 (since Linux 2.6.28)
                  Dump shared huge pages.
           bit 7 (since Linux 4.4)
                  Dump private DAX pages.
           bit 8 (since Linux 4.4)
                  Dump shared DAX pages.

We'll leave it at its default value (0x33, i.e. bits 0, 1, 4 and 5)...
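For completeness, here is how the mask can be inspected and, if ever needed, changed for the current process (the 0x37 value is purely illustrative; children inherit the mask across fork):

```shell
cat /proc/self/coredump_filter           # read the current mask (hex)
echo 0x37 > /proc/self/coredump_filter   # e.g. also dump file-backed
                                         # private mappings (bit 2)
cat /proc/self/coredump_filter           # prints: 00000037
```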

  • What about the Application?

There are a few things we need to take care of regarding the requirements on the application.

The application's ELF file should contain enough debug information/symbols (e.g. gcc -g3) to allow comfortable source-level navigation during post mortem debugging. After the application build, keep the ELF with this extra debug information for post mortem debugging, and a stripped version for releases.

If your application installs custom handlers for signals that produce core dumps, you need to restore the default disposition (e.g. by calling signal(SIGSEGV, SIG_DFL)) so the kernel can create the dump; the relevant signals are listed below:

tip: Signals whose default action is to produce a core dump

       Signal      Standard   Action   Comment
       ────────────────────────────────────────────────────────────────────────
       SIGABRT      P1990      Core    Abort signal from abort(3)
       SIGBUS       P2001      Core    Bus error (bad memory access)
       SIGFPE       P1990      Core    Floating-point exception
       SIGILL       P1990      Core    Illegal Instruction
       SIGIOT         -        Core    IOT trap. A synonym for SIGABRT
       SIGQUIT      P1990      Core    Quit from keyboard
       SIGSEGV      P1990      Core    Invalid memory reference
       SIGSYS       P2001      Core    Bad system call (SVr4);
                                       see also seccomp(2)
       SIGTRAP      P2001      Core    Trace/breakpoint trap
       SIGUNUSED      -        Core    Synonymous with SIGSYS
       SIGXCPU      P2001      Core    CPU time limit exceeded (4.2BSD);
                                       see setrlimit(2)
       SIGXFSZ      P2001      Core    File size limit exceeded (4.2BSD);
                                       see setrlimit(2)
  • Post mortem debugging tools

Post mortem debugging is performed using GDB, for example:

gdb /path_to_applications_elf_not_stripped /path_to_applications_core_dump_file

(gdb) bt
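Beyond bt, a session typically inspects every thread's stack, local variables and registers. A batch-mode sketch using standard gdb commands (the two paths are the same placeholders as above):

```shell
gdb --batch \
    -ex "info threads" \
    -ex "thread apply all bt full" \
    -ex "info registers" \
    /path_to_applications_elf_not_stripped /path_to_applications_core_dump_file
```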

I prefer to use the UI front end for GDB provided by Eclipse CDT with Linux Tools installed.