[#]: subject: "Behind "Hello World" on Linux"
[#]: via: "https://jvns.ca/blog/2023/08/03/behind--hello-world/"
[#]: author: "Julia Evans https://jvns.ca/"
[#]: collector: "lujun9972"
[#]: translator: " "
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "

Behind "Hello World" on Linux

Today I was thinking about what happens when you run a simple "Hello World" Python program on Linux, like this one:


    print("hello world")

Here's what it looks like at the command line:


    $ python3 hello.py
    hello world

But behind the scenes, there's a lot more going on. I'll describe some of what happens, and (much much more importantly!) explain some tools you can use to see what's going on behind the scenes yourself. We'll use readelf, strace, ldd, debugfs, /proc, ltrace, dd, and stat. I won't talk about the Python-specific parts at all, just what happens when you run any dynamically linked executable.

Here's a table of contents:

  1. parse “python3 hello.py”
  2. figure out the full path to python3
  3. stat, under the hood
  4. time to fork
  5. the shell calls execve
  6. get the binary's contents
  7. find the interpreter
  8. dynamic linking
  9. go to _start
  10. write a string

before execve

Before we even start the Python interpreter, there are a lot of things that have to happen. What executable are we even running? Where is it?

1: The shell parses the string python3 hello.py into a command to run and a list of arguments: python3, and ['hello.py']

A bunch of things like glob expansion could happen here. For example, if you run python3 *.py, the shell will expand that into python3 hello.py.
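
Here's a rough sketch of those two steps in Python (a toy version: a real shell's parser also handles quoting, pipes, variables, and lots more):


    import glob
    import shlex

    # split the command line into words, respecting quotes
    words = shlex.split("python3 *.py")

    # expand glob patterns the way the shell would
    args = []
    for word in words:
        matches = sorted(glob.glob(word))
        args.extend(matches if matches else [word])

    print(args)  # ['python3', 'hello.py'] if hello.py is the only .py file here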

2: The shell figures out the full path to python3

Now we know we need to run python3. But what's the full path to that binary? The way this works is that there's a special environment variable named PATH.

See for yourself: Run echo $PATH in your shell. For me it looks like this:


    $ echo $PATH
    /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

When you run a command, the shell will search every directory in that list (in order) to try to find a match.
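
Here's a little Python sketch of that search loop, using os.stat the same way the shell does (a real shell also checks that the file is actually executable, but this is the basic idea):


    import os

    def find_in_path(program):
        for directory in os.environ["PATH"].split(":"):
            candidate = os.path.join(directory, program)
            try:
                os.stat(candidate)      # the same stat calls you'll see in strace below
                return candidate
            except FileNotFoundError:   # ENOENT: not in this directory, keep going
                continue
        return None

    print(find_in_path("python3"))      # /usr/bin/python3 on my machine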

In fish (my shell), you can see the path resolution logic here. It uses the stat system call to check if files exist.

See for yourself: Run strace -e stat bash, and then run a command like python3. You should see output like this:


    stat("/usr/local/sbin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
    stat("/usr/local/bin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
    stat("/usr/sbin/python3", 0x7ffcdd871f40) = -1 ENOENT (No such file or directory)
    stat("/usr/bin/python3", {st_mode=S_IFREG|0755, st_size=5479736, ...}) = 0

You can see that it finds the binary at /usr/bin/python3 and stops: it doesn't continue searching /sbin or /bin.

2.1: A note on execvp

If you want to run the same PATH searching logic as the shell does without reimplementing it yourself, you can use the libc function execvp (or one of the other exec* functions with p in the name).
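
For example, Python exposes it as os.execvp. A tiny sketch (careful: if it succeeds, it replaces the current process, so nothing after it runs):


    import os

    # execvp does the PATH search for us: "python3" gets resolved to /usr/bin/python3
    os.execvp("python3", ["python3", "hello.py"])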

3: stat, under the hood

Now you might be wondering: Julia, what is stat doing? Well, when your OS opens a file, it's split into 2 steps.

  1. It maps the filename to an inode, which contains metadata about the file
  2. It uses the inode to get the files contents

The stat system call just returns the contents of the file's inode: it doesn't read the contents at all. The advantage of this is that it's a lot faster. Let's go on a short adventure into inodes. (This great post "A disk is a bunch of bits" by Dmitry Mazin has more details.)


    $ stat /usr/bin/python3
      File: /usr/bin/python3 -> python3.9
      Size: 9           Blocks: 0          IO Block: 4096   symbolic link
    Device: fe01h/65025d    Inode: 6206        Links: 1
    Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
    Access: 2023-08-03 14:17:28.890364214 +0000
    Modify: 2021-04-05 12:00:48.000000000 +0000
    Change: 2021-06-22 04:22:50.936969560 +0000
     Birth: 2021-06-22 04:22:50.924969237 +0000

See for yourself: Let's go see where exactly that inode is on our hard drive.

First, we have to find our hard drive's device name:


    $ df
    ...
    tmpfs             100016      604     99412   1% /run
    /dev/vda1       25630792 14488736  10062712  60% /
    ...

Looks like it's /dev/vda1. Next, let's find out where the inode for /usr/bin/python3 is on our hard drive:


    $ sudo debugfs /dev/vda1
    debugfs 1.46.2 (28-Feb-2021)
    debugfs:  imap /usr/bin/python3
    Inode 6206 is part of block group 0
        located at block 658, offset 0x0d00

I have no idea how debugfs is figuring out the location of the inode for that filename, but we're going to leave that alone.

Now we need to calculate how many bytes from the beginning of the hard drive "block 658, offset 0x0d00" is, treating the hard drive as one big array of bytes. Each block is 4096 bytes, so we need to go 4096 * 658 + 0x0d00 bytes in. A calculator tells me that's 2698496.
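
You can double-check that arithmetic in a Python REPL:


    >>> 4096 * 658 + 0x0d00
    2698496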


    $ sudo dd if=/dev/vda1 bs=1 skip=2698496 count=256 2>/dev/null | hexdump -C
    00000000  ff a1 00 00 09 00 00 00  f8 b6 cb 64 9a 65 d1 60  |...........d.e.`|
    00000010  f0 fb 6a 60 00 00 00 00  00 00 01 00 00 00 00 00  |..j`............|
    00000020  00 00 00 00 01 00 00 00  70 79 74 68 6f 6e 33 2e  |........python3.|
    00000030  39 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |9...............|
    00000040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    00000060  00 00 00 00 12 4a 95 8c  00 00 00 00 00 00 00 00  |.....J..........|
    00000070  00 00 00 00 00 00 00 00  00 00 00 00 2d cb 00 00  |............-...|
    00000080  20 00 bd e7 60 15 64 df  00 00 00 00 d8 84 47 d4  | ...`.d.......G.|
    00000090  9a 65 d1 60 54 a4 87 dc  00 00 00 00 00 00 00 00  |.e.`T...........|
    000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|

Neat! There's our inode! You can see it says python3 in it, which is a really good sign. We're not going to go through all of this, but the ext4 inode struct from the Linux kernel says that the first 16 bits are the "mode", or permissions. So let's work out how ff a1 corresponds to file permissions.

  • The bytes ffa1 correspond to the number 0xa1ff, or 41471 (because x86 is little endian)
  • 41471 in octal is 0120777
  • This is a bit weird: that file's permissions could definitely be 777, but what are the first 3 digits? I'm not used to seeing those! You can find out what the 012 means in man inode (scroll down to "The file type and mode"). There's a little table that says 012 means "symbolic link".

Let's list the file and see if it is in fact a symbolic link with permissions 777:


    $ ls -l /usr/bin/python3
    lrwxrwxrwx 1 root root 9 Apr  5  2021 /usr/bin/python3 -> python3.9

It is! Hooray, we decoded it correctly.
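
If you don't want to decode mode bits by hand, Python's stat module can do it for you:


    >>> import stat
    >>> oct(0xa1ff)           # the first 2 bytes of the inode, byte-swapped
    '0o120777'
    >>> stat.filemode(0xa1ff)
    'lrwxrwxrwx'
    >>> stat.S_ISLNK(0xa1ff)  # the 012 part means "symbolic link"
    True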

4: Time to fork

We're still not ready to start python3. First, the shell needs to create a new child process to run it. The way new processes start on Unix is a little weird: first the process clones itself, and then runs execve, which replaces the cloned process with a new process.

See for yourself: Run strace -e clone bash, then run python3. You should see something like this:

    clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f03788f1a10) = 3708100

3708100 is the PID of the new process, which is a child of the shell process.
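
You can play with this fork-then-exec dance yourself; here's a minimal sketch in Python using os.fork (the shell does roughly this, though with clone and a lot more bookkeeping):


    import os

    pid = os.fork()
    if pid == 0:
        # fork returns 0 in the child: this is where a shell would call execve
        print("in the child, my pid is", os.getpid())
        os._exit(0)
    else:
        # in the parent, fork returns the child's PID (like 3708100 above)
        print("in the parent, the child's pid is", pid)
        os.waitpid(pid, 0)  # wait for the child so it doesn't become a zombie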

Some more tools to look at what's going on with processes:

  • pstree will show you a tree of all the processes on your system
  • cat /proc/PID/stat shows you some information about the process. The contents of that file are documented in man proc. For example the 4th field is the parent PID.

4.1: What the new process inherits

The new process (which will become python3) has inherited a bunch of things from the shell (there's a little demo after this list). For example, it's inherited:

  1. environment variables: you can look at them with cat /proc/PID/environ | tr '\0' '\n'
  2. file descriptors for stdout and stderr: look at them with ls -l /proc/PID/fd
  3. a working directory (whatever the current directory is)
  4. namespaces and cgroups (if it's in a container)
  5. the user and group that's running it
  6. probably more things I'm not thinking of right now
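
Here's that demo: the forked child below sees the parent's environment variable and working directory without doing anything special.


    import os

    os.environ["DEMO"] = "inherited!"
    if os.fork() == 0:
        # the child gets a copy of the parent's environment and working directory
        print(os.environ["DEMO"], "cwd:", os.getcwd())
        os._exit(0)
    os.wait()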

5: The shell calls execve

Now we're ready to start the Python interpreter!

See for yourself: Run strace -f -e execve bash, then run python3. The -f is important because we want to follow any forked child subprocesses. You should see something like this:


    [pid 3708381] execve("/usr/bin/python3", ["python3"], 0x560397748300 /* 21 vars */) = 0

The first argument is the binary, and the second argument is the list of command line arguments. The command line arguments get placed in a special location in the program's memory so that it can access them when it runs.
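
Python's os.execve is a thin wrapper around this system call, so you can make the same call yourself. A sketch (like execvp above, it replaces the current process):


    import os

    # the same three arguments as in the strace output:
    # the binary, argv (which the new program sees as sys.argv), and the environment
    os.execve("/usr/bin/python3", ["python3", "hello.py"], os.environ)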

Now, what's going on inside execve?

6: get the binary's contents

The first thing that has to happen is that we need to open the python3 binary file and read its contents. So far we've only used the stat system call to access its metadata, but now we need its contents.

Lets look at the output of stat again:


    $ stat /usr/bin/python3
      File: /usr/bin/python3 -> python3.9
      Size: 9           Blocks: 0          IO Block: 4096   symbolic link
    Device: fe01h/65025d    Inode: 6206        Links: 1
    ...

This takes up 0 blocks of space on the disk. This is because the contents of the symbolic link (python3.9) are actually in the inode itself: you can see them here (from the binary contents of the inode above, it's split across 2 lines in the hexdump output):


    00000020  00 00 00 00 01 00 00 00  70 79 74 68 6f 6e 33 2e  |........python3.|
    00000030  39 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |9...............|

So we'll need to open /usr/bin/python3.9 instead. All of this happens inside the kernel, so you won't see another system call for it.

Every file is made up of a bunch of blocks on the hard drive. I think each of these blocks on my system is 4096 bytes, so the minimum size of a file is 4096 bytes. Even if the file is only 5 bytes, it still takes up 4KB on disk.
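
You can see this with os.stat: st_blocks counts 512-byte sectors, so a 5-byte file still occupies 8 of them (4096 bytes) on my system:


    import os

    with open("tiny.txt", "w") as f:
        f.write("hello")              # a 5-byte file

    s = os.stat("tiny.txt")
    print(s.st_size)                  # 5 bytes of content...
    print(s.st_blocks * 512)          # ...but 4096 bytes of disk (on my ext4 system)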

See for yourself: We can find the block numbers using debugfs like this (again, I got these instructions from Dmitry Mazin's "A disk is a bunch of bits" post):


    $ debugfs /dev/vda1
    debugfs:  blocks /usr/bin/python3.9
    145408 145409 145410 145411 145412 145413 145414 145415 145416 145417 145418 145419 145420 145421 145422 145423 145424 145425 145426 145427 145428 145429 145430 145431 145432 145433 145434 145435 145436 145437

Now we can use dd to read the first block of the file. We'll set the block size to 4096 bytes, skip 145408 blocks, and read 1 block.


    $ dd if=/dev/vda1 bs=4096 skip=145408 count=1 2>/dev/null | hexdump -C | head
    00000000  7f 45 4c 46 02 01 01 00  00 00 00 00 00 00 00 00  |.ELF............|
    00000010  02 00 3e 00 01 00 00 00  c0 a5 5e 00 00 00 00 00  |..>.......^.....|
    00000020  40 00 00 00 00 00 00 00  b8 95 53 00 00 00 00 00  |@.........S.....|
    00000030  00 00 00 00 40 00 38 00  0b 00 40 00 1e 00 1d 00  |....@.8...@.....|
    00000040  06 00 00 00 04 00 00 00  40 00 00 00 00 00 00 00  |........@.......|
    00000050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00  |@.@.....@.@.....|
    00000060  68 02 00 00 00 00 00 00  68 02 00 00 00 00 00 00  |h.......h.......|
    00000070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00  |................|
    00000080  a8 02 00 00 00 00 00 00  a8 02 40 00 00 00 00 00  |..........@.....|
    00000090  a8 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |..@.............|

You can see that we get the exact same output as if we read the file with cat, like this:


    $ cat /usr/bin/python3.9 | hexdump -C | head
    00000000  7f 45 4c 46 02 01 01 00  00 00 00 00 00 00 00 00  |.ELF............|
    00000010  02 00 3e 00 01 00 00 00  c0 a5 5e 00 00 00 00 00  |..>.......^.....|
    00000020  40 00 00 00 00 00 00 00  b8 95 53 00 00 00 00 00  |@.........S.....|
    00000030  00 00 00 00 40 00 38 00  0b 00 40 00 1e 00 1d 00  |....@.8...@.....|
    00000040  06 00 00 00 04 00 00 00  40 00 00 00 00 00 00 00  |........@.......|
    00000050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00  |@.@.....@.@.....|
    00000060  68 02 00 00 00 00 00 00  68 02 00 00 00 00 00 00  |h.......h.......|
    00000070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00  |................|
    00000080  a8 02 00 00 00 00 00 00  a8 02 40 00 00 00 00 00  |..........@.....|
    00000090  a8 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |..@.............|

an aside on magic numbers

This file starts with ELF, which is a “magic number”, or a byte sequence that tells us that this is an ELF file. ELF is the binary file format on Linux.

Different file formats have different magic numbers; for example, the magic number for gzip is 1f8b. The magic number at the beginning is how file blah.gz knows that it's a gzip file.

I think file has a variety of heuristics for figuring out the file type of a file, not just magic numbers, but the magic number is an important one.
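
Here's a sketch of a tiny magic-number-based file type check (file itself knows about hundreds of formats and lots of other heuristics):


    def guess_type(path):
        with open(path, "rb") as f:
            magic = f.read(4)
        if magic == b"\x7fELF":       # 7f 45 4c 46, the ELF magic number
            return "ELF executable"
        if magic[:2] == b"\x1f\x8b":  # the gzip magic number
            return "gzip data"
        return "no idea!"

    print(guess_type("/usr/bin/python3.9"))  # ELF executable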

7: find the interpreter

Let's parse the ELF file to see what's in there.

See for yourself: Run readelf -a /usr/bin/python3.9. Here's what I get (though I've redacted a LOT of stuff):


    $ readelf -a /usr/bin/python3.9
    ELF Header:
        Class:                             ELF64
        Machine:                           Advanced Micro Devices X86-64
    ...
    ->  Entry point address:               0x5ea5c0
    ...
    Program Headers:
      Type           Offset             VirtAddr           PhysAddr
      INTERP         0x00000000000002a8 0x00000000004002a8 0x00000000004002a8
                     0x000000000000001c 0x000000000000001c  R      0x1
    ->      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
            ...
    ->        1238: 00000000005ea5c0    43 FUNC    GLOBAL DEFAULT   13 _start

Here's what I understand of what's going on here:

  1. it's telling the kernel to run /lib64/ld-linux-x86-64.so.2 to start this program. This is called the dynamic linker and we'll talk about it next
  2. it's specifying an entry point (at 0x5ea5c0, which is where this program's code starts; there's a little sketch below for reading it out of the header)
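
In a 64-bit ELF file the entry point is the 8-byte little-endian integer at offset 24 of the header, so you can read it out yourself with Python's struct module:


    import struct

    with open("/usr/bin/python3.9", "rb") as f:
        header = f.read(64)           # the ELF64 header is 64 bytes

    assert header[:4] == b"\x7fELF"   # check the magic number first
    (entry,) = struct.unpack_from("<Q", header, 24)  # e_entry lives at offset 24
    print(hex(entry))                 # 0x5ea5c0, matching readelf's entry point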

Now let's talk about the dynamic linker.

8: dynamic linking

Okay! We've read the bytes from disk and we've started this "interpreter" thing. What next? Well, if you run strace -o out.strace python3, you'll see a bunch of stuff like this right after the execve system call:


    execve("/usr/bin/python3", ["python3"], 0x560af13472f0 /* 21 vars */) = 0
    brk(NULL)                       = 0xfcc000
    access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
    openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
    fstat(3, {st_mode=S_IFREG|0644, st_size=32091, ...}) = 0
    mmap(NULL, 32091, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f718a1e3000
    close(3)                        = 0
    openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
    read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 l\0\0\0\0\0\0"..., 832) = 832
    fstat(3, {st_mode=S_IFREG|0755, st_size=149520, ...}) = 0
    mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f718a1e1000
    ...
    close(3)                        = 0
    openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3

This all looks a bit intimidating at first, but the part I want you to pay attention to is the openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpthread.so.0", ...) call. This is opening a C threading library called pthread that the Python interpreter needs to run.

See for yourself: If you want to know which libraries a binary needs to load at runtime, you can use ldd. Here's what that looks like for me:


    $ ldd /usr/bin/python3.9
        linux-vdso.so.1 (0x00007ffc2aad7000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2fd6554000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2fd654e000)
        libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f2fd6549000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fd6405000)
        libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fd63d6000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fd63b9000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fd61e3000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2fd6580000)

You can see that the first library listed is /lib/x86_64-linux-gnu/libpthread.so.0, which is why it was loaded first.

on LD_LIBRARY_PATH

I'm honestly still a little confused about dynamic linking. Some things I know:

  • Dynamic linking happens in userspace and the dynamic linker on my system is at /lib64/ld-linux-x86-64.so.2. If you're missing the dynamic linker, you can end up with confusing bugs like this weird "file not found" error
  • The dynamic linker uses the LD_LIBRARY_PATH environment variable to find libraries
  • The dynamic linker will also use the LD_PRELOAD environment variable to override any dynamically linked function you want (you can use this for fun hacks, or to replace your default memory allocator with an alternative one like jemalloc)
  • there are some mprotects in the strace output which are marking the library code as read-only, for security reasons
  • on Mac, it's DYLD_LIBRARY_PATH instead of LD_LIBRARY_PATH

You might be wondering: if dynamic linking happens in userspace, why don't we see a bunch of stat system calls where it's searching through LD_LIBRARY_PATH for the libraries, the way we did when bash was searching the PATH?

That's because ld has a cache in /etc/ld.so.cache, and all of those libraries have already been found in the past. You can see it opening the cache in the strace output: openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3.
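
From Python you can ask where a library would be found with ctypes.util.find_library, which (as far as I understand) consults the same ld.so.cache via ldconfig:


    import ctypes.util

    # where would the dynamic linker find these libraries?
    print(ctypes.util.find_library("pthread"))  # libpthread.so.0
    print(ctypes.util.find_library("c"))        # libc.so.6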

There are still a bunch of system calls after dynamic linking in the full strace output that I still don't really understand (what's prlimit64 doing? where does the locale stuff come in? what's gconv-modules.cache? what's rt_sigaction doing? what's arch_prctl? what's set_tid_address and set_robust_list?). But this feels like a good start.

aside: ldd is actually a simple shell script!

Someone on mastodon pointed out that ldd is actually a shell script that just sets the LD_TRACE_LOADED_OBJECTS=1 environment variable and starts the program. So you can do exactly the same thing like this:


    $ LD_TRACE_LOADED_OBJECTS=1 python3
        linux-vdso.so.1 (0x00007ffe13b0a000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f01a5a47000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f01a5a41000)
        libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f2fd6549000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2fd6405000)
        libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f2fd63d6000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2fd63b9000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2fd61e3000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2fd6580000)

Apparently ld is also a binary you can just run, so /lib64/ld-linux-x86-64.so.2 --list /usr/bin/python3.9 also does the same thing.

9: go to _start

After dynamic linking is done, we go to _start in the Python interpreter. Then it does all the normal Python interpreter things you'd expect.

I'm not going to talk about this because here I'm interested in general facts about how binaries are run on Linux, not the Python interpreter specifically.

10: write a string

We still need to print out "hello world" though. Under the hood, the Python print function calls some function from libc. But which one? Let's find out!

See for yourself: Run ltrace -o out python3 hello.py.


    $ ltrace -o out python3 hello.py
    $ grep hello out
    write(1, "hello world\n", 12) = 12

So it looks like it's calling write.

I honestly am always a little suspicious of ltrace: unlike strace (which I would trust with my life), I'm never totally sure that ltrace is actually reporting library calls accurately. But in this case it seems to be working. And if we look at the CPython source code, it does seem to be calling write() in some places. So I'm willing to believe that.
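
You can make the very same call yourself without going through print at all: os.write is a thin wrapper around the write system call.


    import os

    # write(1, "hello world\n", 12): file descriptor 1 is stdout
    os.write(1, b"hello world\n")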

what's libc?

We just said that Python calls the write function from libc. What's libc? It's the C standard library (we'll call into it directly from Python after this list), and it's responsible for a lot of basic things like:

  • allocating memory with malloc
  • file I/O (opening, closing, reading, and writing files)
  • executing programs (with execvp, like we mentioned before)
  • looking up DNS records with getaddrinfo
  • managing threads with pthread
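
Here's that demo: you can call libc functions from Python directly with ctypes (a fun experiment, not something you'd do in real code):


    import ctypes

    libc = ctypes.CDLL("libc.so.6")    # load libc with the dynamic linker
    print(libc.getpid())               # same answer as os.getpid()
    libc.write(1, b"hello from libc\n", 16)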

Programs don't have to use libc (on Linux, Go famously doesn't use it and calls Linux system calls directly instead), but most other programming languages I use (node, Python, Ruby, Rust) all use libc. I'm not sure about Java.

You can find out if you're using libc by running ldd on your binary: if you see something like libc.so.6, that's libc.

why does libc matter?

You might be wondering: why does it matter that Python calls the libc write and then libc calls the write system call? Why am I making a point of saying that libc is in the middle?

I think in this case it doesn't really matter (AFAIK the write libc function maps pretty directly to the write system call).

But there are different libc implementations, and sometimes they behave differently. The two main ones are glibc (GNU libc) and musl libc.

For example, until recently musl's getaddrinfo didn't support TCP DNS; here's a blog post talking about a bug that that caused.

a little detour into stdout and terminals

In this program, stdout (the 1 file descriptor) is a terminal. And you can do funny things with terminals! Here's one (with a Python version of the first step after the list):

  1. In a terminal, run ls -l /proc/self/fd/1. I get /dev/pts/2
  2. In another terminal window, write echo hello > /dev/pts/2
  3. Go back to the original terminal window. You should see hello printed there!
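
Here's that first step in Python: os.ttyname tells you which terminal device a file descriptor is connected to.


    import os

    print(os.isatty(1))    # True when stdout is a terminal
    print(os.ttyname(1))   # e.g. /dev/pts/2, the same path as ls -l showed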

that's all for now!

Hopefully you have a better idea of how hello world gets printed! I'm going to stop adding more details for now because this is already pretty long, but obviously there's more to say and I might add more if folks chip in with extra details. I'd especially love suggestions for other tools you could use to inspect parts of the process that I haven't explained here.

I'd love to see a Mac version of this

One of my frustrations with Mac OS is that I don't know how to introspect my system on this level: when I print hello world, I can't figure out how to spy on what's going on behind the scenes the way I can on Linux. I'd love to see a really in-depth explainer.

Some Mac equivalents I know about:

  • ldd -> otool -L
  • readelf -> otool
  • supposedly you can use dtruss or dtrace on Mac instead of strace, but I've never been brave enough to turn off System Integrity Protection to get it to work

more reading

Some more links:


via: https://jvns.ca/blog/2023/08/03/behind--hello-world/

Author: Julia Evans  Topic selection: lujun9972  Translator: 译者ID  Proofreader: 校对者ID

This article is an original translation by LCTT, proudly presented by Linux中国 (Linux China)