Building a Cross Compiler Toolchain
A couple of years ago, when I was still in school, I was bored enough to attempt to build a cross compilation toolchain myself and bootstrap a simple, working GNU/Linux system running inside a VM.
I basically ended up copying the LFS instructions. It turned out to be very tedious and took me multiple attempts over several days, but in the end, most of it mostly worked, most of the time.
Fast forward to 2016: At my workplace, I was asked to rewrite the mtd-utils build system from a custom, broken Makefile to autotools, so that there was something that actually worked and could easily be integrated into existing cross build systems like buildroot, or our own in-house crosstool-NG based system.
Anyway, after some in-depth learning about the internals of the autotools, I thought, "hmm... GCC and binutils, like all GNU packages, use an autotools build system. I know how to get autotools stuff running, so it can't be that hard to bootstrap a cross toolchain" and decided to give it a try over the weekend.
Turns out, it isn't. GCC 6.2.0 requires amazingly few workarounds and I managed to get a GCC+musl cross toolchain running by around two in the morning on Saturday.
Nevertheless, some friends asked me to write about it and I figured that might be a good idea, since most instructions I found on the internet were useless. Many didn't work at all, some only worked for a specific target, but most of them were full of magic (i.e. "just run this, I have no idea why it works").
The LFS and CLFS books were very helpful and mostly worked, provided you already knew exactly what you were doing. At the time I read them, they were also full of "just apply this patch, don't ask" magic and generally lacked explanations, expecting you to mindlessly copy shell commands.
Maybe there will be a follow-up on how to bootstrap a small Linux+Busybox system.
NOTE: I'm still working on the writeup for the ARM target as well as on how to use your existing libc. I just didn't get around to it yet and decided to put the unfinished article up as is, because it has been sitting around for so long that I was afraid I might otherwise forget about it completely.
I'm going to discuss two different target architectures: 32 bit ARM and 32 bit x86. The reason for this is that I have easy access to actual hardware for testing, and the two require different workarounds to build the toolchain.
We'll build a simple, straightforward cross toolchain. No Canadian cross or similarly complex setups.
The entire process consists of the following steps: extracting the kernel headers, compiling cross binutils, building a first pass GCC, cross compiling the C standard library and finally building the second pass GCC.
The main reason for compiling GCC twice is the inter-dependency between the compiler and the standard library.
First of all, the GCC build system needs to know what kind of C standard library we are using and where to find it. Not only does the compiler need to know what to link programs against, it also links executable programs against bootstrap object code provided by the libc that does stack setup, calls the main() function and calls exit(3) when main() returns. The libc also provides the dynamic linker that the compiler writes into the ELF interpreter field of dynamically linked programs.
Second, there is libgcc. libgcc contains low level platform specific helpers (like exception handling, soft float code, etc.) and is automatically linked to programs built with GCC. Libgcc source code comes with GCC and is compiled by the GCC build system specifically for our cross compiler & libc combination.
However, some functions in the libgcc need functions from the C standard library. Some larger libc implementations (like glibc) directly use utility functions from libgcc for e.g. stack unwinding (libgcc_s).
After building a GCC cross compiler, we need to cross compile libgcc, so we can then cross compile other stuff that needs libgcc like the libc. But we need an already cross compiled libc in the first place for compiling libgcc.
The solution is to build a minimalist GCC. With that we compile a minimal libgcc that has lots of features disabled and uses internal stubs for standard C functions instead of linking against libc.
We can then cross compile the libc and let the compiler link it against the minimal libgcc.
With that, we can compile the full GCC, pointing it at the C standard library for the target system and build a dynamically linked, fully featured libgcc along with it. We can simply install it over the existing GCC and libgcc in the toolchain directory.
If you already have an existing distro running on the target hardware, you already have a libc and libgcc for your target that you want to copy over and link against. In that case, you can skip the first pass (more on that later), but that would be kind of boring :-)
The following source packages are required for building the toolchain. The links below point to the exact versions that I used.
The following things are required for compiling GCC. We only need to point the GCC build system to the locations of the sources and it takes care of compiling them along with the rest.
For compiling all of this you will need a C and C++ compiler, GNU make, makeinfo, ncurses and bash.
In case you're wondering: you need the C++ compiler to build GCC. The GCC code base mainly uses C99, but with some additional C++ features. makeinfo is used by the GNU utilities to generate info pages from texinfo. ncurses is mainly needed by the kernel build system.
I'm not entirely sure on the list as I normally work on systems with tons of development tools and libraries already installed, so I just used common sense, and also took a look at the configure output of the packages we are going to build. I also pulled some things from the README file of our in-house distro build system.
I put bash on the list simply because I'm too lazy to rid my shell one-liners of bash-isms, so we are going to work in bash.
To keep things clean, we start out in an empty, fresh working directory in which we want to piece our toolchain together.
At first, we set a few handy shell variables that will store the configuration of our toolchain:
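A minimal sketch of what those variables might look like for the 32 bit x86 target. The exact values are illustrative; for 32 bit ARM the triplet, CPU and ARCH values would differ accordingly:

```sh
# Target configuration for the 32 bit x86 example.
TARGET="i686-linux-musl"   # GNU target triplet: CPU, kernel, userland
CPU="i686"                 # CPU name as the GNU build system calls it
ARCH="x86"                 # CPU name as the kernel build system calls it
```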
The TARGET variable holds the target triplet of our system. It describes the target platform by pasting together CPU architecture, kernel and user land. This string is not arbitrary! The GNU build system parses this string to figure out what it is building for! If you compile a musl toolchain, the last part has to be musl! Otherwise, the GCC build system will assume a different libc provider and the second pass GCC will blow up in your face!
We also need the triplet for the local machine that we are going to build things on. We are going to use this later on when building GCC:
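A sketch of the HOST definition, pieced together from `uname -m` and the OSTYPE shell builtin (on a typical x86_64 GNU/Linux system this produces something like "x86_64-linux-gnu"):

```sh
# OSTYPE is a bash builtin, e.g. "linux-gnu"
HOST="$(uname -m)-$OSTYPE"
```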
The OSTYPE is a shell builtin. Some guides suggest using another shell builtin MACHTYPE instead of the line above, however this delivered inconsistent results. On CentOS 7 I got this:
On Arch Linux, however it returned a different result, namely the same result that uname -m returns on both systems:
To get a similar result on Arch I had to piece the string together like this:
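Presumably the composition looked something like the following (this is a reconstruction; since MACHTYPE on Arch only contains the machine name, the OS part has to be glued on by hand):

```sh
# On Arch, MACHTYPE is just the machine name (like uname -m),
# so the OS part has to be appended manually:
HOST="$MACHTYPE-$OSTYPE"
```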
This however, produces garbage on the CentOS machine, so I used the HOST as defined above.
The CPU and ARCH variables both hold the target CPU architecture. The latter is used by the kernel build system, the former by the GNU build system, as the two can't agree on a common naming scheme.
We will store the absolute path to the working directory inside a shell variable called BUILDROOT and create a few directories to organize our stuff in:
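The directory setup might look like this (the TARGET fallback is only there so the snippet is self-contained; in the article's flow the variable is already set):

```sh
TARGET="${TARGET:-i686-linux-musl}"   # set earlier
BUILDROOT="$(pwd)"

mkdir -p "$BUILDROOT/download" "$BUILDROOT/src" "$BUILDROOT/build"
mkdir -p "$BUILDROOT/toolchain/bin" "$BUILDROOT/toolchain/$TARGET"
</```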
I stored the downloaded packages in the download directory and extracted them to a directory called src.
We will later build packages outside the source tree (GCC even requires that nowadays), inside a sub directory of build.
Our final toolchain will end up in a directory called toolchain. We already create the sub directories bin and $TARGET in advance for the kernel and binutils build systems. The former directory will hold the binaries of our toolchain with the target prefix, the latter will hold headers, libraries and binaries without a prefix.
We store the toolchain location inside another shell variable that I called TCDIR and prepend the executable path of our toolchain to the PATH variable:
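A sketch of those two lines (again with a self-contained fallback for BUILDROOT):

```sh
BUILDROOT="${BUILDROOT:-$(pwd)}"      # set earlier
TCDIR="$BUILDROOT/toolchain"
export PATH="$TCDIR/bin:$PATH"
```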
Right now, you should have a directory tree that looks something like this:
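For the i686 example target, the layout would look roughly like this (package tar-balls in download, extracted sources in src):

```
$BUILDROOT
├── build/
├── download/
├── src/
└── toolchain/
    ├── bin/
    └── i686-linux-musl/
```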
I previously mentioned that we only need to "point" the GCC build system to the locations of its dependency libraries. To simplify things, I created a bunch of symlinks inside the GCC source dir for the dependencies:
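The dependencies in question are GMP, MPFR, MPC and ISL. The symlinks might be created like this; the version numbers are placeholders for whatever you downloaded, and the unversioned link names are what the GCC build system looks for:

```sh
cd "$BUILDROOT/src/gcc-6.2.0"
ln -s ../gmp-6.1.1 gmp      # version numbers are placeholders
ln -s ../mpfr-3.1.4 mpfr
ln -s ../mpc-1.0.3 mpc
ln -s ../isl-0.16.1 isl
cd "$BUILDROOT"
```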
You could also install the libraries through a package management system and let the GCC build system use them instead. However, some are closely tied to GCC, and the GCC build system tends to be quite fragile, so I prefer building them along for the local GCC build.
Theoretically you could also build the libraries separately beforehand and then just point the GCC configure script to their location. But if you inspect the configure output from the GCC build system, you can see that it sets quite a number of specific options depending on the target, so it's probably easiest to just create the symlinks and let the GCC build system do its thing.
Extracting the kernel headers
We create a build directory inside $BUILDROOT/build/linux. Building the kernel outside its source tree works a bit differently compared to autotools based builds.
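A sketch of both variants, assuming a placeholder kernel version and the ARCH variable from earlier:

```sh
mkdir -p "$BUILDROOT/build/linux"
cd "$BUILDROOT/src/linux-4.9"    # version is a placeholder

# Option 1: the O= Makefile variable
make O="$BUILDROOT/build/linux" ARCH="$ARCH" headers_check

# Option 2: the KBUILD_OUTPUT environment variable
KBUILD_OUTPUT="$BUILDROOT/build/linux" make ARCH="$ARCH" headers_check
```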
According to the Makefile in the Linux source, you can either specify an environment variable called KBUILD_OUTPUT, or set a Makefile variable called O, where the latter overrides the environment variable. The snippet above shows both ways.
The headers_check target runs a few trivial sanity checks on the headers we are going to install. It checks whether a header includes something nonexistent, whether the declarations inside the headers are sane and whether kernel internals are leaked into user space. For stock kernel tar-balls, this shouldn't be necessary, but it can come in handy when working with kernel git trees, potentially with local modifications.
Lastly (before switching back to the root directory), we actually install the kernel headers into e.g. "toolchain/i686-linux-musl/include" where the libc later expects them to be.
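The install step might look like this; headers_install places the headers into an include/ directory below INSTALL_HDR_PATH, which gives us the location the libc expects:

```sh
make O="$BUILDROOT/build/linux" ARCH="$ARCH" \
     INSTALL_HDR_PATH="$TCDIR/$TARGET" headers_install
cd "$BUILDROOT"
```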
Since I've seen the question in a few forums: it doesn't matter if the kernel version exactly matches the one running on your target system. The kernel system call ABI is stable, so you can use an older kernel. Only if you use a much newer kernel might the libc end up exposing or using features that your kernel does not yet support.
If you have some embedded board with a heavily modified vendor kernel and no upstream support, you are pretty much on your own. If in addition to that, the vendor breaks the ABI take the board and burn it (preferably outside; don't inhale the fumes).
Compiling cross binutils
We will compile binutils outside the source tree, inside the directory build/binutils. So first, we create the build directory and switch into it. To keep things clean, we use a shell variable srcdir to remember where we kept the binutils source. A pattern that we will repeat later:
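Sketched out, with the binutils version as a placeholder:

```sh
mkdir -p "$BUILDROOT/build/binutils"
cd "$BUILDROOT/build/binutils"
srcdir="$BUILDROOT/src/binutils-2.27"   # version is a placeholder
```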
From the binutils build directory we run the configure script:
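The invocation might look like this, using only the options discussed below:

```sh
$srcdir/configure --prefix="$TCDIR" --target="$TARGET" \
    --with-sysroot="$TCDIR/$TARGET" \
    --disable-nls --disable-multilib
```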
In an autotools build system, there are three different system triplets at work: --build (the system we are compiling on), --host (the system the compiled programs will run on) and --target (the system the compiled programs generate output for).
We only set the --target option to tell the build system what target the assembler, linker and other tools should generate output for. We don't explicitly set the other options because the binutils build system is somewhat more robust than the GCC one and can figure out that it is being built for the local machine.
If we were doing a Canadian cross, we would set the --host option to the triplet of the existing cross toolchain in order to build binutils that run on a machine different from ours and generate output for yet another one.
The --prefix option specifies where to install files to, together with the make variable DESTDIR. When you run make DESTDIR=xy install on an automake generated makefile, it will install binaries to xy/prefix/bin, libraries to xy/prefix/lib, headers to xy/prefix/include and so on. The file type specific suffix can of course also be configured, but that is not really of interest right now.
The default prefix is /usr/local/. We set it to the top level directory of our toolchain (remember, TCDIR=$BUILDROOT/toolchain).
The --with-sysroot option tells the build system that our system's root directory is not '/' but actually '$TCDIR/$TARGET' (e.g. "toolchain/i686-linux-musl") and it should look for libraries and headers over there.
We disable the nls feature (native language support, i.e. i18n) mainly because we don't need it.
Some architectures support executing code for other, related architectures (e.g. an x86_64 machine can run x86 code). On GNU/Linux distributions that support that, you typically have different versions of the same libraries (e.g. in lib/ and lib32/ directories) with programs for different architectures being linked to the appropriate libraries. We are only interested in a single architecture and don't need that, so we set --disable-multilib.
Now we can compile and install binutils:
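A sketch of the build and install steps described below:

```sh
make configure-host
make
make install
cd "$BUILDROOT"
```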
The first make target, configure-host, is binutils specific and simply checks the system it is being built on, i.e. your local machine, making sure it has all the tools needed for compiling. If it reports a problem, go fix it before continuing.
We then go on to build the binutils. You may want to speed up compilation by running a parallel build with make -j NUMBER-OF-PROCESSES.
Lastly, we run make install to install the binutils in the configured toolchain directory and go back to our root directory.
First pass GCC
Similar to above, we create a directory for building the compiler, change into it and store the source location in a variable:
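Sketched out:

```sh
mkdir -p "$BUILDROOT/build/gcc-1"
cd "$BUILDROOT/build/gcc-1"
srcdir="$BUILDROOT/src/gcc-6.2.0"
```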
Notice how the build directory is called gcc-1. For the second pass, we will later create a different build directory. Not only does this out of tree build allow us to cleanly start afresh (because the source is left untouched), but current versions of GCC will flat out refuse to build inside the source tree.
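A sketch of the first pass configure invocation. All the options are explained below; the exact set of disabled helper libraries is an illustrative selection (dynamic linking, threads and various optimization support libraries switched off):

```sh
$srcdir/configure --prefix="$TCDIR" --target="$TARGET" \
    --build="$HOST" --host="$HOST" \
    --with-sysroot="$TCDIR/$TARGET" \
    --with-arch="$CPU" \
    --without-headers --with-newlib \
    --disable-shared --disable-threads \
    --disable-nls --disable-multilib \
    --disable-libssp --disable-libgomp \
    --disable-libatomic --disable-libquadmath \
    --enable-languages=c
```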
The --prefix, --target and --with-sysroot work just like above for binutils.
This time we explicitly specify --build (i.e. the system that we are going to compile GCC on) and --host (i.e. the system that the GCC will run on). In our case those are the same. We use the machine triplet that we pieced together earlier. It might be generally wise to always set those, but here I only set them for GCC, because of my experience with the fragile GCC build system. And yes, I have seen older versions of GCC throw a fit or assume complete nonsense if you don't explicitly specify those.
The option --with-arch gives the build system slightly more specific information about the target processor architecture.
We also disable a bunch of stuff we don't need. I already explained nls and multilib above. We also disable a bunch of optimization stuff and helper libraries. Among other things, we also disable support for dynamic linking and threads.
The option --without-headers tells the build system that we don't have the headers for the libc yet and that it should use minimal stubs where it needs them. The --with-newlib option is more of a hack: it claims that we are going to use newlib as the C standard library. This isn't actually true, but it forces the build system to disable some libgcc features that depend on the libc.
The option --enable-languages accepts a comma separated list of languages that we want to build compilers for. For now, we only need a C compiler for compiling the libc.
If you are interested: Here is a detailed list of all GCC configure options.
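The build and install steps discussed below might look like this:

```sh
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc
cd "$BUILDROOT"
```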
We explicitly specify the make targets for GCC and cross-compiled libgcc for our target. We are not interested in anything else.
For the first make, you really want to specify a -j NUM-PROCESSES option here. Even the first pass GCC we are building here will take a while to compile on an ordinary desktop machine.
C standard library
We create our build directory and change there:
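Sketched out:

```sh
mkdir -p "$BUILDROOT/build/musl"
cd "$BUILDROOT/build/musl"
```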
Musl is quite easy to build but requires some special handling, because it doesn't use autotools. The configure script is actually a hand written shell script that tries to emulate some of the typical autotools handling:
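A sketch of the configure step, with the musl version as a placeholder. Setting the prefix to '/' means that, combined with the DESTDIR install below, headers and libraries land directly in $TCDIR/$TARGET/include and $TCDIR/$TARGET/lib:

```sh
srcdir="$BUILDROOT/src/musl-1.1.15"   # version is a placeholder
CC="$TARGET-gcc" $srcdir/configure --prefix=/ --target="$TARGET"
```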
We override the shell variable CC to point to the cross compiler that we just built. Remember, we added the /bin of the toolchain directory to our PATH.
We do the same thing for actually compiling musl and we explicitly set the DESTDIR variable for installing:
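Sketched out:

```sh
make CC="$TARGET-gcc"
make CC="$TARGET-gcc" DESTDIR="$TCDIR/$TARGET" install
cd "$BUILDROOT"
```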
Second pass GCC
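A sketch of the second pass setup and configure invocation. All the options are explained below; the overall shape mirrors the first pass:

```sh
mkdir -p "$BUILDROOT/build/gcc-2"
cd "$BUILDROOT/build/gcc-2"
srcdir="$BUILDROOT/src/gcc-6.2.0"

$srcdir/configure --prefix="$TCDIR" --target="$TARGET" \
    --build="$HOST" --host="$HOST" \
    --with-sysroot="$TCDIR/$TARGET" \
    --with-arch="$CPU" \
    --with-native-system-header-dir=/include \
    --enable-languages=c,c++ \
    --enable-c99 --enable-long-long \
    --disable-libmpx --disable-libssp --disable-libsanitizer \
    --disable-nls --disable-multilib
```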
As you can see, we are using a different build directory for the second pass gcc.
Most of the options should be familiar already.
For the second pass, we also build a C++ compiler. The options --enable-c99 and --enable-long-long are C++ specific. When our final compiler runs in C++98 mode, we allow it to expose C99 functions from the libc through a GNU extension. We also allow it to support the long long data type standardized in C99.
You may wonder why we didn't have to build a libstdc++ between the first and second pass, like the libc. The source code for libstdc++ comes with the G++ compiler and is built automatically like libgcc. For one, it is really just a library that adds C++ stuff on top of the libc, and the compiler doesn't depend on it. Furthermore, C++ does not have a standard ABI; it is all compiler and OS specific. So compiler vendors typically ship their own libstdc++ implementation with the compiler.
The options --disable-libmpx and --disable-libssp are special hacks that we need for building an x86 cross compiler on AMD64. Those two libraries are used in code generation for utilizing some 64 bit instruction set features. The GCC build system is smart enough not to compile those libraries for the x86 target (because it simply does not have those CPU features), but for some reason tries to link the final compiler against them, producing a linking error. Disabling those libraries altogether stops that from happening.
We --disable-libsanitizer because it simply won't build for musl. I tried fixing it, but it assumes too much about the nonstandard internals of the libc. A quick Google search reveals that it has lots of similar issues with all kinds of libc & kernel combinations, so even if I fixed it on my system, you may run into other problems on your system or with different versions of packages. It even has different problems with different versions of glibc. Projects like buildroot simply disable it when using musl. It "only" provides a static code analysis plugin for the C++ compiler.
The option --with-native-system-header-dir is of special interest for our cross compiler. Since we pointed the root directory to $TCDIR/$TARGET, the compiler would look for headers in $TCDIR/$TARGET/usr/include, but we didn't install them to /usr/include, we installed them to $TCDIR/$TARGET/include, so we have to tell the build system that it should look in /include (relative to the root directory) instead.
This time, we are going to build and install everything. You really want to do a parallel build here. On an ordinary desktop machine, this is going to take some time. You might want to go for a walk, watch an episode of Columbo or do whatever while this builds. If you are using a laptop or similar machine with thermal issues, you might want to open a window (assuming it is cold outside).
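Sketched out (the -j value is just a common choice for a parallel build):

```sh
make -j "$(nproc)"
make install
cd "$BUILDROOT"
```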
Testing the Toolchain
We quickly write our average hello world program into a file called test.c:
We can now use our cross compiler to compile this C file:
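Sketched out, using the prefixed compiler from our toolchain's bin directory:

```sh
$TARGET-gcc test.c
```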
Running the file utility on the resulting a.out will tell us that it has been properly compiled and linked for our target machine:
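For the i686+musl target, the output would look roughly like the comment below (the exact wording varies between file versions):

```sh
file a.out
# a.out: ELF 32-bit LSB executable, Intel 80386, dynamically linked,
#        interpreter /lib/ld-musl-i386.so.1, not stripped
```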
Of course, you won't be able to run the program on your build system, except maybe for the x86 version which will run on x86_64 if you have a 32 bit musl installed or if you compile it completely statically linked:
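The statically linked variant might be built and run like this:

```sh
$TARGET-gcc -static test.c
./a.out
```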
Cross compiling mtd-utils