Is there a tool that takes an executable, collects all the required .so files and produces either a static executable, or a package that runs everywhere?
There are things like this.
The things I know of and can think of off the top of my head are:
1. appimage https://appimage.org/
2. nix-bundle https://github.com/nix-community/nix-bundle
3. guix via guix pack
4. A small collection of random small projects that hardly anyone uses which do this for Docker (e.g. https://github.com/NilsIrl/dockerc)
5. A Docker image (a package that runs everywhere, assuming a Docker runtime is available)
6. https://flatpak.org/
7. https://en.wikipedia.org/wiki/Snap_(software)
AppImage is the closest to what you want I think.
AppImage looks like what I need, thanks.
I wonder though, if I package say a .so file from nVidia, is that allowed by the license?
No, that's a copyright violation, and it won't run on AMD or Intel GPUs, or kernels with a different Nvidia driver version.
Don't forget: an AppImage won't work if you package something built against glibc but run it on a musl/uclibc system.
Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or may require explicit approval.
You generally still have to abide by license obligations for OSS too, e.g. the GPL.
To be specific about the example, Nvidia has historically been quite restrictive here (redistribution only on approval). The firmware has only recently been opened up a bit, and the drivers continue to be an issue, IIRC.
I wish AppImage were slightly more user friendly and didn't require the user to specifically make it executable.
We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double click on the tar to extract it, then double click again on the actual AppImage.
Been doing it this way for years now, so it's well battle-tested.
Ermine: https://www.magicermine.com/
It works surprisingly well, but their pricing is hidden; the last time I contacted them as a student, it was upwards of $350/year.
You can "package" all .so files you need into one file, there are many tools which do this (like a zip file).
But you can't take .so files and make one "static" binary out of them.
Well, not a static binary in the sense that's commonly meant when speaking about static linking. But you can pack .so files into the executable as binary data and then dlopen them from memory at runtime (see the sketch below).
I don't think you can truly link shared objects into a static binary: you'd have to patch every place where the code reads the PLT/GOT (which can be arbitrarily mangled by the optimizer) and turn those reads back into relocations for the linker to resolve.
You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object (e.g. -Wl,-rpath,'$ORIGIN/lib' at link time, or patchelf --set-rpath after the fact); that makes it relatively easy to bundle everything but libc with your binary.
edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.
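To sketch what "pack .so files into the executable and dlopen them" looks like in practice: you can't dlopen a raw memory range directly, but you can write the embedded blob to an anonymous memfd and dlopen it through /proc/self/fd. A rough sketch, assuming glibc >= 2.27 for memfd_create; the embedded_so symbols are hypothetical stand-ins for whatever objcopy or an .incbin stanza would produce:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical symbols: in a real build these would come from
       objcopy or an assembler .incbin stanza, not from this file. */
    extern const unsigned char embedded_so[];
    extern const unsigned long embedded_so_len;

    static void *load_embedded(void) {
        /* Anonymous in-memory file; nothing touches the filesystem. */
        int fd = memfd_create("embedded.so", 0);
        if (fd < 0)
            return NULL;
        if (write(fd, embedded_so, embedded_so_len) != (ssize_t)embedded_so_len) {
            close(fd);
            return NULL;
        }
        /* ld.so is happy to load a library via the /proc fd path. */
        char path[64];
        snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
        void *handle = dlopen(path, RTLD_NOW);
        close(fd);
        if (!handle)
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return handle;
    }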
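And the "making ld.so prefer the bundled libraries" part is often just a tiny launcher along these lines (a sketch; the ./lib and ./app.real paths are hypothetical, and a real launcher would resolve them relative to /proc/self/exe instead of the current directory):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        (void)argc;
        /* Make ld.so search the bundled directory first for every
           library lookup in the child process. */
        setenv("LD_LIBRARY_PATH", "./lib", 1);
        execv("./app.real", argv); /* only returns on failure */
        perror("execv");
        return 127;
    }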
AppImage comes close to fulfilling this need:
https://appimage.github.io/appimagetool/
Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful.
It's funny how people insist on wanting to link everything statically when shared libraries were specifically designed to be a better alternative.
Even worse are containers, which have the disadvantages of both.
Dynamic linking exists to make a specific set of tradeoffs. Neither better nor worse than static linking in the general sense.
It's easier to distribute software fully self-contained, if you ignore the pain of statically linking everything together :)
I'd never heard of detour. That's a pretty cool hack.
They were prominent in Windows game hacking around 2005; they made hooking into game code much easier than before.
That seems mostly useful for proprietary programs. I don't like it.
Yeah, in my 20 years of using and developing on GNU/Linux the only binary compatibility issues I experienced that I can think of now were related to either Adobe Flash, Adobe Reader or games.
Adobe stuff is of the kind that you'd prefer to not exist at all rather than have it fixed (and today you largely can pretend that it never existed already), and the situation for games has been pretty much fixed by Steam runtimes.
It's fine that some people care about it and some solutions are really clever, but it just doesn't seem to be an actual issue you stumble on in practice much.
Why? FOSS software also benefits from less dependency hell.
For distro-packaged FOSS, binary compatibility isn't really a problem. Distributions like Debian already resolve dependencies by building from source and keeping a coherent set of libraries. Security fixes and updates propagate naturally.
Binary compatibility solutions mostly target cases where rebuilding isn't possible, typically closed source software. Freezing and bundling software dependencies ultimately creates dependency hell rather than avoiding it.
This seems interesting even regardless of Go. Is it realistic to create an executable which would work on very different kinds of Linux distros, e.g. both 32-bit and 64-bit? Or maybe some general framework/library for building an arbitrary program at least for "any libc"?
Cosmopolitan goes one further: [binaries] that run natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS, on AMD64 and ARM64.
https://justine.lol/cosmopolitan/
>Linux
if you configure binfmt_misc
>Windows
if you disable Windows Defender
>OpenBSD
only older versions
Yeah while APE is a technically impressive trick, these issues far outweigh the minor convenience of having a single binary.
For most cases, a single Windows exe that targets the oldest version you want to support, plus a single glibc binary that dynamically links against the oldest glibc version you want to support, and so on, is still the best option.
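For the glibc side of that, one common trick is pinning symbols to older version nodes, so a binary built on a new distro still runs on older ones. A sketch, assuming an x86-64 target where memcpy gained a GLIBC_2.14 version and the old GLIBC_2.2.5 node still exists:

    #include <string.h>

    /* Redirect our references from the default memcpy@GLIBC_2.14 to the
       older version node, so the resulting binary doesn't require
       glibc >= 2.14. Version names are x86-64 specific. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[8];
        memcpy(dst, "old abi", 8);
        return dst[0] == 'o' ? 0 : 1;
    }

Doing that for every versioned symbol gets tedious fast, which is why most people instead just build in a container or chroot carrying the oldest glibc they still want to support.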
Clearly a joke if it uses the .lol tld.
It's his personal website lol.
Justine identifies as a woman.
"identifies as" is an unnecessarily dismissive choice of words. She is a woman.
AppImage exists; it packs Linux applications into a single executable file that you just download and open. It works on most Linux distros.
I vaguely remember that AppImage-based programs would fail for me because of FUSE and glibc symbol version incompatibilities.
I gave up on them afterwards. If I need to tweak dependencies, I might as well deal with the package manager of my distro.
Yup. Just compile it as a static executable. Static binaries are very undervalued, IMO.
The "just" is doing a lot of heavylifting here (as detailed in the article), especially for anything that's not a trivial cmdline tool.
In my experience it tends to be an issue caused by optimizations in legacy code that relied on dlopen to implement a plugin system, or to speed up startup, since you could lazy-load said plugins on demand and start faster.
If you forgo the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?
It becomes tricky when you need to use system libraries like X11 or GL/Vulkan, so you need the 'hacks' described in the article to work around that. The problem is that those system libraries then bring a dynamically linked glibc into the process, so suddenly you have two C stdlibs, and the question is whether this works just fine or causes subtle breakage under the hood (this is e.g. the reason why statically linked musl doesn't support dlopen).
E.g. in my experience: command line tools are fine to link statically with musl, but as soon as you need a window and 3D rendering, it's not worth the hassle.
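A condensed sketch of that kind of hack: link everything you can statically, but pull the system's GL driver in at runtime with dlopen/dlsym. This assumes the binary itself still uses a dynamic glibc; with a fully static musl binary there's no working dlopen, which is exactly the limitation above. The GL names are from the real API; everything else here is minimal:

    #include <dlfcn.h>
    #include <stdio.h>

    /* glGetString(GL_VERSION) from the real GL API; the typedef is ours. */
    typedef const unsigned char *(*glGetString_fn)(unsigned int);
    #define GL_VERSION 0x1F02

    int main(void) {
        /* Use the system's driver, whatever vendor it comes from. */
        void *libgl = dlopen("libGL.so.1", RTLD_NOW | RTLD_GLOBAL);
        if (!libgl) {
            fprintf(stderr, "no libGL: %s\n", dlerror());
            return 1;
        }
        glGetString_fn glGetString = (glGetString_fn)dlsym(libgl, "glGetString");
        /* Note: glGetString only returns real data once a GL context is
           current; creating one via GLX/EGL is omitted here. */
        printf("resolved glGetString at %p\n", (void *)glGetString);
        dlclose(libgl);
        return 0;
    }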
Ack. I went down that rabbit hole to "just" build a static Python: https://beza1e1.tuxen.de/python_bazel.html
We had a time when static binaries were pretty much the only thing available.
Here's an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.
Got to put that RAM to use.
I've been statically linking my executables for years. The downside, that you might end up with an outdated library, is no match for the upside: just take the binary and run it. As long as you're the only user of the system and the code is your own, you're going to be just fine.
As TFA points out at the beginning, it's not so simple if you want to use the GPU.
Have fun statically linking Nvidia's proprietary OpenGL drivers.