Interestingly, the mitigation here, mounting user-accessible filesystems nosuid, has become less and less common over the years.
When dinosaurs still roamed the earth, it used to be that all unix installs used separate /, /var, /usr, (/export)/home, ... directories, the rationale being mostly to facilitate remote mounting of shared (/usr) programs or to isolate filesystem corruptions (/ being mostly read-only and containing essential binaries).
But it's quite convenient to have each of them mounted with adequate security- and performance-enhancing mount options (on Linux, for example: /=noatime, /usr=nodev,noatime, /var,/tmp,/home=nodev,nosuid).
I used this scheme for quite a long time and only recently abandoned it here and there, because popular Linux distributions and, e.g., OpenSolaris ZFS-root no longer offer it on their default install media. And of course it wastes a little space, since you have to judge the maximum expected size of each filesystem when installing the machine.
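Roughly, the fstab for that kind of scheme would look something like this (device names, filesystem types and the /tmp-on-tmpfs choice are just placeholders, not from any particular box):

    /dev/sda1  /      ext3   defaults,noatime                0 1
    /dev/sda2  /usr   ext3   defaults,nodev,noatime          0 2
    /dev/sda3  /var   ext3   defaults,nodev,nosuid,noatime   0 2
    /dev/sda5  /home  ext3   defaults,nodev,nosuid,noatime   0 2
    tmpfs      /tmp   tmpfs  nodev,nosuid                    0 0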
I think this change is mostly due to admin time per machine. Even if you use LVM, your admin time per machine is higher if you have lots of separate partitions.
The problem is that you don't really know which applications rely on access times, and since the performance jump between atime<->noatime and atime<->relatime is pretty much the same, I always go for the safer option.
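If you want to compare the two on a live box, you can flip the option without a reboot:

    mount -o remount,relatime /    # keep mostly-accurate access times
    mount -o remount,noatime /     # skip atime updates entirely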
Versions are not everything anymore. Distributions carry so many patches that, unless you compiled glibc yourself (did you?), you'll get centos-libc-2.6, or debian-libc-2.6, or ...
Hard link creation can be restricted in recent Linux kernels via the SECURITY_YAMA_HARDLINKS config option, which seems to be active by default at least on Ubuntu 10.10, so that platform is not vulnerable.
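I'm not sure of the exact sysctl name the Yama patch exposes for this, but you can hunt for it and check whether it's enabled:

    sysctl -a 2>/dev/null | grep -i hardlink   # a value of 1 means hard links to files you can't access are refused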
That's because, to open that file for reading on file descriptor 3, you need read permission on it. On my system, for example, /bin/ping cannot be read by a normal user (permissions 4711) and I get that same error.
You'll have to find another binary that has the SUID bit set and is readable. For example, on my system /bin/mount does the job. Still, in the last step I get the same error as reported by several other users (Inconsistency detected...)
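A quick way to look for readable setuid candidates (GNU find; -readable checks against your own effective permissions):

    find /bin /sbin /usr/bin /usr/sbin -perm -4000 -readable 2>/dev/null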
I went to try this, and then realized that I don't have gcc installed on my production (-ish) boxes.
Is this sufficient?
Can you foresee me needing to install gcc, and to leave it installed? I.e. should I install it to do whatever compiling I need to do, and then remove it?
No, it's not sufficient. Assuming an attacker has the ability to run commands as a user on your machine, they could compile the exploit code on a remote host and copy it in.
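E.g. build it statically on any comparable box and just copy the binary over (hostname and path are made up):

    gcc -static -o exploit exploit.c
    scp exploit user@target:/tmp/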
This exploit works on an up-to-date RHEL5.3 server. Confirmed on a Rackspace Dedicated Hosting system.
NiekvdMaas's exploit code does not work on many systems because /tmp is a separate filesystem from /. Try using a different location for the 'exploit' directory and file.
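You can check whether a candidate location actually sits on the same filesystem as the target binary, e.g.:

    df /tmp /bin                  # different "Mounted on" => cross-device, ln will fail
    stat -c '%d  %n' /tmp /bin    # same device number => same filesystem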
Maybe a bit OT, but I long for the days when advisories were cool and funny, Gobbles-style. Nowadays it's so... sterile and corporate. Still cool on a technological level (sometimes), but the Wild West spirit made the vuln-dev scene much more appealing as a spectator sport.
For what it's worth, I get the error:
ln: failed to create hard link `/tmp/exploit/target' => `/bin/ping': Invalid cross-device link
Is there any other way to do this without ln?
None of my filesystems that contain binaries are writable by non-root users: /tmp is on tmpfs, and / and /home are their own filesystems, so I'd need to be root to create the files on a partition this could work from.
Your /tmp is on a separate file system from /bin. _Hard_ links point to the same inode, so both ends of your link must be on the same file system (as opposed to a soft link, which is a separate inode saying "hey buddy, go over there").
It might work in something like /var/tmp, assuming that's not a symlink to /tmp (as on some systems) or itself a separate file system.
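You can see the shared inode directly (filenames are arbitrary):

    touch /var/tmp/foo
    ln /var/tmp/foo /var/tmp/bar
    ls -li /var/tmp/foo /var/tmp/bar   # both names show the same inode number and a link count of 2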
That's a bit like saying that the defenders of SSH who claimed it's good for security should step up when there's a vulnerability.
I'm not quite sure what benefits dynamic linking is supposed to offer over static by way of security, other than that when you upgrade a library due to a security weakness you don't have to relink everything against it.
Static linking would protect against attacks on linkers but dynamic linking offers so many non-security benefits that it's hard to ignore.
Having said that, I always keep a statically compiled busybox on any Linux host I run in case something goes wrong with glibc. Too many bad memories from broken RedHat upgrades from years back.
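For anyone who wants to do the same: busybox builds as a single static binary if you flip its CONFIG_STATIC option, roughly like this:

    make defconfig
    sed -i 's/^# CONFIG_STATIC is not set/CONFIG_STATIC=y/' .config
    make
    file busybox       # should report "statically linked"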
> Having said that, I always keep a statically compiled busybox on any Linux host I run in case something goes wrong with glibc
I just keep a live CD/USB on hand. A lot easier, and it gives the benefit of a graphical boot that can even access the internet to download further fixes.
About a decade ago there was a security bug detected in zlib. When we grepped through the Windows code, we discovered 46 different copies of it in the tree. It turns out there were actually 47 -- somebody had copied the code and renamed everything.
False dichotomy. Among the many things that are wrong with this statement, the most important is that usually, when upgrading an .so, you must rebuild the packages that depend on it. Just ask any Gentoo user.
That is only true if the application binary interface (ABI) has changed. Usually this means the developer will bump the version number associated with the library.
Botan, for example, is a cryptographic library at version 1.8.10, but its library version is still 1.8.2, which means that if an application was compiled against version 1.8.3 it will work just as well with 1.8.10 as it would with 1.8.2, as none of the binary interface has changed.
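The relevant bit is the soname baked into the library and into the binaries linked against it; readelf shows both (the paths here are just hypothetical examples):

    readelf -d /usr/lib/libfoo.so.1.8.2 | grep SONAME   # what the library calls itself
    readelf -d ./myapp | grep NEEDED                     # which sonames the binary expects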
This is how it works in theory. It seems that many fixes would involve breaking things. And how do you enforce this behavior of bumping the version? Do all upstream developers agree to your interpretation of what version numbers mean?
The upstream bumps the version when the ABI breaks. This is a social convention; there is no requirement for them to do so, but if they don't, package managers certainly will.
Also, many fixes DON'T break the ABI. Generally it is a fix within a function, so the function's return type, name and parameters stay the same, and as such the ABI is not broken. That is all that matters: you can change the function all you want so long as it still returns the same thing as before.
Yes, it works very well in practice, and it makes development much nicer, as the libraries around my product can easily be updated without me having to update anything.
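One crude way to check that an update didn't change the exported interface is to diff the dynamic symbol tables of the old and new library (paths made up; this only catches added or removed symbols, not changed struct layouts):

    nm -D --defined-only /usr/lib/libfoo.so.1.8.2  | awk '{print $3}' | sort > old.syms
    nm -D --defined-only /usr/lib/libfoo.so.1.8.10 | awk '{print $3}' | sort > new.syms
    diff old.syms new.syms    # nothing removed => existing binaries keep working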
Really? Because I update libraries all the time on CentOS, Debian, etc. and very rarely does that require an update of all the packages that depend on it. With glibc, you occasionally have to restart services (cron, ssh, etc), but that's about it.
Not sure what the Gentoo guys are doing differently.
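On the restart point: a handy way to find the processes still holding the old, now-deleted copy of a library after an upgrade is lsof, e.g.:

    lsof -n 2>/dev/null | grep 'lib.*deleted'   # anything listed is still running the old code and needs a restart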
Perhaps upstream is doing some magic, like applying patches to the distro's official version and making sure it's compatible before distributing a binary.