Tags: security

Fedora 10 a little bit more secure

Fedora 10 comes with filesystem capability support. Unfortunately it is not used by default in the packages which could take advantage of it. I think the excuse is that there are people who build their own kernels and disable it. That's nonsense, since there are many other options we rely on which can also be compiled out.

Anyway, you can do the following by hand. Unfortunately you have to redo it every time the program is updated.

sudo chmod u-s /bin/ping
sudo /usr/sbin/setcap cap_net_raw=ep /bin/ping
sudo chmod u-s /bin/ping6
sudo /usr/sbin/setcap cap_net_raw=ep /bin/ping6

Voilà, ping and ping6 are no longer SUID binaries. Note that ls still signals (at least when you're using --color) that there is something special about the file, namely, that it has filesystem attributes.
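To double-check the result, the SUID bit can be inspected programmatically. Here is a small sketch (helper names are mine, and the capability check via the security.capability xattr is Linux-only):

```python
import os
import stat

def is_setuid(path):
    """True if the file has the SUID bit set."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

def has_file_caps(path):
    """True if the file carries a capability xattr (Linux only)."""
    try:
        os.getxattr(path, "security.capability")
        return True
    except OSError:
        return False
```

After the two commands above, is_setuid("/bin/ping") should report False while has_file_caps("/bin/ping") reports True.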

These are two easy cases. Other SUID programs need some research to determine whether they can use filesystem capabilities as well and, if so, which capabilities they need.

SHA for crypt

Just a short note: I added SHA support to the Unix crypt implementation in glibc. The reason for all this (including replies to the extended "NIH" complaints) can be found here.
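The actual sha-crypt scheme in glibc involves a salt, a configurable number of rounds, and a special base64 encoding; the following is only a rough illustration of the general idea (salted, iterated SHA-512), with a made-up "$toy6$" prefix so nobody mistakes it for the real format:

```python
import hashlib
import os

def toy_sha512_hash(password, salt=None, rounds=5000):
    """Illustrative salted, iterated SHA-512 -- NOT glibc's sha-crypt."""
    if salt is None:
        salt = os.urandom(8).hex()
    digest = (salt + password).encode()
    for _ in range(rounds):
        digest = hashlib.sha512(digest).digest()
    return "$toy6$%s$%s" % (salt, digest.hex())

def toy_verify(password, hashed):
    """Recompute with the stored salt and compare."""
    _, _, salt, _ = hashed.split("$")
    return toy_sha512_hash(password, salt) == hashed
```

The salt defeats precomputed dictionaries and the iteration count slows down brute force; the real implementation additionally interleaves password and salt bytes in a more involved way.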

But I Have Nothing Of Interest On My Machine

I'm sick and tired of hearing people saying

I don't have to secure my machine since I have nothing of interest on it. Nobody would want to steal anything I have.

That's absolutely not the point. Yes, some attackers are after personal data like account numbers. But this is not all:

  • passwords are high on the list since people use the same password for all their accounts, be it banks, Amazon, eBay, or whatever. Do you still think you don't have anything interesting protected by those passwords?
  • if a machine can be taken over it can be used to a) sniff the local network, b) attack other machines, or c) send spam. Some ISPs have already stopped being lenient towards idiots who allow this to happen unchecked; they simply suspend the accounts. Do you care about having an Internet connection?

Security always matters, even if the data stored on the machine is benign. Nobody should be allowed to even run machines which have no distinction between user and administrator. This includes more and more Linux people, because new idiot distributions like Linspire, NimbleX, etc. keep popping up. No machine should be without firewalls, in both directions. For RHEL/Fedora users it of course doesn't stop there; we have many more security features, and if it were up to me I would take out the switch to disable them.

Next time you see somebody writing nonsense like the above (or hear them talking like this), do me a favor: smack them a bit so that they come to their senses. These are the people who create the opportunity for spam, phishing, and other illicit activities. Heck, they deserve more than a bit of smacking...

RSA conference, Day 1 (for me)

I had the podium discussion today (nothing special to report) and so I stayed a bit longer until my ride arrived. What to do? The show floor is boring for me; nobody really targets developers. So I joined a few sessions.

The first was by Eugene Kaspersky. Well-known name, quite interesting title: The Dark Side of Cybercrime: Details on the Latest Hacker Tactics from Around the World. What would you expect when reading this? I myself expected to actually learn about attack vectors etc., since this guy must be exposed to them on a daily basis.

Well, Mr. Kaspersky didn't think so. He spent the first 40-45 minutes recounting the history of attacks, viruses, worms, trojans, etc., with some statistics thrown in and some pictures of authors. Then, in the last 5-10 minutes, he talked about attacks going on today, but still only at the level of "there will be phishing attacks, and data theft, and ...". And suddenly it was all over?

If the title promises the latest tactics, why waste time on ancient history? If it promises details, why only scratch the surface and throw out a few buzzwords? This was probably one of the most wasteful hours I've spent in a long time. Heck, I might have enjoyed an HR seminar more than this baloney.

Still not time to leave, so I went into the podium discussion about Virtualization and Security. I was skeptical from the get-go: a panel without anyone who actually works on virtualization technology, only security professionals, i.e., the people who benefit from security problems. It turned out this discussion was really meant as a big fright fest: an enumeration of the additional problems in security, monitoring, and auditing you face when you deploy virtualization. Close to the end, one of the panelists actually asked (I paraphrase), "And who in the audience still considers deploying virtualization after what you heard here today?"

I'm always willing to accept that there are some new problems. They mostly concern the introduction of a new code base (the hypervisor, or hardware emulation like KQEMU) and the interfaces between it and the VMs. But many (most?) of the problems they mentioned are home-made or are simply problems which exist without virtualization, too. For instance, they were complaining about the VLANs which are created between the domains so that a single NIC is sufficient for all domains. Duh! If this is a problem for you, don't do it; use separate network cards for each domain. PCI forwarding is there, and by the time people actually start deploying, Intel will have VT-d in their chips (and AMD whatever they need). Soon enough we'll have NICs with virtualization support built in (Infiniband can already do this today). Once this is true, I can already hear them shout, "But who audits the firmware which implements this?" (it'll indeed be something mostly implemented in firmware). The answer here is again: do you audit the firmware of your NIC today? I don't think so, and still it can very well be a security risk.

I took away from this that the security industry sees virtualization as yet another source of money and full employment. Yes, you'll have problems if you do stupid things when deploying virtualization. But the same is true without virtualization; I fail to see the difference. And the panel constantly reminded everybody that no company out there has a person who understands all the problems, front to back, from technical details of virtualization to specific problems of SOA deployments in virtualized environments. That's most probably true. But how is this different from non-virtual deployments? I dare a security professional to step forward and prove s/he knows all this. Heck, I can think of a gazillion security-relevant details at low levels which are not known except to people who actually work on that code.

The organizers claim that they try to keep the sessions from becoming marketing sessions. Mr. Kaspersky certainly didn't manage that, my podium discussion obviously couldn't (it was, after all, about three specific implementations), and this virtualization session was a big "see, we are more relevant than ever" session by the security professionals (with special plugs for the Center for Internet Security).

What is missing are sessions with actual practical advice for programmers, i.e., sessions which attack the root of all the evil. My Thursday session is probably one of the very few exceptions. And the funny thing is: during my podium session, people actually made it known that this is specifically one of the things they would like to hear about at conferences.

My opinion thus far: if you are a security professional, CSO, etc., run to San Francisco, don't walk. You'll get plenty of stories you can tell your boss to frighten her/him into giving you a large budget and many underlings to have fun with. You'll also find people who want to sell you peace of mind, and that should be well worth it to you. After all, you somehow have to spend the money your scared boss throws at you.

If you actually are interested in fixing the problem, don't bother. The organizers don't either.

Security Now! podcast

I happened to listen to a few episodes of the Security Now podcast by Leo Laporte and Steve Gibson. It's mostly Windows stuff, hence uninteresting technically, but it's an eye opener nevertheless, and not in the positive sense. They, well, mostly Steve, often make clueless comments about non-Windows OSes in his attempt to give every OS its fair share. This of course backfires when the comments are wrong or misleading.

But the worst thing I have come across so far is in episode 71, called "Securable". That's a program of Steve's and of no relevance here. But he tried to explain the NX feature of modern x86/x86-64 processors, and this is what he said (see the transcript):

[...] what this does is essentially it allows the system to stop virtually all buffer overruns. And that’s big. I mean, all the security problems that we encounter with incredibly small exception are buffer overrun attacks. [...]

This is really what he thinks; he repeats it in different words later in the show.

It seems that for him "buffer overflow" is synonymous with injecting code by writing over buffer boundaries and then executing that code in place. Everybody who deals with security will laugh at such a definition. These are the first-generation buffer overflow exploits, and they are history, at least on platforms which take security seriously. In Linux we have long had means to protect against these kinds of attacks, from address space randomization to NX emulation. None of this stops buffer overflows from being a problem.

Buffer overflows can still be used to redirect program execution: overwriting return addresses for return-to-libc exploits (or into other libraries), overwriting function pointers elsewhere, overwriting local variables and changing the direction of execution at branch points. The list goes on. None of these effects of buffer overruns is detected by NX.
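To make this concrete, here is a toy simulation (plain Python modeling raw memory; the layout and "addresses" are invented) of an overflow that flips a stored function pointer to code which already exists in the process. Nothing here would ever trip NX, because no new code is injected anywhere:

```python
import struct

# Simulated process memory: a 16-byte buffer immediately followed by
# an 8-byte "function pointer" slot, as in a vulnerable struct.
INTENDED, EXISTING = 0x1000, 0x2000   # made-up code addresses
code = {INTENDED: lambda: "intended handler",
        EXISTING: lambda: "hijacked -- existing code, NX never fires"}

mem = bytearray(24)
mem[16:24] = struct.pack("<Q", INTENDED)   # pointer initialized normally

# The "overflow": 24 bytes written into the 16-byte buffer. The
# trailing 8 bytes overwrite the adjacent pointer with the address
# of code that is already executable.
payload = b"A" * 16 + struct.pack("<Q", EXISTING)
mem[0:len(payload)] = payload

target = struct.unpack("<Q", mem[16:24])[0]
result = code[target]()                    # dispatch through the pointer
print(result)
```

The program dispatches through the overwritten pointer and lands in the attacker-chosen (but pre-existing, executable) code, which is exactly why NX alone does not end buffer overflow exploitation.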

There are two ways I can interpret Steve's comments:

  1. On Windows, because it is such a soft target, attackers didn't have to bother with more sophisticated attacks, so these really didn't happen. In this case the attackers will simply adapt and use the attack vectors I described above.

  2. Steve doesn't know what he's talking about and he's doing his listeners a disservice by suggesting they are almost completely safe just because they enable NX.

For Steve's sake, and Leo's, since he would be guilty by association, let's go with the first possibility. But all this means that Windows is years, many years, behind the Unix world when it comes to security. It might be a rude awakening for some people to find that the new features do not cure all problems.

Yes, MSFT has copied us on many levels and also implements things like address space randomization and stack canaries. This will help, but only if the features are enabled. And this is the second eye opener from the show: Windows apparently has no fine-grained control. This means that at the slightest sign of problems the features will be turned off completely. They mentioned the BIOS and OS control of the NX bit: since drivers and many applications are badly written, the machines run with the feature turned off. One point for an easy sysadmin interface, but -100 points for security.

I think everybody who hopes that with the (slow) proliferation of MSFT's new OS release the Internet will become more secure is gravely mistaken. There are still not enough security features in place, and those which are in place will be turned off. Heck, some are not even implemented: apparently several of the security features are missing from the 32-bit version to maintain compatibility.

This is very, very wrong. But it's been MSFT's goal: don't piss off the customer, even if it's technically wrong and causes huge problems for everybody. I'm a strong advocate of security over backward compatibility if there is a good reason. But usually it does not come to this, because you can strengthen security without compromising backward compatibility. Case in point: see how we implemented non-executable stacks. Old programs continue to run, while almost all new code automatically gets protected. And the cases which, again automatically, got flagged as requiring an executable stack were fixed. It is one of Red Hat's release requirements that no binary needs stack execution permission.
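The marking in question is the PT_GNU_STACK program header: the linker records whether a binary needs an executable stack, and the kernel/loader honors that flag. As a sketch of how one can read this marking (a simplified, little-endian-ELF64-only reimplementation of what readelf -l shows):

```python
import struct

PT_GNU_STACK = 0x6474E551   # program header tag set by the linker
PF_X = 1                    # execute permission flag

def stack_is_executable(path):
    """Read the PT_GNU_STACK marking of a little-endian ELF64 binary.

    Returns True/False from the header's flags, or None if the header
    is absent (old binaries; the loader then assumes an executable
    stack for compatibility)."""
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF" or ident[4] != 2:
            raise ValueError("not a 64-bit ELF file")
        # ELF64 header after e_ident: type, machine, version, entry,
        # phoff, shoff, flags, ehsize, phentsize, phnum, ...
        hdr = struct.unpack("<HHIQQQIHHHHHH", f.read(48))
        phoff, phentsize, phnum = hdr[4], hdr[8], hdr[9]
        for i in range(phnum):
            f.seek(phoff + i * phentsize)
            p_type, p_flags = struct.unpack("<II", f.read(8))
            if p_type == PT_GNU_STACK:
                return bool(p_flags & PF_X)
    return None
```

This is exactly why old code keeps running: a binary without the header gets the permissive legacy behavior, while everything newly built is marked non-executable unless it genuinely needs otherwise.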

One last thing: it's really amusing to see that x86-64 pick-up (I mean real 64-bit code) is so slow on Windows. For the last 3 years I haven't been using any 32-bit machine except my laptop, and in the Linux world this is no isolated case; we are well on the way to making 32-bit obsolete.

RSA conference

I should perhaps mention that I'll be talking at the RSA conference in San Francisco on February 6th and 8th. I don't know yet whether I'll be around outside of these two times. There are not too many other talks I am interested in; two that I found conflict with my own appearances. I have a few others on my list, but hardly anything which deals with secure development and system software design. Maybe somebody has some proposals.

Pointer Encryption

Mark pointed out that I haven't publicly mentioned anywhere parts of the new security features in FC6. Well, here it is.

One of the remaining attack vectors in the runtime is function pointers in writable memory. Overwrite the value and you can redirect execution. Of course the pointer must actually be used and randomization must be overcome, but it's theoretically possible.

The remedy I've implemented internally in libc is to encrypt function pointers. I.e., they are not stored as-is but in a mangled form. In my code this mangling consists of XOR-ing the pointer value with a random 32/64-bit value; each process has its own random value. The code was publicly committed back in December 2005 and is in FC6.

The only real challenge was to make this fast, especially on platforms like x86 which have no fast PC-relative data access. To avoid using a fixed address, the value is stored in the TCB.
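The scheme itself fits in a few lines. A simplified model (the real per-architecture implementation differs in detail, and the guard value lives in the thread control block rather than a global):

```python
import secrets

# Per-process secret guard value; in the real implementation it is
# initialized from a random source at process startup and kept in
# the TCB so it can be loaded without a fixed data address.
POINTER_GUARD = secrets.randbits(64)

def mangle(ptr):
    """Store-side transformation: XOR the pointer with the guard."""
    return ptr ^ POINTER_GUARD

def demangle(word):
    """Load-side transformation: XOR is its own inverse."""
    return word ^ POINTER_GUARD
```

An attacker who overwrites the mangled slot without knowing the per-process guard ends up, after demangling, with a wild pointer: the controlled redirection of execution degrades into a probable crash.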

What is protected? By now, I hope, most function pointers in libc. Some are probably still missing, and others cannot be handled this way since they are visible to the outside. For some broken programs (including UML) the setjmp change was the biggest problem: these programs tried to access the stored code address, which is not really useful anymore (the program doesn't know how to decrypt the value). Other encrypted pointers include the iconv and atexit structures, as well as some function pointer tables people don't really know about; they are completely internal.

Using encryption (instead of canaries) to protect structures like jmp_buf is at least as secure and, in addition, faster. The question is whether we can extend its use to other parts of the runtime. Runtimes for languages like C++ and Java just scream for such protection; virtual function tables are a prime target.