Breaking Code

April 20, 2012

Hackito Ergo Sum 2012

Filed under: Conferences — Mario Vilas @ 11:27 pm

Hi everyone. Last week I attended Hackito Ergo Sum 2012, and I wanted to share some of the things I found most interesting during the talks. This won’t be a detailed review of each talk, but rather an account, in no particular order, of a few details I personally found most interesting. If you’re looking for a detailed review of each talk, check out this blog.

Oh, by the way. I totally made up the names of the talks. I think it’s more fun that way. :)

The event took place at the headquarters of the French Communist Party, and I have to say the conference room was quite impressive: an underground dome covered with white metallic plates, with lamps behind them giving a peculiar visual effect.

An additional advantage of this place is that some security agencies can’t send their spooks there. Hurray to the ridiculously outdated cold war laws! :roll:

One thing I didn’t like, though, was that the slides were projected onto a sort of tilted, curved screen, making them a bit difficult to read unless you were sitting in the middle. I don’t think I was the only one with this problem, because I saw a lot of heads tilted sideways… ;)


IDA Toolbag: “How many of you use IDA? And how many of you like it?”

I can assure you there were much fewer hands raised for the second question. :)

This talk was about a new tool called “IDA Toolbag”, by Aaron Portnoy and Brandon Edwards. In a nutshell, it’s a combination of a lot of ideas that were already present but not quite integrated before: a collaboration plugin, path finding and process stalking, plus some improvements to code searching, all tied together and with (finally!) a properly designed GUI. The authors understandably put a lot more emphasis on the collaboration features of the plugin, which are much more advanced than those of any other public plugin that I know of, and I have to say it seems quite powerful.

However, what drew my attention the most was the care they took in thinking about usability from step one and modeling the GUI after common reversing tasks in IDA. Most hackers seem to believe that usability, and graphic interfaces in general, are not important at all, if not downright useless. But you know, consoles with green letters are cool and all (I’m looking at you, Pancake ;)) but the fact is, the more time you spend mastering a tool, the less time you have left to actually use it.

Properly designed GUIs may not give you “h4xx0r cred”, but they help you work faster, thinking more about the problems you want to solve and less about how to use the tools to solve them. And for one, I’m happy to see a reversing tool that doesn’t get in the way of your reversing.

Ok, I’ll stop my rant now, don’t worry. :) Back to the tool. Basically you use it like this: after opening or creating the IDB, load the plugin by running the Python command “import toolbag”. This creates a parallel database, stored embedded in a new section of the binary inside the IDB file. It’s done like this due to some limitations of the IDA API for storing arbitrary data in the IDB. The biggest advantage is that, since the plugin uses SQLite underneath, you can simply query this new database with SQL or from Python code.
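Since the store is plain SQLite, you can poke at it directly. Here’s a minimal sketch of the idea — note the table and column names below are invented for illustration, not the plugin’s actual schema:

```python
import sqlite3

# Stand-in for the database the plugin embeds in the IDB;
# in real use you'd query the plugin's own store instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (ea INTEGER, text TEXT)")
db.execute("INSERT INTO comments VALUES (0x401000, 'decrypts config')")

# Any ad-hoc question becomes a plain SQL query:
rows = db.execute(
    "SELECT ea, text FROM comments WHERE text LIKE '%config%'"
).fetchall()
rows  # → [(4198400, 'decrypts config')]
```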

The plugin also adds a new detachable window with some tabs inside, each tab providing a piece of the plugin’s functionality. There are enhancements to the IDA code search and a new, improved mini graph window. Most notably, the viewing history is now kept in a tree rather than the usual “breadcrumbs” pattern used by IDA, making it impossible to get lost when examining the code. Makes sense: the breadcrumbs pattern is suitable for linear tasks, and when you’re examining a disassembled binary you never do it linearly – what you really do is traverse the call graph, following code or data references.

Inside this new database there’s a virtual filesystem. Pretty much everything you do can be stored as files here and sent to other people over the network. That’s very useful for collaboration – you can send your source code comments, viewing history, etc. to other people so they can import them into their own IDB files. This importing/exporting process can be quite selective, so you don’t run the risk of overwriting your own changes with someone else’s, and you don’t share more than you wanted to.

A caveat I see right now is that the plugin uses the pickle module to marshal data. Although it’s wrapped with a custom marshalling module to prevent attacks, and the GUI shows you what it is you’re about to unmarshal, I still wouldn’t accept collaboration data from strangers, just in case. (Then again, I wouldn’t accept IDB files from strangers either!) The authors also warn you about the security implications of this. Bad stuff may also happen if the binary you’re analyzing already contains the magic extra section where the database is stored – but if you’re blindly opening malware with IDA without checking for this kind of stuff, you kinda deserve to be pwned, I guess. In any case the magic section name is configurable, so just pick something nobody else would guess and you’re safe. One more thing: the network queues are not encrypted, so always use a VPN.
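As a reminder of why unpickling untrusted data is scary all by itself, here’s a minimal sketch — the payload calls a harmless builtin, but an attacker could substitute any importable callable:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object; whatever
    # callable it names gets invoked during deserialization.
    def __reduce__(self):
        return (len, ("pwned",))

# In an attack, this blob would arrive over the collaboration queue.
blob = pickle.dumps(Payload())

# pickle.loads() calls len("pwned") for us -- code runs on deserialize:
pickle.loads(blob)  # → 5
```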

The plugin also allows remote debugging using Kenshoto’s VTrace. The marshalling module described above is also used to send Python code to a listener process, so this feature is more generic than it may seem at first. You can write your own custom modules to be executed remotely, do your stuff asynchronously, collect information and incorporate it into the local database. I can think of a lot of uses for this and I’m sure you can too. :)

Another thing I liked is how customizable everything is. Pretty much everything is configurable by editing the config.py file. All of IDA’s functionality is accessed through wrappers that can be replaced with custom ones, so the plugin may be used outside of IDA. And I haven’t checked yet, but I guess VTrace could also be replaced with PyDbg, WinAppDbg or PyKd should the need arise.

The slides for this talk are not yet available, but the documentation on the webpage is pretty extensive and the video is online here: http://www.ustream.tv/recorded/21835515


Turning weird Windows kernel bugs into easy exploits

In this talk Cesar Cerrudo showed three quite useful tricks for exploiting kernel-land vulnerabilities on Windows. The twist is that these tricks allow you to take vulnerabilities typically seen as very difficult to exploit and quickly turn them into weaponized exploits, without even needing to run kernel-land shellcode.

The key idea here is that we often think of running shellcode as the goal, when it’s only a means to an end. The real end in privilege escalation exploits is to, well, escalate privileges. So if it’s possible to do so without arbitrary code execution, all the better! This basic idea is also present in Gera’s Insecure Programming challenges and the Shellcoder’s Handbook chapter on alternative payloads.

In this case, the focus is on manipulating process tokens to gain SYSTEM privileges. This allows for very quick and stable local exploits with no kernel payload. To obtain the memory addresses of various kernel structures, we have a handy undocumented API call in ntdll.dll, NtQuerySystemInformation(), that returns, among other information, the kernel pointer to the structures associated with a given handle value. By passing it a process handle we can obtain the pointer to the KPROCESS structure, and knowing the exact Windows version we can find the pointer to the primary token. This is based on @j00ru‘s call gate exploitation paper: call_gate_exploitation.pdf.
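The handle entries returned by that call follow an undocumented but widely described layout. Here’s a sketch of the lookup, parsing a synthetic 32-bit buffer with that layout — the field names follow the commonly documented structure, the values are made up, and of course the real call only works on Windows:

```python
import struct

# 32-bit SYSTEM_HANDLE_TABLE_ENTRY_INFO as commonly documented:
#   USHORT UniqueProcessId;  USHORT CreatorBackTraceIndex;
#   UCHAR  ObjectTypeIndex;  UCHAR  HandleAttributes;
#   USHORT HandleValue;      PVOID  Object;  ULONG GrantedAccess;
ENTRY_FMT = "<HHBBHII"
ENTRY_SIZE = struct.calcsize(ENTRY_FMT)  # 16 bytes

def find_kernel_object(buf, pid, handle):
    """Return the kernel pointer for (pid, handle), or None."""
    for off in range(0, len(buf), ENTRY_SIZE):
        upid, _, _, _, hval, obj, _ = struct.unpack_from(ENTRY_FMT, buf, off)
        if upid == pid and hval == handle:
            return obj
    return None

# Synthetic entry: PID 1234, handle 0x38, kernel object at 0x8615a030.
buf = struct.pack(ENTRY_FMT, 1234, 0, 7, 0, 0x38, 0x8615A030, 0x1F0FFF)
hex(find_kernel_object(buf, 1234, 0x38))  # → '0x8615a030'
```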

Armed with this knowledge, there are three useful tricks we can play. The simplest is to just write a NULL pointer into the SecurityDescriptor field of the structure. This effectively removes all ACLs from the handle, and now we can do whatever we want with it. With this we can exploit any vulnerability that allows us to write a NULL pointer to an attacker-controlled address.

The second trick is to manipulate the tokens themselves to add more privileges. In Windows Vista and above, tokens are represented by a _TOKEN structure with three UINT64 fields called “Present”, “Enabled” and “EnabledByDefault”. Each field contains a bitmask of privileges. Interestingly, we only need to set the corresponding bit in the “Enabled” field to effectively acquire a privilege. So if our vuln allows us to write arbitrary values, we can simply write all 1’s here… but what if we have something trickier, like a DEC instruction on a user-controlled address? What Cesar proposes is this: disable all your privileges using the Win32 APIs except for the one that corresponds to the highest bit of the bitmask (which happened to be a pretty harmless privilege that comes by default, called “SeChangeNotifyPrivilege”). When you trigger the bug and decrement this value, the result will have all bits set BUT the highest one – so you gained all privileges but one. (If you have an INC instruction instead, your only choice is to read your current privileges using the Win32 APIs to find out the value of this field, then trigger the bug multiple times to increment the value to the one you need.)
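The bit arithmetic behind the DEC trick is easy to check on its own (64-bit mask; treating bit 63 as the one privilege left enabled is just for illustration):

```python
MASK64 = (1 << 64) - 1

# Disable everything except the privilege mapped to the highest bit:
enabled = 1 << 63

# The kernel-land DEC then flips it into "everything but that bit":
enabled = (enabled - 1) & MASK64

hex(enabled)  # → '0x7fffffffffffffff'
# i.e. every privilege bit is now set except the highest one.
```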

Before Vista things were different, though. What you have instead is a pointer to a list of tokens identified by numeric values (the _LUID_AND_ATTRIBUTES structure). The trick here is to get the address of the process’s primary token instead (using the NtQuerySystemInformation() API again) and modify these numeric values to match other, more interesting privileges. For example, with a DEC you can change privilege 0x15 (I don’t recall what that was, but it came by default) into 0x14 (the debug privilege) to be able to debug any process you want. From there you can just inject your userland shellcode into any privileged process (LSASS.EXE to grab all the passwords, for example).

And finally, the last technique requires a vulnerability that can write an attacker-controlled value to an attacker-controlled address. The idea here is to copy the System user’s identity token into the process primary token to escalate privileges. This token can’t be obtained directly, though. In order to get it, Cesar hooked the NtOpenThreadToken() function and called MsiInstallProduct(). Any other API that uses the System identity token will do; this is just the one he used for the demo. Once you have the token handle you have to duplicate it (ntdll closes the handle when it’s done with it). Then you can call NtQuerySystemInformation() as usual to get the pointer to it. One important detail: to prevent the reference counter from going haywire, make sure to duplicate this handle a couple of times in some other process that never dies (like our old friend LSASS.EXE).


NFC credit cards: “We haven’t broken any security or tried to, because there is none!”

The talk on NFC credit card security by Renaud Lifchitz was both surprisingly simple and scary.

It turns out contactless credit cards just spit out all their info over the radio waves, in plaintext, to whoever wants to listen, and the closest thing to a “protection” is the physical distance required to receive the signal (3 cm to 5 cm). With the proper equipment that can be boosted to 1.5 m for active reading and 15 m for passive sniffing, so much for THAT.

The stupidest thing about it is that the standard for contactless cards was made by the same credit card companies that sponsor PCI… but the cards themselves are a far cry from being PCI compliant. But don’t worry, because the vendors say the NFC cards use “highly secure dynamic cryptograms”… :roll: EPIC FAIL!

In conclusion: don’t get yourself an NFC credit card. Hell, don’t get a credit card at all if you ask me! But if you absolutely must have one, get yourself an RFID wallet to carry it.

The slides can be downloaded from here: HES-2012-rlifchitz-contactless-payments-insecurity.pdf. There’s also a Google Code project with the command line tool shown during the talk.


Android exploitation: pwning the heap like it’s 1999

This talk by Georg Wicherski was about WebKit exploitation. To sum it up, instead of exploiting the libc heap implementation you can target another allocator called RenderArena, built on top of the libc allocator, that can only allocate RenderObject objects. The advantage is that the RenderArena allocator is extremely predictable, and RenderObject objects have a vtable pointer that gets overwritten with the pointer to the next heap block on double frees. The talk presented two exploitation techniques (dubbed “The Wicherski” and “The Refined Aubizziere”) specific to the RenderArena allocator, for use-after-free and type confusion bugs in WebKit. I won’t go into the details because the slides explain all this better than I can. :)

You can download the slides from here: HES2012-gwicherski-exploiting-a-coalmine.pdf


Social engineering: “Advertising and religion are forms of social engineering too”

I’m usually quite partial to technical talks, especially when they’re about exploitation. But I still liked this one a lot. Matias Brutti painted a good picture of what the real social engineering practice is during a pentest, and did so with plenty of humor (giving religion as an example of pre-technology social engineering cracked me up) and with none of the self-important bullshit that usually plagues this topic. There was also no NLP nonsense at all, I liked that too.

He also gave some practical examples of ruses that can be used to lure unsuspecting vict… ahem, I mean targets of your pentest into opening a backdoored Office document. My favorite was the following: create a fake Excel spreadsheet with the salaries of all the bosses in the company, then send a spoofed email to a few non-technical folks complaining about how much that damn pointy-haired boss earns compared to regular employees. Instant success! You don’t even need to mass-mail it; the employees themselves will spread your backdoor much better than you would. ;)

But be careful of what ruse you use. You might be a little too successful and end up pwning people outside the scope of your pentest (or even outside the company entirely!) and that would get you into a lot of trouble. Also make sure the topic of your ruse is something you can show later in your report… sex sells, but it makes you look bad when you have to show it to the CEO. (I once heard of a really nasty example of this. Legend has it some pentesting team once used this PDF file for a social engineering engagement. I’ll leave it to the readers to imagine the consequences! ;))

The talk ended with a set of free social engineering automation tools written by Matias himself. They help with gathering information on your targets and mass-mailing them, among other tasks. You can get the source code from GitHub: https://github.com/FreedomCoder


Autopwn with steroids: how math geeks can improve your pwnage

This talk was about how to mathematically plan a complete network infrastructure pentest from top to bottom, using whatever information is available at the time (target machines involved, software installed on them, vulnerable versions, open ports, etc.). The algorithm can also accept input during the execution of the plan and correct it to incorporate the new information, and the math is also backed by statistical information gathered from over 700 machines with different combinations of operating systems, hardware, etc.

I’m sure Carlos Sarraute gave a superb talk as usual; he really knows his stuff and has already published some previous work on the same topic. But… unfortunately I arrived late :( so he was already past the introduction and knee-deep into the heavy math behind his work. (Suffice it to say it involved statistics on four-dimensional metaplanes; you’ll understand why I gave up almost instantly. I felt like I was back in college, folks!)

Sorry to disappoint you all! I’m sure I can get him to explain it to me while drinking some beers another day… ;)

The slides are not yet available, but in the meantime you can read Carlos’ related past works at the Core Security website.


Detecting crypto: that awkward moment when a typo in Wikipedia ruins your TEA

Joan Calvet presented a proof-of-concept tool to automatically detect crypto code in malware and identify the algorithm being used. The task is divided into three parts: the first is detecting the cryptographic functions by analyzing an execution trace of the binary, the second is finding the inputs and outputs of said code during the execution, and the third is detecting the algorithm being used.

The first part is possibly the hardest. Some shortcuts are taken to make it easier: a potential crypto function consists of one or more chained loops, for a particular definition of “loop”. This allows for a quick and easy detection method that works in many cases, but of course not in all. In particular, state machines are discarded as potential crypto code. However, unrolled loops are successfully detected, because the tool compares the instructions being executed rather than the memory addresses where they happen to be. I’m not sure what would happen if loops were transformed into recursive function calls, but most malware authors won’t alter crypto code much anyway (more on that later).
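That address-agnostic comparison can be sketched roughly like this — the trace format and the heuristic below are my own simplification, not the tool’s actual algorithm:

```python
def find_loop(trace, min_reps=2):
    """Find the shortest instruction sequence repeated back-to-back.

    `trace` is a list of executed instructions (mnemonic + operands);
    comparing instructions instead of their addresses also catches
    unrolled copies of the same loop body.
    """
    n = len(trace)
    for body_len in range(1, n // min_reps + 1):
        for start in range(n - body_len * min_reps + 1):
            body = trace[start:start + body_len]
            reps, pos = 1, start + body_len
            while trace[pos:pos + body_len] == body:
                reps += 1
                pos += body_len
            if reps >= min_reps:
                return body, reps
    return None

# An unrolled loop: same body twice, as if at different addresses.
trace = ["mov eax, [esi]", "xor eax, ebx", "mov [edi], eax",
         "mov eax, [esi]", "xor eax, ebx", "mov [edi], eax"]
find_loop(trace)
# → (['mov eax, [esi]', 'xor eax, ebx', 'mov [edi], eax'], 2)
```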

The second part is about determining what the inputs and outputs are. In principle this is easier, since all it has to do is track memory reads from addresses in areas where no writes happen, and vice versa. The tricky part is finding out where the different arguments are. Just taking consecutive memory addresses won’t do, since that’s bound to happen all the time on the stack. The author’s solution is to separate the arguments based on which instructions are used to access them.
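The read-only/write-only classification can be sketched like this (the trace format is invented for illustration):

```python
def classify_buffers(accesses):
    """Split addresses touched by a candidate crypto function into
    inputs (only ever read) and outputs (only ever written).
    Addresses both read and written are scratch space and dropped.

    `accesses` is a list of ("r" | "w", address) pairs.
    """
    reads = {a for op, a in accesses if op == "r"}
    writes = {a for op, a in accesses if op == "w"}
    return reads - writes, writes - reads

trace = [("r", 0x1000), ("r", 0x1004),   # key material being read
         ("w", 0x3000), ("r", 0x3000),   # scratch: read and written
         ("w", 0x2000), ("w", 0x2004)]   # ciphertext being written
classify_buffers(trace)
# → inputs {0x1000, 0x1004}, outputs {0x2000, 0x2004}
```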

The third part is the simplest: the tool has reference implementations of all the supported algorithms, and they are all tested in all possible combinations of parameters. This brute force solution works well even for algorithms like AES, provided you consider the S-boxes as part of the input. This is also the part I find severely lacking: it’s trivial to alter the crypto algorithm to defeat it. A simple XOR against a hardcoded constant will change the output enough that you can’t find it by comparing against the reference implementation, and you won’t lose any of the algorithm’s security. Joan seemed quite aware of this, and even showed a funny example of how it can fail.

He was testing the tool against some malware samples that were supposed to be using TEA. The tool failed, and manual analysis revealed the algorithm was TEA alright… but on closer inspection there was a subtle implementation difference: a pair of parentheses was misplaced in the original source code! The strangest part was that this exact same bug was present in other malware families as well. After some googling, the mystery was solved. All of these bugged samples came from Russia, and the Russian version of Wikipedia contained a faulty reference implementation that was copied and pasted into the malware code. To me, that says a lot about how malware is developed… and it also teaches you to distrust code randomly found on the Internet. Maybe I’m being paranoid, but… who’s to say the bug wasn’t intentional?
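To see how a single slipped parenthesis derails detection-by-reference-comparison, here’s standard TEA next to a buggy variant. The exact misplacement in the Wikipedia copy isn’t given in the talk, so the bug below is made up: one closing parenthesis moved, so the terms group differently.

```python
DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF

def tea_encrypt(v0, v1, k):
    """Reference TEA: 32 rounds over two 32-bit halves."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k[0])
                    ^ ((v1 + s) & MASK)
                    ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k[2])
                    ^ ((v0 + s) & MASK)
                    ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_encrypt_buggy(v0, v1, k):
    """Same code, except one parenthesis slipped in the v0 update:
    'v0 += a ^ b ^ c' became 'v0 = (v0 + a) ^ b ^ c'."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = ((v0 + (((v1 << 4) & MASK) + k[0]))
              ^ ((v1 + s) & MASK)
              ^ ((v1 >> 5) + k[1])) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k[2])
                    ^ ((v0 + s) & MASK)
                    ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

key = (0x11111111, 0x22222222, 0x33333333, 0x44444444)
# Both variants still scramble the data just fine, but their outputs
# differ, so matching against the reference implementation fails:
tea_encrypt(1, 2, key) != tea_encrypt_buggy(1, 2, key)  # → True
```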


…And thanks for all the fish!

There were a lot more talks I’m completely and unfairly skipping here: Travis Goodspeed’s talk on pwning radio devices, Daniel Mende and Enno Rey messing with Cisco VoIP phones, Ralf Philipp Weinmann’s talk on baseband reverse engineering, just to name a few. The level of all the talks was excellent, but I’ve really spent a lot more time on this blog post than I originally intended :D plus I’m not confident enough with some topics to be talking about them, so I’ll leave it to you all to go to the HES website and read the slides. You can also check out the videos at Ustream.

Many thanks as well to Philippe Langlois, Jonathan Brossard, Malard Arnaud, Matthieu Suiche and the rest of the team, you guys really know how to throw a geek party! :)
