Self Signed Certificates

September 30th, 2009

SSL – ‘Secure Sockets Layer’ – is an encryption protocol. How it works in detail is beyond the scope of this article; our only concern for now is self signed certificates. Suffice it to say that SSL is built on public key cryptography: it makes use of a private key and a public key. Being a Linux Security Freelancer, it’s important to be able to advise on when certain technologies are acceptable and when they aren’t.

If SSL certificates rely on public key cryptography, why do we need a certificate at all? Simply put, the certificate is usually signed by a ‘trusted’ Certificate Authority (CA), thus informing the other party that the host is who it claims to be.

Whether or not the certificate is signed by a CA, your level of encryption and the underlying SSL are the same. You might want your CSR (Certificate Signing Request) signed by a trusted CA in certain instances. Most websites that use ‘SSL’ today have their certificates signed by a trusted CA. If your web visitors trust you, and your CA has verified that you are who you claim to be, then it follows that your visitors trust your signed certificate.

You may decide to use a self signed certificate when you want your data encrypted between your host and the remote host, and you know that the remote host is who it claims to be, without needing to convince anyone else of it. If you are using SSL over HTTP, your browser will warn you that the certificate has not been signed by a known authority. As long as you accept this, your connection is just as encrypted as it would have been had a trusted CA signed it.

One often overlooked problem, though, is the potential for a MITM (Man In The Middle) attack. A machine between yours and the remote host could sniff, but as of writing not decrypt, your data. However, should an attacker perform a MITM attack and essentially fool you into connecting to his web service with his own self signed certificate, you wouldn’t know any better. You’d receive the same popup warning, which you’d dismiss, and begin your session with the attacker instead of the remote host you were expecting. One way of confirming is to inspect the certificate and look at its fingerprint. How many people would do that, though?
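For what it’s worth, here is a minimal sketch of checking a fingerprint by hand with openssl – the hostname is a placeholder, and the fingerprint would need to be compared against one obtained over a channel you already trust:

# fetch the remote certificate and print its fingerprint
echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -fingerprint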

Shrinking/Resizing ext3 Partitions

September 26th, 2009

Shrinking or expanding an ext3 partition is easy, but it is not without its risks. Before starting, you NEED to take a backup of your data. As with any disk or filesystem procedure, there is a real possibility that your data will all disappear and your filesystem will become permanently broken.

Please note:

  1. The steps below are the RAW STEPS required to resize your partition. This is a potentially dangerous procedure that could easily destroy/ruin/damage your partition, data, filesystem or other partitions on the same disk.
  2. DO NOT perform these steps on a live/production machine
  3. DO NOT perform these steps unless you have a full backup of your data/disk
  4. These steps are really for theoretical purposes only. They should work just fine, but tools such as gparted will do this for you.

In my example, I’m going to resize /dev/sdb1, which is my /email partition. /dev/sdb1 is a partition residing on device /dev/sdb:

ns3:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.4G  6.8G  2.2G  77% /
tmpfs                 443M     0  443M   0% /lib/init/rw
udev                   10M   92K   10M   1% /dev
tmpfs                 443M     0  443M   0% /dev/shm
/dev/sdb1              20G  9.8G  9.0G  52% /email
217.10.156.195:/email   31G  3.5G   26G  12% /email/carolesobell.com
ns3:~#
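The full procedure is in the rest of the entry; purely as a hedged sketch of the raw steps for shrinking /dev/sdb1 (the 15G target is an arbitrary example, and gparted will do all of this for you):

ns3:~# umount /email                  # the filesystem must not be mounted
ns3:~# e2fsck -f /dev/sdb1            # force a filesystem check first
ns3:~# resize2fs /dev/sdb1 14G        # shrink the filesystem to below the target size
ns3:~# fdisk /dev/sdb                 # delete /dev/sdb1 and recreate it at 15G, starting at the same sector
ns3:~# e2fsck -f /dev/sdb1
ns3:~# resize2fs /dev/sdb1            # grow the filesystem to fill the new, smaller partition
ns3:~# mount /email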

Read the rest of this entry »

PHP MySQL Developer – Using MySQLi Prepared Statements to Avoid SQL Injection

September 25th, 2009

I’m going to demonstrate a very short and simple method of avoiding SQL injection at the SQL query level. You’ll need MySQLi support; on Debian, apt-get install php5-mysql contains everything that you need, and it is installed by default with a LAMP installation.
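As a quick sanity check – a sketch assuming the Debian package above and that the PHP CLI is installed – you can confirm the mysqli extension is actually loaded before writing any code:

# install the PHP MySQL extensions, then list loaded modules
apt-get install php5-mysql
php -m | grep -i mysqli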
Read the rest of this entry »

Web Application Security Consultant Methodology

September 18th, 2009

I wanted to share some thoughts on my general methodology when approaching web application pen testing. Depending on the size, scope of work, complexity and a number of other factors, there are two separate angles I will take – usually both, or a hybrid of the two.

The first angle has to be the network security itself, all the way down to the physical security. As a penetration tester, I’ll test the web, database, storage and any other related networked devices inside and out: port scanning their interfaces, spoofing IPs and MACs, and asking myself questions such as “Does the database accept direct connections from any IP? Does Apache keep too many spare threads waiting?” We need to work our way from the bottom to the top of the OSI model, a lot of which can be done using nc (Netcat) and a combination of scripts, as well as nmap.
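As a rough illustration – the addresses and ports here are placeholders, not a prescribed recipe – that network-level pass might start with something like:

# SYN scan every TCP port on the target range
nmap -sS -p- 192.168.0.0/24

# poke an individual service by hand and grab its banner
nc -v 192.168.0.10 3306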

Secondly, as the security consultant, I would then test the application itself and any dependent or otherwise related applications. I crawl the site and its file hierarchy using wget or similar, and then run automated test tools such as Burp Suite, Acunetix and a combination of curl/wget and shell scripts, before manually drilling down into anything suspicious. Unfortunately for the user, the majority of web applications are insecure. A web crawl using a recursive wget, followed by some blind SQL injection checks, will more often than not turn up an opportunity for SQL injection. Once this is done, the next question is how far I can go with the SQL injection. Do I have root access to the database now? Does the database user have unnecessary permissions? I should be able to SELECT and possibly INSERT/UPDATE data; it’s unlikely that I should be able to actually alter, or even drop, the database from the web user. Having UPDATE privileges on all tables is just as good as being able to drop the database itself, though, in terms of potential damage. This combination of security issues could lead to even further damage once the initial compromise has been made.
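A minimal sketch of that crawl-and-probe pass, with a placeholder URL and a hypothetical id parameter:

# mirror the site so its file/URL hierarchy can be reviewed offline
wget --recursive --level=3 --no-parent http://www.example.com/

# crude blind SQL injection probe: compare the responses to an
# always-true and an always-false condition on the same parameter
curl -s "http://www.example.com/item.php?id=1%20AND%201=1" | wc -c
curl -s "http://www.example.com/item.php?id=1%20AND%201=2" | wc -c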

The next question is how far to proceed once an opportunity has been identified. Is demonstrating an opportunity for SQL injection sufficient, or does the opportunity need to be exploited? Once exploited, does data actually need to be read from or written to the database, or code changed? This generally depends on the scope of the work and the purpose/setting of the systems under test. More will be discussed on this in later articles.

Once complete, I will generally report and rate the issues found from 1 to 5: 1 being informational, 2 low severity, 3 medium severity, 4 high severity and 5 critical. The database user having additional privileges may fall into category 3 or 4. An outdated but currently secure version of the webserver may fall into category 1, whilst an SQL injection opportunity will certainly fall into category 5.

Scope depending, I can either discuss the issues and advise on strategies to resolve, or alternatively provide the report ready for the Client to pass on to his own IT consultant.

How to Find and Replace data in MySQL

September 17th, 2009

It’s really easy!

UPDATE mytable SET myfield = REPLACE(myfield, 'replace this', 'with this');

Take a backup of your database first!
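For example – the database name and user are placeholders – a quick dump before running the UPDATE:

# dump the database to a file before making any changes
mysqldump -u root -p mydatabase > mydatabase-backup.sql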

Linux – Exim, Avenger and SpamAssassin Tips

September 17th, 2009

Further to Exim, MySQL, Courier IMAP, Courier POP3 & Spamassassin – vdomain and vuser set up, I’ve recently been receiving an increasing amount of spam, and have finally decided to take some positive action. Previously, my account would get hit with about 100 to 150 spam messages per day, of which 2 or 3 might get through. Lately, this has quickly increased to 700+ per day, of which at least 20 to 30 have been getting through, and I’ve been doing nothing but clearing spam day and night for the past few weeks. It is, however, critical that I do not catch any genuine email – I would rather err on the side of caution and be more generous than not.
Read the rest of this entry »

Multithreaded Tunnel Proxy and OpenVPN experiment

September 16th, 2009

Further to the Multithreaded TCP Tunnel Proxy that I wrote a while ago, I’ve picked up a low end UK VPS and installed OpenVPN on it, as well as on my local machine. I set up the iproute2 split access load balancer and established the OpenVPN connection.
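For reference, the split access load balancing boils down to a multipath default route along these lines – the gateway addresses and interface names are placeholders for my two DSL routers:

# balance outbound connections across the two DSL lines
ip route replace default scope global \
    nexthop via 192.168.1.1 dev eth0 weight 1 \
    nexthop via 192.168.2.1 dev eth1 weight 1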

Now, each of the two DSL lines is established at 17mbit, giving me a theoretical maximum of 2.125MB/sec. In actual fact, to kernel.org I can get a steady 1.7-1.8MB/sec, which is more than enough. From my 100mbit UK VPS, I can get 8-9MB/sec from kernel.org without issue. Establishing OpenVPN over a single connection and then pulling a file from kernel.org leaves me with only 1.3MB/sec, which I’m not best pleased about. Pulling the file through a proxy running on the UK VPS downloads at 1.6MB/sec minimum, so it isn’t my new route that’s causing the slowdown, it’s OpenVPN. Either way, I didn’t bother testing for any improvement with pptpd because I need OpenVPN’s single TCP connection anyway for this experiment to work.

The positive outcome of the story is that with iproute2 load balancing set up, and OpenVPN established through the multithreaded TCP proxy over both connections using -t4, my single 1.3MB/sec became 2.2MB/sec, which is IMHO an incredibly successful outcome.

A problem to note is that on more than one occasion, netstat/lsof showed 3 TCP connections established over one DSL line and 1 over the other. I just restarted my TCP tunnel a few times until they were equally balanced. If this were a big enough problem, -t6, -t8 or -t10 might have shown interesting results, but the more threads, the more delay and the greater the potential for misordered packets. -t4 with iptables forcing the TCP connections equally over the DSL lines might also be worth investigating. Nevertheless, as experiments go, a pleasing outcome!

Linux Color Directory Listings

September 15th, 2009

How to add color to ‘ls’?

Adding color to your ls directory listings is easy enough: just use ls --color. You can set this behavior as the default with alias ls='ls --color', which I personally find quite useful. It plays well with PuTTY.

The environment variable LS_COLORS dictates what colors are applied to what file types and file extensions.
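A small sketch of both pieces – the alias belongs in ~/.bashrc, and the di value below (bold blue directories) is just an example override:

# in ~/.bashrc
alias ls='ls --color=auto'

# show the default color database that dircolors generates
dircolors -p | less

# example: set only the directory color (di) to bold blue
export LS_COLORS='di=01;34'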
Read the rest of this entry »

Linux DHCP Server

September 15th, 2009

DHCP is an acronym for Dynamic Host Configuration Protocol. It allows a host to broadcast a request for its IP settings; hopefully, a DHCP server like the one we’ll be configuring will respond. Running tcpdump shows that a DHCP request looks like this:

17:26:02.003956 00:00:00:00:00:00 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 342: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request, length 300

Configuration is easy. To start with, just run ‘apt-get install dhcp3-server’ (the ISC DHCP server package on Debian).
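A minimal sketch of the server side, assuming the dhcp3-server package and a 192.168.1.0/24 LAN – all of the addresses are placeholders:

# write a minimal subnet declaration (config path used by dhcp3-server)
cat > /etc/dhcp3/dhcpd.conf <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;
}
EOF

/etc/init.d/dhcp3-server restart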
Read the rest of this entry »

Security Consultant – Ports & Port Knocking

September 10th, 2009

Port Knocking is a clever and interesting method of allowing remote firewall manipulation whilst leaving all ports closed to all IPs. When I attempt to initiate a TCP connection to a remote host, I send a packet with a ‘SYN’ flag, indicating my intention, along with other information such as a source port, destination port, source IP and destination IP. The target machine has the option of responding by accepting, responding by rejecting, or simply ignoring the packet altogether – known under iptables and most other firewalls as ACCEPT, REJECT or DROP.
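As a quick sketch of those three behaviors in iptables terms (port 22 is chosen arbitrarily):

# accept the connection attempt
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# actively refuse it - the client gets an immediate rejection back
iptables -A INPUT -p tcp --dport 22 -j REJECT --reject-with tcp-reset

# silently ignore it - the client's SYN simply times out
iptables -A INPUT -p tcp --dport 22 -j DROP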
Read the rest of this entry »

Shell Return Codes – Ping Monitoring

September 9th, 2009

BASH – the Bourne Again Shell – like most if not all other shells, allows each application to exit with a return code. Some shells and environments place limits on the range this integer can fall into; something between 0 and 255 inclusive is always a safe bet. In BASH, the variable $? is populated with the return code of the last command to return control back to the shell. It is important to preserve the return code immediately after the application we want to monitor exits, as subsequent commands will overwrite the variable. The ping tool returns 0 on success:

HOST="192.168.1.5"
ping -c1 -q ${HOST} >/dev/null 2>&1  #ping HOST once and do not print any output to the screen
RET=$?  #assign the return code to RET so we can preserve it for after the 'if'
if [ ${RET} -eq 0 ]; then
    #we were successful
    echo "We were successful"
else
    #we weren't successful
    echo "Host ${HOST} failed ping monitoring on `date`" | mail -s "Uptime Monitoring" admin@example.com
fi

Now of course there are easier ways of achieving the above task, but I’ve laid the script out this way in the hope that it illustrates capturing the return code and preserving it beyond the ‘if’ that follows, which would otherwise have overwritten it. As further illustration, calling ping invalid followed directly by echo $? shows a return code of ‘2’ – the return code for that failure. Calling echo $? again immediately afterwards shows a return code of ‘0’, as the return code of ping has been overwritten by the return code of the first echo statement. Bash builtins return codes to the shell just as any other application would.
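A quick illustration at the prompt – ‘invalid’ is just a hostname that doesn’t resolve, and the exact error text will vary with your ping version:

$ ping -c1 invalid
ping: unknown host invalid
$ echo $?
2
$ echo $?
0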

Security Consultant – PHP Developer – SQL Injection Attacks

September 6th, 2009

One of the most common forms of attack against web applications is SQL injection. For the most part, the language that the web application is written in is irrelevant, be that PHP, ASP, Python, Perl, C, etc. As long as the back end database uses something SQL based, be that MySQL, MSSQL, etc., we’re in business. This probably covers over 99% of web applications out there. Both the security consultant and the PHP developer – or web application developer in general – have to be aware of the implications of SQL injection. Here’s how it works:
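To sketch the core of the problem in shell terms – the table and field names here are made up, and the same string interpolation happens in whatever language the application is written in:

# attacker-supplied input for a 'username' field
USER_INPUT="' OR '1'='1"

# the application naively interpolates it straight into the query
QUERY="SELECT * FROM users WHERE username = '${USER_INPUT}';"
echo "${QUERY}"
# prints: SELECT * FROM users WHERE username = '' OR '1'='1';
# the WHERE clause is now always true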
Read the rest of this entry »

Security Consultant – PHP Developer – Exploiting Common PHP Code Flaws

September 4th, 2009

There are a number of PHP errors – and in fact programming errors in general – that PHP programmers and security consultants need to be aware of. Specifically, how can a malicious user use the code to gain access above what he is supposed to have?

Cross Site Scripting (XSS), Shell Execution and SQL Injection are all issues that programmers need to be aware of. Luckily, buffer overflows in their traditional sense are not something that PHP developers need to concern themselves with.

Here, in its most basic sense, is an example of how we can read arbitrary files on the filesystem that we should not have access to.
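The actual example follows in the rest of the entry; as a rough sketch of the class of flaw, a script that builds a file path from user input can often be probed from the command line like this (the script name and parameter are hypothetical):

# attempt a directory traversal against a hypothetical 'page' parameter
curl "http://www.example.com/view.php?page=../../../../etc/passwd"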
Read the rest of this entry »

Multithreaded Multi-Connection TCP Proxy Tunnel Update

September 4th, 2009

Further to post http://www.adampalmer.me/iodigitalsec/multithreaded-tcp-proxy-tunnel-code/

I have received a report from a user experiencing the following error:
# gcc -Wall -g -O2   -o tcp_tun tcp_tun.c  -lpthread
tcp_tun.c:44:37: error: getaddrinfo/getaddrinfo.h: No such file or directory
tcp_tun.c:45:37: error: getaddrinfo/getaddrinfo.c: No such file or directory

I think that this is a common error on distros without getaddrinfo available. I have packaged everything up with getaddrinfo, and a configure script/Makefile as well. Please let me know your feedback.

tcp_tun-0.3-beta
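Building from the new package should then be roughly as follows – this assumes the download above unpacks as a gzipped tarball into a directory of the same name:

tar xzf tcp_tun-0.3-beta.tar.gz
cd tcp_tun-0.3-beta
./configure && make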

Security Consultant – Basic NMAP Usage

September 2nd, 2009

nmap is one of the most useful tools for a security consultant in a penetration testing environment. It has a massive range of options, and only the most basic will be considered in this tutorial.

It goes without saying that nmap should only be run against IPs and ports that you yourself have gained authorization to test. Here goes:
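A few typical basics, sketched with placeholder targets (the full entry goes into more detail):

# TCP SYN scan of the default port range
nmap -sS 192.168.0.10

# scan specific ports only
nmap -sS -p 22,80,443 192.168.0.10

# detect service versions on whatever is open
nmap -sV 192.168.0.10

# sweep an entire subnet
nmap -sS 192.168.0.0/24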
Read the rest of this entry »