Linux Namespaces

November 23rd, 2014

Starting with kernel 2.6.24, there are six different types of Linux namespaces. Namespaces are useful for isolating processes from the rest of the system, without needing full low-level virtualization technology.

  • CLONE_NEWIPC: IPC Namespaces: SystemV IPC and POSIX Message Queues can be isolated.
  • CLONE_NEWPID: PID Namespaces: PIDs are isolated, meaning that a PID inside the namespace can duplicate a PID outside of it without conflict. PIDs inside the namespace are mapped to different PIDs outside of it. The first PID inside the namespace is ‘1’, the number which, outside of any namespace, is assigned to init.
  • CLONE_NEWNET: Network Namespaces: Networking (/proc/net, IPs, interfaces and routes) are isolated. Services can be run on the same ports within namespaces, and “duplicate” virtual interfaces can be created.
  • CLONE_NEWNS: Mount Namespaces. We have the ability to isolate mount points as they appear to processes. Using mount namespaces, we can achieve similar functionality to chroot() however with improved security.
  • CLONE_NEWUTS: UTS Namespaces. This namespace’s primary purpose is to isolate the hostname and NIS domain name.
  • CLONE_NEWUSER: User Namespaces. Here, user and group IDs can differ inside and outside of a namespace, and the same ID can be reused across namespaces.

Let’s look first at the structure of a C program required to demonstrate process namespaces. The following has been tested on Debian 6 and 7.

First, we need to allocate a page of memory on the stack, and set a pointer to the end of that memory page. We use alloca to allocate stack memory rather than malloc, which would allocate memory on the heap. clone expects a pointer to the top of the child’s stack, and on most architectures the stack grows downwards, which is why we point at the end of the page:

void *mem = alloca(sysconf(_SC_PAGESIZE)) + sysconf(_SC_PAGESIZE);

Next, we use clone to create a child process, passing the location of our child stack ‘mem’, as well as the required flags to specify a new namespace. We specify ‘callee’ as the function to execute within the child space:
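The call itself might look like the following. This is a sketch: the flag combination shown is an assumption, as any mix of the CLONE_NEW* flags listed above can be used, and creating new namespaces requires root:

```
/* Flags are illustrative: pick the CLONE_NEW* flags you need.
 * SIGCHLD in the flag word makes the child signal us on exit,
 * so the waitpid() call below behaves as it would after fork(). */
mypid = clone(callee, mem,
              SIGCHLD | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWNS, NULL);
if (mypid < 0) {
	perror("clone");
	return 1;
}
```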


After calling clone we then wait for the child process to finish before terminating the parent. Otherwise, the parent’s execution flow would continue and terminate immediately, taking the child down with it:

while (waitpid(mypid, &r, 0) < 0 && errno == EINTR)
	;	/* retry if interrupted by a signal; mypid is the PID returned by clone */

Lastly, we’ll return to the shell with the exit code of the child:

	return WEXITSTATUS(r);

Now, let’s look at the callee function:

static int callee(void *arg)
{
	int ret;
	mount("proc", "/proc", "proc", 0, "");
	setgroups(0, NULL);
	ret = execl("/bin/bash", "/bin/bash", NULL);
	return ret;
}

Here, we mount a /proc filesystem so that tools such as ps see the new PID namespace, drop all supplementary groups with setgroups, and then spawn the /bin/bash shell.

Debian Wheezy Xen + Guest Howto

October 8th, 2014

Xen is usually my go-to virtualization technology for Linux. Here’s a HOWTO on setting up Xen on Debian Wheezy and creating the first guest virtual machine.

The first step is installing the required packages:

apt-get install xen-linux-system xen-tools xen-utils-4.1 xen-utils-common xenstore-utils xenwatch

Now, we’ll need to specify the Xen kernel as the default boot kernel on the host, and then reboot:
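One common way to do this on Wheezy is to raise the priority of GRUB’s Xen script so the hypervisor entry boots first; a sketch, using the Debian default paths:

```
# Move the Xen GRUB script ahead of the stock Linux entries
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub
reboot
```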

MySQL Master-Master Replication, Heartbeat, DRBD, Apache, PHP, Varnish MegaHOWTO

October 8th, 2014

I created this HOWTO while building a new development environment today. The intention is to take a single Apache2/Varnish/MySQL environment and scale it to two servers, with one acting as a “hot-standby”, increasing redundancy and continuity whilst maintaining current performance. This HOWTO is based on Linux Debian-76-wheezy-64-minimal 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u3 x86_64

Our current server has IP and our new server has IP

Section #1: Set up MySQL Master/Master Replication

First, we’ll set up MySQL master to master replication. In this configuration, data can be written to and read from either host. Bear in mind that auto-increment fields can clash when both masters are written to at the same time. There are other caveats with replication, so be sure to research them, along with how to deal with corruption and repair, before considering this setup for a live application. Also be sure to use the same version of MySQL on both servers; this may not always be necessary, but unless you are very familiar with any changes between versions, not doing so could spell disaster.
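As a sketch, the relevant my.cnf fragment might look like this (the database name and paths are illustrative); the auto-increment settings keep simultaneous inserts on the two masters from colliding:

```
# /etc/mysql/my.cnf on server 1 (server 2 is identical except
# server-id = 2 and auto-increment-offset = 2)
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
binlog_do_db             = mydb        # database to replicate (illustrative)

# Step auto-increment keys by 2: server 1 takes odd values, server 2
# even values, so inserts on both masters never produce the same key
auto-increment-increment = 2
auto-increment-offset    = 1
```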


Linux iproute2 multiple default gateways

October 5th, 2014

This article describes a Linux server set up with two interfaces, eth0 and eth1. Each interface has a separate ISP, its own network details and its own default gateway. eth0 carries two sets of network details on the same physical interface, so a virtual interface (eth0:0) must be created to handle the second IP.

By default, Linux only allows for one default gateway. Let’s see what happens if we try to use multiple uplinks with 1 default gateway. Assume eth0 is assigned and eth1 is assigned Let’s say our default gateway is (which of course is found through eth0) but there’s also a gateway on eth1 which we can’t enter as Linux only allows for the one.

Our routing table now looks like this:

root@www1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                UG    0      0        0 eth0
                                                U     0      0        0 eth0
                                                U     0      0        0 eth1

If a packet reaches us through the gateway on eth1 from some remote host, our machine will receive it. When it tries to reply, however, it runs down the routing table, sees that the destination is not local to eth0 or eth1, and routes the reply out through the default gateway on eth0. That is the wrong gateway for this flow, so the target machine will ignore our response, since it arrives from the wrong IP.

Using iproute2, Linux can track multiple routing tables and therefore multiple default gateways. If a packet comes in through a given interface to a given IP, its reply will go out via that uplink’s own default gateway. The script to achieve this is as follows:
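A sketch of such a script; the addresses, gateways and table names below are placeholders for the real values, and the tables are assumed to be declared in /etc/iproute2/rt_tables (e.g. “1 isp1” and “2 isp2”):

```
# Placeholder addresses: substitute your real IPs and gateways
IP0="192.0.2.10";    GW0="192.0.2.1"      # eth0 / ISP 1
IP1="198.51.100.10"; GW1="198.51.100.1"   # eth1 / ISP 2

# One routing table per uplink, each with its own default gateway
ip route add default via "$GW0" dev eth0 table isp1
ip route add default via "$GW1" dev eth1 table isp2

# Policy rules: replies sourced from each address use that uplink's table
ip rule add from "$IP0" table isp1
ip rule add from "$IP1" table isp2
ip route flush cache
```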


Debian Linux Wheezy OpenVPN & Squid3 HOWTO with Transparent Proxying

October 4th, 2014

Before my last extended period travelling and using public networks, I decided to set up a new low spec virtual machine on one of my hosted servers. I trust my datacenter and their uplinks more than I trust the free WiFi and public networks I travel through, and so while all my internet traffic is being routed over an encrypted tunnel to my dedicated server, I’m a lot happier.

I threw Squid3 into the mix, as it caches common assets and the sites I visit. This speeds up my web access and page load time.

OpenVPN can be configured more simply with a ‘static key’ setup; however, I’ve chosen to go down the PKI route to allow for future growth. On my new VPN server I run:

apt-get install openvpn

Once OpenVPN is installed, I’ll need to set up my PKI system: a certificate authority (CA), a server certificate (vpn) and my first client certificate (npn).
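On Wheezy, the OpenVPN package ships the easy-rsa 2.0 scripts, which can drive the whole PKI. A sketch (the ‘vpn’ and ‘npn’ names match the certificates above; the destination directory is my choice):

```
cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0 /etc/openvpn/easy-rsa
cd /etc/openvpn/easy-rsa
. ./vars                  # edit the KEY_* defaults in ./vars first if desired
./clean-all               # initialise the keys/ directory
./build-ca                # certificate authority
./build-key-server vpn    # server certificate
./build-key npn           # first client certificate
./build-dh                # Diffie-Hellman parameters
```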


Exim, DKIM and Debian Configuration

July 11th, 2014

DKIM is a system for cryptographically signing messages and confirming that they were sent from a server authorized at the domain level. A private and public key pair is generated: the private key is used to sign outgoing messages, and the public key is published as a DNS TXT record for the domain name. This allows recipients to electronically verify that mail claiming to be from a domain was actually sent by a server authorized to send mail on behalf of that domain. Implementing DKIM in a mail system increases trust and deliverability.

Setting up Exim to sign outgoing mail under DKIM (Domain Keys Identified Mail) is a reasonably quick and simple task. Assuming you’re using an up to date version of Debian with Exim4, the process is even easier.
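Generating the key pair itself is straightforward with openssl; the file names, and the selector ‘x’ in the DNS record, are illustrative:

```shell
# Generate a 2048-bit RSA key pair for DKIM signing
openssl genrsa -out dkim.private.key 2048
openssl rsa -in dkim.private.key -pubout -out dkim.public.key

# The public key (minus the BEGIN/END header lines) is published as a
# DNS TXT record, e.g.:
#   x._domainkey.example.com  IN TXT  "v=DKIM1; k=rsa; p=<public key>"
```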

Enumerating and Hacking NFS

December 4th, 2013

Network File System (NFS) is used to share files and directories over the network through ‘exports’. When a client wants to gain access to a share on a remote server, it will first attempt to mount the share. The list of allowed clients per share is located in /etc/exports on the server. The problem with this approach is that the only credential for access is the client’s IP address: if a trusted machine is compromised or otherwise spoofed, the attacker has full access to the share. All versions of NFS prior to version 4 use this same security model. The next issue to take into account is that wildcards (‘*’) are permitted within the exports file. Site administrators often use wildcards to allow a range of hosts access to a share without thinking through the implications. Future changes to the network, or network breaches, may then give a user access to a share that the administrator had never intended.
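A quick way to enumerate a server’s exports and test access from a client; the target address here is a placeholder, and showmount comes from the nfs-common package on Debian:

```
# List the exports (and their allowed clients) advertised by the server
showmount -e 192.0.2.50

# If our address (or a wildcard) matches, we can simply mount the share
mkdir -p /mnt/nfs
mount -t nfs 192.0.2.50:/home /mnt/nfs
```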
