Grant Cohoe

Cloud plumber, bit wrangler, solution engineer, hockey nut.

Dynamic Dual-Home Linux Server

Requirements

I have a shiny new VM server sitting in my dorm room. I have access to two networks, one operated by my dorm organization and the other provided to me by RIT. Both get me to the internet, but do so through different paths/SLAs. I want my server to be accessible from both networks. I also want to be able to attach VM hosts to NAT’d networks behind each respective network. This gives me a total of four possible VM networks (primary-external, primary-internal, secondary-external, secondary-internal). I have a Hurricane Electric IPv6 tunnel endpoint configured on the server, and want IPv6 connectivity available on ALL networks regardless of being external or internal. And to complicate matters, my external IP addresses are given to me via DHCP so I cannot set anything statically. Make it so.

Dependencies

  • IPTables
  • EBTables
  • Kernel 2.6 or newer
  • IPCalc

You also need access to two networks.

Script Setup

First, like any good shell script, we should define our command paths:

SERVICE=/sbin/service
IPTABLES=/sbin/iptables
IP6TABLES=/sbin/ip6tables
EBTABLES=/sbin/ebtables
IPCALC=/bin/ipcalc

Next we’ll define the interface names that we are going to use. These should already be pre-configured and have their appropriate IP information.

PRIMARY_EXTERNAL_INTERFACE="primary-net"
PRIMARY_INTERNAL_INTERFACE="primary-nat"
SECONDARY_EXTERNAL_INTERFACE="secondary-net"
SECONDARY_INTERNAL_INTERFACE="secondary-nat"
IPV6_TUNNEL_INTERFACE="he-ipv6"

Subnet Information

Now we need to load in the relevant subnet layouts. In an ideal world these would be set statically. However, due to the nature of the environment I am in, both of my networks hand me my addresses via DHCP and they are subject to change. This is very dirty and not very efficient, but it works:

# Here we will get the subnet information for the primary network
PRIMARY_EXTERNAL_NETWORK=`$IPCALC -n $(ip -4 addr show dev $PRIMARY_EXTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6) | cut -d = -f 2`
PRIMARY_EXTERNAL_PREFIX=`$IPCALC -p $(ip -4 addr show dev $PRIMARY_EXTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6) | cut -d = -f 2`
PRIMARY_EXTERNAL_NETWORK="$PRIMARY_EXTERNAL_NETWORK/$PRIMARY_EXTERNAL_PREFIX"
# Then the NAT'd network behind the primary
PRIMARY_INTERNAL_NETWORK=`ip -4 addr show dev $PRIMARY_INTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6 | sed "s/[0-9]\+\//0\//"`
# Now the subnet information for the secondary network
SECONDARY_EXTERNAL_NETWORK=`$IPCALC -n $(ip -4 addr show dev $SECONDARY_EXTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6) | cut -d = -f 2`
SECONDARY_EXTERNAL_PREFIX=`$IPCALC -p $(ip -4 addr show dev $SECONDARY_EXTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6) | cut -d = -f 2`
SECONDARY_EXTERNAL_NETWORK="$SECONDARY_EXTERNAL_NETWORK/$SECONDARY_EXTERNAL_PREFIX"
# And its NAT'd network.
SECONDARY_INTERNAL_NETWORK=`ip -4 addr show dev $SECONDARY_INTERNAL_INTERFACE | grep "inet" | cut -d ' ' -f 6 | sed "s/[0-9]\+\//0\//"`

# This is where we load in the IP addresses of the interfaces
PRIMARY_EXTERNAL_IP=`ip -4 addr show dev $PRIMARY_EXTERNAL_INTERFACE | grep inet | cut -d ' ' -f 6 | sed "s/\/[0-9]\+$//"`
PRIMARY_INTERNAL_IP=`ip -4 addr show dev $PRIMARY_INTERNAL_INTERFACE | grep inet | cut -d ' ' -f 6 | sed "s/\/[0-9]\+$//"`
SECONDARY_EXTERNAL_IP=`ip -4 addr show dev $SECONDARY_EXTERNAL_INTERFACE | grep inet | cut -d ' ' -f 6 | sed "s/\/[0-9]\+$//"`
SECONDARY_INTERNAL_IP=`ip -4 addr show dev $SECONDARY_INTERNAL_INTERFACE | grep inet | cut -d ' ' -f 6 | sed "s/\/[0-9]\+$//"`

# We get the gateways by pinging once out of the appropriate interface to the multicast address of All Routers on the network. 
PRIMARY_GATEWAY_IP=`ping -I $PRIMARY_EXTERNAL_IP 224.0.0.2 -c 1 | grep "icmp_seq" | cut -d : -f 1 | awk '{print $4}'`
SECONDARY_GATEWAY_IP=`ping -I $SECONDARY_EXTERNAL_IP 224.0.0.2 -c 1 | grep "icmp_seq" | cut -d : -f 1 | awk '{print $4}'`

System Configuration

In order to get most of these features working, we need to enable IP forwarding in the kernel.

echo "Enabling IP forwarding..."
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

Routes

The routes are the most critical part of making this whole system work. We use two cool features of the Linux networking stack, Route Tables and Route Rules. Route Tables are similar to VRF tables in Cisco-land. Basically, you maintain several different routing tables on a single system in addition to the main one. In order to specify which table a packet should use, you configure Route Rules. A rule basically says "packets that match this rule should use table A". In this system I use rules to force a packet to use a route table based on its source address.
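One thing the script assumes: the named tables referenced below (such as primary-net-routes) must be declared in /etc/iproute2/rt_tables before ip will accept them. A minimal sketch, using the interface names defined above and arbitrary unused table numbers:

# Appended to /etc/iproute2/rt_tables
100     primary-net-routes
101     primary-nat-routes
102     secondary-net-routes
103     secondary-nat-routes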


# Kernel IP Routes
echo "Setting system default gateways"
ip route del default
ip route add default via $PRIMARY_GATEWAY_IP
ip -6 route add ::/0 dev $IPV6_TUNNEL_INTERFACE

echo "Adding default routes for primary external network..."
ip route flush table $PRIMARY_EXTERNAL_INTERFACE-routes
ip route add $PRIMARY_EXTERNAL_NETWORK dev $PRIMARY_EXTERNAL_INTERFACE src $PRIMARY_EXTERNAL_IP table $PRIMARY_EXTERNAL_INTERFACE-routes
ip route add default via $PRIMARY_GATEWAY_IP table $PRIMARY_EXTERNAL_INTERFACE-routes

echo "Adding default routes for secondary external network..."
ip route flush table $SECONDARY_EXTERNAL_INTERFACE-routes
ip route add $SECONDARY_EXTERNAL_NETWORK dev $SECONDARY_EXTERNAL_INTERFACE src $SECONDARY_EXTERNAL_IP table $SECONDARY_EXTERNAL_INTERFACE-routes
ip route add default via $SECONDARY_GATEWAY_IP table $SECONDARY_EXTERNAL_INTERFACE-routes

echo "Creating routes for primary internal network..."
ip route flush table $PRIMARY_INTERNAL_INTERFACE-routes
ip route add $PRIMARY_INTERNAL_NETWORK dev $PRIMARY_INTERNAL_INTERFACE table $PRIMARY_INTERNAL_INTERFACE-routes
ip route add default via $PRIMARY_GATEWAY_IP table $PRIMARY_INTERNAL_INTERFACE-routes

echo "Creating routes for secondary internal network..."
ip route flush table $SECONDARY_INTERNAL_INTERFACE-routes
ip route add $SECONDARY_INTERNAL_NETWORK dev $SECONDARY_INTERNAL_INTERFACE table $SECONDARY_INTERNAL_INTERFACE-routes
ip route add default via $SECONDARY_GATEWAY_IP table $SECONDARY_INTERNAL_INTERFACE-routes

echo "Creating route rules for primary external network..."
ip rule del from $PRIMARY_EXTERNAL_IP
ip rule add from $PRIMARY_EXTERNAL_IP table $PRIMARY_EXTERNAL_INTERFACE-routes

echo "Creating route rules for secondary external network..."
ip rule del from $SECONDARY_EXTERNAL_IP
ip rule add from $SECONDARY_EXTERNAL_IP table $SECONDARY_EXTERNAL_INTERFACE-routes

echo "Creating route rules for primary internal network..."
ip rule del from $PRIMARY_INTERNAL_NETWORK
ip rule add from $PRIMARY_INTERNAL_NETWORK lookup $PRIMARY_INTERNAL_INTERFACE-routes

echo "Creating route rules for secondary internal network..."
ip rule del from $SECONDARY_INTERNAL_NETWORK
ip rule add from $SECONDARY_INTERNAL_NETWORK lookup $SECONDARY_INTERNAL_INTERFACE-routes

Firewall

My script also maintains the host firewall rules, so we’ll enable those:

echo "Reseting firewall rules..."
$SERVICE iptables restart
$SERVICE ip6tables restart

echo "Setting up IPv4 firewall rules..."
$IPTABLES -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
$IPTABLES -A INPUT -p ipv6 -j ACCEPT
$IPTABLES -A INPUT -p icmp -j ACCEPT
$IPTABLES -A INPUT -i lo -j ACCEPT
$IPTABLES -A INPUT -s mason.csh.rit.edu -p tcp --dport 5666 -j ACCEPT
$IPTABLES -A INPUT -p tcp --dport 53 -j ACCEPT
$IPTABLES -A INPUT -p udp --dport 53 -j ACCEPT
$IPTABLES -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
$IPTABLES -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
$IPTABLES -A INPUT -j REJECT --reject-with icmp-host-prohibited

echo "Setting up IPv6 firewall rules..."
$IP6TABLES -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
$IP6TABLES -A INPUT -p ipv6-icmp -j ACCEPT
$IP6TABLES -A INPUT -i lo -j ACCEPT
$IP6TABLES -A INPUT -p tcp  --dport 53 -j ACCEPT
$IP6TABLES -A INPUT -p udp  --dport 53 -j ACCEPT
$IP6TABLES -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
$IP6TABLES -A INPUT -j REJECT --reject-with icmp6-adm-prohibited

echo "Setting up firewall rules for primary internal network..."
$IPTABLES -A FORWARD -s $PRIMARY_INTERNAL_NETWORK -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -s $PRIMARY_INTERNAL_NETWORK -j SNAT --to-source $PRIMARY_EXTERNAL_IP

echo "Setting up firewall rules for secondary internal network..."
$IPTABLES -A FORWARD -s $SECONDARY_INTERNAL_NETWORK -j ACCEPT
$IPTABLES -t nat -A POSTROUTING -s $SECONDARY_INTERNAL_NETWORK -j SNAT --to-source $SECONDARY_EXTERNAL_IP

echo "Setting up default firewall actions..."
$IPTABLES -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
$IPTABLES -A FORWARD -j REJECT --reject-with icmp-port-unreachable

Router Advertisements

One of the features of my networking setup is that all networks, whether internal or external, have IPv6 connectivity. This is achieved by sending Router Advertisements out all interfaces. This is all well and good for the VMs that I am hosting, but these advertisements also travel out of the physical interfaces to other hosts on the network! This is not good, since it lets other users use my IPv6 tunnel. This puzzled me for a long time until I discovered a handy little program called "ebtables". Ebtables is basically iptables for layer 2. With it, I was able to filter router advertisements from ever leaving the physical interfaces.

echo "Blocking IPv6 router advertisement to the world..."
$SERVICE radvd stop
$SERVICE ebtables restart
$EBTABLES -A OUTPUT -d 33:33:0:0:0:1 -o eth1 -j DROP
$EBTABLES -A OUTPUT -d 33:33:0:0:0:1 -o eth0 -j DROP
$SERVICE radvd start

End Result

You now have a dual-homed server with two external networks, two internal networks, and IPv6 connectivity to all. Both external networks can receive their configuration via DHCP and will dynamically adjust their routing accordingly.
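A quick way to sanity-check the result is to inspect the rules and the per-table routes (assuming the interface names from above, which double as the route table names):

ip rule show
ip route show table primary-net-routes
ip route show table secondary-net-routes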

Facebook Friend Guidelines

If I am the requester or the requestee, it means that I have had a conversation with you (not chat/text, an actual person-to-person conversation) on at least two separate occasions. (Occasionally this may be reduced to one, depending on circumstances.) Do not confuse this with what I consider a "dialog". A dialog is nothing more than "Hi! How are you today? What's up?". A conversation entails that we discussed and compared our thoughts on at least one topic at length (>5 min). Also, if you are from my present, it means I consider there to be a statistically significant chance that we will see each other in person again in the future. If you are from my past, then we talked at length on several (>2) occasions. If you are from my future, well, that feature hasn't been implemented yet and will probably cause me to segfault.

The logic behind these guidelines is relatively simple. There is no substitute for real-life interaction. The idea of "meeting someone over Facebook" doesn't sit well with me. Whenever I see a person friend-request someone they don't really know and have never had any real conversation with, it makes me question their ability to do so face-to-face. It also helps avoid the potential "stalker" effect that can come about easily with social media.

Take, for example, Jack and Jill. Jack is an avid fan of a school sports team. Jill is his favorite player on the sports team. Jack and Jill do not know each other, and have had nothing more than a dialog (as defined earlier) between them at various meet-n-greet events. Jill gets a friend-request from Jack several days later. Jill, not wanting to upset Jack, accepts the friend-request. She now encounters frequent chat messages from Jack that lead to quick and dull conversations. Various other actions get Jack labeled as a "stalker" in Jill's mind. This is not where Jack wanted to end up, as he is now considered "the bad guy". While his motives (assume everything here to be at face value, no hidden agendas) are honorable, they have led to an awkward situation for both of them.

By my guidelines, Jill should never have accepted the friend-request. Similarly, Jack should never have sent one. Since neither of them has had any real in-person conversation with the other, they have no common ground on which to establish a friendship.

I will occasionally receive friend-requests from people from High School. These I find myself subjecting to a higher degree of scrutiny since I do not associate with this crowd a whole lot anymore. If I did not particularly enjoy your presence back then, I probably have no reason to change that.

Secure Mirroring Highly-Available OpenLDAP Cluster

Background

My organization stores user information in an OpenLDAP directory server. This includes contact information, shell preferences, iButton IDs, etc. The directory server does not store users' passwords and relies on a Kerberos KDC to provide authentication via SASL. This is a widely supported configuration and is popular because you can achieve SSO (Single Sign-On) and keep passwords out of your directory. Our shell access machines are configured to do LDAP authentication of users, which basically instructs PAM to do a bind to the LDAP server using the credentials provided by the user. If the bind is successful, the user logs in. If not, well, sucks to be them. Before this project we had a single server handling the directory, running CentOS 5.8/OpenLDAP 2.3. If this server went poof, all user data would be lost and no one could log into any other servers (or websites, if configured with Stanford WebAuth; see the guide for more on this). Obviously this is a problem that needed correcting.

Technology

LDAP Synchronization

OpenLDAP 2.4 introduced a method of replication called "MirrorMode". This allows you to use the already existing syncrepl protocol to synchronize data between multiple servers. Previous versions (OpenLDAP <= 2.3) only allowed a master-slave style of replication where the slaves are read-only and refuse to perform changes. Obviously this has some limitations: if your master (or "provider" in OpenLDAP-ish) were to die, your users could not make any changes until you got it back online. Using MirrorMode, you can create an "N-way Multi-Master" topology that allows all servers to be read/write and instantly replicate their changes to the others.

High Availability

To enable users to have LDAP access even if one of the servers dies, we need a way to keep the packets flowing and automatically fail over to the secondary servers. Linux has a package called "Heartbeat" that provides this sort of functionality. If you are familiar with Cisco HSRP or the open-source VRRP, it works the same way. Server A is assigned an IP address (let's say 10.0.0.1) and Server B is assigned a different address (let's say 10.0.0.2). Heartbeat is configured to provide a third IP address (10.0.0.3) that will always be available between the two. On the primary server, an ethernet alias is created with the virtualized IP address. Periodic heartbeats (keep-alive packets) are sent out to the other servers to indicate "I'm still here!". Should these messages start disappearing (like when the server dies), the secondary will notice the lack of updates, automatically create a similar alias interface on itself, and assign that virtualized IP. This allows you to give your clients one host, but have it seamlessly float between several real ones.

SASL Authentication

The easy way of configuring MirrorMode requires you to store your replication DN's credentials in plaintext in the config file. Obviously this is not very secure. Instead, we can use the host's Kerberos principal to bind to the LDAP server as the replication DN and perform all of the tasks we need to do. This is much better than plaintext!

SSL

Since we like being secure, we should really be using LDAPS (LDAP + SSL) to get our data. We will set this up too, using our PKI, but save it for the end since we want to ensure that our core functionality actually works first.

New Servers

I spun up two new machines; let's call them "ldap1.example.com" and "ldap2.example.com". Each of them runs my favorite Scientific Linux 6.2, but with OpenLDAP 2.4. You cannot do MirrorMode between 2.3 and 2.4.

Configuration

Packages

First we need to install the required packages for all of this nonsense to work.

  • openldap (LDAP libraries)
  • openldap-servers (Server)
  • openldap-clients (Client utilities)
  • cyrus-sasl (SASL daemon)
  • cyrus-sasl-ldap (LDAP authentication for SASL)
  • cyrus-sasl-gssapi (Kerberos authentication for SASL)
  • krb5-workstation (Kerberos client utilities)
  • heartbeat (Failover)

If you are coming off of a fresh minimal install, you might want to install openssh-clients and vim as well. Some packages may not be included in your base distribution and may need other repositories (EPEL, etc)

Kerberos Keytabs

Each host needs its own set of host/ and ldap/ principals, as well as a shared set for the virtualized address. In the end, you need a keytab with the following principals:

  • host/ldap1.example.com@EXAMPLE.COM
  • ldap/ldap1.example.com@EXAMPLE.COM
  • host/ldap.example.com@EXAMPLE.COM
  • ldap/ldap.example.com@EXAMPLE.COM

WARNING: Each time you write a -randkey'd principal to a keytab, its KVNO (Key Version Number) is increased, thus invalidating all previously written copies of that principal. You need to merge the shared principals into each host's keytab. See my guide on Kerberos Utilities for information on doing this.

Put this file at /etc/krb5.keytab and set an ACL on it such that the LDAP user can read it.

[root@ldap1 ~]# chmod 640 /etc/krb5.keytab 
[root@ldap1 ~]# setfacl -m u:ldap:r /etc/krb5.keytab 
[root@ldap1 ~]# 

If you see something like “kerberos error 13” or “get_sec_context: 13” it means that someone cannot read the keytab, usually the LDAP server. Fix it.

NTP

Kerberos and the OpenLDAP synchronization require synchronized clocks. If you aren’t already set up for this, follow my guide for setting up NTP on a system before continuing.

SASL

You need saslauthd to be running on both LDAP servers. If you do not already have this set up, follow my guide for SASL Authentication before continuing.

Logging

We will be using Syslog and Logrotate to manage the logfiles for the LDAP daemon (slapd). By default it will spit out to local4. This is configurable depending on your system, but I am leaving it at the default. Add an entry to your rsyslog config file (usually /etc/rsyslog.conf) for slapd:

local4.*                /var/log/slapd.log

Now, unless we tell logrotate to rotate this file, it will get infinitely large and cause you lots of headaches. I created a config file to automatically rotate this log according to the system defaults. This was done at /etc/logrotate.d/slapd:

/var/log/slapd.log {
    missingok
}

Then restart rsyslog. Logrotate runs on a cron job and is not required to be restarted.

Data Directory

Since I am grabbing the data from my old LDAP server, I will not be setting up a new data directory. On the old server and on the new master, the data directory is /var/lib/ldap/. I simply scp’d all of the contents from the old server over to this directory. If you do this, I recommend stopping the old server for a moment to ensure that there are no changes occurring while you work. After scp-ing everything, make sure to chown everything to the LDAP user. I also recommend running slapindex to ensure that all data gets reindexed.
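Roughly, the copy looks like this ("oldserver" is a stand-in for the old LDAP host, and slapd on it should be stopped while you copy). Note that slapindex run as root can leave root-owned index files, hence the second chown:

[root@ldap1 ~]# scp -r root@oldserver:/var/lib/ldap/* /var/lib/ldap/
[root@ldap1 ~]# chown -R ldap:ldap /var/lib/ldap
[root@ldap1 ~]# slapindex
[root@ldap1 ~]# chown -R ldap:ldap /var/lib/ldap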

Client Library Configuration

Now, to make it convenient to test your new configuration, edit your /etc/openldap/ldap.conf to change the URI and add the certificate settings.

BASE            dc=example,dc=com
URI             ldaps://ldap.example.com
TLS_CACERT      /etc/pki/tls/example-ca.crt
TLS_REQCERT     allow

When configuring this file on the servers, TLS_REQCERT must be set to allow since we are doing Syncrepl over LDAPS. Since we are using a shared certificate for ldap.example.com, it will not match the real hostname of each server and strict verification would fail. On all of your clients you should certainly "demand" the certificate, but on the servers that would prevent us from accomplishing Syncrepl over LDAPS.

Certificates for LDAPS

Since people want to be secure, OpenLDAP has the ability to run sessions inside of TLS tunnels. This works the same way HTTPS traffic does. To do this you need to have the ability to generate SSL certificates from a given CA. This procedure varies from organization to organization. Regardless of your method, the hostname you choose is critical, as this will be the name that is verified. In this setup, we are creating a host called "ldap.example.com" that all LDAP clients will be configured to use. As such, the same SSL certificate, issued for "ldap.example.com", will be generated and placed on each server.

I placed the public certificate at /etc/pki/tls/certs/slapd.pem, the secret key in /etc/pki/tls/private/slapd.pem, and my CA certificate at /etc/pki/tls/example-ca.crt. After obtaining my files, I verify them to make sure that they will actually work:

[root@ldap1 ~]# openssl verify -CAfile /etc/pki/tls/example-ca.crt /etc/pki/tls/certs/slapd.pem
/etc/pki/tls/certs/slapd.pem: OK
[root@ldap1 ~]# 

You need to allow the LDAP account to read the private key:

[root@ldap1 ~]# setfacl -m u:ldap:r /etc/pki/tls/private/slapd.pem

Server Configuration

If you are using a previous server configuration, just scp the entire /etc/openldap directory over to the new servers. Make sure you blow away any files or directories that may have been added for you before you copy. If you are not, you might need to do a bit of setup. OpenLDAP 2.4 uses a new method of configuration called "cn=config", which stores the configuration data in the LDAP database. However, since this is not fully supported by most clients, I am still using the config file. For setting up a fresh install, see one of my other articles on this. (Still in progress at the time of this writing.)

The following directives will need to be placed in your slapd.conf for the SSL certificates:

TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/private/slapd.pem
TLSCACertificateFile /etc/pki/tls/example-ca.crt

Depending on your system, you may need to configure the daemon to run on ldaps://. On Red-Hat based systems this is in /etc/sysconfig/ldap.

You need to add two things to your configuration for Syncrepl to function. First to your slapd.conf:

# Syncrepl ServerID
serverID 001

# Syncrepl configuration for mirroring instant replication between another 
# server. The binddn should be the host/ principal of this server 
# stored in the Kerberos keytab
syncrepl rid=001
  provider=ldaps://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=sasl
  binddn="cn=ldap1,ou=hosts,dc=example,dc=com"
mirrormode TRUE

The serverID value must uniquely identify the server. Make sure you change it when inserting the configuration onto the secondary server. Likewise change the Syncrepl RID as well for the same reason.
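On the secondary, the provider and binddn flip around to point at the primary as well. A sketch of the block as it would look on ldap2.example.com, assuming the same layout as above:

serverID 002

syncrepl rid=002
  provider=ldaps://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=sasl
  binddn="cn=ldap2,ou=hosts,dc=example,dc=com"
mirrormode TRUE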

Secondly, you need to allow the replication binddns full read access to all objects. This should go in your ACL file (which could be your slapd.conf, but shouldn't be).

access to *
        by dn.exact="cn=ldap1,ou=hosts,dc=example,dc=com" read
        by dn.exact="cn=ldap2,ou=hosts,dc=example,dc=com" read

WARNING: If you do not have an existing ACL setup, doing just these entries will prevent anything else from doing anything with your server. These directives are meant to be ADDED to an existing ACL.

You should be all set and ready to start the server.

[root@ldap1 ~]# service slapd start
Starting slapd:                                            [  OK  ] 
[root@ldap1 ~]# 

Make sure that when you do your SASL bind, the server reports your DN correctly. To verify this:

ldap1 ~ # su ldap -s /bin/bash
bash-4.1$ kinit -k -t /etc/krb5.keytab host/ldap1.example.com
bash-4.1$ ldapwhoami -Q
dn:cn=ldap1,ou=hosts,dc=example,dc=com
bash-4.1$

That DN is the one that should be in your ACL and have read access to everything.

Since the LDAP user will always need this principal, I recommend adding a cronjob to keep the ticket alive. I wrote a script that will renew your ticket and fix an SELinux permissions problem as well. You can get it here. Just throw it into your /usr/local/sbin/ directory. There are better ways of doing this, but this one works just as well.

0 */2 * * * /usr/local/sbin/slaprenew
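For a rough idea of what such a renewal job does (this is a hypothetical sketch, not the script linked above): it re-initializes the ticket from the keytab as the ldap user. The linked script additionally fixes the SELinux context on the credentials cache; how to do that depends on your policy.

#!/bin/bash
# Hypothetical sketch of a ticket-renewal job, not the slaprenew script itself.
su - ldap -s /bin/bash -c 'kinit -k -t /etc/krb5.keytab host/ldap1.example.com'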

Firewall

Don't forget to open up ports 389 and 636 for LDAP and LDAPS!

Searching

After configuring both the master and the secondary servers, we now need to get the data initially replicated across your secondaries. Assuming you did as I did and copied the data directory from an old server, your master is the only one that has real data. Start the service on this machine. If all went according to plan, you should be able to search for an object to verify that it works.

[root@ldap1 ~]# kinit user
Password for user@EXAMPLE.COM:
[root@ldap1 ~]# ldapsearch -h localhost uid=user uidNumber -Q -LLL
dn: uid=user,ou=users,dc=example,dc=com
uidNumber: 10046

[root@ldap1 ~]#

If you get errors, increase the debug level (-d 1) and work out any issues.

Replication

After doing a kinit as the LDAP user described above (host/ldap2.example.com), start the LDAP server on the secondary. If you tail the slapd log file, you should start seeing replication events occurring very rapidly. This is the server loading its data from the provider specified in slapd.conf. Once it is done, try searching the server in a similar fashion to the above.

[root@ldap2 ~]# tail /var/log/slapd.log
slapd[15759]: syncrepl_entry: rid=001 be_search (0)
slapd[15759]: syncrepl_entry: rid=001 uid=user,ou=Users,dc=example,dc=com
slapd[15759]: syncrepl_entry: rid=001 be_add uid=user,ou=Users,dc=example,dc=com (0)
slapd[15759]: syncrepl_entry: rid=001 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
slapd[15759]: syncrepl_entry: rid=001 inserted UUID 84ba3544-5be8-102b-9a32-4dfcdb785320
slapd[15759]: syncrepl_entry: rid=001 be_search (0)
slapd[15759]: syncrepl_entry: rid=001 uid=user,ou=Users,dc=example,dc=com
slapd[15759]: syncrepl_entry: rid=001 be_add uid=user,ou=Users,dc=example,dc=com (0)
slapd[15759]: syncrepl_entry: rid=001 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
slapd[15759]: syncrepl_entry: rid=001 inserted UUID 84c04164-5be8-102b-9a33-4dfcdb785320
slapd[15759]: syncrepl_entry: rid=001 be_search (0)

Heartbeat

Heartbeat provides failover for configured resources between several Linux systems. In this case we are going to provide high-availability of an alias IP address (10.0.0.3) which we will distribute to clients as the LDAP server (ldap.example.com). Heartbeat configuration is very simple and only requires three files:

/etc/ha.d/haresources contains the resources that we will be failing over. It should be the same on both servers.

ldap1.example.com IPaddr::10.0.0.3/24/eth0:1/10.0.0.255

The reason the hostname above is not "ldap.example.com" is that the entry must be a node listed in the configuration file (below).

/etc/ha.d/authkeys contains a shared secret used to secure the heartbeat messages. This is to ensure that someone doesn't just throw up a server and claim to be a part of your cluster.

auth 1
1 md5 mysupersecretpassword

Make sure to chmod it to something not world-readable (600 is recommended).
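On both servers, assuming the default path:

[root@ldap1 ~]# chmod 600 /etc/ha.d/authkeys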

Finally, /etc/ha.d/ha.cf is the Heartbeat configuration file. It should contain entries for each server in your cluster, timers, and log files.

mcast           eth0 225.0.0.1 694 1 0
auto_failback   on
keepalive 1
deadtime 5
debugfile       /var/log/ha-debug.log
logfile         /var/log/ha.log
logfacility     local0
udp             eth0
node            ldap1.example.com
node            ldap2.example.com

After all this is set, start the heartbeat service. After a bit you should see another ethernet interface show up on your primary. To test failover, unplug one of the machines and see what happens!

DNS Hack

A side effect of having the LDAP server respond on the shared IP address is that it gets confused about its hostname. As such, you need to edit your /etc/hosts file to point the ldap.example.com name to each of the actual host IP addresses. In this example, ldap1 is 10.0.0.1 and ldap2 is 10.0.0.2. You should modify your hosts file to include:

10.0.0.1 ldap.example.com ldap
10.0.0.2 ldap.example.com ldap

The first entry should be the address of the local system (10.0.0.2 should be the first in the case of ldap2).

Your DNS server should be able to do forward and reverse lookups for ldap.example.com going to 10.0.0.3 (the highly available address).
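A minimal sketch of what those records might look like, assuming BIND-style zone files:

; forward zone for example.com
ldap    IN  A    10.0.0.3

; reverse zone for 0.0.10.in-addr.arpa
3       IN  PTR  ldap.example.com.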

If you are having issues binding between the servers or binding from clients, it is usually due to DNS problems.

Testing

If all went according to plan, you should be able to see Sync messages in your slapd log file.

slapd[11059]: slapd starting
slapd[11059]: do_syncrep2: rid=002 LDAP_RES_INTERMEDIATE - REFRESH_DELETE

Likewise, if you log into a client machine (in my case, I made this the KDC) you should only be able to search against the ldap.example.com host. The others will return errors.

[root@kdc ~]# ldapsearch -Q -LLL uid=user displayName -h ldap.example.com
dn: uid=user,ou=users,dc=example,dc=com
displayName: Firstname Lastname (user)

[root@kdc ~]# ldapsearch -Q -LLL uid=user displayName -h ldap1.example.com
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: SASL(-13): authentication failure: GSSAPI Failure: gss_accept_sec_context
[root@kdc ~]# ldapsearch -Q -LLL uid=user displayName -h ldap2.example.com
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: SASL(-13): authentication failure: GSSAPI Failure: gss_accept_sec_context
[root@kdc ~]# 

The reason is that the client has a different view of what the server’s hostname is. This difference causes SASL to freak out and not allow you to bind.

Errors

[root@kdc ~]# ldapsearch -h ldap.example.com uid=user
SASL/GSSAPI authentication started
ldap_sasl_interactive_bind_s: Invalid credentials (49)
        additional info: SASL(-13): authentication failure: GSSAPI Failure: gss_accept_sec_context
[root@kdc ~]#

On the server, this corresponds to the following error:

slapd[9888]: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Wrong principal in request)
slapd[9888]: text=SASL(-13): authentication failure: GSSAPI Failure: gss_accept_sec_context

This means your DNS hack is missing or your DNS is not functional. Fix it according to the instructions above.

If the LDAP log says:

slapd[1901]: do_syncrepl: rid=005 rc -2 retrying
slapd[1901]: slap_client_connect: URI=ldap://ldap2.example.com ldap_sasl_interactive_bind_s failed (-2)

and /var/log/messages contains:

GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Credentials cache permissions incorrect)

You need to change the SELinux context of your KRB5CC file (default is /tmp/krb5cc_$UIDOFLDAPUSER on Scientific Linux) to something the slapd process can read. Since every time you re-initialize the ticket you risk defaulting the permissions, I recommend using my script in your cronjob from above. If someone finds a better way to do this, please let me know!

If after enabling SSL on your client you receive (using -d 1):

TLS: can't connect: TLS error -5938:Encountered end of file.

Check your certificate files on the SERVER. Odds are you have an incorrect path.

Summary

You now have two independent LDAP servers running OpenLDAP 2.4 and synchronizing data between them. They are intended to listen on a Heartbeat-ed service IP address that will always be available even if one of the servers dies. You can also do SASL binds to each server to avoid having passwords stored in your configuration files.

If after reading this you feel there is something that can be improved or isn't clear, feel free to contact me! You can also grab sample configuration files that I used at http://archive.grantcohoe.com/projects/ldap.

Kerberos Utilities

There are a number of useful Kerberos client utilities that can help you when working with authentication services.

kinit

kinit will initiate a new ticket from the Kerberos system. This is how you renew your tickets to access kerberized services or renew service principals for daemons. You can kinit interactively by simply running kinit and giving it the principal you want to init as:

gmcsrvx1 ~ # kinit grant
Password for grant@GRANTCOHOE.COM: 
gmcsrvx1 ~ # 

No response is good, and you can view your initialized ticket with klist (discussed later on).

You can also kinit with a keytab by giving it the path to a keytab and a principal.

gmcsrvx1 ~ # kinit -k -t /etc/krb5.keytab host/gmcsrvx1.grantcohoe.com
gmcsrvx1 ~ # 

Note that it did not prompt me for a password. That is because it is using the stored principal key in the keytab to authenticate to the Kerberos server.

klist

klist is commonly used for two purposes: 1) List your current Kerberos tickets and 2) List the principals stored in a keytab. Running klist without any arguments will perform the first action.

gmcsrvx1 ~ # klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: grant@GRANTCOHOE.COM

Valid starting     Expires            Service principal
04/18/12 12:44:31  04/19/12 12:44:30  krbtgt/GRANTCOHOE.COM@GRANTCOHOE.COM
  renew until 04/18/12 12:44:31

To list the contents of a keytab:

gmcsrvx1 ~ # klist -k -t /etc/krb5.keytab 
Keytab name: WRFILE:/etc/krb5.keytab
KVNO Timestamp         Principal
---- ----------------- -------------------------------------------------------
   2 02/16/12 14:50:12 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
   2 02/16/12 14:50:12 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
   2 02/16/12 14:50:12 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
gmcsrvx1 ~ # 

The duplication of the principal names represents each of the encryption types that are stored in the keytab. In my case, I use three encryption types to store my principals. If you support older deprecated enc-types (as Kerberos calls them), you will see more entries here.

kdestroy

kdestroy destroys all Kerberos tickets for your current user. You can verify this by doing:

gmcsrvx1 ~ # kdestroy
gmcsrvx1 ~ # klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_0)
gmcsrvx1 ~ # 

ktutil

The Keytab Utility lets you view and modify keytab files. To start, you need to read in a keytab:

ktutil:  rkt /etc/krb5.keytab 
ktutil:  list
slot KVNO Principal
---- ---- --------------------------------------------------------------------
   1    2 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
   2    2 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
   3    2 host/gmcsrvx1.grantcohoe.com@GRANTCOHOE.COM
ktutil:  

If you want to merge two keytabs, you can repeat the read command and the second keytab will be appended to the list. You can also selectively add and delete entries to the keytab as well. Once you are done, you can write the keytab out to a file.

ktutil:  wkt /etc/krb5.keytab 
ktutil:  quit
gmcsrvx1 ~ # 
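For example, here is a sketch of merging a second keytab into the first, where /tmp/shared.keytab is a hypothetical keytab holding the shared principals. Note that wkt appends to an existing file, so writing to a fresh path keeps things clean:

gmcsrvx1 ~ # ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  rkt /tmp/shared.keytab
ktutil:  wkt /tmp/krb5.keytab.merged
ktutil:  quit
gmcsrvx1 ~ # 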

High-Availability With Multipath Routing

Multipath Routing

Multipath routing is when you have a router with multiple equal-cost paths to a single destination. Since each of these routes carries the same weight, the router will distribute traffic across all of them roughly equally. You can read more on this technology on Wikipedia. This post is more about the implementation of such a topology.

The Topology

In my setup, I have three clients on the 192.168.0.0/24 subnet with gateway 192.168.0.254. Likewise, I have three servers on the 10.0.0.0/24 subnet with gateway 10.0.0.254. I want my clients to always be able to access a webserver at host 172.16.0.1/32. This is configured as a loopback address on each of the three servers; that is the multipath part of the project. Each of the servers has the same IP address on a loopback interface and will therefore answer on it. However, since the shared router between these two networks has no idea that the 172.16.0.1/32 network is available via the servers, my clients cannot access it. This is where the routing comes into play. I will be using an OSPF daemon on my servers to send LSAs (read up on OSPF if you are not familiar) to the router.

Server Configuration

Firewall

You need to allow OSPF traffic to enter your system. IPTables supports the OSPF protocol, so you only need a minimum of one rule:

iptables -I INPUT 1 -p ospf -j ACCEPT

Save your firewall configuration after changing it.

Package Installation

You need to install Quagga, a suite of routing daemons that will be used to talk OSPF with the router. These packages are available in most operating systems.

Interface Configuration

Configure your main interface statically as you normally would. You also need to set up a loopback interface for the highly-available host. The procedure for this varies by operating system. Since I am using Scientific Linux, I have a configuration file in /etc/sysconfig/network-scripts/ifcfg-lo:1:

DEVICE=lo:1
IPADDR=172.16.0.1
NETMASK=255.255.255.255
ONBOOT=yes

Make sure you restart your network process (or your host) after you do this. I first thought the reason this entire thing wasn't working was a Linux-ism, as my FreeBSD testing worked fine. I later discovered that it was because hot-adding an interface sometimes doesn't add all the data we need to the kernel route table.
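A quick way to bring the alias up and confirm the kernel actually has the address and a local route for it (the server1 hostname is just a placeholder):

[root@server1 ~]# ifup lo:1
[root@server1 ~]# ip addr show dev lo
[root@server1 ~]# ip route get 172.16.0.1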

OSPF Configuration

Quagga’s ospfd has a single configuration file that is very Cisco-like in syntax. It is located at /etc/quagga/ospfd.conf:

hostname server1
interface eth0
router ospf
  ospf router-id 10.0.0.1
  network 10.0.0.0/24 area 0
  network 172.16.0.1/32 area 0
log stdout

ospfd has a partner program called zebra that manages the kernel route table entries learned via OSPF. While we may not necessarily want these, zebra must be running and configured in order for this to work. Edit your /etc/quagga/zebra.conf:

hostname server1
interface eth0

After you are all set, start zebra and ospfd (and make sure they are set to come up on boot).

Router Configuration

I am using a virtualized Cisco 7200 router in my testing environment. You may have to modify your interfaces, but otherwise this configuration is generic across all IOS:

R1>enable
R1#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#router ospf 1
R1(config-router)#network 10.0.0.0 0.0.0.255 area 0
R1(config-router)#end

This enables OSPF on the 10.0.0.0/24 network and will start listening for LSAs. When it processes a new one, the route will be inserted into the routing table. After a while, you should see something like this:

R1#show ip route ospf
     172.16.0.0/32 is subnetted, 1 subnets
O       172.16.0.1 [110/11] via 10.0.0.3, 00:04:42, GigabitEthernet1/0
                   [110/11] via 10.0.0.2, 00:04:42, GigabitEthernet1/0
                   [110/11] via 10.0.0.1, 00:04:42, GigabitEthernet1/0

This is multipath routing. The host 172.16.0.1/32 has three equal paths available to get to it.

Testing

To verify that my clients were actually getting load-balanced, I threw up an Apache webserver on each server with a static index.html file containing the ID of the server (server 1, server 2, server 3). This way I can curl the highly-available IP address and see which host I am being routed to. You should see your clients get distributed across any combination of the three servers.
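For example, a quick check from a shell on one of the clients. Depending on the router's load-sharing mode, a single client may always hash to the same backend, so repeat this from each client:

curl -s http://172.16.0.1/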

Now let's simulate a host being taken offline for maintenance. Rather than having to fail over a bunch of services, all you need to do is stop the ospfd process on the server. After the OSPF dead timer expires, the route will be removed and no more traffic will be routed to it. That host is now available for downtime without affecting any of the others. The router just redistributes the traffic across the remaining routes.

But let's say the host immediately dies. There is no notification to the router that the backend host is no longer responding. The only thing you can do at this point is wait for the OSPF dead timer to expire. You can configure this to be a shorter amount of time, thus making "failover" more transparent.
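A sketch of shortening the timers in ospfd.conf, extending the interface stanza from above. The hello and dead intervals must match whatever the router uses on that segment, or the adjacency will not stay up:

interface eth0
  ip ospf hello-interval 1
  ip ospf dead-interval 5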

Summary

Multipath routing is basically a poor man's load balancer. It has no checks to see if the services are actually running and responding appropriately. It also does not behave very well with sessions or data that involves lots of writes. However, for static web hosting or for caching nameservers (the use that sparked my interest in this technology), it works quite well.

Secure OpenLDAP Server With Kerberos Authentication

Host Configuration

There are some things we need to set up prior to installing and configuring OpenLDAP.

Kerberos

You will need a working KDC somewhere in your domain. You also need to have the server configured as a Kerberos client (/etc/krb5.conf) and be able to kinit without issue.

There are two principals that you will need in your /etc/krb5.keytab:

  • host/server.example.com@EXAMPLE.COM
  • ldap/server.example.com@EXAMPLE.COM

saslauthd

If you do not already have this working, see my guide on how to set it up.

Packages

You need the following for our setup:

  • krb5-server-ldap (for the Kerberos schema)
  • openldap
  • openldap-clients
  • openldap-servers
  • cyrus-sasl-gssapi
  • cyrus-sasl-ldap

SSL Certificates

You will need SSL certificates matching the hostname you intend your LDAP server to listen on (ldap.example.com is different than server.example.com). Procure these from your PKI administrator. I place mine in the default directories as shown:

  • Server Public Key - /etc/pki/tls/certs/slapd.pem
  • Server Private Key - /etc/pki/tls/private/slapd.pem
  • CA Certificate - /etc/pki/tls/cert.pem

ACLs

The LDAP user needs to be able to read the private key and keytab you configured earlier. Ensure that it can.

[root@localhost ~]# setfacl -m u:ldap:r /etc/krb5.keytab
[root@localhost ~]# setfacl -m u:ldap:r /etc/pki/tls/private/slapd.pem

OpenLDAP Configuration

Directory Cleanup

By default there are some extra files in the configuration directory. Since we are not using the cn=config method of configuring the server, we are simply going to remove the things we don’t want.

[root@localhost ~]# cd /etc/openldap/
[root@localhost ~]# rm -rf slapd.d ldap.conf cacerts

We will be re-creating ldap.conf later on with the settings that we want.

Kerberos Schema

Copy /usr/share/doc/krb5-server-ldap-1.9/kerberos.schema into your schema directory (default: /etc/openldap/schema). This contains all of the definitions for Kerberos-related attributes.
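For example:

[root@localhost ~]# cp /usr/share/doc/krb5-server-ldap-1.9/kerberos.schema /etc/openldap/schema/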

Data Directory

Decide where you want your data directory to be. By default this is /var/lib/ldap. Copy the standard DB_CONFIG file into that directory from /usr/share/openldap-servers/DB_CONFIG.example. This contains optimization settings for the structure of the database.
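For example, assuming the default location (the chown makes sure the ldap user owns what slapd will read):

[root@localhost ~]# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
[root@localhost ~]# chown ldap:ldap /var/lib/ldap/DB_CONFIG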

Enable LDAPS

Some systems require you to explicitly enable LDAPS listening. On Red Hat Enterprise Linux-based distributions, this is done by changing SLAPD_LDAPS to YES in /etc/sysconfig/ldap.
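That is, a one-line change:

# /etc/sysconfig/ldap
SLAPD_LDAPS=yes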

slapd.conf

Create a /etc/openldap/slapd.conf to configure your server. You can grab a copy of my basic configuration file here.

# Include base schema files provided by the openldap-servers package.
include /etc/openldap/schema/corba.schema
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/duaconf.schema
include /etc/openldap/schema/dyngroup.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/java.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/pmi.schema
include /etc/openldap/schema/ppolicy.schema

# Site-specific schema and ACL includes. These are either third-party or custom.
include /etc/openldap/schema/kerberos.schema

# Daemon files needed for the running of the daemon
pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args

# Limit SASL options to only GSSAPI and not other client-favorites. Apparently there is an issue where
# clients will default to non-working SASL mechanisms and will make you angry.
sasl-secprops noanonymous,noplain,noactive

# SASL connection information. The realm should be your Kerberos realm as configured for the system. The
# host should be the LEGITIMATE hostname of this server
sasl-realm EXAMPLE.COM
sasl-host ldap.example.com

# SSL certificate file paths
TLSCertificateFile /etc/pki/tls/certs/slapd.pem
TLSCertificateKeyFile /etc/pki/tls/private/slapd.pem
TLSCACertificateFile /etc/pki/tls/cert.pem

# Rewrite certain SASL bind DNs to more readable ones. Otherwise you bind as some crazy default
# that ends up in a different base than your actual one. This uses regex to rewrite that weird
# DN and make it become one that you can put within your suffix.
authz-policy from
authz-regexp "^uid=[^,/]+/admin,cn=example\.com,cn=gssapi,cn=auth" "cn=ldaproot,dc=example,dc=com"
authz-regexp "^uid=host/([^,]+)\.example\.com,cn=example\.com,cn=gssapi,cn=auth" "cn=$1,ou=hosts,dc=example,dc=com"
authz-regexp "^uid=([^,]+),cn=example\.com,cn=gssapi,cn=auth" "uid=$1,ou=users,dc=example,dc=com"

# Logging
#loglevel 16384
loglevel 256

# Actual LDAP database for things you want
database bdb
suffix "dc=example,dc=com"
rootdn "cn=ldaproot,dc=example,dc=com"
rootpw {MD5}X03MO1qnZdYdgyfeuILPmQ==
directory /var/lib/ldap

# Indices for the database. These are used to improve the performance of the database
index entryCSN eq
index entryUUID eq
index objectClass eq,pres
index ou,cn,mail eq,pres,sub,approx
index uidNumber,gidNumber,loginShell eq,pres

# Configuration database
database config
rootdn "cn=ldaproot,dc=example,dc=com"

# Monitoring database
database monitor
rootdn "cn=ldaproot,dc=example,dc=com"

To generate a password for your directory and hash it, you can use the slappasswd utility. See my guide on LDAP utilities for details. After that you should be all set. Start the slapd service, but don't query it yet.
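As a quick sketch: slappasswd prompts for the password twice and prints a salted hash ({SSHA} by default; other schemes are available via -h) that you paste in as the rootpw value in slapd.conf.

[root@ldap ~]# slappasswd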

Client Configuration

The /etc/openldap/ldap.conf file holds the system-wide default LDAP connection settings. This way you do not have to specify the host and protocol each time you want to run a command. Make it now.

# OpenLDAP client configuration file. Used for host default settings
BASE            dc=example,dc=com
URI             ldaps://ldap.example.com
TLS_CACERT      /etc/pki/tls/cert.pem
TLS_REQCERT     demand

Population

Right now you should have a running server, but with no data in it. I created a simple example setup file to get a basic tree set up. At this point you need to architect how you want your directory to be organized. You may choose to follow this or choose your own layout.

miranda ldap # cat setup.ldif
dn: dc=example,dc=com
dc: example
o: Example Corporation
objectclass: top
objectclass: dcObject
objectclass: organization

dn: ou=hosts,dc=example,dc=com
ou: hosts
objectclass: top
objectclass: organizationalUnit

dn: ou=users,dc=example,dc=com
ou: users
objectclass: top
objectclass: organizationalUnit

dn: cn=ldap1,ou=hosts,dc=example,dc=com
cn: ldap1
sn: ldap1
objectclass: top
objectclass: person

dn: cn=ldap2,ou=hosts,dc=example,dc=com
cn: ldap2
sn: ldap2
objectclass: top
objectclass: person

dn: uid=user,ou=users,dc=example,dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: posixAccount
objectclass: inetOrgPerson
ou: Users
uid: user
uidNumber: 10000
gidNumber: 150
givenName: Firstname
sn: Lastname
cn: Firstname Lastname
gecos: Firstname Lastname
Description: Firstname Lastname
homeDirectory: /users/user
loginShell: /bin/bash
mail: firstname.lastname@example.com
displayname: Firstname Lastname (user)
userPassword: {SASL}user@EXAMPLE.COM

dn: uid=admin,ou=users,dc=example,dc=com
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: posixAccount
objectclass: inetOrgPerson
ou: Users
uid: admin
uidNumber: 10001
gidNumber: 151
givenName: Firstname
sn: Lastname
cn: Firstname Lastname
gecos: Firstname Lastname
Description: Firstname Lastname
homeDirectory: /users/admin
loginShell: /bin/bash
mail: firstname.lastname@example.com
displayname: Firstname Lastname (admin)

You can add the contents of this file to your directory using ldapadd.

ldapadd -x -D cn=ldaproot,dc=example,dc=com -W -f setup.ldif

Testing

You should now be able to query your server and figure out who it thinks you are based on your Kerberos principal.

[root@ldap ~]# kinit user
Password for user@EXAMPLE.COM:
[root@ldap ~]# ldapwhoami
SASL/GSSAPI authentication started
SASL username: user@EXAMPLE.COM
SASL SSF: 56
SASL data security layer installed.
dn:uid=user,ou=users,dc=example,dc=com
[root@ldap ~]#

Here you can see that it thinks that I am uid=user,ou=users,dc=example,dc=com given my principal of user@EXAMPLE.COM.

Now try searching:

[root@ldap ~]# ldapsearch uid=user
SASL/GSSAPI authentication started
SASL username: user@EXAMPLE.COM
SASL SSF: 56
SASL data security layer installed.
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: uid=user
# requesting: ALL
#

# user, users, example.com
dn: uid=user,ou=users,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: posixAccount
objectClass: inetOrgPerson
ou: Users
uid: user
uidNumber: 10000
gidNumber: 150
givenName: Firstname
sn: Lastname
cn: Firstname Lastname
gecos: Firstname Lastname
description: Firstname Lastname
homeDirectory: /users/user
loginShell: /bin/bash
mail: firstname.lastname@example.com
displayName: Firstname Lastname (user)

# search result
search: 5
result: 0 Success

# numResponses: 2
# numEntries: 1
[root@ldap ~]#

Fix Me Maybe

Fix Me Maybe (parody of Call Me Maybe) Words by Grant Cohoe (cohoe) Music & original words by Carly Rae Jepsen

This was dedicated to Ross Delinger (rossdylan) because: “FIX THE DAMN IRC SERVER!” Like the original song, it then went a lot further than expected. What more might come of this?!

MP3 - Sung by Alex Howland (ducker)

Lyrics:

I started as a project
Something come Fall we'd forget
The system full of neglect
But now you've logged into me.

Everything's been broke for awhile
It's not great R-T-P style
This wasn't supposed to be a trial
But now you've logged into me.

Your session was open
Bytes of logs were flowin'
Files were c-h-ownin'
Need to get them daemons going

I'm used to broken things
These users are crazy
But here's my address, 
so fix me maybe?

It's hard to figure out
What is wrong here
But here's my address, 
so fix me maybe?

I'm used to broken things
These users are crazy
But here's my address, 
so fix me maybe?

And all the other boys
Try and break me
But here's my address,
so fix me maybe?

You took your time with fix
Broke in no time 'cause they're pricks
You didn't help anything
But still, you're logged in me

I page, and cache and write
Every morn day and night
Processor cycles aren't tight
But still, you're logged in me

Your session was open
Bytes of logs were flowin'
Files were c-h-ownin'
Need to get them daemons going

I'm used to broken things
These users are crazy
But here's my address, 
so fix me maybe?

It's hard to figure out
What is wrong here
But here's my address, 
so fix me maybe?

I'm used to broken things
These users are crazy
But here's my address, 
so fix me maybe?

And all the other boys
Try and break me
But here's my address,
so fix me maybe?

Before you came into my logs 
I was broken so bad
I was broken so bad
I was broken so so bad

Before you came into my logs 
I was broken so bad
And you should know that
I was broken so bad

It's hard to figure out
What is wrong here
But here's my address, 
so fix me maybe?

I'm used to broken things
These users are crazy
But here's my address, 
so fix me maybe?

And all the other boys
Try and break me
But here's my address,
so fix me maybe?

Before you came into my logs 
I was broken so bad
I was broken so bad
I was broken so so bad

Before you came into my logs 
I was broken so bad
And you should know that

So fix me, maybe?

Makefile-based PKI Management

Using a couple of seed files, you can easily get started managing your own PKI for your network. You need to have OpenSSL and Make installed on the system you are working with.

First, pick a directory to store your certificates in. I am going to use /etc/pki/example.com, since this is where all the other system SSL stuff is stored and, for you SELinux people, it will already have the right contexts. cd into that directory and create the Makefile:

CONFIG=config/openssl.cnf
PUB=host-cert.pem
PRIV=host-key.pem
REQ=req.pem
PUBNPRIV=host-combined.pem
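# Note: the indented recipe lines under each target must be indented with a literal tab character, as make requires.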

issue:
  mkdir -p ${DEST}
  openssl req -new -nodes -out ${DEST}/${REQ} -config ${CONFIG}
  mv ${PRIV} ${DEST}/
  openssl ca -out ${DEST}/${PUB} -config ${CONFIG} -infiles ${DEST}/${REQ}
  if [ ${DEST} = mail -o ${DEST} = mail/ ]; then cp ${DEST}/${PUB} ${DEST}/${PUBNPRIV}; cat ${DEST}/${PRIV} >> ${DEST}/${PUBNPRIV}; fi
  chmod go= ${DEST}/${PRIV}
  if [ -e ${DEST}/${PUBNPRIV} ]; then chmod go= ${DEST}/${PUBNPRIV}; fi

revoke:
  openssl ca -config ${CONFIG} -revoke ${SOURCE}/${PUB}

This lets us perform two actions (targets): Issue and Revoke. Issue will create a new host certificate from your CA. Revoke will expire an already-existing certificate. This is mainly used for renewal since you need to revoke then issue.
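For example, renewing the certificate for a host whose files live in a directory named after it (hostname here is a placeholder for that directory name):

[root@gmcsrvx1 example.com ]# make revoke SOURCE=hostname
[root@gmcsrvx1 example.com ]# make DEST=hostname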

Next we will need a configuration directory and some basic files:

[root@gmcsrvx1 example.com ]# mkdir config config/newcerts config/CA
[root@gmcsrvx1 example.com ]# touch config/index.txt
[root@gmcsrvx1 example.com ]# echo '01' > config/serial

After that, cd into the config directory and create an openssl.cnf file. This is the OpenSSL configuration file used for certificate generation:

# OpenSSL configuration file.
#
# Adapted by Grant Cohoe 2011 (www.grantcohoe.com)
# Original by some past CSHer (www.csh.rit.edu)

# Establish working directory.

dir               = /etc/pki/example.com

[ req ]
default_bits      = 2048          # Size of keys
default_keyfile   = host-key.pem      # name of generated keys
default_md        = sha1          # message digest algorithm
string_mask       = nombstr           # permitted characters
distinguished_name    = req_distinguished_name
req_extensions        = v3_req            # always use v3 extensions for reqs

[ req_distinguished_name ]
# Variable name  Prompt string
#----------------------  ----------------------------------
0.organizationName        = Organization Name (company)
organizationalUnitName    = Organizational Unit Name (department, division)
emailAddress          = Email
emailAddress_max      = 40
localityName          = Locality Name (city)
stateOrProvinceName       = State or Province Name (full name)
countryName           = Country Name (2 letter code)
countryName_min       = 2
countryName_max       = 2
commonName            = Common Name (full hostname)
commonName_max            = 64

# Default values for the above, for consistency and less typing.
# Variable name  Value
#------------------------------  ------------------------------
0.organizationName_default        = Example Corp
organizationalUnitName_default    = System Engineering
emailAddress_default          = root@example.com
localityName_default          = Rochester
stateOrProvinceName_default       = New York
countryName_default               = US

[ v3_ca ]
basicConstraints      = CA:TRUE
subjectKeyIdentifier  = hash
authorityKeyIdentifier    = keyid:always,issuer:always

[ v3_req ]
basicConstraints      = CA:FALSE
subjectKeyIdentifier  = hash


[ ca ]
default_ca    = CA_default

[ CA_default ]
serial        = $dir/serial
database      = $dir/index.txt
new_certs_dir = $dir/newcerts
certificate   = $dir/CA/example-ca.crt
private_key   = $dir/CA/example-ca.key
default_days  = 365
default_md    = sha1
preserve      = no
email_in_dn   = no
nameopt       = default_ca
certopt       = default_ca
policy        = policy_match

[ policy_match ]
countryName           = match
stateOrProvinceName       = match
organizationName      = match
organizationalUnitName    = optional
commonName            = supplied
emailAddress          = optional

You should ensure that the directory definition and organization information are changed to match your environment.

After this you are ready to generate your CA:

openssl req -config openssl.cnf -new -x509 -extensions v3_ca -keyout CA/example-ca.key -out CA/example-ca.crt -days 1825

This will generate a CA certificate based on your settings file that will last for 5 years, after which you will need to renew it.

Enter a passphrase for the CA key when prompted. You do not need to specify a Common Name (CN) when asked later in the process. Ensure that the other field defaults are correct: they come from your openssl.cnf file, and the policy_match section requires the country, state, and organization of issued certificates to match the CA.

[root@gmcsrvx1 config]# openssl req -config openssl.cnf -new -x509 -extensions v3_ca -keyout CA/example-ca.key -out CA/example-ca.crt -days 1825
Generating a 2048 bit RSA private key
................................................................................
..................+++
writing new private key to 'CA/example-ca.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Organization Name (company) [Example Corp]:
Organizational Unit Name (department, division) [System Engineering]:
Email [root@example.com]:
Locality Name (city) [Rochester]:
State or Province Name (full name) [New York]:
Country Name (2 letter code) [US]:
Common Name (full hostname) []:
[root@gmcsrvx1 config]#

Unfortunately this process creates the private key file world-readable, which is extremely bad. Fix it. Also ensure that the public certificate is readable by all; it is supposed to be.

[root@gmcsrvx1 config]# chmod 600 CA/example-ca.key
[root@gmcsrvx1 config]# chmod 644 CA/example-ca.crt
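
While you are still in the config directory, you can also sanity-check the CA certificate you just generated; this only prints the subject and validity window, nothing is modified:

openssl x509 -in CA/example-ca.crt -noout -subject -dates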

Now back up one directory (to the working directory you started in). I usually make a symlink to the CA certificate here so it is easy to reference.
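
Assuming the layout above and that you are sitting in /etc/pki/example.com, that symlink is just (the link name is whatever you like):

ln -s config/CA/example-ca.crt example-ca.crt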

You can now issue SSL certificates by running make DEST=hostname, where hostname is the name of the server you are issuing the certificate for.

[root@gmcsrvx1 example.com]# make DEST=gmcsrvx1
mkdir -p gmcsrvx1
openssl req -new -nodes -out gmcsrvx1/req.pem -config config/openssl.cnf
Generating a 2048 bit RSA private key
..............................................+++
......................................+++
writing new private key to 'host-key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Organization Name (company) [Example Corp]:
Organizational Unit Name (department, division) [System Engineering]:
Email [root@example.com]:
Locality Name (city) [Rochester]:
State or Province Name (full name) [New York]:
Country Name (2 letter code) [US]:
Common Name (full hostname) []:gmcsrvx1.example.com
mv host-key.pem gmcsrvx1/
openssl ca -out gmcsrvx1/host-cert.pem -config config/openssl.cnf -infiles gmcsr
Using configuration from config/openssl.cnf
Enter pass phrase for /etc/pki/example.com/config/CA/example-ca.key:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
organizationName      :PRINTABLE:'Example Corp'
organizationalUnitName:PRINTABLE:'System Engineering'
localityName          :PRINTABLE:'Rochester'
stateOrProvinceName   :PRINTABLE:'New York'
countryName           :PRINTABLE:'US'
commonName            :PRINTABLE:'gmcsrvx1.example.com'
Certificate is to be certified until May  1 18:12:03 2013 GMT (365 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
if [ gmcsrvx1 = mail -o gmcsrvx1 = mail/ ]; then cp gmcsrvx1/host-cert.pem gmcsr.pem; fi
chmod go= gmcsrvx1/host-key.pem
if [ -e gmcsrvx1/host-combined.pem ]; then chmod go= gmcsrvx1/host-combined.pem;
[root@gmcsrvx1 example.com]#

If you look in the directory it made for you (the hostname you specified) you will find the private key and signed certificate (plus the request) for the server.

[root@gmcsrvx1 example.com]# ls -l gmcsrvx1/
total 12
-rw-r--r--. 1 root root 3997 May  1 14:12 host-cert.pem
-rw-------. 1 root root 1704 May  1 14:12 host-key.pem
-rw-r--r--. 1 root root 1171 May  1 14:12 req.pem
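
If you want to confirm that the freshly issued certificate actually chains back to your CA, openssl verify will tell you (paths as laid out above):

openssl verify -CAfile config/CA/example-ca.crt gmcsrvx1/host-cert.pem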

You can revoke this certificate using the makefile as well:

[root@gmcsrvx1 example.com]# make revoke SOURCE=gmcsrvx1
openssl ca -config config/openssl.cnf -revoke gmcsrvx1/host-cert.pem
Using configuration from config/openssl.cnf
Enter pass phrase for /etc/pki/example.com/config/CA/example-ca.key:
Revoking Certificate 01.
Data Base Updated
[root@gmcsrvx1 example.com]#
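
The Makefile above does not generate a CRL, but if anything consuming these certificates actually checks revocation you will want to publish one after revoking. The config does not set default_crl_days, so give a lifetime on the command line (the 30-day lifetime and output path here are just examples):

openssl ca -config config/openssl.cnf -gencrl -crldays 30 -out config/example-ca.crl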

There you go, yet another useful service to have around for dealing with your network. Raw files used above are available in the Archive. Now back to Webauth with me…

System Technology Accounting and Resource Registration Solution (STARRS)

Background

Like most of my projects, it all started with CSH. RIT allocates us two /24 public-facing networks to distribute out to our users. These resources need to have some degree of accounting in the event a user does something stupid (piracy, kiddie-pr0n, etc). RIT handles this with their own internal application, referred to as “start.rit.edu”. Fun aside, Start.RIT was coded by an old CSHer, now RIT employee and CSH advisor. He still maintains portions of it today. When the CSH network got to the point of needing our own internal application, a member (Joe Sunday) created “start.csh.rit.edu”. Similar to its RIT counterpart, start.csh allowed administrators to register machines and distribute resources on the network. Start.csh also allowed users to manage firewall rules for their hosts at the border firewall systems. On the backend, it had a script that would automatically generate an ISC-DHCPD config file and load it into the server.

Unfortunately, over time bits of Start began to break. The fact that users could not register their own machines was a major source of irritation for the RTPs (Root-Type-Persons, or Sysadmins). When the network topology was shifted to accommodate some reorganization within RIT, the firewall rule system broke completely. There was no way to clear out old hosts that haven't been on the network in years. And after the house DHCP server was compromised (then obliterated by the RTPs for security), the automatic DHCP generation was completely gone. We needed something new.

Starting in the spring of 2011, I started to architect a new application to encompass the previous functionality and add a lot of new features and enhancements that would benefit us for years to come. Going into the project, I knew one thing: it had to be based on PostgreSQL. Pg has native datatypes for IP addresses, subnets, and MAC addresses, and I knew that this would be invaluable later on in the project. I also knew that I didn't want to write a daemon that ran on some server imposing application logic.

Development

At this point in my career, I didn't have a whole lot of database architecting experience. I began to search for a schema creation application to help me architect what I knew was going to be a complex schema. I ran across a service called SchemaBank (now defunct), a schema design webapp that supported Postgres. It also introduced me to two things that would change the course of the application forever: Triggers and Functions. I realized that I could use Pg as the backend daemon to the entire application. For client interaction with the application, I started to develop a set of wrapper API functions that would help impose the application logic on the stored data. While most of this was taken care of in the relations between entities, there were some rules that could not be expressed as a relation. Fast forward two months, and I had a first revision of the database and the API to start writing an interface for.

At this point I didn't know any web languages other than HTML and CSS either. So, using a book on PHP, MySQL, and Javascript that I won at BarCampRoc (a very good book by the way), I started to learn PHP. Originally I was going to go with a purely non-OOP model for the webapp; however, one of my web-dev friends (Ben Russell) suggested that I go with an OOP model and use some sort of framework to avoid having to write a ton of extra code. I went with one that another of my web-dev friends had used, Codeigniter. So I set to work and started writing classes. During this time, I knew that I needed a new dhcpd.conf generation function. One of my friends (Anthony Gargiulo) expressed an interest in helping out with the project, so I tasked him with writing a generation function in Perl. Four months later, I had a working PHP web interface (and he had a working dhcpd.conf generator). At this point, I had everything except a name. I don't really remember a lot about how it came about, but I think I took the first cool-sounding word that came to my head. I was in a Quake 3 Arena mood that day and remembered an old console command (impulse). I took that word and backronymed it to the IP Management Program for Use in Local Server Environments.

Initial Implementation

When we returned to RIT in the fall, I set to work deploying IMPULSE. After a few hiccups here and there, everything was in place and in use. Users were fairly receptive to it, mainly because they could register their devices themselves. However, after some time a few problems emerged. The web interface became terribly slow with all of the data that people had entered. It wasn't very efficient at allowing users to complete basic tasks, and it had a few bugs. Clearly some changes needed to occur, but during the course of the year I didn't want to touch it. I had spent 6 months writing code on this project, and I really didn't want to work on it again.

Refresh

My latest co-op offered me a chance to take to the code once again, this time with a new use case in mind. They wanted something that could manage network registrations for developers in a complex environment, but there were a few major additions that needed to happen. IMPULSE needed to go global. It needed more specific access control. It needed better device integration. And most importantly, it also needed a new interface. Armed with all the knowledge I had gained over the previous year, I set to work. Two months later I had a new web interface and a lot of new fixes and enhancements to the core. However, the name IMPULSE no longer applied. So I cooked up a new one: STARRS (System Technology Accounting and Resource Registration Solution). And that's presumably what you actually care about.

Features

STARRS starts off with Datacenters, which correspond to your physical locations. Each datacenter contains Subnets and VLANs. Each datacenter then contains one or more Availability Zones, which are logical partitions of resources within the datacenter. AZs contain ranges of your subnets.

Within datacenters are Systems. A system is a computer, based on a hardware Platform (usually Custom though). A system can have Interfaces which attach to your network. On each interface can be Addresses that are assigned from your Availability Zones. You can get your addresses in a variety of ways (depending on family): DHCP and Static for IPv4, and Autoconf, DHCPv6, and Static for IPv6. (Oh yeah, STARRS does both IPv4 and IPv6.) Your addresses are given to you with a set expiration date. If you don't renew before that date (you will get email notifications), the address is automatically deleted. On your addresses, you can create a variety of DNS Records that can be added into managed DNS Zones. Zone DDNS updates are done with Keys that are stored within STARRS. If your address is configured with DHCP, you can be a member of a DHCP Class. Classes, Ranges, and Subnets all have a variety of configuration options that can be set. You can also set options globally. For network systems, you can store SNMP Credentials that STARRS can use to look up Switchport and CAM Table data. Using this you can easily locate your system interfaces in your infrastructure. And of course, you can search for systems that you have configured.

STARRS depends on your existing authentication infrastructure for its user data. It does not track credentials within itself, but relies on the client to provide the username and on external sources for privileging. The STARRS web interface gets the username from the web server via configurable PHP variables. It takes the username and calls an API initialization function that sets up your privilege levels. STARRS can get privilege data from one of the following:

  • Local Tables
  • OpenLDAP
  • Active Directory

Users can be global ADMINs and have RW access to everything in the database. They can be USERs who can create/edit/remove their own stuff, but cannot see certain confidential configuration and cannot edit other people's stuff. People who are Group Admins can edit other people's stuff in their group. There is also a PROGRAM user-level that allows external applications read-only access to user resources. If you don't exist in the privilege source, you will be unable to initialize and do anything in STARRS.

STARRS also has the ability to automatically generate a configuration file for the ISC-DHCPD server based on the resources you configure. It supports key-based DDNS updates to zones, DHCP classes, global/range/subnet/class options, and more. A cronjob runs to periodically generate, download, and reload the server configuration file. Eventually this might change to something more real, but for now this is what works.
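
As a rough sketch of what that kind of job can look like (the generator command, paths, and restart step here are made up for illustration and are not part of STARRS):

#!/bin/bash
# Illustrative only: fetch the STARRS-generated config, test it, then swap it in.
set -e
/usr/local/bin/starrs-dhcpd-gen > /etc/dhcp/dhcpd.conf.new   # hypothetical generator script
dhcpd -t -cf /etc/dhcp/dhcpd.conf.new                        # syntax-check before installing
mv /etc/dhcp/dhcpd.conf.new /etc/dhcp/dhcpd.conf
/sbin/service dhcpd restart

Run something like that from cron at whatever interval suits you.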

Use cases have STARRS working with the following web authentication services:

Deployment

All of the code is available on Github in two repositories:

There is an unofficial and unsupported command-line interface written by a friend of mine (Ryan Brown) located Here.

STARRS is licensed under the MIT license.

Demo

You can access the demo by clicking here. Username is ‘root’ and password is ‘admin’. The database resets itself every 24 hours at midnight, so don’t expect long-term persistence of data.

Who Should Use This

If you are a service provider to a group of users who consume network resources that you control, you want some way of accounting for what resources are in use and where they are in your network. The use cases so far are:

  • Computer Science House - Dorm of college students (about 350 systems)
  • Work - Product developers in 4 sites across 3 countries (1000+ systems)
  • Enterprise - My personal VM server (30 systems)

Future

There are a couple of enhancements on deck for the next release (whenever that may be). See the respective Github project issues for details. However, STARRS will soon be encompassing libvirt support. When you create a system, you will be able to specify it as a Virtual Machine living on a Host in one of your datacenters. None of this has been worked on yet, but the idea is for STARRS to be a complete network management solution. Maybe I'll call it CloudSTARRS (like rockstars, but in the cloud haha).