Syracuse has been unable to solve Mercyhurst in their entire D-I history, getting at best one tie each season. After getting swept by RIT to end the regular season I don’t think anything is going to change.
Mercyhurst Wins 5-2
The Tigers will need to bring their A-game if they want to keep this to a one-goal contest. I’ve seen that level of play from them before, but after seeing a C-game on Friday and a B+-game Saturday I don’t think it’s going to happen.
Robert Morris Wins 3-1 (empty net)
The Colonials have been the only team to seriously challenge the Lakers in conference, responsible for 2 of their 3 losses. If Robert Morris can hold the ‘Hurst offense at bay they should be able to pull off the win.
Robert Morris Wins 3-2
My personal odds of each team winning the title:
This weekend was a bittersweet moment for RIT. After sweeping Penn State in the first round of the CHA playoffs, the Tigers had to say farewell to Ritter Arena, their home since the program's inception in 1975. Next September the Tigers will take to the ice in the 4,200-seat Gene Polisseni Center. After the handshake, the team circled up and took a knee at center ice for one final salute to Ritter. I'll be honest: I shed a tear.
For the second straight season, Penn State finished with a single win under their belt (each time against Lindenwood). They also finished last in the CHA and had their season come to a close at the hands of RIT at Ritter Arena. Next season the Nittany Lions will return 7 of their 8 top goal scorers as sophomores and juniors. Freshman Laura Bowman was responsible for 10 goals and 6 assists on 90 shots, good for the team's 2nd-best shooting percentage. Sophomore Shannon Yoxheimer finished her season 2nd on the team with 8 goals and 8 assists on 197 shots. Not an ideal percentage, but no coach can fault her for putting the puck on net.
Syracuse had a nifty little move to open the scoring on Saturday. All five skaters touched the puck before it eventually connected with Julie Knerr's stick, and she whipped it past the Lindenwood goaltender for the first goal of the game. That would be all the Orange would need, as Jenesica Drinkwater held on for the remaining 59:00 to get the donut.
Awesome to see @GrantCohoe in my AutismUp jersey tonight! Thank you for your donation to the cure! #classact #pumped #lovethenumber #35
— Jetta Rackleff (@JettaRackleff) March 1, 2014
I had the privilege of attending three different Barcamp events over the span of three weekends this past fall. They were:
While each event shared the same common premise that is Barcamp, they differed in many ways. I found there were a lot of good things and some not-so-good. So without further ado, here's the rundown!
Date: Oct 26-27 2013 (2 day)
Venue: MIT Stata Center, Cambridge MA
Attendees: ~450
This was my first Barcamp since graduation. Previously I had only attended a handful of BCRoc events in college. I was quite impressed by the vast number of attendees and the wide array of skillsets they brought with them. The hosts had clearly done this before, because the event went incredibly smoothly and was very organized. Talk selection was incredibly diverse, ranging from online dating to cool JavaScript libraries.
I really enjoyed this event. The longer talks meant more content per session. And all of that was spread across two days!
Date: Nov 2 2013
Venue: RIT GCCIS, Rochester NY
Attendees: ~100
After BCBos I emailed the hosts of BCRoc with ideas that I thought might be great to use. The Talk Request and Talk Interest boards were two examples that they implemented. I missed two of the morning sessions due to another event, but there were still plenty of talks to be had. A benefit of a consistent venue, core hosts, and biannual events is that the hosts can pull together an event relatively quickly.
All in all this was yet another successful BCRoc. There were a lot of new faces (both as hosts and attendees) and plenty of interesting talks to choose from.
Date: Nov 9 2013
Venue: Dyn Headquarters, Manchester NH
Attendees: ~50
While I was really looking forward to a weekend off, who can resist just one more event? This one was a bit more of a mystery to me since information was somewhat scarce. I arrived a bit late due to GPS errors and trouble finding parking (I won't go into the speed bumps of death along the way).
Overall I was not very impressed with this event. I'm not sure if it was just Barcamp burnout or what. I was surprised that, for a science/technology center in New Hampshire, there weren't more attendees. This showed in that there were only ~20 talks total. The venue, while unique, did not feel well-suited to a conference such as this. That said, I did walk away having learned a few new things and met a few new people.
That's it for Barcamps for a while. 3 events in 3 weekends in 3 different states, giving 3 different talks, with at least 3 CSHers at each event. Yikes. TEDxBeaconStreet is right around the corner, followed by holidays and hockey.
As always, feedback and questions are always welcome. Shoot me an email or a tweet.
Slides available in The Archive.
I love Google Voice. I've used it as my primary phone provider since 2008 (back when it was super-invite-beta). One of GV's advertised features was ringing multiple phones at once, which would be perfect for what I wanted to do. Unfortunately, GV only lets you associate a mobile phone with one account, so I would have to move my mobile phone off my primary account and put it on the shared one. My roommate would have to do the same thing. This was not an ideal solution.
This is a newer service that one of my friends had played with before. I looked into what it had to offer feature-wise and quickly realized that it would do not only what I wanted, but “would-be-nice” features as well! Twilio does cost money, but not a lot of it. It’s $1/mo for a phone number, $.01/min for incoming calls, and $.02/min for outgoing calls. Plenty reasonable for this project.
Twilio opened up interesting new features that add to the coolness of this project.
Twilio itself runs on their own cloud infrastructure. I use one of my Amazon S3 buckets to host the static XML files that drive the logic and flow of the project. However, call menus require a dynamic web server (in this case, PHP). Twilio operates some of these for free, known as Twimlets. They are simple little Twilio apps that can provide a wide variety of call functionality.
There are three major components to the Cloudbell system:
A visitor walks up to the intercom and enters the apartment number (111#). The intercom has been programmed with my Twilio phone number (222-333-4444). The intercom dials the phone number.
Twilio is configured with a URL to GET when an incoming call is received. That URL is an XML file like this:
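Based on the attributes described below, the file is a <Gather> document roughly like this sketch (the prompt wording and the action URL are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Gather action="http://twimlets.com/menu?..." numDigits="4" method="POST">
        <Say>Enter the apartment number, or your access code.</Say>
      </Gather>
    </Response>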
Generated at https://www.twilio.com/labs/twimlets/menu
This code will prompt the user (<Say>) to enter 4 digits (numDigits="4") and POST the values to a URL (action="http://…"). Decoding that action URL reveals the menu options.
When the user presses 1, the application will call another bit of XML located at connect.xml. Likewise, when the user enters 2222, a similar bit of XML will be called at letthemin.xml. These are usually fully-qualified URLs.
Let's look at what happens when they hit 1.
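A rough sketch of connect.xml as described below. The original likely leaned on the simulring Twimlet for the "press any key" screening behavior, so treat this plain-TwiML version as illustrative only (phone numbers and URLs are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Say>Connecting</Say>
      <Dial timeout="10" action="http://example.com/fail.xml">
        <Number>111-111-1111</Number>
        <Number>222-222-2222</Number>
      </Dial>
    </Response>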
Here the system announces "Connecting" (<Say>) and dials two numbers (111-111-1111 and 222-222-2222). Whoever picks up first gets the call, and a message is played: "Press any key to answer and 9 to let them in". Simulring wants the receiver to accept the call before it is connected (hence the "press any key" part). 9 is the key that tells the intercom to unlock the door.
The <Dial> action has also been configured with a failure URL (FailUrl), which is a bit of code to call when no one picks up. This occurs after 10 seconds (timeout="10").
fail.xml is pretty simple:
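Something along these lines (the exact wording is a guess):

    <Response>
      <Say>No one is available to answer. Please try again later.</Say>
      <Hangup/>
    </Response>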
After this the call is terminated.
Now let's go back and look at the code for when the user enters the correct access code (2222). In this case a new file is called, located at letthemin.xml.
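A sketch, assuming the pre-generated audio file lives in the same S3 bucket (the URL is a placeholder):

    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Say>Access granted</Say>
      <Play>http://mybucket.s3.amazonaws.com/dtmf-9.mp3</Play>
    </Response>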
This code says “Access granted” then plays an audio file that has been pre-generated. In this case it is the DTMF tone for 9. I generated my file at DialABC.
By playing the tone, the intercom unlocks the door and terminates the call.
This system has worked pretty well thus far. I can give my friends the access code and they can let themselves into the building without me or my roommate doing any work. Hosting it completely in the cloud also gives the system a certain degree of reliability.
The purpose of this guide is to help an IT administrator install and configure the STARRS network asset tracking application for a basic site.
The intended audience of this guide is IT administrators with Linux experience (specifically Red Hat) and a willingness to use open-source software.
The install process should take around an hour, provided you have the appropriate resources.
STARRS is a self-service network resource tracking and registration web application used to monitor resource utilization (such as IP addresses, DNS records, computers, etc). For more details, view the project description here.
STARRS requires a database host and web server to operate. It is recommended to use a single virtual appliance for all STARRS functionality.
STARRS was tested only on RHEL-based Linux distributions. Anything RHEL6.0+ is compatible. There are no major requirements for the virtual appliance and a minimal software install will suffice. In this example we will be using a Scientific Linux 6.4 virtual machine.
Ensure that you are able to log into your remote system as any administrative user (in this example, root) and have internet access.
Firewalls can get in the way of allowing web access to the server. Only perform these steps if you have a system firewall installed and intend on using it. In this example we will use the RHEL default iptables.
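Something like the following opens HTTP and HTTPS and saves the rules (standard RHEL iptables commands):

    iptables -I INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
    iptables -I INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
    service iptables save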
NOTE: If you have IPv6 enabled on your system, make sure to apply firewall rules to the IPv6 firewall as well.
Any RHEL administrator has dealt with SELinux at some point in their career. There are system-wide settings that allow/deny actions by programs running on the server. Disabling SELinux is not a solution.
STARRS heavily depends on the PostgreSQL database engine for operation. PgSQL must be at version 9.0 or higher, which is NOT available from the standard RHELish repositories. PgSQL will also require the PL/Perl and PL/Python support packages (also NOT located in the repos). You will need to add new software repositories to your appliance.
These programs will be needed at some point or another in the installation process.
On your own computer, open up yum.postgresql.org and click on the latest available PostgreSQL Release link. In this case we will be using 9.2.
Locate the appropriate repository link for your operating system (Fedora, CentOS, SL, etc) and architecture (i386, x86_64, etc). In this case we will be using Scientific Linux 6 - i386.
Download the package from the link you located onto the virtual appliance into any convenient directory.
Install the package using yum. Answer yes if asked for verification.
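Assuming the repository package downloaded in the previous step (the exact filename will vary by release):

    yum localinstall pgdg-sl92-9.2-*.noarch.rpm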
Edit the repository files in /etc/yum.repos.d/ to exclude PostgreSQL. This will prevent any PgSQL packages from being used out of the base packages.
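One way to do this is to add an exclude line to each base repository section, for example:

    # /etc/yum.repos.d/sl.repo (repeat for each base/updates section)
    [sl]
    ...
    exclude=postgresql*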
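Next, install and initialize PostgreSQL from the new repository. A sketch using the standard PGDG 9.2 package and service names:

    yum install postgresql92-server postgresql92-plperl postgresql92-plpython
    service postgresql-9.2 initdb
    service postgresql-9.2 start
    chkconfig postgresql-9.2 on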
su to the postgres account and assign a password to the postgres user account using the query below. Exit back to root when done. This password should be kept secure!
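A minimal sketch (pick your own password):

    su - postgres
    psql
    postgres=# ALTER USER postgres WITH PASSWORD 'supersecret';
    postgres=# \q
    exit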
The STARRS installer requires passwordless access to the postgres account. This is achievable by creating a .pgpass file in the root home directory.
Create ~/.pgpass like such:
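The .pgpass format is hostname:port:database:username:password, one entry per line. The STARRS role names here are placeholders; use whatever you created for the admin and client users:

    localhost:5432:*:postgres:supersecret
    localhost:5432:*:starrs_admin:adminpassword
    localhost:5432:*:starrs_client:clientpassword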
You can replace the passwords with whatever you want. Note the passwords for later.
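The file must be readable only by its owner or PostgreSQL will ignore it:

    chmod 0600 /root/.pgpass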
Open /var/lib/pgsql/9.2/data/pg_hba.conf and change all methods to md5, like so:
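A sketch of the relevant lines:

    # TYPE  DATABASE  USER  ADDRESS       METHOD
    local   all       all                 md5
    host    all       all   127.0.0.1/32  md5
    host    all       all   ::1/128       md5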
This will enable login from Localhost only. If you need remote access, only allow the specific IP addresses or subnets that you need. Security is good.
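Restart PostgreSQL to apply the change, then confirm that .pgpass gets you in without a password prompt:

    service postgresql-9.2 restart
    psql -U postgres -h localhost -c "SELECT 1;"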
If you get prompted for a password, STOP! You need to have this working in order to proceed. Make sure you typed everything in the file correctly and its permissions are set.
STARRS has many other software dependencies in order to function. These are mostly Perl modules that extend the capabilities of the language. Both the CPAN name and the distribution package are provided for each module; however, not all packages may be available.
NOTE: The first time you run CPAN you will be asked some basic setup questions. Answering the defaults are fine for most installations.
STARRS comes in two parts: the backend (database) and the web interface. Each one is stored in its own repository on GitHub. You will need to download both in order to use the application. Right now we will focus on the backend.
Navigate to /opt and clone (download) the current versions of the repositories using Git into that directory.
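A sketch; the repository URLs are assumed from the project's GitHub account:

    mkdir -p /opt
    cd /opt
    git clone https://github.com/cohoe/starrs.git
    git clone https://github.com/cohoe/starrs-web.git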
The installer is located at /opt/starrs/Setup/Installer.pl. You will need to edit the values in the Settings section of the file to match your specific installation:
dbsuperuser is the root postgres account (usually just ‘postgres’).
dbadminuser is the STARRS admin user you created above.
dbclientuser is the STARRS client user you created above.
sample will populate sample data into the database. Set it to anything other than undef to enable sample content.
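Then run the installer:

    cd /opt/starrs/Setup
    perl Installer.pl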
A lot of text will flash across the screen. This is expected. If you see errors, then something is wrong and you should revisit the setup instructions.
You can verify that STARRS is functioning by running some simple queries.
If you see “Greetings admin!” and you can perform the queries without error, then your backend is all set up and is ready for the web interface.
The STARRS web interface requires a web server to be installed. Only Apache has been tested. If you have a new system then you might not have Apache (or in RHELish, httpd) installed.
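On a fresh system, something like this pulls in Apache along with PHP and PostgreSQL support (package names per RHEL 6):

    yum install httpd php php-pgsql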
Navigate to the /etc/httpd/conf.d directory and remove the welcome.conf file that exists there.
A minor change to the system PHP configuration allows the use of short tags for cleaner code. This feature must be enabled in the /etc/php.ini file:
short_open_tag = on
You will be cloning starrs-web into the Apache web root directory to simplify deployment.
Navigate to /var/www/html/ and clone the repository. Note the . at the end of the clone command.
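Again assuming the GitHub URL from earlier:

    cd /var/www/html
    git clone https://github.com/cohoe/starrs-web.git .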
Copy application/config/database.php.example to application/config/database.php.
Edit the copied/renamed database file and enter your database connection settings. Example:
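The application/config layout suggests a CodeIgniter-style $db array; a sketch with placeholder credentials:

    $db['default']['hostname'] = 'localhost';
    $db['default']['username'] = 'starrs_client';
    $db['default']['password'] = 'clientpassword';
    $db['default']['database'] = 'starrs';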
Copy application/config/impulse.php.example to application/config/impulse.php.
For this web environment, the defaults in this file are fine. In this file you can change which environment variable to get the current user from.
Apache needs to be given some special instructions to serve up the application correctly. Create a file at /etc/httpd/conf.d/starrs.conf with the following contents:
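A sketch of what that file needs to accomplish — HTTP basic authentication against the local user file created in the next step (directive values here are assumptions, not the original configuration):

    <Directory "/var/www/html">
        AllowOverride All
        AuthType Basic
        AuthName "STARRS"
        AuthUserFile /etc/httpd/starrs-auth.db
        Require valid-user
    </Directory>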
Create the local authentication database file referenced above (starrs-auth.db).
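The root/admin credentials referenced at the end of this guide can be created with htpasswd (the file path is an assumption):

    htpasswd -c -b /etc/httpd/starrs-auth.db root admin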
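Finally, start Apache and have it come up on boot:

    service httpd restart
    chkconfig httpd on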
At this point you should have a fully functioning STARRS installation. Navigate to your server in a web browser and you should be prompted for login credentials. As we established in the authentication database file, the username is root and the password is admin. If you get the STARRS main page, then success! Otherwise start looking through log files to figure out what is wrong.
Detailed troubleshooting is out of the scope of this guide. System Administrator cleverness is a rare skill, but is the most useful thing when trying to figure out what happened. Shoot me an email if you really feel something is wrong. I have included a few common errors here that I have run into when I don’t follow my own instructions.
If you see a database error rendered in the browser, it probably means your DB credentials in application/config/database.php are not correct.
If you see a blank page in the browser and a PHP parse error in the log file, it probably means that PHP short_open_tag is not On in /etc/php.ini.
This presentation was never given explicitly, but it was used for a Public Speaking class.
Presented at Barcamp Rochester in Spring 2013
We can only use three channels in the 5 GHz spectrum. RIT has a massive unified wireless deployment across all of campus. Policy states that third parties (students, staff, etc) cannot operate wireless access points within range of the RIT network, to prevent signal overlap and interference. This is not an issue in the 5 GHz spectrum, where there are many more non-overlapping channels. This limits potential clients to only those with dual-band wireless radios.
No client-side software can be required. This is a requirement set by us to allow ease of access for users. Most operating systems support PEAP/MSCHAPv2 (via RADIUS) out of the box and require no extra configuration to make them work. Unfortunately this requires a bit more system infrastructure to make it work. Our authentication backend (MIT Kerberos) does not support this type of user authentication, so we need to do some hacks to make it.
We need to be faster than RIT. There is absolutely no reason to use CSH wireless over RIT's unless we can be faster. By using channel bonding in two key areas, we can achieve this requirement and double the speed of RIT wireless. We also need to be able to do fast reassociation between access points so that users can walk up and down the floor without losing connectivity.
First we had to figure out where and how to deploy our range of APs. Since we have relatively few resources, we used a combination of hardware we had lying around, purchased, or had donated.
Looking at the map of the floor, I wanted to place the channel-bonded APs in the areas with the largest concentration of users (the Lounge and the User Center). The others would be dispersed across the other public rooms on floor.
[[ IMAGE HERE ]]
Cisco WDS is essentially a poor man's controller. WDS allows you to do authentication once across your wireless domain and do relatively seamless handoff between access points. I had an extra 1230 lying around without antennas, so I parked it in my server room and configured it to act as the WDS master. When a client attempts to authenticate to an AP, the auth data is sent to the WDS server, where it is processed and a response sent to the AP to let them in or not. If the client roams to another AP, the WDS server promises the new AP that the client is OK and skips the authentication phase.
This is the only device that will ever talk to the RADIUS server, so all of the configuration for that is only needed once.
NOTE: In this example the WDS server is at IP address 192.168.0.250 and the RADIUS server is at 192.168.0.100.
The access points need to use the AP username defined above to talk to the WDS server. RADIUS configuration here should be optional.
We start with the user entering their credentials. These are entered in the standard Windows logon screen (which I dearly miss from Windows 2003). From there, the client is configured to have its default domain (realm in this case) be the MIT Kerberos realm that your machine is a member of. This would be the equivalent of an Active Directory domain.
From here, your workstation acts just like any other Kerberized host. It uses its host principal (derived from what it thinks its hostname is) and configured password to authenticate to the KDC. Once it has verified its identity, it then goes to authenticate you. And if your password is correct, Windows lets you in!
The question is: who does it let you in as? A Kerberos principal is nothing more than a name. It is not an account, or any object that actually contains information for the system. You must map Kerberos principals to accounts. These accounts are created in Windows as either local users or via Active Directory (a whole other can of worms). Usually, you will want to create a one-to-one mapping between principal and account (the grant@GRANTCOHOE.COM principal maps to the Windows account "grant").
On your KDC, open up kadmin with your administrative principal and create a host principal for the new machine. It needs to have a password that you know, so use your favorite password-generating source.
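A sketch of the kadmin session (the admin principal and hostname are examples; kadmin will prompt for the new principal's password):

    kadmin -p admin/admin@GRANTCOHOE.COM
    kadmin:  addprinc host/winbox.grantcohoe.com@GRANTCOHOE.COM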
That’s it for the KDC. On to the Windows machine!
Windows 7 (as well as Server 2008, I believe) includes the basic utilities required to support MIT Kerberos. These were available for XP and Server 2003 as the "Microsoft Support Tools". The utility we will be using is called ksetup. It allows for configuration of foreign-realm authentication systems.
So to set your machine to authenticate to your already existing MIT Kerberos KDC, open up a command prompt and do the following:
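A sketch using standard ksetup subcommands (the realm, KDC hostname, and machine password from the kadmin step are examples):

    ksetup /setdomain GRANTCOHOE.COM
    ksetup /addkdc GRANTCOHOE.COM kdc.grantcohoe.com
    ksetup /setmachpassword MachinePasswordFromKadmin
    ksetup /mapuser * *

The /mapuser * * form maps each principal to the local account of the same name.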
After that, you need to set a Group Policy setting to automatically pick the right domain for you to login to. Open up the Group Policy Editor (gpedit.msc) and navigate to Local Computer Policy->Computer Configuration->Administrative Templates->System->Login->Assign a default domain for logon. Set this to your realm (GRANTCOHOE.COM in my case).
Do a nice reboot of your system, and you should be ready to go!
Active Directory can be configured to trust a foreign Kerberos realm. It will NOT synchronize information with anything, but if you just need a bunch of users to log into something and no data with it, the process is not too terrible. Rather than duplicate the information, you can find the guide that I used here: Microsoft trusting MIT Kerberos
Let's say you are in a domain, and you wish to access several different web services. How often do you find you have to enter the same username and password over and over again to get to each website? Wouldn't it be nice if you could just enter it once and automagically have access to all the web services you need? Well guess what? You can!
It all works off of MIT Kerberos. Kerberos is a mechanism that centralizes your authentication to one service and allows for Single Sign-On. The concept of SSO is fairly simple: enter your credentials once and that's it. It saves you from typing them over and over again and protects against certain attacks.
WebAuth was developed at Stanford University and brings “kerberization” to web services. All you need is some modules, a KDC, and a webserver.
In this guide, we will be utilizing the Kerberos realm/domain name of EXAMPLE.COM. We will also be using two servers:
Note that you will need a pre-existing Kerberos KDC in your network.
There are three components of the WebAuth system. WebAuth typically refers to the component that provides authorized access to certain content. The WebKDC is the process that handles communication with the Kerberos KDC and distributes tickets out to the client machines. The WebLogin pages are the login/logoff/password change pages that are presented to the user to enter their credentials. Certain binary packages provided by Stanford do not contain components of the WebAuth system. This is why we are going to build it from source.
Each machine you plan to install WebAuth on needs the following packages:
Enterprise Linux does not include several required Perl modules for this to work. You are going to need to install them via CPAN. If you have never run CPAN before, do it once just to get the preliminary setup working.
You also need Remctl, an interface to Kerberos. For EL6 we are using remctl 2.11 available from the Fedora 15 repositories.
Open up ports 80 (HTTP) and 443 (HTTPS) since we will be doing web stuff.
Since you will be doing Kerberos stuff, you need a synchronized clock. If you do not know how to do this, see my guide on Setting up NTP.
Your machine needs to be a Kerberos client (meaning valid DNS, krb5.conf, and a host principal in its /etc/krb5.keytab). We will be using three keytabs for this application:
You need an SSL certificate matching the hostname of each server (gmcsrvx2.example.com, gmcsrvx3.example.com, webauth.example.com) available for your Apache configuration.
IMPORTANT: You also need to ensure that your CA is trusted by Curl (the system). If in doubt, cat your CA cert into /etc/pki/tls/certs/ca-bundle.crt. You will get really odd errors if you do not do this.
We will be installing WebAuth into /usr/local, and this needs its libraries to be linked in the system.
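One way to do that:

    echo "/usr/local/lib" > /etc/ld.so.conf.d/webauth.conf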
We will need to run ldconfig to load in the WebAuth libraries later on.
The latest version at the time of this writing is webauth-4.1.1 and is available from Stanford University. Grab the source tarball, since the packages do not include features that you will need in a brand new installation.
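A sketch of the download and configure steps (the download URL is assumed; --enable-webkdc builds the WebKDC component):

    wget http://webauth.stanford.edu/dist/webauth-4.1.1.tar.gz
    tar xzf webauth-4.1.1.tar.gz
    cd webauth-4.1.1
    ./configure --enable-webkdc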
Assuming everything is all good, go ahead and build it.
And assuming everything is happy, make install:
Now ensure that the new libraries get loaded by running ldconfig. This looks at the files in that ld.so.conf.d directory and adds them to the library path. MAKE SURE YOU DO THIS!!!
The Apache modules for WebAuth have been compiled into /usr/local/libexec/apache2/modules:
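They get loaded into Apache with LoadModule directives along these lines:

    LoadModule webauth_module /usr/local/libexec/apache2/modules/mod_webauth.so
    LoadModule webkdc_module  /usr/local/libexec/apache2/modules/mod_webkdc.so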
Make sure you have the proper keytab set up at /etc/webauth/webauth.keytab. This file can technically be located anywhere as long as you configure the module accordingly.
Next you need to create a local state directory for WebAuth. Usually this is /var/lib/webauth. The path can be changed as long as you set it in the following configuration file.
The Apache module needs a configuration file to be included in the Apache server configuration. On Enterprise Linux, this can be located in /etc/httpd/conf.d/mod_webauth.conf:
Again, adjust paths to suit your tastes. The WebAuthWebKdcSSLCertFile should be your public CA certificate that you probably cat'd earlier.
Make sure you have the proper keytab set up at /etc/webkdc/webkdc.keytab. This file can technically be located anywhere as long as you configure the module accordingly.
Next you need to create a local state directory for the WebKDC. Usually this is /var/lib/webkdc. The path can be changed as long as you set it in the configuration files.
The WebKDC utilizes an ACL to allow only certain hosts to get tickets. This goes in /etc/webkdc/token.acl:
For the WebLogin pages, another configuration file is used to specify their settings (in /etc/webkdc/webkdc.conf). This is separate from the Apache module configuration loaded by the webserver.
NOTE: You CANNOT change the location of this file or the WebKDC module will freak out.
Note that certain directives must match the Apache configuration. We will make that now in /etc/httpd/conf.d/mod_webkdc.conf:
Ensure that your paths match what you set up earlier.
This step should only be done if you have an LDAP server operating in your network and can accept SASL binds. You can make certain LDAP attributes available in the web environment as server variables for your web applications. This is configured in /etc/httpd/conf.d/mod_webauthldap.conf:
Most of the files that you created do not have the appropriate permissions to be read/written by the webserver. Let's fix that.
I created a VHost for both "webauth.example.com" and "gmcsrvx2.example.com", with the latter requiring user authentication to view. I name these after the hostname they are serving in /etc/httpd/conf.d/webauth.conf. Just throw a simple Hello World into the DocumentRoot that you configure for your testing host. Note that you must have NameVirtualHost-ing set up for both ports 80 and 443.
And the host file at /etc/httpd/conf.d/gmcsrvx2.conf:
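The protected VHost boils down to an AuthType WebAuth block; a trimmed sketch under assumed paths (not the full original configuration):

    <VirtualHost *:443>
        ServerName gmcsrvx2.example.com
        DocumentRoot /var/www/gmcsrvx2
        SSLEngine on
        SSLCertificateFile /etc/pki/tls/certs/gmcsrvx2.pem
        SSLCertificateKeyFile /etc/pki/tls/private/gmcsrvx2.pem
        <Location />
            AuthType WebAuth
            Require valid-user
        </Location>
    </VirtualHost>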
Once you are all set, start Apache. Then navigate your web browser to the host (http://gmcsrvx2.example.com). This should first redirect you to HTTPS, then bounce you to the WebLogin pages. After authenticating, you should be able to access the server.
This is by no means a simple thing to get set up. An intimate knowledge of how Kerberos, Apache, and LDAP work is crucial to debugging issues relating to WebAuth. All of my sample configuration files can be found in the Archive.
First let's review how your traffic gets to its destination. You open up your favorite web browser and punch in "www.google.com". Since your computer works with IP addresses, and not names (like people do), it needs to resolve the name. The Domain Name System (DNS) is the service that does this. Your ISP runs DNS servers that are typically given to you in your DHCP lease. Your computer sends a query to this DNS server asking "what is the IP address of www.google.com?". Since your ISP does not have control over the "google.com" domain, the request is forwarded to another server. Eventually someone says "Hey! www.google.com is at 72.14.204.104". This response is sent back through the servers to your computer.
After resolving the name, your computer creates a data packet that will be sent to the computer at 72.14.204.104. The packet gets there by consulting your host's route table, which will send it to your default gateway.
IP over DNS is a method of encapsulating IP packets inside DNS queries. This essentially creates a VPN between a server and a client. In my case, I set up and configured the Iodine DNS server on my server located at RIT, which I would use to route my traffic to the internet. Inside the tunnel exists the subnet 192.168.0.0/27, with the server at 192.168.0.1. My client connected and received IP address 192.168.0.2.
Certain WLAN installations will include a guest SSID that non-controlled clients can associate to (in this scenario, I will use ExampleGuest, keeping the actual one I used anonymous). Often your traffic will get routed to a captive portal first, requiring you to enter credentials supplied by your organization. Until you do so, none of your traffic will reach its destination outside the LAN… UNLESS you discover a vulnerability, such as the one we are going to explore.
When I walked into the Example Corp site, I flipped open my trusty laptop and got to work. After associating with the ExampleGuest SSID, I was given the IP address 10.24.50.22/24 with gateway 10.24.50.1. I opened my web browser in attempt to get to “www.google.com”, and was immediately directed to the captive portal asking me to log in. Obviously I do not have credentials to this network, so I am at a dead end at this time.
Next I whipped out a terminal session and did a lookup on "www.google.com". Lo and behold, the name was resolved to its external IP address. This means that DNS traffic was being allowed past the captive portal and out onto the network. If "www.google.com" had resolved to something like "1.1.1.1" or an address in the 10.24.0.0/16 space, I would know that the captive portal was grabbing all of my traffic, and that would be the end of this experiment. However, as I saw, this was not the case. External DNS queries were still being answered. Time to do some hacks.
Step 1) I need my Iodine client to be able to “query” my remote name server. I added a static route to my laptop that forced traffic to my server through the gateway given to me by the DHCP server. (ip route add 129.21.50.104 via 10.24.50.1)
Step 2) I need to establish my IPoDNS tunnel (iodine -P mypasswordwenthere tunnel.grantcohoe.com). A successful connection gave me the IP address 192.168.0.2. I was then able to ping 192.168.0.1, which is the inside IP address of the tunnel.
Step 3) I need to change my default gateway to the server address inside the tunnel (ip route add default via 192.168.0.1).
And with that, all of my traffic was encapsulated in DNS queries and sent over the IP over DNS tunnel to my server, thus giving me unrestricted internet access and bypassing the captive portal.
This entire experiment would grind to a halt if DNS queries were not being handled externally. The network administrator should enable features necessary to either drop DNS requests or answer them with the captive portal IP address.
End result: Unrestricted internet access over a wireless network that I have no credentials to.
Difficulty of setting this up: HIGH Speed of tunneled internet: SLOW Worth it for practical use: NOT AT ALL Worth it for education: A LOT
This sort of trick can work on airplane and hotel wireless systems as well. Most sites do not think to have their captive portals capture DNS traffic in addition to regular IP traffic. As we can see here, it can be used against them.
I have a shiny new VM server sitting in my dorm room. I have access to two networks, one operated by my dorm organization and the other provided to me by RIT. Both get me to the internet, but do so through different paths/SLAs. I want my server to be accessible from both networks. I also want to be able to attach VM hosts to NAT'd networks behind each respective network. This gives me a total of four possible VM networks (primary-external, primary-internal, secondary-external, secondary-internal). I have a Hurricane Electric IPv6 tunnel endpoint configured on the server, and want IPv6 connectivity available on ALL networks regardless of being external or internal. And to complicate matters, my external IP addresses are given to me via DHCP so I cannot set anything statically. Make it so.
First, like any good shell script, we should define our command paths:
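A sketch; the variable names are illustrative:

    IP=/sbin/ip
    IPTABLES=/sbin/iptables
    IP6TABLES=/sbin/ip6tables
    EBTABLES=/sbin/ebtables
    SYSCTL=/sbin/sysctl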
Next we’ll define the interface names that we are going to use. These should already be pre-configured and have their appropriate IP information.
Now we need to load in the relevant subnet layouts. In an ideal world this would be set statically. However due to the nature of the environment I am in, both of my networks serve me my address via DHCP and is subject to change. This is very dirty and not very efficient, but it works:
In order to get most of these features working, we need to enable IP forwarding in the kernel.
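The standard sysctl knobs, for both address families (using the $SYSCTL variable sketched earlier):

    $SYSCTL -w net.ipv4.ip_forward=1
    $SYSCTL -w net.ipv6.conf.all.forwarding=1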
The routes are the most critical part of making this whole system work. We use two cool features of the linux networking stack, Route Tables and Route Rules. Route Tables are similar to your VRF tables in Cisco-land. Basically you maintain several different routing tables on a single system in addition to the main one. In order to specify what table a packet should use, you configure Route Rules. A rule basically says “Packets that match this rule should use table A”. In this system I use rules to force a packet to use a route table based on its source address.
My script also maintains the host firewall rules, so we’ll enable those:
One of the features of my networking setup is that all networks, internal or external, have IPv6 connectivity. This is achieved by sending Router Advertisements out all interfaces. This is all well and good for the VMs that I am hosting, but these advertisements travel out of the physical interfaces to other hosts on the network! This is not good, in that I would be allowing other users to use my IPv6 tunnel interface. This puzzled me for a long time until I discovered a handy little program called "ebtables". Ebtables is basically iptables for layer 2. With it, I was able to filter all router advertisement broadcasts out of the physical interfaces.
You now have a dual-homed server with two external networks, two internal networks, and IPv6 connectivity to all. Both external networks can receive their configuration via DHCP and will dynamically adjust their routing accordingly.
The logic behind these guidelines is relatively simple. There is no substitute for real-life interaction. The idea of "meeting someone over Facebook" doesn't sit well with me. Whenever I see a person friend-request someone they don't really know or haven't had any degree of conversation with, it makes me question their ability to do so face-to-face. It also helps avoid the potential "stalker" effect that can come about easily with social media.
Take for example Jack and Jill. Jack is an avid fan of a school sports team. Jill is his favorite player on the sports team. Jack and Jill do not know each other, and have had nothing more than a dialog (as defined earlier) between them at various meet-n-greet events. Jill gets a friend-request from Jack several days later. Jill, not wanting to upset Jack, accepts the friend-request. She now encounters frequent chat messages from Jack that lead to quick and dull conversations. Various other actions get Jack labeled as a "stalker" in Jill's mind. This is not where Jack wanted to end up, as he is now considered "the bad guy". While his motives (assume everything here to be at face value, with no hidden agendas) are honorable, they have led to an awkward situation for both of them.
By my guidelines, Jill should never have accepted the friend-request. Similarly Jack should have never sent one. Since neither of them have had any level of in-person conversations, they have no common ground on which to establish a friendship.
I will occasionally receive friend-requests from people from High School. These I find myself subjecting to a higher degree of scrutiny since I do not associate with this crowd a whole lot anymore. If I did not particularly enjoy your presence back then, I probably have no reason to change that.
My organization stores user information in an OpenLDAP directory server. This includes contact information, shell preferences, iButton IDs, etc. The directory server does not store the users' passwords and relies on a Kerberos KDC to provide authentication via SASL. This is a widely supported configuration and is popular because you can achieve SSO (Single Sign-On) and keep passwords out of your directory. Our shell access machines are configured to do LDAP authentication of users, which basically instructs PAM to do a bind to the LDAP server using the credentials provided by the user. If the bind is successful, the user logs in. If not, well, sucks to be them. Before this project we had a single server handling the directory, running CentOS 5.8/OpenLDAP 2.3. If this server went poof, then all user data would be lost and no one could log into any other servers (or websites if configured with Stanford WebAuth; see the guide for more on this). Obviously this is a problem that needed correcting.
OpenLDAP 2.4 introduced a method of replication called "MirrorMode". This allows you to use the already existing syncrepl protocol to synchronize data between multiple servers. Previous versions (OpenLDAP <= 2.3) only allowed a master-slave style of replication, where the slaves are read-only and unwilling to perform changes. Obviously this has some limitations: if your master (or "provider" in OpenLDAP-ish) were to die, then your users cannot make any changes until you get it back online. Using MirrorMode, you can create an "N-way Multi-Master" topology that allows all servers to be read/write and instantly replicate their changes to the others.
To enable users to have LDAP access even if one of the servers dies, we need a way to keep the packets flowing and automatically fail over to the secondary servers. Linux has a package called "Heartbeat" that allows this sort of functionality. If you are familiar with Cisco HSRP or the open-source VRRP, it works the same way. Server A is assigned an IP address (let's say 10.0.0.1) and Server B is assigned a different address (let's say 10.0.0.2). Heartbeat is configured to provide a third IP address (10.0.0.3) that will always be available between the two. On the primary server, an ethernet alias is created with the virtualized IP address. Periodic heartbeats (keep-alive packets) are sent out to the other servers to indicate "I'm still here!". Should these messages start disappearing (like when the server dies), the secondary will notice the lack of updates and automatically create a similar alias interface on itself and assign that virtualized IP. This allows you to give your clients one host, but have it seamlessly float between several real ones.
The easy way of configuring MirrorMode requires you to store your replication DN's credentials in plaintext in the config file. Obviously this is not very secure. Instead, we can use the host's Kerberos principal to bind to the LDAP server as the replication DN and perform all of the tasks we need to do. This is much better than plaintext!
Since we like being secure, we should really be using LDAPS (LDAP + SSL) to get our data. We will be setting this up too using our PKI. We will save this for the end since we want to ensure that our core functionality actually works first.
I spun up two new machines; let's call them "warlock.example.com" and "ldap2.example.com". Each of them runs my favorite Scientific Linux 6.2, but with OpenLDAP 2.4. You cannot do MirrorMode between 2.3 and 2.4.
First we need to install the required packages for all of this nonsense to work.
If you are coming off of a fresh minimal install, you might want to install openssh-clients and vim as well. Some packages may not be included in your base distribution and may need other repositories (EPEL, etc)
Each host needs its own set of host/ and ldap/ principals as well as a shared one for the virtualized address. In the end, you need a keytab with the following principals:
WARNING: Each time you write a -randkey'd principal to a keytab, its KVNO (Key Version Number) is increased, thus invalidating all previously written principals. You need to merge the shared principals into each host's keytab. See my guide on Kerberos Utilities for information on doing this.
Put this file at /etc/krb5.keytab and set an ACL on it such that the LDAP user can read it.
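For example, with POSIX ACLs:

    setfacl -m u:ldap:r /etc/krb5.keytab
    getfacl /etc/krb5.keytab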
If you see something like “kerberos error 13” or “get_sec_context: 13” it means that someone cannot read the keytab, usually the LDAP server. Fix it.
Kerberos and the OpenLDAP synchronization require synchronized clocks. If you aren’t already set up for this, follow my guide for setting up NTP on a system before continuing.
You need saslauthd to be running on both LDAP servers. If you do not already have this set up, follow my guide for SASL Authentication before continuing.
We will be using Syslog and Logrotate to manage the logfiles for the LDAP daemon (slapd). By default it will spit out to local4. This is configurable depending on your system, but for me I am leaving it as the default. Add an entry to your rsyslog config file (usually /etc/rsyslog.conf) for slapd:
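A single line will do:

    local4.*    /var/log/slapd.log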
Now unless we tell logrotate to rotate this file, it will get infinitely large and cause you lots of headaches. I created a config file to automatically rotate this log according to the system defaults. This was done at /etc/logrotate.d/slapd:
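A minimal sketch:

    /var/log/slapd.log {
        missingok
    }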
Then restart rsyslog. Logrotate runs on a cron job and is not required to be restarted.
Since I am grabbing the data from my old LDAP server, I will not be setting up a new data directory. On the old server and on the new master, the data directory is /var/lib/ldap/. I simply scp'd all of the contents from the old server over to this directory. If you do this, I recommend stopping the old server for a moment to ensure that there are no changes occurring while you work. After scp-ing everything, make sure to chown everything to the LDAP user. I also recommend running slapindex to ensure that all data gets reindexed.
Client Library Configuration
Now to make it convenient to test your new configuration, edit your /etc/openldap/ldap.conf to change the URI and add the certificate settings.
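A sketch matching the certificate paths used below:

    URI ldaps://ldap.example.com
    BASE dc=example,dc=com
    TLS_CACERT /etc/pki/tls/example-ca.crt
    TLS_REQCERT allow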
When configuring this file on the servers, TLS_REQCERT must be set to allow since we are doing Syncrepl over LDAPS. Since we are using a shared certificate for ldap.example.com, it will not match the hostname of the server and verification would fail. On all of your clients you should certainly "demand" the certificate, but in this instance that would prevent us from accomplishing Syncrepl over LDAPS.
Since people want to be secure, OpenLDAP has the ability to do sessions inside of TLS tunnels. This works the same way HTTPS traffic does. To do this you need to have the ability to generate SSL certificates based on a given CA. This procedure varies from organization to organization. Regardless of your method, the hostname you chose is critical as this will be the name that is verified. In this setup, we are creating a host called “ldap.example.com” that all LDAP clients will be configured to use. As such the same SSL certificate for both hosts will be generated for “ldap.example.com” and placed on each server.
I placed the public certificate at /etc/pki/tls/certs/slapd.pem, the secret key in /etc/pki/tls/private/slapd.pem, and my CA certificate at /etc/pki/tls/example-ca.crt. After obtaining my files, I verify them to make sure that they will actually work:
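For example:

    openssl verify -CAfile /etc/pki/tls/example-ca.crt /etc/pki/tls/certs/slapd.pem
    openssl x509 -in /etc/pki/tls/certs/slapd.pem -noout -subject -dates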
You need to allow the LDAP account to read the private certificate:
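Again with an ACL:

    setfacl -m u:ldap:r /etc/pki/tls/private/slapd.pem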
If you are using a previous server configuration, just scp the entire /etc/openldap directory over to the new servers. Make sure you blow away any files or directories that may have been added for you before you copy. If you are not, you might need to do a bit of setup. OpenLDAP 2.4 uses a new method of configuration called "cn=config", which stores the configuration data in the LDAP database. However, since this is not fully supported by most clients, I am still using the config file. For setting up a fresh install, see one of my other articles on this (still in progress at the time of this writing).
The following directives will need to be placed in your slapd.conf for the SSL certificates:
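Using the paths from above:

    TLSCACertificateFile /etc/pki/tls/example-ca.crt
    TLSCertificateFile /etc/pki/tls/certs/slapd.pem
    TLSCertificateKeyFile /etc/pki/tls/private/slapd.pem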
Depending on your system, you may need to configure the daemon to run on ldaps://. On Red-Hat based systems this is in /etc/sysconfig/ldap.
You need to add two things to your configuration for Syncrepl to function. First, to your slapd.conf:
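A sketch of the MirrorMode block on warlock; each server points its provider at the other. This is an outline, not the full original configuration:

    serverID 1

    syncrepl rid=001
             provider=ldaps://ldap2.example.com
             type=refreshAndPersist
             retry="5 5 300 +"
             searchbase="dc=example,dc=com"
             bindmethod=sasl
             saslmech=gssapi

    mirrormode on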
The serverID value must uniquely identify the server. Make sure you change it when inserting the configuration onto the secondary server. Likewise change the Syncrepl RID as well for the same reason.
Second, you need to allow the replication binddn full read access to all objects. This should go in your ACL file (which could be your slapd.conf, but shouldn't be).
WARNING: If you do not have an existing ACL setup, doing just these entries will prevent anything else from doing anything with your server. These directives are meant to be ADDED to an existing ACL.
You should be all set and ready to start the server.
Make sure that when you do your SASL bind, the server reports your DN correctly. To verify this:
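For example, after a kinit with the host's ldap/ principal:

    ldapwhoami -Y GSSAPI -H ldaps://ldap.example.com
    # Expect a SASL-derived DN along the lines of:
    # dn:uid=ldap/warlock.example.com,cn=gssapi,cn=auth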
That DN is the one that should be in your ACL and have read access to everything.
Since the LDAP user will always need this principal, I recommend adding a cronjob to keep the ticket alive. I wrote a script that will renew your ticket and fix an SELinux permissioning problem as well. You can get it here. Just throw it into your /usr/local/sbin/ directory. There are better ways of doing this, but this one works just as well.
Don't forget to open up ports 389 and 636 for LDAP and LDAPS!
After configuring both the master and the secondary servers, we now need to get the data initially replicated across your secondaries. Assuming you did like me and copied the data directory from an old server, your master is the only one that has real data. Start the service on this machine. If all went according to plan, you should be able to search for an object to verify that it works.
If you get errors, increase the debug level (-d 1) and work out any issues.
After doing a kinit as the LDAP user described above (host/ldap2.example.com), start the LDAP server on the secondary. If you tail the slapd log file, you should start seeing replication events occurring really fast. This is the server loading its data from the provider as specified in slapd.conf. Once it is done, try searching the server in a similar fashion as above.
Heartbeat provides failover for configured resources between several Linux systems. In this case we are going to provide high-availability of an alias IP address (10.0.0.3) which we will distribute to clients as the LDAP server (ldap.example.com). Heartbeat configuration is very simple and only requires three files:
/etc/ha.d/haresources contains the resources that we will be failing over. It should be the same on both servers.
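One line: the node that normally owns the resource, followed by the IPaddr resource for the shared address:

    warlock.example.com IPaddr::10.0.0.3/24/eth0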
The reason the name of the host above is not “ldap.example.com” is because the entry must be a node listed in the configuration file (below).
/etc/ha.d/authkeys contains a shared secret used to secure the heartbeat messages. This ensures that someone doesn't just throw up a server and claim to be part of your cluster.
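For example:

    auth 1
    1 sha1 SomeSharedSecret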
Make sure to chmod it to something not world-readable (600 is recommended).
Finally, /etc/ha.d/ha.cf is the Heartbeat configuration file. It should contain entries for each server in your cluster, timers, and log files.
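A sketch with typical timers (tune to taste):

    logfile /var/log/ha-log
    keepalive 2
    deadtime 30
    udpport 694
    bcast eth0
    auto_failback on
    node warlock.example.com
    node ldap2.example.com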
After all this is set, start the heartbeat service. After a bit you should see another ethernet interface show up on your primary. To test failover, unplug one of the machines and see what happens!
A side effect of having the LDAP server respond on the shared IP address is that it gets confused about its hostname. As such, you need to edit your /etc/hosts file to point the ldap.example.com name at each of the actual host IP addresses. In this example, ldap1 is 10.0.0.1 and ldap2 is 10.0.0.2. You should modify your hosts file to include:
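On ldap2 the entries would look like this (the local host's address goes first):

    10.0.0.2    ldap.example.com
    10.0.0.1    ldap.example.com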
The first entry should be the address of the local system (10.0.0.2 should be the first in the case of ldap2).
Your DNS server should be able to do forward and reverse lookups for ldap.example.com going to 10.0.0.3 (the highly available address).
If you are having issues binding between the servers or binding from clients, it is usually due to DNS problems.
If all went according to plan, you should be able to see Sync messages in your slapd log file.
Likewise if you log into a client machine (in my case, I made this the KDC) you should be able to search at only the ldap.example.com host. The others will return errors.
The reason is that the client has a different view of what the server’s hostname is. This difference causes SASL to freak out and not allow you to bind.
Attempting the bind and watching the server's log reveals a SASL error. This means your DNS is not hacked (via /etc/hosts) or functional; fix it according to the instructions above.
The LDAP log reports that the SASL/GSSAPI layer cannot read its credentials, and /var/log/messages contains a corresponding SELinux denial.
You need to change the SELinux context of your KRB5CC file (default is /tmp/krb5cc_$UIDOFLDAPUSER on Scientific Linux) to something the slapd process can read. Since every time you re-initialize the ticket you risk defaulting the permissions, I recommend using my script in your cronjob from above. If someone finds a better way to do this, please let me know!
If, after enabling SSL on your client, you receive a TLS connection error (visible with -d 1), check your certificate files on the SERVER. Odds are you have an incorrect path.
You now have two independent LDAP servers running OpenLDAP 2.4 and synchronizing data between them. They are intended to listen on a Heartbeat-ed service IP address that will always be available even if one of the servers dies. You can also do SASL binds to each server to avoid having passwords stored in your configuration files.
If after reading this you feel there is something that can be improved or isn't clear, feel free to contact me! You can also grab sample configuration files that I used at http://archive.grantcohoe.com/projects/ldap.
kinit will initiate a new ticket from the Kerberos system. This is how you renew your tickets to access kerberized services or renew service principals for daemons. You can kinit interactively by simply running kinit and giving it the principal you want to init as:
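For example (the realm and principal are illustrative):

    $ kinit grant@EXAMPLE.COM
    Password for grant@EXAMPLE.COM:
    $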
No response is good, and you can view your initialized ticket with klist (discussed later on).
You can also kinit with a keytab by giving it the path to a keytab and a principal.
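For example:

    $ kinit -kt /etc/krb5.keytab host/server.example.com@EXAMPLE.COM
    $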
Note that it did not prompt me for a password. That is because it is using the stored principal key in the keytab to authenticate to the Kerberos server.
klist is commonly used for two purposes: 1) List your current Kerberos tickets and 2) List the principals stored in a keytab. Running klist without any arguments will perform the first action.
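Typical output looks along these lines:

    $ klist
    Ticket cache: FILE:/tmp/krb5cc_1000
    Default principal: grant@EXAMPLE.COM

    Valid starting     Expires            Service principal
    01/01/13 10:00:00  01/01/13 20:00:00  krbtgt/EXAMPLE.COM@EXAMPLE.COM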
To list the contents of a keytab:
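For example:

    $ klist -k /etc/krb5.keytab
    Keytab name: FILE:/etc/krb5.keytab
    KVNO Principal
    ---- ---------------------------------------------------------
       2 host/server.example.com@EXAMPLE.COM
       2 host/server.example.com@EXAMPLE.COM
       2 host/server.example.com@EXAMPLE.COM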
The duplication of the principal names represents each of the encryption types that are stored in the keytab. In my case, I use three encryption types to store my principals. If you support older deprecated enc-types (as Kerberos calls them), you will see more entries here.
kdestroy destroys all Kerberos tickets for your current user. You can verify this by doing:
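For example:

    $ kdestroy
    $ klist
    klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_1000)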
The Keytab Utility lets you view and modify keytab files. To start, you need to read in a keytab
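For example, reading in a keytab and listing its contents:

    $ ktutil
    ktutil:  rkt /etc/krb5.keytab
    ktutil:  list
    slot KVNO Principal
    ---- ---- ---------------------------------------
       1    2 host/server.example.com@EXAMPLE.COM
       2    2 host/server.example.com@EXAMPLE.COM
       3    2 host/server.example.com@EXAMPLE.COM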
If you want to merge two keytabs, you can repeat the read command and the second keytab will be appended to the list. You can also selectively add and delete entries to the keytab as well. Once you are done, you can write the keytab out to a file.
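For example, merging a second keytab and writing the result:

    ktutil:  rkt /etc/other.keytab
    ktutil:  wkt /etc/krb5-merged.keytab
    ktutil:  q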
Multipath routing is when you have a router with multiple equal-cost paths to a single destination. Since each of these routes carries the same weight, the router will distribute traffic across all of them roughly equally. You can read more on this technology on Wikipedia. This is more about the implementation of such a topology.
In my setup, I have three clients on the 192.168.0.0/24 subnet with gateway 192.168.0.254. Likewise I have three servers on the 10.0.0.0/24 subnet with gateway 10.0.0.254. I want my clients to always be able to access a webserver at host 172.16.0.1/32. This is configured as a loopback address on each of the three servers. This is the multipath part of the project. Each of the servers has the same IP address on a loopback interface and will therefore answer on it. However since the shared router between these two networks has no idea the 172.16.0.1/32 network is available via the servers, my clients cannot access it. This is where the routing comes into play. I will be using an OSPF daemon on my servers to send LSA’s (read up on OSPF if you are not familiar) to the router.
You need to allow OSPF traffic to enter your system. IPtables supports the OSPF transport so you only need a minimum of one rule:
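For example (-p ospf resolves to IP protocol 89 via /etc/protocols):

    iptables -A INPUT -p ospf -j ACCEPT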
Save your firewall configuration after changing it.
You need to install Quagga, a suite of routing daemons that will be used to talk OSPF with the router. These packages are available in most operating systems.
Configure your main interface statically as you normally would. You also need to set up a loopback interface for the highly-available host. Procedure for this varies by operating system. Since I am using Scientific Linux, I have a configuration file at /etc/sysconfig/network-scripts/ifcfg-lo:1:
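A sketch:

    DEVICE=lo:1
    IPADDR=172.16.0.1
    NETMASK=255.255.255.255
    ONBOOT=yes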
Make sure you restart your network process (or your host) after you do this. I first thought the reason this entire thing wasn’t working was a Linux-ism as my FreeBSD testing worked fine. I later discovered that it was because hot-adding an interface sometimes doesn’t add all the data we need to the kernel route table.
Quagga's ospfd has a single configuration file that is very Cisco-like in syntax. It is located at /etc/quagga/ospfd.conf:
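A sketch advertising both the real subnet and the loopback /32 (adjust the router-id and networks to your host):

    hostname server1
    password zebra
    !
    router ospf
     ospf router-id 10.0.0.1
     network 10.0.0.0/24 area 0.0.0.0
     network 172.16.0.1/32 area 0.0.0.0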
ospfd has a partner program called zebra that manages the kernel route table entries learned via OSPF. While we may not necessarily want these, zebra must be running and configured in order for this to work. Edit your /etc/quagga/zebra.conf:
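The minimum is just a hostname and password:

    hostname server1
    password zebra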
After you are all set, start zebra and ospfd (and make sure they are set to come up on boot).
I am using a virtualized Cisco 7200 router in my testing environment. You may have to modify your interfaces, but otherwise this configuration is generic across all IOS:
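A sketch along these lines (interface name and addressing per the topology above):

interface FastEthernet0/0
 ip address 10.0.0.254 255.255.255.0
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
!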
This enables OSPF on the 10.0.0.0/24 network and will start listening for LSAs. When it processes a new one, the route will be inserted into the routing table. After a while, you should see something like this:
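Abridged, illustrative output:

gateway# show ip route
...
O       172.16.0.1/32 [110/11] via 10.0.0.1, 00:02:01, FastEthernet0/0
                      [110/11] via 10.0.0.2, 00:02:01, FastEthernet0/0
                      [110/11] via 10.0.0.3, 00:02:01, FastEthernet0/0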
This is multipath routing: the host 172.16.0.1/32 has three equal-cost paths available to reach it.
To verify that my clients were actually getting load-balanced, I threw up an Apache webserver on each server with a static index.html file containing the ID of that server (server 1, server 2, server 3). This way I can curl the highly-available IP address and see which host I am being routed to. You should see your clients get distributed across the three servers.
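For example (output illustrative; since routers typically hash per source, a given client tends to stick to one backend):

client1$ curl http://172.16.0.1/
server 2
client2$ curl http://172.16.0.1/
server 3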
Now let's simulate a host being taken offline for maintenance. Rather than having to fail over a bunch of services, all you need to do is stop the ospfd process on that server. After the OSPF dead timer expires, the route will be removed and no more traffic will be sent its way. That host is now available for downtime without affecting any of the others; the router just redistributes the traffic across the remaining routes.
But let's say the host dies outright. There is no notification to the router that the backend host is no longer responding; the only thing you can do at this point is wait for the OSPF dead timer to expire. You can configure this timer to be shorter, thus making the “failover” more transparent.
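For example, in ospfd.conf on each server (the router must be set to matching intervals, since OSPF neighbors require the hello and dead timers to agree):

interface eth0
 ip ospf hello-interval 1
 ip ospf dead-interval 4
!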
Multipath routing is basically a poor man's load balancer. It has no health checks to see if the services are actually running and responding appropriately, and it does not behave very well with sessions or write-heavy workloads. However, for static web hosting or for caching nameservers (the use that sparked my interest in this technology), it works quite well.
]]>There are some things we need to set up prior to installing and configuring OpenLDAP.
You will need a working KDC somewhere in your domain. You also need to have the server configured as a Kerberos client (/etc/krb5.conf) and be able to kinit without issue.
There are two principals that you will need in your /etc/krb5.keytab: typically the machine's host principal and the LDAP service principal (e.g. host/ldap.example.com@EXAMPLE.COM and ldap/ldap.example.com@EXAMPLE.COM). If you do not already have this working, see my guide on how to set it up.
You need the following for our setup:
You will need SSL certificates matching the hostname you intend your LDAP server to listen on (ldap.example.com is different from server.example.com). Procure these from your PKI administrator. I place mine in the distribution's default directories.
The LDAP user needs to be able to read the private key and keytab you configured earlier. Ensure that it can.
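For example, assuming the key lives at /etc/pki/tls/private/ldap.example.com.key:

chgrp ldap /etc/krb5.keytab /etc/pki/tls/private/ldap.example.com.key
chmod g+r /etc/krb5.keytab /etc/pki/tls/private/ldap.example.com.key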
By default there are some extra files in the configuration directory. Since we are not using the cn=config method of configuring the server, we are simply going to remove the things we don’t want.
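That is, something like:

rm -rf /etc/openldap/slapd.d
rm -f /etc/openldap/ldap.conf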
We will be re-creating ldap.conf later on with the settings that we want.
Copy /usr/share/doc/krb5-server-ldap-1.9/kerberos.schema into your schema directory (default: /etc/openldap/schema). This contains all of the definitions for Kerberos-related attributes.
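For example:

cp /usr/share/doc/krb5-server-ldap-1.9/kerberos.schema /etc/openldap/schema/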
Decide where you want your data directory to be; by default this is /var/lib/ldap. Copy the standard DB_CONFIG file into that directory from /usr/share/openldap-servers/DB_CONFIG.example. This contains optimization information about the structure of the database.
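For example:

cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/DB_CONFIG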
Some systems require you to explicitly enable LDAPS listening. On Red Hat Enterprise Linux-based distributions, this is done by setting SLAPD_LDAPS to yes in /etc/sysconfig/ldap.
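That is:

SLAPD_LDAPS=yes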
Create a /etc/openldap/slapd.conf to configure your server. You can grab a copy of my basic configuration file here.
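A condensed sketch of the sort of file involved; the suffix, certificate paths, and identity mapping are assumptions to adapt:

include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema
include         /etc/openldap/schema/kerberos.schema

pidfile         /var/run/openldap/slapd.pid
argsfile        /var/run/openldap/slapd.args

TLSCACertificateFile    /etc/pki/tls/certs/ca.crt
TLSCertificateFile      /etc/pki/tls/certs/ldap.example.com.crt
TLSCertificateKeyFile   /etc/pki/tls/private/ldap.example.com.key

# map GSSAPI identities (user@EXAMPLE.COM) onto entries under ou=users
sasl-realm      EXAMPLE.COM
authz-regexp    uid=([^,]*),cn=example.com,cn=gssapi,cn=auth
                uid=$1,ou=users,dc=example,dc=com

database        bdb
suffix          "dc=example,dc=com"
rootdn          "cn=admin,dc=example,dc=com"
rootpw          {SSHA}generate-this-with-slappasswd
directory       /var/lib/ldap

index           objectClass eq
index           uid eq

access to *
        by * read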
To generate a password for your directory and hash it, you can use the slappasswd utility; see my guide on LDAP utilities for details. After that you should be all set. Start the slapd service, but don't query it yet.
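For example:

$ slappasswd
New password:
Re-enter new password:
{SSHA}XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
$ service slapd start
$ chkconfig slapd on

Paste the {SSHA} hash into the rootpw line of slapd.conf.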
The /etc/openldap/ldap.conf file holds the system-wide default LDAP connection settings, so you do not have to specify the host and protocol each time you run a command. Make it now:
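A minimal sketch, assuming the same CA certificate path as the slapd.conf sketch:

BASE        dc=example,dc=com
URI         ldaps://ldap.example.com
TLS_CACERT  /etc/pki/tls/certs/ca.crt
TLS_REQCERT demand
SASL_MECH   GSSAPI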
Right now you should have a running server, but with no data in it. I created a simple example setup file to get a basic tree set up. At this point you need to architect how you want your directory to be organized; you may choose to follow this layout or design your own.
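A condensed sketch of such a file, using the example.com placeholders from elsewhere in this guide:

dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
dc: example
o: Example Organization

dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users

dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups

dn: uid=user,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: user
cn: Example User
sn: User
uidNumber: 10000
gidNumber: 10000
homeDirectory: /home/user
loginShell: /bin/bash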
You can add the contents of this file to your directory using ldapadd.
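For example, assuming the file is saved as setup.ldif, using a simple bind as the rootdn:

ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f setup.ldif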
You should now be able to query your server and figure out who it thinks you are based on your Kerberos principal.
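An illustrative GSSAPI session:

$ kinit user
Password for user@EXAMPLE.COM:
$ ldapwhoami
SASL/GSSAPI authentication started
SASL username: user@EXAMPLE.COM
SASL SSF: 56
SASL data security layer installed.
dn:uid=user,ou=users,dc=example,dc=com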
Here you can see that it thinks that I am uid=user,ou=users,dc=example,dc=com given my principal of user@EXAMPLE.COM.
Now try searching:
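Abridged, illustrative output:

$ ldapsearch
SASL/GSSAPI authentication started
SASL username: user@EXAMPLE.COM
SASL SSF: 56
SASL data security layer installed.
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# example.com
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
dc: example
o: Example Organization

# users, example.com
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users

...

# search result
search: 4
result: 0 Success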
This was dedicated to Ross Delinger (rossdylan) because: “FIX THE DAMN IRC SERVER!” Like the original song, it then went a lot further than expected. What more might come of this?!
MP3 - Sung by Alex Howland (ducker)
Lyrics:
First, pick a directory to store your certificates in. I am going to use /etc/pki/example.com, since this is where all the other system SSL material is stored and, for you SELinux people, it will already have the right contexts. cd into that directory and create the Makefile:
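A sketch of the idea (recipe lines must begin with tabs; the original differs in detail):

default: issue

issue:
	mkdir -p $(DEST)
	openssl req -new -nodes -keyout $(DEST)/$(DEST).key -out $(DEST)/$(DEST).csr -config config/openssl.cnf
	openssl ca -config config/openssl.cnf -out $(DEST)/$(DEST).crt -infiles $(DEST)/$(DEST).csr
	chmod 600 $(DEST)/$(DEST).key

revoke:
	openssl ca -config config/openssl.cnf -revoke $(DEST)/$(DEST).crt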
This gives us two actions (targets): issue and revoke. Issue creates a new host certificate signed by your CA. Revoke expires an already-existing certificate; this is mainly used for renewal, since you need to revoke and then re-issue.
Next we will need a configuration directory and some basic files:
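For example:

mkdir config
touch config/index.txt
echo 01 > config/serial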
After that, cd into the config directory and create an openssl.cnf file. This is the OpenSSL configuration file used for certificate generation:
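A condensed sketch; adjust the dir definition and the organization defaults:

[ ca ]
default_ca      = CA_default

[ CA_default ]
dir             = /etc/pki/example.com
database        = $dir/config/index.txt
serial          = $dir/config/serial
new_certs_dir   = $dir
certificate     = $dir/config/ca.crt
private_key     = $dir/config/ca.key
default_days    = 365
default_md      = sha256
policy          = policy_match
x509_extensions = usr_cert

[ policy_match ]
countryName             = match
stateOrProvinceName     = match
organizationName        = match
commonName              = supplied

[ usr_cert ]
basicConstraints        = CA:FALSE

[ req ]
default_bits            = 2048
distinguished_name      = req_distinguished_name
x509_extensions         = v3_ca

[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_default             = US
stateOrProvinceName             = State or Province Name
stateOrProvinceName_default     = New York
organizationName                = Organization Name
organizationName_default        = Example Inc
commonName                      = Common Name (server hostname)

[ v3_ca ]
basicConstraints        = CA:TRUE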
You should ensure that the directory definition and organization information are changed to match your environment.
After this you are ready to generate your CA:
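Something like this, run from within the config directory (1825 days is five years):

openssl req -new -x509 -days 1825 -keyout ca.key -out ca.crt -config openssl.cnf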
This will generate a CA certificate, based on your settings file, that will last for five years, after which you will need to renew it.
Enter a certificate password when prompted. You do not need to specify a Common Name (CN) when asked later in the process. Ensure that the other field defaults are correct; they come from your openssl.cnf file and need to match in the certificates you issue.
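An abridged, illustrative session:

$ openssl req -new -x509 -days 1825 -keyout ca.key -out ca.crt -config openssl.cnf
Generating a 2048 bit RSA private key
............+++
writing new private key to 'ca.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
...
Country Name (2 letter code) [US]:
State or Province Name [New York]:
Organization Name [Example Inc]:
Common Name (server hostname) []: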
Unfortunately this process creates the private key file world-readable. This is extremely bad; fix it. Also ensure that the public key is readable by all; it is supposed to be.
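For example:

chmod 600 ca.key
chmod 644 ca.crt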
Now back up one directory (back to the working directory you started at). I usually make a symlink to the CA public key here so you can easily reference it.
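For example:

cd ..
ln -s config/ca.crt ca.crt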
You can now issue SSL certificates by typing make DEST=hostname, where hostname is the name of the server you are issuing to.
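An abridged, illustrative run:

$ make DEST=server.example.com
mkdir -p server.example.com
openssl req -new -nodes -keyout server.example.com/server.example.com.key -out server.example.com/server.example.com.csr -config config/openssl.cnf
...
Common Name (server hostname) []:server.example.com
openssl ca -config config/openssl.cnf -out server.example.com/server.example.com.crt -infiles server.example.com/server.example.com.csr
Using configuration from config/openssl.cnf
Enter pass phrase for /etc/pki/example.com/config/ca.key:
Check that the request matches the signature
Signature ok
...
Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated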
If you look in the directory it made for you (named after the hostname you specified), you will find the server's private key and signed certificate.
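For example:

$ ls server.example.com/
server.example.com.crt  server.example.com.csr  server.example.com.key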
You can revoke this certificate using the makefile as well:
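An illustrative session:

$ make revoke DEST=server.example.com
openssl ca -config config/openssl.cnf -revoke server.example.com/server.example.com.crt
Using configuration from config/openssl.cnf
Enter pass phrase for /etc/pki/example.com/config/ca.key:
Revoking Certificate 01.
Data Base Updated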
There you go, yet another useful service to have around for dealing with your network. Raw files used above are available in the Archive.