Grant Cohoe

Cloud plumber, bit wrangler, solution engineer, hockey nut.

Barcamp Tour 2013

Let me get something straight right off the bat: Barcamps have nothing to do with bars. Barcamps are technically-oriented community-organized unconferences where attendees share their ideas and projects with each other.

I had the privilege of attending three different Barcamp events over the span of three weekends this past fall. They were:

  • Barcamp Boston (BCBos)
  • Barcamp Rochester (BCRoc)
  • Barcamp Manchester (BCMan)

While each event shared the common Barcamp premise, they differed in many ways. I found a lot of good things and some not-so-good. So without further ado, here’s the rundown!

Barcamp Boston

Date: Oct 26-27 2013 (2 days)
Venue: MIT Stata Center, Cambridge MA
Attendees: ~450

This was my first Barcamp since graduation; previously I had attended only a handful of BCRoc events in college. I was quite impressed by the vast number of attendees and the wide array of skillsets they brought with them. The hosts had clearly done this before, because the event ran incredibly smoothly and was very organized. Talk selection was diverse, ranging from online dating to cool JavaScript libraries.

Talks Given

  • 802.11/WLAN Design Fundamentals (Basic overview of the 802.11 protocol standards and the “golden rule” of WLAN design)

Talks Heard

  • Trippy Math
  • Bitcoin Basics
  • Keeping Up With Technology (Discussion)
  • Sociology of OkCupid
  • Open WiFi
  • Linux Sysadmin AMA
  • Dating for Nerds
  • Lightning Round Talks/Demos
  • Big Data Intro
  • Home Repair for Geeks
  • Docker (Process Containers)
  • Analog/Digital Audio Synthesis
  • Future of Storytelling
  • Ray Tracing Concepts
  • Sex, Drugs, Rock-n-Roll and Technology

The Good

  • Longer-than-expected talks: All of my past Barcamp events had 20-minute talks. Here they were 40. This gave much more time for content and a full-length Q&A session, which was awesome.
  • Online schedule board: A Google Doc was kept up-to-date with the daily schedule which was great for tracking what talks were available and where to go next.
  • Talks Requested Board: People could post topics they wanted to hear about to see if someone could give a talk on it.
  • Talk Interest Board: If you had a talk but weren’t sure if there was any interest in it, you could post it on a board and people would mark it if they would attend. A really great way to gauge interest in potential topics.
  • Lots of writing space: Each room had plenty of space for presenters to add on-the-fly diagrams to their talks. I blame the academia of MIT for having them be chalkboards though. We aren’t in the 19th century anymore.
  • Name Badges: These were really cool. They had QR codes that linked to each attendee’s information (interests, Twitter, URL, etc).

The Not-so-Good

  • No organized lightning talks: These are quick 5-minute talks about anything imaginable, given in quick succession. They are great because you get just a brief taste of a topic, then move on to the next. I had assumed they were a staple of Barcamps everywhere. Guess not :/
  • No travel time: Talks ended every 40 minutes, and the next talk started immediately. You missed the last few minutes of one and/or were late to the next.

I really enjoyed this event. The longer talks meant more content per session. And all of that was spread across two days!

Barcamp Rochester

Date: Nov 2 2013
Venue: RIT GCCIS, Rochester NY
Attendees: ~100

After BCBos I emailed the hosts of BCRoc with ideas that I thought might be great to use; the Talk Request and Talk Interest boards were two examples that they implemented. I missed two of the morning sessions due to another event, but there were still plenty of talks to be had. A benefit of a consistent venue, core hosts, and biannual events is that the hosts can pull together an event relatively quickly.

Talks Given

  • Infrastructure Security Considerations (Things to avoid when designing networks)
  • Linux Sysadmin AMA (Literally anything. I won’t know a lot of it but I can point you in the right direction)
  • Babby’s 2nd Networking Seminar for CSHers (Layer 4+ and DNS)

Talks Heard

  • Microsoft Cloud Overview
  • Why We Need to Disconnect More
  • Lessons Learned from ISTS(x)
  • 802.11ac WiFi
  • Sales Pitch for Vim

The Good

  • Later Start Time: Great for those who like sleep. Or a few extra hours the night before to build your talks.
  • Good Travel Time: 5 minutes between talks for biological functions, mingling, and getting to the next talk.
  • Sponsor Letters: These were a cool new thing for thanking the sponsors.

The Not-so-Good

  • No shirts: There was a legitimate excuse, but I’m still sad.
  • Lightning Talk Confusion: Rather than being after lunch, these were moved to later in the afternoon. What was not apparent was that they were scheduled for two blocks rather than one. Many people (including me) left to go to other talks. I would much rather have stayed for the full Lightning Talks session.
  • No name badges: The stickers work, but you can’t beat pre-printed, lanyard-affixed name badges.

All in all this was yet another successful BCRoc. There were a lot of new faces (both as hosts and attendees) and plenty of interesting talks to choose from.

Barcamp Manchester

Date: Nov 9 2013
Venue: Dyn Headquarters, Manchester NH
Attendees: ~50

While I was really looking forward to a weekend off, who can resist just one more event? This one was a bit more of a mystery to me, since information was somewhat scarce. I arrived a bit late due to GPS errors and trouble finding parking (I won’t go into the speed bumps of death along the way).

Talks Given

  • Lightning Talks/Project Show-n-Tell (5 minutes to talk about anything or to show off your cool pet project)
  • Linux AMA for Beginners or Hackers (Let’s talk about all things Linux)

Talks Heard

  • FPGA/Arduino/Raspberry Pi
  • Vagrant Plugin Ecosystem
  • Intro to Object Oriented Javascript

The Good

  • Unique Setting: Being held at a startup’s corporate office was kinda cool.
  • Lunchtime “build-it” Competition: Who can build the furthest-traveling paper airplane? I really liked this. There were many interesting designs and lots of software engineering jokes.

The Not-so-Good

  • Presentation Rooms: These were small team conference rooms and really could not accommodate more than 8 people. Not everyone could see the wall-mounted display, and there was no writing surface for quick diagrams.
  • Scheduling: Talks were 40 minutes or an hour depending on the session. There was also a 20-minute gap between talks, plenty of time to lose interest.
  • Lack of information: The website does not offer much information beyond what a Barcamp is, the sponsors, and the address of the venue. “Panels” is also not a very good label for the online talk system, which I didn’t discover until after the event.
  • Entry: You had to call someone to let you in, and then the guy who entered after us had to sit and wait to let the next person in. Not sure what was up with that.
  • No shirt: Sadface

Overall I was not very impressed with this event. Not sure if it was just Barcamp burnout or what. I was surprised that, for a somewhat science/technology hub of New Hampshire, there weren’t more attendees; this showed in that there were only ~20 talks total. The venue, while unique, did not feel very suitable for a conference like this. That said, I did walk away having learned a few new things and met a few new people.

Closing

That’s it for Barcamps for a while. 3 events in 3 weekends in 3 different states, giving 3 different talks, with at least 3 CSHers at each event. Yikes. TEDxBeaconStreet is right around the corner, followed by holidays and hockey.

As always, feedback and questions are always welcome. Shoot me an email or a tweet.

Cloudbell: Cloud-based Doorbell Service

I recently moved into a condo building with one of my college friends. The building has an intercom that allows residents to “buzz” visitors in. Usually this is done with a simple panel on the wall in the apartment. However, this one is slightly more advanced: the administrator programs in a phone number to dial when a visitor rings the apartment. Whoever picks up then just has to press a button (9), and the intercom unlocks the door and ends the call. This is all well and good if you only have one phone to pick up from. However, this is the 21st century. We don’t have landlines or shared phones; we all have our own cell phones or other voice provider services. What if I want the intercom to ring multiple phones? “Nope. Can’t do,” said the administrator. I don’t believe in “Nope. Can’t do,” so I set to work researching solutions.

Google Voice

I love Google Voice. I’ve used it as my primary phone provider since 2008 (back when it was super-invite-beta). One of GV’s advertised features was ringing multiple phones at once, which would be perfect for what I wanted to do. Unfortunately, GV only lets a mobile phone be associated with one account, so I would have to move my mobile number off my primary account and onto the shared one. My roommate would have to do the same. This was not an ideal solution.

Twilio

This is a newer service that one of my friends had played with before. I looked into its feature set and quickly realized that it would do not only what I wanted, but some “would-be-nice” features as well! Twilio does cost money, but not a lot of it: $1/mo for a phone number, $0.01/min for incoming calls, and $0.02/min for outgoing calls. Plenty reasonable for this project.

System Overview

Core Functionality

Twilio opened up interesting new features that add to the coolness of this project.

  • Call routing to multiple pickup stations (users)
  • Menu-based access code entry to allow pre-authorized users to enter a code and automatically open the door.

Hosting

Twilio itself runs on their own cloud infrastructure. I use one of my Amazon S3 buckets to host the static XML files that drive the logic and flow of the project. However, call menus require a dynamic web server (in this case PHP). Twilio operates some of these for free; they’re known as Twimlets, simple little Twilio apps that provide a wide variety of call functionality.

Components

There are three major components to the Cloudbell system:

  • Intercom - The unit in the entry of the building that can place outgoing calls to people, who can then hit a key and unlock the door.
  • Twilio Engine - The call management system that handles call flow from the Intercom to the Pickup Stations.
  • Pickup Stations - People’s phones (mine and my roommate’s) that answer calls and hit keys.

Call Flow

A visitor walks up to the intercom and enters the apartment number (111#). The intercom has been programmed with my Twilio phone number (222-333-4444). The intercom dials the phone number.

Twilio is configured with a URL to GET when an incoming call is received. That URL is an XML file like this:

<Response>
  <Gather numDigits="4" action="http://twimlets.com/menu?Message=Enter%20access%20code%20or%20press%201%20to%20be%20connected.&Options%5B1%5D=connect.xml&Options%5B2222%5D=letthemin.xml&">
      <Say>Enter access code or press 1 to be connected.</Say>
  </Gather>
  <Redirect/>
</Response>

Generated at https://www.twilio.com/labs/twimlets/menu

This code prompts the caller (<Say>) to enter 4 digits (numDigits="4") and POSTs the entered values to the URL in the action attribute. Let’s look at the decoded URL:

http://twimlets.com/menu
$Message="Enter access code or press 1 to be connected."
$Options[1]=connect.xml
$Options[2222]=letthemin.xml

Here we can see that when the caller presses 1, the application will fetch another bit of XML located at connect.xml. Likewise, when the caller enters 2222, a similar bit of XML will be fetched from letthemin.xml. These are usually fully-qualified URLs.

Let’s look at what happens when they hit 1.

<Response>
  <Say>Connecting</Say>
  <Dial action="http://twimlets.com/simulring?Dial=true&FailUrl=fail.xml&" timeout="10">
      <Number url="http://twimlets.com/whisper?Message=Someone+is+at+the+door.+Press+any+key+to+answer+and+9+to+let+them+in">111-111-1111</Number>
      <Number url="http://twimlets.com/whisper?Message=Someone+is+at+the+door.+Press+any+key+to+answer+and+9+to+let+them+in">222-222-2222</Number>
  </Dial>
</Response>

Here the system announces “Connecting” (<Say>) and dials two numbers (111-111-1111 and 222-222-2222). Whoever picks up first gets the call, and a message is played: “Someone is at the door. Press any key to answer and 9 to let them in.” Simulring wants the receiver to accept the call before it is connected (hence the “press any key” part). 9 is the key that tells the intercom to unlock the door.

The <Dial> action has also been configured with a failure URL (FailUrl), which is a bit of code to call when no one picks up. This occurs after 10 seconds (timeout="10").

fail.xml is pretty simple:

<Response>
  <Say>No one is available at this time. Please try again later.</Say>
</Response>

After this the call is terminated.

Now let’s go back and look at what happens when the user enters the correct access code (2222). In this case a new file is fetched from letthemin.xml.

<Response>
  <Say>Access granted</Say>
  <Play>access.wav</Play>
</Response>

This code says “Access granted”, then plays a pre-generated audio file. In this case it is the DTMF tone for 9. I generated my file at DialABC.

By playing the tone, the intercom unlocks the door and terminates the call.
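
If you’d rather generate the audio yourself than use a web tool, sox can synthesize it. A sketch (assuming sox is installed; DTMF “9” is the mix of an 852 Hz and a 1477 Hz tone):

# Half a second of DTMF 9: two sine waves mixed down to one channel
sox -n access.wav synth 0.5 sine 852 sine 1477 remix -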

Conclusion

This system has worked pretty well thus far. I can give my friends the access code and they can let themselves into the building without me or my roommate doing any work. Hosting it completely in the cloud also lends the system a certain degree of reliability.

STARRS Installation Guide

Introduction

Purpose

The purpose of this guide is to help an IT administrator install and configure the STARRS network asset tracking application for a basic site.

Audience

The intended audience of this guide is IT administrators with Linux experience (specifically Red Hat) and a willingness to use open-source software.

Commitment

The install process should take around an hour, provided appropriate resources are available.

Description

Definition

STARRS is a self-service network resource tracking and registration web application used to monitor resource utilization (such as IP addresses, DNS records, computers, etc). For more details, view the project description here.

Physical

STARRS requires a database host and web server to operate. It is recommended to use a single virtual appliance for all STARRS functionality.

Process

  • The virtual appliance is provisioned (out of scope of this guide)
  • Core software is installed
  • Dependent software packages are downloaded and installed
  • STARRS is acquired, configured, and installed
  • The web server is configured to allow access

Installation

Virtual Appliance

STARRS was tested only on RHEL-based Linux distributions. Anything RHEL6.0+ is compatible. There are no major requirements for the virtual appliance and a minimal software install will suffice. In this example we will be using a Scientific Linux 6.4 virtual machine.

Connectivity

Ensure that you are able to log into your remote system as any administrative user (in this example, root) and have internet access.

[root@starrs-test ~]# cat /etc/redhat-release
Scientific Linux release 6.4 (Carbon)

System Security

Firewall

Firewalls can get in the way of allowing web access to the server. Only perform these steps if you have a system firewall installed and intend to use it. In this example we will use the RHEL default, iptables.

  1. Add a rule to the system firewall to allow HTTP and HTTPS traffic to the server.
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
  2. Save the firewall configuration (no restart required)
service iptables save

NOTE: If you have IPv6 enabled on your system, make sure to apply firewall rules to the IPv6 firewall as well.
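
For example, a sketch of the equivalent rules using the stock ip6tables service:

ip6tables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
ip6tables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
service ip6tables save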

SELinux

Any RHEL administrator has dealt with SELinux at some point in their career. There are system-wide settings that allow/deny actions by programs running on the server. Disabling SELinux is not a solution.

  1. Allow Apache/httpd to connect to database engines
setsebool -P httpd_can_network_connect=on
setsebool -P httpd_can_network_connect_db=on

Core Software

STARRS depends heavily on the PostgreSQL database engine. PgSQL must be at version 9.0 or higher, which is NOT available from the standard RHELish repositories. PgSQL will also require the PL/Perl and PL/Python support packages (also NOT located in the standard repos). You will need to add new software repositories to your appliance.

Utilities

These programs will be needed at some point or another in the installation process.

  1. Install the following packages through yum.
yum install cpan wget git make perl-YAML -y
  2. We will also need the development tools group of packages installed.
yum groupinstall "Development tools" -y

PostgreSQL Database Engine

  1. On your own computer, open up yum.postgresql.org and click on the latest available PostgreSQL Release link. In this case we will be using 9.2.

  2. Locate the appropriate repository link for your operating system (Fedora, CentOS, SL, etc) and architecture (i386, x86_64, etc). In this case we will be using Scientific Linux 6 - i386.

  3. Download the package from the link you located onto the virtual appliance into any convenient directory.

wget http://yum.postgresql.org/9.2/redhat/rhel-6-i386/pgdg-sl92-9.2-8.noarch.rpm
  4. Install the downloaded PostgreSQL repository package file with yum. Answer yes if asked for verification.
yum install pgdg-sl92-9.2-8.noarch.rpm
  5. You need to ensure that the base PostgreSQL packages are hidden from future package searches and updates. Add an exclude line to your base OS repository file located in /etc/yum.repos.d/. This will prevent any PgSQL packages from being pulled from the base repositories.
# This example uses the sl.repo file. This will depend on your variant of OS (CentOS-base.repo for CentOS).
[sl]
name=Scientific Linux $releasever - $basearch
...
exclude=postgresql*

[sl-security]
  6. Install the required PgSQL packages using the yum package manager.
yum install postgresql92 postgresql92-plperl postgresql92-server
  7. Initialize the PgSQL database server
[root@starrs-test ~]# service postgresql-9.2 initdb
Initializing database:                                     [  OK  ]
[root@starrs-test ~]#
  8. Make PgSQL start on system boot
chkconfig postgresql-9.2 on
  9. Start the PgSQL service
[root@starrs-test ~]# service postgresql-9.2 start
Starting postgresql-9.2 service:                           [  OK  ]
[root@starrs-test ~]#
  10. su to the postgres account and assign a password to the postgres user account using the query below. Exit back to root when done. This password should be kept secure!
[root@starrs-test ~]# su postgres -
bash-4.1$ psql
could not change directory to "/root"
psql (9.2.4)
Type "help" for help.

postgres=# ALTER USER postgres WITH PASSWORD 'supersecurepasswordhere';
ALTER ROLE
postgres=# \q
bash-4.1$ exit
exit
[root@starrs-test ~]#

The STARRS installer requires passwordless access to the postgres account. This is achievable by creating a .pgpass file in the root home directory.

  11. Create a file at ~/.pgpass like so:
localhost:5432:*:postgres:supersecurepasswordhere
localhost:5432:*:starrs_admin:adminpass
localhost:5432:*:starrs_client:clientpass

You can replace the passwords with whatever you want. Note the passwords for later.

  12. This file should be readable only by the user that created it. Change permissions accordingly.
chmod 600 .pgpass
  13. You now need to allow users to log into the database server from the server itself. Open /var/lib/pgsql/9.2/data/pg_hba.conf and change all methods to md5 like so:
# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5

This will enable login from localhost only. If you need remote access, allow only the specific IP addresses or subnets that you need. Security is good.
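
For example, a hypothetical extra pg_hba.conf line permitting one trusted subnet (substitute your own network and mask):

# IPv4 remote connections from a trusted subnet only:
host    all             all             192.168.1.0/24          md5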

  14. Reload the postgres service to bring in the changes
service postgresql-9.2 reload
  15. Create admin and client accounts for STARRS.
[root@starrs-test ~]# psql -h localhost -U postgres
psql (9.2.4)
Type "help" for help.

postgres=# create user starrs_admin with password 'adminpass';
CREATE ROLE
postgres=# create user starrs_client with password 'clientpass';
CREATE ROLE
postgres=# \q
[root@starrs-test ~]#
  16. Verify that you can log into the database server without being prompted for a password.
[root@starrs-test ~]# psql -h localhost -U postgres
psql (9.2.4)
Type "help" for help.

postgres=# \q

[root@starrs-test ~]#

If you get prompted for a password, STOP! You need this working in order to proceed. Make sure you typed everything in the file correctly and that its permissions are set.

Dependencies

STARRS has many other software dependencies in order to function. These are mostly Perl modules that extend the capabilities of the language. The modules are listed below (CPAN name and yum package name are given, though not all of them may be available as packages):

  • Net::IP (perl-Net-IP)
  • Net::LDAP (perl-LDAP)
  • Net::DNS (perl-Net-DNS)
  • Net::SNMP (perl-Net-SNMP)
  • Net::SMTP (perl-Mail-Sender)
  • Crypt::DES (perl-Crypt-DES)
  • VMware::vCloud
  • Data::Validate::Domain

NOTE: The first time you run CPAN you will be asked some basic setup questions. Accepting the defaults is fine for most installations.

  1. Install each of these modules. Some of them are available as packages in yum.
yum install perl-Net-IP perl-LDAP perl-Net-DNS perl-Mail-Sender perl-Net-SNMP perl-Crypt-DES -y
cpan -i Data::Validate::Domain
cpan -i VMware::vCloud

Download/Configure STARRS

STARRS comes in two parts: the backend (database) and the web interface. Each one is stored in its own repository on GitHub. You will need to download both in order to use the application. Right now we will focus on the backend.

  1. You will need a directory to store the downloaded repos in. I recommend using /opt. Clone (download) the current versions of the repositories using Git into that directory.
[root@starrs-test ~]# cd /opt/
[root@starrs-test opt]# git clone https://github.com/cohoe/starrs -q
[root@starrs-test opt]# ls
starrs
[root@starrs-test opt]#
  2. Open the installer file at /opt/starrs/Setup/Installer.pl. You will need to edit the values in the Settings section of the file to match your specific installation. Example:
# Settings
my $dbsuperuser = "postgres";
my $dbadminuser = "starrs_admin";
my $dbadminpass = "adminpass";
my $dbclientuser = "starrs_client";
my $dbclientpass = "clientpass";
my $sample = undef;

my $dbhost = "localhost";
my $dbport = 5432;
my $dbname = "starrs";

dbsuperuser is the root postgres account (usually just ‘postgres’). dbadminuser is the STARRS admin user you created above, and dbclientuser is the STARRS client user you created above. sample controls whether sample data is populated into the database; set it to anything other than undef to enable sample content.

  3. Run the install script
perl /opt/starrs/Setup/Installer.pl

A lot of text will flash across the screen. This is expected. If you see errors, then something is wrong and you should revisit the setup instructions.

You can verify that STARRS is functioning by running some simple queries.

[root@starrs-test starrs]# psql -h localhost -U postgres starrs
psql (9.2.4)
Type "help" for help.

starrs=# SELECT api.initialize('root');
NOTICE:  table "user_privileges" does not exist, skipping
CONTEXT:  SQL statement "DROP TABLE IF EXISTS "user_privileges""
PL/pgSQL function api.initialize(text) line 19 at SQL statement
    initialize
------------------
 Greetings admin!
(1 row)

starrs=# SELECT * FROM api.get_systems(NULL);
 system_name | owner | group | comment | date_created | date_modified | type | os_name | last_modifier | platform_name | asset | datacenter | location
-------------+-------+-------+---------+--------------+---------------+------+---------+---------------+---------------+-------+------------+----------
(0 rows)

starrs=# \q
[root@starrs-test starrs]#

If you see “Greetings admin!” and you can perform the queries without error, then your backend is all set up and is ready for the web interface.

Apache2/httpd

The STARRS web interface requires a web server to be installed. Only Apache has been tested. If you have a new system then you might not have Apache (or in RHELish, httpd) installed.

  1. If you do not have Apache/httpd installed, install it.
yum install httpd php php-pgsql -y
  2. Navigate to the /etc/httpd/conf.d directory.

  3. Remove the welcome.conf that exists there.

cd /etc/httpd/conf.d/
rm -rf welcome.conf
  4. Enable httpd to start on boot
chkconfig httpd on
  5. Start the httpd service. (Warning messages are OK; as long as the service starts, you should be fine.)
[root@starrs-test ~]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for starrs-test.grantcohoe.com
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
                                                           [  OK  ]
[root@starrs-test ~]#

PHP Configuration

A minor change to the system PHP configuration allows the use of short tags for cleaner code. This feature must be enabled in the /etc/php.ini file.

  1. In the php.ini file, set short_open_tag = on
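
You can edit the file by hand, or script the change; a quick sketch with sed (assuming the stock php.ini, where the setting defaults to Off):

# Flip short_open_tag from Off to On in place
sed -i 's/^short_open_tag = Off/short_open_tag = On/' /etc/php.ini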

Web Interface

You will be cloning the starrs-web repository into the Apache web root directory to simplify deployment.

  1. Change directory to /var/www/html/ and clone the repository. Note the . at the end of the clone command.
[root@starrs-test ~]# cd /var/www/html/
[root@starrs-test html]# git clone https://github.com/cohoe/starrs-web -q .
  2. Copy the application/config/database.php.example to application/config/database.php

  3. Edit the copied/renamed database file and enter your database connection settings. Example:

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'starrs_admin';
$db['default']['password'] = 'adminpass';
$db['default']['database'] = 'starrs';
  4. Copy the application/config/impulse.php.example to application/config/impulse.php

  5. For this web environment, the defaults in this file are fine. This file also lets you change which environment variable the current user is read from.

  6. Apache needs to be given some special instructions to serve up the application correctly. Create a file at /etc/httpd/conf.d/starrs.conf with the following contents:

<Directory "/var/www/html">
        AllowOverride all
        AuthType basic
        AuthName "STARRS Sample Auth (root:admin)"
        AuthBasicProvider file
        AuthUserFile    "/etc/httpd/conf.d/starrs-auth.db"
        require valid-user
</Directory>
  7. Since we reference an authentication database, you will need to create this file (starrs-auth.db).
htpasswd -b -c /etc/httpd/conf.d/starrs-auth.db root admin
  8. Restart Apache to apply the changes. (A reload is not sufficient.)
service httpd restart

Testing

At this point you should have a fully functioning STARRS installation. Navigate to your server in a web browser and you should be prompted for login credentials. As we established in the authentication database file, the username is root and the password is admin. If you get the STARRS main page, then success! Otherwise start looking through log files to figure out what is wrong.

Detailed troubleshooting is out of the scope of this guide. System administrator cleverness is a rare skill, but it is the most useful thing when trying to figure out what happened. Shoot me an email if you really feel something is wrong. I have included here a few common errors that I have run into when I don’t follow my own instructions.

PHP DB Error

If you see this rendered in the browser:

pg_last_error() expects parameter 1 to be resource, boolean given

It probably means your DB credentials in application/config/database.php are not correct.

Blank Page

If you see a blank page in the browser and something like this in the log file:

syntax error, unexpected end of file in /var/www/html/starrs-web/application/views/core/navbar.php

It probably means that PHP short_open_tag is not On in /etc/php.ini.

Backups for Photo Majors

Data backups are something every single person should do. This presentation is aimed at non-technical users who want a little bit of information and some suggestions on what to do.

This presentation was never formally given, but it was used for a Public Speaking class.

Slides on Google Drive

CSH Wireless Network Deployment

We wanted to deploy our own wireless network across the floor. I decided to make the project happen.

Requirements

Spectrum Use

We can only use three channels, all in the 5 GHz spectrum. RIT has a massive unified wireless deployment across all of campus, and policy states that third parties (students, staff, etc.) cannot operate wireless access points within range of the RIT network, to prevent signal overlap and interference. This is not an issue in the 5 GHz spectrum, where there are many more non-overlapping channels. It does, however, limit potential clients to those with dual-band wireless radios.

Authentication

No client-side software can be required. This is a requirement we set on ourselves to allow ease of access for users. Most operating systems support PEAP/MsCHAPv2 (via RADIUS) out of the box and require no extra configuration to make them work. Unfortunately, this requires a bit more system infrastructure on our side. Our authentication backend (MIT Kerberos) does not support this type of user authentication, so we need to do some hax to make it work.

Speed

We need to be faster than RIT; there is absolutely no reason to use CSH wireless over RIT’s unless we are. By using channel bonding in two key areas (see the sketch below), we can achieve this requirement and double the speed of RIT wireless. We also need fast reassociation between access points so that users can walk up and down the floor without losing connectivity.
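
For reference, 40 MHz channel bonding on the autonomous IOS APs that support it (presumably the 802.11n-capable 1142 and 1252 below) is a one-line radio setting. This is a sketch; the interface name and bonding direction are assumptions for your hardware and channel plan:

interface Dot11Radio1
 ! Bond the configured channel with the channel above it into a 40 MHz channel
 channel width 40-above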

Setup

Access Points

First we had to figure out where and how to deploy our range of APs. Since we have relatively few resources, we used a combination of hardware we had lying around, purchased, or received as donations:

  • 3x Cisco 1230
  • 1x Cisco 1142
  • 1x Cisco 1131
  • 1x Cisco 1252

Looking at the map of the floor, I wanted to place the channel-bonded APs in the areas with the largest concentration of users (the Lounge and the User Center). The others would be dispersed across the other public rooms on floor.

[[ IMAGE HERE ]]

Wireless Domain Services

Cisco WDS is essentially a poor man’s controller. WDS allows you to authenticate once across your wireless domain and do relatively seamless handoffs between access points. I had an extra 1230 lying around without antennas, so I parked it in my server room and configured it to act as the WDS master. When a client attempts to authenticate to an AP, the auth data is sent to the WDS server, which processes it and sends the AP a response to let the client in or not. If the client roams to another AP, the WDS server promises the new AP that the client is OK and the authentication phase is skipped.

This is the only device that will ever talk to the RADIUS server, so all of the configuration for that is only needed once.

NOTE: In this example the WDS server is at IP address 192.168.0.250 and the RADIUS server is at 192.168.0.100.

aaa new-model
!
!
aaa group server radius rad_local
 server 192.168.0.250 auth-port 1812 acct-port 1813
!
aaa group server radius rad_eap
 server 192.168.0.100 auth-port 1812 acct-port 1813
!
aaa authentication login eap_local group rad_local
aaa authentication login eap_methods group rad_eap
!
radius-server local
  no authentication mac
  nas 192.168.0.250 key 7 SUPERSECRETKEYHERE 
  user wds-authman nthash 7 AUTHMANPASS 
!
radius-server host 192.168.0.250 auth-port 1812 acct-port 1813 key 7 SUPERSECRETRADIUSKEYHERE 
radius-server host 192.168.0.100 auth-port 1812 acct-port 1813 key 7 SUPERSECRETRADIUSOTHERKEYHERE 
!
!
wlccp ap username wds-authman password 7 AUTHMANPASS 
wlccp authentication-server infrastructure eap_local
wlccp authentication-server client eap eap_methods
  ssid prettyflyforawifi
wlccp authentication-server client leap eap_local
  ssid prettyflyforawifi
wlccp wds mode wds-only
wlccp wds priority 255 interface BVI1

The access points need to use the AP username defined above to talk to the WDS server. RADIUS configuration here should be optional.

aaa new-model
!
aaa group server radius rad_eap
 server 192.168.0.100 auth-port 1812 acct-port 1813
!
aaa authentication login default local
aaa authentication login eap_methods group rad_eap
dot11 ssid prettyflyforawifi 
   authentication open eap eap_methods
   authentication network-eap eap_methods
   authentication key-management wpa
   guest-mode
!
radius-server host 192.168.0.100 auth-port 1812 acct-port 1813 key 7 SUPERSECRETRADIUSKEY 
!
wlccp ap username wds-authman password 7 AUTHMANPASS 
wlccp ap wds ip address 192.168.0.250

Cross-Platform Authentication Services

You are a nerd. You like doing nerd-y things. You run an entirely *nix-based network. But like all nerds, you have that one pesky Windows machine that you want to have available for your users. Or you just want the same SSO password from your Kerberos KDC. Regardless, you want Windows to talk to MIT Kerberos. Believe it or not, this is SUPPORTED!

Process Overview


We start with the user entering their credentials in the standard Windows logon screen (which I dearly miss from Windows 2003). From there, the client is configured with its default domain (realm, in this case) set to the MIT Kerberos realm that your machine is a member of. This is the equivalent of an Active Directory domain.

From here, your workstation acts just like any other Kerberized host. It uses its host principal (derived from what it thinks its hostname is) and the configured password to authenticate to the KDC. Once it has verified its identity, it then goes to authenticate you. And if your password is correct, Windows lets you in!

The question is: who does it let you in as? A Kerberos principal is nothing more than a name. It is not an account, or any object that actually contains information for the system. You must map Kerberos principals to accounts. These accounts are created in Windows as either local users or via Active Directory (a whole other can of worms). Usually, you will want to create a one-to-one mapping between principal and account (grant@GRANTCOHOE.COM principal = Windows account “grant”).
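
For example, a hypothetical one-to-one mapping of my principal to a local account named grant would be:

ksetup /mapuser grant@GRANTCOHOE.COM grant

The catch-all mapping used in the setup below (/mapuser * *) simply maps every principal to the local account of the same name.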

KDC Setup

On your KDC, open up kadmin with your administrative principal and create a host principal for the new machine. It needs to have a password that you know, so use your favorite password generating source.

miranda ~ # kadmin -p cohoe/admin
Authenticating as principal cohoe/admin with password.
Password for cohoe/admin@GRANTCOHOE.COM:
kadmin:  ank host/caprica.grantcohoe.com@GRANTCOHOE.COM
WARNING: no policy specified for host/caprica.grantcohoe.com@GRANTCOHOE.COM; defaulting to no policy
Enter password for principal "host/caprica.grantcohoe.com@GRANTCOHOE.COM":
Re-enter password for principal "host/caprica.grantcohoe.com@GRANTCOHOE.COM":
Principal "host/caprica.grantcohoe.com@GRANTCOHOE.COM" created.
kadmin:  quit
miranda ~ #

That’s it for the KDC. On to the Windows machine!

Windows 7 Client

Windows 7 (as well as Server 2008, I believe) includes the basic utilities required to support MIT Kerberos. These were available for XP and Server 2003 as the “Microsoft Support Tools”. The utility we will be using is called ksetup, which allows configuration of foreign-realm authentication systems.

So to set your machine to authenticate to your already existing MIT Kerberos KDC, open up a command prompt and do the following:

ksetup /setdomain GRANTCOHOE.COM
ksetup /addkdc GRANTCOHOE.COM kerberos.grantcohoe.com
ksetup /setmachpassword passwordfromearlier
ksetup /mapuser * *

After that, you need to set a Group Policy setting to automatically pick the right domain to log into. Open the Group Policy Editor (gpedit.msc) and navigate to Local Computer Policy->Computer Configuration->Administrative Templates->System->Logon->Assign a default domain for logon. Set this to your realm (GRANTCOHOE.COM in my case).

Do a nice reboot of your system, and you should be ready to go!

Active Directory

Active Directory can be configured to trust a foreign Kerberos realm. It will NOT synchronize information with anything, but if you just need a bunch of users to log into something and no data with it, the process is not too terrible. Rather than duplicate the information, you can find the guide that I used here: Microsoft trusting MIT Kerberos

Stanford Webauth on Enterprise Linux

Background

Let’s say you are in a domain and you wish to access several different web services. How often do you find you have to enter the same username and password over and over again to get to each website? Wouldn’t it be nice if you could just enter them once and automagically have access to all the web services you need? Well guess what? You can!

It all works off of MIT Kerberos. Kerberos is a mechanism that centralizes your authentication to one service and allows for Single Sign-On (SSO). The concept of SSO is fairly simple: enter your credentials once and that’s it. It saves you from typing them over and over again and protects against certain attacks.

WebAuth was developed at Stanford University and brings “kerberization” to web services. All you need is some modules, a KDC, and a webserver.

In this guide, we will be using the Kerberos realm/domain name EXAMPLE.COM, along with two servers and a DNS alias:

  • gmcsrvx2.example.com: The Webauth WebKDC
  • gmcsrvx3.example.com: A secondary web server
  • webauth.example.com: A DNS CNAME to gmcsrvx2.example.com

Note that you will need a pre-existing Kerberos KDC in your network.

There are three components of the WebAuth system. WebAuth typically refers to the component that provides authorized access to certain content. The WebKDC is the process that handles communication with the Kerberos KDC and distributes tickets to the client machines. The WebLogin pages are the login/logoff/password-change pages presented to the user to enter their credentials. Certain binary packages provided by Stanford do not contain all components of the WebAuth system, which is why we are going to build it from source.

Server Setup

Packages

Each machine you plan to install WebAuth on needs the following packages:

  • httpd-devel
  • krb5-devel
  • curl-devel
  • mod_ssl
  • mod_fcgid
  • perl-FCGI
  • perl-Template-Toolkit
  • perl-Crypt-SSLeay
  • cpan

Enterprise Linux does not include several Perl modules required for this to work. You are going to need to install them via CPAN (a one-liner follows the list). If you have never run CPAN before, do it once just to get the preliminary setup working.

  • CGI::Application::Plugin::TT
  • CGI::Application::Plugin::AutoRunmode
  • CGI::Application::Plugin::Forward
  • CGI::Application::Plugin::Redirect
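
A sketch of pulling in all four at once (assuming CPAN is already configured):

cpan -i CGI::Application::Plugin::TT CGI::Application::Plugin::AutoRunmode CGI::Application::Plugin::Forward CGI::Application::Plugin::Redirect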

You also need remctl, a client/server protocol for executing commands remotely with Kerberos authentication. For EL6 we are using remctl 2.11, available from the Fedora 15 repositories.

wget http://mirror.rit.edu/fedora/linux/releases/15/Everything/i386/os/Packages/remctl-2.11-12.fc15.i686.rpm
wget http://mirror.rit.edu/fedora/linux/releases/15/Everything/i386/os/Packages/remctl-devel-2.11-12.fc15.i686.rpm
yum localinstall remctl*

Firewall

Open up ports 80 (HTTP) and 443 (HTTPS) since we will be doing web stuff.

NTP

Since you will be doing Kerberos stuff, you need a synchronized clock. If you do not know how to do this, see my guide on Setting up NTP.

Kerberos

Your machine needs to be a Kerberos client (meaning valid DNS, a krb5.conf, and a host principal in its /etc/krb5.keytab). We will be using three keytabs for this application (a quick sanity check follows the list):

  • /etc/krb5.keytab
      • host/gmcsrvx2.example.com@EXAMPLE.COM
  • /etc/webauth/webauth.keytab
      • webauth/gmcsrvx2.example.com@EXAMPLE.COM
  • /etc/webkdc/webkdc.keytab
      • service/webkdc@EXAMPLE.COM

NOTE: If you wrote these in your home directory and cp’d them into their respective paths, check your SELinux contexts.
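
A quick way to sanity-check each keytab is klist -k, which lists the principals a keytab contains:

klist -k /etc/krb5.keytab
klist -k /etc/webauth/webauth.keytab
klist -k /etc/webkdc/webkdc.keytab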

SSL Certificates

You need an SSL certificate matching the hostname of each server (gmcsrvx2.example.com, gmcsrvx3.example.com, webauth.example.com) available for your Apache configuration.

IMPORTANT: You also need to ensure that your CA is trusted by curl (i.e., by the system). If in doubt, cat your CA cert into /etc/pki/tls/certs/ca-bundle.crt. You will get really odd errors if you do not do this.
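
For example, a sketch using the CA path that appears later in this guide:

cat /etc/pki/example.com/example-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt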

Shared Libraries

We will be installing WebAuth into /usr/local, and its libraries need to be linked into the system library path.

echo '/usr/local/lib' >> /etc/ld.so.conf.d/locallib.conf

We will need to run ldconfig to load in the WebAuth libraries later on.

Compile and Install WebAuth

The latest version at the time of this writing is webauth-4.1.1, available from Stanford University. Grab the source tarball, since the binary packages do not include features that you will need in a brand-new installation.

[root@gmcsrvx2 ~]# tar xfz webauth-4.1.1.tar.gz
[root@gmcsrvx2 ~]# cd webauth-4.1.1
[root@gmcsrvx2 webauth-4.1.1]# ./configure --enable-webkdc
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
....
....
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing include/webauth/defines.h commands
[root@gmcsrvx2 webauth-4.1.1]# 

Assuming everything is all good, go ahead and build it.

[root@gmcsrvx2 webauth-4.1.1]# make
make  all-am
make[1]: Entering directory `/root/webauth-4.1.1'
....
....
Manifying blib/man3/WebKDC::WebKDCException.3pm
make[2]: Leaving directory `/root/webauth-4.1.1/perl'
make[1]: Leaving directory `/root/webauth-4.1.1'
[root@gmcsrvx2 webauth-4.1.1]# 

And assuming everything is happy, make install:

[root@gmcsrvx2 webauth-4.1.1]# make install
make[1]: Entering directory `/root/webauth-4.1.1'
test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib"
....
....
test -z "/usr/local/include/webauth" || /bin/mkdir -p "/usr/local/include/webauth"
 /usr/bin/install -c -m 644 include/webauth/basic.h include/webauth/keys.h include/webauth/tokens.h include/webauth/util.h include/webauth/webkdc.h '/usr/local/include/webauth'
make[1]: Leaving directory `/root/webauth-4.1.1'
[root@gmcsrvx2 webauth-4.1.1]# 

Now ensure that the new libraries get loaded by running ldconfig. It reads the files in that ld.so.conf.d directory and adds their paths to the library path. MAKE SURE YOU DO THIS!!!
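
In other words, simply:

ldconfig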

The Apache modules for WebAuth have been compiled into /usr/local/libexec/apache2/modules:

[root@gmcsrvx2 ~]# ls /usr/local/libexec/apache2/modules/
mod_webauth.la  mod_webauthldap.la  mod_webauthldap.so  mod_webauth.so  mod_webkdc.la  mod_webkdc.so

mod_webauth Configuration

Make sure you have the proper keytab set up at /etc/webauth/webauth.keytab. This file can technically be located anywhere as long as you configure the module accordingly.

Next you need to create a local state directory for WebAuth. Usually this is /var/lib/webauth. The path can be changed as long as you set it in the following configuration file.

The Apache module needs a configuration file to be included in the Apache server configuration. On Enterprise Linux, this can be located in /etc/httpd/conf.d/mod_webauth.conf:

# Load the module that you compiled.
LoadModule webauth_module     /usr/local/libexec/apache2/modules/mod_webauth.so

# Some fancy WebAuth stuff
WebAuthKeyRingAutoUpdate      on
WebAuthKeyringKeyLifetime     30d

# The path to some critical files. These should be secured. 
WebAuthKeyring                /var/lib/webauth/keyring
WebAuthServiceTokenCache      /var/lib/webauth/service_token_cache
WebAuthCredCacheDir           /var/lib/webauth/cred_cache

# The path to the keytab that webauth will use to authenticate with your KDC
WebAuthKeytab                 /etc/webauth/webauth.keytab

# The URL to point to if you need to login. MAKE SURE THIS IS CORRECT!
WebAuthLoginURL               "https://webauth.example.com/login"
WebAuthWebKdcURL              "https://webauth.example.com/webkdc-service/"

# The Kerberos principal to use when authenticating with the keytab above
WebAuthWebKdcPrincipal        service/webkdc

# SSL is a good thing. Plaintext passwords are bad. Secure this server.
WebAuthSSLRedirect            On
WebAuthWebKdcSSLCertFile      /etc/pki/example.com/example-ca.crt

Again, adjust paths to suit your tastes. The WebAuthWebKdcSSLCertFile should be your public CA certificate that you probably cat’d earlier.

mod_webkdc Configuration

Make sure you have the proper keytab set up at /etc/webkdc/webkdc.keytab. This file can technically be located anywhere as long as you configure the module accordingly.

Next you need to create a local state directory for the WebKDC. Usually this is /var/lib/webkdc. The path can be changed as long as you set it in the configuration files.

The WebKDC utilizes an ACL to allow only certain hosts to get tickets. This goes in /etc/webkdc/token.acl:

# These lines allow all principals under the webauth service to generate a token
krb5:webauth/*@EXAMPLE.COM id
krb5:webauth/gmcsrvx2.example.com@EXAMPLE.COM cred krb5 krbtgt/EXAMPLE.COM@EXAMPLE.COM

For the WebLogin pages, another configuration file (/etc/webkdc/webkdc.conf) is used to specify their settings. This is separate from the Apache module configuration loaded by the webserver.

# The KEYRING_PATH should match what you put in your httpd config
$KEYRING_PATH = "/var/lib/webkdc/keyring";
$URL = "https://webauth.example.com/webkdc-service/";
# You can make custom skins for the weblogin page. Change the path here
$TEMPLATE_PATH = "./generic/templates";

NOTE: You CANNOT change the location of this file or the WebKDC module will freak out.

Note that certain directives must match the Apache configuration. We will make that now in /etc/httpd/conf.d/mod_webkdc.conf:

# Load the module that you compiled.
LoadModule webkdc_module /usr/local/libexec/apache2/modules/mod_webkdc.so

# Some fancy WebKdc stuff
WebKdcServiceTokenLifetime    30d

# The path to some critical files. These should be secured.
WebKdcKeyring                 /var/lib/webkdc/keyring

# The path to the keytab and access control list that the webkdc will use to authenticate with your KDC
WebKdcKeytab                  /etc/webkdc/webkdc.keytab
WebKdcTokenAcl                /etc/webkdc/token.acl

# Debugging information is wonderful. Turn this off when you get everything working.
WebKdcDebug                   On

Ensure that your paths match what you set up earlier.

mod_webauthldap Configuration

This step should only be done if you have an LDAP server operating in your network and can accept SASL binds. You can make certain LDAP attributes available in the web environment as server variables for your web applications. This is configured in /etc/httpd/conf.d/mod_webauthldap.conf:

# Load the module that you compiled
LoadModule webauthldap_module /usr/local/libexec/apache2/modules/mod_webauthldap.so

# Webauth Keytab & credential cache file
WebAuthLdapKeytab /etc/webauth/webauth.keytab
WebAuthLdapTktCache /var/lib/webauth/krb5cc_ldap

# LDAP Host Information
WebAuthLdapHost ldap.example.com
WebAuthLdapBase ou=users,dc=example,dc=com
WebAuthLdapAuthorizationAttribute privilegeAttribute
WebAuthLdapDebug on

<Location />
        WebAuthLdapAttribute givenName
        WebAuthLdapAttribute sn
        WebAuthLdapAttribute cn
        WebAuthLdapAttribute mail
</Location>

Misc Permissions

Most of the files that you created do not have the appropriate permissions to be read/written by the webserver. Let’s fix that.

[root@gmcsrvx2 ~ ]# chcon -t bin_t /usr/local/share/weblogin/*.fcgi
[root@gmcsrvx2 ~ ]# chown -R apache:apache /usr/local/share/weblogin
[root@gmcsrvx2 ~ ]# chcon -R -t httpd_sys_rw_content_t /usr/local/share/weblogin/generic
[root@gmcsrvx2 ~ ]# chown root:apache /etc/webkdc/webkdc.keytab
[root@gmcsrvx2 ~ ]# chmod 640 /etc/webkdc/webkdc.keytab
[root@gmcsrvx2 ~ ]# chown root:apache /etc/webauth/webauth.keytab
[root@gmcsrvx2 ~ ]# chmod 640 /etc/webauth/webauth.keytab
[root@gmcsrvx2 ~ ]# chown -R apache:apache /var/lib/webkdc
[root@gmcsrvx2 ~ ]# chown -R apache:apache /var/lib/webauth
[root@gmcsrvx2 ~ ]# chmod 700 /var/lib/webkdc
[root@gmcsrvx2 ~ ]# chmod 700 /var/lib/webauth

Apache VHosts

I created a VHost for both “webauth.example.com” and “gmcsrvx2.example.com”, with the latter requiring user authentication to view. I name these after the hostname they are serving, in /etc/httpd/conf.d/webauth.conf. Just throw a simple Hello World into the DocumentRoot that you configure for your testing host. Note that you must have NameVirtualHost-ing set up for both ports 80 and 443, as shown below.
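
For Apache 2.2 (stock on EL6) that means something like this somewhere in your server configuration (a sketch; not needed on Apache 2.4, which dropped the directive):

NameVirtualHost *:80
NameVirtualHost *:443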

<VirtualHost *:80>
  ServerName webauth.example.com
  ServerAlias webauth
  # Send them to somewhere useful if they request the root of this VHost
  RedirectMatch   permanent ^/$ https://gmcsrvx2.example.com/
  # Send non-HTTPS traffic to HTTPS since we are dealing with passwords
  RedirectMatch   permanent ^/(.+)$ https://webauth.example.com/$1
</VirtualHost>

<VirtualHost *:443>
  # Name to respond to
  ServerName webauth.example.com
  ServerAlias webauth

  # Root directory
  DocumentRoot /usr/local/share/weblogin

  # SSL
  SSLEngine On
  SSLCertificateFile /etc/pki/example.com/webauth/host-cert.pem
  SSLCertificateKeyFile /etc/pki/example.com/webauth/host-key.pem
  SSLCACertificateFile /etc/pki/example.com/example-ca.crt

  # Web Login directory needs some special love
  <Directory "/usr/local/share/weblogin">
          AllowOverride none
          Options ExecCGI
          AddHandler fcgid-script .fcgi
          Order allow,deny
          Allow from all
  </Directory>

  # This allows you to not need to put the file extension on the scripts
  ScriptAlias /login "/usr/local/share/weblogin/login.fcgi"
  ScriptAlias /logout "/usr/local/share/weblogin/logout.fcgi"
  ScriptAlias /pwchange "/usr/local/share/weblogin/pwchange.fcgi"

  # More special options to make things load right based on your template
  Alias /images "/usr/local/share/weblogin/generic/images"
  Alias /help.html "/usr/local/share/weblogin/generic/help.html"
  Alias /style.css "/usr/local/share/weblogin/generic/style.css"

  # This is the actual web KDC
  <Location /webkdc-service>
          SetHandler webkdc
  </Location>
</VirtualHost>

And the host file at /etc/httpd/conf.d/gmcsrvx2.conf:

<VirtualHost *:80>
  ServerName gmcsrvx2.example.com
  ServerAlias gmcsrvx2
  DocumentRoot /var/www/html
  RedirectMatch   permanent ^/(.+)$ https://gmcsrvx2.example.com/$1
</VirtualHost>

<VirtualHost *:443>
  ServerName gmcsrvx2.example.com
  ServerAlias gmcsrvx2
  DocumentRoot /var/www/gmcsrvx2
  SSLEngine On
  SSLCertificateFile /etc/pki/example.com/gmcsrvx2/host-cert.pem
  SSLCertificateKeyFile /etc/pki/example.com/gmcsrvx2/host-key.pem
  SSLCACertificateFile /etc/pki/example.com/example-ca.crt

  # Require a webauth valid user to access this directory
  <Directory "/var/www/gmcsrvx2">
          AuthType WebAuth
          Require valid-user
  </Directory>
</VirtualHost>

Once you are all set, start Apache. Then navigate your web browser to the host (http://gmcsrvx2.example.com). This should first redirect you to HTTPS, then bounce you to the WebLogin pages. After authenticating, you should be able to access the server.

Conclusion

This is by no means a simple thing to get set up. An intimate knowledge of how Kerberos, Apache, and LDAP work is crucial to debugging issues relating to WebAuth. All of my sample configuration files can be found in the Archive.

IP Over DNS Exploit

The Basics

First let’s review how your traffic gets to its destination. You open up your favorite web browser and punch in “www.google.com”. Since your computer works with IP addresses, not names (like people do), it must first resolve the name. The Domain Name System (DNS) is the service that does this. Your ISP runs DNS servers that are typically handed to you in your DHCP lease. Your computer sends a query to one of these servers asking “what is the IP address of www.google.com?”. Since your ISP does not control the “google.com” domain, the request is forwarded to another server. Eventually someone says “Hey! www.google.com is at 72.14.204.104”, and this response is sent back through the servers to your computer.
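
You can watch the resolution step yourself with a lookup tool like dig, which prints the resolved address (the answer will vary):

dig +short www.google.com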

After resolving the name, your computer creates a data packet destined for 72.14.204.104. The packet is routed according to your host’s route table, typically via your default gateway.

The Technology

IP over DNS is a method of encapsulating IP packets inside DNS queries. This essentially creates a VPN between a server and a client. In my case, I set up and configured the Iodine DNS server on my server at RIT, which I would use to route my traffic to the internet. Inside the tunnel exists the subnet 192.168.0.0/27, with the server at 192.168.0.1. My client connected and received IP address 192.168.0.2.
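
As a rough sketch, the server side of that setup looks like this (iodined is the iodine server daemon; this assumes tunnel.grantcohoe.com is delegated to the server via an NS record):

# Create the tunnel interface at 192.168.0.1/27 and answer DNS queries
# for the tunnel domain
iodined -f -P mypasswordwenthere 192.168.0.1/27 tunnel.grantcohoe.com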

The Problem

Certain WLAN installations include a guest SSID that non-controlled clients can associate to (in this scenario I will use ExampleGuest, keeping the actual one anonymous). Often your traffic gets routed to a captive portal first, requiring you to enter credentials supplied by your organization. Until you do so, none of your traffic will reach its destination outside the LAN… UNLESS you discover a vulnerability, such as the one we are going to explore.

When I walked into the Example Corp site, I flipped open my trusty laptop and got to work. After associating with the ExampleGuest SSID, I was given the IP address 10.24.50.22/24 with gateway 10.24.50.1. I opened my web browser in an attempt to get to “www.google.com” and was immediately directed to the captive portal asking me to log in. Obviously I did not have credentials for this network, so I was at a dead end.

Next I whipped out a terminal session and did a lookup on “www.google.com”. Lo and behold, the name resolved to its external IP address. This means that DNS traffic was being allowed past the captive portal and out onto the network. If “www.google.com” had resolved to something like “1.1.1.1” or an address in the 10.24.0.0/16 space, I would know the captive portal was grabbing all of my traffic, and that would be the end of this experiment. However, as I saw, this was not the case: external DNS queries were still being answered. Time to do some hacks.

The Exploit

Step 1) I need my Iodine client to be able to “query” my remote name server. I added a static route to my laptop that forced traffic to my server through the gateway given to me by the DHCP server. (ip route add 129.21.50.104 via 10.24.50.1)

Step 2) I need to establish my IPoDNS tunnel. (iodine -P mypasswordwenthere tunnel.grantcohoe.com) A successful connection gave me the IP address 192.168.0.2. I was then able to ping 192.168.0.1, which is the inside IP address of the tunnel).

Step 3) I need to change my default gateway to the server address inside the tunnel (ip route add default via 192.168.0.1).

And with that, all of my traffic is encapsulated over the IP over DNS tunnel and sent to my server as DNS queries, thus giving me unrestricted internet access that bypasses the captive portal.

The Defense

This entire experiment would grind to a halt if DNS queries were not handled externally. The network administrator should enable whatever features are necessary to either drop DNS requests from unauthenticated clients or answer them with the captive portal’s IP address.
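
For instance, a hypothetical pair of rules on the gateway that rewrites all guest-subnet DNS to the portal’s own resolver (10.24.50.1 in this story) until a client authenticates:

# Redirect all DNS from the guest subnet to the local resolver
iptables -t nat -A PREROUTING -s 10.24.50.0/24 -p udp --dport 53 -j DNAT --to-destination 10.24.50.1
iptables -t nat -A PREROUTING -s 10.24.50.0/24 -p tcp --dport 53 -j DNAT --to-destination 10.24.50.1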

The Conclusion

End result: Unrestricted internet access over a wireless network that I have no credentials to.

Difficulty of setting this up: HIGH
Speed of tunneled internet: SLOW
Worth it for practical use: NOT AT ALL
Worth it for education: A LOT

This sort of trick can work on airplane and hotel wireless systems as well. Most sites do not think to have their captive portals capture DNS traffic in addition to regular IP traffic. As we can see here, it can be used against them.