The Super-Tent/IT Pavilion/Big Top/Big House fronts on the main RU parking lot, at the other end of which is the 66th St Gate. Except that after we moved in, they walled in the lot and started digging:

Parking Lot and Super-Tent

They still haven't started on Smith Hall, though, which makes me wonder why we couldn't still be in a proper building right now. In the meantime, the main campus entrance and driveway are closed, along with the parking lot, under which a new electrical vault will be built. Getting around campus is much more complicated now than it was 6 months ago. This is especially true for IT when moving equipment around the tent, as the pathways and steps around the periphery don't quite work for carts.


Our new main data center is nearing completion. It was previously our backup/disaster recovery site, so it needed a lot of build-out to fit the rest of our servers. The swap from the older/smaller UPS system to the newer/larger one will be tricky, as several live servers will have to be switched over while running. Later we get to swap systems end-for-end across campus, so the primaries end up in the primary DC once their current location becomes the DR site. Needless to say, most of our systems are not redundant, so there will be a bunch of minor disruptions.

Stu Cohnen

Stu, who is overseeing the build-out of what will largely be 'his' DC, showed me why Cat6A cabling is so much thicker (and thus harder to work with) than old-school Cat5 UTP ("Unshielded Twisted Pair") -- the internal copper pairs are twisted around each other many more times per inch to reduce interference, and the whole bundle is cradled by a plastic framework shaped like a plus sign. This framework is twisted as well, so as the Cat6A cables lie next to each other in cable trays, the individual conductors don't line up with those in neighboring Cat6A cables, again helping to avoid signal transfer (crosstalk) between what should be independent connections. The idea is that in 10 years, when everybody is demanding 10GE connections, we'll be able to simply re-patch uplinks into 10GE switch ports as needed. Otherwise the rewiring would be painful for individual machines, and impossibly disruptive to do in bulk.

Unfortunately, the heavier-duty Cat6A is also heavier and bulkier, and thus significantly harder to work with and slower to run. Each of the 24 new 42U racks is getting 48 runs, from 2 1U patch panels in each rack, back to 6 patch panels (96 connections) in each of the new network racks, where switches and other Cat5-based gear, such as terminal servers and KVM switches, will go. That's 1,152 new runs in addition to the slightly older stuff at the South end of the room, which is still our DR site during this construction.
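For a quick sanity check on those numbers, here's a throwaway bit of Python tallying the new runs; it assumes the 48 runs per rack split evenly across the two panels (24 ports each), which is my guess rather than anything in the cabling plan.

    # Tally of the new Cat6A runs described above.
    # The 24-ports-per-panel split is an assumption; the rack and run counts
    # are the ones from the build-out.
    server_racks = 24       # new 42U racks
    panels_per_rack = 2     # 1U patch panels in each server rack
    ports_per_panel = 24    # assumed even split of the 48 runs

    runs_per_rack = panels_per_rack * ports_per_panel   # 48
    total_runs = server_racks * runs_per_rack

    print(total_runs, "new Cat6A runs back to the network racks")   # 1152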

My question is: How long will it be before we need more than 48 connections in a rack? Our non-blade Linux servers tend to have 3 Cat5 connections: Ethernet, serial console, and KVM; Windows systems don't need serial consoles, so they get 2. A rack of 1U Linux servers maxes out at 40 1U servers and 120 Cat5 connections, which just won't fly here. 8 2U Linux servers (24 connections) and 12 Windows servers (another 24 connections) fill a rack, meaning that as time goes on and we someday get tight for space again, we might run out of network connections before we run out of rack space. At that point we could put a KVM server in every third rack and reclaim a lot of cabling for Ethernet, but that would violate our model of having everything patched back to the switch racks. We'll see what the world looks like when we actually get there...
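To make the connection math concrete, here's a rough Python sketch of the per-rack budget. It uses the per-server counts above (3 Cat5 connections per Linux box, 2 per Windows box) and assumes the Windows servers are 2U, which is what makes 8 + 12 servers plus two 1U patch panels fill a 42U rack; treat it as back-of-the-envelope, not an inventory.

    # Back-of-the-envelope rack connection budget.
    # Assumptions: 3 Cat5 runs per Linux server (Ethernet, serial console, KVM),
    # 2 per Windows server (no serial console), 2U Windows boxes, and two 1U
    # patch panels per 42U rack feeding a 48-run budget.
    RUNS_PER_RACK = 48      # 2 x 1U patch panels per rack
    PATCH_PANEL_UNITS = 2   # rack units eaten by those panels

    def rack_usage(linux_1u=0, linux_2u=0, windows_2u=0):
        """Return (rack units used, Cat5 connections needed)."""
        units = linux_1u + 2 * linux_2u + 2 * windows_2u + PATCH_PANEL_UNITS
        conns = 3 * (linux_1u + linux_2u) + 2 * windows_2u
        return units, conns

    # A rack packed with 1U Linux servers fills 42U but needs 120 connections,
    # far over the 48-run budget.
    print(rack_usage(linux_1u=40), "budget:", RUNS_PER_RACK)

    # The mixed rack described above also fills 42U and lands exactly on 48.
    print(rack_usage(linux_2u=8, windows_2u=12), "budget:", RUNS_PER_RACK)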


I discovered yesterday that they're also digging up the driveway between Founders Hall and Flexner -- not sure why, but it looks like they're laying pipe for plumbing.

Trench between Founders and Flexner

Update: According to Stu, this is actually conduit for electrical wiring, from the vault under our parking lot up through to an electrical switching station in Flexner.


Many more RU photographs are up at http://www.reppep.com/~pepper/album/ru/