Normally, the first time you ssh to a new server, OpenSSH asks for permission to store the server's hostname (and IP) along with its unique ssh hostkey in ~/.ssh/known_hosts. If that hostkey ever changes later -- because the machine was rebuilt, or because you're actually talking to a different machine (as you would be if someone intercepted your connection, for instance) -- OpenSSH complains loudly that something is hinky:

pepper@teriyaki:~$ ssh cluster uname -a
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The DSA host key for cluster has changed,
and the key for the corresponding IP address 10.0.10.124
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the DSA host key has just been changed.
The fingerprint for the DSA key sent by the remote host is
f7:b0:d4:11:2c:6c:ec:be:96:f0:88:71:d9:26:20:0c.
Please contact your system administrator.
Add correct host key in /Users/pepper/.ssh/known_hosts to get rid of this message.
Offending key in /Users/pepper/.ssh/known_hosts:81
DSA host key for cluster has changed and you have requested strict checking.
Host key verification failed.

This is a nuisance with high-availability (HA) clusters, where multiple nodes may share a single hostname and IP. The first time you connect to a shared IP everything works, and you store the hostkey of whichever node accepted your connection. That may keep working for a long time if you keep landing on the same node, but as soon as a different node answers at that IP, OpenSSH sees a different hostkey and concludes it's a different machine: the connection fails outright if it's non-interactive, or you get the scary warning above if it's interactive. To avoid this, the convention is to ssh directly into individual nodes for administration.
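When this bites you once, the standard one-off fix is to delete the stale entry and accept the new key on the next connection:

# Remove all stored keys for 'cluster' from ~/.ssh/known_hosts,
# then re-accept whichever key is offered on the next connection.
ssh-keygen -R cluster

But that only trades one node's key for another, so on an HA cluster the warning comes right back as soon as a different node answers.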

But some of our sequencers use rsync-over-ssh to export data to our Isilon storage clusters, so we had a problem. If we configured them to connect to the virtual IP (VIP), like our NFS clients do, transfers would break whenever they landed on a different node. But if we configured them to connect to individual nodes, we'd lose failover -- if any Isilon node went down, all of 'its' clients would stop transferring data until it came back up.
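For concreteness, each transfer job amounts to something like this (the paths and username are made up for illustration; 'cluster' is the VIP hostname from the warning above):

# Push new runs over ssh to the cluster's virtual IP.
rsync -az -e ssh /data/runs/ seqdata@cluster:/ifs/sequencing/runs/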

I briefly considered synchronizing the ssh hostkeys between nodes to avoid the errors, but that's poor security -- if every node shares one hostkey, any node can eavesdrop on (or impersonate) connections to any of its peers, and rotating a shared key later is disruptive.

Fortunately, the OpenSSH developers are way ahead of me: known_hosts can hold multiple keys for the same name, and as long as the key the server presents matches any entry on file for that host -- even if other, conflicting keys are on file for the same host -- OpenSSH accepts it.

To set this up, ssh to each node once, then append the shared cluster hostname and VIPs to that node's entry in ~/.ssh/known_hosts or /etc/ssh/ssh_known_hosts, so every node's key is valid for the cluster name and addresses (a scripted version follows the example entries):

cluster-1,10.0.10.101,cluster,10.0.10.121,10.0.10.122,10.0.10.123,10.0.10.124 ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-2,10.0.10.102,cluster,10.0.10.121,10.0.10.122,10.0.10.123,10.0.10.124 ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-3,10.0.10.103,cluster,10.0.10.121,10.0.10.122,10.0.10.123,10.0.10.124 ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-4,10.0.10.104,cluster,10.0.10.121,10.0.10.122,10.0.10.123,10.0.10.124 ssh-dss AAAAB3NzaC1kc3MAAACBA...
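Editing four entries by hand is easy enough, but it's also scriptable. Here's a sketch, assuming ssh-keyscan can reach each node and using the node names and VIPs from the entries above (each node's own IP could be appended the same way):

# Fetch each node's DSA hostkey and tag it with the shared cluster
# name and virtual IPs, so whichever node answers at the VIP matches.
vips=cluster,10.0.10.121,10.0.10.122,10.0.10.123,10.0.10.124
for node in cluster-1 cluster-2 cluster-3 cluster-4; do
    ssh-keyscan -t dsa "$node" 2>/dev/null | sed "s/^[^ ]*/&,$vips/"
done >> ~/.ssh/known_hosts

Afterwards, 'ssh-keygen -F cluster' should list all four keys as valid for the cluster name, and connections to the VIP succeed no matter which node answers.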