Normally, the first time you ssh to a new server, OpenSSH asks for permission to store the server's hostname (and IP) along with its unique ssh hostkey in ~/.ssh/known_hosts. Then if the hostkey ever changes, either because the machine was rebuilt or because you're connected to a different machine (as would be the case if someone intercepted your connection, for instance...), OpenSSH complains loudly that something is hinky:

pepper@teriyaki:~$ ssh cluster uname -a
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The DSA host key for cluster has changed,
and the key for the corresponding IP address
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the DSA host key has just been changed.
The fingerprint for the DSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /Users/pepper/.ssh/known_hosts to get rid of this message.
Offending key in /Users/pepper/.ssh/known_hosts:81
DSA host key for cluster has changed and you have requested strict checking.
Host key verification failed.

This is a nuisance with high-availability (HA) clusters, where multiple nodes may share a single hostname and IP. The first time you connect to a shared IP everything works, and you store the hostkey of whichever node accepted your connection. It may continue to work for a long time, as long as you keep landing on the same node. But when a different node answers at that IP, OpenSSH sees a different hostkey, and either the connection fails (if it's non-interactive) or you get the scary warning above (if it's interactive). To avoid this, the convention is to ssh directly into individual nodes for administration.

But some of our sequencers use rsync-over-ssh to export data to our Isilon storage clusters, so we had a problem. If we configured them to connect to the cluster's virtual IP (VIP), as NFS clients do, transfers would break whenever they landed on a different node. But if we configured them to connect to individual nodes, we'd lose failover -- if any Isilon node went down, all of 'its' clients would stop transferring data until it came back up.

I briefly considered synchronizing the ssh hostkeys between nodes to avoid the errors, but a shared hostkey is poor security: any one node could eavesdrop on connections to all of its peers, and rotating the key later would be disruptive.

Fortunately the OpenSSH developers are way ahead of me. If the hostkey a server presents matches any key on file for that hostname -- even if other, conflicting keys are also on file for the same name -- OpenSSH accepts the connection.
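You can see the name-matching side of this from the client with ssh-keygen -F, which looks up every key on file for a given name. A quick sketch -- the "cluster" names, the generated stand-in key, and the temporary paths are all illustrative:

```shell
# Sketch: one known_hosts line can carry a key under several names,
# and ssh-keygen -F looks entries up by any of those names.
tmp=$(mktemp -d)

# Generate a stand-in "hostkey" (in real life each node supplies its own).
ssh-keygen -q -t ed25519 -N '' -f "$tmp/hostkey"

# File the same key under both the node's own name and the shared name.
printf 'cluster-1,cluster %s\n' "$(cut -d' ' -f1,2 "$tmp/hostkey.pub")" \
    > "$tmp/known_hosts"

# -F prints the matching entry and exits 0 when the name is found.
ssh-keygen -F cluster -f "$tmp/known_hosts"
ssh-keygen -F cluster-1 -f "$tmp/known_hosts"
```

Both lookups hit the same line, which is exactly why a connection to the shared name succeeds once the node's key is filed under it.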

To set this up, ssh to each node directly once (storing its individual hostkey), then append the shared cluster hostname and IPs to that node's entry in ~/.ssh/known_hosts (per-user) or /etc/ssh/ssh_known_hosts (system-wide):

cluster-1,,cluster,,,, ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-2,,cluster,,,, ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-3,,cluster,,,, ssh-dss AAAAB3NzaC1kc3MAAACBA...
cluster-4,,cluster,,,, ssh-dss AAAAB3NzaC1kc3MAAACBA...
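Editing known_hosts by hand gets tedious past a few nodes; a sed one-liner can tag each node's existing entry with the shared name instead. A sketch, assuming your node entries begin with cluster-N followed by a comma or space (the hostnames and path are illustrative -- adjust the pattern to your naming scheme):

```shell
# Sketch: append the shared name "cluster" to each node's existing
# known_hosts entry, so the same hostkey is valid under both names.
# Matches lines starting "cluster-<digits>" followed by a comma or space.
sed -i.bak 's/^\(cluster-[0-9][0-9]*\)\([ ,]\)/\1,cluster\2/' ~/.ssh/known_hosts
```

After the edit, ssh cluster matches whichever node answers, since every node's key is now on file under the shared name, and the .bak copy preserves the original file in case the pattern matched more than intended.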