I have discovered an issue that seems to be triggered when the machine an UltraVNC server is running on loses its Ethernet connection for several seconds. Once this happens, the server gets into a state where it acknowledges connections at the TCP level but never sends its RFB protocol version, making it impossible for a viewer to connect.
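To make the symptom concrete, a minimal probe along the following lines distinguishes the two states (this is just a sketch of mine, not part of UltraVNC; the address is a placeholder, and it assumes the default RFB port 5900):

```python
import socket

HOST = "192.0.2.10"  # placeholder address; substitute the server's real IP
PORT = 5900          # default RFB port; adjust if your server listens elsewhere

# Connect; in the broken state this still succeeds, since the server
# continues to complete the TCP three-way handshake.
sock = socket.create_connection((HOST, PORT), timeout=5)
sock.settimeout(5)
try:
    # A healthy server immediately sends its 12-byte version string,
    # e.g. b"RFB 003.008\n".
    banner = sock.recv(12)
    print("Server banner:", banner)
except socket.timeout:
    print("Hung: TCP connection accepted, but no protocol version within 5 s")
finally:
    sock.close()
```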
I have found two methods of getting the server to respond again, both functionally identical: (1) reboot the computer, or (2) kill both winvnc processes (merely stopping the service only gets rid of one of them when the server is in this state) and restart the service.
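For reference, method (2) amounts to something like the following (a rough sketch; "uvnc_service" is my assumption for the registered service name, so verify it with `sc query` first, and run it with administrative rights):

```python
import subprocess

SERVICE = "uvnc_service"  # assumed service name; verify with `sc query` on your install

# /F forces termination; /IM matches by image name, so this kills every
# winvnc.exe instance, including the one that stopping the service leaves behind.
subprocess.run(["taskkill", "/F", "/IM", "winvnc.exe"], check=False)
subprocess.run(["net", "start", SERVICE], check=False)
```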
I have packet captures of both a failed connection (a TCP three-way handshake followed by over a minute of TCP keep-alives, after which I got tired of waiting) and a successful one (another three-way handshake, this time followed by the server sending its protocol version, the client responding in kind, and everything progressing as expected), but I'm not convinced they're interesting beyond those details. If anyone thinks otherwise, I can post them.
Both server and viewer are running on 32-bit Windows XP SP3, and both are UltraVNC version 184.108.40.206. The server is running as a service under the Local System account.
What can I change on the server to alleviate this issue? Is there other information I could provide that would shed more light on it?
 "Several seconds", in this context, means "a minimum of 10 to 15 seconds". I did not do any particularly precise timing, but an approximately 5 second disconnect did not trigger this issue, while an approximately 15 second disconnect did. All disconnects longer than 15 seconds also triggered this issue.
 The reason for this connection loss seems to be immaterial; I observe the same behaviour both when the ethernet cable is unplugged and when the router power-cycled.