When a Research Group buys a server as part of a grant, it gets used extensively by the team working on that grant.
When the grant expires, so does the server. It gets decommissioned and sent to server heaven.
Actually, no. When the grant expires, the server lives on and will most likely continue to be used by the original team and others in the Group.
They get used to its setup, and the software is probably quite stable.
At this point the system is probably 3 or 4 years old, with software of a similar age. If it still works, there is a reluctance to change it, especially if change means learning a new system or transferring a project from one version of software to another. “What happens if we need to revisit our old project?” “But I *know* this works on this setup!” they cry.
So we leave the machine be, keeping it patched and running for as long as possible. Only when it is no longer viable (from either a security or a parts-replacement cost point of view) do we let it fade away. And you know what? By then, the users very rarely find that they actually needed that setup as much as they thought they did.
There must be a cut-off point where an old machine has outlived its usefulness and is just a drain on support and electricity. Should we push our users to find that point? ‘Proactive change’ can be a dirty phrase if one is resistant to change.
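To put a rough number on the “drain on electricity” point, here is a back-of-the-envelope sketch. The wattage and tariff figures are assumptions for illustration, not measurements from any of our machines.

```python
# Rough annual running cost of keeping an old server powered on 24/7.
# All figures below are illustrative assumptions, not measured values.

IDLE_DRAW_WATTS = 350        # assumed average draw of an ageing 2U server
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.30         # assumed electricity tariff in GBP

kwh_per_year = IDLE_DRAW_WATTS / 1000 * HOURS_PER_YEAR
annual_cost = kwh_per_year * PRICE_PER_KWH

print(f"~{kwh_per_year:.0f} kWh/year, roughly £{annual_cost:.0f} in electricity alone")
# ~3066 kWh/year, roughly £920 in electricity alone
```

Even under these assumed figures, a box that is “just sitting there in case we need it” quietly costs the better part of a thousand pounds a year before anyone touches it.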
We try to virtualise where we can, but whilst virtualisation is a reasonable route for replacing the *processing* power of old machines, it can be a bit of a drain to try and move large amounts of storage around.
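To illustrate why the storage side is the sticking point, here is a quick transfer-time estimate; the capacity, link speed and efficiency are assumed figures, and real-world throughput is often worse.

```python
# How long does it take to copy a server's worth of data over the network?
# Capacity, link speed and efficiency are assumptions for illustration only.

DATA_TB = 30         # assumed data sitting on the old machine
LINK_GBIT = 1        # assumed network link speed (Gbit/s)
EFFICIENCY = 0.7     # assumed fraction of the link actually achieved

data_bits = DATA_TB * 1e12 * 8
seconds = data_bits / (LINK_GBIT * 1e9 * EFFICIENCY)

print(f"~{seconds / 86400:.1f} days of continuous copying")
# ~4.0 days of continuous copying
```

Migrating the compute is an afternoon’s work; migrating the data is days of copying, checksumming and hoping nothing writes to the old volume in the meantime.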
So what do we do? I think this again touches on Digital Literacy. We must impress on users that when they purchase a machine they should give it a reasonable lifespan and include provision for the long-term archiving, storage and, if necessary, format/application-shifting of its data.