A recurring problem in development, and in planning development, is a lack of information on how the product is actually used and how well it functions. We primarily gather this information through our various contacts with users, but some of it could perhaps be automated so that more information can be gathered. Many modern systems gather statistics via some telemetry feature, where the software regularly "phones home" with some data. It should be possible for us to do the same in the server and/or client.

We should probably try to be respectful and privacy conscious, so:

* It should be opt-in, not opt-out.
* Try to completely avoid any personal information (usernames, paths, IP addresses, ideally even timestamps). Preferably, don't even track which organisation it is. This also means we avoid any GDPR concerns.
* Try to aggregate data even before we send. Means are usually too sensitive to outliers, but there are hopefully other ways we can aggregate things and still preserve information about the distribution.

Some areas that are interesting to measure:

* Which platforms the product is used on
* Which versions people use
* Which features in the product are used, and how much
* What usage patterns look like: peaks, averages, durations
* Performance: mainly various VNC measurements, but there could be others
* Which errors are encountered

Many things can probably be monitored solely from the server, as the client sends over many details when it connects. But some things will need telemetry at both ends.
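As a rough illustration of the aggregation idea, one option is to bucket measurements into a fixed histogram on the client before anything is sent, so the shape of the distribution survives but no raw samples (or timestamps) leave the machine. This is only a sketch; the bucket bounds and the `aggregate` helper are hypothetical, not anything that exists in the product today.

```python
from bisect import bisect_right

# Hypothetical latency buckets (upper bounds in ms); anything above the
# last bound lands in an overflow bucket, so outliers are visible but
# cannot skew the result the way they would skew a mean.
BUCKET_BOUNDS_MS = [1, 5, 10, 50, 100, 500, 1000]

def aggregate(samples_ms):
    """Return per-bucket counts for a list of latency samples (in ms)."""
    counts = [0] * (len(BUCKET_BOUNDS_MS) + 1)
    for sample in samples_ms:
        counts[bisect_right(BUCKET_BOUNDS_MS, sample)] += 1
    return counts

# A single extreme outlier (9000 ms) just increments the overflow bucket.
print(aggregate([2, 3, 7, 8, 12, 9000]))
# → [0, 2, 2, 1, 0, 0, 0, 1]
```

Only the list of counts would ever be transmitted, which also keeps the payload small and constant-size regardless of how many samples were taken.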
It might also be possible to present some or all of this to the sysadmins of the systems. That could mean we need to store data for longer on each cluster, though, as some history might be needed for the measurements to be useful.