Six months ago we began a project to build a site that would let our customers see, in real time, how our services are performing and how available they are. We wanted to address two of the top three concerns about cloud adoption: performance and availability.
The site launched this week and we're proud of the results. We test the services we offer on a frequent, continuous basis, measure how long each test takes to run, and publish the results for customers to monitor.
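The core of each measurement is simple: run a test against a service and record how long it took. A minimal sketch of that idea is below; the test function and the workload are hypothetical stand-ins, not our actual service tests.

```python
import time

def timed(test):
    """Run one synthetic test and return (result, duration in ms)."""
    start = time.perf_counter()
    result = test()
    duration_ms = (time.perf_counter() - start) * 1000.0
    return result, duration_ms

# Illustrative workload standing in for a real service operation.
result, ms = timed(lambda: sum(range(1000)))
```

In practice each published data point is a duration like `ms`, captured continuously and stored alongside the time, location, and service under test.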
The site shows current and historical performance going back a month, and of course we keep all of the historical data. The benefits have been significant: we can see what the service looks like through our customers' eyes, ensure a consistent user experience regardless of the time of day, the customer's location, or the load on the system, and demonstrate that we have the capacity we need.
The measurement system uses some sophisticated technology and is itself built for resilience, and we have created new processes to respond to any issues it raises.
The public site shows overall performance:
The thresholds were determined using a focus group, which worked out the performance levels that users would find acceptable. At an engineering level, in the Service Operations Centre, warning levels are set much lower than these.
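The two-tier approach can be sketched as a simple classification of each measurement: one threshold from the focus group drives what the public site shows, and a much lower internal level raises early warnings. The specific values and status names below are illustrative, not our real configuration.

```python
# Illustrative thresholds only; the real values came from the focus group
# and from engineering judgement in the Service Operations Centre.
PUBLIC_THRESHOLD_MS = 2000   # level users said was acceptable
INTERNAL_WARNING_MS = 800    # much lower engineering warning level

def classify(response_ms: float) -> str:
    """Return a status for one synthetic test measurement."""
    if response_ms > PUBLIC_THRESHOLD_MS:
        return "degraded"    # reflected on the public site
    if response_ms > INTERNAL_WARNING_MS:
        return "warning"     # raised only in the Service Operations Centre
    return "ok"
```

This way engineers see a "warning" and can act well before customers would see anything on the public site.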
By the time the site launched we had carried out over one million tests, and this will grow to nearly ten million over the coming year.
We think this is what cloud vendors should do to address some of the concerns about whether the cloud can be trusted. I would be interested to hear what others think.