How robust is Cloud hosting when compared to the reliable and familiar on-premise setup?
Mike Sowerbutts, Infrastructure Lead at Claremont, looks at how Oracle cloud hosting technology can perform as well as its on-premise rivals. Cloud has been on the radar of many an IT manager for some time, and the recent COVID-19 pandemic has highlighted the benefits of cloud-based technology, particularly for remote-working employees. Cloud hosting has now become a viable solution for many organisations that had previously been nervous about a move in this direction.
However, many organisations remain apprehensive about moving to a Public Cloud, questioning the ability of shared, virtualised platforms to deliver the performance their IT systems require. So how robust is a Cloud option when compared to the reliable and familiar on-premise setup?
The Primary Concerns
From speaking to our customer base, the fears about moving to a cloud hosting solution are twofold. Firstly, cloud services share platform resources across multiple customers. Will this configuration lead to a degradation in performance if other customers are using their systems intensively? And can a level of performance truly be guaranteed?
Secondly, there is the network. An on-premise setup typically uses a corporate network to serve the user base, and moving the servers into the cloud will inevitably increase latency between the user and the server.
Applications have, for some time, been designed to minimise traffic between server and client. For example, Oracle Self-Service technologies make relatively small demands on the network between the server and the user. This minimises the bandwidth required and mitigates the effects of high latency. Equally, in more recent years, networking has moved on and most organisations have large amounts of bandwidth at their disposal. Furthermore, leased lines and multi-protocol label switching (MPLS) links have become significantly cheaper and can provide dedicated, high-speed access to servers in the cloud.
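To see why application design and bandwidth matter as much as raw latency, a back-of-envelope calculation helps. The sketch below is illustrative only: the round-trip counts, latency and bandwidth figures are assumptions chosen to show the shape of the trade-off, not measurements of any particular system.

```python
# Back-of-envelope estimate of how network latency affects a user action.
# All figures are illustrative assumptions, not measurements.

def action_time_ms(round_trips, latency_ms, payload_kb, bandwidth_mbps):
    """Time for one user action: per-round-trip latency plus transfer time."""
    transfer_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return round_trips * latency_ms + transfer_ms

# A "chatty" legacy client making 40 round trips per screen:
chatty = action_time_ms(round_trips=40, latency_ms=15, payload_kb=200, bandwidth_mbps=100)

# A lean self-service web UI making 4 round trips for the same screen:
lean = action_time_ms(round_trips=4, latency_ms=15, payload_kb=200, bandwidth_mbps=100)

print(f"chatty client: {chatty:.0f} ms, lean client: {lean:.0f} ms")
```

With these assumed figures the chatty client spends roughly eight times as long per action, almost all of it in round-trip latency, which is why applications designed to minimise round trips tolerate the move to the cloud far better than the extra distance alone would suggest.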
A more complex issue is that of resource sharing. Public Cloud platforms use virtualisation technologies to share physical resources between multiple customers' virtual servers. This raises the concern that resource-hungry virtual servers will leave others starved of performance. However, this is not necessarily the case: it depends very much on how the cloud in question is configured. That said, it is critical to understand how load is managed across the cloud estate before deciding if it is suitable for your system.
All Clouds Are Not The Same
There is a plethora of cloud options available, but not all offer resource management or the flexibility to tailor to an organisation's specific needs. Claremont's cloud does offer such flexibility and is configured to prevent resource starvation by dedicating physical CPU cores on a 1:1 basis to customers' vCPUs. This is achieved using Oracle VM configured with "Hard Partitioning".
This not only allows us to control which physical CPUs are used by a specific VM, thus guaranteeing performance, but also delivers significant Oracle licensing cost savings. Conversely, many clouds minimise costs (at the expense of performance) by time-slicing each physical CPU core across multiple vCPUs, yet can actually require more Oracle licences than on-premise hosting.
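For readers unfamiliar with the mechanism, hard partitioning on a Xen-based hypervisor such as Oracle VM comes down to pinning a guest's vCPUs to specific physical cores. The fragment below is a purely illustrative, Xen-style guest configuration; the core numbers are hypothetical, and in practice Oracle's own hard-partitioning guidance and tooling should be followed so that the pinning is recognised for licensing purposes.

```
# Illustrative Xen-style guest configuration (vm.cfg) fragment.
# Core numbers are hypothetical examples only.
vcpus = 4           # the guest sees four vCPUs
cpus  = "8-11"      # pin those vCPUs to physical cores 8-11 and nothing else
```

Because cores 8-11 are dedicated to this guest, no other customer's workload can contend for them, and only those four cores need to be counted for Oracle licensing.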
The same holds true from a memory and networking perspective: no elements of the purchased resource are shared between customers, and as such, Claremont is able to guarantee system performance.
What about I/O? It is true that some components of the storage platform are shared across multiple cloud customers. However, modern SAN technology allows the administrator to deploy storage profiles to guarantee a level of performance, be that response time, throughput or IOPS. It is typically the case that due to economies of scale, the most up-to-date SAN technology can be harnessed, allowing for optimal configuration of the storage platform and the deployment of, for example, solid state storage over traditional spinning disk to ensure performance is high.
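The quantities a storage profile can guarantee are related by simple arithmetic, and it is worth seeing how an IOPS ceiling translates into throughput at different I/O sizes. The figures below are illustrative assumptions, not a real storage profile.

```python
# How a storage profile's IOPS ceiling maps to throughput.
# Figures are illustrative assumptions, not a real storage profile.

def throughput_mb_s(iops, block_size_kb):
    """Throughput implied by an IOPS limit at a given I/O size."""
    return iops * block_size_kb / 1024

# 20,000 IOPS at an 8 KB database block size is a modest throughput:
print(f"{throughput_mb_s(20000, 8):.0f} MB/s")

# The same IOPS ceiling at 64 KB I/O (e.g. backups or scans) implies
# far more throughput, which is why profiles often cap IOPS and
# throughput independently:
print(f"{throughput_mb_s(20000, 64):.0f} MB/s")
```

This is also why it matters whether a supplier's guarantee is expressed in IOPS, throughput or response time: a limit on one does not automatically bound the others.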
Claremont has a number of customers that have experienced increased I/O performance following a migration from on-premise to Claremont Cloud. This improvement is directly attributable to these economies of scale.
Tailored To Your Needs
Claremont can also be flexible with Cloud resource allocation. There are scenarios where choosing to share resource is beneficial. One such scenario is a well-defined development cycle: first the development environment is used while test sits idle, then vice versa, before deployment to pre-production and production.
In this scenario, it can drive efficiencies to configure development and test to share resources. The customer saves costs by only buying enough resource for one environment, without any performance consequences because the environments follow a “timeshare” pattern of usage.
In conclusion, when deciding to move to a cloud hosting option, performance is a key consideration. When discussing options with a cloud supplier, it is important to understand how performance is managed and what options are available to insulate your own environment from others in the cloud.