By Jared Naude, Synthesis Cloud Architect
In an era of rapid technological change, organisations have had to adapt the way they do business to respond to evolving customer needs and to future-proof themselves. To stay at the forefront of innovation and maintain a competitive edge, development teams need to be able to test and iterate on ideas quickly.
The challenge for IT teams is that they would traditionally have to build large, expensive data centres to host the infrastructure for their business applications. Building and running data centres is an expensive undertaking due to the significant expertise, capital expenditure and time required to procure buildings, servers, networking equipment, power distribution and cooling, as well as the supporting infrastructure for security and administrative staff.
Once data centres are in operation, organisations in regulated industries need to ensure that they adhere to standards such as PCI DSS, SOX and ISO 27001. Ensuring that data centres have the required security controls and procedures in place, along with the staff to support them, is yet another item on the operational budget. As infrastructure ages, it needs to be updated, and hardware refreshes can be an expensive and time-consuming exercise: it can take up to 18 months for hardware to be procured, shipped and racked.
Modern applications need to be able to scale out to handle traffic, and the challenge is that this requires significant infrastructure investment to absorb the spikes. Development teams may also want to experiment with new tools, ideas and products, which requires hardware as well. If excess capacity is available, getting it configured for the team can take anywhere from two to six weeks in a typical organisation. If no excess capacity is available, additional hardware would need to be procured, which is not only time-consuming but may require additional budget as well.
This is where Cloud Computing shines, as it enables the on-demand delivery of compute power, database storage, applications and other IT resources with pay-as-you-go pricing. Infrastructure can be provisioned within minutes as it is needed and destroyed when the team has finished testing an idea or product. Furthermore, infrastructure that is spun up to handle a sudden increase in traffic can easily be removed so that it does not sit idle when it is no longer required, which can result in significant cost savings for an organisation.
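To make this concrete, here is a minimal sketch, using Python and the AWS boto3 SDK, of how a team might spin up a temporary test instance and terminate it when an experiment is done. The region, AMI ID and instance type are placeholder assumptions for illustration, not values from this article, and the instance only incurs charges for the time it actually runs.

```python
import boto3

# Placeholder values for illustration; substitute your own region, image and type.
REGION = "af-south-1"
AMI_ID = "ami-0123456789abcdef0"   # hypothetical image ID
INSTANCE_TYPE = "t3.micro"

ec2 = boto3.client("ec2", region_name=REGION)


def launch_test_instance() -> str:
    """Provision a single short-lived instance for an experiment."""
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "experiment"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]


def tear_down(instance_id: str) -> None:
    """Terminate the instance so it stops incurring charges."""
    ec2.terminate_instances(InstanceIds=[instance_id])


if __name__ == "__main__":
    instance_id = launch_test_instance()
    print(f"Launched {instance_id} for the experiment.")
    # ... run the experiment ...
    tear_down(instance_id)
```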
Under the shared responsibility model, the Cloud provider looks after the “security of the Cloud”, including all facilities, networking and related infrastructure, while the customer is responsible for “security in the Cloud”, which covers the configuration of the infrastructure and managed services they use. Understanding this model is critical to understanding the cost savings of moving infrastructure into the Cloud. Cloud providers invest large sums of capital into the security of their data centres, underlying infrastructure and support systems, and few organisations can build data centres or managed services to the same level on their own.
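As an illustration of what “security in the Cloud” means in practice, the sketch below, again assuming Python and boto3, checks whether an S3 bucket has all of its public-access block settings enabled; the bucket name is a hypothetical placeholder. The provider secures the storage service itself, but leaving a bucket publicly accessible is a customer-side configuration responsibility.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-customer-bucket"  # hypothetical bucket name


def public_access_fully_blocked(bucket: str) -> bool:
    """Return True only if every S3 public-access block setting is enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError as error:
        # No configuration at all means public access is not being blocked.
        if error.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise
    return all(config.values())


if __name__ == "__main__":
    if not public_access_fully_blocked(BUCKET):
        print(f"{BUCKET}: public access is not fully blocked; review the configuration.")
```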
What is the future of on-prem data centres?
I believe that the custom-built data centres organisations use today are becoming a thing of the past. Data centres that have already been built will continue to run workloads because of the capital that has been sunk into the buildings and compute resources. On-prem data centres will continue to exist; however, their primary purpose will shift from core systems to peripheral systems, and we will see organisations turning to the Cloud for additional capacity instead of building that capacity out in their own data centres.
Organisations that have adopted a Cloud-first strategy will still need on-prem infrastructure to provide network connectivity, as well as supporting infrastructure for access control and office facilities. Organisations will adopt hybrid Cloud, where legacy workloads remain on-prem and newer workloads land directly in the Cloud.
We will also see the rise of co-location, where organisations either move infrastructure into a shared data centre or extend a network edge into a co-location facility. We have seen this over the past three years in South Africa, where large enterprises have expanded their networks into co-location facilities like Teraco to be closer to the internet exchange as well as other provider networks. From these facilities, they can also connect to dedicated high-speed Cloud transit services such as AWS Direct Connect or Azure ExpressRoute.