Justin Giardina, Chief Technology Officer, iland Internet Solutions
There was a time when cloud was about getting the fastest access to infrastructure resources. IT, and its shadow, did not hesitate to enter their credit card numbers and spin up new environments in the hope of getting everything they’d been promised: more agility, easy scalability and reduced costs. And vendors did not hesitate to jump in on the action, leaving no shortage of cloud infrastructure options.
“An infrastructure provider should support more than just Internet connectivity and VPN, including technologies such as MPLS and OTV”
The reality, however, is that far too many cloud customers continue to struggle to achieve the returns they expected. The market is polluted by the myth of commodity cloud and by messages about pricing wars and robust supporting ecosystems. Companies are quickly realizing that these raw, big-box cloud infrastructures impose huge overhead as they force teams to navigate the disparate add-on technologies required to satisfy their specific business needs. Failure rates are staggering, as reflected in Enterprise Management Associates’ recent report, “Casualties of Cloud Wars: Customers are Paying the Price.”
As a result, the market is seeing IaaS vendors starting to amend their approach to enterprise cloud with more ecosystem extensions and consulting and support services that are offered at a premium. Some providers are far ahead of others, offering more native integrations that remove major hurdles for customers.
IT and business leaders must also rethink their cloud strategies, including their methodology and criteria for choosing an infrastructure provider. There are several areas of functionality and support that teams should seriously evaluate as they begin to evolve their cloud initiatives in 2015:
Transparency: IT administrators do not want to – and cannot afford to – give up any of the visibility or control they are accustomed to getting from their local data centers. Companies continue to be surprised by unexpected cloud costs, downtime and other issues because they do not have easy access to detailed or meaningful cost, performance and usage metrics. Most public clouds offer extremely limited transparency or overwhelm users with too much complex data that is difficult to analyze. All too often, companies are left to buy additional tools or hire a third-party consultant to make sense of it all. Nobody buys a car without first taking a test drive, and it should be no different in cloud. When choosing a cloud infrastructure, customers should always request a demonstration of the native management tools to ensure they can view and take action on the information that impacts them most. They should settle for nothing less than real-time and historical statistics related to resource consumption, billing and performance at all levels of the environment – global, workload and individual resources. Leading tools will also include customizable, action-oriented alerts so teams can proactively address potential problems. There should be powerful self-service functionality, but customers should also have access to phone support should the need arise, and it should not cost extra.
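The kind of action-oriented alerting described above boils down to comparing live resource metrics against customer-set thresholds. A minimal sketch, assuming illustrative metric names and threshold values (no particular provider's API):

```python
# Minimal sketch of a customizable, action-oriented alert check.
# Metric names, units (percent utilization) and thresholds are
# illustrative assumptions, not any provider's actual interface.

def check_alerts(metrics, thresholds):
    """Return a message for every metric that exceeds its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name} at {value}% exceeds threshold of {limit}%")
    return alerts

# Example: real-time stats for one workload (hypothetical numbers).
usage = {"cpu": 92, "memory": 71, "disk": 88}
limits = {"cpu": 85, "memory": 90, "disk": 80}
for alert in check_alerts(usage, limits):
    print(alert)
```

In a real management portal the thresholds would be user-configurable per workload, and each alert would trigger a notification or automated action rather than a print statement.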
Connectivity: The demand for 24/7 access to applications and resources from anywhere means companies must anticipate the type of connectivity required to enable remote access and maintain uninterrupted service. An infrastructure provider should support more than just Internet connectivity and VPN, including technologies such as MPLS and OTV. It’s important for companies to have the option to add their own physical gear to complement a cloud deployment so they can meet any networking or telephony challenge that may arise.
Network and Security: Enterprise applications are not simply “connected to the Internet.” Though concerns around security in the cloud have largely been addressed, companies should realize that most networking vendors provide virtual editions of their platforms for cloud use, enabling enterprises to have the same granular policies in the cloud as they do at their own datacenter. With the rise of virtual appliances, we see easy deployment and subscription-based pricing models for services such as intrusion detection, load balancers and data encryption. It becomes simple, for instance, to control all network and security policies from one place in the event of a disaster, which not only allows for a smooth technical transition and an attractive pricing model, but also provides assurance that the enterprise security policy is adhered to.
Billing: While many teams pursue cloud because of promised cost benefits, unpredictable bills and indecipherable pricing models contribute to some of the biggest cloud headaches today. To eliminate waste, companies should look for straightforward but flexible pricing models that best address their use case, whether that means they pay as they go, reserve what they need or reserve resources with the option to boost. Transparency again plays a major role, as teams must be able to associate costs with each resource (disk, CPU, memory, bandwidth, etc.) and choose pricing models that ensure they get the best rates and only pay for what they need. Alerts are also key so teams can stay on top of potential overages and monitor for cloud sprawl.
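Associating costs with each resource, as recommended above, amounts to itemizing usage against per-resource unit rates. A sketch of that breakdown, with entirely hypothetical rates for a pay-as-you-go model:

```python
# Sketch of associating costs with individual resources.
# Unit rates below are hypothetical, for illustration only.

RATES = {
    "cpu": 0.03,        # per vCPU-hour
    "memory": 0.01,     # per GB-hour
    "disk": 0.0002,     # per GB-hour
    "bandwidth": 0.08,  # per GB transferred
}

def itemized_cost(usage):
    """Return per-resource line items and the total for one billing period."""
    line_items = {res: round(qty * RATES[res], 2) for res, qty in usage.items()}
    return line_items, round(sum(line_items.values()), 2)

# One month for a single VM: 1 vCPU and 4 GB RAM for 720 hours,
# 100 GB of disk for 720 hours, and 150 GB of outbound transfer.
items, total = itemized_cost(
    {"cpu": 720, "memory": 2880, "disk": 72000, "bandwidth": 150}
)
print(items, total)
```

This per-resource view is exactly what makes it possible to spot which line item is driving an unexpected bill, and to compare pay-as-you-go totals against a reserved-capacity rate.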
Compliance: Companies today face ever-increasing corporate and industry compliance considerations, which can be complicated to address in a public cloud. Teams must be able to easily manage control capabilities amongst team members, giving easy access to those who need it while securely restricting others based on their roles. What’s more, as highly regulated industries seek to capitalize on the benefits of cloud, IT teams need partners that can work with them through intensive auditing processes.
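The role-based restriction described above can be sketched as a simple mapping from roles to permitted actions; the role names and permissions here are illustrative assumptions, not a real provider's access model:

```python
# Minimal sketch of role-based access restriction.
# Roles and permissions are illustrative assumptions.

ROLE_PERMISSIONS = {
    "admin":     {"view", "deploy", "delete", "billing"},
    "developer": {"view", "deploy"},
    "auditor":   {"view", "billing"},
}

def is_allowed(role, action):
    """Check whether a given role may perform a given action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# An auditor can review billing for a compliance audit,
# but cannot delete workloads.
print(is_allowed("auditor", "billing"))
print(is_allowed("auditor", "delete"))
```

In practice these roles would be enforced by the cloud management portal itself, and an audit trail of who performed which action is what makes the intensive auditing processes mentioned above tractable.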
Disaster Recovery: More cloud providers than ever are splashing messages about easy, fast disaster recovery, but there is no silver bullet. Without thorough analysis of technical and procedural needs, companies frequently end up with solutions that are inadequate or overkill. There are dozens of questions that teams frequently overlook: Do I need multiple options around networking and connectivity to ensure everyone can access the workloads once recovered? Can I provide my own gear that may not be able to be virtualized? Can I send my provider data to import? How easy will it be to make changes like Windows hostnames? How do I switch DNS for failover? Who ‘pulls the trigger’ should a disaster strike, and how? An experienced Disaster Recovery as a Service (DRaaS) provider will work with a company through assessment, planning, implementation, testing and maintenance. That includes giving customers the ability to manage disaster recovery in the same user interface they use to manage the rest of their cloud infrastructure.

When it comes down to it, CIOs must decide where they need to focus their resources: on building and protecting the cloud, or on leveraging it? If it’s the latter, their team will likely need a provider that takes a consultative approach, understanding and addressing all of the areas outlined above. Cloud infrastructure and how it is used will continue to evolve in the years ahead. The service provider should take on the heavy lifting so its customers can focus on more strategic business initiatives.