Best of TechBeacon 2016: New technologies that redefined IT Ops
Cloud computing, DevOps, containerization, and serverless computing are just some of the trends that transformed IT operations and service management in 2016. Runtime environments have become increasingly complex, exposing limitations in familiar, long-cherished IT service management practices such as ITIL. The need for IT Ops to adapt its practices is especially urgent where legacy infrastructures are involved.
TechBeacon's top 10 IT operations stories of 2016 tracked these trends—and offered practical advice for moving forward.
A frequent caveat about using Kubernetes for container cluster management is that it is not quite production-ready. That sentiment applied not just to Kubernetes but to clustering tools generally, including Docker Swarm, during much of 2015. Since then, however, these tools have evolved at lightning speed. Paul Bakker, Software Architect at Luminis Technologies, reflects on his organization’s experience using Kubernetes—the good and the bad—and why he thinks the tool is more mature than you might think.
IT operations management and service management professionals know just how complex runtime environments can get. The venerable set of practices captured in ITIL is no longer enough for managing the business of modern IT. The Open Group’s IT4IT Reference Architecture, released last October, is designed to address the changing requirements of IT operations, and has already garnered considerable vendor support. Daniel Warfield, Senior Enterprise Architect at CC and C Solutions, explains what IT4IT offers and why you need it.
Containers have become the next big thing. From virtually no market share to speak of in 2015, Docker now runs on more than six percent of all hosts. Adoption has increased fivefold in a single year, and development organizations that try Docker tend to adopt it quickly. Do you know all the things you need to consider to successfully leverage container technology? David Linthicum, SVP of Cloud Technology Partners, highlights the five most important areas.
Applications built using microservices are flexible and scalable, but they don’t run especially well on legacy distributed computing infrastructures. Serverless computing offers a way out by eliminating the need for developers to worry about underlying physical infrastructure and systems software. While the model is set to play a critical role in the enterprise, it is not suited to every use case. In this thorough assessment, Peter Sbarski, Vice President of Engineering at A Cloud Guru and author of Serverless Architectures on AWS, shares tips to help you determine when and where going serverless makes sense.
All of the excitement around containers has led many to believe that containers somehow automatically turbocharge continuous delivery pipelines. The reality is that containers help, but you still need to do a lot of hard work to achieve continuous delivery. Speaker and author Todd DeCapua highlights four of the most common misconceptions surrounding containers and continuous delivery.
In launching its Lambda service for Amazon Web Services (AWS) last year, Amazon ushered in what many are calling the serverless era. It’s the first computing model that does not require the application operations team to directly manage the environment that executes the code. Do you know how serverless computing can help your organization? Navica CEO Bernard Golden gives you the low-down on serverless computing and the business case for using it.
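To make the model concrete, here is a minimal sketch of what "no environment to manage" looks like in practice. It follows the standard AWS Lambda Python handler convention (a function taking an event and a context); the event fields and response shape are illustrative assumptions, not taken from any article above.

```python
import json

def handler(event, context):
    # AWS Lambda invokes this function on demand; there is no server
    # process for the operations team to provision, patch, or scale.
    # "name" is a hypothetical field chosen for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name}),
    }
```

The operations burden shifts entirely to the provider: you deploy the function, and the platform handles execution, scaling, and the runtime underneath it.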
Building an Infrastructure as Code (IaC) capability is a good way to bridge the gap between the developer and IT operations groups. It gives developers a way to focus on application development without having to worry about the minutiae of their physical infrastructure, and it gives operations teams the assurance that the code they're sent will run without disruption. Gary Thome, Vice President and Chief Technologist of Converged Datacenter Infrastructure at HPE, describes what managers really need to know about IaC.
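The core idea behind Infrastructure as Code is declaring desired infrastructure as data and letting tooling compute the changes needed to reach it. The toy reconciler below illustrates that pattern only; it is not any real provisioning tool, and the resource names are invented for the example.

```python
# Illustrative only: a toy IaC-style reconciler. Desired state is
# declared as data; reconcile() computes the plan of actions needed
# to move the actual environment toward the declared one.

def reconcile(desired, actual):
    """Return the create/destroy actions needed to reach desired state."""
    to_create = sorted(set(desired) - set(actual))   # declared but absent
    to_destroy = sorted(set(actual) - set(desired))  # present but undeclared
    return {"create": to_create, "destroy": to_destroy}

# Hypothetical desired state: two small web hosts.
desired = {"web-1": {"size": "small"}, "web-2": {"size": "small"}}
```

Real IaC tools apply the same declare-then-reconcile loop at much larger scale, which is what lets the same definition produce identical environments for development, test, and production.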
Systems administrators still have a role to play in the DevOps world. But staying relevant becomes harder when the focus is all about speeding application delivery through the merger of development and IT operations groups. Skills like debugging, expertise in legacy systems, and knowledge of Python, Perl, and configuration management can help, reports contributor Robert Scheier.
Rewriting legacy applications so they work better in a DevOps world can be expensive, challenging, and downright risky. So how do you adapt your portfolio of legacy applications so they play more nicely in an environment where agility, speed, and continuous delivery are becoming ever more important? Using modern tools to automate as much of your IT operations as possible is a good place to start. HPE CTO Jerome Labat outlines this and other ways that operations teams can help adapt legacy environments to the DevOps world.
Unless you’ve been living under a rock for the past several years, you know that the cloud revolution is here, and that almost every organization has embraced it. But do you know what it takes to be successful with cloud computing? Do you know how to reengineer your development, QA and production processes so you can fully leverage the flexibility and agility of the cloud? Bernard Golden explains how.