by Esther Barthel, CTP, Lead Mentor Women in Tech
As IT departments change into service-oriented support for business processes and move their IT infrastructures into the cloud, traditional operations processes are no longer sufficient. IT is looking for a new way of working and is adopting DevOps as the answer.
So what is DevOps all about, and how does it change the way we operate our IT infrastructures?
DevOps is not just a term for a new way of working or a newly created job function. Wikipedia defines it as a set of practices that combines software development (Dev) and IT operations (Ops), aiming to shorten the development lifecycle and deliver high software quality continuously.
Many DevOps articles and presentations I googled use the acronym CAMS (culture, automation, measurement and sharing) to describe the DevOps core values and to express that DevOps is much more than a method.
CAMS refers to a shift in mindset and organizational culture that breaks down barriers between teams and gets developers and operators to collaborate. Thus, it enables the automation of IT infrastructure deployment and management by sharing knowledge and tools from both agile development projects and daily operational tasks.
So why is culture so important for DevOps?
Traditionally, development and operations seemed to sit on opposite sides of the fence: developers keep changing application code to improve usability, while operations' main goal is to keep the production environment as stable and consistent as possible. This constant stream of application updates is a pain for operations, which has developed a natural resistance to change.
DevOps recognizes the gap between the two “teams” and strives to close it by finding new ways of working that combine agility with stability and quality. This requires knowledge and expertise from both developers and operators.
Getting development and operations to collaborate is not as simple as putting specialists from both teams into a single team. It also requires aligning business and operational processes to support the agile development and implementation of IT automation.
Even though automation is key, the supporting processes need to be in place as well.
Each component within an IT Infrastructure follows a lifecycle that needs to be managed to ensure it is provisioned, configured, monitored and managed at the right time. I like the way JumpCloud has visualized a server lifecycle with the following image.
If we want to automate the operational tasks of a server lifecycle, we’ll need to identify the actions performed by an operator, whether it consists of manual activities, running automated scripts or processing information. Each action is part of one or more operational processes to support the lifecycle. By translating the operational processes into workflows, you create repeatable patterns of activities, which can be transformed to automated tasks.
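As a minimal Python sketch of that idea, a process captured this way becomes an ordered list of repeatable steps. The step names (provision_vm, configure_os, register_monitoring) are hypothetical placeholders, not from any specific tool:

```python
# Minimal sketch: an operational process expressed as an ordered,
# repeatable workflow. The step names are hypothetical placeholders.

def provision_vm(server):
    server["state"] = "provisioned"
    return server

def configure_os(server):
    server["configured"] = True
    return server

def register_monitoring(server):
    server["monitored"] = True
    return server

# The workflow is simply the sequence of actions an operator would
# otherwise perform manually, captured as a repeatable pattern.
WORKFLOW = [provision_vm, configure_os, register_monitoring]

def run_workflow(server, steps=WORKFLOW):
    for step in steps:
        server = step(server)
    return server

server = run_workflow({"name": "web01"})
```

Because the workflow is just data (a list of steps), it can be extended, reordered or reused without rewriting the individual tasks.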
Workflows provide us with a visual overview of the sequence in which input is provided, actions are performed and output is generated for an operations process. They also serve as a blueprint for the orchestration tools used to coordinate and manage our automated processes.
I use flowcharts as a visual aid to interview administrators and get a clear picture of the daily tasks they perform. This way, the administrators become more aware of their actions as we translate high-level processes into detailed tasks and I get to identify the tasks that can be easily automated and will improve the quality and consistency of the output.
Based upon the flowchart, I can build the workflow in an orchestration tool and determine the required input, output and automation tasks to build. This also lets us transform manual work into an automated workflow gradually: start with a single simple task and add more tasks with each new release.
To allow operations to adjust to the automated workflows, it is very important to have a mutual agreement on the release cadence for workflow updates. Simply put: you'll need release management to ensure developers can keep improving the code and workflows, while operations has enough time to adjust to the latest version and its impact on the infrastructure.
In addition to release management, versioning is just as important. Developers can be working on multiple versions of the automated workflows, and the workflow used to provision a server in production can already be outdated by the time an updated workflow is used to monitor that same server. Knowing which version (and therefore which functionality) is used for each infrastructure component becomes more important as more and more tasks get automated, and configurations need to be predictable to be monitored and managed.
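A small sketch of that idea, with illustrative component names and version numbers: record the workflow version used for each component, so configurations that lag behind the current release are easy to spot.

```python
# Sketch: tracking which workflow version was used for each
# infrastructure component. Names and versions are illustrative.

CURRENT_VERSION = "1.2.0"

inventory = {
    "web01": {"provisioned_with": "1.0.0"},
    "web02": {"provisioned_with": "1.2.0"},
}

def outdated(inventory, current=CURRENT_VERSION):
    """Return the components whose workflow version lags the current release."""
    return [name for name, info in inventory.items()
            if info["provisioned_with"] != current]

stale = outdated(inventory)  # components needing an updated workflow run
```

In practice this bookkeeping would live in a CMDB or in version-controlled configuration data rather than in a script, but the principle is the same: every component carries the version of the workflow that produced it.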
For me the most fun part of DevOps is automation: transforming workflows into automated tasks that can be orchestrated into provisioning, configuring, monitoring and managing infrastructure components, like network appliances, hypervisors, servers, services and applications. This is where the magic happens and we switch from clicking on different buttons and settings in a GUI (graphical user interface) to using CLIs (command-line interfaces) and APIs (application programming interfaces).
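To illustrate that shift, here is a small Python sketch: the server-provisioning action an admin would otherwise click through in a GUI, expressed as an API request. The endpoint and fields are hypothetical, and the request is only constructed here, not sent.

```python
import json

# Sketch: a GUI action ("create a new server") expressed as an API call.
# The endpoint URL and request fields are hypothetical examples.

def build_provision_request(name, cpu, memory_gb):
    return {
        "method": "POST",
        "url": "https://hypervisor.example.com/api/v1/servers",  # hypothetical endpoint
        "body": json.dumps({"name": name, "cpu": cpu, "memory_gb": memory_gb}),
    }

req = build_provision_request("web01", cpu=2, memory_gb=8)
```

The win over the GUI is that the same parameters produce the same request every time, which makes the action scriptable, reviewable and repeatable.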
There are a lot of well-known tools out there to support DevOps and the different lifecycle stages.
Even with all these tools, in the end it comes down to programming skills, knowledge of languages like Ruby, Python and PowerShell, and the ability to interpret metrics and data.
In order to improve the automated workflows and the quality, you’ll need insight into the configuration and performance of the infrastructure components. These insights can be gained by collecting different metrics on the performance of IT infrastructure components, automated workflows and supporting processes. Based upon the metrics, new goals for improvements can be set and optimizations can be developed for the automated workflows.
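As a simple sketch of such metrics, using illustrative sample data: the success rate and average duration of workflow runs give a baseline against which improvement goals can be set.

```python
# Sketch: basic metrics over automated workflow runs.
# The run records below are illustrative sample data.

runs = [
    {"workflow": "provision-server", "duration_s": 120, "succeeded": True},
    {"workflow": "provision-server", "duration_s": 95,  "succeeded": True},
    {"workflow": "provision-server", "duration_s": 240, "succeeded": False},
]

def summarize(runs):
    """Aggregate run records into a few improvement-tracking metrics."""
    total = len(runs)
    ok = sum(1 for r in runs if r["succeeded"])
    avg = sum(r["duration_s"] for r in runs) / total
    return {"runs": total, "success_rate": ok / total, "avg_duration_s": avg}

summary = summarize(runs)
```

Real deployments would pull these numbers from a monitoring or logging platform, but even this simple aggregation shows where an optimization (say, the failing long-running run) should be targeted.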
The key ingredient for success of DevOps is sharing. By sharing the knowledge of development and operations, sharing ideas and visions, and sharing templates, code and automated tasks, we can all learn from each other and improve not only our own skills but the skills of the team and the orchestrated and automated workflows.
And for me personally, it is not just about sharing with the DevOps team members, but even more importantly, sharing with the community.
This blog post is the first in a series on my own DevOps experience and the automation scripts I have written.
So let’s have some DevOps fun together!