There are few names in DevOps as big as Gary Gruver. He's an experienced software executive with a knack for implementing continuous release and deployment pipelines in large organizations. In fact, he literally wrote the book on the subject. His latest, Starting and Scaling DevOps in the Enterprise, is an insightful and easy-to-read guide that breaks down DevOps principles by putting them all in a context enterprises can use to gain alignment on their journey to continuous delivery.
Gruver sat down with DevOps Agenda to help define DevOps and discuss some of the challenges he sees in scaling DevOps in large enterprises.
How would you define DevOps?
Gary Gruver: I think the definition of DevOps is one of the biggest missing things out there. Everybody wants to talk about the practices -- I see one practice here, this practice there, and another practice at a different company. Everyone thinks these practices apply to them.
When focusing on DevOps practices, I find that companies have a very different definition depending on whom you talk to in the organization. The biggest challenge to getting them started on a continuous improvement journey is just getting everybody on the same page.
For me, I tend to start with Gene Kim's definition:
DevOps is less about the practices. It's about the outcomes -- the outcomes that enable you to deliver code on a more frequent basis while allowing you to maintain all aspects of quality, whether it's functionality, security, performance or anything else.
I think if you define DevOps there, that gives you a solid foundation for a discussion. The challenge, though, is different organizations require different means and measures to deliver code on a more frequent basis while allowing them to maintain quality.
Small teams can take one approach. They can independently develop, qualify and release code because they are in a small organization or even in a large organization with a loosely coupled architecture. Large organizations with tightly coupled architectures, which require coordinating the work of hundreds or thousands of people, need to take a very different approach.
You can't just copy the practices you see in different organizations and assume they will address all your issues. You really need to understand what is keeping your organization from releasing high-quality code more frequently and apply DevOps practices to address those issues. For me, it is more about applying the principles than copying the practices.
Where do you start when you're trying to transform a large organization?
Gruver: When I go into large organizations, I start by getting everyone to have a common understanding of the problem. I begin with the principles, and then review the different DevOps practices that we've developed to address different issues. I also review why the practices you use to coordinate the work of small teams can and should be different from the practices designed to address coordinating work across large teams. Finally, we review their software development and delivery processes to identify the biggest inefficiencies, so we know where to start their continuous improvement process.
To get everyone on the same page in defining DevOps and in terms of the biggest inefficiencies, we use the deployment pipeline, a concept created by David Farley and Jez Humble in Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. The deployment pipeline basically defines how organizations go from a business idea to working code that ideally addresses their problem.
In Starting and Scaling DevOps in the Enterprise, I take that basic concept, add metrics to highlight the biggest inefficiencies, and then show how the concept scales to large, tightly coupled systems.
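The idea of instrumenting a deployment pipeline with metrics to find the biggest inefficiency can be sketched in a few lines of code. This is a minimal illustration, not Gruver's actual method; the stage names, the touch/wait split and all the numbers are hypothetical, chosen only to show how elapsed time and queue time can point at a bottleneck.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One step in a deployment pipeline, from idea to working code."""
    name: str
    touch_hours: float  # time actually spent working on the item
    wait_hours: float   # time the item sits queued before/after the work

    @property
    def elapsed_hours(self) -> float:
        return self.touch_hours + self.wait_hours

# Hypothetical pipeline stages and timings, purely for illustration.
pipeline = [
    Stage("requirements", 8, 40),
    Stage("development", 40, 16),
    Stage("build & unit test", 1, 4),
    Stage("integration test", 6, 72),
    Stage("release approval", 1, 120),
]

lead_time = sum(s.elapsed_hours for s in pipeline)
bottleneck = max(pipeline, key=lambda s: s.wait_hours)

print(f"End-to-end lead time: {lead_time:.0f} hours")
print(f"Biggest queue: {bottleneck.name} ({bottleneck.wait_hours:.0f}h waiting)")
```

With numbers like these, the metric makes the point of the book's approach concrete: most of the lead time is waiting, not working, so the continuous improvement effort should start at the longest queue rather than at whichever practice happens to be fashionable.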