DevOps is about handling change quickly and effectively: rolling out updates when they're needed, with a minimum of overhead and delay. It's not surprising that DevOps has itself changed in the past few years. New tools have come out. Teams have gained experience and adjusted the way they operate. Keeping up with the best techniques lets a business maintain its software more efficiently, fixing bugs and shipping features more quickly.
What's new with DevOps as we move into the second quarter of 2019? Let's take a look.
Security as Part of DevOps
Security needs to be part of the development cycle. It starts with coding practices that minimize the chance of introducing vulnerabilities. Doing this means making the security team an integral part of the DevOps team, creating a more comprehensive DevSecOps team. Neglecting security to speed up the creation of prototypes is a false economy.
Security experts know where to look for risks in code. They should work with developers to get the code right the first time and to avoid introducing risky code during update cycles. Security, like everything else in DevOps, becomes iterative and collaborative. Having separate teams leads to a feeling that the security people are out to frustrate the developers. Putting everyone into the development and release cycle emphasizes that everyone has the same goal.
The process starts with a risk assessment based on the product design. The team identifies points where special care is needed and puts protections in where they're needed most. As the project evolves, new concerns will arise. Making the product secure by design, rather than by retrofitting, makes it less likely that dangerous bugs will get into production.
Increased Use of Automation
It's impossible to imagine DevOps without automated processes. At a minimum, an automated suite of tests has to be part of the release cycle. But its use has often been piecemeal. Automating more of the cycle makes it flow more readily and makes the results consistent. Continuous integration/delivery (CI/CD) pipelines make sure all the necessary steps happen:
- Developers submit code and get necessary approvals.
- Automatic testing happens every time new code enters the repository.
- Features get thorough testing and receive all necessary sign-offs.
- The release code is properly tagged.
- There's no accidental cross-contamination between versions.
- Rollback happens cleanly (if it's even needed).
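The checklist above can be sketched as a minimal pipeline runner. The stage names, the shape of the change record, and the check functions are all illustrative placeholders, not the API of any real CI/CD tool:

```python
# Minimal sketch of a CI/CD pipeline runner enforcing the checklist
# above. Stage names and checks are illustrative, not a real tool's API.

def run_pipeline(stages, change):
    """Run stages in order; the first failure blocks the release."""
    for name, stage in stages:
        if not stage(change):
            print(f"{name}: FAILED - release blocked")
            return False
        print(f"{name}: passed")
    return True

def tag_release(change):
    change["tag"] = f"release-{change['version']}"  # tag the release code
    return True

stages = [
    ("approval", lambda c: c["approved"]),    # necessary approvals
    ("tests",    lambda c: c["tests_pass"]),  # automatic testing on new code
    ("sign-off", lambda c: c["signed_off"]),  # feature sign-offs
    ("tag",      tag_release),                # release code properly tagged
]

change = {"approved": True, "tests_pass": True,
          "signed_off": True, "version": "1.4.2"}
released = run_pipeline(stages, change)
```

The point of the sketch is the ordering guarantee: no step can be skipped, and a failure anywhere stops the release cleanly.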
A mix of automated and manual processes creates the risk of errors and omissions. If the development and testing are highly automated but administrators have to take all the right steps to deploy the code, that's risky. They could miss a library update or a needed configuration change. There are more tools available than ever to automate all the steps, and smart teams take advantage of them.
The ideal is to automate everything from end to end. That's where the assembly-line approach to DevOps goes beyond pipelines. An assembly line turns everything from coding through release into a single automated process. This doesn't mean using just one tool. Certain automation tools are still best for particular steps in the process. Rather, all these tools operate under the umbrella of an overall automation process. It's sometimes called a "pipeline of pipelines."
Some businesses have tried to get there with custom scripts to bring the islands of automation together. This is good as far as it goes, but it creates its own maintenance headaches. When the process or the tools change, the scripts have to change.
The assembly line is an idea that hasn't fully matured yet, but it's likely to take big steps in 2019. It's going to become one of the hottest buzzwords in DevOps, so companies will need to evaluate the tools that claim to offer it and decide which ones really deliver.
Containers and Kubernetes
Containerization has grown very popular in the DevOps world. It lets teams test and deploy code without building different versions for different environments. The test and runtime editions use the same code with different configuration files in their container. Different runtime environments likewise differ only in their configurations and supporting code.
At production time there are additional questions. Containers make it easy to deploy as many copies as necessary on multiple machines, but deploying them for best performance becomes an issue.
Setting them up properly is an important task, and that's what container orchestration tools are for. They help to optimize deployments by managing resources, monitoring load balancing, and scheduling the addition and removal of containers.
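The scheduling side of orchestration comes down to simple arithmetic applied continuously. As a sketch, this mirrors the proportional scaling rule Kubernetes' Horizontal Pod Autoscaler uses (desired = ceil(current × observed load ÷ target load)); the numbers are invented for illustration:

```python
# Sketch of the replica-scaling arithmetic an orchestrator performs,
# modeled on the Kubernetes Horizontal Pod Autoscaler's rule.
# The CPU figures below are made up for illustration.
import math

def desired_replicas(current, observed_cpu, target_cpu, max_replicas=10):
    """Scale replicas proportionally to observed vs. target load."""
    desired = math.ceil(current * observed_cpu / target_cpu)
    return max(1, min(desired, max_replicas))  # stay within bounds

# Load spikes: 4 replicas at 90% CPU against a 50% target -> scale to 8.
print(desired_replicas(4, 0.90, 0.50))  # -> 8

# Load drops: 2 replicas at 25% CPU against a 50% target -> scale to 1.
print(desired_replicas(2, 0.25, 0.50))  # -> 1
```

An orchestrator re-evaluates this kind of rule on a loop, which is why hand-managing container counts doesn't scale.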
Kubernetes has emerged as the clear leader among the available tools. Docker is nearly universal as the containerization environment. A team that understands all the nuances of using Kubernetes can get more performance out of its applications.
Microservices and FaaS
CI/CD works best when it doesn't have to deal with one big, sprawling product. Breaking an application into discrete pieces connected by APIs lets the team update each part when necessary, with little risk of unexpected effects on the other parts. Microservices are displacing the single big application as the best way to develop and deploy software. They make code reusable. A well-designed microservice will work with more than one application.
A related concept is Functions as a Service (FaaS), also known by the confusing name of "serverless." FaaS describes an infrastructure, while microservices are a software architecture. With FaaS, the code calls published functions and doesn't know or care about how they're deployed. Together, they give relatively small units of code a lot of isolation from each other. Each one can be updated, or even completely redesigned, without impacting the others.
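The FaaS contract can be sketched as nothing more than a stateless function of its input event. The event shape and handler signature below are illustrative, not any particular provider's API:

```python
# Sketch of the FaaS contract: a stateless function of its input event,
# with no knowledge of the server it runs on. The event shape and
# handler signature are illustrative, not a specific provider's API.

def handler(event):
    """A tiny, self-contained function: compute an order total."""
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"status": 200, "total": round(total, 2)}

# The platform, not the author, decides where and how this runs.
response = handler({"items": [{"price": 9.99, "qty": 2}]})
```

Because the function holds no state and knows nothing about its host, it can be redeployed or redesigned without touching anything else, which is exactly the isolation the paragraph above describes.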
AI and Machine Learning
It's become a cliché to say that AI is at the leading edge of any technology, but it offers concrete benefits in DevOps. Two big areas where it can contribute are testing and performance metrics.
Tests are usually manual creations. Someone thinks of a way the software could break and writes a test for it. This always leaves blind spots. The biggest source of bugs is always something you don't think of. AI with machine learning can identify common sources of errors, based on similar software. It can devise tests that will zero in on those potential problems.
Measuring performance is difficult because there are so many variables. Machine learning (ML) systems can gather a lot of information on a running application and zero in on the bottlenecks. They can spot trends and warn of performance issues on the horizon. This lets the team make adjustments more quickly, with less wasted effort. If a new release causes an unexpected change in performance, an ML system can spot it before users complain and identify its source.
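A stripped-down version of that trend-spotting is flagging post-release metric samples that fall far outside the baseline distribution. Real ML systems model many metrics at once; the latency numbers here are invented, and a three-sigma threshold is just one common convention:

```python
# Sketch of trend-based performance monitoring: flag latency samples
# from a new release that sit more than three standard deviations
# above the baseline. The numbers are invented for illustration.
from statistics import mean, stdev

baseline_ms = [102, 98, 101, 99, 100, 103, 97, 100]   # pre-release latency
after_release_ms = [101, 99, 140, 102]                 # post-release latency

mu, sigma = mean(baseline_ms), stdev(baseline_ms)
threshold = mu + 3 * sigma
anomalies = [x for x in after_release_ms if x > threshold]
print(anomalies)  # -> [140]
```

Catching the 140 ms outlier immediately after a release is the kind of early warning that lets the team find the cause before users complain.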
Keeping Up with the Changes
Sometimes keeping up with everything that's happening in DevOps feels like chasing a moving target. But the new developments make it better, and businesses can't afford to fall too far behind. Adopting the latest practices pays for the effort in better collaboration and better software.
New techniques and tools keep emerging. They always will. Picking the best of them lets a business stay in the lead in what it offers its customers.
To learn more about how you can transform older applications using DevOps, download our free eBook.
To read more articles about DevOps strategy and trends, check out our DevOps archive.