DevOps as an enabler for efficient testing in large-scale agile projects: A case study from the Autosys Project at the Norwegian Public Roads Administration

About this Publication

Agile development methods have become the standard in most large IT development projects, and DevOps is on the way. Delivering new functionality to end-users every sprint is still a major goal for most large-scale agile development projects, and the benefits of shift-left testing, continuous integration and DevOps thinking make the testing more efficient and comprehensive each sprint. This enables the development teams to deliver working software ready for system integration testing earlier than before. However, it also challenges the teams to re-design their testing process, as the introduction of DevOps puts increased focus on “doing the right testing at the right time” and on having the right skills and collaboration across the organization. This report describes how a large-scale agile project with 5 development teams benefited from DevOps by improving and streamlining its testing process. The report also provides recommendations on how to organize testing in present and future large-scale DevOps/agile development projects.


1.     Introduction

Software is created and released faster than ever, and the need for efficiency and integration between development and operations has become even more important, giving the DevOps movement traction and visibility [1]. DevOps stresses improved communication, collaboration and interdependence between software development and IT operations [2] to facilitate a reduction in cycle time; with multiple independent development teams, this results in a need for continuous system integration and delivery methods [3]. To achieve a high level of efficiency and integration, both sides need to meet halfway regarding skills and collaboration, and then work together once knowledge and responsibility have been redistributed [3]. DevOps focuses on areas that agile has not been able to fulfill completely, especially collaboration with stakeholders, faster deployment of running systems and responding to changes. Software development organizations face the challenge of rapid and continuous adaptation to unpredictable changes to achieve their business needs, in an ever more dynamic and competitive market. Testing is one of the areas that remained quite “waterfall-ish” in many agile software teams, but the DevOps mindset is shaping it into a more thought-through process.

DevOps is an abstract concept that must be instantiated in a specific organizational context rather than applied as a fixed set of technologies. Based on the literature, França et al. [4] identified a set of principles that support the practices characterizing DevOps:


  • Social Aspects: despite all the technical principles, many DevOps characteristics are associated with social aspects of the relationship between the software development and operations teams. This is one of the reasons DevOps is often seen as a cultural shift in the organization. In testing, the roles and responsibilities of testers have changed, with focus put on distributing the responsibility for testing across the whole team. Moreover, testers are encouraged to collaborate more with other stakeholders such as operations and business.
  • Automation: one of the core principles of DevOps due to the benefits it can bring. Manual and repetitive tasks can often be automated to reduce unnecessary effort and improve software delivery. Hence, automation improves not only delivery speed, but also infrastructure consistency, team productivity and repeatability of tasks. Testing is brought to another level of automation, but the need for manual tests still exists, and these should follow a more risk-based approach than before.
  • Quality Assurance: assuring high quality of both the development and operations processes as well as the products. This principle supports the implementation of DevOps practices since it links different stakeholders (development, operations, support and customers) to perform activities in an efficient and reliable way, and ensures that products and services meet established quality standards.
  • Leanness: some DevOps practices are based on Lean Thinking principles [5]. DevOps requires a lean process, as it aims to ensure a continuous flow of developing and delivering software regularly, in small and incremental changes. It should therefore foster constant and fast feedback among development, testing and operations, as well as with customers.
  • Sharing: information and knowledge are disseminated among individuals to promote the exchange of personal learning and project information. In this sense, individuals should spread relevant information, for instance on the implementation and execution of practices recommended in the context of DevOps. Transparency in test planning, execution and results is a must in this process and should involve not only the development team but also operations and customers.
  • Measurement: an important principle, often instantiated by collecting meaningful metrics to support decision-making in the software development and operations lifecycle. In testing, defect analysis and coverage measurement become important sources of feedback on the whole development process.


In this experience report we describe Accenture’s experience from one of the biggest ongoing agile development projects in Norway to date. The project established its own DevOps platform and made extensive use of technical DevOps concepts such as automated set-up of environments and continuous integration and deployment of applications for system and automated testing, as well as active stakeholder participation and involvement of operations personnel, to streamline and enable efficient development and testing. An important success factor was to design and implement these concepts at the very beginning of the project. It was also crucial to have efficient cross-team testing and active involvement of the business and operations each sprint, as this was a large and complex project with both functional and technical dependencies across the Scrum teams and other vendors.

We describe our experience using DevOps concepts as a supportive and integrated part of the agile development process, and how this affected and challenged the way the teams approached testing. We start by describing the project and our approach to agile and DevOps. Then, we discuss the impact on the test approach and finally summarize the main lessons learned. The report focuses on two main aspects of DevOps testing: how responsibilities and collaboration within the teams were affected (social aspects, sharing and leanness), and how we approached the problem of doing the right testing at the right time in such a large project. These experiences can be of interest to test managers, testers, project managers, operations resources and other participants in large-scale agile development projects planning to move towards DevOps.

2.     The Autosys project

The Autosys Project is managed by The Norwegian Public Roads Administration (Statens vegvesen), a Norwegian government department responsible for the construction and maintenance of highways and county roads, including the supervision and administration of registered vehicles and certifications.

The project’s main objective is to replace the legacy Autosys automotive register with the new Autosys vehicle system, “Autosys kjøretøy”. The new system will support the formal approval, by law, of vehicles, registration and change of ownership, reseller solutions, and distribution of information to other public administrations and selected partners. The system will have extensive focus on self-service solutions. As all approval and registration of vehicles flow through this system-of-systems, it is considered critical to the Norwegian society. With a high focus on self-services, all people and businesses dealing with, or owning a car in Norway, are potential users. The system is integrated with the Police, Tax administration, Customs, and private organizations within the insurance and automotive industry. The project has fixed deadlines, and cost, scope and quality are critical factors.

The planning phase started in the summer of 2014 and the project was initiated together with Accenture as the selected IT partner in August 2016. The project is scheduled to end in 2021. Accenture is responsible for delivering the core part of the system. Approximately 100 people are directly involved in the development of the solutions on a daily basis.

2.1       Autosys delivery model and approach to agile and DevOps development and testing

The project uses a PRINCE2-based model and the development phase uses an agile/Scrum development model (PS2000 SOL). The following Scrum practices are used in full or in part by the project: unit and integration testing, continuous integration, weekly and daily stand-up meetings, incremental design, daily deployment, test-driven development, test automation, customer-side involvement, planning poker, negotiated scope contract, retrospectives, epics and user stories, fixed cycles and team continuity.

A “big bang” transition to the new system was considered undesirable, hence the delivery plan prepared for agile development with a staged phase-out of the old system. The planned replacement process included seven main deliverables (always two in parallel) and two deliverables deployed to production per year. Maintenance releases are deployed when necessary. The development phase consists of 7-11 sprints of three weeks each. The sprints are followed by a three-week system integration test, and a 6-8-week user acceptance test performed by the customer (Statens vegvesen).

The development and test set-up is illustrated in Figure 1. Our Scrum teams included the following roles: Scrum master, developer, expert tester, technical architect and two resources from the business. The project aimed at having one developer per Scrum team with extensive operations skills. The customer had a dedicated team that was responsible for the specification of epics and user stories. Accenture was co-responsible for detailed design and responsible for development, unit and integration testing, sprint testing, and continuous and final system testing. Detailed design included specification of functional and non-functional (UX, performance, operations and security) acceptance criteria and test design per acceptance criterion, and was performed by a team with personnel from both Accenture and the customer. End-users were involved during early verification of UX elements. Sprint acceptance testing and user acceptance testing were performed by the customer.

Figure 1. Development and test organization

The test manager from Accenture was in charge of all test and quality assurance activities performed by the vendor, including strategy and plans. This test manager was the main contact for the customer’s test manager and responsible for involving the business and the customer’s operations team.

Each Scrum team had one skilled tester. To reduce the need for manual testing, the project put a high focus on test automation, and the organizational set-up aimed at the tester covering a broad set of test activities. The tester was scheduled to work 50% of the time on Scrum team activities during the sprints (e.g. manual sprint testing, identification and/or development of test data, specification of detailed expected results), and 50% on test activities performed outside the Scrum teams. Test activities outside the Scrum team ran in parallel with the sprints and included test design per user story and continuous system testing (functional and non-functional).

2.2       Autosys DevOps Platform (ADOP)

To support efficient software development with parallel deliverables, the project established its own DevOps platform. The platform would support continuous delivery and integration with frequent rollouts of software versions to multiple test environments. The work was performed by the Autosys project organization, including resources from the customer’s operations team. Highlights from the DevOps technical set-up include:


  • Fully automated setup of environments on Scrum team servers for integration and system testing;
  • Fully automated deployment of applications for integration and system testing;
  • Automated integration tests with JUnit and JaCoCo, run continuously by Jenkins with results reported to SonarQube;
  • Light-weight architecture with micro-services including Spring Boot, Docker, Puppet and Java.
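A continuous integration flow of this kind can be sketched as a declarative Jenkins pipeline. The sketch below is an illustrative assumption, not the project's actual configuration: the stage names, Maven goals and deployment script are hypothetical stand-ins for whatever the real build used.

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package -DskipTests'   // compile and package the application
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B verify'                      // JUnit tests with JaCoCo coverage
            }
        }
        stage('Analyze') {
            steps {
                sh 'mvn -B sonar:sonar'                 // publish test and coverage results to SonarQube
            }
        }
        stage('Deploy to team test environment') {
            steps {
                sh './deploy-team-env.sh'               // hypothetical deployment script
            }
        }
    }
}
```

Running such a pipeline on every commit is what gives each Scrum team a freshly built, tested and analyzed version in its own environment.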


Market-leading tools were used in development and test: Atlassian JIRA for delivery management, Atlassian Bitbucket (Git) for version control, and JUnit and Selenium for automation of unit, integration and system testing. Jenkins, Maven, Gulp and NPM were used for building, deployment and scheduling of automated tasks. SonarQube was used for automated static code analysis and Docker for a production-like set-up in test. Splunk Enterprise was used for operational and business monitoring and logging.

The provisioning of the development environment was automated and standardized using Vagrant, Puppet and VirtualBox. This standardized set-up would also ensure efficient onboarding and socialization of new developers and consistent, standardized tooling across the Scrum teams.

The project’s operations team (which included resources from the customer) was responsible for set-up and maintenance of the development and test environments, and for deployment of code to the system integration test environments. The team also functioned as the collaboration hub towards the customer’s main operations department. During the project’s ramp-up phase, selected developers from each of the Scrum teams worked closely together with the operations team to establish the DevOps platform, methods and processes. The developers presented their needs and ideas for an efficient development process, and the team would then develop and configure possible solutions. This was also an important activity in discussing the level of automation and agreeing on the level of access all developers should have in the test environments. The team had regular meetings and demos with the customer’s operations team to demonstrate their ideas and to discuss how the new mechanisms and processes could be used. By having developers as “lightweight DevOps champions”, the Scrum teams also became more self-contained entities, managing, maintaining and monitoring their own development and test environments.

3.     DevOps testing at Autosys – approach, challenges and results

In this section, we will describe our experience with DevOps as an enabler for efficient cross-team testing. The section will focus on two main aspects: How responsibilities and collaboration within the team were affected (social aspects, sharing and leanness), and how timing of the testing in our agile development approach (quality assurance, automation and measurement) needed to be adjusted.

3.1       Responsibilities and collaboration

Our approach to DevOps introduced a simplified set-up of environments, deployment of applications and monitoring of logs for defect analysis. This enabled the developers to perform more of the typical tester work as part of their development. Regarding shift-left testing, we experienced this as very positive, as more blocking defects and unintended application behavior were identified and fixed by the developers early during development. However, it forced us to re-think the tester role and responsibilities: the developers became more independent in terms of doing testing, but the tester was still the test subject-matter expert and one of the key contributors to transparency and openness towards the customer during the sprints.

The Autosys project was staffed with highly skilled test personnel, and the ramp-up phase put a hybrid “agile-waterfall-ish” test approach into action. However, by the second sprint we experienced several instances of redundant testing within the teams. While the testers and team resources from the customer detailed the test design per acceptance criterion and later executed manual sprint testing, the developers had already developed unit and integration tests covering many of the same acceptance criteria. In these cases, the testers ended up spending much of their available time doing unnecessary manual sprint testing. This resulted in a growing backlog of test design and continuous system test work, and the goal for the tester to split their time 50/50 between team and other test activities could not be fulfilled.

We experienced that the DevOps platform was mostly used by the developers and the project’s operations team, while the testers at first wanted to test complete functionality in their own test environments. The test manager then decided to bring the testers closer to the developers, with the goal of reducing redundancy but also giving the testers more responsibility for bridging the gap to the business. Dialogue with the customer and the business had previously been the responsibility of the test manager, but was now transferred to the Scrum team testers. The tester participated in sprint planning and contributed to identifying dependencies between development tasks, user stories and teams. The tester talked the developers through the test design, and together they decided on a detailed test approach per sprint. The Scrum team agreed on the level of automation, which parts of the test design should be implemented as low-level tests, and which combinations of test data needed to be identified or created to facilitate sufficient testing. The tester also became responsible for sharing information and for involving both the business and technical/operations personnel. In a large project with more developers than testers, such a process was time consuming for the single team tester. Nevertheless, the testers slowly gained more insight into unit and integration testing, became familiar with the development methods, learned how to use more technical test tools such as SoapUI and Postman, learned to use the application logs, and used this knowledge to scope a more effective and less redundant sprint test. By thinking like a developer and testing smaller pieces of functionality/code in iterations, while still keeping the mindset of a tester (using different test techniques to uncover all test cases), the testers started testing features rather than complete solutions.
The testers also started using the DevOps platform themselves enabling a shift to the approach: shorter iterations, fast deployment, fail fast, adjust and re-test. The sprint test progress graphs demonstrated that the testing started to be executed earlier in the sprint, and the early collaborative testing of features building up to the complete sprint deliverable proved successful in delivering a stable application every sprint (quality assurance).
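The team agreement on test-data combinations can be illustrated with a small sketch: enumerating the cartesian product of a few attributes quickly shows how many cases full coverage would require, and therefore which combinations are worth preparing or automating. The attribute names and values below are hypothetical and not taken from the Autosys domain model.

```java
import java.util.ArrayList;
import java.util.List;

public class TestDataCombinations {

    // Builds the cartesian product of the given attribute value lists:
    // each resulting row holds one value per attribute.
    static List<List<String>> combinations(List<List<String>> attributes) {
        List<List<String>> result = new ArrayList<>();
        result.add(new ArrayList<>());            // start with one empty row
        for (List<String> values : attributes) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> partial : result) {
                for (String value : values) {
                    List<String> row = new ArrayList<>(partial);
                    row.add(value);
                    next.add(row);
                }
            }
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical attributes for a vehicle-registration test scenario
        List<List<String>> attributes = List.of(
                List.of("car", "trailer", "motorcycle"),  // vehicle type
                List.of("person", "business"),            // owner type
                List.of("new", "used", "imported"));      // registration status

        List<List<String>> rows = combinations(attributes);
        System.out.println(rows.size() + " test data rows"); // 3 * 2 * 3 = 18
        System.out.println(rows.get(0));                     // [car, person, new]
    }
}
```

Even three small attributes yield 18 rows, which is why the teams had to decide per sprint which combinations to cover manually, which to automate, and which to drop on risk grounds.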

By exposing both testers and the business to the “developer ways of working”, we developed a deeper understanding of, and competency in, the technical aspects of the application in general as well as the DevOps mindset. By investigating logs, testing without a UI and participating in monitoring of the application, the testers became more complete in their skill set. They became more confident in their role and demonstrated higher delivery capacity and quality in their assigned work. With this skill set, the testers also became more active in their dialogue with operations, not only the business. On the other hand, the developers became more fluent in the “testing language”, making it easier for them to support the tester in doing traditional tester work (mutual learning). With a broader skill set and a better understanding of each other’s contribution, the teams became more proactive and motivated by collaboration, resulting in more decisions and clarifications being made on the fly (leanness).

The testers were gradually able to free up more time, and the goal of the tester working 50% of the time on test activities within the team and 50% on test activities outside the team could now be realized. With more time available we saw more comprehensive up-front test designs and a reduced system test backlog. With high-quality test designs, it became easier for more personnel to execute the testing, and we experienced that the tester in the team shifted from a traditional tester role towards a team test advisor role.

The DevOps platform and mindset was used as a trigger and enabler for many of these results. The platform was frequently used by all Scrum teams during sprint work, as demonstrated by the high number of test environments created on the team servers by developers/testers per sprint:

Figure 2. Number of test environments created on the team servers / Scrum team test environments

On the other hand, we experienced some negative effects:

  • With regards to status reporting, we had some trouble reporting whether a user story was in development or in test. Our simple solution was not to report a story as being in either development OR test, but to report development AND test combined.
  • To the test manager and testers, it was very inspiring to see a Scrum team where the entire team supported and helped with testing activities. However, not all developers were necessarily motivated by performing manual testing. The teams started to discuss the balance between development and testing during retrospectives, and despite facilitating a collaborative culture it became very clear that we still needed a clear view of the different responsibilities of the team roles.
  • Another issue was that the testers, because of a more shared and fragmented test execution, had a subjective perception of “not knowing the quality of the complete solution that well”. The test manager’s initial response was that this would be corrected when testing the complete system towards the end of the delivery. However, the final testing did not solve the issue, and we decided to work more closely with the team of testers on discussing and improving “doing the right testing at the right time”.

3.2       Types of Testing: “Doing the right testing at the right time”

During development of user stories, the following testing was executed: unit and integration testing, system testing of the sprint scope, and a system integration test focusing on end-to-end functionality. The project did high-level test strategy and planning before development started, and detailed test planning and test design per user story as part of the sprints. The high-level strategy and planning described the test process to be used by the Scrum teams and the customer for functional and non-functional testing, how test would collaborate and interact with the other project disciplines, when the different functional areas (each consisting of several user stories) should be testable, an overview of all integrated systems, and which tools would be used. The detailed test planning and design described the test approach per user story, including test conditions per acceptance criterion, the test data that needed to be identified or constructed to reach sufficient test coverage, which test conditions would be tested manually or automated, and whether the testing should be executed using stubs or live interfaces.

Supported by this test approach, the Scrum teams established a good record of delivering user stories as designed and with few outstanding defects. However, when moving to the customer’s acceptance test, we found that our test process had focused too much on testing the defined acceptance criteria and too little on identifying missed functionality/requirements. The fail-fast approach dealt effectively with defects related to the acceptance criteria, but we needed to make our test process better at identifying missing requirements and functionality. We also started to see this issue in relation to the testers’ feeling of “not knowing” the quality of the complete solution.

With more time available, we included scheduled exploratory testing sessions in our test process. These 2-hour sessions were run every Monday in the third week of the sprint. The activity gave high-risk areas extensive focus, and the testing was done in pairs. A product risk assessment was done by the customer prior to the development phase, and all testing done in the sprints provided metrics on defect-prone parts of the system through input documented on the defects registered in Jira (measurement). To motivate the project members to participate in these sessions, we awarded prizes to the pairs identifying the most defects, the most defect-prone areas, the most critical defects, etc.

End-users were involved in the design phase but not in early testing. To improve the project’s ability to identify missing functionality, we moved on to include actual end-users in some of the exploratory test sessions. Non-functional testing (performance, operations, security) was addressed through a similar cooperative process with the customer’s operations personnel and the project’s internal operations team.

In Picture 1, Scrum team testers are doing operations testing together with the customer’s operations team.

To support manual system testing and our ability to do continuous regression testing, we developed an automated system test suite. This test suite focused on testing core functionality in end-to-end processes implemented in earlier sprints. In addition, we used feedback from team retrospectives when scoping the tests. This feedback helped us learn about vaguely designed areas, complex areas and other issues that should be addressed by the automated system tests. The automated test suite was executed with each commit to the development branch and as part of the deployment process. To achieve short execution times and reduce the risk of brittle tests, the automated tests were mainly developed at API level (REST and SOAP tests), with a small subset testing the full stack including the UX (Selenium).
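A minimal sketch of such an API-level regression check, written against a stub rather than a live interface, could look as follows. The vehicle-lookup endpoint, the JSON payload and the class names are hypothetical illustrations; the real suite ran REST and SOAP tests against the actual services. The sketch uses the JDK's built-in HTTP server as the stub and `java.net.http.HttpClient` as the test client.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class VehicleApiRegressionTest {

    // Starts a stub of the (hypothetical) vehicle-lookup service on a free port.
    static HttpServer startStub() throws Exception {
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/api/vehicles/", exchange -> {
            byte[] body = "{\"plate\":\"AB12345\",\"status\":\"REGISTERED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        stub.start();
        return stub;
    }

    // The API-level check: call the endpoint and verify status code and payload.
    static boolean vehicleIsRegistered(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + port + "/api/vehicles/AB12345"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200
                && response.body().contains("\"status\":\"REGISTERED\"");
    }

    public static void main(String[] args) throws Exception {
        HttpServer stub = startStub();
        try {
            System.out.println(vehicleIsRegistered(stub.getAddress().getPort())
                    ? "regression check passed" : "regression check FAILED");
        } finally {
            stub.stop(0);
        }
    }
}
```

Tests of this shape run in milliseconds and need no UI, which is what makes it practical to execute the whole suite on every commit.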

With more use of test automation at system level, the project had a stable and testable version of the applications early in the sprint. We experienced that most of the blocking defects were identified and fixed by the developers, making the manual system testing easier and faster to run. The automated system testing was integrated with the DevOps platform and functioned as a “safety net” aimed at detecting regression defects directly after each code commit. With this safety net, the developers and testers experienced a lower risk of introducing major defects when frequently merging code, deploying and testing it (at application level) in their team test environments. Because we (over time) experienced a stable and testable application when moving to our formal system integration test environment, we also decided to have daily builds and deploys to this environment.

When we were able to do more exploratory testing, and with the inclusion of real end-users, we started to identify more missing functionality and requirements before moving to later test phases. The missing functionality was evaluated by the product owner and included in the remaining backlog of user stories. Having more time available to test also made the testers more confident in the overall quality of the system, as they knew more about which parts of the system worked well and which parts had issues.

The sprints usually included some non-functional testing, but the testing was mostly focused on functionality. Despite having the roles and skilled personnel available both within and outside the teams, performance, security and operations testing only received growing focus towards the end of the development phase. This resulted in performance issues related to architecture, infrastructure and the application itself being identified late. The exploratory and collaborative approach to non-functional testing was extremely valuable, as we got the testing done and could agree on the scope, perform the testing, evaluate the test results and, in many cases, also analyze the defects in the same session. For the coming releases we are considering how non-functional requirements may be better included within the user stories; in this way we hope to facilitate early non-functional testing, as all acceptance criteria must be tested and documented upon sprint delivery. We are also planning to introduce both performance and security tests as part of the automated test suite, and to use the results from these tests to create better visibility of potential risks related to non-functional quality aspects.

The scheduled exploratory testing made the testing social and fun, and brought energy, motivation and increased quality to the testing and to the applications. The Scrum teams started to use this technique proactively, so the scheduled sessions identified fewer defects and gradually lost some of their contribution. Additionally, once a healthy cross-team collaboration and the desired transparency were established, the arena these sessions created for bringing the different disciplines together also felt less valuable. To revitalize these sessions, we are currently considering how to use more of this approach in the final system testing while still meeting the formal requirements regarding documentation of test cases and results. To have more input available ahead of these sessions, we are considering how to use “technical metrics” collected from different sources (Git, Splunk and Jira) to automatically generate charts giving a detailed view of which parts of the code are often affected by changes or defect fixing.

4.     Key learning points and recommendations

The Autosys project defined DevOps as a technical set of tools and mechanisms to accelerate development and testing, and as a catalyst for cross-team cooperation, sharing and learning. From our experience we took away 6 key learnings.

The first key learning point was that DevOps is possible also in large-scale agile projects. Having a DevOps mindset positively affected the evolution of our testing process, from a traditional agile-waterfall hybrid to a more integrated and iterative process. Our DevOps platform enabled us to do automated and manual end-to-end testing earlier than before, and improved our ability to deliver a stable and thoroughly tested application to the customer in every sprint.

The second key learning point is that it is highly important to embrace all aspects of DevOps, but the effects of the social aspects only started to appear after the technical platform was established and operative. This is because the benefits become more visible and easier to understand for non-technical personnel once the different mechanisms are put into action. For further projects using DevOps, we recommend that the set-up is done as part of the ramp-up phase and that the responsible team consists of experts on tooling and methods as well as the developers and testers who will use the platform. To have the tools configured as they will be used in production, and to facilitate collaboration and sharing of information between the project and the customer organization, the customer’s operations team definitely needs to be involved. In our case, the test manager and testers should have been more “open-minded, embracing and helpful” with regards to DevOps at the beginning of the project. However, once we better understood the DevOps concept, it became much easier to see what should be adjusted and changed in our testing strategy.

The third key learning point was that the DevOps platform and mindset enabled us to do comprehensive and efficient testing each sprint (quality assurance), resulting in the delivery of stable versions of the applications every sprint and no critical or major application defects identified after going to production. With regard to testing, we managed to establish a very good collaborative culture within the Scrum teams. However, we needed to reconsider our initial roles and responsibilities when the responsibility for testing became more shared than before. More available time also allowed us to manage the different tasks and responsibilities of the tester in a better way. The test designs became more complete, we had less redundant testing, and we managed to run a complete system integration test in parallel with the sprints without additional staffing. When developing a test strategy for similar future projects, we recommend approaching system testing in conjunction with low-level testing holistically rather than through a phase-contained strategy. In the beginning we tried to enforce a traditional hybrid test approach, but when using DevOps, it became very clear to us that the real benefits to the testing arrive when there is natural collaboration between the disciplines. We still have hybrid elements in our test approach, but a high level of collaboration combined with iterative testing enabled us to develop better test designs and made the continuous sprint test planning more important than the up-front test planning.

The fourth key learning point is that we recommend using exploratory testing sessions as much as possible and treating these sessions as an active source of information on product quality and for planning the test effort. We recommend using risk assessments and information on defect-prone areas collected through metrics when managing the exploratory sessions (measurement). The project ramp-up phase should include training for both testers and developers so that a cross-discipline skill set is established. We also recommend that you continue to facilitate training and sharing of information and knowledge through dedicated sessions each sprint (e.g. lunch meetings).
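One simple way to turn such metrics into input for session planning, sketched here under the assumption that change counts (e.g. from Git) and open-defect counts (e.g. from Jira) are already available per component, is to rank components by a combined risk score. The function name and weights are illustrative, not part of the project's actual process.

```python
def rank_session_candidates(change_counts: dict, defect_counts: dict,
                            change_weight: float = 1.0, defect_weight: float = 2.0):
    """Rank components for exploratory testing by a weighted risk score.

    Components with many recent changes and many known defects rise to the top,
    suggesting where an exploratory session is most likely to pay off.
    """
    components = set(change_counts) | set(defect_counts)
    scores = {
        c: change_weight * change_counts.get(c, 0) + defect_weight * defect_counts.get(c, 0)
        for c in components
    }
    # Highest-risk components first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

The ranked list can then serve as a starting point for session charters, with the weights adjusted to reflect the team's own risk assessment.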

The fifth key learning point was that when you introduce DevOps to large-scale projects, you should have light-weight operations competency in each Scrum team. This strategy made our teams independent and effective in managing their own test environments with continuous deployments, and they used this competency when continually improving the team’s processes with regard to development and testing. We also experienced the social aspects and the leanness introduced by DevOps as an enabler for more efficient development, as the teams did not have to coordinate and route all environmental tasks through operations. The Scrum team personnel in these roles were also very important in fostering constant and fast feedback among the different disciplines in the project. By having several DevOps champions, the customer’s operations personnel interacted with a larger group of project people, making it easier to discuss issues and solutions, and to get feedback and share competency. We experienced that the role, and the competency that came with it, had a positive effect on the testing, as the Scrum teams became more self-contained in doing non-functional testing. We recommend that the role be established from the start of ongoing and future projects, and formalized within the team. To facilitate further evolution and improvement, this discipline should be exposed to retrospectives and demos like traditional agile teams.

The sixth key learning point was that you need to facilitate a work environment that makes people with different roles collaborate (sharing, social aspects, leanness). When using DevOps, the teams need to deal with more roles and skills. In conjunction with an increased level of self-containment, while still working on interdependent tasks, the responsibilities of each team increase in terms of expectations on the quality and completeness of the deliverables every sprint. With strict deadlines and requirements that need to be fulfilled every sprint, it is easy to forget the importance of communication and sharing of information. The Autosys project used scheduled exploratory and collaborative testing sessions, training and mutual learning, in addition to regular demos and stand-up meetings, as methods to facilitate cooperation and communication across disciplines. We found this to be very successful with regard to the testing in general and very important when bringing more non-functional aspects into the test scope.

5.     Acknowledgements

We are grateful to Fabio Kon for helpful comments and guidance on this experience report. We also thank Statens vegvesen and Accenture for all their support during the process of writing the report. Finally, we would like to thank the Research Council of Norway for the research grants on the EMERGE project 231679/F20.


References

  • M. Sacks, Pro Website Development and Operations: Streamlining DevOps for Large-scale Websites. Berkeley, USA: Apress, 2012, 124 p.
  • M. Hüttermann, “Quality and testing,” in DevOps for Developers. Apress, 2012, pp. 51–64.
  • J. Humble and D. Farley, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Boston, USA: Pearson Education, 2010, 512 p.
  • B. B. N. de França, H. Jeronimo Jr. and G. H. Travassos, “Characterizing DevOps by Hearing Multiple Voices,” SBES 2016, pp. 53–62.
  • J. P. Womack and D. T. Jones, Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon and Schuster, 2010.

About the Author

No bio currently available.