Almost every software company nowadays develops test automation for the software it produces. We build software and we automate its testing in parallel. We also employ a continuous integration process so that our automation runs against the code being submitted/checked into source control while we develop new code. That all works well, but at some point the automation starts failing. Tests fail because they are inconsistent, because of infrastructure failures, or simply because certain functionality in our software has a defect. What do we do then? We analyze why the tests failed and we try to correct the failures.
The automation analysis is performed manually. A survey conducted among twenty-two Quality Engineers (more details are available in the presentation linked at the bottom of this post) showed that, on average, 20% of their total time at work is dedicated to performing manual analysis. To put this in perspective: one day each week is entirely dedicated to manual analysis of results produced by the existing automation. One note that should be made here is that this survey applies to a mature software product. Products in the early stages of their life cycle do not require the same amount of time spent on automation analysis, simply because they are still relatively small.
We hypothesized at Infragistics that the software development process can be made self-sustainable by closing the continuous integration cycle. This can be achieved by automating the analysis of the results coming from the existing automated software testing infrastructure. The purpose of doing this is to allocate the Quality Engineers' time more efficiently. If the 20% of time currently spent on manual analysis is spent on additional automation instead, we end up with 25% better automation coverage (the reclaimed 20% amounts to a quarter of the remaining 80% of working time). As a result, the only things left for us to do are software development and automation coverage for the software we're developing. In order to achieve this we need to analyze the process that Quality Engineers go through when analyzing automation results. Here's a simple workflow showing that process:
This process has five states: a single entry point which we call "Failed Test", a single exit point called "Bug", and three intermediate steps, each of which can lead to the exit point once a certain condition is met. As you may have noticed, this process can easily be turned into a basic state machine with five states. The rest of this post focuses primarily on the "Analyze Point of Failure and Identify the Responsible Party" state. We essentially want to create a consistent framework that automates bug submission whenever we have failed tests.
In order to automatically submit bugs for failed tests, we need to determine how to carry the appropriate information from the failed tests over to the bugs. There are essential bug fields you need to provide regardless of the bug tracking system you use, such as the bug title, the steps to reproduce, and the expected and actual behaviors. Those can be extracted directly from the failed assert.
We still need a little more information in order to complete the bug submission: the assigned-to field, for example, and additional metadata such as area path and iteration. These we can extract from metadata associated with our tests that we otherwise use for other purposes or don't utilize at all.
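As an illustration, in MSTest this kind of metadata is commonly expressed through test attributes. Here is a minimal sketch (the owner, category, and method names below are made up for the example):

// MSTest illustration: metadata attached to a test that a reporting framework
// could read and map onto bug fields such as "Assigned To" or area path.
// The owner name, category, and test method are placeholder values.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PersonTests
{
    [TestMethod]
    [Owner("konstantin.dinev")]          // maps naturally to the bug's Assigned To field
    [TestCategory("Framework.Person")]   // could be mapped to an area path
    [Description("Person property count should stay stable.")]
    public void GetPropCount_ReturnsExpectedCount()
    {
        // ... arrange, act, assert ...
    }
}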
This should not lead you to think that automated bug submission is restricted to MSTest. Those are just examples; the actual implementation that we're going to talk about later in this post is not related to MSTest at all. It's entirely custom, in order to show that this is applicable to almost any environment we may be using. It's important to remember that:
- Concepts presented here are general and are entirely platform and framework independent.
- Implementation is specific to the platform and the testing framework that is being utilized.
For the implementation I am going to show, the following things are needed:
- Team Foundation Server (TFS) - manages the CI process
- QUnit - unit testing framework for JavaScript
- QUnit runner - automated runner for running and extracting the test results
You can download a trial of TFS from Microsoft in order to try out the demo. To set it up use the default project collection.
Then use either the TFS web access or Visual Studio to connect and set up a team project. The process for doing this is very well described in this article. Set up the project to use the Scrum 2.2 template, because the bug submission framework uses it as its base implementation. If you like, you can also set up a custom build template in order to execute the QUnit runner as part of the build. I won't show how to do that because this post is not about the CI process; a good article on how to do that can be found here.
So we set up a simple reporting class that will handle the bug submission for us. The idea behind it is to connect to our project collection and to the specific project that we've created, and then to create or modify work items.
Code Snippet
#region Private Members
private TfsTeamProjectCollection _projectCollection;
private WorkItemStore _store;
private Project _teamProject;
#endregion

#region Constructors
/// <summary>
/// Initializes a new instance of the <see cref="ReportingCore"/> class with default connection.
/// </summary>
public ReportingCore()
{
    _projectCollection = new TfsTeamProjectCollection(new Uri("http://localhost:8080/tfs"));
    _store = _projectCollection.GetService<WorkItemStore>();
    _teamProject = _store.Projects["ISTA2013"];
    TestRun = new List<GenericTestResult>();
}

/// <summary>
/// Initializes a new instance of the <see cref="ReportingCore"/> class.
/// </summary>
/// <param name="collectionUri">The TFS project collection URI.</param>
/// <param name="projectName">Name of the TFS project.</param>
public ReportingCore(Uri collectionUri, string projectName)
{
    _projectCollection = new TfsTeamProjectCollection(collectionUri);
    _store = _projectCollection.GetService<WorkItemStore>();
    _teamProject = _store.Projects[projectName];
    TestRun = new List<GenericTestResult>();
}
#endregion
Then we need to populate some collection of failed tests that we're going to analyze and submit bugs for.
Code Snippet
#region Properties
/// <summary>
/// Gets or sets the list of failed tests from the test run.
/// </summary>
/// <value>
/// The failed test run list.
/// </value>
public List<GenericTestResult> TestRun { get; set; }
#endregion
What is this GenericTestResult that we have there?
Code Snippet
/// <summary>
/// Generic test result class. Used to populate failed test results for analysis.
/// </summary>
[Serializable]
public class GenericTestResult
{
    #region Properties
    /// <summary>
    /// Gets or sets the test expected result.
    /// </summary>
    /// <value>
    /// The expected result.
    /// </value>
    public string ExpectedResult { get; set; }

    /// <summary>
    /// Gets or sets the test actual result.
    /// </summary>
    /// <value>
    /// The actual result.
    /// </value>
    public string ActualResult { get; set; }

    /// <summary>
    /// Gets or sets the test title.
    /// </summary>
    /// <value>
    /// The title.
    /// </value>
    public string Title { get; set; }

    /// <summary>
    /// Gets or sets the test owner.
    /// </summary>
    /// <value>
    /// The owner.
    /// </value>
    public string Owner { get; set; }

    /// <summary>
    /// Gets or sets the test file.
    /// </summary>
    /// <value>
    /// The file attachment.
    /// </value>
    public string FileAttachment { get; set; }

    /// <summary>
    /// Gets or sets the test description.
    /// </summary>
    /// <value>
    /// The description.
    /// </value>
    public string Description { get; set; }

    /// <summary>
    /// Gets or sets the TFS area path (used by the bug submission code below).
    /// </summary>
    public string AreaPath { get; set; }

    /// <summary>
    /// Gets or sets the TFS iteration path (used by the bug submission code below).
    /// </summary>
    public string IterationPath { get; set; }
    #endregion
}
Now we need to populate that list whenever we have a failed test in our test run.
Code Snippet
_analysis.TestRun.Add(new GenericTestResult()
{
    Title = testName + " " + message,
    ActualResult = actual,
    FileAttachment = testUrl,
    ExpectedResult = expected,
    Description = message,
    Owner = testConf.Owner
});
Finally, let's submit the results to our TFS.
Code Snippet
#region Public Methods
/// <summary>
/// Logs a bug.
/// </summary>
public void Submit()
{
    WorkItemType type = _teamProject.WorkItemTypes["Bug"];
    foreach (GenericTestResult failedTest in TestRun)
    {
        failedTest.Title = failedTest.Title.Length < 256 ? failedTest.Title : failedTest.Title.Substring(0, 255);
        WorkItemCollection bugs = _teamProject.Store.Query("SELECT [System.Id] FROM WorkItems WHERE [System.Title] = '" + failedTest.Title + "'");
        if (bugs.Count > 0)
        {
            WorkItem bug = bugs[0];
            this.HandleExistingBug(bug, failedTest);
        }
        else
        {
            WorkItem bug = new WorkItem(type);
            this.HandleNewBug(bug, failedTest);
        }
    }
}

/// <summary>
/// Handles existing bug resubmission.
/// </summary>
/// <param name="bug">The bug.</param>
/// <param name="failedTest">The failed test.</param>
public virtual void HandleExistingBug(WorkItem bug, GenericTestResult failedTest)
{
    if (bug.State == "Done" || bug.State == "Removed")
    {
        bug.Open();
        bug.State = "New";
        bug[CoreField.AssignedTo] = failedTest.Owner;
        bug["Repro Steps"] = failedTest.Description + "<br />Expected: " +
                             failedTest.ExpectedResult + "<br />Actual: " +
                             failedTest.ActualResult;
        bug.AreaPath = failedTest.AreaPath;
        bug.IterationPath = failedTest.IterationPath;
        if (!string.IsNullOrEmpty(failedTest.FileAttachment))
        {
            bug.Attachments.Clear();
            bug.Attachments.Add(new Attachment(failedTest.FileAttachment));
        }
        if (bug.IsValid())
        {
            bug.Save();
        }
        else
        {
            foreach (Field field in bug.Validate())
            {
                Console.WriteLine(field.Name + " did not validate. Field value: " + field.Value);
            }
        }
    }
}

/// <summary>
/// Handles new bug submission.
/// </summary>
/// <param name="bug">The bug.</param>
/// <param name="failedTest">The failed test.</param>
public virtual void HandleNewBug(WorkItem bug, GenericTestResult failedTest)
{
    bug.Title = failedTest.Title;
    bug[CoreField.AssignedTo] = failedTest.Owner;
    bug["Repro Steps"] = failedTest.Description + "<br />Expected: " +
                         failedTest.ExpectedResult + "<br />Actual: " +
                         failedTest.ActualResult;
    bug.AreaPath = failedTest.AreaPath;
    bug.IterationPath = failedTest.IterationPath;
    if (!string.IsNullOrEmpty(failedTest.FileAttachment))
    {
        bug.Attachments.Add(new Attachment(failedTest.FileAttachment));
    }
    if (bug.IsValid())
    {
        bug.Save();
    }
    else
    {
        foreach (Field field in bug.Validate())
        {
            Console.WriteLine(field.Name + " did not validate. Field value: " + field.Value);
        }
    }
}
#endregion
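To tie the pieces together, here is a minimal usage sketch, assuming the reporting class is called ReportingCore as in the constructor snippet above; the collection URI, project name, paths, and test values are placeholders for illustration, not output from the actual runner.

// Minimal usage sketch (not part of the original runner code): connect to a TFS
// project, record one failed test, and submit it as a bug.
// The URI, project name, area/iteration paths, and test values are placeholders.
using System;

class Program
{
    static void Main()
    {
        var reporting = new ReportingCore(new Uri("http://localhost:8080/tfs"), "ISTA2013");

        reporting.TestRun.Add(new GenericTestResult()
        {
            Title = "Test Person API The prop count returned from getPropCount has an incorrect value.",
            ExpectedResult = "5",
            ActualResult = "6",
            Description = "The prop count returned from getPropCount has an incorrect value.",
            Owner = "Konstantin Dinev",
            AreaPath = "ISTA2013",        // hypothetical area path
            IterationPath = "ISTA2013"    // hypothetical iteration path
        });

        reporting.Submit(); // creates a new bug or updates an existing one with the same title
    }
}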
The methods handling the bug submission are made virtual so that they can be overridden for different work item templates. We aren't assuming that the Scrum 2.2 template is the only thing out there; in fact, we use a custom work item template and have overridden those methods in order to handle it.
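As an illustration of that, here is a minimal sketch of what such an override might look like for a hypothetical custom template whose repro field has a different reference name; the class and field names below are made up and are not our actual custom template.

// Hypothetical override for a custom work item template.
// "Custom.ReproSteps" is an invented field reference name used purely for
// illustration; real templates expose their own field names.
using System;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

public class CustomTemplateReporting : ReportingCore
{
    public CustomTemplateReporting(Uri collectionUri, string projectName)
        : base(collectionUri, projectName)
    {
    }

    public override void HandleNewBug(WorkItem bug, GenericTestResult failedTest)
    {
        bug.Title = failedTest.Title;
        bug[CoreField.AssignedTo] = failedTest.Owner;
        bug["Custom.ReproSteps"] = failedTest.Description + "<br />Expected: " +
                                   failedTest.ExpectedResult + "<br />Actual: " +
                                   failedTest.ActualResult;

        if (bug.IsValid())
        {
            bug.Save();
        }
    }
}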
We still need some code to test, though. At Infragistics we test the Ignite UI product this way, but for the purpose of this post I have created a very simple JavaScript "framework" called framework.js and a few QUnit tests for it called test1.html. The framework.js file looks like this.
Code Snippet
function Person(first, last, age) {
    this.first = first;
    this.last = last;
    this.age = age;
    this.getPropCount = function () {
        var count = 0, prop;
        for (prop in this) {
            count++;
        }
        return count;
    }
    this.compareFirstName = function (first) {
        return this.first === first;
    }
}
The test1.html tests for this "framework" look like this.
Code Snippet
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="utf-8">
    <title>QUnit Example</title>
    <link rel="stylesheet" href="QUnit/qunit-1.12.0.css" />
    <script type="text/javascript" src="QUnit/qunit-1.12.0.js"></script>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
    <script type="text/javascript" src="../Source/framework.js"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            test("Test Person API", function () {
                var person = new Person("Konstantin", "Dinev");
                equal(person.getPropCount(), 5, "The prop count returned from getPropCount has an incorrect value.");
                equal(person.compareFirstName("Konstantin"), true, "The first name comparison method of the Person object failed.");
            });
        });
    </script>
</head>
<body>
    <div id="qunit"></div>
    <div id="qunit-fixture"></div>
</body>
</html>
So if we run the test file now the results look like this.
If we add code to framework.js that causes any of these tests to fail, we would get a bug submitted for it. An example would be to add an API method to the prototype of the Person class in framework.js; because getPropCount uses a for...in loop, which also enumerates properties inherited from the prototype, the property count would become 6 instead of the expected 5.
Code Snippet
Person.prototype.getLastName = function () {
    return this.last;
}
We now have one failing test and our TFS would reflect this by creating and submitting a bug.
The additional metadata needed to create this work item is acquired through the test runner's configuration. The configuration for this particular execution looks like this.
Code Snippet
<?xml version="1.0" encoding="utf-8" ?>
<QUnitRunnerConfiguration>
  <TestsRunnerConfiguration>
    <TestSuites>
      <TestSuite>
        <Name>ISTA 2013 Test-1</Name>
        <Owner>Konstantin Dinev</Owner>
        <FileSystemFolder>E:\ISTA2013\Tests</FileSystemFolder>
        <TestsFileName>test1.html</TestsFileName>
        <MappedServerUrl>Tests</MappedServerUrl>
        <SuiteCoverageFiles>
          <File>framework.js</File>
        </SuiteCoverageFiles>
      </TestSuite>
    </TestSuites>
  </TestsRunnerConfiguration>
  <TestsResultDispatcherConfiguration>
    <MailServer>mail.test.com</MailServer>
    <Sender>Automated Test Runner</Sender>
    <Recipients>
      <Email>email@test.com</Email>
    </Recipients>
    <Subject>ISTA 2013 Results</Subject>
  </TestsResultDispatcherConfiguration>
</QUnitRunnerConfiguration>
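The runner's internals aren't the subject of this post, but just to illustrate how a configuration like this might be consumed, here is a minimal sketch that loads it with XmlSerializer; the classes below simply mirror the XML element names and are not the runner's actual types.

// Illustration only: classes mirroring the configuration XML above, loaded with
// System.Xml.Serialization. These are NOT the runner's real types; unknown
// elements (e.g. the dispatcher section) are simply ignored by XmlSerializer.
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class QUnitRunnerConfiguration
{
    public TestsRunnerConfiguration TestsRunnerConfiguration { get; set; }
}

public class TestsRunnerConfiguration
{
    public List<TestSuite> TestSuites { get; set; }
}

public class TestSuite
{
    public string Name { get; set; }
    public string Owner { get; set; }
    public string FileSystemFolder { get; set; }
    public string TestsFileName { get; set; }
    public string MappedServerUrl { get; set; }
}

public static class ConfigurationLoader
{
    public static QUnitRunnerConfiguration Load(string path)
    {
        var serializer = new XmlSerializer(typeof(QUnitRunnerConfiguration));
        using (var stream = File.OpenRead(path))
        {
            return (QUnitRunnerConfiguration)serializer.Deserialize(stream);
        }
    }
}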
So that was the example implementation. You will find the runner attached, with instructions on how to use it. We should also look at potential problems that we might experience when adding such an automated bug submission framework on top of our testing framework. What could go wrong? Well, quite a few things!
Problem: There could be multiple bug submissions coming from the same issue. A large number of tests may be running on the infrastructure, and a single commit/changeset may cause a number of existing tests to fail. What would prevent an automated bug submission framework from submitting a bug for every failed test?
Solution: The tests should be analyzed as a batch instead of analyzing a single test at a time. This makes it possible to identify such issues.
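As a rough illustration of what batch analysis could look like, here is a minimal sketch that groups failed tests sharing the same failure description so that only one bug per group would be submitted; grouping by description is an assumed heuristic for the example, not the actual implementation.

// Illustration only: collapse a batch of failures that share the same failure
// description into a single representative result before submitting bugs.
// Grouping by Description is an assumed heuristic, not the runner's real logic.
using System.Collections.Generic;
using System.Linq;

public static class BatchAnalysis
{
    public static IEnumerable<GenericTestResult> CollapseDuplicates(IEnumerable<GenericTestResult> failedTests)
    {
        return failedTests
            .GroupBy(t => t.Description)     // tests failing with the same message likely share a cause
            .Select(group => group.First()); // keep one representative per group
    }
}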
Problem: The person receiving the submitted bug still needs to perform manual analysis.
Solution: Meaningful error messages must be provided with the test. The error message produced when an assert fails is what we extract most of the information from; if that message is generic or missing, we increase the time needed to analyze the bug before fixing it. Also, analysis is part of the bug-fixing process even now: regardless of what analysis the Quality Engineer performed, the person working on the bug still needs to analyze the issue. In this sense we're still saving time.
Problem: Developers require detailed information about the bug. The person responsible for fixing a bug usually asks for additional things like a stack trace.
Solution: The infrastructure can submit any information provided by the IDE. Anything that the developer would otherwise ask for and extract using the IDE can be extracted automatically and provided with the bug.
Problem: Inconsistent tests. There are a lot of existing tests but some of them are inconsistent. They randomly fail.
Solution: Test consistency analysis. This is not currently implemented in the framework, but we have a very clear idea of how the problem can be handled. The failing tests need to be repeated. If they are rerun enough times (the definition of "enough" depends on the implementation, the tests, and the software under test) and they show deltas in some of the executions, then they are inconsistent. If all the runs show the same results, then they are consistent. This is a large topic by itself, so I won't go into further detail.
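Without going deeper into the topic, a minimal sketch of the rerun check described above could look like this; the rerun count and the Func<bool> representation of a test execution are assumptions made for the example.

// Illustration only: rerun a failing test several times and report whether the
// outcomes differ between executions. The rerun count and the Func<bool> shape
// of a test execution are assumptions for this sketch.
using System;
using System.Linq;

public static class ConsistencyAnalysis
{
    /// <summary>
    /// Returns true when all reruns of the test produce the same outcome.
    /// </summary>
    public static bool IsConsistent(Func<bool> runTest, int rerunCount = 10)
    {
        var outcomes = Enumerable.Range(0, rerunCount)
                                 .Select(_ => runTest())
                                 .ToList();

        // Deltas between executions indicate an inconsistent ("flaky") test.
        return outcomes.Distinct().Count() == 1;
    }
}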
In conclusion, we've examined the concepts behind automated analysis of our automation executions and created a basic framework, a proof of concept if you will, showing that these concepts are applicable. The expectation is that we will save at least some, if not all, of the time spent doing manual analysis of automation results. As I mentioned previously, this framework is already applicable and in use. Feel free to apply these concepts to the automation of your own projects, and hopefully they will prove useful. Feel free to send me any comments or questions!
Link to PowerPoint presentation on the topic.