Tuesday, July 19, 2016

Dev-Assurance: A hybrid between a developer and a quality assurance professional

Over the span of my career, I have seen many battles unfold between the development team and the quality assurance team, arguing for countless hours over a defect. In the nerd-herd, these battles are the equivalent of some of the greatest battles in history, minus the glorious brawls. Some people bring their best sarcasm to the battlefield, though it might not be their forte.
“Works for me” is one of the classic responses of a developer, and it is usually followed by a few innuendoes flying back and forth between the individuals. There can be many reasons for this, such as:
  • The defect can only be reproduced in a production-like environment because that environment is clustered; the code still works fine on the developer’s machine, which runs only a stand-alone server instance (see the sketch after this list).
  • Missing configuration in the test environment
  • The defect is specific to a particular environment
And many more that we will not go into in detail, as that is not the purpose of this article.
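
To make the first reason on that list concrete, here is a minimal sketch (the class name and scenario are hypothetical) of the kind of code that behaves perfectly on a stand-alone server but surfaces as a defect only in a clustered, production-like environment:

// Hypothetical sketch: a cache that "works on my machine" but breaks in a cluster.
// On a single stand-alone server every request hits the same JVM, so this static
// map behaves like shared state. In a clustered deployment each node holds its
// own copy, so a value written on node A is invisible to a request served by node B.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserPreferenceCache {

    // Lives only inside the current JVM; never replicated across cluster nodes.
    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    public void save(String userId, String theme) {
        CACHE.put(userId, theme);   // node A stores the preference
    }

    public String load(String userId) {
        return CACHE.get(userId);   // node B returns null -> reported as a defect
    }
}

The usual remedy is to move such state out of the JVM into a shared store such as a database or a distributed cache, which is exactly the kind of detail a stand-alone developer machine will never force you to think about.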

Next comes the “Works as expected” dilemma. This is a tricky one, as people can look at it from different angles and come to different conclusions. From the developer’s point of view, if it was never specified as a requirement, then it should not be a valid defect. From the quality assurance person’s point of view, it is a non-functional requirement that should have been catered to in the delivered solution (for example, a page that returns the correct results but takes ten seconds to load).

Defects can take many forms, but most often the end result is a conflict between the developer and the tester. In my career, I too have been in these unpleasant confrontations. If one actually looks at those situations in retrospect, one will see that there is always ego at play, which results in both parties refusing to give up their standpoints. My father gave me one very important quote, framed, which I still keep at my desk.

Keeping all the SLAs, defect ratios and defect density measures aside, if you really look at the issue: as a developer, I would not want the testers to find any defects in my code. But we all know that writing 100% perfect code is about as easy as finding a unicorn. What I am trying to get at here is a change in the developer’s mindset towards a reported defect. You have to accept that you will never produce zero-defect code every time. When a defect is reported on code you have worked on, it is best to take it as a learning opportunity, build more self-awareness of what you need to do in the future to avoid such issues, and move forward, rather than get into a fruitless battle with the tester trying to defend the defect.

Quality is something that should be instilled in you as a developer, and one of the first steps in that direction is to stop thinking as a developer when you are testing your own code. Oftentimes we get overly concerned with the technical details of the code and with the code coverage of the branches we have tested. Don’t get me wrong, this is all well and good, but what I am getting at is that in the end you need to look at the business functionality that your code provides and whether it does that satisfactorily.
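
As a rough sketch of that mindset shift, assuming JUnit 5 and a couple of hypothetical domain types, here is what framing tests around the business rule, rather than around the branches of the implementation, might look like:

// Hypothetical sketch: the test names state the business expectation being
// verified ("a loyal customer gets 10% off"), not the code branch being covered.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    // Minimal illustrative domain types; in a real codebase these live elsewhere.
    static class Customer {
        final boolean loyal;
        Customer(boolean loyal) { this.loyal = loyal; }
    }

    static class DiscountService {
        BigDecimal priceFor(Customer customer, BigDecimal listPrice) {
            BigDecimal price = customer.loyal
                    ? listPrice.multiply(new BigDecimal("0.90")) // loyal customers get 10% off
                    : listPrice;
            return price.setScale(2, RoundingMode.HALF_UP);
        }
    }

    private final DiscountService service = new DiscountService();

    @Test
    void loyalCustomerReceivesTenPercentDiscount() {
        assertEquals(new BigDecimal("90.00"),
                service.priceFor(new Customer(true), new BigDecimal("100.00")));
    }

    @Test
    void newCustomerPaysFullPrice() {
        assertEquals(new BigDecimal("100.00"),
                service.priceFor(new Customer(false), new BigDecimal("100.00")));
    }
}

Written this way, a failing test tells you which business rule is broken rather than which branch was missed, which is much closer to how the tester, and the customer, will see the defect.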

I just love proactive approaches to software development, so where I work we have put in place a few quality gates that have substantially reduced these unpleasant confrontations between the two teams, which in essence should be considered one team with one aim: delivering quality solutions to the customer.

Design Reviews
At this stage what we do is, as a team, go through the design document created for a given feature or functionality. As we follow an agile process, a design document has each feature broken down into user stories. Getting the whole team involved in the design review brings out different opinions and concerns, which provide invaluable input to better refine the solution. This has reduced our defect density and improved communication between individuals, as everyone has the context of the work being carried out, even though not everyone may be involved in a given design’s implementation.

Demo/walk-through
After a particular implementation is completed, we have made it a habit that the developers involved present their deliverable to the team before it is released to the testing environment. As this again involves the whole team, any issues can be openly discussed and suggestions for improvement considered prior to the release to the test environment. In one aspect, this gives the developers an understanding of the business impact of the solution they provide, so they can relate more closely to it; of course, it also lets them fine-tune their presentation soft skills.

Release audit
This is where we have a technical lead or an architect ask specific questions prior to the actual release, such as:
  • What was the total effort for this release (to ascertain the amount of change going into it)?
  • Was code review completed and review comments addressed?
  • Was testing completed successfully?
  • How many positive and negative scenarios were identified during the test stage?
  • Were any non-functional requirements considered for this release?
  • Was regression testing completed for the core functionality?
  • Are there any major deployment changes in this release?
All these questions allow the team to ponder aspects they might not have considered, and they act as a quality gate just prior to the actual deployment.
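
As an illustration of the idea (and only an illustration; for us this is a conversation, not a tool), the same audit could even be sketched as an automated gate that fails the deployment pipeline when any answer is unsatisfactory:

// Hypothetical sketch of the release audit as an automated gate: the build step
// fails unless every audit question has been answered affirmatively. The names
// and the hard-coded answers are illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class ReleaseAudit {

    public static void main(String[] args) {
        Map<String, Boolean> checklist = new LinkedHashMap<>();
        checklist.put("Code review completed and comments addressed", true);
        checklist.put("Testing completed successfully", true);
        checklist.put("Non-functional requirements considered", true);
        checklist.put("Regression testing done for core functionality", false);
        checklist.put("Major deployment changes documented", true);

        boolean passed = true;
        for (Map.Entry<String, Boolean> item : checklist.entrySet()) {
            System.out.printf("[%s] %s%n", item.getValue() ? "PASS" : "FAIL", item.getKey());
            passed &= item.getValue();
        }

        if (!passed) {
            System.err.println("Release audit failed; blocking deployment.");
            System.exit(1);   // a non-zero exit code fails the pipeline step
        }
        System.out.println("Release audit passed; proceeding with deployment.");
    }
}
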
These stages have allowed developers to embed the quality aspect into their core, much like the Samaritan in Person of Interest monitored everyone’s keystrokes by building a virus into the firmware of all hardware (I’m just not over the fact that this great TV series came to an end). This in turn has allowed us to create “dev-assurance” personnel on our team, which has improved our team’s productivity, communication and efficiency.