Friday, March 18, 2011

What is What in Testing

Testing Fida


Software testing is more than just error detection. Testing software means operating the software under controlled conditions to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
  1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements.  [Verification: Are we building the system right?]
  2. Error Detection: Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn’t or don’t happen when they should.
  3. Validation looks at system correctness – i.e. it is the process of checking that what has been specified is what the user actually wanted.  [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly.  Both verification and validation are necessary, but different, components of any testing activity.
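The verification / validation distinction can be sketched with a tiny example. Everything here is invented for illustration: a hypothetical discount function whose written spec says “orders over 100 get 10% off”, while the user actually wanted orders of exactly 100 discounted too.

```python
# Hypothetical spec: "orders over 100 get 10% off".

def apply_discount(total):
    """Return the payable amount per the (hypothetical) spec."""
    return total * 0.9 if total > 100 else total

# Verification - are we building the system right?
# Check the code against the written specification.
assert apply_discount(200) == 180   # over 100 -> 10% off
assert apply_discount(50) == 50     # not over 100 -> unchanged

# Validation - are we building the right system?
# The code matches the spec for exactly-100 orders, but the user
# wanted those discounted as well: verification passes here,
# while validation would flag the spec itself as wrong.
assert apply_discount(100) == 100   # matches spec, not user intent
print("verification checks passed")
```

The point of the sketch: a suite of passing verification checks says nothing about validation, because the spec being tested against may itself miss what the user wanted.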
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.
Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed. 
 
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.
 
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. A wide-angle view of the ‘customers’ of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation’s management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on ‘quality’ - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.  

“Quality Assurance” measures the quality of processes used to create a quality product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.
It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.
 
Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, and reviews of all the documentation (not just for standardisation but also for verification and clarity of content). Overall Quality Assurance processes also include code validation.
A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.
 
Simply put:
§  TESTING means “Quality Control”; and
§  QUALITY CONTROL measures the quality of a product; while
§  QUALITY ASSURANCE measures the quality of processes used to create a quality product.
 
The Mission of Testing
In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.
 

Friday, March 11, 2011

10 Points: What Is Not Testing

Software testing is a relatively new field and has changed considerably in the past few years. It is not taught in many universities, and when I moved from development to testing in 2001, I was confused about it. I tried to learn from the internet, books and forums, and was not impressed with the information I got. I even did my certification (CSTE, if you are interested), but that wasn't very useful either. During that time, I came across many interesting theories / concepts, and after working in the industry, I know they are not true; they are myths. Unfortunately, some of these myths are still in practice and widespread.

Myths in software testing have done this field more harm than good. In this post, I will explore popular software testing myths, why they are myths, and what harm they are doing to our profession.

1. Testers are Gatekeepers of Quality - Nothing should be released to production before the test organization / testers give their approval.

In many organizations, the tester / test team fights for this right. It makes the test team feel empowered and, to be honest, when I started my career I did think this was the right way. In reality, this view is extremely dangerous, for both the team and the product. The test team is an information provider: it provides information to stakeholders, and it is up to them to act on that information. When testers act as gatekeepers, they become responsible for the quality of the product. It gives the impression that no one other than the test team is concerned about quality. It also increases pressure and sometimes creates situations in which testers are afraid to release the product, because there might be one last defect that has not been uncovered.

2. Complete testing is possible - If you plan properly, it is possible to test software completely, and to identify and fix all the defects.
Many organizations feel that it is possible to test software completely and fix all the defects. Nothing could be further from the truth. No matter how much you test, complete testing is an illusion. Applications are becoming more and more complex, and testing all the features under all the conditions is practically impossible. When management is trapped in this belief, the test team becomes responsible for every defect. Also, if the test team attempts complete testing, it will become a bottleneck. In reality, almost all products have defects. The only difference is what kind of defects they have and how frequently they occur. If you try hard enough, I am sure you can find defects in almost any software you use. Complete testing is not the solution for this.

3. Best practices - Improving quality is simple and straightforward: just follow the best practices.
Best practices, standards and processes are still a big myth. Not all standards, processes and best practices work all the time. Sometimes they work and sometimes they don't. There is nothing wrong with a practice as such; the problem is in not identifying the context and the problem before applying it. Practices are practices; what makes them good or bad is whether they are applied after considering the context. Applying best practices is like swinging a hammer: if you do not consider the size of the nail and try to use the same hammer for every nail, sometimes it will work and sometimes it will not. When a test team starts implementing the industry's best practices without considering its project, timeline, skills, technology, environment, team structure and many other aspects, it gets frustrated because it does not get the results it expected.

4. Certifications will make you a better tester - So go and get CSTE, ISTQB... etc. to become a better tester / get a promotion.

When I started my career as a tester, I was in the services industry, and certifications were / are considered good there. There was a valid reason for that: if you need more clients, then boasting about the number of certified test professionals will increase their confidence. But from what I have seen, certification exams are very shallow in nature and do not reflect whether the person getting the certification is a good tester or not. Certifications, in their current format, can be acquired by anyone who is prepared to study for a couple of weeks, and it is highly unlikely that someone will become a good tester in a couple of weeks. Certifications in their current format have created unnecessary pressure in the testing community to get certified, driven by peer pressure and client demand rather than serving as a benchmark for knowledge.

5. Test Automation is a Silver Bullet - If something can be automated and you can automate it, automate it.

Now do not get me wrong, I am a big fan of automation, but only where it adds value. I have seen many engineering hours wasted on developing automation, or frameworks for automation, which are hardly used. Automation, without considering its ROI and effectiveness, is just a waste of time. Dr. James in his recent post highlighted this nicely and made a very good point that manual / automated testing should be considered only after good test design. This mentality of considering test automation a silver bullet, like many other myths, is dangerous for our profession for many reasons. Management can sometimes become extremely focused on the automation rather than on improving quality. Remember, more automation will not improve quality. The right level of automation, combined with the required exploratory testing and good test design, certainly will.
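The ROI point can be made concrete with back-of-the-envelope arithmetic. All the figures below are invented assumptions for illustration, not industry data:

```python
# Illustrative automation ROI sketch (every number here is made up).
manual_run_hours = 4          # hours for one manual pass of the suite
runs_per_release = 10         # how often the suite runs per release
automation_build_hours = 60   # one-off scripting effort
automation_upkeep_hours = 1   # maintenance per release

manual_cost_per_release = manual_run_hours * runs_per_release   # 40 h
automated_cost_per_release = automation_upkeep_hours            # 1 h

# Releases needed before automation pays for itself:
savings_per_release = manual_cost_per_release - automated_cost_per_release
break_even_releases = automation_build_hours / savings_per_release
print(round(break_even_releases, 1))  # -> 1.5
```

With these (favourable) numbers, automation pays off quickly; for a suite that runs rarely, or whose maintenance cost rivals the manual effort, the break-even point may never arrive - which is exactly why ROI should be checked before automating.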
 
6. Testing is a demonstration of Zero Defects - Testing is complete for this product and the test team has signed off on it. This product does not have any defects now.

Whoever claims this is obviously wrong. It is impossible to claim that any product is defect-free. Even after spending thousands of hours on testing, there will be one more combination that was not tested, one condition that was missed, and for all we know that might surface in the production environment. If, as a tester / manager, you believe that zero defects is a possibility, you will feel responsible for any defect that is uncovered in production. On the other hand, you can modify the sentence and say: for the combinations I have tried, the environment and data I have used, and the scenarios I tested, there was no defect according to my understanding of the product. Also, the goal of testing is to uncover defects. As a tester, you can only find defects; you cannot claim that the product is defect-free.

7. All measurements are useful - Keep track of the number of test cases, how many of them are executed, how much automation is present, the defect count... and any other numbers you can think of.

When I started my career, we prepared reports along the lines of: how many test cases are written, how many of them are executed, how many of them are automated, how many defects were found, and so on. Week after week, we would send these reports without realizing that if additional information is not provided along with the numbers, they do not convey any meaning. If these numbers become the primary consideration for management, quality will suffer. For example, if the number of defects is important, the test team will start filing each and every issue; if the number of rejected / duplicate defects becomes important, the test team will start spending a lot more time on defects before filing them, or maybe will not file them at all. Any measurement program should be approached with caution and should always provide a clear meaning / summary for all the numbers.
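One way to sketch this in code: pair the raw counts with a short narrative summary, so the report carries meaning rather than bare numbers. The field names and figures below are invented for illustration:

```python
# Hypothetical weekly test report: raw counts plus the context
# needed to interpret them (all names and numbers are invented).
report = {
    "test_cases_written": 180,
    "test_cases_executed": 140,
    "defects_found": 12,
    "defects_fixed": 9,
}
summary = (
    "Most new defects cluster in the payment module, which changed "
    "heavily this week; the raw counts alone would hide that risk."
)

def render(report, summary):
    """Render counts first, then the summary that gives them meaning."""
    lines = [f"{name}: {value}" for name, value in report.items()]
    return "\n".join(lines) + "\n\nSummary: " + summary

print(render(report, summary))
```

The design point is simply that the summary field is mandatory: numbers without it are the kind of context-free reporting the paragraph above warns against.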

8. Stable requirements and documentation are essential for any project... and by the way, the development team is crap.

With Agile development, this myth is slowly going away, and we have realized that changes are inevitable. Rather than fighting changes, we now embrace them. It was different when I started, and probably still is in many organizations: changes are not welcome, requirements are treated like contractual obligations, and documentation is the first thing the test team asks for. Development and test teams work in their own silos, and communication between them is limited to finger-pointing. It is impossible for quality software to come out of such an environment. Development and test teams should work together to improve quality.

9. Limited time and resources are the main reasons for missed defects - As a test team, we are always pressed for time and hardly have any resources. We could have caught these defects if only we had more time / resources.

I am sure many of us have heard this, and some of us have even raised it as an issue, including me. It is true that time and resources are limited, but how many times are defects missed because of unavailability of resources, and how many times because of not utilizing resources properly and a faulty test strategy and design? I have seen it many times: people spend time creating work which does not add value in any way. It could be in the form of writing detailed test plans and test cases, writing an automation suite which becomes shelfware, crunching numbers to prepare reports for management, and so on. Availability of time and resources is important, but it is even more important to have a solid test strategy and design, prepared with the application / project under test in mind.

10. Anyone can become a tester; they just need the ability to read and follow instructions, that's it. Testing is not a creative job and does not require special training or skills, and that's why there are not many professional testers around.

This is one of the most damaging myths of all, and to some extent it exists because of practices we have seen in our industry. Manual scripted testing is probably the closest thing to an unskilled job, requiring minimal skill and very basic training. Everything else, from test design to test execution to automation, is highly skilled and creative work, and can be done effectively only if you are skilled. Not considering testing a skilled profession has done more harm to the testing community than any other myth. This myth is going away with the rise / recognition of testing as a separate skill, exploratory testing practices, Agile, and sensible test automation, but there is still a long way to go.

This was my list of myths, and it is by no means complete. Do leave your comments if you have observed / come across myths which are not covered here, or if you do not agree with anything I said.