Saturday, February 1, 2025


Bug Life Cycle & Guidelines
In this tutorial you will learn about Bug Life Cycle & Guidelines, Introduction, Bug Life Cycle, The different states of a bug, Description of Various Stages, Guidelines on deciding the Severity of Bug, A sample guideline for assignment of Priority Levels during the product test phase and Guidelines on writing Bug Description.




Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).


Bug Life Cycle:
In the software development process, a bug has a life cycle. A bug must go through this life cycle before it can be closed, and a defined life cycle ensures that the process is standardized. The bug passes through different states during its life cycle, which can be shown diagrammatically as follows:


[Figure: Bug life cycle diagram – http://www.exforsys.com/images/vbnet/sourecode/Testing/Fig3.JPG]


The different states of a bug can be summarized as follows:


1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed


Description of Various Stages:
1. New: When a bug is posted for the first time, its state is “NEW”. This means that the bug has not yet been approved.


2. Open: After a tester has posted a bug, the tester's lead confirms that the bug is genuine and changes its state to “OPEN”.


3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is then changed to “ASSIGN”.


4. Test: Once the developer fixes the bug, he assigns it back to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state of the bug to “TEST”, which indicates that the bug has been fixed and released to the testing team.


5. Deferred: A bug moved to the deferred state is expected to be fixed in a later release. A bug may be deferred for many reasons: its priority may be low, there may not be enough time before the release, or it may not have a major effect on the software.


6. Rejected: If the developer feels that the bug is not genuine, he rejects it, and the state of the bug is changed to “REJECTED”.


7. Duplicate: If the same bug is reported twice, or two bug reports describe the same issue, the status of one of them is changed to “DUPLICATE”.


8. Verified: Once the bug is fixed and its status changed to “TEST”, the tester re-tests it. If the bug is no longer present in the software, he confirms that the bug is fixed and changes the status to “VERIFIED”.


9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to “REOPENED”, and the bug traverses the life cycle once again.


10. Closed: Once the bug is fixed, it is re-tested by the tester. If the tester is satisfied that the bug no longer exists in the software, he changes its status to “CLOSED”. This state means that the bug is fixed, tested and approved.
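To make the flow above concrete, here is a minimal sketch in Python (purely illustrative; the states are taken from the list above, but the exact transitions your bug-tracking tool allows may differ) that models the life cycle as a simple state machine:

from enum import Enum

class BugState(Enum):
    NEW = "New"
    OPEN = "Open"
    ASSIGN = "Assign"
    TEST = "Test"
    VERIFIED = "Verified"
    DEFERRED = "Deferred"
    REOPENED = "Reopened"
    DUPLICATE = "Duplicate"
    REJECTED = "Rejected"
    CLOSED = "Closed"

# Allowed transitions, following the stage descriptions above (an assumption, not a standard).
TRANSITIONS = {
    BugState.NEW:      {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
    BugState.OPEN:     {BugState.ASSIGN, BugState.DEFERRED},
    BugState.ASSIGN:   {BugState.TEST, BugState.DEFERRED},
    BugState.TEST:     {BugState.VERIFIED, BugState.REOPENED},
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGN},
    BugState.DEFERRED: {BugState.ASSIGN},
}

def move(current: BugState, target: BugState) -> BugState:
    """Return the new state, or raise an error if the transition is not allowed."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# Example: a bug is opened, assigned, fixed, verified and closed.
state = BugState.NEW
for nxt in (BugState.OPEN, BugState.ASSIGN, BugState.TEST, BugState.VERIFIED, BugState.CLOSED):
    state = move(state, nxt)

Encoding the transitions this way makes it easy to reject status changes that skip steps, which is exactly what a standardized life cycle is meant to prevent.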


While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal instead. Discovering and removing defects is an expensive and inefficient process; it is far more efficient for an organization to conduct activities that prevent defects in the first place.


Guidelines on deciding the Severity of Bug:
Severity indicates the impact each defect has on the testing effort or on the users and administrators of the application under test. This information is used by developers and management as the basis for prioritizing work on defects.


A sample guideline for the assignment of Priority Levels during the product test phase includes the following (a small illustrative helper follows the list):


1. Critical / Show Stopper — An item that prevents further testing of the product or function under test is classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access the function under test.
2. Major / High — A defect that does not function as expected or designed, or that causes other functionality to fail to meet requirements, is classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3. Average / Medium — Defects that do not conform to standards and conventions are classified as Medium bugs. Easy workarounds exist to achieve the functionality objectives. Examples include matching visual and text links that lead to different end points.
4. Minor / Low — Cosmetic defects that do not affect the functionality of the system are classified as Minor bugs.
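As a rough illustration only (the parameter names and the exact rules below are assumptions, not part of the guideline), the four levels could be captured in a small Python helper:

def assign_priority(blocks_testing: bool, meets_requirements: bool,
                    workaround_exists: bool, cosmetic_only: bool) -> str:
    """Map a few yes/no answers to one of the four sample priority levels."""
    if cosmetic_only:
        return "Minor / Low"              # cosmetic defect, functionality unaffected
    if blocks_testing and not workaround_exists:
        return "Critical / Show Stopper"  # further testing is blocked, no workaround
    if not meets_requirements:
        return "Major / High"             # deviates from requirements, workaround can be provided
    return "Average / Medium"             # standards/conventions issue, easy workaround exists

# Example: an inaccurate calculation with a known workaround maps to Major / High.
print(assign_priority(blocks_testing=False, meets_requirements=False,
                      workaround_exists=True, cosmetic_only=False))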

Guidelines on writing Bug Description:
A bug can be expressed as “result followed by the action”: the unexpected behavior that occurs when a particular action is taken is given as the bug description. A sample description following these guidelines is shown after the list.


1. Be specific. State the expected behavior that did not occur (for example, a pop-up did not appear) and the behavior that occurred instead.
2. Use present tense.
3. Don’t use unnecessary words.
4. Don’t add exclamation points. End sentences with a period.
5. DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Always mention the steps to reproduce the bug.
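For example, a description written to these guidelines (the application and steps are hypothetical) might read:

Summary: Confirmation pop-up does not appear after clicking “Delete” on a customer record.
Expected: A confirmation pop-up appears before the record is removed.
Actual: The record is deleted immediately, with no confirmation.
Steps to reproduce: 1. Log in as an administrator. 2. Open the Customers list. 3. Select any record and click “Delete”.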

Thursday, June 26, 2014


How to Test Smarter: Explore More, Document Less
Delivering more code faster means testing smarter: reallocating resources away from creating documentation and toward adding value with exploratory testing.
Reducing documentation doesn’t throw quality out the window. Not at all. The test scripts are still created and still run. Testing smarter requires that developers and testers plan, build and run essential unit tests, functional tests, acceptance tests and security tests.
However, smarter testing does mean acknowledging that some tests are more important than others, and as such should receive more attention, including documentation.
Consider a traditional agile team tasked with adding new functionality to a website or mobile application. Early in the sprint, the team would create a test plan and either create new test cases or modify existing ones. Once the coding is done, the team would run the tests and document the execution results and defects. If there are defects, the code would be corrected, and the tests rerun. In some cases, the defects might require the agile team to re-examine the test cases, as well as the code, and potentially update them as well. Rerun tests. Repeat.
Creating and updating test cases takes time and resources. So does the process of documenting the test cases and each of the test runs (though automation helps).
Most test documentation adds no value to the business. If the team tests smarter, testers write up test runs if and only if defects appear, instead of documenting every test case and test run. If a test run comes back clean (no defects), simply move on. If defects do appear, then yes, testers should document the test, including everything needed to reproduce the defect.
Example:
Imagine there are 100 new test cases for a particular sprint. That’s 100 test cases that must be examined, possibly updated and thoroughly documented.
How to test smarter? Say it’s determined that 10 of those test cases need to be carried forward for future regression testing, and perhaps another 15 tests failed during execution by producing unexpected or undesired results. If the team needs to document only those 25 key and failed test cases — not all 100 — think about the time savings.
Use that freed-up time to improve quality by encouraging developers, testers and other stakeholders to do more exploratory, ad-hoc type of testing. If the team is fortunate enough to have test-automation tools that can turn ad-hoc tests into reusable test scripts for future regression tests, that’s a bonus, since exploratory tests can be turned into test-case workflows.
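As an illustration, an exploratory check that exposed a defect could be promoted into a short automated regression test. The sketch below assumes pytest and Selenium WebDriver; the URL and element locators are invented for the example:

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Firefox()   # any supported browser driver would do
    yield d
    d.quit()

def test_search_returns_results(driver):
    # Originally an ad-hoc exploratory check; kept and documented because it exposed a defect.
    driver.get("https://example.com/shop")                     # hypothetical URL
    driver.find_element(By.ID, "search").send_keys("laptop")   # hypothetical locator
    driver.find_element(By.ID, "search-button").click()        # hypothetical locator
    results = driver.find_elements(By.CSS_SELECTOR, ".result-item")
    assert results, "Search returned no results"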
Make no mistake: before development teams decide to test smarter and stop documenting certain tests, it is essential to ensure that the testers truly understand the goals of a particular development project or phase, and therefore which new tests won’t be needed for future sprints.
In agile shops, that means knowing the objective of each sprint. Understand what’s new or changing in that sprint and in the backlog. Understand the user stories. Agree which tests are only needed in that one sprint (and thus don’t need to be documented) and which tests are needed for future regression testing and acceptance testing (and thus should be thoroughly documented).
Ask yourself, “When the end user receives this sprint’s code, what would he/she be most interested in?” Obviously you need to test there, and document those tests. However, also ask, “What parts of the code would the end user probably not be thinking about, but where he/she could find problems?” Those questions will guide developers, testers and other stakeholders toward edge cases and situations that cry out for exploratory and ad-hoc testing.
The team leaders should envision a high-level approach for what should be tested. There will be key scenarios of each sprint that need to be tested and re-tested because they are highly vulnerable or foundational for future sprints. Once those are identified, those scenarios can be packaged for future regression testing. By contrast, code areas that are not high risk can be tested once — and not used for regression testing, especially if that code is stable and is not affected by future feature enhancements. Therefore, no documentation is required.
To summarize:
We are all under pressure to deliver more code faster. To accelerate software development without sacrificing quality, test smarter!
Use test automation whenever possible, and continue executing unit tests as new code is checked into the source-code management system. Document and run regression tests on critical code, of course, but don’t waste time documenting tests that won’t be needed in the future. Instead, use your testing resources for exploratory testing. That improves quality – and accelerates the development lifecycle.


Saturday, August 4, 2012

Selenium made easy: Interview Questions for Automation Engineers

Selenium made easy: Interview Questions for Automation Engineers: 1. Access modifiers in Java 2. final, finally and finalize 3. Real example for abstract class and interface 4. Launch Firefox driver using WebDriver. (...

Tuesday, August 30, 2011

What is User Acceptance Testing ?



User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be using the application test it before ‘accepting’ it.

This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.

This testing also helps nail bugs related to usability of the application.

User Acceptance Testing – Prerequisites:
Before User Acceptance Testing can be done, the application must be fully developed.
The various levels of testing (Unit, Integration and System) are completed before User Acceptance Testing. Because these levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?
To ensure effective User Acceptance Testing, test cases are created.
These test cases can be created from the various use cases identified during the Requirements definition stage.
The test cases ensure proper coverage of all the scenarios during testing.

During this type of testing, the specific focus is the exact real-world usage of the application. The testing is done in an environment that simulates the production environment.
The test cases are written using real-world scenarios for the application.

User Acceptance Testing – How to Test?

The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.

However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.

The steps taken for User Acceptance Testing typically involve one or more of the following:
1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the (UAT) Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues / Bug Fixing
7) Sign Off


1. User Acceptance Test (UAT) Planning: 
As always, the planning process is the most important of all the steps, as it affects the effectiveness of the testing process. The planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas and the entry and exit criteria.


2. Designing UA Test Cases: 
The User Acceptance Test Cases help the Test Execution Team test the application thoroughly. They also help ensure that UA Testing provides sufficient coverage of all the scenarios.
The Use Cases created during the Requirements definition phase may be used as inputs for creating the Test Cases. Inputs from Business Analysts and Subject Matter Experts are also used when creating them.

Each User Acceptance Test Case describes, in simple language, the precise steps to be taken to test something.

The Business Analysts and the Project Team review the User Acceptance Test Cases.
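For example, a UA Test Case (the scenario below is hypothetical) might be laid out as:

Test Case ID: UAT-012
Scenario: An existing customer places an order using a saved payment method.
Steps: 1) Log in as an existing customer. 2) Add an item to the cart. 3) Check out with the saved payment method.
Expected Result: The order confirmation page is shown and a confirmation e-mail is received.
Actual Result / Status: (recorded during execution)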

3. Selecting a Team that would execute the (UAT) Test Cases: 
Selecting a Team that would execute the UAT Test Cases is an important step. 
The UAT Team is generally a good representation of the real world end users. 
The Team thus comprises the actual end users who will be using the application.


4. Executing Test Cases: 
The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.

5. Documenting the Defects found during UAT: 
The Team logs their comments and any defects or issues found during testing.

6. Resolving the issues/Bug Fixing: 
The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.

7. Sign Off: 
Upon successful completion of User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales: once the users “accept” the software delivered, they indicate that it meets their requirements.


The users are now confident in the software solution delivered, and the vendor can be paid for the same.



What are the key deliverables of User Acceptance Testing?

In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.

The key deliverables of the User Acceptance Testing phase typically are:

1) The Test Plan – This outlines the Testing Strategy.
2) The UAT Test Cases – The Test Cases help the team to effectively test the application.
3) The Test Log – This is a log of all the test cases executed and the actual results.
4) User Sign-Off – This indicates that the customer finds the product delivered to their satisfaction.
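For instance, a single entry in the Test Log (hypothetical values) might record:

Test Case ID: UAT-012 | Executed by: (tester name) | Date: (execution date) | Result: Passed | Defects: None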
