Thursday, June 26, 2014


How to Test Smarter: Explore More, Document Less
Delivering more code faster without sacrificing quality means testing smarter: reallocating resources from creating documentation and focusing instead on adding value with exploratory testing.
Reducing documentation doesn’t throw quality out the window. Not at all. The test scripts are still created and still run. Testing smarter requires that developers and testers plan, build and run essential unit tests, functional tests, acceptance tests and security tests.
However, smarter testing does mean acknowledging that some tests are more important than others, and as such should receive more attention, including documentation.
Consider a traditional agile team tasked with adding new functionality to a website or mobile application. Early in the sprint, the team would create a test plan and either create new test cases or modify existing ones. Once the coding is done, the team would run the tests and document the execution results and defects. If there are defects, the code would be corrected, and the tests rerun. In some cases, the defects might require the agile team to re-examine the test cases, as well as the code, and potentially update them as well. Rerun tests. Repeat.
Creating and updating test cases takes time and resources. So does the process of documenting the test cases and each of the test runs (though automation helps).
Most test documentation adds no value to the business. If the team tests smarter, testers can focus on writing up test runs only when defects appear, instead of documenting every test case and test run. If a test run finds no defects, move on. If defects do appear, then yes, testers should document the test, including everything needed to reproduce each defect.
Example:
Imagine there are 100 new test cases for a particular sprint. That’s 100 test cases that must be examined, possibly updated and thoroughly documented.
How to test smarter? Say it's determined that 10 of those test cases need to be carried forward for future regression testing, and perhaps another 15 tests failed during execution by producing unexpected or undesired results. If the team needs to document only those 25 key and failed test cases instead of all 100, the other 75 write-ups are avoided entirely. Think about the time savings.
Use that freed-up time to improve quality by encouraging developers, testers and other stakeholders to do more exploratory, ad-hoc type of testing. If the team is fortunate enough to have test-automation tools that can turn ad-hoc tests into reusable test scripts for future regression tests, that’s a bonus, since exploratory tests can be turned into test-case workflows.
Make no mistake: before development teams decide to test smarter and stop documenting certain tests, it is essential to ensure that the testers truly understand the goals of a particular development project or phase, and therefore which new tests won't be needed for future sprints.
In agile shops, that means knowing the objective of each sprint. Understand what’s new or changing in that sprint and in the backlog. Understand the user stories. Agree which tests are only needed in that one sprint (and thus don’t need to be documented) and which tests are needed for future regression testing and acceptance testing (and thus should be thoroughly documented).
Ask yourself, “When the end user receives this sprint’s code, what would he/she be most interested in?” Obviously you need to test there, and document those tests. However, also ask, “What parts of the code would the end user probably not be thinking about, but where he/she could find problems?” Those questions will guide developers, testers and other stakeholders toward edge cases and situations that cry out for exploratory and ad-hoc testing.
The team leaders should envision a high-level approach for what should be tested. There will be key scenarios of each sprint that need to be tested and re-tested because they are highly vulnerable or foundational for future sprints. Once those are identified, those scenarios can be packaged for future regression testing. By contrast, code areas that are not high risk can be tested once — and not used for regression testing, especially if that code is stable and is not affected by future feature enhancements. Therefore, no documentation is required.
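On JVM projects, one lightweight way to "package" those key scenarios is simply to tag them, so the regression suite can be filtered, re-run and documented separately from one-off checks. The following is only a minimal sketch, assuming JUnit 5 and a hypothetical totalWithTax helper standing in for the sprint's new code:

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class CheckoutRegressionTest {

        // Hypothetical helper standing in for the sprint's new checkout logic.
        static double totalWithTax(double subtotal, double taxRate) {
            return Math.round(subtotal * (1 + taxRate) * 100.0) / 100.0;
        }

        @Tag("regression")  // key scenario: carried forward, re-run and documented in future sprints
        @Test
        void totalIncludesSalesTax() {
            assertEquals(10.80, totalWithTax(10.00, 0.08), 0.001);
        }

        @Test  // one-off check for this sprint only; untagged, so not part of the regression pack
        void zeroRateLeavesTotalUnchanged() {
            assertEquals(25.00, totalWithTax(25.00, 0.0), 0.001);
        }
    }

The build can then run just the tagged tests (for example, via JUnit 5 tag filtering in Maven Surefire or Gradle), keeping the documented regression pack small while everything else stays lightweight.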
To summarize:
We are all under pressure to deliver more code faster. To accelerate software development without sacrificing quality, test smarter!
Use test automation whenever possible, and continue executing unit tests as new code is checked into the source-code management system. Document and run regression tests on critical code, of course, but don’t waste time documenting tests that won’t be needed in the future. Instead, use your testing resources for exploratory testing. That improves quality – and accelerates the development lifecycle.


Saturday, August 4, 2012

Selenium made easy: Interview Questions for Automation Engineers

Selenium made easy: Interview Questions for Automation Engineers: 1. Access modifiers in Java 2. final, finally and finalize 3. A real example of abstract class vs. interface 4. Launch the Firefox driver using WebDriver. (...
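For the last item, a minimal sketch of launching Firefox through WebDriver with the Selenium Java bindings (the URL is just a placeholder; depending on the Selenium version, geckodriver may also need to be available on the PATH):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LaunchFirefoxExample {
        public static void main(String[] args) {
            // Start a Firefox session controlled by WebDriver.
            WebDriver driver = new FirefoxDriver();
            try {
                // Navigate to a placeholder page and print its title.
                driver.get("https://example.com");
                System.out.println("Page title: " + driver.getTitle());
            } finally {
                // Always end the session and close the browser.
                driver.quit();
            }
        }
    }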

Tuesday, August 30, 2011

What is User Acceptance Testing?



User Acceptance Testing is often the final step before rolling out the application.

Usually the end users who will be working with the application test it before ‘accepting’ it.

This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.

This testing also helps nail bugs related to usability of the application.

User Acceptance Testing – Prerequisites:
Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) should already be complete before User Acceptance Testing is done. Because those levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?
To ensure effective User Acceptance Testing, test cases are created.
These Test cases can be created using various use cases identified during the Requirements definition stage. 
The Test cases ensure proper coverage of all the scenarios during testing.

During this type of testing the specific focus is on the exact real-world usage of the application. The testing is done in an environment that simulates the production environment, and the test cases are written using real-world scenarios for the application.

User Acceptance Testing – How to Test?

The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.

In addition, it is useful if the User Acceptance Testing is carried out in an environment that closely resembles the real-world or production environment.

The steps taken for User Acceptance Testing typically involve one or more of the following: 
1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the UAT Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues/Bug Fixing
7) Sign Off


1. User Acceptance Test (UAT) Planning: 
As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.


2. Designing UA Test Cases: 
The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios. 
The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. Inputs from Business Analysts and Subject Matter Experts are also used when creating the Test Cases.

Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.
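A simple, hypothetical example of such a test case (the identifier, steps and expected result below are illustrative only):

    Test Case ID: UAT-012
    Title: Registered user can reset a forgotten password
    Steps:
    1) Open the login page and click the "Forgot password" link.
    2) Enter the registered email address and submit the form.
    3) Open the reset link received by email and set a new password.
    Expected Result: The user can log in with the new password, and the old password is no longer accepted.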

The Business Analysts and the Project Team review the User Acceptance Test Cases.

3. Selecting a Team that would execute the (UAT) Test Cases: 
Selecting a Team that would execute the UAT Test Cases is an important step. 
The UAT Team is generally a good representation of the real world end users. 
The Team thus comprises the actual end users who will be using the application.


4. Executing Test Cases: 
The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.

5. Documenting the Defects found during UAT: 
The Team logs their comments and any defects or issues found during testing.

6. Resolving the issues/Bug Fixing: 
The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.

7. Sign Off: 
Upon successful completion of the User Acceptance Testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales. Once the users “accept” the software delivered, they indicate that it meets their requirements.


The users are now confident in the software solution delivered, and the vendor can be paid for the same.



What are the key deliverables of User Acceptance Testing?

In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.

The key deliverables of the User Acceptance Testing phase typically are:

1) The Test Plan – This outlines the Testing Strategy.
2) The UAT Test Cases – The Test Cases help the team to effectively test the application.
3) The Test Log – This is a log of all the test cases executed and the actual results.
4) User Sign-Off – This indicates that the customer finds the product delivered to their satisfaction.

Bug Life Cycle & Guidelines




In this tutorial you will learn about Bug Life Cycle & Guidelines, Introduction, Bug Life Cycle, The different states of a bug, Description of Various Stages, Guidelines on deciding the Severity of Bug, A sample guideline for assignment of Priority Levels during the product test phase and Guidelines on writing Bug Description.

 Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

 Bug Life Cycle:
In the software development process, a bug has a life cycle. The bug must go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.

  
The different states of a bug can be summarized as follows:
  
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed


Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
  
2. Open: After a tester has posted a bug, the tester’s lead confirms that the bug is genuine and changes the state to “OPEN”.
  
3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.


4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This specifies that the bug has been fixed and has been released to the testing team.
  
5. Deferred: A bug changed to the deferred state is expected to be fixed in a later release. There can be many reasons for deferring a bug: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

 6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same underlying problem, then one bug’s status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester re-tests it. If the bug is no longer present in the software, he confirms that the bug is fixed and changes the status to “VERIFIED”.
  
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
  
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
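To make the flow concrete, here is a minimal, hypothetical sketch of the same states as a Java enum with an allowed-transition check. The state names mirror the list above, but the transition table is only one illustrative reading of the descriptions, not a standard:

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    public class BugLifeCycle {

        enum State { NEW, OPEN, ASSIGN, TEST, VERIFIED, DEFERRED, REOPENED, DUPLICATE, REJECTED, CLOSED }

        // Illustrative transition table based on the stage descriptions above.
        private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
        static {
            ALLOWED.put(State.NEW, EnumSet.of(State.OPEN, State.REJECTED, State.DUPLICATE, State.DEFERRED));
            ALLOWED.put(State.OPEN, EnumSet.of(State.ASSIGN, State.REJECTED, State.DUPLICATE, State.DEFERRED));
            ALLOWED.put(State.ASSIGN, EnumSet.of(State.TEST, State.DEFERRED));
            ALLOWED.put(State.TEST, EnumSet.of(State.VERIFIED, State.REOPENED));
            ALLOWED.put(State.VERIFIED, EnumSet.of(State.CLOSED));
            ALLOWED.put(State.REOPENED, EnumSet.of(State.ASSIGN));
            ALLOWED.put(State.DEFERRED, EnumSet.of(State.ASSIGN));
            ALLOWED.put(State.DUPLICATE, EnumSet.noneOf(State.class));
            ALLOWED.put(State.REJECTED, EnumSet.noneOf(State.class));
            ALLOWED.put(State.CLOSED, EnumSet.noneOf(State.class));
        }

        // Returns true if a bug may move from one state to the other in this sketch.
        static boolean canMove(State from, State to) {
            return ALLOWED.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
        }

        public static void main(String[] args) {
            System.out.println(canMove(State.NEW, State.OPEN));      // true
            System.out.println(canMove(State.TEST, State.REOPENED)); // true
            System.out.println(canMove(State.CLOSED, State.OPEN));   // false in this sketch
        }
    }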
  
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.
  
Guidelines on deciding the Severity of a Bug:
Indicate the impact each defect has on testing efforts, or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.

 A sample guideline for assignment of Priority Levels during the product test phase includes:
  
1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a missing security permission required to access the function under test.
2. Major / High — A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3. Average / Medium — Defects that do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.
4. Minor / Low — Cosmetic defects that do not affect the functionality of the system can be classified as Minor bugs.

Guidelines on writing a Bug Description:
A bug can be expressed as “result followed by the action”: that is, the unexpected behavior that occurs when a particular action takes place is what goes into the bug description.
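For example, a hypothetical description following this pattern might read: “Confirmation pop-up does not appear after the user submits valid login details on the sign-in page.”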
  
1. Be specific. State the expected behavior that did not occur (for example, a confirmation pop-up did not appear) and the behavior that occurred instead.
2. Use the present tense.
3. Don’t use unnecessary words.
4. Don’t add exclamation points. End sentences with a period.
5. DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Always mention the steps to reproduce the bug.


So you want to be a Software Tester?



My best friend is actually one of the best software testers I have ever met. When I began testing for her consulting company, I sincerely thought I'd found the perfect job. I could stay at home and work independently.
I received my projects, I completed them and I got paid. A lot. It seemed the perfect scenario until one day I realized that I just didn't enjoy what I was doing. Testing was repetitious, often boring, and I found myself dreading the receipt of each and every new project. Trish, on the other hand, was a testing maniac. She saw each and every project as a challenge, and the satisfaction she gleaned from finding more bugs than any other tester was almost frightening. Even with her help and coaching, however, I just couldn't get it and finally realized that, even despite the excellent pay and flexible schedule, I just wasn't cut out for the wonderful world of testing.


I soon realized that, like every other job, testing required a certain personality type and I just wasn't it. Consequently, knowing exactly what I am not, I can tell those of you who are actually considering this profession exactly which personality traits will suit you to this industry. First of all, you have to be able to sit still for long periods of time; you need to be well organized and willing to systematically work through a project from beginning to end. If you're easily bored or find yourself jumping from one task to another, testing will prove to be frustrating and confusing.

Secondly, you've got to be just a little bit intuitive. Quite often, Trish would identify problems within a program that my mind simply could not have even considered. When I'd ask her how in the world she thought to test it that particular way, she'd simply reply, "I just knew." In the world of testing, you've got to trust your gut and in most cases, you'll find that instinctively, you knew just how a program would respond. Many other testers I've spoken with will dismiss the intuition and insist that experience makes all the difference, but quite honestly, I believe some people are just born for this job.

Thirdly, you've got to be able to focus on the little windows inside the big picture and most importantly, I believe you actually have to have a secret desire to break things. The best testers I know were always taking things apart when they were children and trying to put them back together with a different result. Testing is chaotic, it's fast paced and you're often working under extreme deadlines. Communication skills are a must as you attempt to show everyone else what they've done wrong without getting them angry with you.
In the good old days, testing was something that was done at the end of a project. Today, however, testing is a part of the process from the very beginning. The ability to work within a team is essential. While automated testing is making a few waves, I find it extremely difficult to believe that it will ever replace Trish or any of the other born-to-be testers of the IT world.
In fact, automation is not designed as a replacement for manual testing but rather simply supports the skills that most testers have already developed. If you're considering software testing as a possible profession, I'd highly suggest that you find a mentor. Someone who's been in the business. Someone like Trish. Then spend a day with them at their computer. Borrow their project and give it a run yourself. You'll know very quickly if you're meant for this job or not.


Friday, March 18, 2011

What is What in Testing

Testing Fida


Software testing is more than just error detection; 
Testing software is operating the software under controlled conditions in order to (1) verify that it behaves “as specified”, (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
  1. Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements.  [Verification: Are we building the system right?]
  2. Error Detection: Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should.
  3. Validation looks at system correctness, i.e. it is the process of checking that what has been specified is what the user actually wanted.  [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly.  Both verification and validation are necessary, but different components of any testing activity.
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is defects/errors/bugs) and to evaluate the features of the software item.
Remember: The purpose of testing is verification, validation and error detection in order to find problems – and the purpose of finding those problems is to get them fixed. 
 
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.
 
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in the scheme of things. A wide-angle view of the ‘customers’ of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation’s management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on ‘quality’ - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.  

“Quality Assurance” measures the quality of processes used to create a quality product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation.
It involves the entire software development process - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with, at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is ‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.
 
Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, reviews of all the documentation (not just for standardisation but for verification and clarity of the contents also). Overall Quality Assurance processes also include code validation.
A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.
 
Simply put:
- TESTING means “Quality Control”; and
- QUALITY CONTROL measures the quality of a product; while
- QUALITY ASSURANCE measures the quality of processes used to create a quality product.
 
The Mission of Testing
In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.
In well-run projects, the mission of the test team is not merely to perform testing, but to help minimise the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognise that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.