IST Clock

Friday, April 30, 2010

Basics Of Software Testing – 2

In my previous posts I discussed the basics of software testing and bugs. Continuing the series, I would like to discuss the software testing process followed in the industry. First we will talk about the software development life cycle, the SDLC. Whichever model your organization follows, all models are based on the pioneering WATERFALL model, which contains the following general phases –
- Requirement Analysis,
- System design,
- System development / coding,
- Testing, and
- Implementation & maintenance.

In brief, we analyze or study the present system and gain a detailed understanding of the client's requirements in the context of the system to be built. We then document these requirements in the form of an SRS/FRS/Use Cases/Specification document. Moving to the next step, we design the system on the basis of the requirements and produce System Design Document(s). Based on the SDD, developers code the system. Once the system is built and released for testing, testers test it against the documented requirements by executing test cases and adhering to a test plan. Once the system is ready for client release, it enters the last phase, under which the system is implemented at the client site; maintenance of the system can then be done either by the same organization or be outsourced to a different organization, depending on the terms and conditions in the initial contract.

The most important point to remember is that the output of every phase in the SDLC is a document. Even at the end of the last phase we prepare documents such as user manuals and other related material. Now, coming back to our main topic: where does the testing process start / fit in / come into the picture in the SDLC? The answer is: just after the requirement phase. As soon as the requirement phase is over and the SRS is frozen and ready for further use, the activities progress in two directions. One direction heads towards development and continues with the next phase of the SDLC. The other direction correlates to the fourth phase of the SDLC, the testing phase. Under this, we first prepare the “Test Plan”. The test plan is the bible of the whole testing activity to be carried out in a particular project. It contains various details, but the most important items, which are / should be in every test plan, are –
- Brief description/introduction/overview of the system.
- Intended parties / reference documents
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals

Just a small note on who creates the test plan: it is the responsibility of a senior member of the testing team, be it the team lead or project lead. Once the test plan is frozen, testers are clear about the schedule, what is to be tested and how, the approach to be followed, and their individual responsibilities. Now the testing can proceed to the next step: designing test cases/scripts. Designing test cases requires detailed knowledge of the system, and thus test cases are designed on the basis of the SRS/FRS/Requirement Specification. The best practice is to study the requirements, understand them, and then come up with TEST SCENARIOS. A test scenario is just a one-line description of what we are testing. Now, for testing this, you will perform some steps (Steps to be performed) which may or may not take some input data (Test Data) and have some result (Expected Result).

Thus, Test Case = Description + Steps + Test Data + Expected Result + Actual Result (only at the time of execution) + some optional components like defect ID, severity, priority etc.
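As a rough illustration, the composition above could be modeled as a small data type. This is only a sketch in Python; the field and class names are my own, not from any standard or tool –

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    """One test case, mirroring the formula above (names are illustrative)."""
    description: str                  # one-line test scenario
    steps: List[str]                  # steps to be performed
    expected_result: str              # what should happen
    test_data: Optional[dict] = None  # input data, if any
    actual_result: str = ""           # filled in only at execution time
    defect_id: Optional[str] = None   # optional components
    severity: Optional[str] = None
    priority: Optional[str] = None

# Example: one functional test case before execution.
tc = TestCase(
    description="Login with valid credentials",
    steps=["Open login page", "Enter username/password", "Click Login"],
    expected_result="User lands on the home page",
    test_data={"username": "amit", "password": "secret"},
)
print(tc.actual_result == "")  # True until the case is executed
```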

Once the test cases are created, they are to be reviewed by peers/BAs/senior members of the team. It's a good practice to review the test cases, as the creator of these test cases might miss some scenario, or the test cases may contain some language-related mistakes. Further, it's good to have a TEST CASE CREATION CHECKLIST to reduce the review time. As per the recommended process, review results should also be recorded in a TEST CASE REVIEW RECORD (after all, it helps in appraisals of testing team members :) ). Once the test cases/scripts are frozen, they are ready to execute on the build. When the developers release the build for testing, WE the testers execute the test cases and start finding the defects. Defects are then logged, fixed, resolved, closed and tracked carefully. In the test execution phase there are iteration cycles, which depend on bug fixes and the number of critical defects open/fixed. The decision to stop testing depends on many factors like –
- Testing budget of the project.
- Project deadline and test completion deadline.
- Critical test cases are successfully completed with no show-stoppers, or the test cases that do fail do not involve any show-stoppers.
- The client requirements are met to an agreed extent.
- Defect rates fall below a certain specified level & high-priority bugs are resolved.
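The factors above can be captured as a simple exit-criteria check. The thresholds and flag names below are hypothetical, just to show the shape of such a decision –

```python
def can_stop_testing(budget_exhausted: bool,
                     deadline_reached: bool,
                     open_showstoppers: int,
                     open_high_priority: int,
                     defect_rate: float,
                     max_defect_rate: float = 0.5) -> bool:
    """Return True when the (illustrative) stop-testing criteria are met."""
    # Budget or deadline can force a stop regardless of quality.
    if budget_exhausted or deadline_reached:
        return True
    # Otherwise stop only when the quality criteria are satisfied.
    return (open_showstoppers == 0
            and open_high_priority == 0
            and defect_rate < max_defect_rate)

print(can_stop_testing(False, False, 0, 0, 0.2))  # True
```

In practice these numbers would come from the bug tracking tool and the project schedule, not be hard-coded.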

Once the testing is over, the system is released for customer usage.

END of POST. Needless to say, as always, comments/suggestions are most welcome.

Tuesday, April 27, 2010

Basics of software Defects/Bugs

In this post, I am sharing my understanding of what a software bug is, what should be done when we find one, and the overall bug lifecycle and related content.

A defect or bug in a software system is a state/scenario where the software is functioning or acting in a manner different from what has been specified in the requirements. In simple words, it is a deviation of the software from the specified requirements; it is the state of the software when it behaves in some unexpected manner. A software tester's main job is to find these defects or bugs in a software system. Now, when a defect is found it has to be recorded carefully. Most organizations use software tools for recording and tracking these bugs, and many of them use standard WORD/EXCEL formats to record them. Whatever the method, the most important part is that these bugs are recorded; however, recording bugs in a software tool is much more convenient, as tracking is made easy by using these tools. There are many bug tracking tools available in the market, both freeware and licensed.

As soon as a tester finds a bug, the first step before logging/recording it is to reproduce it again so as to get the reproducibility rate. In case it is an obvious deviation from the requirements, then we should not waste time reproducing it and it should be logged then and there. Now, once we have confirmed that there is a bug in the current system, the next step is to log it in the bug tracking tool / standard template used in the organization. The reason we log it is to bring it to the attention of the developer. Sometimes, higher management / the client also wants detailed defect reports, the number of bugs in the system, etc.

Recording a bug requires some skill; there are some mandatory or standard pieces of information given with every bug/defect. Regardless of what tool/template you use in your organization, every bug should have the following information –

Short description – A short and concise description of what the defect is and in which module it occurred.

Steps to reproduce – Detailed steps to reproduce the defect, with test data. Test data is an optional part; if the defect occurs only with some specific data set, then it has to be communicated.

Actual Results – What is actually happening after performing the above-mentioned steps.

Expected Results – What should happen as per the requirement specification.

Environment – Details of the environment in which the defect is occurring.

Severity – The impact of the defect on other parts of the application.

Priority – How soon the defect has to be fixed.

Version – Version of the application in which the defect occurred.

Assignee – Every defect is assigned to somebody so that this person can start working on it / take any immediate action on the defect.

Apart from the above, we can include some additional information like the reproducibility rate and other environmental details. The thumb rule while writing a defect report is that there should be enough information to reproduce the defect at the developer's end. Information must be to the point, neither too much nor too little, as both these situations are dangerous. Excess information confuses the developer, and with less information developers will keep revolving around your chair :)
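A minimal defect report covering the fields above might be rendered like this. The layout and the sample defect are my own illustration, not taken from any particular tool –

```python
def format_bug_report(report: dict) -> str:
    """Render a defect report with the standard fields discussed above.

    Missing mandatory fields are flagged rather than silently omitted,
    so an incomplete report is easy to spot at review time.
    """
    fields = ["Short description", "Steps to reproduce", "Actual result",
              "Expected result", "Environment", "Severity", "Priority",
              "Version", "Assignee"]
    return "\n".join(f"{f}: {report.get(f, '<MISSING>')}" for f in fields)

# A hypothetical, complete defect report:
report = {
    "Short description": "Login button unresponsive on Settings module",
    "Steps to reproduce": "1. Open app  2. Go to Settings  3. Tap Login",
    "Actual result": "Nothing happens on tap",
    "Expected result": "Login dialog opens as per the SRS",
    "Environment": "Android 2.1 emulator",
    "Severity": "High",
    "Priority": "P1",
    "Version": "1.0.3",
    "Assignee": "dev-team",
}
print(format_bug_report(report))
```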

Now, a defect also has a lifecycle. When a tester finds a defect and logs it into the specified system, it is in the state NEW. From this, the defect enters the OPEN state when the lead assigns it to a developer to work on it. As the developer starts working on it, the defect is in the WORKING state, and when the defect has been resolved by the developer it enters the RESOLVED state. This is the state in which it comes back to the tester for verification. Now, when the tester verifies it and finds that it is fixed, the tester closes the defect and the state is CLOSED. If it is not fixed, the tester reopens it with the state REOPEN, and the normal cycle goes on again till it gets closed. This is a very general bug life cycle with no variations, but I wish it were always this simple :). From the state NEW, a defect can move to DEFERRED in case the development people feel that this defect can be ignored right now and fixed in a later release. A defect in the NEW state can also move to NOT A DEFECT / NOT REPRODUCIBLE, depending on the current state of the requirements or the information provided in the defect report. In both these cases, it again comes back to the tester, and now the tester has to either close it or provide more information to the developer and reopen it.
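The lifecycle above can be sketched as a small state machine. The transition table is my reading of the cycle just described, not the workflow of any specific tool (real trackers vary) –

```python
# Allowed defect-state transitions, following the lifecycle described above.
TRANSITIONS = {
    "NEW":      {"OPEN", "DEFERRED", "NOT A DEFECT", "NOT REPRODUCIBLE"},
    "OPEN":     {"WORKING"},
    "WORKING":  {"RESOLVED"},
    "RESOLVED": {"CLOSED", "REOPEN"},   # tester verifies the fix
    "REOPEN":   {"OPEN"},               # the normal cycle repeats
    "DEFERRED": {"OPEN"},               # picked up in a later release
    "NOT A DEFECT":     {"CLOSED", "REOPEN"},
    "NOT REPRODUCIBLE": {"CLOSED", "REOPEN"},
    "CLOSED":   set(),                  # terminal state
}

def move(state: str, new_state: str) -> str:
    """Advance a defect to new_state, or raise if the move is not allowed."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"Cannot move defect from {state} to {new_state}")
    return new_state

# A defect following the normal cycle:
state = "NEW"
for nxt in ["OPEN", "WORKING", "RESOLVED", "CLOSED"]:
    state = move(state, nxt)
print(state)  # CLOSED
```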

That’s the end of the post and as always comments/suggestions are welcome for improvement. I would also like to discuss and correct my understanding (if wrong) through this post/blog.

Thanks & Regards,

Monday, April 26, 2010

Basics of Software testing- 1

Basics of Software testing –

What is “Software Testing”? There are many definitions available on the internet given by various scholars in this field, and everybody's definition is different. Ask this question to a person with 1 year of experience and he will give you a bookish answer; ask it to a person with 4 or more years of experience in this field and he might not be able to give you a perfect answer. I would like to define it in my own terms, which may be similar to some scholar's definition. As per my experience, software testing is an activity/phase/process in the SDLC wherein we test whether the software is working as per its requirement specification, and in turn we measure the quality of the software developed. It is a process of executing a program or system with the intent of finding errors and evaluating it for conformance with the requirements specification.

Now, on the basis of the approach followed, testing can be divided into –
- White Box
- Black Box

When we say that we are following/doing “White Box testing”, we mean a testing activity in which we have access to the code of the system/module/function we are testing and are aware of the logic implemented in it. This knowledge may help us find the exact place where the bug/defect has occurred or where the logic is wrongly implemented.

Coming to “Black Box testing”: in this type of testing we do not have access to the actual code of the system, nor do we have knowledge of the logic implemented. Here, the tester is more concerned with the output he/she gets from the system on the basis of the input given.

On the basis of levels, there are four types of testing –
- Unit testing
- Functional testing
- Integration testing
- System testing

Unit testing is the primary testing activity, carried out at the code level. It intends to find faults/bugs in the smallest unit of a project – a program/function. It follows the “WHITE BOX” approach and is different from debugging in the sense that debugging is executing the program step by step to diagnose a problem, whereas unit testing is a broader activity which covers not only executing a program but also checking the placement of elements on the screen, performing nominal operations with them, and related activities. For example, if my program accepts input in a text field ‘T’ and it is not accepting it, then debugging comes into the picture: by executing the program step by step we will find out where the problem is. In the same example, unit testing refers to testing whether the user is able to input the characters, whether fields are displayed in the proper place, and so on. Developers are responsible for UNIT TESTING.
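As a concrete illustration of testing the smallest unit, here is a sketch using Python's built-in unittest module. The function under test and its test class are my own example, not from the post –

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Smallest testable unit: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # The unit's expected result for nominal input.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        # The unit's behavior for invalid input.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)
```

Such tests live alongside the code and would typically be run by the developer with `python -m unittest` before releasing the build to testers.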

Functional testing is a step beyond unit testing. It refers to testing the functionality of the application / modules of the application as per the requirement specification. Test engineers are responsible for it, and they write test cases/scripts for performing functional testing.

Integration testing is another step further in testing an application: it refers to testing the combined parts of the application and the interaction of different parts of the application with each other. Under this, different application parts (modules) are combined together, and the application behavior, the data flow between these parts, and the interaction between them are tested.

System testing is the last testing activity conducted on a software system/application. It refers to testing the system as a whole and confirming whether it meets the requirements or not. Apart from functional requirements, system testing also covers non-functional requirements like performance, usability, security, reliability, maintainability and other related factors.

As always, suggestions/comments are welcome. Also, I would like inputs on what other topics could be written about.


Thursday, April 22, 2010

Manual Testing V/S Automation Testing...

In recent days, I have been following a discussion on a very popular professional website, LinkedIn, and the discussion is about “Which one is more crucial in testing? Manual testing or Automation testing?"

In my opinion, manual testing is more crucial and important compared to automation. In the following content I am citing my reasons as to why I consider “Manual Testing” more important than “Automation Testing”.

Firstly, manual testing is the initial level of testing carried out on every software product. Looking at the stages of the STLC (Software Testing Life Cycle), test planning is a manual activity wherein we analyze the requirements/product and come up with a concrete approach to how & what we will test and who will test it. After this, we commence the “Test Design” phase, wherein we design high-level test scenarios by analyzing the requirement document. This is again a manual activity, and there is no alternative to it. On the basis of these scenarios we design detailed test cases which we execute on the application. It's only after the initial iterations that we identify the test cases we can automate. Generally, the first test cases to automate are the smoke/sanity-level test cases, which decide whether the build under test is fit for further testing or not. Once the build gets stable, we automate more test cases related to the different modules of the application. The decision to automate further depends on how stable your application / the different parts of your application are.

Secondly, not all testing activities can be automated. Testing related to the usability of an application/product cannot be automated, as it is very subjective. Moreover, when we release the software for BETA usage and urge users to test it and provide their feedback, this is more of a manual activity than automation, as most end users do not have automation tools at their site.

Thirdly, automation testing covers or relates to specific activities or types of testing, which majorly include regression and load testing.

Moreover, in my opinion, automation testing is more successful in the case of a PRODUCT rather than projects (customized solutions developed for a specific process in an organization). The reason for this is that a PRODUCT is more stable and already running in the market, whereas in a project requirements change often and it is tedious to maintain the automation scripts.

So, at the end, I would say that “Manual Testing” is more important, as it is the initial activity and has a wider scope, because only on the basis of this can we decide what to automate and when.

As always, this is an open discussion and suggestions/comments are most welcome.

Wednesday, April 21, 2010

The 10 rules of e-mail etiquette

In this blog, I am sharing some tips, published on a very popular Indian website (rediff), for writing effective emails. The idea of posting this came to my mind due to a recent incident. One of my close friends asked me to send some resumes for an off-campus drive at his company. I asked some of my friends for referrals, and then one person told me that his cousin was looking for a job. I told him to forward the CV to my friend, mentioning “Referred by Amit Jain”. To my surprise, that person forwarded his CV with the content of the email consisting of “Reference by amit jain” as it is, nothing more, nothing less. I wondered how an engineering degree holder does not know how to email. His email did not contain any salutation, greeting, subject line… nothing, just the plain sentence “Reference by amit jain”, and that too grammatically wrong (any name should start with a capital, and not as “amit”). So after this I thought of sharing a very useful post which was published on rediff. Here you go –

The 10 rules of e-mail etiquette
Rule 1: Do not skip the head or tail of the e-mail
Salutation, body and sign-off are three principal parts of the e-mail message. Be absolutely sure not to miss any of them unless you are writing to a college friend in a casual setting.

Rule 2: Use simple and direct salutation; the same for sign-off
Dear Dilip, Hi Dilip, Dear Mr Kumar are proper professional salutations.
Do not get carried away while showing respect and do not borrow from letter-writing assignments you did in class VIII. "Respected Sir" is way too deferential and so is "Honorable Mr Kumar". While signing off too, keep things simple.

Rule 3: Use smart subject lines
Ideally, the purpose of your message should be clear in the subject line itself. Use as much information in the subject line as possible. Leaving out the subject field is highly unprofessional and so is using something meaningless like 'Hello', etc.

"Resume for s/w engineer advertised in TOI dated "

Rule 4: Avoid 'bureaucratic' sentences
Professional or business English is different from the bureaucratic English typically used in government offices. Using unnecessary long-winded sentences just makes the message difficult to understand.

Rule 5: Use as few words as necessary
Long e-mails take longer to read and tell the reader you are not a very efficient writer. Any word or group of words which can be shortened should be shortened. Professional communication should be sharp, unnecessary words have the potential of confusing the reader. Brevity is the soul of wit. It indeed is the soul of good professional writing.

Rule 6: Answer e-mails the same day
Delay in responding to e-mails gives out an impression of carelessness and unprofessionalism and also, that responding to that particular person is not high on your priority list. A very useful rule of thumb is to read the e-mails the same working day and any e-mail that you receive during the day should be replied to before the end of the working day, even if it means sending back a short note saying that you will get back to him/her soon with a detailed response.

Rule 7: Be careful with attachments
In these days of spam and Trojans and viruses, attachments are risky business. Try to have a 'no attachment' policy unless the person is expecting one or an attachment needs to be sent. Need is the key, if an attachment is not required, skip it. Several spam filters summarily send the mails with attachments to junk and that's another reason to use attachments sparingly.

Rule 8: Avoid using excessive capitals
Capitalization of unnecessary words amounts to shouting in the cyber world. Use capital letters only to begin sentences and for names -- exactly as they taught you in school grammar. Using capitalization to stress or emphasize a point is often considered rude and aggressive.

Rule 9: Be careful with 'reply all' and forwards
If the message does not need to be read by all in the mailing list, do not 'reply all'. These days everyone receives so much junk mail that adding an extra irrelevant email to them reduces your credibility and is bad manners. Similarly, do not forward chain letters or hoaxes or, 'Send it to 25 people within 15 minutes else your cat will die' email.

Rule 10: Avoid SMS language
SMS language is only for SMSes. Use proper English sentences and words while writing emails.
"Gr8 2 hear frm u" may be a nice way to greet a friend on SMS or orkut/facebook, but it can kill the professionalism of your e-mail. Avoid it.

If clothes make a man, then e-mails make a professional. Use it smartly as a tool to create a powerful professional impact.

Major points while testing a Mobile Application

This is my first blog post, and I am sharing some basic points to be considered while testing a mobile application (currently, I am working on one) –

• Navigation of the application – We need to verify the whole navigation of the application and find the dead ends. Traverse the whole application, every possible path, and come back to the starting point. In this way we can find dead ends, inconsistencies in the navigation, and a lot of other things.

• GUI of the application – Most mobile applications are developed for one platform / screen resolution and ported to another platform (platform porting) or to devices with a different resolution (device porting). In this process, many GUI issues get introduced into the application. These issues should be reported and addressed carefully.

• Behavior of the application on Call/SMS (Suspend/Resume), especially on the entry screen & popup screens – This is one of the most important activities while testing a mobile application, be it a game / application residing on the device or an application using communication protocols. Mobile applications using internet / client-server communication need to be tested for this scenario on the screens/processes establishing network connections.

• Behavior of the application on pressing the CLEAR key & END key – This is also one of the most important testing activities for a mobile application. In the default implementation, the CLEAR key takes the user back to the previous screen and the END key ends the application. This needs to be tested carefully, as there are chances of an application crash. In my experience, I found one interesting bug for developers with the CLEAR key. There was a data entry screen with a default letter displayed in the text box. When the user tapped on the text box, pressed the CLEAR key to clear the data, and typed special characters into the text box, the application crashed.

• Behavior of the application when memory is almost full (MaxFileCount) – This test is important when your application creates any file(s) on the user's device. Many times developers do not handle this situation well and the application crashes with an unfriendly error message (“Exception found..”).

• Behavior of the application on an application-directed SMS (syntax is “//brew::”) – This test is one of the most important for BREW applications. Here we test the behavior of the application by sending a message to invoke the same application while it is running in the foreground.

• In BREW applications, apart from the functional test cases prepared, we also need to execute the standard set of test cases provided by the NSTL.

• Sound functionality of the application (if applicable)

• Timer functionality (specially on Suspend/Resume)