
Friday, June 4, 2010

Functional & non functional requirements


Today’s post is about ‘Functional & non functional requirements’. I found a lot of articles on the internet explaining what functional and non-functional requirements are in reference to a software system. This is also a common question asked in software testing interviews :)

In all software systems there are basically two types of requirements: functional requirements and non-functional requirements.

Functional requirements of a software system refer to the stated features the software should have, or the things it should do, in order to satisfy the business needs. They describe what the software should do and how the user should use it to achieve a particular business flow or piece of functionality. Technically, functional requirements divide the software system into different modules with respect to the various operations, or logical groups of operations, and define the attributes/features of these modules. As stated above, functional requirements embody the business rules, and changing them will definitely change the functionality of the software system.

IEEE defines a functional requirement as a system/software requirement that specifies a function that a system or component must be capable of performing. These are requirements that define the behavior of the system, that is, the fundamental process or transformation that the software and hardware components of the system perform on inputs to produce outputs.

Typical functional requirements of a software system can be –
- Business Rules
- Transaction corrections, adjustments, cancellations
- User authentication and authorization
- Certification requirements (subject to certification compliance)
- Legal or Regulatory Requirements

Non-functional requirements are the other set of requirements a software system should have so that the system is more usable and reliable. They cover all the remaining requirements not addressed by the functional requirements, and they cater to the user rather than to business rules. They are not tasks the system will perform; rather, they are quality standards the system is good to have, and thus they do not affect the system’s functionality. They are not mandatory requirements, but to be a good piece of software from the user’s perspective it is good to have them. Often these requirements are used to compare products of similar functionality: for example, there are lots of total banking automation (TBA) solutions available in the market, but the decision to choose one depends on how reliable and secure each system is. Typical non-functional requirements are:

- Performance – response time, throughput, utilization
- Scalability
- Reliability
- Recoverability
- Maintainability
- Security
- Usability
- Interoperability
- Compatibility
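Non-functional requirements can often be checked in code alongside functional ones. Here is a minimal sketch; the operation and its 2-second response-time target are hypothetical, assumed only for illustration:

```python
import time

def fetch_account_summary():
    # Hypothetical operation standing in for a real system call.
    return {"balance": 1000}

def within_response_time(operation, max_seconds):
    """Time one call and compare it against a performance target."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= max_seconds

# Functional check: the output is correct.
assert fetch_account_summary()["balance"] == 1000
# Non-functional check: the call responds within the target time.
assert within_response_time(fetch_account_summary, max_seconds=2.0)
```

The same idea scales up: the functional assertion stays the same while the non-functional target (response time, throughput) comes from the requirement document.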

Monday, May 31, 2010

Peer Reviews


Welcome friends, a new week has just started and the Monday blues are in full swing. Let’s start this week’s blogging with a less-talked-about topic: peer reviews.

What is it? In simple language, this term describes a review of artifacts/documents by peers in the team, wherein the author of a document presents it to peer members with the purpose of getting feedback to improve the document before making its final version.

The process of peer review is not very tedious: the document author presents the document to a group of reviewers who are members of the same team, explains the purpose, objective, and details of the document, and asks for feedback on it. From the author’s point of view it is good to have a checklist ready before submitting the document, just to reduce the reviewing time and avoid small mistakes.

The purpose behind this activity can be –
- Validate the correctness of the document.
- Validate the coverage of the document with respect to its purpose. For example, a peer review of a test plan/strategy document should verify that the plan/strategy is in accordance with the project/product under test and covers every aspect of testing that can be carried out.
- Validate the conformance of the document with the process followed in the organization, the organization’s standards, and client needs, if any.

With respect to software testing, there are many documents for which we can conduct peer reviews, such as test plans, test strategies, test cases and scripts, and other such documents. The benefits of the peer review process are achieved only when the review comments are documented and, in turn, implemented properly. As a suggestion, reviews should be done by senior members of the team who have good command over the product/system and good domain knowledge.

Friday, May 28, 2010

Ad-hoc testing and Exploratory testing

Today’s post is just a MICRO blog about Ad-hoc testing and Exploratory testing

These two terms are often used interchangeably by the development guys; I wonder that senior devs in my team don’t even know the difference between black box and white box testing. Anyway, coming back to our main topic, both ad-hoc and exploratory testing –

- are system testing techniques wherein we test the whole software system.
- are undocumented forms of testing, in the sense that the tests we perform are not formally documented; the scenarios in this testing are apart from the documented test cases.
- are informal testing techniques performed on a small scale, generally after the completion of formal testing or when we have less time.

The main difference between the two is that to perform ad-hoc testing a tester should have domain/product knowledge, while no such knowledge is required for exploratory testing; the success of ad-hoc testing depends entirely on the knowledge and experience of the tester. Exploratory testing, as the name suggests, means exploring the system with the intent of learning it and also finding bugs. The main aim of a tester performing exploratory testing is to gain knowledge about the system and report bugs after getting them validated by a person with sound knowledge of the product, while the sole purpose of ad-hoc testing is to find bugs in the system, as the tester already has the necessary knowledge about the system/product under test. One important application of ad-hoc testing is when we want to test the application beyond the documented or obvious tests, or want to focus on the most critical areas when we have little time for testing and therefore cannot prepare test cases. One such application of exploratory testing is when we do not have enough documentation available and want to learn more about the application so that we can prepare a formal document about its functionality.

Thursday, May 27, 2010

Positive, Negative testing and test execution

Some days ago a friend of mine (a fresher in the testing field) asked me about positive testing, negative testing, and the ideal flow of test execution, and his understanding inspired this post. As per my understanding, positive testing is ‘test to pass’, that is, testing done to confirm that the system meets the specified requirements. In these tests we usually do not input invalid conditions; the main concentration is on finding defects/bugs by executing the system as per its normal functionality. We concentrate on the written requirements and normal flows and validate the system against them. In other words, we validate that the system is working as per the documented requirements.

Coming to negative testing, it is ‘test to break’. Now that we are done with the normal testing, we try to break the system with negative scenarios such as invalid inputs, unusual load, and other abnormal conditions, so as to learn the behavior of the system under them. Suppose a website is expected to handle a load of 1,000 hits at a time; in negative testing we may try to overload it, along with abnormal communication conditions.

Coming to the test execution part, I follow a thumb rule: as a tester my job is first to validate that the system is working as per the written requirements (test to pass), and only after that to try the negative scenarios (test to break). Remember, our first job is to execute the documented test cases, since this is what we have agreed to deliver to the client and these are the tests for which the client expects a test report. So the order of execution should be positive tests first, then negative tests; after positive testing we can try our hand at some undocumented testing (ad-hoc and exploratory testing).
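The ‘test to pass, then test to break’ order can be sketched in code; the `withdraw` function below is a hypothetical system under test, used only for illustration:

```python
def withdraw(balance, amount):
    """Withdraw from an account; rejects invalid amounts."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive tests ("test to pass"): valid inputs, documented behavior.
assert withdraw(100, 30) == 70
assert withdraw(100, 100) == 0

# Negative tests ("test to break"): run after the positives pass.
for bad_amount in (-5, 0, 101):
    try:
        withdraw(100, bad_amount)
        raise AssertionError("expected the invalid input to be rejected")
    except ValueError:
        pass  # the system handled the invalid input gracefully
```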

Requirement traceability matrix (RTM)

A requirement traceability matrix (RTM), in simple words, is a document containing the requirements mapped to the test cases/scripts covering each particular requirement. It is one of the most important documents created during a project life cycle and is very important to software testing activities. Going further, an RTM can contain more details, such as the name of and brief detail about the requirement, details of the test case/script document containing the test case(s)/script(s) handling the requirement, the priority and severity of the requirement, the verification status of the requirement (whether the test cases have been executed or not), and the status of the requirement (Pass/Fail – the requirement is catered for in the software system and fully functional without any bug). These details are not mandatory, but they definitely add value to the RTM document, as they provide a complete picture of each requirement.

Creating an RTM – there are hundreds and thousands of templates available on the internet; just pick one of them or create your own. In the simplest form, you just need a table containing the requirement ID (as mentioned in the SRS/FRS/requirement specification document) and the test case ID of each test case in which that requirement is tested. In addition we can provide the other fields mentioned above. We can also create a single RTM document covering both unit and functional/system test cases.

Benefits – an RTM is very beneficial in software testing as it –
- Tracks all requirements and whether or not they are met by the tests.
- Helps ensure that all system requirements are covered during the verification process.
- Above all, it is a very good tool from both the management and tester perspectives. For testers, it serves the benefit of knowing how many requirements are covered; from management’s perspective, it is a tool from which they can gather all the required information pertaining to a particular requirement.
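As a rough illustration, the core of an RTM is just a requirement-to-test-case mapping, and a coverage gap is simply a requirement with no test case against it. The IDs below are hypothetical:

```python
# Hypothetical requirement IDs mapped to the test case IDs covering them.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test case yet -> a coverage gap
}

# Left-out requirements: those with no covering test case.
uncovered = [req for req, tcs in rtm.items() if not tcs]
print("Uncovered requirements:", uncovered)  # Uncovered requirements: ['REQ-003']
```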

Wednesday, May 12, 2010

Writing effective test cases

Tips for creating effective test cases –

Today’s post is about writing effective test cases. Many people jump into test case creation just after reading the requirements, which is not the right way; test cases written like this will be very limited, even though they cover the requirements. Here are points we should follow while writing test cases –

# 1 – Review the requirement document fully. Study all the requirements of the module you are working on thoroughly and make notes. If possible, try to get a detailed understanding of the requirements by discussing them with the BA team members. It is better to make a query log, document all the queries and corresponding resolutions, and share it with the team. This will help develop the team’s confidence in you and your confidence in the system to be built. It will also help in validating your understanding of the system.

# 2 – Do brainstorming: after reading the requirements, discuss them with peer members and make notes of the discussion. Such discussion helps derive more test cases which one might otherwise miss. Remember, everyone has a different perspective, and such discussions can help an individual find more cases for testing the application.

# 3 – After doing the above exercises, do not jump into the test case creation part; rather, first prepare high-level test scenarios in relation to the requirements. You may or may not discuss these scenarios with peer members.

# 4 – It is advisable to maintain a “Test Case Checklist” containing the general rules to be followed while creating test cases: naming conventions should be in accordance with what has been decided, there should be no spelling mistakes, some specific words should or should not be used in test cases, and so on. Many a time an organization has such a checklist ready; if not, you can maintain your own. Such a checklist helps a lot while submitting test cases for peer/lead review and saves a lot of review time.

# 5 – Once you are ready with the high-level test scenarios, move on to test case creation, wherein you create test cases in the organization-specific template, mentioning the detailed test steps, test data, expected results, and other related details.

# 6 – Have a “Traceability Matrix”. In case your organization does not maintain one, maintain it at your end. This is for the obvious reason of mapping the requirements to the test cases. It will also help in finding left-out requirements and redundant test cases, if any.

Thumb Rule – Do enough , fast and effective paper work/brainstorming before moving on to actual test case creation.

Monday, May 10, 2010

Regression Testing and Retesting

These two terms are often used interchangeably by inexperienced testers or experienced developers/PMs who are not acquainted with testing. Anyway, I am not pointing at them in this post, just explaining the difference between the two terms.

First, retesting: it is the term used to describe the testing effort of verifying the bugs or defects logged against the previous build and taking appropriate action on them. By appropriate action, I mean closing/deferring/reopening the defect. To be specific, “retesting” is a subset of “regression testing”. Coming to the superset, “regression testing” is testing of the module(s) in which bugs were logged for the previous build, and also the interrelated modules, just to make sure that the bug fixes have not introduced new bugs into the system and that the modules are working fine overall. Regression testing has a wider scope than retesting because in the former we cover the whole module and also the other interrelated modules, rather than concentrating on the bug fixes only.

Thanks & Regards,
Amit

Tuesday, May 4, 2010

Basics of Software Testing - 3

Friends, starting the first post of May with a basic document in testing: the “TEST CASE”.

Test case – a test case is one of the major documents prepared while testing any software. Generally, a test case is a document containing some precondition, the steps to be followed, the test data to be given (optional, as some tests may not need any test data), and the expected results after performing the steps, for testing a particular requirement. Thus a test case must contain the following essential components –
- Test case ID / #
- Precondition.
- Objective of the test.
- Steps to be followed.
- Test data.
- Expected Result.

Apart from the above, some organizations include other heads in test case document such as –
- Defect ID
- Priority
- Type
- Requirement ID/#

Now coming on to details of above heads -

Test case ID is the unique identifier for a particular test case. It can be numeric/alphanumeric, as agreed upon.
Precondition is a state/condition which is essential to execute the given test case. A very common example: if I am executing test case(s) for a login web page, then the preconditions can be as follows –
- Browser should be installed and running on the PC.
- URL for login should be present.
Objective of the test case is the scenario we are testing. Continuing the above example, my objective is to test the functionality of the login screen. There can be another test case with the objective of testing the user interface of the login screen.
Steps to be followed are the detailed sequence of steps to be followed while executing a particular test case.
Test data is the data to be keyed in while executing the test case. Continuing the login screen example, test data can be User Name, Password.
Expected Result is the output/results expected after performing the steps as mentioned in “Steps to be followed” and with the test data keyed in.
Actual Result is the actual output observed after executing the steps given in the test case. It may or may not differ from the expected result, and it determines the status of the test case.
Status of the test case can be PASS/FAIL based on the actual result. It can also be NOT RUN/TESTED based on the execution status.
Defect ID is the unique identifier of the defect logged against the failed test case.
Priority of the test case is how important the test case is. It can be HIGH/MEDIUM/LOW or P1/P2/P3, depending on how critical the test case is. It is determined by the functionality/scenarios that the test case covers.
Type of the test case can be functional, UI, or database, depending on the requirement the test case covers.
Requirement ID is the unique identifier of the requirement which the test case covers. The purpose of including the requirement ID in the test case document is to make sure that each and every requirement is covered, and also to determine which test case covers which requirement.

Based on the above, we can now design a generic template for writing test case(s), which can be as follows –

Precondition:

- Test Case ID
- Priority
- Type
- Requirement ID
- Objective
- Steps to be followed
- Test Data
- Expected Result
- Actual Result
- Status
- Defect ID
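For illustration, the template above could also be represented as a small data structure; the field names mirror the heads discussed earlier, and the sample values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the generic test case template."""
    test_case_id: str
    objective: str
    precondition: str
    steps: list
    expected_result: str
    test_data: dict = field(default_factory=dict)
    actual_result: str = ""
    status: str = "NOT RUN"   # PASS / FAIL / NOT RUN
    defect_id: str = ""       # filled only when the test case fails

# A hypothetical test case for the login-screen example.
login_tc = TestCase(
    test_case_id="TC-LOGIN-001",
    objective="Verify login with valid credentials",
    precondition="Browser running; login URL reachable",
    steps=["Open login page", "Enter user name and password", "Click Login"],
    test_data={"user": "amit", "password": "secret"},
    expected_result="User lands on the home page",
)
assert login_tc.status == "NOT RUN"  # status is set only during execution
```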

Some basic points while preparing/reviewing/executing test cases –
- Read the requirement document fully and get the full understanding of the requirements.
- First prepare high-level test cases or test scenarios describing what we are going to test and the possible different states of what is to be tested.
- After this move on to detail test case writing with the steps, test data, expected results.
- Have a checklist ready before preparing test cases; this will help at the time of reviewing them. This checklist should not contain details but should point out generic points applicable to all test cases. While submitting the test cases for further review, validate them against this checklist and submit it along with the test case document.
- While reviewing the test cases, reviewer should also validate the test cases against the checklist.
- All review points should be documented in a proper format. Avoid sending review points in email; instead, record them in a proper document and send that. A separate copy can be kept in a central repository. Another advantage is that a document can be amended further, whereas with email the trail gets long and it is very tedious to keep track of it.
- While executing the test cases, try to identify the important test cases and prioritize them. This will help identify the test cases to be executed in ‘sanity level’ testing on forthcoming releases.
- Also try to execute tests with combinations of test data other than the documented test data.
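The last point, trying combinations of test data beyond the documented set, can be sketched as a simple table-driven loop; the validation rule and data values below are hypothetical:

```python
def is_valid_login(user, password):
    # Hypothetical rule: non-empty user name, password of 6+ characters.
    return bool(user) and len(password) >= 6

# Documented data plus extra boundary combinations, with expected outcomes.
cases = [
    ("amit", "secret1", True),   # documented happy path
    ("amit", "ab", False),       # too-short password
    ("", "secret1", False),      # empty user name
    ("", "ab", False),           # both inputs invalid
]
for user, password, expected in cases:
    assert is_valid_login(user, password) == expected
```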


END OF POST. As always, suggestions/comments are most welcome.

Thanks & Regards,
Amit

Friday, April 30, 2010

Basics Of Software Testing – 2

In my previous posts I discussed the basics of software testing and bugs. Continuing the series, I would like to discuss the software testing process followed in the industry. First we will talk about the software development life cycle, the SDLC. Whichever model is followed in your organization, all models are based on the pioneering WATERFALL model, which contains the following general phases –
- Requirement Analysis,
- System design,
- System development / Coding ,
- Testing and
- Implementation & maintenance.

In brief, we analyze or study the present system and get a detailed understanding of the client’s requirements in the context of the system to be built. We then document these requirements in the form of an SRS/FRS/use cases/specification document. In the next step we design the system on the basis of the requirements and produce the system design document(s). Based on the SDD, developers code the system. Once the system is built and released for testing, testers test it against the documented requirements by executing test cases and adhering to a test plan. Once the system is ready for client release, it enters the last phase, under which the system is implemented at the client site; maintenance of the system can then be done either by the same organization or outsourced to a different one, depending on the terms and conditions of the initial contract.

The most important point to remember is that the output of every phase in the SDLC is a document. Even at the end of the last phase we prepare documents such as user manuals and other related material. Now, coming back to our main topic: where does the testing process fit into the SDLC? The answer is: just after the requirements phase. As soon as the requirements phase is over and the SRS is frozen and ready for use, the activities progress in two directions. One direction heads towards development and the next phase of the SDLC. The other direction correlates to the fourth phase of the SDLC, the testing phase. Under this we first prepare the “Test Plan”. The test plan is the bible of the whole testing activity to be carried out in a particular project. It contains various details, but the most important items which are (or should be) in every test plan are –
- Brief description/introduction/overview of the system.
- Intended parties / reference documents
- Test items
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals

Just a small note on who creates the test plan: it is the responsibility of a senior member of the testing team, be it the team lead or project lead. Once the test plan is frozen, testers are clear about the schedule, what is to be tested and how, the approach to be followed, and their individual responsibilities. Now testing can proceed to the next step of designing test cases/scripts. Designing test cases requires detailed knowledge of the system, and thus test cases are designed on the basis of the SRS/FRS/requirement specification. The best practice is to study the requirements, understand them, and then come up with TEST SCENARIOS. A test scenario is just a one-line description of what we are testing. To test it, you will perform some steps (steps to be performed) which may or may not take some input data (test data) and have some result (expected result).

Thus, Test Case = Description + Steps + Test Data + Expected Result + Actual Result (only at the time of execution) + some optional components like defect ID, severity, priority, etc.

Once the test cases are created, they are to be reviewed by peers/BAs/senior members of the team. It is a good practice to review the test cases, as their creator might have missed some scenario, or the test cases may contain some language-related mistakes. Further, it is good to have a TEST CASE CREATION CHECKLIST to reduce the review time. As per the recommended process, review results should also be recorded in a TEST CASE REVIEW RECORD (after all, it helps in appraisals of testing team members :) ). Once the test cases/scripts are frozen, they are ready to execute on the build. When the developers release the build for testing, we the testers execute the test cases and start finding the defects. Defects are then logged, fixed, resolved, closed, and tracked carefully. In the test execution phase there are iteration cycles which depend on bug fixes and the number of critical defects open/fixed. The decision to stop testing depends on many factors, such as –
- Testing budget of the project.
- Project deadline and test completion deadline.
- Critical test cases are successfully completed with no show stoppers, or the test cases that fail do not involve any show stoppers.
- The client requirements are met to a certain point.
- Defect rates fall below a certain specified level and high-priority bugs are resolved.

Once the testing is over, the system is released for customer usage.


END of POST , needless to say as always comments/suggestions are most welcome.

Tuesday, April 27, 2010

Basics of software Defects/Bugs


In this post I am sharing my understanding of what a software bug is, what should be done when we find one, its overall life cycle, and related content.

A defect or bug in a software system is a state/scenario where the software is functioning or acting in a manner different from what has been specified in the requirements. In simple words, it is a deviation of the software from the specified requirements; it is the state of the software when it behaves in some unexpected manner. A software tester’s main job is to find these defects or bugs in the software system. When a defect is found, it has to be recorded carefully. Most organizations use software tools for recording and tracking these bugs, and many use standard Word/Excel formats. Whatever the means, the most important part is that the bugs are recorded; however, recording them in a software tool is more convenient, as such tools make tracking easy. There are many bug tracking tools available in the market, both freeware and licensed.

As soon as a tester finds a bug, the first step before logging/recording it is to reproduce it again so as to get the reproducibility rate. If it is an obvious deviation from the requirements, then we should not waste time reproducing it; it should be logged then and there. Once we have confirmed that there is a bug in the current system, the next step is to log it in the bug tracking tool or the standard template used in the organization. The reason we log it is to bring it to the attention of the developer. Sometimes, higher management or the client also wants detailed defect reports, the number of bugs in the system, etc.

Recording a bug requires some skill; there is some mandatory or standard information given with every bug/defect. Regardless of what tool/template you use in your organization, every bug should have the following information –

Short description – a short and concise description of what the defect is and in which module it occurred.

Steps to reproduce – detailed steps to reproduce the defect, with test data. Test data is an optional part; if the defect appears only with some specific data set, then that has to be communicated.

Actual results – what exactly happens after performing the above-mentioned steps.

Expected Results – what should happen as per the requirement specification.

Environment – details of the environment in which the defect occurs.

Severity – the impact of the defect on other parts of application.

Priority – how soon the defect has to be fixed.

Version – version of the application in which the defect occurred.

Assignee – every defect is assigned to somebody so that this person can start working on it or take immediate action on the defect.

Apart from the above, we can include additional information like the reproducibility rate and other environmental details. The thumb rule while writing a defect report is that there should be enough information to reproduce the defect at the developer’s end. The information must be to the point, neither too much nor too little, as both situations are dangerous: excess information confuses the developer, and with less information the developers will keep revolving around your chair :)
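A quick way to enforce the mandatory-information rule is to check a defect report against the field list above before submitting it. A minimal sketch, with the field names turned into identifiers and a hypothetical report:

```python
MANDATORY_FIELDS = {
    "short_description", "steps_to_reproduce", "actual_result",
    "expected_result", "environment", "severity", "priority",
    "version", "assignee",
}

def missing_fields(report):
    """Return the mandatory fields that are absent or empty."""
    return sorted(f for f in MANDATORY_FIELDS if not report.get(f))

# A hypothetical, incomplete defect report.
report = {
    "short_description": "Login button unresponsive on Firefox",
    "steps_to_reproduce": "1. Open login page 2. Click Login",
    "actual_result": "Nothing happens",
    "expected_result": "User is authenticated",
    "severity": "High",
    "priority": "P1",
}
print(missing_fields(report))  # ['assignee', 'environment', 'version']
```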

Now, a defect also has a life cycle. When a tester finds a defect and logs it into the specified system, it is in the state NEW. From this, the defect enters the OPEN state when the lead assigns it to a developer to work on. As the developer works on it, the defect is in the WORKING state, and when the defect has been resolved by the developer it enters the RESOLVED state. This is the state in which it comes back to the tester for verification. If the tester verifies it and finds that it is fixed, the tester closes the defect and the state is CLOSED. If it is not fixed, the tester reopens it with the state REOPEN, and the normal cycle goes on again until it gets closed. This is the very general bug life cycle with no variations, though I wish it were always that simple :). From the state NEW a defect can move to DEFERRED in case the development people feel that the defect can be ignored right now and fixed in a later release. A defect in the NEW state can also move to NOT A DEFECT or NOT REPRODUCIBLE, depending on the current state of the requirements or the information provided in the defect report. In both these cases, it comes back to the tester, who now has to either close it or provide more information to the developer and reopen it.
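The life cycle described above can be captured as a small state machine that rejects illegal moves. This sketch encodes the transitions from the paragraph; a real tracking tool would have its own (often configurable) set:

```python
# Allowed transitions in the defect life cycle described above.
TRANSITIONS = {
    "NEW": {"OPEN", "DEFERRED", "NOT A DEFECT", "NOT REPRODUCIBLE"},
    "OPEN": {"WORKING"},
    "WORKING": {"RESOLVED"},
    "RESOLVED": {"CLOSED", "REOPEN"},
    "REOPEN": {"OPEN"},
    "DEFERRED": {"OPEN", "CLOSED"},
    "NOT A DEFECT": {"CLOSED", "REOPEN"},
    "NOT REPRODUCIBLE": {"CLOSED", "REOPEN"},
}

def move(state, new_state):
    """Advance a defect, refusing transitions the life cycle forbids."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# The happy path: NEW -> OPEN -> WORKING -> RESOLVED -> CLOSED.
state = "NEW"
for step in ("OPEN", "WORKING", "RESOLVED", "CLOSED"):
    state = move(state, step)
assert state == "CLOSED"
```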


That’s the end of the post and as always comments/suggestions are welcome for improvement. I would also like to discuss and correct my understanding (if wrong) through this post/blog.



Thanks & Regards,
Amit

Monday, April 26, 2010

Basics of Software testing- 1


What is “Software Testing”? There are many definitions available on the internet, given by various scholars in this field, and everybody’s definition is different. Ask this question of a person with one year of experience and he will give you a bookish answer; ask a person with four or more years of experience in this field and he might not be able to give you a perfect answer. I would like to define it in my own terms, which may be similar to some scholar’s definition. As per my experience, software testing is an activity/phase/process in the SDLC wherein we test whether the software is working as per its requirement specification, and in turn we measure the quality of the software developed. It is a process of executing a program or system with the intent of finding errors, and evaluating it for conformance with the requirements specification.

Now on the basis of approach followed testing can be divided into –
- White Box
- Black Box

When we say that we are doing “White Box testing”, we mean a testing activity in which we have access to the code of the system/module/function we are testing and are aware of the logic implemented in it. This knowledge may help us find the exact place where a bug/defect has occurred and any wrongly implemented logic.

Coming to “Black Box testing”: in this type of testing we do not have access to the actual code of the system, nor do we have knowledge of the logic implemented. Here the tester is more concerned with the output he/she gets from the system on the basis of the input given.

On the basis of level, there are four types of testing –
- Unit testing
- Functional testing
- Integration testing
- System testing

Unit testing is the primary testing activity, carried out at the code level. It intends to find faults/bugs in the smallest unit of a project: a program/function. It follows the “WHITE BOX” approach and is different from debugging, in the sense that debugging is executing a program step by step to diagnose a problem, whereas unit testing is a broader activity which covers not only executing a program but also checking the placement of elements on the screen, performing nominal operations with them, and related activities. For example, if my program accepts input in a text field ‘T’ and it is not accepting it, then debugging comes into the picture: by executing the program step by step we will find out where the problem is. In the same example, unit testing refers to testing whether the user is able to input the characters, whether the fields are displayed in the proper place, and so on. Developers are responsible for UNIT TESTING.
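A minimal unit test of a single function, the “smallest unit”, might look like this with Python’s standard `unittest` module; the `apply_discount` function is a hypothetical example, not from the post:

```python
import unittest

def apply_discount(price, percent):
    """The unit under test: one small function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_nominal(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_boundaries(self):
        self.assertEqual(apply_discount(200.0, 0), 200.0)
        self.assertEqual(apply_discount(200.0, 100), 0.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Running `unittest.main()` (or `python -m unittest`) executes these tests.
```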

Functional testing is a step beyond unit testing. It refers to testing the functionality of the application, or of its individual modules, against the requirement specification. Test engineers are responsible for it, and they write test cases/scripts for performing functional testing.

Integration testing is another step further in testing an application. It refers to testing the combined parts of the application and the interaction of those parts with each other. Here, different parts (modules) of the application are combined, and the application’s behavior, the data flow between these parts, and the interaction between them are tested.
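A small sketch of the idea, using two invented modules (`Inventory` and `OrderService` are hypothetical names): each module may pass its unit tests in isolation, and the integration test checks the data flow across the seam between them.

```python
# Two hypothetical modules that each pass unit tests in isolation.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        # Integration point: the order module calls into inventory.
        self.inventory.reserve(item, qty)
        return {"item": item, "qty": qty, "status": "confirmed"}

# The integration test exercises the interaction between the two parts:
# placing an order must actually change the inventory's state.
inv = Inventory({"widget": 5})
orders = OrderService(inv)
order = orders.place_order("widget", 3)
assert order["status"] == "confirmed"
assert inv.stock["widget"] == 2  # stock really was decremented
```

A unit test of `OrderService` alone might stub out `Inventory`; it is only the integration test that would catch, say, the two modules disagreeing about units or item names.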

System testing is the last testing activity conducted on a software system/application. It refers to testing the system as a whole and confirming whether it meets the requirements. Apart from functional requirements, system testing also covers non-functional requirements such as performance, usability, security, reliability, maintainability, and other related factors.
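A non-functional check differs from a functional one in that it asserts *how well* the system behaves rather than *what* it does. A minimal performance-style sketch, assuming a hypothetical `generate_report` operation with an invented “must finish within 2 seconds” requirement:

```python
import time

# Hypothetical operation with a stated non-functional requirement:
# "report generation must complete within 2 seconds".
def generate_report(rows):
    return [r * 2 for r in range(rows)]

start = time.perf_counter()
generate_report(100_000)
elapsed = time.perf_counter() - start

# The assertion is about timing, not about the report's contents.
assert elapsed < 2.0, f"performance requirement violated: {elapsed:.2f}s"
```

Real performance testing would use dedicated load tools and many concurrent users, but the shape of the check – measure, then compare against a stated threshold – is the same.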


As always, suggestions/comments are welcome. I would also like inputs on what further topics could be written about.


-Amit

Thursday, April 22, 2010

Manual Testing V/S Automation Testing...

In recent days, I have been following a discussion on the popular professional website LinkedIn, and the discussion is about “Which one is more crucial in testing? Manual testing or automation testing?”

In my opinion, manual testing is the more crucial and important of the two. In the following content I cite my reasons for considering “Manual Testing” more important than “Automation Testing”.

Firstly, manual testing is the initial level of testing carried out on every software product. Looking at the stages of the STLC (Software Testing Life Cycle), test planning is a manual activity wherein we analyze the requirements/product and come up with a concrete approach to how and what we will test, and who will test it. After this we move to the “Test Design” phase, where we design high-level test scenarios by analyzing the requirement document. This again is a manual activity, and there is no alternative to it. On the basis of these scenarios we design detailed test cases, which we execute on the application. Only after the initial iterations do we identify the test cases we can automate. Generally, the first test cases to be automated are the smoke/sanity-level test cases, which decide whether the build under test is fit for further testing. Once the build is stable, we automate more test cases related to the different modules of the application. The decision to automate further depends on how stable the application, or each part of it, is.
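The smoke/sanity gate mentioned above can be sketched as a tiny automated suite: a handful of quick checks that decide whether the build is testable at all. Everything here (`login`, `load_home_page`, the check names) is a hypothetical placeholder for whatever a real build would expose.

```python
# Hypothetical stand-ins for real application entry points.
def login(user, password):
    return user == "tester" and password == "secret"

def load_home_page():
    return "<html>home</html>"

def smoke_suite():
    checks = {
        "login works": login("tester", "secret"),
        "home page loads": "<html" in load_home_page(),
    }
    failed = [name for name, ok in checks.items() if not ok]
    # If any smoke check fails, the build is rejected without running
    # the deeper functional/regression suites at all.
    return {"testable": not failed, "failed": failed}

result = smoke_suite()
assert result["testable"], f"build rejected, failed checks: {result['failed']}"
```

The value of automating exactly this layer first is that it runs on every build with zero manual effort, while the deeper, more exploratory testing stays manual until the application stabilizes.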

Secondly, not all testing activities can be automated. Testing related to the usability of an application/product cannot be automated, as it is very subjective. Moreover, when we release software for BETA usage and urge users to test it and provide their feedback, this is a manual activity rather than automation, as most end users do not have automation tools at their site.

Thirdly, automation testing covers, or relates to, specific activities or types of testing, which mainly include regression and load testing.

Moreover, in my opinion, automation testing is more successful in the case of a PRODUCT than of projects (customized solutions developed for a specific process in an organization). The reason is that a PRODUCT is more stable and already running in the market, whereas in a project the requirements change often, making it tedious to maintain the automation scripts.

So, in the end, I would say that “Manual Testing” is more important, as it is the initial activity and has the wider scope: only on its basis can we decide what to automate and when.

As always, this is an open discussion and suggestions/comments are most welcome.