Test plan preparation for Manual Testing

A test plan is a document that covers all the future testing activities in the project. It includes the following:

  1. Objective: the aim of the project.
  2. Scope: here we decide which features are to be tested and which are not.
    Example: if a Gmail application is given, we will test the login, logout, compose, draft, and trash features; these are the features to be tested. The application also has a Help feature, but that is tested by the technical writer, so there is no need for us to test it. Likewise, we decide the scope of testing for every application.
  3. Approach: the way we go about testing the product.
    1. High-level testing
      Go to the login feature, test each field in it, and cover that feature completely before moving to the next feature (compose, etc.); testing continues in this way.
    2. Writing a flow chart for testing the product.
  4. Testing methodology: when the product is given for testing, the types of testing to be performed are decided here,
    e.g. functional, integration, system, ad hoc, smoke, etc.
  5. Testing environment: the hardware and software required to set up the testing environment once the product is handed over are decided here.
  6. Defect tracking: here we decide how the defects will be tracked, along with:
    1. the defect-tracking tool
    2. severity levels
    3. the test management tool used to store the defects
      All of this is decided here.
  7. Assumptions, risks, and mitigation plan: assumptions are made here; if an assumption goes wrong, it becomes a risk, and a mitigation plan is prepared to overcome that risk.
    Example:

    Assumption: a new project arrives and work is allocated to every test engineer, assuming everyone will stay until the end of the project.
    Risk: one person quits the job, so the assumption goes wrong and becomes a risk.
    Mitigation plan: a plan to overcome this risk is prepared at the test plan stage itself.
  8. ETVX (Entry-Task-Validation-Exit) criteria are to be mentioned. Entry means the entry point to that phase; for example, for unit testing, coding must be complete before unit testing can start. Task is the activity that is performed. Validation is the way in which progress, correctness, and compliance are verified for that phase. Exit states the completion criteria of that phase, after validation is done; for example, the exit criterion for unit testing is that all unit test cases must pass.
  9. Roles and responsibilities:
    The roles and responsibilities of the PM, TL, and TEs are decided here: who should write the test plan and test cases, review the test plan and test cases, execute test cases, automate test cases, and so on.
  10. Deliverables: the outputs from the testing team:
    1. test report
    2. traceability matrix
    3. defect tracking report
    4. graphs and metrics
      Graphs such as the defect tracking graph and the defect distribution graph let us track which module has the most bugs, how much testing is complete, how much is yet to be completed, and so on.
    5. release notes: the notes released along with the product, consisting of
      1. which features are tested/not tested
      2. additions/modifications/deletions of features
      3. bugs found in the previous release and fixed in the current release
  11. Automation
    1. which features are to be automated and which are not
    2. which automation tool to use
  12. Templates: all the standard templates for documents such as test cases, review templates, the traceability matrix, and test reports are stored here. Whenever a TE fills in a document, he/she should use the standard template.

    Test plan template:

    Version | Author | Reviewer | Comments | Approved by | Approved date
    which version should be mentioned | name of the test engineer who is logging it | | | |

So that is all I know about test plans. I hope this helps you.

Software QA quotes


Software testing is not all about tasks – test cases to write, scripts to run, issues to file, bugs to verify, test reports, etc.

It’s fun too =D

Here are some of my favorite quotes as a Software QA. Enjoy!

* Software Testing: Where failure is always an option.
* Improving the world one bug at a time.
* Software Testing: You make it, we break it.
* Software Testers don’t break software; it’s broken when we get it.
* Software Testers: We break it because we care.
* If developers are so smart, why do testers have such job security?
* Life is too short for manual testing.
* Trust, But Verify.
* The Definition of an Upgrade: Take old bugs out, put new ones in.
* We break software so you don’t have to.
* I used to build software…now I break it! It’s a lot more fun!
* All code is guilty, until proven innocent.
* It’s Automation, Not Automagic!
* Quality Assurance, we take the blame so you don’t have to.
* In God we trust, and for everything else we test.

Pick yours from the list. =)

Test cases for editable and non-editable dropdowns and list boxes

If there are no functional specifications, there are some general specifications for list boxes (Windows UI standards, for example) that you can use for testing these kinds of elements.

(These are not full test cases, only a small checklist.)
In general:
Is all data clearly visible, with no truncated words in the lists?
Can all items be selected, and are they really used in the software as selected items?
Can you type all characters into a list box? (Try on editable and non-editable.)
Pressing a character that is not in the list brings you to the closest item that starts with that character.
If there is a ‘View’ or ‘Open’ button beside the list box, then double-clicking a line in the list box should act in the same way as selecting an item in the list box.
Can you drag and drop items from and to the list box? (Is this in the spec?)
Can you delete items from the list boxes? (This should not be possible.)
Items should be in alphabetical order, with the exception of blank/none, which sits at the top or the bottom of the list box.
There shouldn’t be a blank line at the bottom of a list.
Can you navigate through the list boxes with the arrow keys?
Does an item stay selected when you move on with the TAB key?
Can the TAB key be used to enter and leave the element?

For drop-down list boxes:
Pressing the arrow should give the list of options.
This list may be scrollable.
Pressing F4 (or Alt+Down Arrow) should open/drop down the list box.
Spacing should be consistent with existing Windows applications (Word, etc.).
When an item is already selected, dropping down the list should display that item at the top.

Combo boxes:
Should allow text to be entered.
Clicking the arrow should allow the user to choose from the list.

List boxes:
Should allow a single selection to be chosen by clicking with the mouse.
If there is a ‘View’ or ‘Open’ button beside the list box, then double-clicking a line in the list box should act in the same way as selecting an item in the list box and then clicking the View or Open button.
Force the scroll bar to appear and make sure all the data can be seen in the box.
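
Several of these checks can be automated. Here is a minimal Selenium (Python) sketch; the URL and the "country" locator are hypothetical placeholders, so adapt them to your application:

    # Minimal Selenium sketch for a few of the list-box checks above.
    # The page URL and the "country" element id are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Chrome()
    try:
        driver.get("http://example.com/form")          # hypothetical page
        listbox = Select(driver.find_element(By.ID, "country"))
        texts = [opt.text for opt in listbox.options]

        # No blank line at the bottom of the list.
        assert texts and texts[-1].strip(), "blank entry at the bottom of the list"

        # Alphabetical order, allowing a leading blank/none entry.
        items = [t for t in texts if t.strip() and t.lower() != "none"]
        assert items == sorted(items, key=str.lower), "list not in alphabetical order"

        # Every item can be selected and is reported back as the selected item.
        for text in items:
            listbox.select_by_visible_text(text)
            assert listbox.first_selected_option.text == text
    finally:
        driver.quit()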
———————————————————————————–

Just some ideas to get the brain juices flowing:

Functionality:
Basic
Menu drops down when clicked
Items are selectable
Clicking items loads appropriate content

Mouse:
Left-click activates menu item
Right-click options function as expected

Keyboard (see the sketch after this checklist):
Arrow keys move selection up and down the list of menu items
ESC key closes menu
Pressing a letter moves to the corresponding alphabetic list section

Usability:
Internationalization
Menus appear in the user’s native language
Content appears in the user’s native language
Labels appear in the user’s native language

Security:
Can the menu be hacked?
Can the content be hacked?

Content:
Spelling
Labels are spelled correctly
Menu items are spelled correctly
Content pages are spelled correctly
Grammar
Labels use accepted rules of grammar
Menu items use accepted rules of grammar
Content pages use accepted rules of grammar

Cosmetic:
Labels are easily legible
Background color does not too closely match text color
Colors are used that respect people with “color-blindness”
Highlighted text is easily legible
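
The keyboard checks above also lend themselves to automation. A minimal Selenium (Python) sketch follows, assuming a hypothetical page whose menu element has the id "menu" and toggles an "open" CSS class when opened; adapt the locators and the closed-state check to your application:

    # Minimal Selenium sketch for the keyboard checks in the list above.
    # The URL, "menu" locator, and "open" class are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Chrome()
    try:
        driver.get("http://example.com/app")   # hypothetical page
        menu = driver.find_element(By.ID, "menu")
        menu.click()                            # menu drops down when clicked

        # Arrow keys should move the selection up and down the menu items.
        menu.send_keys(Keys.ARROW_DOWN)
        menu.send_keys(Keys.ARROW_DOWN)
        menu.send_keys(Keys.ARROW_UP)

        # ESC should close the menu; checked via the hypothetical "open" class.
        menu.send_keys(Keys.ESCAPE)
        assert "open" not in menu.get_attribute("class"), "menu did not close on ESC"
    finally:
        driver.quit()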


 

Test Effort Estimation Using Use Case Points

10 Tips to Survive and Progress in the Field of Software Testing

These tips will help you not only survive but also advance in your software testing career. Make sure you follow them:

Tip #1) Written communication – I have said this repeatedly on many occasions: keep everything in written communication, not just verbal. This applies to all instructions or tasks given to you by your superiors. No matter how friendly your lead or manager is, keep things in emails or documents.

Tip #2) Try to automate daily routine tasks – Save time and energy by automating daily routine tasks, no matter how small those tasks are.
E.g. if you deploy daily project builds manually, write a batch script to perform the task in one click.
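
For illustration, here is a minimal one-click deploy sketch in Python standing in for such a batch script. The build share, target directory, and service name are all hypothetical; substitute your project's real locations and deployment steps.

    # One-click deployment sketch. All paths and the service name below are
    # hypothetical placeholders; adapt them to your project.
    import shutil
    import subprocess
    from pathlib import Path

    BUILD_DROP = Path(r"\\buildserver\drops\myapp\latest")  # hypothetical share
    DEPLOY_DIR = Path(r"C:\inetpub\myapp")                  # hypothetical target
    SERVICE = "MyAppService"                                # hypothetical service

    def deploy():
        subprocess.run(["sc", "stop", SERVICE], check=False)  # ok if not running
        if DEPLOY_DIR.exists():
            shutil.rmtree(DEPLOY_DIR)                         # remove the old build
        shutil.copytree(BUILD_DROP, DEPLOY_DIR)               # copy the new build
        subprocess.run(["sc", "start", SERVICE], check=True)
        print("Deployed latest build to", DEPLOY_DIR)

    if __name__ == "__main__":
        deploy()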

Tip #3) 360 degree testing approach – To hunt down software defects think from all perspectives. Find all possible information related to the application under test apart from your SRS documents. Use this information to understand the project completely and apply this knowledge while testing.
E.g. if you are testing partner website integration with your application, make sure you understand the partner's business fully before starting to test.

Tip #4) Continuous learning – Never stop learning. Explore better ways to test applications. Learn new automation tools like Selenium, QTP, or any performance testing tool. Nowadays performance testing is a hot career destination for software testers, so have this skill under your belt.

Tip #5) Admit mistakes, but be confident about whatever tasks you did – Avoid making the same mistake again. This is the best way to learn and adapt to new things.

Tip #6) Get involved from the beginning – Ask your lead or manager to get you (QAs) involved in design discussions/meetings from the beginning. This is more applicable for small teams without QA lead or manager.

Tip #7) Keep notes on everything – Keep notes of the new things you learn on the project each day. These could be simple commands to be executed for a certain task, or complex testing steps, so that you don’t need to ask fellow testers or developers the same things again and again.

Tip #8) Improve your communication and interpersonal skills – Very important for career growth at all stages.

Tip #9) Make sure you get noticed at work – Sometimes your lead may not present a true picture of you to your manager or company management. In such cases you should continuously watch for moments where you can show your performance to top management.
Warning – don’t play politics at work. If your lead or manager is kind enough to communicate your skills and progress to top management, there is no need to follow this tip.

Tip #10) Software testing is fun, enjoy it – Stay calm, be focused, follow all processes, and enjoy testing. See how interesting software testing is; I must say it’s addictive for some people.

Bonus tip
Read, read, and read – Keep reading books, white papers, and case studies related to software testing and quality assurance. Always stay on top of the news in the software testing and QA industry.

Source : http://www.softwaretestinghelp.com/how-to-improve-communication-skill/

Test Scenario for Export to Excel Functionality!

Generally, reports are exported to Excel.

The broad scenarios are as follows:

1. When some data is exported to Excel, a Windows dialog with two options, Save and Open, should appear.

i) Save: the document should be saved to the browsed location in the correct format.

ii) Open: the document should open in the next tab/window.

2. Verify the name of the document, whether opened or saved.

3. Check whether all data is transferred successfully: compare the data in Excel with the data on the web page. No garbage or encoded values should appear in Excel.

4. Verify whether the complete data is transferred. Check this against the maximum rows allowed in an Excel sheet (this varies across versions of MS Excel).

5. Verify that data types and formats are correct in Excel.

6. If graphs or pie charts are being exported, verify their size and color.

7. Verify the functionality with different versions of MS Excel.

Consider, for example, that you have exported a report:

8. The report data and the data in the exported sheet should be the same, and in the same format.

9. The column order should also be the same.

10. Rounded values (e.g. to 2 or 3 decimals) should be the same as in the report (sometimes the report shows rounded data whereas the export contains the raw data).

11. You can also perform data-validation testing on the export if you have not done it on the report: fire a SQL query on the database to get the result set for that report, export the report, and compare the SQL data with the exported data (using formulas in Excel, or a script, for comparison).
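
As a sketch of point 11, the comparison can also be scripted instead of using Excel formulas. The example below uses pandas; the database file, table, column names, and the exported report.xlsx are all hypothetical.

    # Data-validation sketch for an Excel export (point 11 above).
    # Database, table, columns, and file name are hypothetical placeholders.
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("app.db")   # hypothetical database
    expected = pd.read_sql(
        "SELECT region, total FROM sales_report ORDER BY region", conn
    )
    exported = pd.read_excel("report.xlsx")   # hypothetical exported file

    # Point 9: the column order should match the report.
    assert list(exported.columns) == list(expected.columns), "column order differs"

    # Point 10: round both sides the same way before comparing values.
    expected["total"] = expected["total"].round(2)
    exported["total"] = exported["total"].round(2)

    pd.testing.assert_frame_equal(
        exported.reset_index(drop=True),
        expected.reset_index(drop=True),
        check_dtype=False,   # Excel import may change numeric dtypes
    )
    print("Exported data matches the report query.")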

Test Case Point Analysis

White Paper on Test Case Point Analysis:

Click to access TestCasePointAnalysis.pdf


TPA – Test Point Analysis – A method of Test Estimation


Also refer to this link: Test Estimation

The function point analysis productivity factor covers white-box testing; it does not cover system testing or acceptance testing.

Three important elements: size, strategy, and productivity.

1) Size:

The size of an information system is determined mainly by the number of function points assigned to it.

Other factors are

Complexity: this relates to the number of conditions in a function. More conditions almost always mean more test cases and therefore a greater volume of testing work.

Interfacing: the degree of interfacing of a function is determined by the number of data sets maintained by the function and the number of other functions which make use of those data sets. Interfacing is relevant because these “other” functions will require testing if the maintained function is modified.

Uniformity: it is important to consider the extent to which the structure of a function allows it to be tested using existing or slightly modified specifications, i.e. the extent to which the information system contains similarly structured functions.

2) Strategy

The importance attached to the various quality characteristics for testing purposes and the importance of the various subsystems and/or functions determine the test strategy.

The importance of any requirement is assessed from two perspectives: user importance and user usage. Depending on these two characteristics, a requirement rating can be generated and a strategy chalked out accordingly, which also means that estimates vary accordingly.

3) Productivity

Productivity has two important aspects: environment and productivity figures. Environmental factors define how much the environment affects a project estimate; they include aspects such as tools, test environments, availability of testware, etc. The productivity figures depend on knowledge, how many senior people are on the team, and so on.

User-importance

User-importance is an expression of the importance that the user attaches to a given function relative to the other system functions.

Rating:

3 Low: the importance of the function relative to the other functions is low.

6 Normal: the importance of the function relative to the other functions is normal.

12 High: the importance of the function relative to the other functions is high.

Usage-intensity

The usage intensity is defined as the frequency with which a certain function is processed by the users and the size of the user group that uses the function. As with user-importance, the usage intensity is determined at the user-function level.

Rating:

2 Low: the function is only used a few times per day or per week.

4 Normal: the function is used a great many times per day.

12 High: the function is used continuously throughout the day.

Interfacing

Interfacing is an expression of the extent to which a modification in a given function affects other parts of the system. The degree of interfacing is determined by ascertaining first the logical data sets (LDSs) which the function in question can modify, and then the other functions which access these LDSs.

Complexity

The complexity of a function is determined on the basis of its algorithm. The general structure of the algorithm may be described using pseudocode, a Nassi-Shneiderman diagram, or ordinary text. The complexity rating of the function depends on the number of conditions in the function’s algorithm.

Rating:

3 The function contains no more than five conditions.

6 The function contains between six and eleven conditions.

12 The function contains more than eleven conditions.

Uniformity (U):

This factor defines how reusable a system is. Clones and dummies come under this heading.

A uniformity factor of 0.6 is assigned in cases where clone functions, dummy functions, or virtually unique functions reoccur; otherwise a uniformity factor of 1 is assigned.

Df = ((Ue + Uy + I + C)/16) * U

Df = weighting factor for the function-dependent factors

Ue = user-importance

Uy = usage-intensity

I = interfacing

C = complexity

U = uniformity
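
To make the weighting concrete, here is a small worked example in Python. The ratings come from the tables above for a hypothetical function; the interfacing rating of 4 is an assumption, since the interfacing rating table is not reproduced in this excerpt.

    # Worked Df example for a hypothetical function: high user-importance (12),
    # low usage-intensity (2), an assumed interfacing rating of 4, normal
    # complexity (6), and no reuse (uniformity U = 1).
    Ue, Uy, I, C, U = 12, 2, 4, 6, 1
    Df = ((Ue + Uy + I + C) / 16) * U
    print(Df)  # (24 / 16) * 1 = 1.5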

Dynamic quality characteristics (Qd)

The third step is to calculate Qd, the dynamic quality characteristics, which have two parts: explicit characteristics (Qde) and implicit characteristics (Qdi). Qde has five important characteristics: functionality, security, suitability, performance, and portability.

Qdi defines the implicit part of Qd. These characteristics are not standard and vary from project to project. For instance, for this accounting application we have identified four characteristics: user-friendliness, efficiency, performance, and maintainability.

Qd = Qde + Qdi

TPf = FPf * Df * Qd

TPf = number of test points assigned to the function

FPf = number of function points assigned to the function

Df = weighting factor for the function-dependent factors

Qd = weighting factor for the dynamic quality characteristics

Calculate static test points (Qs)

In this step we take into account the static quality characteristics of the project. This is done by defining a checklist of properties and then assigning a value of 16 to those properties. For this project we have considered only “easy to use” as a criterion and hence assigned 16 to it.

Total number of test points

The total number of test points assigned to the system as a whole is calculated by entering the data so far obtained into the following formula:

TP = ΣTPf + (FP * Qi) / 500

TP = total number of test points assigned to the system as a whole

ΣTPf = sum of the test points assigned to the individual functions (dynamic test points)

FP = total number of function points assigned to the system as a whole (minimum value 500)

Qi = weighting factor for the indirectly measurable quality characteristics

Calculate Productivity/Skill factors

Productivity/skill factors show the number of test hours needed per test point. They are a measure of experience, knowledge, and expertise, and of a team’s ability to perform. Productivity factors vary from project to project and from organization to organization. For instance, if we have a project team with many seniors, productivity increases; but if we have a new testing team, productivity decreases. The higher the productivity factor, the higher the number of test hours required.

Calculate environmental Factor (E)

The number of test hours for each test point is influenced not only by skills but also by the environment in which those resources work.

Calculate primary test hours (PT)

Primary test hours are the product of test points, skill factors, and environmental factors. The following formula shows the concept in more detail:

Primary test hours = TP * skill factor * E
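
Putting the formulas together, here is a minimal end-to-end sketch in Python. Every input number below is assumed purely for illustration; real values come from your own function point counts, rating tables, and productivity/environment data.

    # End-to-end TPA sketch combining the formulas above. All inputs are
    # assumed for illustration only.
    def tpa_primary_hours(functions, FP, Qi, skill_factor, E):
        """functions: list of (FPf, Df, Qd) per function; returns PT."""
        sum_tpf = sum(FPf * Df * Qd for FPf, Df, Qd in functions)  # dynamic test points
        TP = sum_tpf + (max(FP, 500) * Qi) / 500   # FP has a minimum value of 500
        return TP * skill_factor * E               # PT = TP * skill factor * E

    # Hypothetical system with three functions and 600 FP in total.
    functions = [
        (100, 1.50, 1.10),   # e.g. the function from the Df example above
        (250, 1.00, 1.00),
        (250, 0.75, 0.95),
    ]
    # Qi = 16 matches the value assigned to the single "easy to use" property above.
    pt = tpa_primary_hours(functions, FP=600, Qi=16, skill_factor=1.4, E=1.1)
    print(f"Primary test hours: {pt:.0f}")   # about 943 hours for these inputs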


Common web browser errors

List of Web Application errors:

404 Not Found: The browser could not find the specific document that you requested on the host computer. To resolve this error, check the Uniform Resource Locator (URL) syntax (some URLs are case sensitive). In addition, the page may have been removed, had its name changed, or have been moved to a new location. To rise above the mundane, some have made 404 error pages a work of art – see 404 Research Lab for some creative 404 pages.

403 Forbidden/Access Denied: The Web site you requested requires special access permission (for example a password).

503 Service Unavailable: The host computer is too busy or the Web server which hosts the requested Web site is down.

Bad File Request: The form or the Hypertext Markup Language (HTML) code for an online form has an error.

Cannot Connect to Server: This error can occur if you are using Secure Sockets Layer (SSL) security (“https” at the beginning of the URL) when you are connecting to certain Web servers.

Cannot Add Form Submission Result to Bookmark List: The results of a form cannot be saved as a bookmark. A bookmark must be a document or a web address.

Cipher Strength value is 0-bit: When you click About Internet Explorer on the Help menu, the Cipher Strength value shows 0-bit, and you cannot connect to or view Web pages on secure Web sites. This can occur if the Schannel.dll, Rsabase.dll, or Rsaenh.dll files are missing, damaged, or of the incorrect version. See MSKB Q261328 for the fix.

Connection Reset by Peer: This message indicates that you clicked the “Stop” button or moved on to another webpage before the server finished sending the data/page.

Connection Refused by Host: This is a version of the 403 error. The Web site you requested requires special access permission. Or this page/site requires you to have SSL functionality not found in older browsers.

Error Copying File: Cannot copy file: File system error (1026): This message indicates that the Temporary Internet Files folder is full. Internet Explorer downloads files to the Temporary Internet Files Folder and then copies the files to the specified location. To resolve, go to the Tools menu in Internet Explorer and select Internet Options. On the General tab, click the Delete Files button in the ‘Temporary Internet Files’ section. If you would like to delete content that has been stored locally, select the ‘Delete all offline / subscription content’ check box. Click OK and click OK again.

Failed DNS Lookup: The Web site’s URL could not be translated into a valid Internet protocol (IP) address. This error is common on commercial sites because the computers responsible for translating the IP addresses are overloaded. Try again later when there may be less Internet traffic. This can also be caused by a URL syntax error (the URL has incorrect format).

File Contains no Data: The browser found the site, but nothing in the specific file. Try adding “:80” (without the quotation marks) to the URL just before the first slash, for example: http://www.microsoft.com:80/.

Helper Application not Found: You have attempted to download a file that needs a helper program and your browser cannot find the program. On the browser’s preferences or options menu, make sure the correct directory and file name are entered for the helper program. If you do not have a helper program, save the file to disk and obtain the helper program.

HTTP Server at Compressed .com:8080 Replies:HTTP/1.0 500 Error from Proxy: This error is common with proxy servers (a server on a local area network that lets you connect to the Internet without using a modem). The proxy is either down, busy, or cannot interpret the command that was sent to it. You may want to wait for 30 seconds or more then try viewing the page again. If the problem persists, contact the network administrator of that proxy.

NNTP Server Error: The browser could not find the Usenet newsgroup that you tried to access. Make sure the news server address is correctly listed in your browser’s preferences or options menu and try again.

Not Found: The link no longer exists.

Site Unavailable: Too many users are trying to access the site, the site is down for maintenance, there is noise on the line, or the site no longer exists. This can also be caused by a user URL syntax error.

TCP Error Encountered While Sending Request to Server: This error is caused by erroneous data on the line between you and the requested site. This may be hardware related. Report the error to your network administrator and try again later.

Unable to Locate Host: The URL did not return anything, the site is unavailable, or the Internet connection was dropped. Check the hardware connections and URL syntax.


400: Bad Request: – The 400 Bad Request browser error means that the request you sent to the website server (i.e. a request to load a web page) was somehow malformed therefore the server was unable to understand or process the request.

401: Unauthorized: – The 401 Unauthorized browser error means the page you were trying to access cannot be loaded until you first log on with a valid user ID and password. If you have just logged on and received the 401 Unauthorized error, it means that the credentials you entered were invalid. Invalid credentials could mean that you don’t have an account with the web site, your user ID was entered incorrectly, or your password was incorrect.

408: Request Timeout: – The 408 Request Timeout browser error means the request you sent to the website server (i.e. a request to load a web page) took longer than the website’s server was prepared to wait. In other words, your connection with the web site “timed out”.

500: Internal Server Error: – The 500 Internal Server Error is a very general browser error meaning something has gone wrong on the web site’s server but the server could not be more specific on what the exact problem is.

502: Bad Gateway: – The 502 Bad Gateway browser error means that one server received an invalid response from another server that it was accessing while attempting to load the web page or fill another request by the browser.

504: Gateway Timeout: – The 504 Gateway Timeout browser error means that one server did not receive a timely response from another server that it was accessing while attempting to load the web page or fill another request by the browser. This usually means that the other server is down or not working properly.
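
Many of these status codes can be checked automatically in a smoke test. Here is a minimal sketch using Python's requests library; the URLs and expected codes are placeholders for pages in your own application.

    # Sketch: asserting on HTTP status codes described above.
    # URLs and expected codes are hypothetical placeholders.
    import requests

    checks = {
        "https://example.com/":         200,  # page loads normally
        "https://example.com/missing":  404,  # Not Found
        "https://example.com/private":  403,  # Forbidden / Access Denied
    }

    for url, expected in checks.items():
        response = requests.get(url, timeout=10)
        print(f"{url} -> {response.status_code} (expected {expected})")
        assert response.status_code == expected, (
            f"{url}: got {response.status_code}, expected {expected}"
        )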

Source: Internet

Why does the testing community prefer open source tools?

Recently I did a small survey on testinggeek to find out whether the testing community prefers open source tools or commercial tools. After around one month, 81% of participants had voted for open source tools. I have been using open source testing tools for around four years now, so I wasn’t surprised by the result. But still, the result got me interested and made me think: why do so many people prefer open source tools? What are (or were) the problems with vendor tools? How have open source tools affected testers and the way we work?

I am sure some respondents might have voted for open source for moral reasons, but for me, and probably many others, it’s the value we have got from using open source tools. I have been using Firefox, Selenium, FitNesse, WATIR, Selendion, Concordion, and many such tools, and have benefited greatly from their rich feature sets and support. I have used vendor tools like Silk Test, Rational Robot, Rational Functional Tester, Quality Centre, etc. in the past and have first-hand experience of the pain and problems of using them.

Let’s start with a brief understanding of what open source / free software is. OSS (Open Source Software) / FS (Free Software) programs are programs whose licenses give users the freedom to run the program for any purpose, to study and modify the program, and to redistribute copies of either the original or modified program (without having to pay royalties to previous developers). It is important to understand that free does not refer to price; it refers to liberty: the freedom to understand, discuss, repair, and modify the technological devices and software the user owns.

In this paper, the author has done an in-depth study and given quantitative reasons to support open source adoption. The paper gives a very good understanding, and (to some people) confidence, that open source software works, by capturing information on trends, reliability, performance, scalability, and so on. Price might be a factor when adopting open source, but if it is the only reason it might sometimes be difficult to justify. As mentioned earlier, Free in OSS means liberty, which allows users fundamental control and flexibility over the software, since they can modify and maintain it to their liking. This is probably one of the most important reasons why open source works, in almost every sphere.

But what is making open source tools work specifically for the testing community? Let’s discuss the various factors that are important for adopting any tool and compare open source and commercial tools against them. I am taking a narrow view and summarizing these factors for automated test execution tools only; some factors listed below might or might not be relevant for other tools.
1. Automation language – The language in which you write your automation has a big influence on how maintainable and robust your automation suite is. If you use the right language (depending on your context), the chances of getting the right support, technical know-how, and libraries for typical tasks are much higher. Historically, vendors have always supplied their own specific language which was good for only a single tool. This, IMO, was one of the major drawbacks of tool vendors. The situation is changing, with RFT (Rational Functional Tester) supporting Java/.NET and TestComplete supporting many languages. OSS scores really well on this front. Tools like Selenium support almost every major language, and because of their open nature, if support for a language is missing, someone from the community might develop it.

2. Responsiveness – It can be argued that support from tool vendors should be better because they are getting paid for it. In my experience, though, support from tool vendors has fallen short of the support we receive from a motivated bunch of people working as a community on an open source project. These folks are probably much more responsive than the various levels of support sold by tool vendors. Tool vendors can probably guarantee that support will be available, but if you choose a popular open source project, the chances of getting the right support will be higher. One important point to remember here: with tool vendors, support is demanded (because you paid them), and with open source, support is requested (because you need them).

3. Feedback loop – I started working with test automation tools in 2001, and for the initial 4-5 years they were more or less static. There were changes and improvements, but the tools were not radically different. One reason for this slow development was probably the long feedback loop. Users and developers in these cases were two different entities and were oblivious of each other’s pain. In OSS, on the other hand, users and developers are the same folks, so the feedback loop is extremely fast. That’s the reason why tools like Selenium, WATIR, etc. have become so popular and feature-rich in such a short time.

4. Short evaluation period – Usually, if the time for the evaluation process is short, wrong decisions will be made about the tool and eventually it will become shelfware. Good evaluation is essential, but most of the time the evaluation period given by tool vendors is not long enough to find out whether the tool is the right one for the application under test. OSS, on the other hand, gives us a long (evaluation) period, so when we decide to use a tool, we make that decision with good knowledge of its limitations, capabilities, and applicability in the context of the application under test. This increases the chances of succeeding with the tool, and so improves its reputation.

5. Selling is driven by marketing – Normally tools are sold by sales people to managers, not to users. Tools are sold on the back of wrong practices like record/playback, quick automation, and so on. This gives management the wrong impression of what good automation is and how it should be approached. Most of the time, practices like this result in an unmaintainable automation suite which becomes useless very soon. Normally the tool or the testers are blamed in these cases, but most of the time it’s not the tool or the tester but the approach with which the tool was sold and used. And most of the time it’s the reputation of the tool that is damaged in such instances.

6. Vendor lock-in – Historically, tool vendors have always tried to lock users into their offerings. QTP is integrated with Quality Centre, Rational Robot was integrated with Test Manager, and so on. Their internal formats and integrations are all coupled with other tools from their stack, making it impossible to migrate from one tool to another without substantial rework once you are trapped. OSS tools, on the other hand, are like an open book and hardly have any motive to lock users in.

7. Lack of choices – There are not many tools from vendors, and consolidation in the tool market means choices will be even fewer in the future. There are only 4 or 5 major players in the field, which hardly gives you any choice. Lack of choice is bad, because it increases the possibility of monopoly, and features are then delivered based not on what users expect but on what competitors are doing.

8. Community feeling – Sure, there are some products, like the Mac, which can instill a feeling of community in users. On the software front, though, this behavior is more common for OSS than for vendors. The community acts as a support centre and as a platform to discuss problems and set the direction in which development should be carried on. It usually provides information to anyone who is seeking it or facing difficulty in implementing anything, without any self-interest. I have not done any formal study, but I have never been disappointed with the communities of the many OSS tools I have used.

9. Specialization and generalization – The main motive of tool vendors is to sell more products and make money. They increase their chances of sales by combining more features, providing support for many languages/platforms, etc. Most of the time this ends in a very bulky product which is not easy to operate or understand, and that reduces the speed at which the tool can be adopted. OSS tools, on the other hand, try to solve one problem and try to do it well. The motive in OSS is not to sell more products but to solve a specific problem, and that takes them much closer to users than vendors.

10. Reduces the gap between dev and test teams – IMO, with the use of OSS the testing community is much closer to the development team than it is with vendor tools. OSS has allowed developers and testers to talk the same language, use the same set of tools, and achieve tighter integration. It has also allowed testers to leverage the work of developers and understand their work more closely. In the old world of vendor tools, testers and developers were effectively in their own silos; OSS has helped in reducing that gap.

This article is from http://www.testinggeek.com/index.php/testing-articles/179-open-sour…