:: QC Boss :: Testing, Independent Software Testing, Manual Testing, Website Testing, Functionality Testing, Usability Testing, QC, QA, UAT

Archive for the ‘QC’ Category

Why do we need cross browser testing?


In earlier days, web surfers were restricted to one, or at most two, browsers for their web browsing. In the modern era of the World Wide Web, there are several major browsers available in the market. As a web developer, you cannot restrict users to a particular browser, since they may have many browsers on one system. When your webpage renders across multiple browsers, numerous defects may appear. For example, if you own a catalog e-commerce website and any product or piece of content is not displayed or visible to a customer, there is a chance you will lose that customer and their business. To avoid these circumstances, cross browser testing is widely used.

Widely used browsers:
Google Chrome 19.0.1084.56*
Internet Explorer 9, 10*
Firefox 13.0.1*
Safari 5.1.7*
Opera 12.00*
*version as of June 26, 2012

Across these browsers, the look and performance of a website vary in terms of effectiveness. So it is essential that your website behave correctly in all of them, especially the browsers that hold a big share of the market.

Cross browser testing has almost become mandatory for web developers. In addition, cross browser testing helps confirm that your code (HTML, CSS, JavaScript, jQuery, etc.) is backwards compatible. So when you develop your site, it needs to be checked across various browsers, so that all the major functionality gives accurate results in each of them. The evolution of various browsers has therefore created an increasing demand for cross browser compatibility tools.

Written by QCBoss

June 28, 2012 at 7:40 am

Posted in QC, Website Testing


Organizing Cross Browser Testing


Below is a cross browser testing checklist that can be referred to while testing a web project on different browsers and operating systems:

CSS validation
HTML or XHTML validation
Page validations with and without JavaScript enabled
Ajax and jQuery functionality
Font size validation
Page layout in different resolutions
All images and alignment
Header and footer sections
Page content alignment to center, LHS or RHS
Page styles
Date formats
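As a rough sketch (the names below are illustrative, not from any particular tool), the checklist can be paired with a list of target browsers to produce a test matrix:

```python
from itertools import product

# Illustrative checklist items and target browsers (hypothetical selection)
CHECKLIST = [
    "CSS validation",
    "HTML or XHTML validation",
    "Page layout in different resolutions",
    "Date formats",
]
BROWSERS = ["Chrome", "Internet Explorer", "Firefox", "Safari", "Opera"]

def build_test_matrix(checks, browsers):
    """Pair every checklist item with every target browser."""
    return list(product(browsers, checks))

matrix = build_test_matrix(CHECKLIST, BROWSERS)
print(len(matrix))  # 5 browsers x 4 checks = 20 combinations
```

Each pair in the matrix is one manual or automated check to execute and record.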

Cross browser testing involves testing websites or applications on both sides, i.e., the client side and the server side.

Written by QCBoss

April 5, 2012 at 7:07 am

Cross Browser Testing


What is Cross-browser Testing?

Cross-browser refers to the ability of a website, web application, HTML construct, or client-side script to function correctly across all, or the majority of, web browsers.

Web Statistics and Trends – Source from www.w3schools.com

2012       Internet Explorer   Firefox   Chrome   Safari   Opera
February   19.5 %              36.6 %    36.3 %   4.5 %    2.3 %
January    20.1 %              37.1 %    35.3 %   4.3 %    2.4 %

Written by QCBoss

March 22, 2012 at 7:10 am

Browser Compatibility


What is cross browser testing?

Cross browser testing is a process to check how our website or application performs on different browsers, i.e. whether the available functions are working properly or not.

What is client-side cross browser testing?

Client-side cross browser testing is a process that helps to test the functionality of our web application on the client side with different web browsers.

With the help of an automation tool, we can control the client machine from our environment, so that we can run our application on its different browsers.

Once internal testing is done, we go for client-side cross browser testing for the client's satisfaction. The process is as follows:

1. Get the client's IP address, with their permission.

2. Start the automation tool and type in the IP address to connect to the client machine.

3. Run the application in our environment on the client machine.

4. Finally, analyze the report.

What is server-side cross browser testing?

Server-side cross browser testing is a process to check the behavior of our website or application when it is accessed from different web browsers.

Written by QCBoss

March 16, 2012 at 10:22 am

Quality in Software Development Process


In the software development process, a quality product can be defined as one that meets the product's requirements. Quality is much more than the absence of defects/bugs. Consider this: even if a product has zero defects, if its usability fails, i.e. a user finds it difficult to learn and operate the product, then it is not a quality product.

However, quality can only be seen through the customer's eyes. Therefore, the most important definition of quality is meeting customer needs: understanding customer requirements and expectations, and exceeding those expectations. The customer must be satisfied using the product; only then is it a quality product. Does this mean customer needs can always be translated into product requirements? No, not always. Though our aim is to accurately capture customer needs as requirements and build a product that satisfies those needs, we sometimes fail to do so for certain reasons:

1) Customers fail to accurately communicate their exact needs

2) Captured requirements can be misinterpreted

Though quality is assessed when the software goes through the testing phase, it is always good practice to ensure quality is maintained in all phases of the software development process, such as planning, development, etc. Testers can only validate the correctness, reliability, usability, and interoperability of a product and report the deviations. Quality is everybody's responsibility, including the customer's. We testers identify the deviations and report them. Many factors that affect quality, such as maintainability, reusability, flexibility, and portability, cannot be validated by testers. Inspections, design and code walkthroughs, and reviews are some of the quality control measures that can be applied apart from testing.

Written by QCBoss

August 18, 2011 at 7:56 am

Classifications of Defects / Bugs


There are various ways in which Bug / Defect can be classified. Below are some of the classifications.

Error Type Wise:

Logic Error: Irrelevant or ambiguous functionality in the source code.

Message Error: Misleading or missing error messages in the source code.

Navigation Error: Navigation not coded correctly in the source code.

System Error: Memory leak, Hardware and operating system related errors

Incorrect Requirements: Incorrect or wrong requirements.

Performance Error: Anything related to the performance leads to performance error.

Data Error: Incorrect data population / update in the database.

Database Error: Error in the database schema/design.

Standards: Standards not followed, like improper exception handling, use of E & D Formats, and project-related design/requirements/coding standards.

Incorrect Design: Wrong or incorrect design.

Typographical Error: Spelling / Grammar mistake in documents/source code.

Comments: Inadequate/ incorrect/ misleading or missing comments in the source code

Variable Declaration Error: Improper declaration / usage of variables in the source code.

Sequencing / Timing Error: Error due to incorrect/missing consideration to timeouts and improper/missing sequencing in source code.

Work Product Wise:

DDS: A Defect from Detailed Design Document.

ADS: A Defect from Architectural Design Document.

FSD: A Defect from Functional Specification Document.

SSD: A Defect from System Study Document.

Source Code: A Defect from Source code.

User Documentation: A Defect from User Manuals/Operating manuals.

Test cases/ Test Plan: A Defect from Test case or Test plan.

Status Wise:

Opened

Closed

Deferred

Cancelled
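The classifications above can be sketched as a small defect record (the type names and fields are illustrative, taken from the lists in this post):

```python
from dataclasses import dataclass
from enum import Enum

class DefectStatus(Enum):
    """Status-wise classification from the text."""
    OPENED = "Opened"
    CLOSED = "Closed"
    DEFERRED = "Deferred"
    CANCELLED = "Cancelled"

@dataclass
class Defect:
    error_type: str       # e.g. "Logic Error", "Data Error"
    work_product: str     # e.g. "FSD", "Source Code"
    status: DefectStatus = DefectStatus.OPENED  # a new defect starts as Opened

d = Defect(error_type="Navigation Error", work_product="Source Code")
print(d.status.value)  # Opened
```

Keeping the classifications as enumerations makes it easy to report defect counts by type, work product, or status.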

Written by QCBoss

August 12, 2011 at 1:50 pm

Software Testing Metrics


A software metric is a measure of some property of a piece of software or its specifications.

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.

A quality metric is a quantitative measurement of the degree to which an item possesses a given quality attribute.

Metrics are among the most important responsibilities of the test team. Metrics allow for a deeper understanding of the performance of the application and its behavior.

The following can be regarded as the fundamental metrics:

Functional or Test Coverage Metrics:

Function Test Coverage Metric: It can be used to measure test coverage prior to software delivery. It provides a measure of the percentage of the software tested at any point during testing.

It is calculated as follows:

Function Test Coverage = FE / FT, where:

FE is the number of test requirements that are covered by test cases that were executed against the software

FT is the total number of test requirements.
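A minimal sketch of the calculation (function and parameter names are ours):

```python
def function_test_coverage(fe, ft):
    """Function Test Coverage = FE / FT.

    fe: number of test requirements covered by executed test cases
    ft: total number of test requirements
    """
    if ft <= 0:
        raise ValueError("total number of test requirements must be positive")
    return fe / ft

# e.g. 45 of 60 test requirements exercised so far
print(function_test_coverage(45, 60))  # 0.75
```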

 

Software Release Metrics:

The software is ready for release when:

1. It has been tested with a test suite that provides 100% functional coverage, 80% branch coverage, and 100% procedure coverage.

2. There are no level 1 or 2 severity defects.

3. The defect finding rate is less than 40 new defects per 1000 hours of testing

4. Stress testing, configuration testing, installation testing, Naïve user testing, usability testing, and sanity testing have been completed.
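The quantitative criteria above (items 1 to 3) can be sketched as a single check; the parameter names are ours, not from any standard tool:

```python
def ready_for_release(functional_cov, branch_cov, procedure_cov,
                      open_sev1_sev2_defects, new_defects_per_1000h):
    """Evaluate the measurable release criteria listed in the text."""
    return (functional_cov >= 1.00 and          # 100% functional coverage
            branch_cov >= 0.80 and              # 80% branch coverage
            procedure_cov >= 1.00 and           # 100% procedure coverage
            open_sev1_sev2_defects == 0 and     # no severity 1 or 2 defects
            new_defects_per_1000h < 40)         # finding rate below threshold

print(ready_for_release(1.00, 0.85, 1.00, 0, 32))  # True
print(ready_for_release(1.00, 0.85, 1.00, 2, 32))  # False
```

Item 4 (stress, configuration, installation, naïve user, usability, and sanity testing) remains a checklist item outside this calculation.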

 

Software Maturity Metrics:

The Software Maturity Index (SMI) can be used to determine the readiness for release of a software system. This index is especially useful for assessing release readiness when changes, additions, or deletions are made to existing software systems. It is calculated as follows:

SMI = (Mt - (Fa + Fc + Fd)) / Mt, where:

SMI – is the Software Maturity Index value

Mt – is the number of software functions/modules in the current release

Fc – is the number of functions/modules that contain changes from the previous release

Fa – is the number of functions/modules that contain additions to the previous release

Fd – is the number of functions/modules that are deleted from the previous release.
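A small sketch of the SMI calculation, reading the formula as (Mt - (Fa + Fc + Fd)) / Mt:

```python
def software_maturity_index(mt, fa, fc, fd):
    """SMI = (Mt - (Fa + Fc + Fd)) / Mt.

    mt: modules in the current release
    fa: modules added, fc: modules changed, fd: modules deleted
    """
    return (mt - (fa + fc + fd)) / mt

# e.g. 100 modules in the current release: 5 added, 10 changed, 2 deleted
print(software_maturity_index(100, 5, 10, 2))  # 0.83
```

As the release stabilizes (fewer additions, changes, and deletions), the index approaches 1.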

 

Reliability Metrics:

Reliability is calculated as follows:

Reliability = 1 - (number of errors (actual or predicted) / total number of lines of executable code)

This reliability value is calculated for the number of errors during a specified time interval. Three other metrics can be calculated during extended testing or after the system is in production. They are:

MTTFF (Mean Time to First Failure)

MTTFF = The number of time intervals the system is operable until its first failure (functional failure only).

MTBF (Mean Time Between Failures)

MTBF = (sum of the time intervals the system is operable) / (number of failures during the time period)

MTTR (Mean Time To Repair)

MTTR = (sum of the time intervals required to repair the system) / (number of repairs during the time period)
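A sketch of the two averages, assuming MTBF divides total operable time by the number of failures and MTTR divides total repair time by the number of repairs:

```python
def mtbf(operable_intervals, failures):
    """Mean Time Between Failures: total operable time / number of failures."""
    return sum(operable_intervals) / failures

def mttr(repair_intervals):
    """Mean Time To Repair: total repair time / number of repairs."""
    return sum(repair_intervals) / len(repair_intervals)

# e.g. three operable periods (in hours) ended by three failures,
# each followed by a repair
print(mtbf([120, 100, 80], 3))  # 100.0
print(mttr([2, 4, 6]))          # 4.0
```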

 

 

Written by QCBoss

August 12, 2011 at 5:19 am

Posted in QA, QC


Bug Reporting


How do we report a Bug?

We follow a simple Bug report template. We include the following details in our bug report.

Bug Number: A unique ID given to every bug so it can be tracked easily at a later stage.

Reporter: Name of the Tester along with the email address.

Project Name: The name of the project or application under test.

Version: Project version if any.

Component: These are the major sub modules of the project.

Issue Type: Design, Browser Layout, Functionality, Usability, Improvement, Request for Change

Operating system: Mention the operating systems on which we found the bug, such as Windows, Linux, Mac OS, etc. Also mention the specific OS versions if applicable, like Windows 2000, Windows XP, Windows Vista, Windows 7, etc.

Browser Compatibility: Mention the web browser in which we found the bug, such as Internet Explorer, Mozilla Firefox, Google Chrome, Safari, or Opera. Also mention the specific browser versions if applicable, like IE 6, 7, 8, 9; Mozilla Firefox 3, 4, 5; etc.

Priority: When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning "fix the bug with the highest priority" and P5 meaning "fix when time permits".

Severity: This describes the impact of the bug.

Following are the types of severity:

  • Blocker: No further testing work can be done.
  • Critical: Application crash, Loss of data.
  • Major: Major loss of function.
  • Minor: minor loss of function.
  • Trivial: Some UI enhancements.
  • Enhancement: Request for new feature or some enhancement in existing one.

Status: This column is mainly used to track the bug's status. By default, any new bug is 'New'. Later on, the bug goes through various stages like Fixed, Verified, Reopened, Won't Fix, etc.

Assignee: If we know the developer to whom the bug needs to be assigned, we specify that developer's name and email address; otherwise we assign the bug to the module owner, or the manager will assign the bug to a developer.

URL: Here we mention the page URL to locate the bug.

Summary: A brief summary of the bug, usually 60 words or fewer. We make sure the summary reflects what the problem is and where it is.

Description: A detailed description of the bug. We use the following fields in the description:

  • Reproduce steps: Clearly mention the steps to reproduce the bug.
  • Expected result: How the application should behave for the above-mentioned steps, according to the SRS or FS documents.
  • Actual result: The actual result of running the above steps, i.e. the bug's behavior.

These are the important fields we include in our bug report. We also add "Report type" as one more field, which describes the bug type.

The report types are typically:

1) Coding error

2) Design errors

3) New suggestion

4) Documentation issue

5) Hardware problem

This kind of reporting not only provides a clear view of what the error is to the developer, but also ensures that there is proper management of the bugs in a project.
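The template described above can be sketched as a simple record; the field names and example values are ours, illustrating the report fields from this post:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_number: str
    reporter: str
    project_name: str
    summary: str            # what the problem is and where it is
    severity: str           # Blocker / Critical / Major / Minor / Trivial / Enhancement
    priority: str           # P1 (highest priority) .. P5 (fix when time permits)
    status: str = "New"     # default status for a newly filed bug
    steps_to_reproduce: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

bug = BugReport(
    bug_number="BUG-101",
    reporter="tester@example.com",
    project_name="Catalog Website",
    summary="Product image not displayed on the detail page in IE 9",
    severity="Major",
    priority="P2",
)
print(bug.status)  # New
```

A record like this maps directly onto the fields of most bug-tracking tools.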

Complexity of Database Testing…



Testing can often involve a series of different database abstractions, such as data warehouses, data marts, and data vaults.

Our testing might include testing a web form that inserts data into a database; tracing that transaction from the web to the database is a very important testing process. Testing that insertion, and then validating that the right triggers are executed, can involve many different tools and technologies.

Practice-Based Testing Group:

You can dramatically speed up your database tests, or at least a portion of them, by running them against an in-memory database such as HSQLDB. The challenge with this approach is that, because database features are implemented differently across database vendors, any vendor-specific tests will still need to run against the actual database server.

  • Test for data-format integrity
  • Test the referential integrity of a database
  • Test database security including database permissions and privileges
  • Testing Application Programming Interfaces (APIs) such as ODBC, JDBC, and OLEDB
  • Test the database and data mart loads through specialized load tools
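As a sketch of the referential-integrity check in the list above, here is a test against an in-memory database (Python's built-in sqlite3 stands in for HSQLDB; the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
conn.execute("CREATE TABLE Persons (id INTEGER PRIMARY KEY, First_name TEXT)")
conn.execute("CREATE TABLE Orders (id INTEGER PRIMARY KEY, "
             "person_id INTEGER REFERENCES Persons(id))")
conn.execute("INSERT INTO Persons VALUES (1, 'Alice')")

# Valid insert: the referenced person exists.
conn.execute("INSERT INTO Orders VALUES (10, 1)")

# Invalid insert: person 99 does not exist, so the constraint must fire.
try:
    conn.execute("INSERT INTO Orders VALUES (11, 99)")
    violation_caught = False
except sqlite3.IntegrityError:
    violation_caught = True
print(violation_caught)  # True
```

A test like this confirms the database rejects orphaned rows instead of silently accepting them.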

Database Test Tools:

  • Visual Studio Team Edition for Database Professionals
  • Ounit for Oracle
  • SQLUnit
  • TOAD
  • DbUnit

Written by QCBoss

August 8, 2011 at 10:47 am

SQL Scalar Functions


Scalar Functions

SQL scalar functions return a single value based on the input value; the return type is declared with a RETURNS clause. A scalar-valued function returns a scalar value, such as an integer or a timestamp, and can be used wherever a column expression is allowed in a query. Inline table-valued functions return the result set of a single SELECT statement. Multi-statement table-valued functions return a table built by multiple Transact-SQL statements.

Commonly used Scalar functions:

UCASE() – Converts value of a field to upper case

Syntax – SELECT UCASE(column_name) FROM table_name

Example – SELECT UCASE(First_name) FROM Persons

LCASE() – Converts the value of a field to lowercase.

Syntax – SELECT LCASE(column_name) FROM table_name

Example – SELECT LCASE(First_name) FROM Persons

MID() – Used to extract characters from a text field.

Syntax – SELECT MID(column_name,start[,length]) FROM table_name

Example – SELECT MID(City,1,4) FROM Persons

LEN() – Returns the length of the value in a text field

Syntax – SELECT LEN(column_name) FROM table_name

Example – SELECT LEN(Address) FROM Persons

ROUND() – Rounds a numeric field to the number of decimals specified

Syntax – SELECT ROUND(column_name,decimals) FROM table_name

Example – SELECT ROUND(UnitPrice,0) FROM Products

NOW() – Returns the current system date and time

Syntax – SELECT NOW() FROM table_name

Example – SELECT ProductName, UnitPrice, Now() FROM Products

FORMAT() – Formats how a field is to be displayed

Syntax – SELECT FORMAT(column_name,format) FROM table_name

Example – SELECT ProductName, UnitPrice, FORMAT(Now(),'YYYY-MM-DD') FROM Products
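Exact scalar-function names vary by database: the syntax above follows the MS Access/MySQL style, while other engines spell them UPPER, LOWER, SUBSTR, and LENGTH. A quick way to try those equivalents is Python's built-in sqlite3, using a made-up Persons table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (First_name TEXT, City TEXT, Address TEXT)")
conn.execute("INSERT INTO Persons VALUES ('Alice', 'Chennai', '12 Main St')")

upper,   = conn.execute("SELECT UPPER(First_name) FROM Persons").fetchone()   # UCASE()
lower,   = conn.execute("SELECT LOWER(First_name) FROM Persons").fetchone()   # LCASE()
mid,     = conn.execute("SELECT SUBSTR(City, 1, 4) FROM Persons").fetchone()  # MID()
length,  = conn.execute("SELECT LENGTH(Address) FROM Persons").fetchone()     # LEN()
rounded, = conn.execute("SELECT ROUND(19.99, 0)").fetchone()                  # ROUND()

print(upper, lower, mid, length, rounded)  # ALICE alice Chen 10 20.0
```

Note that sqlite has no built-in FORMAT() or NOW(); it uses strftime() and the date/time functions instead, which is exactly the kind of vendor difference cross-database testing must account for.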

Written by QCBoss

August 5, 2011 at 12:11 pm