Saturday, July 30, 2011

Bug Life Cycle

The various bug statuses in the bug life cycle are listed below; a small sketch of the transitions follows the list.
New: When a bug is first found, the tester logs it with the status New.
Open: If it is a valid bug, the status is changed to Open.
Defect Rejected: If the bug is not a valid bug, the status is changed to Defect Rejected.
Fixed: After debugging, the developer changes the status to Fixed.
Could Not Reproduce: If the bug cannot be reproduced, the developer changes the status to Could Not Reproduce.
Closed: If the bug has really been fixed, the tester changes the status to Closed.
Fix Rejected: If the bug is not actually fixed, the tester changes the status to Fix Rejected.
Reopen: If a closed defect resurfaces during regression testing, the status is changed to Reopen.
Deferred: The developer has accepted it as a bug, but the fix is scheduled for a later build.
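
For readers who like to see the life cycle as code, here is a minimal sketch of the statuses above modeled as a simple state machine in Python. The transition map is an assumption about which moves a typical bug tracker allows; real tools configure these differently.

# A minimal sketch of the bug life cycle as a state machine.
# The status names follow the list above; the allowed transitions
# are assumptions, since trackers configure these differently.
VALID_TRANSITIONS = {
    "New": {"Open", "Defect Rejected"},
    "Open": {"Fixed", "Could Not Reproduce", "Deferred"},
    "Fixed": {"Closed", "Fix Rejected"},
    "Fix Rejected": {"Fixed"},
    "Could Not Reproduce": {"Open", "Closed"},
    "Deferred": {"Open"},
    "Closed": {"Reopen"},
    "Reopen": {"Fixed", "Could Not Reproduce"},
    "Defect Rejected": set(),      # terminal state
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.status = "New"        # every bug starts life as New

    def move_to(self, new_status):
        if new_status not in VALID_TRANSITIONS[self.status]:
            raise ValueError(f"Cannot go from {self.status} to {new_status}")
        self.status = new_status

bug = Bug("Login button unresponsive")
bug.move_to("Open")    # tester confirms it is a valid bug
bug.move_to("Fixed")   # developer fixes it
bug.move_to("Closed")  # tester verifies the fix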


Friday, July 29, 2011

Difference between BUG and Defect

Software bug" is nonspecific; it means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g. "software defect", or "software failure", are more specific.
While the word "bug" has been a part of engineering jargon for many-many decades; many-many decades ago even Thomas Edison, the great inventor, wrote about a "bug" - today there are many who believe the word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.
In software testing, the difference between "bug" and "defect" is small, and also depends on the end client. For some clients, bug and defect are synonymous, while others believe bugs are subsets of defects.
Difference number one: In bug reports, defects are easier to describe than bugs.
Difference number two: In my bug reports, it is easier to write descriptions of how to replicate defects. In other words, defects tend to require only brief explanations.
Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are the results of software testing.

Thursday, July 28, 2011

Five solutions to problems that occur during software development


Solid requirements, realistic schedules, adequate testing, sticking to the initial requirements, and good communication.
1.      Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
2.      Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
3.      Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4.      Avoid new features. Stick to the initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain the consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
5.      Communicate. Require walk-throughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, and change management tools. Ensure documentation is available and up-to-date. Use electronic documentation, not paper. Promote teamwork and cooperation.


Wednesday, July 27, 2011

History of Internet Explorer

The following is a history of the Internet Explorer graphical web browser from Microsoft, developed over nine major versions: 1.0 (1995), 2.0 (1995), 3.0 (1996), 4.0 (1997), 5.0 (1999), 6.0 (2001), 7.0 (2006), 8.0 (2009), and 9.0 (2011), which began public beta testing in September 2010.[1] Internet Explorer has primarily supported Microsoft Windows, but some versions were also released for the Apple Macintosh; see Internet Explorer for Mac. For the UNIX version, see Internet Explorer for UNIX. For mobile versions such as Pocket Internet Explorer and versions for Windows CE, see Internet Explorer Mobile.
The first Internet Explorer was derived from Spyglass Mosaic. The original Mosaic came from NCSA, but since NCSA was a public entity, it relied on Spyglass as its commercial licensing partner. Spyglass in turn delivered two versions of the Mosaic browser to Microsoft, one wholly based on the NCSA source code, and another engineered from scratch but conceptually modeled on the NCSA browser. Internet Explorer was initially built using the Spyglass, not the NCSA, source code.[2] The license provided Spyglass (and thus NCSA) with a quarterly fee plus a percentage of Microsoft's revenues for the software.
Internet Explorer has been the most widely used web browser since 1999, attaining a peak of about 95% usage share during 2002 and 2003 with Internet Explorer 5 and Internet Explorer 6. Since that peak, its usage share has declined in the face of renewed competition from other web browsers, falling to 43.55% as of February 2011. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s,[1] with over 1,000 people working on it by 1999.[2]
Since its first release, Microsoft has added features and technologies such as basic table display (in version 1.5); XMLHttpRequest (in version 5), which aids the creation of dynamic web pages; and Internationalized Domain Names (in version 7), which allow Web sites to have native-language addresses with non-Latin characters. The browser has also received scrutiny throughout its development for its use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and for security and privacy vulnerabilities, and both the United States and the European Union have alleged that the integration of Internet Explorer with Windows has been to the detriment of other browsers.
The latest stable release is Internet Explorer 9, which is available as a free update for Windows 7, Windows Vista SP2, Windows Server 2008 and Windows Server 2008 R2. Internet Explorer was to be omitted from Windows 7 and Windows Server 2008 R2 in Europe, but Microsoft ultimately included it, with a browser option screen allowing users to select any of several web browsers (including Internet Explorer).[3][4][5][6]
Versions of Internet Explorer for other operating systems have also been produced, including an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, which is currently based on Internet Explorer 7 and made for Windows Phone 7, Windows CE, and previously Windows Mobile. It remains in development alongside the more advanced desktop versions. Internet Explorer for Mac and Internet Explorer for UNIX (Solaris and HP-UX) have been discontinued.


SQL QUERY TO FIND UNUSED TABLES & STORED PROCEDURES FROM A DB


Hi, I have added the query below to find unused tables (via their indexes) in a database. First select the database, then run this query.
-- Lists indexes on user tables that have no entry in the index-usage DMV,
-- i.e. indexes (and by extension tables) that have not been used since the
-- last SQL Server restart.
SELECT OBJECTNAME = OBJECT_NAME(I.OBJECT_ID),
       INDEXNAME  = I.NAME,
       I.INDEX_ID
FROM SYS.INDEXES AS I
INNER JOIN SYS.OBJECTS AS O
        ON I.OBJECT_ID = O.OBJECT_ID
WHERE OBJECTPROPERTY(O.OBJECT_ID, 'IsUserTable') = 1
  AND I.INDEX_ID NOT IN (SELECT S.INDEX_ID
                         FROM SYS.DM_DB_INDEX_USAGE_STATS AS S
                         WHERE S.OBJECT_ID = I.OBJECT_ID
                           AND I.INDEX_ID = S.INDEX_ID
                           AND DATABASE_ID = DB_ID(DB_NAME()))
ORDER BY OBJECTNAME, I.INDEX_ID, INDEXNAME ASC
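
Note that SYS.DM_DB_INDEX_USAGE_STATS is cleared whenever the SQL Server instance restarts, so the query only reports objects that have not been used since the last restart. Also note that this query covers tables and their indexes; for stored procedures, one option on SQL Server 2008 and later is a similar check against SYS.DM_EXEC_PROCEDURE_STATS, which likewise reflects only recent (cached) usage.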

I look forward to your comments.

Tuesday, July 26, 2011

What is a Requirement Test Matrix?

The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle.
The requirements test matrix is a table in which requirement descriptions are placed in the rows, and descriptions of the testing efforts are placed in the column headers.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.
The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
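
As a rough illustration, here is a minimal sketch of such a matrix in Python; the requirement descriptions, test-effort columns, and statuses are all hypothetical.

# A tiny requirements test matrix: rows are requirements, columns are
# testing efforts, and each cell records coverage status.
# All IDs, column names, and statuses below are invented for illustration.
test_efforts = ["Test Case", "System Test", "UAT"]

matrix = {
    "REQ-001: User can log in": {
        "Test Case": "TC-101", "System Test": "Pass", "UAT": "Pending",
    },
    "REQ-002: Password reset e-mail is sent": {
        "Test Case": "TC-102", "System Test": "Fail", "UAT": "Blocked",
    },
}

# Every requirement must appear as a row; a missing row means an
# untested requirement, which is exactly what the matrix exposes.
for requirement, row in matrix.items():
    cells = ", ".join(f"{col}: {row[col]}" for col in test_efforts)
    print(f"{requirement} -> {cells}")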

Friday, July 22, 2011

What should be done if there isn't enough time for thorough testing?

Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused; this requires judgment, common sense, and experience. The checklist should include answers to the following questions (a small scoring sketch follows the list):
  • Which functionality is most important to the project's intended purpose?
  • Which functionality is most visible to the user?
  • Which functionality has the largest safety impact?
  • Which functionality has the largest financial impact on users?
  • Which aspects of the application are most important to the customer?
  • Which aspects of the application can be tested early in the development cycle?
  • Which parts of the code are most complex and thus most subject to errors?
  • Which parts of the application were developed in rush or panic mode?
  • Which aspects of similar/related previous projects caused problems?
  • Which aspects of similar/related previous projects had large maintenance expenses?
  • Which parts of the requirements and design are unclear or poorly thought out?
  • What do the developers think are the highest-risk aspects of the application?
  • What kinds of problems would cause the worst publicity?
  • What kinds of problems would cause the most customer service complaints?
  • What kinds of tests could easily cover multiple functionalities?
  • Which tests will have the best high-risk-coverage to time-required ratio?
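
One lightweight way to act on this checklist is to score each area of the application on likelihood of failure and impact of failure, then test the highest-scoring areas first. A minimal sketch, assuming a 1-to-5 scale for both factors (the functional areas and scores are invented for illustration):

# Risk-based test prioritization: risk = likelihood * impact,
# each scored 1 (low) to 5 (high) using the checklist questions above.
# The functional areas and their scores are hypothetical examples.
areas = {
    "Payment processing": (4, 5),  # complex code, large financial impact
    "Report generation":  (3, 4),  # rushed development, customer-facing
    "User profile page":  (2, 2),  # simple, low visibility
}

ranked = sorted(areas.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for area, (likelihood, impact) in ranked:
    print(f"{area}: risk score = {likelihood * impact}")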

Thursday, July 21, 2011

Five Common Problems that occur during Software Development


Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.
1.      Requirements are poorly written when they are unclear, incomplete, too general, or not testable; such requirements are bound to cause problems.
2.      The schedule is unrealistic if too much work is crammed into too little time.
3.      Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
4.      It's extremely common for new features to be added after development is underway.
5.      Miscommunication means either that the developers don't know what is needed or that customers have unrealistic expectations; either way, problems are guaranteed. 

Wednesday, July 20, 2011

Reasons for the Occurrence of Many Bugs in Software

Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development. 
  • There are unclear software requirements because there is miscommunication as to what the software should or shouldn't do.
  • Software complexity. All of the following contribute to the exponential growth in software and system complexity: windowed interfaces, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications.
  • Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
  • As for changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes may require redesign of the software and rescheduling of resources; some of the work already completed may have to be redone or discarded, and hardware requirements can be affected, too.
  • Bug tracking can itself introduce errors, because keeping track of a large volume of changes is complex.
  • Time pressure can cause problems, because scheduling software projects is difficult and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.
  • Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or they feel they have job security if no one else can understand the code they write, or they believe that if the code was hard to write, it should be hard to read.
  • Software development tools, including visual tools, class libraries, compilers, and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.