
Can we do performance testing manually?

Yes, performance testing can be done manually. It is one strategy for executing performance tests, but it does not produce repeatable results, cannot deliver measurable, controlled levels of stress on an application, and is very difficult to organize. It also depends on what type of performance test the tester wants to run. In general, though, a tester can use a performance viewer to monitor active sessions, the number of open database connections, the number of running threads (for Java-based web applications), and the total CPU time and memory in use. IBM Tivoli Performance Viewer and WAPT are available as trial versions, and JMeter can be used as well since it is an open-source tool. Typically the test is run by installing the application on a server, accessing it from several client machines, and driving numerous concurrent threads against it. The performance viewer should, of course, be installed on the server.
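As a rough illustration of that setup, here is a minimal sketch in plain Java (using the JDK 11+ HttpClient) that drives a few client threads against a single URL and records client-side response times. The URL, thread count, and request count are illustrative placeholders, not values from any particular project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ManualLoadCheck {

    // Illustrative target; replace with the application deployed on the test server.
    private static final String TARGET_URL = "http://test-server.example.com/app/login";
    private static final int THREADS = 10;            // simulated clients
    private static final int REQUESTS_PER_THREAD = 5; // requests per client

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        List<Long> timings = Collections.synchronizedList(new ArrayList<>());

        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < THREADS; i++) {
            Thread t = new Thread(() -> {
                for (int r = 0; r < REQUESTS_PER_THREAD; r++) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(TARGET_URL)).GET().build();
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        timings.add((System.nanoTime() - start) / 1_000_000); // elapsed ms
                    } catch (Exception e) {
                        System.err.println("Request failed: " + e.getMessage());
                    }
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join();
        }

        // Crude summary: average and worst response time observed on the client side.
        long max = timings.stream().mapToLong(Long::longValue).max().orElse(0);
        double avg = timings.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.printf("Requests: %d, avg: %.1f ms, max: %d ms%n", timings.size(), avg, max);
    }
}
```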

Some of the techniques for performing performance testing manually are:

1) If a tester is testing a website, the odds are that performance testing the front end will cut response times in half, sometimes more.

2) Use browser plug-ins or online tools to capture page load times.

3) Ask functional testers and/or user acceptance testers to record their impressions of performance while they are testing. It may be useful to give them a scale to use, such as “fast, acceptable, tolerable, annoying, unusable”.

4) Have the developers put timers in their unit tests. These won’t tell the tester anything about user-observed response times, but developers will be able to see whether their functions, modules, classes, objects, etc. take more or less time to execute from build to build (a minimal unit-test timer sketch appears after this list). The same idea can be applied to resource utilization (such as CPU and memory), depending on the skills and/or tools available to the development team.

5) Testers should get increasing numbers of co-workers to use the application during a specified period of time and ask them to note both the response time (which is easiest to do using the above-mentioned browser plug-ins) and their impression of the application’s performance (give them the same scale used for the functional and/or user acceptance testers).

6) Testers should have performance builds made with timestamps strategically output to log files. Evaluate the log files build after build and track the trends (see the logging sketch below).
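For point 4, a minimal sketch of a timer inside a unit test is shown below. It assumes JUnit 5 is on the classpath, and the sorting workload and the 200 ms budget are purely illustrative stand-ins for whatever function or module the developers actually want to track from build to build.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Random;
import org.junit.jupiter.api.Test;

class SortTimingTest {

    // Stand-in workload; in a real project this would be the function/module under test.
    private int[] buildWorkload() {
        Random random = new Random(42);
        int[] data = new int[500_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt();
        }
        return data;
    }

    @Test
    void sortStaysWithinBudget() {
        int[] data = buildWorkload();

        long start = System.nanoTime();
        java.util.Arrays.sort(data);                       // the code path being timed
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Printed every build so the trend can be tracked; the 200 ms budget is illustrative.
        System.out.println("sort took " + elapsedMs + " ms");
        assertTrue(elapsedMs < 200, "sort exceeded the 200 ms budget");
    }
}
```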
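For point 6, one simple way to get timestamped checkpoints into a log file is the JDK’s own java.util.logging, which prefixes every record with a timestamp by default. CheckoutFlow and the checkpoint names below are illustrative; the point is only that the same log lines can be compared across builds.

```java
import java.util.logging.Logger;

public class CheckoutFlow {

    private static final Logger LOG = Logger.getLogger(CheckoutFlow.class.getName());

    // Each info() call produces a timestamped record, so diffing the same
    // checkpoints across builds shows where time is being gained or lost.
    public void processOrder(String orderId) {
        LOG.info("checkout.start orderId=" + orderId);
        validate(orderId);
        LOG.info("checkout.validated orderId=" + orderId);
        persist(orderId);
        LOG.info("checkout.persisted orderId=" + orderId);
        LOG.info("checkout.end orderId=" + orderId);
    }

    private void validate(String orderId) { /* placeholder for real validation */ }

    private void persist(String orderId)  { /* placeholder for real persistence */ }
}
```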

It also depends on what you’re trying to achieve from the test.

– If you want to simulate 20 users using a website and get an overall impression of user response time, then this approach is OK.

– If you want to simulate 20 users all executing the same piece of code at exactly the same time, then this is unlikely to work unless the code takes a long time to execute (a latch-based sketch for lining threads up follows this list).

– If you want precise metrics, then this possibly isn’t the best way.
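If the goal really is to have 20 users hit the same piece of code at the same instant, the threads at least need to be lined up before they fire. The sketch below uses a CountDownLatch to release 20 threads together; exerciseCodeUnderTest() is a hypothetical stand-in for the shared operation being exercised.

```java
import java.util.concurrent.CountDownLatch;

public class SimultaneousUsers {

    private static final int USERS = 20;

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(USERS);

        for (int i = 0; i < USERS; i++) {
            final int user = i;
            new Thread(() -> {
                try {
                    startSignal.await();          // every thread waits at the same line
                    exerciseCodeUnderTest(user);  // placeholder for the operation being tested
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    doneSignal.countDown();
                }
            }).start();
        }

        long start = System.nanoTime();
        startSignal.countDown();                  // release all 20 threads at once
        doneSignal.await();
        System.out.println("All users finished in "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }

    private static void exerciseCodeUnderTest(int user) {
        // Illustrative stand-in; in practice this would call the shared code path.
        System.out.println("user " + user + " executed the operation");
    }
}
```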
