Resillion
Internal cheat sheet

Performance Testing Cheat Sheet

A practical guide for planning, running, and reporting performance testing in a way that supports Resillion's Total Quality approach. Use it to define realistic workloads, isolate bottlenecks, and provide decision-ready evidence.

Response times · Throughput · Concurrency · Scalability · Evidence
1. What Good Looks Like

Strong performance testing is not just “does it stay up?” It tells us how fast the service is, how stable it stays, and at what load it stops meeting user expectations.

1. Start with business-critical journeys
Focus first on login, search, checkout, API submission, reporting, or other high-value user flows.
2. Define measurable targets
Capture p95 response time, error rate, throughput, and resource thresholds before you start testing.
3. Use realistic demand
Model normal traffic, peak traffic, bursts, and background jobs based on expected client usage.
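One way to model realistic demand is an open workload: requests arrive at a target rate with random gaps, which reproduces the bursts and lulls of real traffic better than a fixed pacing loop. A minimal sketch (function names and rates are illustrative, not from any specific tool):

```python
import random

def arrival_times(rate_per_s: float, duration_s: float, seed: int = 42) -> list[float]:
    """Generate request start times for an open workload model.
    Exponential inter-arrival gaps (a Poisson process) produce
    natural bursts instead of perfectly even pacing."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)  # mean gap = 1 / rate
        if t > duration_s:
            return times
        times.append(t)

# Roughly rate * duration requests over the window, with bursts and lulls
peak = arrival_times(rate_per_s=50, duration_s=60)
```

A driver can then issue each request at its scheduled time, independent of how slowly the system responds, so slowdowns show up as queueing rather than as a silently reduced load.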
2. Primary Measures

Average response time · p95 / p99 latency · Requests per second · Concurrent users · Error rate · CPU / memory · Database wait time · Queue depth
Percentiles matter more than averages when you want to understand real user experience under load.
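A small example of why, using only the standard library (the latency values are made up to illustrate the point):

```python
import statistics

# 95 fast responses and 5 slow ones: the mean looks tolerable,
# but 5% of users wait over two seconds.
latencies_ms = [120] * 95 + [2400] * 5

mean = statistics.fmean(latencies_ms)                # 234.0 ms
p95 = statistics.quantiles(latencies_ms, n=100)[94]  # 95th percentile
p99 = statistics.quantiles(latencies_ms, n=100)[98]  # 99th percentile
```

The mean sits near the fast responses, while p95 and p99 expose the slow tail that real users actually feel.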
3. Choose The Right Test Type

Type | Use It For | Typical Question
Baseline | Establish current system behavior at known load. | How does the platform perform under expected day-to-day traffic?
Load | Validate expected operating levels over a realistic period. | Does the service meet targets at forecast production load?
Stress | Push beyond expected volume to find breakpoints and recovery behavior. | What fails first when demand exceeds capacity?
Spike | Model sudden surges in traffic. | How does the service handle a sharp burst in requests?
Soak | Run sustained demand over time. | Are there memory leaks, thread exhaustion, or slow degradation over hours?
Scalability | Evaluate how performance changes as resources or workload increase. | Does adding nodes, replicas, or compute actually improve throughput?
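Each test type is ultimately just a different load shape over time. A sketch of those shapes as stage lists, where every stage is a duration and a target level of concurrent virtual users (all numbers are placeholders to scale to your own forecast traffic, not recommendations):

```python
# Illustrative ramp profiles per test type: (duration_s, target_vus).
PROFILES = {
    "load":   [(300, 100), (3600, 100), (300, 0)],              # ramp up, hold, ramp down
    "stress": [(300, 100), (300, 200), (300, 400), (300, 0)],   # step past expected capacity
    "spike":  [(60, 20), (30, 500), (120, 500), (30, 20)],      # sudden surge and recovery
    "soak":   [(300, 80), (8 * 3600, 80), (300, 0)],            # sustained load for hours
}

def total_duration_s(profile: list[tuple[int, int]]) -> int:
    """Total wall-clock time a profile will run for."""
    return sum(duration for duration, _ in profile)
```

Most load tools accept stage definitions in roughly this form, so agreeing the shapes up front makes scenario scripts straightforward to review.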
4. Plan Before You Run

1. Confirm environment scope
Know whether caches, integrations, synthetic data, and monitoring match production closely enough.
2. Agree workload profile
Document user mix, transaction ratios, arrival rate, think time, and test duration.
3. Instrument the stack
Capture app, API, infrastructure, database, and third-party telemetry before the first run.
4. Set exit criteria
Be explicit about pass, fail, acceptable risk, and what evidence is needed for sign-off.
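Exit criteria are easiest to enforce when they are written down as data and checked mechanically. A minimal sketch, assuming hypothetical threshold names and values agreed before the run:

```python
# Hypothetical targets agreed with stakeholders before execution.
TARGETS = {"p95_ms": 800, "error_rate": 0.01, "throughput_rps": 200}

def meets_exit_criteria(measured: dict) -> tuple[bool, list[str]]:
    """Return (passed, failures) so the report shows exactly
    which agreed threshold was breached, not just a red flag."""
    failures = []
    if measured["p95_ms"] > TARGETS["p95_ms"]:
        failures.append(f"p95 {measured['p95_ms']} ms exceeds {TARGETS['p95_ms']} ms")
    if measured["error_rate"] > TARGETS["error_rate"]:
        failures.append(f"error rate {measured['error_rate']:.2%} exceeds {TARGETS['error_rate']:.2%}")
    if measured["throughput_rps"] < TARGETS["throughput_rps"]:
        failures.append(f"throughput {measured['throughput_rps']} rps below {TARGETS['throughput_rps']} rps")
    return (not failures, failures)
```

Listing every breach, rather than stopping at the first, keeps the sign-off conversation focused on evidence instead of opinion.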
5. Common Pitfalls

Avoid: Tiny or unrealistic data sets

Small datasets often hide indexing, caching, and query issues that only appear at scale.

Avoid: Testing without observability

If you only measure front-end timings, you may miss bottlenecks in queues, databases, or third parties.

Avoid: Using averages alone

Averages can look healthy while a meaningful slice of users experience poor response times.

Avoid: One-and-done runs

Single runs are noisy. Repeat key scenarios and compare trends across builds or releases.
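One simple way to damp that noise is to compare the median of several per-run p95 values instead of any single run. A sketch with a hypothetical 10% regression tolerance (the figures are illustrative):

```python
import statistics

def p95_regressed(baseline_runs: list[float], candidate_runs: list[float],
                  tolerance: float = 0.10) -> bool:
    """Compare the median of per-run p95s across repeated runs,
    so one noisy outlier run cannot decide the verdict."""
    base = statistics.median(baseline_runs)
    cand = statistics.median(candidate_runs)
    return cand > base * (1 + tolerance)

# Three runs each; the median absorbs the 900 ms outlier run
stable = p95_regressed([410, 430, 420], [425, 900, 440])   # False
slower = p95_regressed([410, 430, 420], [520, 530, 510])   # True
```

The same comparison run across builds gives the trend line the cheat sheet asks for, rather than a pair of disconnected snapshots.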

6. Fast Reporting Structure

Include: Decision-ready summary

State whether targets were met, where the bottleneck sits, the business impact, and what risk remains.

Scope · Workload · Targets · Result
Include: Evidence and next actions

Show charts, percentiles, error trends, infrastructure signals, and a short prioritized fix list.

p95 / p99 · Errors · CPU / memory · Recommendations
A useful performance report helps stakeholders decide whether to release, optimize, re-test, or scale.
7. Simple Performance Test Checklist

1. Critical journeys selected
The test scope covers the transactions that matter most to users and operations.
2. Targets agreed
Response time, throughput, concurrency, and error thresholds are defined before execution.
3. Environment understood
Known differences from production are documented and reflected in conclusions.
4. Telemetry live
Application, infrastructure, and dependency metrics are visible during each run.
5. Results interpreted, not just collected
The team can explain why performance changed and what should happen next.
8. Typical Deliverables

Performance test plan · Workload model · Scenario scripts · Monitoring dashboard · Execution log · Result summary · Risk / issue list · Retest recommendations
Keep the conclusion simple: met target, near target with risk, or failed target with evidence.
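That three-way conclusion can be made mechanical so every report uses the same wording. A minimal sketch, assuming a hypothetical 10% risk margin around the agreed p95 target:

```python
def verdict(p95_ms: float, target_ms: float, risk_margin: float = 0.10) -> str:
    """Map a measured p95 to the three conclusions used above:
    met target, near target with risk, or failed target."""
    if p95_ms <= target_ms:
        return "met target"
    if p95_ms <= target_ms * (1 + risk_margin):
        return "near target with risk"
    return "failed target"
```

Fixing the margin in advance stops a marginal result being argued into a pass after the fact.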