Performance Benchmarks

How We Benchmark

To ensure the accuracy and relevance of our performance claims, all Quickscope benchmarks are conducted on production-grade infrastructure under realistic conditions. Testing runs against simulated loads that mirror real-world user behavior, allowing us to capture metrics that reflect everyday use cases.

We run benchmarks across multiple global regions to assess how performance varies based on geographic location. This helps us fine-tune latency handling and ensure consistent results regardless of where users are located.

Our primary benchmarking focus is on three key areas: latency, throughput, and error rate. By measuring these both under stress conditions and during normal usage, we provide a transparent view of how Quickscope performs under sustained load and at scale.
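As an illustration, the three metrics above can be collected with a simple harness like the sketch below. This is not Quickscope's actual benchmarking tooling; the `fake_rpc_call` function is a hypothetical stand-in that you would replace with a real request to your endpoint.

```python
import random
import statistics
import time

def fake_rpc_call():
    """Hypothetical stand-in for a real RPC request.
    Replace with an actual call to the endpoint under test."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated network latency
    return random.random() > 0.02             # ~2% simulated error rate

def run_benchmark(n_requests=200):
    """Measure latency percentiles, throughput, and error rate
    over a fixed number of sequential requests."""
    latencies, errors = [], 0
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        ok = fake_rpc_call()
        latencies.append(time.perf_counter() - t0)
        if not ok:
            errors += 1
    elapsed = time.perf_counter() - start
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1000,
        "throughput_rps": n_requests / elapsed,
        "error_rate": errors / n_requests,
    }
```

A real harness would additionally issue requests concurrently and from multiple regions, but the metrics computed are the same three tracked here.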


Benchmarked Services

We benchmark both of our core services:

  • RPC, to evaluate request–response times, sustained throughput, and failure rates under different usage conditions
  • Router, to measure quote speed, route accuracy, fill success, and the ability to sustain high-frequency swap queries

Each service is tested independently using relevant endpoints and monitored throughout for reliability, responsiveness, and regional variance.