/ˈθruː.pʊt/

noun — "how much your network or system can handle before it throws a tantrum."

Throughput in information technology refers to the amount of work, data, or transactions that a system, network, or application can process in a given period of time. It is a key metric for evaluating the performance, capacity, and efficiency of IT infrastructure.
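
At its simplest, throughput is completed work divided by elapsed time. A back-of-the-envelope illustration in Python (the figures are invented for the example):

    # Hypothetical example: a file server moves 4.5 GiB in 60 seconds.
    bytes_transferred = 4.5 * 1024**3            # 4.5 GiB expressed in bytes
    elapsed_seconds = 60
    throughput_bps = bytes_transferred * 8 / elapsed_seconds  # bits per second
    print(f"{throughput_bps / 1e6:.1f} Mbit/s")  # ~644.2 Mbit/s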

Technically, Throughput involves:

  • Data transmission rates — measuring bits, bytes, or packets successfully delivered over a network per unit of time.
  • System processing rates — tracking completed operations or transactions per second in software or databases (see the sketch after this list).
  • Concurrency and bottlenecks — identifying resource limits that constrain overall output.
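
As a concrete illustration of a processing-rate measurement, here is a minimal Python sketch; process_transaction() is a hypothetical stand-in for the real workload (a query, a write, a computation):

    import time

    def process_transaction():
        # Placeholder for real work (database query, disk write, computation).
        sum(range(1000))

    start = time.perf_counter()
    completed = 0
    while time.perf_counter() - start < 5.0:   # measure for roughly 5 seconds
        process_transaction()
        completed += 1
    elapsed = time.perf_counter() - start
    print(f"Throughput: {completed / elapsed:.0f} transactions/second")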

Examples of Throughput in practice include:

  • Measuring how many HTTP requests a web server handles per second under load (see the sketch after this list).
  • Calculating database query processing rates during peak usage.
  • Evaluating network bandwidth utilization versus actual data delivered to endpoints.
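
A rough sketch of the first example above: it fires concurrent requests at a URL and counts successful responses per second. The target URL, test duration, and concurrency level are all placeholder assumptions, not recommendations:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder target
    DURATION = 10.0                  # seconds to run the test
    WORKERS = 20                     # arbitrary concurrency level

    def fetch():
        # Return True only for a successful 200 response.
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    start = time.perf_counter()
    ok = 0
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        while time.perf_counter() - start < DURATION:
            # Submit one batch of concurrent requests and tally successes.
            futures = [pool.submit(fetch) for _ in range(WORKERS)]
            ok += sum(f.result() for f in futures)
    elapsed = time.perf_counter() - start
    print(f"{ok} successful requests in {elapsed:.1f}s "
          f"-> {ok / elapsed:.1f} requests/second")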

Conceptually, Throughput is the work meter of a system—it tells you how much your infrastructure can really do, not just what it theoretically promises.

In practice, Throughput is analyzed alongside Latency, Bandwidth, Network Monitoring, and Performance metrics to optimize IT systems and maintain reliable service delivery.
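
One handy bridge between these metrics is Little's Law: in a stable system, sustained throughput equals average concurrency divided by average latency. A quick illustrative calculation (figures invented for the example):

    # Little's Law: throughput = concurrency / latency
    avg_concurrent_requests = 50    # requests in flight at once (assumed)
    avg_latency_seconds = 0.200     # 200 ms average response time (assumed)
    throughput = avg_concurrent_requests / avg_latency_seconds
    print(f"Sustainable throughput: {throughput:.0f} requests/second")  # 250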

See Latency, Bandwidth, Network Monitoring, Performance, Monitoring.