Maybe you’ve decided to learn a little more about the inner workings of your computer, or are shopping around for new components and want to have a better handle on the specs that determine your PC’s performance. Cracking open the contents of a computer is a lot like looking under the hood of a car for the first time — for the uninitiated, it looks like an indecipherable jungle of byzantine metal puzzle pieces.
Down below we’ll run you through one of the more common and easier-to-understand terms you’re likely to run into on your first foray into the sausage-making of computing: throughput. We’ll explain exactly what throughput is, how it works, and why it’s useful to know!
What is Throughput in Computing?
Throughput, in simple terms, refers to the amount of data that passes between two points in a given amount of time, whether between hard drives, over an internet connection, or anywhere else. The term actually originates from non-computing contexts as a measure of how much of anything could be processed, be it freight through a shipping line, passengers through a subway system, or product out of an assembly plant.
In the context of computing, throughput specifically refers to the rate of data transfer between two locations.
This is commonly confused with a related term, bandwidth, which is a measure of the maximum capacity for data transfer between given points, usually expressed in bits per second (or megabits and gigabits per second).
If bandwidth refers to the maximum possible transfer rate of data over time, throughput is the actual, measured rate. In non-computing terms, bandwidth would be how many passengers a subway line could carry in an hour if every car were full, while throughput would be how many people actually rode it in that hour.
In the context of your own computing, throughput can be a measure of how fast your CPU talks to your SSD or any other hardware component. It can also be a useful metric for measuring the performance of your internet connection.
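If you’d like to see the idea in action, here’s a minimal Python sketch of that same principle applied to your storage drive: write some data, time how long it takes, and divide the amount by the time. The file name and size here are just placeholders we’ve picked for illustration, and the exact number you get will depend on your drive and operating system.

```python
# A minimal sketch of measuring throughput yourself: time how long it
# takes to write a chunk of data to disk, then divide data by time.
# (File name and size are placeholders chosen for illustration.)
import os
import time

data = b"x" * (256 * 1024 * 1024)  # 256 MB of dummy data

start = time.perf_counter()
with open("throughput_test.bin", "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())           # make sure the data actually reaches the disk
elapsed = time.perf_counter() - start

throughput_mb_s = len(data) / (1024 * 1024) / elapsed
print(f"Wrote {len(data) // (1024 * 1024)} MB in {elapsed:.2f} s "
      f"({throughput_mb_s:.1f} MB/s)")

os.remove("throughput_test.bin")   # clean up the test file
```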
What is Throughput Used For?
Throughput, being a measure of data processing and transfer speed, can be used alongside other information to evaluate performance or diagnose technical issues.
For example, if your internet bandwidth is 200 Mbps, you know that your network connection could theoretically transfer up to 200 megabits per second. If a test reveals your throughput is only 10 megabits per second, that’s well below what you’d expect for that bandwidth, and a sign that something along the connection is worth investigating.
Throughput measurements can also be taken at specific points along a connection, allowing you to isolate the source of a technical issue.
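To put some numbers on the example above, here’s a quick back-of-the-envelope calculation in Python, using the 200 Mbps and 10 Mbps figures from before (and treating a 1 GB file as 8,000 megabits to keep the math simple):

```python
# Back-of-the-envelope math for the example above: 200 Mbps of bandwidth
# versus 10 Mbps of measured throughput, and what that means for a download.
bandwidth_mbps = 200    # advertised maximum (megabits per second)
throughput_mbps = 10    # what the speed test actually measured

utilization = throughput_mbps / bandwidth_mbps * 100
print(f"You're seeing {utilization:.0f}% of your theoretical bandwidth")

# How long would a 1 GB (roughly 8,000 megabit) download take in each case?
file_megabits = 8000
print(f"At full bandwidth: {file_megabits / bandwidth_mbps:.0f} seconds")
print(f"At measured throughput: {file_megabits / throughput_mbps:.0f} seconds")
```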
Low Throughput Causes
Heavy Traffic
In the case of network connections, internet bandwidth gets split between users. If you think of your data connection as a pipe, the diameter of the pipe is the bandwidth and the water traveling through it is the data. Only so much water can move through the pipe at once, so the more people drawing from it, the less each individual receives. Less data over a given period of time means lower throughput.
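As a rough illustration, here’s the kind of arithmetic involved. Real networks don’t split traffic into perfectly even slices, so treat this as a sketch of the principle rather than a prediction of what you’d actually measure:

```python
# Rough illustration of bandwidth splitting: the more people sharing the
# same connection at once, the smaller each person's slice of it.
bandwidth_mbps = 200   # the whole "pipe"

for users in (1, 2, 4, 8):
    print(f"{users} heavy user(s): roughly {bandwidth_mbps / users:.0f} Mbps each")
```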
Latency
Hearkening back to our water pipe example, if bandwidth is the diameter of the pipe and the water flowing through it the data, latency is how long the water takes to travel from one end to the other. Since throughput is a measure of data processed over a given time frame, higher latency means more time spent waiting for data to arrive, which necessarily means lower throughput.
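One concrete place this shows up is in protocols like TCP, which only allow a certain amount of data to be “in flight” at once (the window). A common rule of thumb is that throughput can’t exceed the window size divided by the round-trip time, so the same connection delivers far less data per second as latency climbs. Here’s a rough sketch of that math, using the classic 64 KB window to keep things simple (modern systems can scale the window up, but the principle is the same):

```python
# Rough rule of thumb for TCP: throughput is capped at roughly
# window size / round-trip time, so higher latency means lower throughput
# even when nothing else changes.
window_bytes = 64 * 1024          # the classic 64 KB TCP window, for simplicity

for rtt_ms in (10, 50, 200):
    rtt_s = rtt_ms / 1000
    max_throughput_mbps = window_bytes * 8 / rtt_s / 1_000_000
    print(f"{rtt_ms} ms round trip: at most ~{max_throughput_mbps:.1f} Mbps")
```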
Packet Loss
Packet loss may come up when troubleshooting network issues. The term refers to data units (packets) that were damaged or lost during transfer. In the context of our water pipe analogy, this would be something like leaks in the piping: a certain amount of data is sent, but some of it is lost en route, which means lower throughput. Packet loss can indicate a number of issues, ranging from security attacks on a connection to faulty hardware.
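For the curious, there’s a widely cited approximation (the Mathis formula) for how packet loss drags down TCP throughput: roughly the packet size divided by the round-trip time, scaled by a constant over the square root of the loss rate. The numbers below are purely illustrative, but they show how even a little loss takes a big bite out of throughput:

```python
# A widely cited rule of thumb (the Mathis formula) for how packet loss
# limits TCP throughput: roughly (packet size / round-trip time)
# multiplied by a constant over the square root of the loss rate.
import math

mss_bytes = 1460        # typical maximum segment size on Ethernet
rtt_s = 0.05            # 50 ms round trip, just for illustration

for loss_rate in (0.0001, 0.001, 0.01):   # 0.01%, 0.1%, 1% packet loss
    throughput_bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))
    print(f"{loss_rate:.2%} loss: roughly {throughput_bps / 1_000_000:.1f} Mbps")
```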
And that’s your basic rundown of throughput in the context of computing. If you have any other questions, related or unrelated to throughput, feel free to shoot them at us in the comments below — we’d love to answer them for you!