The Tessabyte Throughput Test server can handle multiple clients concurrently, so when more than one client is connected, the available bandwidth is inevitably shared among them. Unless the server's connection is so wide that it far exceeds the clients' combined bandwidth, you should expect multiple concurrent client connections to affect per-client throughput.
For example, consider a local LAN segment with a 1 Gbps link between a server and a few client computers. If you run a Tessabyte client on just one of these computers, you can expect a maximum throughput of about 0.9 Gbps. If three clients are connected, there will inevitably be periods when all three are sending a TCP stream to the server, because the testing cycles (TCP Up → TCP Down → UDP Up → UDP Down) are not synchronized. During such periods, throughput might drop to approximately 0.3 Gbps per client; when only one client happens to be sending data at a given moment, throughput may again reach 0.9 Gbps. To summarize, the chart may appear volatile when multiple client connections are active.
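If you want a quick sanity check of what to expect, the per-client figure during overlapping streams is simply the usable link capacity divided by the number of clients transmitting at the same time. The following Python sketch is only an illustration of that arithmetic; the numbers are assumptions, not measurements produced by the tool:

```python
# Back-of-the-envelope estimate of per-client throughput when several
# clients happen to stream to the server at the same time.

def per_client_throughput_gbps(usable_link_gbps: float, concurrent_senders: int) -> float:
    """Evenly split the usable link capacity among simultaneous senders."""
    return usable_link_gbps / concurrent_senders

usable = 0.9  # ~0.9 Gbps of goodput on an otherwise idle 1 Gbps LAN segment

print(per_client_throughput_gbps(usable, 1))  # ~0.9 Gbps with a single client
print(per_client_throughput_gbps(usable, 3))  # ~0.3 Gbps when three streams overlap
```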
Note that the client's log window on the Dashboard page keeps you informed about the number of currently connected clients whenever this number changes. This way, you can always determine whether the connection is exclusively yours or if other clients are sharing the bandwidth with you.
If you observe consistently low throughput compared to what you expect from the network link between the server and the client, then first of all, congratulations! You've used the right tool to discover the problem. However, after a brief celebration, let's dig into the issue. The most common reasons for throughput below the expected level are:
Note that this is by no means a comprehensive list; there are many other potential causes of low throughput, such as high network latency combined with a small TCP window size, packet fragmentation, and so on.
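The combination of high latency and a small TCP window deserves a closer look, because it caps single-stream throughput no matter how fast the link is. The Python sketch below applies the standard bandwidth-delay relationship (window size divided by round-trip time); the window and RTT values are hypothetical:

```python
# Upper bound on single-stream TCP throughput imposed by the receive
# window and the round-trip time (the bandwidth-delay product).

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Window size converted to bits, divided by the RTT in seconds."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A 64 KB window over a 50 ms path caps out near 10 Mbps,
# regardless of the nominal link speed.
print(max_tcp_throughput_mbps(window_bytes=64 * 1024, rtt_ms=50))  # ≈ 10.5
```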
Sometimes, the reported throughput rate might exceed the theoretical maximum. For example, while testing a 5 Gbps link between two computers, you might observe a spike in TCP downlink or uplink throughput that exceeds 5 Gbps. This is typically caused by buffering at the operating system level, where a relatively large chunk of data is accumulated before being "pushed" to the receiving application.
If you observe such spikes, the best solution is to use a custom TCP payload size. The payload size should be large enough to take at least half a second to transfer over your network link. To learn how to use a custom TCP payload size, refer to the Customizing Settings chapter.
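As a rough guide, you can estimate a suitable payload size from the nominal link rate: it should carry at least half a second's worth of traffic. The Python sketch below merely illustrates that arithmetic with an assumed 5 Gbps link; the actual option is set as described in the Customizing Settings chapter:

```python
# Choose a TCP payload size large enough to take at least half a second
# to transfer, so OS-level buffering cannot inflate individual samples.

def suggested_payload_bytes(link_gbps: float, min_transfer_seconds: float = 0.5) -> int:
    """Bytes needed to keep a single payload on the wire for the given time."""
    bits = link_gbps * 1e9 * min_transfer_seconds
    return int(bits / 8)

# For a 5 Gbps link, a payload of roughly 312 MB stays on the wire for 0.5 s.
print(suggested_payload_bytes(5.0))  # 312500000 bytes
```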
This might not be a good sign for your network infrastructure. Any throughput testing tool generates a considerable load on the network hardware, sometimes to the extent that the hardware fails to reliably handle the high load. While the specific reason may vary (for example, the adapter chips might overheat, triggering a self-protection mechanism such as thermal shutdown), this is often an indication that the infrastructure being tested is unable to pass the stress test.
Generally, non-zero loss of UDP packets is totally normal. Unlike TCP, the UDP protocol is connectionless and does not guarantee delivery. Several factors may contribute to such loss:
High downstream UDP loss is quite common when running the client on a computer with a Wi-Fi link. UDP traffic is not acknowledged, meaning the sender can transmit as much traffic as the network can handle without “caring” about how much of it is lost. If you run the server on the wired side of the network, a typical computer equipped with a gigabit adapter can send hundreds of megabits per second. This data first reaches a switch, which might be the first bottleneck, and then the access point, which is often another bottleneck. In a multi-client environment, even modern 802.11be APs cannot provide a gigabit downlink to all clients simultaneously. As a result, many UDP packets might be lost en route. However, this is the only way to determine the maximum downstream UDP throughput.
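To see why the loss percentage can be large yet still meaningful, keep in mind that whatever the bottleneck (switch or access point) cannot forward is simply dropped, because UDP never retransmits. The Python sketch below is only an illustration of that relationship; the rates are assumed, not measured by the tool:

```python
# Illustrative only: the fraction of UDP datagrams dropped when the sender
# transmits faster than the bottleneck (e.g., a Wi-Fi downlink) can forward.

def expected_udp_loss(send_rate_mbps: float, bottleneck_mbps: float) -> float:
    """Return the expected loss fraction; 0.0 when the path keeps up."""
    if send_rate_mbps <= bottleneck_mbps:
        return 0.0
    return 1.0 - bottleneck_mbps / send_rate_mbps

# A wired sender pushing 900 Mbps toward a Wi-Fi client that can only
# absorb 300 Mbps loses roughly two thirds of the datagrams en route.
print(expected_udp_loss(900, 300))  # ≈ 0.67
```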
A 100% downlink UDP loss is typically caused by a firewall or NAT issue. This means that the UDP data sent from the server cannot reach the client. During UDP testing, the client sends upstream UDP traffic from an arbitrary UDP port to the server port (32500 by default). The return downstream traffic originates from an arbitrary server port and is sent to a port on the client side determined by the following rule: server port + 1 (32501 with the default server port of 32500). If this port is unavailable, the next available port is selected. While this information can help you configure a firewall or port forwarding rule, UDP traffic cannot easily traverse NATs. If you want to test UDP, you should avoid NATs or consider using IPv6, which generally eliminates the need for NAT.
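If you need to verify which client-side port to open, the rule above is easy to reproduce. The following Python sketch (an illustration of the described rule, not code from the product) finds the first bindable UDP port starting at server port + 1:

```python
import socket

# Illustrative sketch of the port-selection rule described above: downstream
# UDP traffic targets (server port + 1) on the client, falling back to the
# next available port if that one cannot be bound.

def downstream_udp_port(server_port: int = 32500, max_attempts: int = 100) -> int:
    """Return the first bindable UDP port at or above server_port + 1."""
    for offset in range(1, max_attempts + 1):
        port = server_port + offset
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            try:
                sock.bind(("", port))  # succeeds only if the port is free
                return port
            except OSError:
                continue               # port in use, try the next one
    raise RuntimeError("no free UDP port found near the expected range")

print(downstream_udp_port())  # 32501 with the default server port, if free
```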