How bandwidth relates to transfer time
Network bandwidth is measured in bits per second — kilobits (kbps), megabits (Mbps), or gigabits (Gbps) — while file size is usually shown in bytes. Because there are eight bits in a byte, a 1 GB file contains 8 gigabits of data. On a raw 100 Mbps connection, transferring those 8 Gb takes about 80 seconds before any overhead is counted.
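The worked figure above can be reproduced in a few lines of Python. This sketch assumes decimal SI units throughout (1 GB = 10^9 bytes, 1 Mbps = 10^6 bits per second):

```python
# Raw transfer time for a 1 GB file over a 100 Mbps link, ignoring overhead.
file_bytes = 1_000_000_000        # 1 GB (decimal)
file_bits = file_bytes * 8        # 8 gigabits
bandwidth_bps = 100_000_000       # 100 Mbps

raw_time_s = file_bits / bandwidth_bps
print(raw_time_s)                 # 80.0
```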
In practice, every network connection has protocol overhead — the extra data that TCP, IP, and the physical layer add around each payload. A 5–10% overhead assumption is typical for wired Ethernet; wireless and long-distance WAN connections can be higher. Adding this overhead to the calculation gives a more realistic estimate.
Transfer time (s) = File size (bits) / Effective bandwidth (bps)
File size must first be converted to bits (multiply bytes by 8); the denominator is the effective bandwidth that remains once overhead is subtracted.
Effective bandwidth = Raw bandwidth / (1 + Overhead fraction)
A 5% overhead reduces 100 Mbps to about 95.2 Mbps of effective payload throughput.
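The two formulas above can be combined into a small helper. This is a sketch, not a definitive implementation; the 5% default overhead is the wired-Ethernet assumption stated earlier, and decimal SI units are assumed:

```python
def effective_bandwidth(raw_bps: float, overhead: float) -> float:
    # Divide out the overhead fraction to get payload throughput.
    return raw_bps / (1 + overhead)

def transfer_time(file_bytes: float, raw_bps: float, overhead: float = 0.05) -> float:
    # Convert bytes to bits, then divide by the effective bandwidth.
    return file_bytes * 8 / effective_bandwidth(raw_bps, overhead)

print(round(effective_bandwidth(100e6, 0.05) / 1e6, 1))  # 95.2 (Mbps)
print(round(transfer_time(1e9, 100e6), 1))               # 84.0 (seconds)
```

The 84-second result shows how the same 1 GB transfer stretches from the raw 80 seconds once 5% overhead is counted.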
Required bandwidth = File size (bits) × (1 + Overhead fraction) / Time (s)
Rearranging the formula gives the minimum bandwidth needed to transfer a file within a given time window.
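The rearranged formula can be sketched the same way, again assuming decimal units and the 5% overhead figure from above (the 60-second window is an illustrative value, not from the text):

```python
def required_bandwidth(file_bytes: float, time_s: float, overhead: float = 0.05) -> float:
    # Minimum raw bandwidth (bps) to move the file within time_s,
    # inflating the payload by the overhead fraction.
    return file_bytes * 8 * (1 + overhead) / time_s

# Moving 1 GB within 60 seconds at 5% overhead:
print(required_bandwidth(1e9, 60) / 1e6)  # 140.0 (Mbps)
```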