I've had Starlink since the early beta days and have been watching connection metrics very closely. Latency is a much more useful measurement than throughput, and so is packet loss. FWIW I've averaged 0.6% packet loss, 38 ms latency, and 130/20 Mbit/s in Grass Valley, CA over the last month.
But averages obscure what's really important. The biggest indicator of Starlink congestion problems has been how packet loss increases in the evenings. Average latency is interesting, but much more interesting is the variance of latency, i.e. jitter. A steady 50 ms is better than a connection that varies between 20 and 80 ms all the time. As for bandwidth, what I've found most useful is a measure of "hours under 20 Mbps download" (1 or 2 a day on my Starlink).
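Both numbers are cheap to compute yourself if you log periodic samples. A minimal sketch, assuming you've collected evenly spaced latency (ms) and download (Mbps) samples into plain Python lists; the 20 Mbps threshold and one-minute sample interval are just illustrative choices:

    import statistics

    def summarize(latency_ms, download_mbps, threshold_mbps=20, sample_minutes=1):
        """Jitter as the standard deviation of latency, plus hours spent
        under a download threshold, from evenly spaced samples."""
        mean_latency = statistics.mean(latency_ms)
        jitter = statistics.stdev(latency_ms)  # spread of latency, i.e. jitter
        slow_samples = sum(1 for mbps in download_mbps if mbps < threshold_mbps)
        hours_under = slow_samples * sample_minutes / 60
        return mean_latency, jitter, hours_under

    # e.g. with per-minute samples over a day:
    # mean_ms, jitter_ms, slow_hours = summarize(latency_samples, download_samples)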
IRTT is a fantastic tool for measuring latency and packet loss. Way better than simple pings. The hassle is you have to run a server too; I have one in the same datacenter as the Starlink terrestrial POP.
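The underlying idea is simple enough to sketch if you're curious why a cooperating server is part of the deal. This toy Python version is nothing like irtt's actual protocol (irtt measures one-way delay, authenticates packets, and sends on a fixed schedule); it just shows the client timestamping numbered UDP probes, the server echoing them, and the client deriving RTT and loss:

    import socket, struct, time

    PORT = 2112  # arbitrary port for this sketch

    def server():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", PORT))
        while True:
            data, addr = s.recvfrom(64)
            s.sendto(data, addr)  # echo the probe straight back

    def client(host, count=100, interval=0.1):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1.0)
        rtts = []
        for seq in range(count):
            s.sendto(struct.pack("!Id", seq, time.monotonic()), (host, PORT))
            try:
                data, _ = s.recvfrom(64)
                _, sent = struct.unpack("!Id", data)
                rtts.append((time.monotonic() - sent) * 1000)
            except socket.timeout:
                pass  # lost probe
            time.sleep(interval)
        print(f"loss {100 * (count - len(rtts)) / count:.1f}%")
        if rtts:
            print(f"rtt min/avg/max {min(rtts):.1f}/"
                  f"{sum(rtts)/len(rtts):.1f}/{max(rtts):.1f} ms")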
Starlink seems somewhat limited by using a ~400 km^2 cell of shared bandwidth instead of the ~4 km^2 cell typical of older mobile networks, the ~0.04 km^2 cell more appropriate for newer 5G service, or the ~0.0004 km^2 Wi-Fi cell that covers most of a house.
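Back of the envelope, cell size dominates per-household capacity. A sketch with made-up but plausible numbers (the per-cell capacity and household density below are assumptions, not Starlink figures):

    def per_household_mbps(cell_km2, households_per_km2, cell_capacity_mbps):
        """Shared cell capacity divided across every household in the cell."""
        return cell_capacity_mbps / (cell_km2 * households_per_km2)

    # Illustrative only: ~20 Gbps usable per cell, 10 households/km^2 (rural)
    print(per_household_mbps(400, 10, 20_000))  # ~5 Mbps each if everyone is active
    print(per_household_mbps(4, 10, 20_000))    # ~500 Mbps each in a 4 km^2 cell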
It's perfect for a sailboat or a wartime forward operating base, but not so much for a city.
That may not be very relevant now, since relatively few people use Starlink and their bandwidth needs are 2024 needs rather than 2034 or 2044 needs. But eventually, starting with the densest cells, congestion will occur. If your city is already tied to an electrical grid, it's not that much harder to run gobs of fiber alongside it.
Starlink could shrink cell size with larger antennas (see Starlink 2.0), but I suspect that expecting a many-phased-array-to-many-phased-array network to scale feasibly at high SNR is abusing the deep magic of phased-array antenna gain. It's taking a clever mathematical innovation and asking it to work in too many dimensions across too many orders of magnitude of improvement.
The problems with physical/geometrical high-gain antennas are not so arcane; I wouldn't be surprised if we go back to dishes, or equivalently to free-space optical networking, for fixed antennas. You could run an almost unlimited number of satellite-client connections simultaneously with something as high-gain as a laser/LED launch telescope.
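The gap in achievable beamwidth is easy to put rough numbers on. A diffraction-limit sketch, where the apertures, frequencies, and the 550 km slant range are assumed round numbers and real phased arrays don't reach the ideal 1.22 λ/D:

    import math

    C = 3e8  # speed of light, m/s

    def spot_diameter_m(freq_hz, aperture_m, range_m):
        """Diffraction-limited beam: theta ~ 1.22 * lambda / D, spot ~ theta * range."""
        wavelength = C / freq_hz
        theta = 1.22 * wavelength / aperture_m
        return theta * range_m

    RANGE = 550e3  # rough LEO slant range, m
    # ~0.5 m Ku-band phased array vs a 10 cm optical telescope at 1550 nm
    print(spot_diameter_m(12e9, 0.5, RANGE))         # ~34,000 m: tens of km on the ground
    print(spot_diameter_m(C / 1550e-9, 0.1, RANGE))  # ~10 m: a house-sized spot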
I mean, Grass Valley is rural, but not "middle of nowhere" rural (source: fellow Gold Countryite). If broadband companies actually don't serve that town well, then they really have no good excuse. I also thought the target market for Starlink was boats and very remote outposts, not "minor towns that are just within civilization." It's shameful if ISPs can't be arsed to serve somewhere like that.
It's worse than that; I'm half a mile from a major Obama-era fiber loop with open-access rules letting any ISP buy service on it, and there's no significant ISP selling connections to it. (There are a couple of tiny neighborhood co-ops like the Beckville Network.) Classic last-mile cost problems combined with terrible regulation of the monopoly providers like Comcast and AT&T.
We have a pretty robust local WISP, but it's expensive and the performance isn't great. Starlink is really my best option.
Point-to-point wireless is an underappreciated tool. Building a tower half a mile away, with a hundred small dishes on it each aimed at a customer whose own dish points back at the tower, might be cheaper than running fiber to a hundred spread-out houses, assuming no coordination problems.
You just need a lot of extra signal slack (fade margin) built into the system if you want it resilient to weather.
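That slack is just the leftover margin in the link budget. A quick sketch with illustrative numbers only (the transmit power, dish gains, and receiver sensitivity below are assumptions, not any particular radio):

    import math

    def fspl_db(distance_km, freq_ghz):
        """Free-space path loss in dB."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    def fade_margin_db(tx_dbm, tx_gain_dbi, rx_gain_dbi,
                       distance_km, freq_ghz, rx_sensitivity_dbm):
        """Received power minus what the radio needs: the weather budget."""
        rx_power = tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_ghz)
        return rx_power - rx_sensitivity_dbm

    # Illustrative: 5.8 GHz link over 0.8 km, 23 dBm radios, 23 dBi dishes
    # on each end, -65 dBm needed for the top modulation rate.
    print(fade_margin_db(23, 23, 23, 0.8, 5.8, -65))  # ~28 dB of slack before rain/trees eat it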
Packet loss is a real performance killer. Even a small percentage triggers enough TCP retransmissions and congestion-window backoff to be noticeable. I had a cable DOCSIS connection for a while that averaged 0.9% packet loss for long periods, with jumps into the 2-3% range. There were days when I'd only get a couple of megabits due to extreme loss.
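There's a classic rule of thumb for this, the Mathis et al. formula: steady-state TCP throughput is capped at roughly (MSS/RTT) * 1.22 / sqrt(p). A sketch with numbers in that ballpark (the 30 ms RTT and 1460-byte MSS are assumptions):

    import math

    def mathis_mbps(mss_bytes, rtt_s, loss_rate):
        """Rough TCP throughput ceiling: (MSS/RTT) * 1.22 / sqrt(p), in Mbps."""
        return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate) / 1e6

    print(mathis_mbps(1460, 0.030, 0.009))  # ~5 Mbps ceiling at 0.9% loss
    print(mathis_mbps(1460, 0.030, 0.03))   # ~2.7 Mbps ceiling at 3% loss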
It took a year to get it resolved, and even then I'm not sure whether it was fixed deliberately or by accident. You wind up with all sorts of excuses: "It must be your splitter." "We should recap the end of the cable." "It must be your router." They refuse to do basic diagnostics, like checking your neighbors' connections for packet loss (which can be done remotely).
The data would be far more interesting if you showed its distribution, perhaps as ranges of one standard deviation. That would express your point succinctly and also make it easy to spot outliers quickly.
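Something like this would do it, assuming the daily averages are just a list of numbers (the one-sigma band and the two-sigma outlier cut-off are the usual arbitrary choices):

    import statistics

    def band_and_outliers(values, sigmas=2):
        """Mean +/- one standard deviation, plus points more than `sigmas` out."""
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        outliers = [v for v in values if abs(v - mean) > sigmas * sd]
        return (mean - sd, mean + sd), outliers

    # e.g. daily average latency in ms:
    # band, weird_days = band_and_outliers(daily_latency_ms)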