Can someone explain how Starlink goes about using the electromagnetic spectrum efficiently? 4G (and 5G even more so) relies on small cells to spatially partition the spectrum, therefore increasing the maximum bandwidth available in the system. Need to serve more users without compromising per-user throughput? Split the cells.
Now, the cell of a satellite is enormous, and there's no way to reduce the tx power without getting out of range of the surface of the planet altogether. Not to mention that those satellites are pretty damn fast so they stay over any particular user for a very short amount of time.
What am I missing? Or is this supposed to be a low throughput system (when added up over all concurrent users)?
You’re not missing any spectral efficiency trick. Starlink maxes out at one customer per square mile or about one million total customers. It’s for areas beyond cellular coverage.
Cell networks could improve in-fill coverage consistency with professional installs of roof-mounted CPE. That would help people who have a weak signal inside their home but a good signal outside.
Starlink is effectively a point-to-point connection with each satellite. There is no technical reason they can’t have 1 billion customers with a large enough network and the right hardware.
The issue is economic. Target 1 customer per square mile and most of the orbit over land is useful, target 100,000 customers per square mile and most satellites are only beneficial over megacities like NYC.
Due to the short lifespan of individual satellites, they can adjust to market forces and new technology fairly rapidly. If, for example, China lets them in, the economics around density change significantly.
Directionality of antennas/electronically steered beams (off-axis gain is not zero on either the satellite or the terminal) and the relatively large bandwidth involved in these links impose a limit that you'll hit even with an unlimited number of satellites. Calculating that limit is tricky, but I don't think it's practical to cover urban areas for the reasons the parent brings up: cellular networks rely heavily on short effective range, the minimum achievable spot size for LEO satellites is still much larger than a typical urban LTE cell, and the terminal has the same problem of the spot size in orbit being quite large.
The spot size is inversely proportional to the phased array antenna size, right? Could SpaceX just keep scaling up their phased array antennas on the satellites as Starship reduces launch cost?
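Roughly, yes: a diffraction-limited beam has width ~λ/D, so the spot shrinks in proportion as the array grows. A quick sketch with assumed numbers (Ku-band downlink around 12 GHz, 550 km altitude, made-up aperture sizes, not SpaceX specs):

```python
# Back-of-the-envelope, assuming a diffraction-limited beam: beamwidth ~ lambda / D,
# so spot diameter ~ altitude * lambda / D, and doubling the array width halves the
# spot. Frequency, altitude, and apertures are illustrative guesses, not SpaceX specs.
C = 3.0e8                          # m/s, speed of light
wavelength = C / 12e9              # ~2.5 cm at an assumed Ku-band downlink frequency
altitude = 550e3                   # m
for aperture in (0.5, 1.0, 2.0):   # assumed array widths in metres
    spot_km = altitude * wavelength / aperture / 1e3
    print(f"{aperture:.1f} m array -> ~{spot_km:.0f} km spot diameter")
```

Even at 2 m that's still a spot kilometres across, nothing like an urban LTE cell.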
Antenna size runs into a different limit. They're orbiting low enough that atmospheric effects become a real issue. See the deployment in February[1], where they lost ~40 of the 49 satellites in a launch batch because a geomagnetic storm increased the atmospheric density at their initial parking orbit, overwhelming their control system's ability to maintain orbit.
So, yeah, they could then scale up the control system, too - but that makes them even heavier, increases fuel expenditure, reduces lifetime, etc.
The antenna array is parallel to the ground, so scaling it up has a minimal impact on drag.
It increases mass, sure, but an increased mass-to-drag-area ratio is usually considered a good thing (you'll need a bigger rocket to launch it, though).
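To put rough numbers on that: drag deceleration is a = ½·ρ·v²·Cd·A/m, so doubling mass at the same edge-on frontal area halves the decay rate. All values below are generic LEO ballparks, not Starlink data:

```python
# A rough sketch of why mass-to-drag-area matters. Density, Cd, area, and masses
# are illustrative LEO ballpark figures, not Starlink data.
rho = 1e-11                    # kg/m^3, thermosphere density near 400 km (storm-dependent)
v = 7_700.0                    # m/s, orbital speed
cd = 2.2                       # typical satellite drag coefficient
area = 1.0                     # m^2, edge-on frontal area (barely changed by a wider flat array)
for mass in (260.0, 520.0):    # doubling mass at the same frontal area
    decel = 0.5 * rho * v ** 2 * cd * area / mass
    print(f"m={mass:.0f} kg -> drag decel {decel:.1e} m/s^2")
```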
It's not always perfectly edge on, and it requires a control system to keep it oriented that way.
This is why, from the linked article:
> The satellites were then placed in a protective "safe mode" and commanded to fly edge-on "like a sheet of paper" to minimize drag effects as the company worked with the U.S. Space Force and the company LeoLabs to track them with ground-based radar, it added.
The v2 satellites they're planning to launch on Starship are in fact scaled up. The v1.5s they're launching now are larger than the originals, which is why they haven't launched 60 Starlink satellites on a single Falcon 9 in some time.
I thought they start at about 250 km after launch and then boost up to around 400 km. They don't have very powerful thrusters or gyros, so they couldn't climb faster than the increased drag was pulling them down, and they fell out of orbit.
It seems that they are far safer once they have boosted up into their operational orbit - that's why only the recently launched Starlinks fell back to Earth.
This is the standard way of showing the performance of antennas - it shows the varying amount of power that is sent out from the antenna in different directions. Note that (a) it doesn't roll off immediately outside the target zone (usually the 3dB threshold as shown here) and (b) there is measurable power in completely off-axis directions. In fact the tighter you beam steer, the more lobes you get in weird places.
Both of these mean that you can't just parcel up physical space into perfect beam spots and just keep adding satellites. There are all sorts of things you can do to help as you go up, but it gets harder and harder.
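To see how stubborn those lobes are, here's a minimal sketch of the array factor of a generic uniform linear array (not Starlink's actual, much larger planar array): even steered 30° off boresight, the strongest sidelobe of a uniformly weighted array sits only ~13 dB below the main lobe.

```python
import numpy as np

# Normalized array factor (dB) of an n-element uniform linear array, element
# spacing d in wavelengths, electronically steered to steer_deg. A generic
# textbook array, not Starlink's actual antenna.
def array_factor_db(theta_deg, n=64, d=0.5, steer_deg=0.0):
    psi = 2 * np.pi * d * (np.sin(np.radians(theta_deg)) - np.sin(np.radians(steer_deg)))
    af = np.abs(np.exp(1j * np.outer(psi, np.arange(n))).sum(axis=1)) / n
    return 20 * np.log10(np.maximum(af, 1e-9))

angles = np.linspace(-90.0, 90.0, 18001)
pattern = array_factor_db(angles, steer_deg=30.0)
off_axis = np.abs(angles - 30.0) > 4.0          # mask out the main lobe
print(f"strongest sidelobe: {pattern[off_axis].max():.1f} dB vs the main lobe's 0 dB")
```

That ~-13 dB figure is why you can't treat neighbouring beam spots as perfectly isolated.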
That's true, but assuming maximum possible density, both the sender and receiver should be using directional antennas. On top of this, the closest satellite gets the strongest signal simply due to distance. That distance effect becomes less meaningful as density increases but stays relevant for practical networks.
Both ends of the Starlink link are already using directional antennas. The difference in power due to distance is already pretty irrelevant - the satellite is 400 km up and we're talking about offsets of 10 km on the ground. The 400 km part of that is far more dominant, making the power difference to the 'next satellite over' very small.
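You can sanity-check that: free-space path loss scales as 20·log10(distance), and the 400 km altitude and 10 km offset are just the numbers from above:

```python
import math

# Compare a terminal directly under a 400 km satellite with one offset 10 km sideways.
altitude_km, offset_km = 400.0, 10.0
slant_km = math.hypot(altitude_km, offset_km)        # ~400.125 km slant range
delta_db = 20 * math.log10(slant_km / altitude_km)
print(f"extra path loss from the 10 km offset: {delta_db:.4f} dB")   # ~0.003 dB
```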
> Both ends of the Starlink link are already using directional antennas

Sure, but it’s critical to understand the associated benefits when doing this kind of analysis.
If you’re talking an average 10 km separation on the ground, that would mean stupidly high constellation densities. At lower densities it’s more meaningful.
PS: I got to 1 billion customers assuming different hardware and more satellites, not just more satellites. The only way 100,000 customers per square mile “works” is a non-viable, lower-bandwidth-per-customer business model, but the goal was to illustrate a point. SpaceX is aiming for an economic sweet spot, not the outer edge of what’s technically possible.
I don't follow your economic logic. Are you saying 100,000 customers need 100,000 different satellites which is too many to be used efficiently on the rest of their orbits?
That was in reference to the capacity to handle X customers per square mile, not total. If you want to average 100,000 customers per square mile over a wide area, you need a fuckton of satellites over that area.
Basically a network that handles 1 customer per square mile might in theory have say 40 million customers worldwide eventually. A network capable of handling 10 people per square mile might cost 10x as much, but only increase that to say 120 million because some areas have less than 10 potential customers per square mile. Increasing that to say 100 or even 100,000 people per square mile would vastly increase the cost, but you hit heavy diminishing returns with a lower percentage of areas reaching those densities and many people with better internet options than starlink or simply can’t afford the service.
> You’re not missing any spectral efficiency trick. Starlink maxes out at one customer per square mile or about one million total customers. It’s for areas beyond cellular coverage.
You're entirely wrong here. Their customer density is way higher than 1 per square mile... I'm not even sure where you get these ideas.
There are a bunch of estimates out there, but the best guess is that Starlink can currently serve around 300 terminals per cell.
Each cell is about 146 square miles, which works out to around 2 terminals per square mile.
People may be clustered a bit, giving the illusion of starlink being able to support a higher density than this.
Starlink isn't as scalable as people want to imagine, due to the limited RF bandwidth SpaceX has as well as power requirements. This is part of why SpaceX made such a big deal over the 12 GHz band that Dish wants to use.
The next version of satellites may be able to support additional bands, which will help, but Starlink is still going to be fairly limited in terms of the number of people it can support.
There are two directions, satellite-to-ground and ground-to-satellite, and they operate slightly differently.
From the ground, you can think of an array of satellites like a monitor showing a document. As you read, every word is being displayed at the same time in the same color, but because the light is coming from different locations, you can focus on a little area and ignore the rest of the screen.
A traditional satellite dish uses a similar idea to look at a single satellite at a time and ignore the rest of the sky; this is why you need to aim them. What Starlink does differently is use a phased array to change focus without physically moving the dish, which is really, really fast and lets them track satellites in LEO.
Just like cellphones, it can hand off from satellite to satellite plenty fast to keep up with the speeds involved. As soon as you can handle one transition, just repeat the same thing millions of times; computers are great at repetitive tasks.
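For a feel of the handoff timescale, here's a back-of-the-envelope pass duration, assuming a 25° minimum elevation mask (my assumption; the real mask and scheduling differ) and simple circular-orbit mechanics:

```python
import math

# How long one LEO satellite stays usable for a fixed terminal, horizon to horizon.
RE, H, MU = 6_371.0, 550.0, 398_600.4418      # Earth radius km, altitude km, GM km^3/s^2
period = 2 * math.pi * math.sqrt((RE + H) ** 3 / MU)        # ~95 min orbit
ground_speed = 2 * math.pi * RE / period                    # ~7 km/s along the ground track
elev = math.radians(25.0)                                   # assumed elevation mask
central = math.acos(RE / (RE + H) * math.cos(elev)) - elev  # visibility half-angle, rad
pass_s = 2 * RE * central / ground_speed                    # best case: straight overhead
print(f"a satellite stays usable for at most ~{pass_s:.0f} s per pass")   # a few minutes
```

So a terminal has to re-point every few minutes at most, which is trivial for a phased array.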
As to throughput, it’s supposed to be fast enough to watch 4K video, which isn’t gigabit speeds but is vastly better than what millions of people are stuck with.
None of that makes any difference to the question. Phased array techniques save transmitter power more than bandwidth, because the beams just can't be very narrow with such a small array. It means one satellite gets your signal, not all in sight. Up top, one beam takes in many, many subscribers. Starlink will struggle to maintain more than a few tens of megabits.
Power ends up being bandwidth, but I don’t think you understand why.
Cellphone networks depend on proximity to a cell tower to allow frequencies to be reused across multiple cell towers. On top of this, they also use a wide range of techniques to share each individual tower’s available bandwidth with multiple cell phones. But the most basic trick is that signals get weaker with distance.
Starlink also uses distance, as a satellite can only see a small fraction of the total number of ground stations. However, a signal that’s weaker from distance is indistinguishable from one that’s weaker because a phased array antenna is pointing somewhere else, so they can in effect simulate more cells than they would get from proximity alone. On top of this, they can use all the existing cellphone tricks to maximise bandwidth.
PS: Yes, Starlink is limited to ~100 megabits per customer, but that’s vastly better than the 1.5 Mbps DSL many rural communities are stuck with.
> It means one satellite gets your signal, not all in sight.
So it would seem an easy way to scale out is by adding satellites. I don't think they're anywhere near capacity on that, yet.
Also, each current-gen satellite is 20 Gbps of bandwidth. v2 is claimed to be "almost an order of magnitude more capable than v1.0" in terms of communications bandwidth.
Oh, OK. But 22 km is still a lot more than a typical 4G/5G urban cell. It's basically the size of a city, whereas I can spot multiple base stations in my city if I go for a 5-10 minute walk.
Latency is a big issue for geostationary satellites - the ones you can point a fixed dish at, since they stay at one spot in the sky - which have to be very far from Earth to be geostationary: about 35,786 km above the surface, unlike Starlink satellites at just 550 km.
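Propagation delay alone explains most of the gap. A minimal sketch: the best-case round trip for a bent-pipe hop straight overhead (four one-way legs), ignoring slant angles, ground backhaul, and queuing:

```python
# Minimum RTT from the speed of light alone: request up+down, reply up+down.
C_KM_S = 299_792.458
for name, altitude_km in (("GEO", 35_786), ("Starlink LEO", 550)):
    rtt_ms = 4 * altitude_km / C_KM_S * 1_000
    print(f"{name}: >= {rtt_ms:.0f} ms RTT")   # GEO ~477 ms, LEO ~7 ms
```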
Starlink gets 40-150ms, more often erring on the faster side. If they can get their peak load under control, I don't think I'd ever pine for hardwired internet again.
In principle you've got spatial diversity at both ends - on the ground and in space. I don't mean full duplex - I mean that the same area on the ground can in principle be covered by multiple Starlink satellites simultaneously using the same frequencies, so long as the satellites are not close together. The receiving phased array can separate the multiple signals just as it could if you used a steerable parabolic dish, but in software. Of course there may not be enough satellites launched yet to take advantage of this, but eventually there should be.
They're on the same frequency band, and traditional satellite dishes can pick up some amount of off-axis interference. Assuming constant power per beam, adding more co-channel beams eventually overwhelms traditional receivers.
There's a ton of very clever stuff going on with modern RF modulations. For example, each client terminal will have a unique pseudo-randomization key, and transmissions are modulated by that key. That evenly distributes the transmission across the spectrum. To someone without the key, it appears to be random noise. Multiple transmissions can occur simultaneously on the same spectrum with different keys, without necessarily interfering with each other.
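For a feel of how that works, here's a toy direct-sequence spread-spectrum sketch: two users overlap in the same band with different codes, and each receiver despreads with its own code. It illustrates the general technique only; Starlink's actual modulation isn't public.

```python
import numpy as np

# Two users transmit simultaneously in the same band, each spread by its own
# pseudo-random +/-1 chip sequence. Correlating against one code recovers that
# user's bits; the other user's signal averages toward zero.
rng = np.random.default_rng(0)
CHIPS = 64                                      # spreading factor
code_a = rng.choice([-1, 1], size=CHIPS)
code_b = rng.choice([-1, 1], size=CHIPS)

bits_a = np.array([1, -1, 1, 1])                # user A's BPSK symbols
bits_b = np.array([-1, -1, 1, -1])              # user B's BPSK symbols

# Each bit is multiplied by its user's chip sequence; both signals add on the air.
tx = np.concatenate([b * code_a for b in bits_a]) + \
     np.concatenate([b * code_b for b in bits_b])

# Receiver A correlates each chip block against code A.
rx_a = [int(np.sign(tx[i * CHIPS:(i + 1) * CHIPS] @ code_a)) for i in range(len(bits_a))]
print(rx_a)                                     # recovers [1, -1, 1, 1]
```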
Also, that 22 km beam isn't going to be uniform, but rather higher gain at the center and below some power threshold at the periphery.
CDMA doesn't increase capacity because the N codes have to each use 1/Nth the rate. It fell out of favor around 15 years ago because it doesn't allow one transmitter to temporarily use the whole channel, which is advantageous for data traffic.
Spread spectrum methods are not magic, they do not manufacture new spectrum. As you get a larger number of simultaneous spread spectrum users the apparent "noise floor" (actually other spread spectrum transmissions) becomes higher and higher until the achievable data rate approaches zero. This isn't to say that the use of spread spectrum methods doesn't have benefits, but it doesn't change the fundamental concepts. Bluetooth, for example, uses FHSS but will still suffer from contention in very noisy environments.
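You can put numbers on that rising noise floor with Shannon capacity: K equal-power users, each treating the other K-1 as noise. The bandwidth and single-user SNR below are illustrative assumptions, not Starlink figures:

```python
import math

# Per-user achievable rate as more equal-power spread-spectrum users pile into
# the same channel; spreading spreads the interference, it doesn't remove it.
BW_HZ = 240e6              # assumed channel width
SNR = 100.0                # 20 dB when transmitting alone
for k in (1, 10, 100, 1000):
    sinr = SNR / (1 + (k - 1) * SNR)
    rate_mbps = BW_HZ * math.log2(1 + sinr) / 1e6
    print(f"{k:>4} users -> {rate_mbps:8.2f} Mbit/s each")
```

The per-user rate collapses roughly as 1/K once interference dominates, which is the "noise floor" effect in concrete terms.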
It could also be that there are a series of different communication technologies. By the time we move on to the 3rd, no one is listening for the first. For instance: AM radio, FM radio, TV, spread-spectrum analogue, digital, ... hypersuperfastwave ...
Since each tech spreads from the source in a thin sphere-shaped shell depending on when that technology was invented, you need a huge coincidence for someone else's shell to hit you WHEN you are listening for that type of communication.
Phased arrays and beam steering are pretty magical. Both the satellite and the ground station can aim at each other by using hundreds of antennas as a "lens", providing very efficient use of the spectrum. If I had to guess, I would assume that Starlink's beam size is about 10-50 km in radius.
There are limits to this magic. While sparse arrays of antenna can get you very good angular resolution when receiving signals, they're not so magical for transmission; as your array of transmitting antenna gets sparser, the power density of the beam also decreases proportionally. This is why aperture synthesis is great for radio astronomy, but next to worthless for far-field power transmission.
You can spread out a given collection of N antennas as wide as you like to get better angular resolution, but you lose signal strength at the target. Radio astronomers use big antennas and look for a long time. That doesn't work for communication.
You can add more antennas, but those cost money. So, you use as many as you can afford, and space them out according to what you need. A too-broad antenna array on a satellite would have one set of problems, one on the ground others.