It's mostly RAM allocated per client. E.g. Postgres is very much limited by this in supporting massive numbers of clients. Hence pgbouncer and other kinds of connection pooling, which let a Postgres server serve many more clients than its RAM would allow to stay directly connected.
If your Node app spends very little RAM per client, it can indeed service a great many of them.
A PHP script that does little more than checking credentials and invoking sendfile() could be adequate for the case of serving small files described in the article.
Libuv now supports io_uring but I’m fuzzy on how broadly nodejs is applying that fact. It seems to be a function by function migration with lots of rollbacks.
Except that it wastes 2 or 3 orders of magnitude in performance and polls all the connections from a single OS thread, locking everything if it has to do extra work on any of them.
Picking the correct theoretical architecture can't save you if you bog down on every practical decision.
I'm sure there is plenty of data/benchmarks out there and I'll let that speak for itself, but I'll just point out that there are two built-in core modules in Node.js, worker_threads (threads) and cluster (processes), which are very easy to bolt on to an existing plain http app.
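For example, a minimal sketch of the cluster route, assuming a trivial http handler (the modules are the real built-ins; the port and response body are arbitrary):

    const cluster = require('node:cluster');
    const http = require('node:http');
    const os = require('node:os');

    if (cluster.isPrimary) {
      // Fork one worker per CPU; the primary distributes incoming connections.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      // Each worker runs the same plain http app; they all share port 8000.
      http.createServer((req, res) => {
        res.end(`hello from worker ${process.pid}\n`);
      }).listen(8000);
    }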
So think of it this way: you want to avoid calling malloc() to increase performance. JavaScript does not have the semantics to avoid this. You also want to avoid looping. JavaScript does not have the semantics to avoid it.
If you haven't had experience with actual performant code, JS can seem fast. But it's a Huffy bike compared to a Kawasaki H2. Sure, it is better than a kid's trike, but it is not a performance system by any stretch of the imagination. You use JS for convenience, not performance.
IIRC V8 actually does some tricks under the hood to avoid mallocs, which is why Node.js can be unexpectedly fast (I saw some benchmarks where it was only 4x slower than equivalent C code) - for example it recycles objects of the same shape (which is why it is beneficial not to modify object structure in hot code paths).
Hidden classes is a whoooole thing. I’ve switched several projects to Maps for lookup tables so as not to poke that bear. Storage is the unhappy path for this.
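A tiny sketch of what that looks like in practice (the property and key names here are invented):

    // Objects created with the same shape can share one hidden class.
    function makePoint(x, y) {
      return { x, y };              // every point keeps the same two properties
    }
    const p = makePoint(1, 2);
    // p.color = 'red';             // adding a property later forces a hidden-class transition

    // For tables keyed by arbitrary strings, a Map sidesteps shape churn entirely.
    const lookup = new Map();
    lookup.set('user:42', makePoint(1, 2));
    lookup.get('user:42');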
(to be fair the memory manager reuses memory, so it's not calling out to malloc all the time, but yes a manually-managed impl. will be much more efficient)
Whichever way you manage memory, it is overhead. But the main problem is the language does not have zero copy semantics so lots of things trigger a memcpy(). But if you also need to call malloc() or even worse if you have to make syscalls you are hosed. Syscalls aren’t just functions, they require a whole lot of orchestration to make happen.
JavaScript engines are also JITted, which is better than a straight interpreter but, outside of microbenchmarks, worse than ahead-of-time compiled code.
I use it for nearly all my projects. It is fine for most UI stuff and is OK for some server stuff (though Python is superior in every way). But I would never want to replace something like nginx with a JavaScript-based web server.
V8 does a lot of things to prevent copies. If you create two strings, concat them, and assign the result to a third var, no copy happens (c = a + b, c is a rope). Objects are by reference... strings are interned... The main gotcha with copies is when you need to convert from the internal representation (UTF-16) to UTF-8 for output; it will copy then.
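Roughly, in Node terms (whether a given concat stays a rope is an engine detail, not something the language guarantees):

    const a = 'hello, ';
    const b = 'world';
    const c = a + b;                       // V8 may keep c as a rope; no copy yet

    // Producing output forces an encode from the internal UTF-16 form to UTF-8 bytes,
    // and that is where the allocation and copy happen.
    const bytes = Buffer.from(c, 'utf8');
    process.stdout.write(bytes);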
What a sad miserable cult it is, to be obsessed with performance, and to so desperately need to beat up on a language. I don't know if people spewing this shit are aware of how dull their radical extremism is?
You really don't have to go far down the techempower benchmarks to get to JS. Losing let's just say 33% performance versus an extremely tuned much more minimalist framework on what is practically a micro benchmark is far from the savage fall from civilization & decadence of man, deserving far less scorn and shame than what the cult of js hate fuels their fires on.
I could go on about how JS has benefitted from having incredible work poured into it, because it is the most popular runtime on the planet, because it's everywhere, because there was a hot war for companies to try to beat each other on making their js runtime good (one of the only langs with many runtimes, which is interesting). It's a bit excessive & maybe it should have been a more deserving language (we can debate endlessly), but man... It just doesn't matter. Stressing out, being so mean and nasty (Huffy vs Kawasaki, @hoppp's even worse top-voted half-sentence snark takedown), trying to impress upon people this image that it's all just so bad: I think there's such a ridiculous off-kilter tilting way too hard, and far less of it is about good reasons and valid concerns; so so so much of it is this bandwagon of negative energy, is radical overconcern.
Like most tools & languages, it's what you do with it. With JS, we have a problem that (mainstream) software hadn't faced before, which is client-server architecture where the client might be a cruddy cheap phone somewhere and/or on a dodgy link. We are trying to build experiences that work across systems, with significant user-perceived latency sometimes. And so data & systems architecture matters a lot. Trying to keep the client primed with the up-to-date data it needs to render, and doing client work without blocking / while maintaining user responsiveness, are hard, fun, multi-threaded (webworker) challenges, for those folks that care.
And those challenges aren't unique to js. Other languages have similar challenges. Trying to multithread a win32 UI to avoid blocking was also a bit of a nightmare, working off the main thread. Doing data sync is a nightmare. There are so many ways to get stuff wrong, and I think a lot of the js code out there does get it wrong. But we experience hundreds or thousands of websites a week, and crucial tools we use that are js client-server are badly architected. I sympathize with why js has such a bad rap. To me it usually comes down to architectural app-design issues: companies are too busy building features to really consider the core, to establish data architectures that don't block or lag. And that's not a specific js problem.
There are faster systems, yes, but man, the energy being poured into blaming the worst of the world on JS seems ridiculous to me, like such a sad drag, one that avoids any interest or fascination in what is so so interesting & so worthy. The language is the least remarkable part of the equation, practically doesn't matter, and the marginal performance levels are (with some exception for very specific cases) almost never a remotely critical factor. Just so so so tired, knowing such enormous and pointless frivolous scorn and disdain is going to overwhelm all conversations, going to take over every thread forever, when it matters so so little, is so very rarely a major determinant.
JS does not have to be the thought terminating cliche to every thread ever (and humbly, I'd assert it doesn't deserve that false conviction either. But either way!).
I was a Java programmer out of school. My first specialization became performance. It’s the same problem with JavaScript. There’s at least an order of magnitude speed difference between using your head and just vomiting code, and probably more like a factor of forty. Then use some of the time you saved to do deep work to hit an even two orders of magnitude and it can be fast. If you’re not an idiot that is.
bcantrill has an anecdote about porting some code to Rust and it being way faster than he expected. After digging he found it was using B-trees under the hood, which was a big part of the boost. B-trees are a giant pain in the ass to write in the donor language but a solved problem in Rust. It's a bit like that. You move the effort around and it evens out.
> With JS, we have a problem that (mainstream) software hadn't faced before, which is client server architecture, that the client might be a cruddy cheap phone somewhere and/or on a dodgy link.
What software that a good percentage of computer users were using in the year 2000 was client-server? What percentage of developers were doing client-server?
What percentage were doing client-server a decade later?
To my view, it had largely been the age of Personal Computing before the web happened. There was enterprise software, some of it with client-server architectures doing important things, sure, but very very very few people saw that in action or experienced the constraints of what client-server meant. The web was the new thing changing the face of software for the world, and it brought client-server to the mainstream (consumers and devs).
I can't understand (and you have contributed nothing to clarify) your scathing, shitty, low-grade, unqualified hate. Are you at all serious? Why such an un-hackerly comment that provokes no thought and raises no interesting ideas?
2000 was the peak of the dot-com boom. In the US, half of Silicon Valley was developing for client-server and nearly every person with a computer was using client-server applications.
To give a clearer example, Napster, where I worked, had 60 million users, mostly in the US and we were far from the largest.
The cruddy cheap phones one might complain about today are several times more powerful than the servers we used then, and the dodgy connections of today are downright sublime compared to dial-up modems over noisy phone lines.
Architecture is far more important than runtime speed. (People are so easily swayed by "JS SUCKS LOL" because of experiences with terrible & careless client-server architectures, far more than js itself being "slow".)
The people ripping into js suck up the interesting energies, and bring nothing of value.
If we are discussing C10K we are by definition discussing performance. JavaScript does not enter this conversation any more than BASIC. Yes of course architecture matters. Nobody has been arguing otherwise. But the point is that if you take the best architecture and implement it in the best available JS environment you are still nowhere close to the same architecture implemented in a systems language in terms of performance. You are welcome to your wishful thinking that you do not need to learn anything besides JavaScript when it comes to this conversation. But no matter how hard you argue it will continue being wishful thinking.
We are discussing tech where having a custom userland TCP stack is not just a viable option but nearly a requirement and you are talking about using a lighter JS framework. We are not having the same conversation. I highly recommend you get off Dunning-Kruger mountain by writing even a basic network server using a systems language to learn something new and interesting. It is more effort than flaming on a forum but much more instructive.
Techempower isn't perfect, but a 33% loss of speed is really not bad versus best of the best web stacks. Your attempt to belittle is the same tired heaping scorn, when the data shows otherwise. But it's so cool to hate!
Libuv isn't the most perfect networking stack on the planet, no. It could be improved. We could figure out new ways to get io_uring, or some eBPF-based approach, or even a dpdk bypass userland stack running. Being JS does not limit what you do for networking, in any way. At all. It adds some overhead to the techniques you choose, requires some glue layer. So yes, some cpu and memory efficiency losses. And we can debate whether that's 20% or 50% or more. Techempower shows it's at most half an order of magnitude worse (10^0.5, roughly 3x). Techempower is a decent proxy for what is available, what you get, without being extremely finicky.
Maybe that is the goal; maybe this really is a shoot-for-the-moon effort where you abandon all existing work to get extreme performance, as c10k was, but that makes the entire lesson not actually useful or practical for almost everyone. And if you want to argue against techempower being a proxy, in favor of doing extreme work & what is possible at the limit, then you have to consider more extreme js networking than what comes out of the box in node.js or Deno too - a custom engine on that side as well.
It's serious sad cope to pretend like it's totally different domains, that js is inherently never going to have good networking, that it can't leverage the same networking techniques you are trying to vaunt over js. The language just isn't that important, the orders of magnitude (<<1 according to the only data/evidence in this thread) are not actually super significant.
Look you seem to have made up your mind and are unwilling to listen to knowledge or experience. The tech industry does not do well with willful ignorance so I wish you luck with all that.
I'm more than happy to listen! But there's not really content or lessons happening. It's just an endless stream of useless garbage no effort dunking and "trust me bro."
I would be far less offended by the "js sucks lol" thought terminating cliche & dunking if they came at all correct. But I'm the only one citing any evidence here and your assertions that better networking is impossible in js, js could never userland bypass, just don't wash or make sense.
I don't think js is ideal to spend all your time optimizing for. It's a bit ridiculous to imagine a userland kernel bypass like dpdk that's married to v8. But it doesn't seem inconceivable at all.
Assume both of us are being willful to our own ends. I'd way rather be willful about possibility than so ironclad, locked-down, assured-beyond-belief willful about impossibility. I think that's an insult to the hacker spirit. And it's a sad fate that every conversation has the same useless loud bellowing that JS is impossibly bad; this thought-terminating cliche is a wicked evil to inflict on the world, adding chiefly to the din. Especially when it never brings any evidence, when it's so self-assured.
No, I'm pointing out that you have no evidence at all & aren't trying to speak reasonably or informatively, that you are bullying a belief you hold without evidence.
I'm presenting evidence. I'm clarifying as I go. I'm adding points of discussion. I'm building a case. I just refuse to be met and torn down by a bunch of useless hot air saying nothing.
Nothing about the current js ecosystem screams good architecture; it's hacks on hacks and a culture of totally ignoring everything outside of your own little bubble. Reminds me of the early 2000s javabeans scene.
Reminds me of early 2000s JS scene, in fact. Anything described as "DHTML" was a horrid hack. Glad to see Lemmings still works, though, at least in Firefox dev edition. https://www.elizium.nu/scripts/lemmings/
Unqualified broadscale hate making no assertions anyone can look at and check? This post is again such wicked nasty Dark Side energy, fueled by such empty vacuous nothing.
It was an incredible era that brought amazing aliveness to the internet. Most of it was pretty simple & direct, and would be more intelligible to folks today than today would be intelligible & legible to folks from then.
This behavior is bad & deserves censure. The hacker spirit is free to be critical, but in ways that expand comprehension & understanding. To classify a whole era as a "horrid hack" is injurious to the hacker spirit.
Worker threads can't handle the server's connection I/O, so a single-process Node.js app will still have a much lower connection limit than languages where you can handle I/O on multiple threads. Obviously, the second thing you mention, i.e. multiple processes, "solves" this problem, but at the cost of running more than one process. In the case of web apps it probably doesn't matter too much (although it can hurt performance, especially if you cache stuff in memory), but there are things where it just isn't a good trade-off.
And I have confirmed to my own satisfaction that both PM2 and worker threads have their own libuv threads. Yes, it's very common in node to run around one instance per core, give or take.
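A minimal sketch of that split, with a made-up worker file (hash-worker.js) doing deliberately CPU-bound work while the main thread stays free for I/O:

    // main.js
    const { Worker } = require('node:worker_threads');

    const worker = new Worker('./hash-worker.js');   // hash-worker.js is hypothetical
    worker.on('message', (digest) => console.log('worker finished:', digest));
    worker.postMessage({ rounds: 1e6 });             // the event loop keeps serving while this runs

    // hash-worker.js
    const { parentPort } = require('node:worker_threads');
    const crypto = require('node:crypto');

    parentPort.on('message', ({ rounds }) => {
      let digest = 'seed';
      for (let i = 0; i < rounds; i++) {
        digest = crypto.createHash('sha256').update(digest).digest('hex');
      }
      parentPort.postMessage(digest);
    });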
I'm surprised there's not a lot more work on "backend free" systems.
A lot of apps seem like they could literally all use the same exact backend, if there was a service that just gave you the 20 or so features these kinds of things need.
Pocketbase is pretty close, but not all the way there yet: you still have to handle billing and security yourself; you're not just creating a client that has access to whatever resources the end user paid for and assigned to the app via the host's UI.
This is how I feel about this industry's fetishization of "scalability".
A lot of software time is spent making something scalable when in 2025 I could probably run any site in the bottom 99% of the most-visited sites on the internet on a couple of machines and < 40k in capital.
The sites that think they need huge numbers of small network interactions are probably collecting too much detailed data about user interaction. Like capturing cursor movement. That might be worth doing for 1% of users to find hot spots, but capturing it for all of them is wasteful.
A lot of analytic data is like that. If you captured it for 1% of users you'd find out what you needed to know at 1% of the cost.
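In code that's a one-time coin flip per session (the endpoint and payload here are made up):

    const SAMPLE_RATE = 0.01;                      // capture detail for ~1% of sessions
    const sampled = Math.random() < SAMPLE_RATE;
    const buffer = [];

    document.addEventListener('mousemove', (e) => {
      if (!sampled) return;                        // the other 99% generate zero traffic
      buffer.push({ x: e.clientX, y: e.clientY, t: Date.now() });
    });

    setInterval(() => {
      if (!sampled || buffer.length === 0) return;
      navigator.sendBeacon('/analytics/cursor', JSON.stringify(buffer));   // made-up endpoint
      buffer.length = 0;
    }, 10000);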
Prior to the recent RAM insanity (a big caveat, I know), a 1U Supermicro machine with 768GB of RAM, some NVMe storage, and twin 32-core Epyc 9004s was ~12K USD. You can get 3 of those and some redundant 10G network infra (people are literally throwing this out) for < 40k. Then you just have to find a rack/internet connection to put them in, which would be a few hundred a month.
The reality is most sites don't need multi-region setups; they have very predictable load, and 3 of those machines would be massive overkill for many. A lot of people like to think they will lose millions per second of downtime, and some sites certainly do, but most won't.
All of this of course would be using new stuff. If you wanted to use used gear, the most cost-effective is the 5-year-old second-gen Xeon Scalables being dumped by cloud providers. Those are more than enough compute for most; they are just really thirsty, so you will pay with the power bill.
This of course is predicated on the assumption that you have the skill set to support these machines, and that is becoming less common, though as successful companies that started in the last 10 years do more "hybrid cloud" it is starting to come back around.
They just happen to have an online config tool that is somewhat close to what you would pay if you didn't engage with sales, which is useful for a hackernews comment.
When people talk about a single server they are pretty much talking about either a single physical box with a CPU inside or a VPS using a few processor threads.
When they say "most companies can run in a single server, but do backups" they usually mean the physical kind.
The term is absolutely ambiguous and I know I've run into confusion in my own work due to the ambiguity. For the purpose of the C10K, server is intended to mean server process rather than hardware.
> You can buy a 1000MHz machine with 2 gigabytes of RAM and an 1000Mbit/sec Ethernet card for $1200 or so. Let's see - at 20000 clients, that's 50KHz, 100Kbytes, and 50Kbits/sec per client. It shouldn't take any more horsepower than that to take four kilobytes from the disk and send them to the network once a second for each of twenty thousand clients.
This is definitely talking about scaling past 10K open connections on a single server daemon (hence the reference to a web server and an ftp server).
However, most people used dedicated machines when this was written, so scaling 10K open connections on a daemon was essentially the same thing as 10K open connections on a single machine.
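For what it's worth, the per-client figures in that quote are just the machine's specs divided by the client count:

    const clients = 20000;
    console.log(1000e6 / clients);           // 50,000 Hz     -> 50 KHz of CPU per client
    console.log((2 * 1024 ** 3) / clients);  // ~107,374 B    -> roughly 100 KB of RAM per client
    console.log(1000e6 / clients);           // 50,000 bit/s  -> 50 Kbit/s of bandwidth per client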
> You can buy a 1000MHz machine with 2 gigabytes of RAM and an 1000Mbit/sec Ethernet card for $1200 or so
Those are not "by process" capabilities and daemons were never restricted to a single process.
The article focuses on threads because processes had more kernel level problems than threads. But it was never about processes limitations.
And by the way, process capabilities on Linux are exactly the same as machine capabilities. There's no limitation. You are insisting everybody uses a category that doesn't even exist.
At the time this was written a powerful backend server only had like 4 cores. Linux only started adopting SMP like that same year. Also, CPU caches were tiny.
Serving less than 1k qps per core is pretty underwhelming today; at such a high core count you'd likely hit OS limits well before you're bound by the hardware.
Linux had been doing SMP for about 5 years by that point.
But you're right, OS resource limits (file handles, PIDs, etc.) would be the real pain for you. One problem after another.
Now, the real question is do you want to spend your engineering time on that? A small cluster running erlang is probably better than a tiny number of finely tuned race-car boxen.
My recollection is fuzzy but I remember having to recompile 2.4-ish kernels to enable SMP back in the day, which took hours... And I think it was buggy too.
Totally agree on many smaller boxes vs a bigger box, especially for the proxying use case.
I’m shocked that a 256 core Epyc can’t do millions of requests per second at a minimum. Is it limited by the net connection or is there still this much inefficiency?
It almost certainly can; even old dual-CPU, 16-core Intel systems could do four and a half million a second [1]. At a certain point network/kernel bottlenecks become apparent, though, rather than being compute-limited.
Like anything it really depends on what they are doing. If you wanted to just open and close a connection you might run into bottlenecks in other parts of the stack before the CPU tops out, but the real point is that yeah, a single machine is going to be enough.
I think the most likely bottleneck is gonna be your NIC hating getting a ton of packets. Line rate with huge frames is quite different than line rate with just ICMP packets, for instance (see CME binary glink market data for a similarly stressful experience to the ICMP).
I don't see how you could have read the article and come to this conclusion. The first few sentences of the article even go into detail about how a cheap $1200 consumer grade computer should be able to handle 10,000 concurrent connections with ease. It's literally the entire focus of the second paragraph.
2003 might seem like ancient history, but computers back then absolutely could handle 10,000 concurrent connections.
In spring 2005 Azul introduced a 24 core machine tuned for Java. A couple years later they were at 48 and then jumped to an obscene 768 cores which seemed like such an imaginary number at the time that small companies didn’t really poke them to see what the prices were like. Like it was a typo.
We're slowly getting back to similarly-sized systems. IBM now has POWER systems with more than 1,500 threads (although I assume those are SMT8 configurations). This is a bit annoying because too many programs assume that the CPU mask fits into 128 bytes (1,024 bits, one bit per hardware thread), which limits the CPU count to 1,024. We fixed a few of these bugs twenty years ago, but as these systems fell out of use, similar problems are back.
> Driven by 1,024 Dual-Core Intel Itanium 2 processors, the new system will generate 13.1 TFLOPs (Teraflops, or trillions of calculations per second) of compute power.
This is equal to the combined single precision GPU and CPU horsepower of a modern MacBook [1]. Really makes you think about how resource-intensive even the simplest of modern software is...
Note that those 13.1 TFLOPs are FP64, which isn't supported natively on the MacBook GPU. On the other hand, local/per-node memory bandwidth is significantly higher on the MacBook. (Apparently, SGI Altix only had 8.5 to 12.8 GB/s.) Total memory bandwidth on larger Altix systems was of course much higher due to the ridiculous node count. Access to remote memory on other nodes could be quite slow because it had to go through multiple router hops.
Half serious. I guess what I was saying is that this is the kind of science which is still very useful, but more to the nginx developers themselves. Most users now don't have to worry about this anymore.
> And computers are big, too. You can buy a 1000MHz machine with 2 gigabytes of RAM and an 1000Mbit/sec Ethernet card for $1200 or so. Let's see - at 20000 clients, that's 50KHz, 100Kbytes, and 50Kbits/sec per client. It shouldn't take any more horsepower than that to take four kilobytes from the disk and send them to the network once a second for each of twenty thousand clients. (That works out to $0.08 per client, by the way. Those $100/client licensing fees some operating systems charge are starting to look a little heavy!) So hardware is no longer the bottleneck.
It seems to me that there are far fewer problems nowadays with trying to figure out how to serve a tiny bit of data to many people with those kinds of resources, and more problems with understanding how to make a tiny bit of data relevant.
It still absolutely can be. We've just lost touch.
This particular case, with the numbers given, would work as a server for profile pictures, for instance. Or for package signatures. Or for status pages of a bunch of services (generated statically, since the status rarely changes).
Yes, an RPi4 might be adequate to serve 20k client requests in parallel, without crashing or breaking too much of a sweat. You usually want to plan for 5%-10% of this load as the norm if you care about tail latency, but a 20K spike should not kill it.
It has been long enough that C10K is not in common software engineer vernacular anymore. There was a time when people did not trust async anything. This was also a time when PHP was much more dominant on the web, async database drivers were rare and unreliable, and you had to roll your own thread pools.
The date (2003) is incorrect. The article itself refers to events from 2009, is listed at the bottom of the page as having been last updated in 2014, with a copyright notice spanning 2018, and a minor correction in 2019.