
I like this idea, but I don't know if Hyper is the best package to go with. Hyper occupies part of the Rust ecosystem that I think suffers from package bloat, like much of NPM. For example, currently Hyper requires 52 packages:

autocfg, bitflags, bytes, cfg-if, fnv, fuchsia-zircon, fuchsia-zircon-sys, futures-channel, futures-core, futures-sink, futures-task, futures-util, h2, hashbrown, http, http-body, httparse, httpdate, indexmap, iovec, itoa, kernel32-sys, lazy_static, libc, log, memchr, mio, miow, net2, pin-project, pin-project-internal, pin-project-lite, pin-utils, proc-macro2, quote, redox_syscall, slab, socket2, syn, tokio, tokio-util, tower-service, tracing, tracing-core, try-lock, unicode-xid, want, winapi, winapi-build, winapi-i686-pc-windows-gnu, winapi-x86_64-pc-windows-gnu, ws2_32-sys



Part of this is just crates being broken up more in Rust. For example, the `http` crate only contains the shared type definitions (Request, Response, and friends) used across HTTP crates. They break down like so:

Platform integration: libc, winapi, winapi-build, winapi-i686-pc-windows-gnu, winapi-x86_64-pc-windows-gnu, ws2_32-sys, fuchsia-zircon, fuchsia-zircon-sys, kernel32-sys, redox_syscall

Primitive algorithms: itoa, memchr, unicode-xid

Proc macro / pinning utilities: proc-macro2, autocfg, cfg-if, lazy_static, quote, syn, pin-project, pin-project-internal, pin-project-lite, pin-utils

Data structures: bitflags, bytes, fnv, hashbrown, indexmap, slab

Core Rust async I/O crates: mio, miow, iovec, tokio, tokio-util, futures-channel, futures-core, futures-sink, futures-task, futures-util

Logging: log, tracing, tracing-core

The following are effectively sub-crates of the project: http, http-body, httparse, httpdate, tower-service, h2

Not sure what these are for: net2, socket2, try-lock, want


Honestly it still seems like a lot this way.

If those platform crates are just backends for libc (or similar to libc), why aren't they all folded into a single project? Having them as separate crates opens the gate for supply-chain attacks without really allowing greater control or expressiveness. You are always going to pull all of them in, you are unlikely to ever use them directly, and they have no more impact on compilation than features.

Having to use proc-macro2+quote+syn for macros feels wrong, considering it's a language feature.

Having to pull in 4 crates for pinning also seems wrong. This could easily be a single utility crate, and if you really need all this cruft to use Pin (a language feature) most of this should probably be in std.
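For reference, the kind of thing these crates buy you, as a sketch using pin-project-lite: safe "pin projection" from Pin<&mut Self> to the pinned fields, with no hand-written unsafe. (The wrapper future here is made up purely for illustration.)

    use pin_project_lite::pin_project;
    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    pin_project! {
        struct Counted<F> {
            #[pin]
            inner: F,   // structurally pinned field
            polls: u32, // ordinary field, not pinned
        }
    }

    impl<F: Future> Future for Counted<F> {
        type Output = F::Output;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
            let this = self.project(); // macro-generated safe projection
            *this.polls += 1;
            this.inner.poll(cx)
        }
    }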

The async I/O feels like definite bloat. Not only is that a lot of futures-* crates, but I know from first-hand experience that those tend to implement multiple versions of some primitives (like streams) that are incompatible.


> If those platform crates are just backends for libc (or similar to libc),

They're not; they're bindings to each specific platform's APIs. The libc crate is a package that targets libc on every platform.

> Having to use proc-macro2+quote+syn for macros feels wrong, considering it's a language feature.

The built-in proc macro support is pretty minimal so that procedural macros could ship as a feature at all; these crates fill in the rest. There's a tradeoff here; other designs could have been chosen, but weren't, for good reasons.
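To illustrate the division of labor (a hypothetical derive, not anything Hyper ships): the built-in proc_macro API only hands you raw token streams; syn parses them into an AST and quote turns generated code back into tokens.

    // In a crate with `proc-macro = true` in its Cargo.toml:
    use proc_macro::TokenStream;
    use quote::quote;
    use syn::{parse_macro_input, DeriveInput};

    #[proc_macro_derive(TypeName)]
    pub fn derive_type_name(input: TokenStream) -> TokenStream {
        // syn: parse the raw tokens into a structured AST
        let ast = parse_macro_input!(input as DeriveInput);
        let name = &ast.ident;
        // quote: quasi-quote generated Rust back into tokens
        quote! {
            impl #name {
                pub fn type_name() -> &'static str { stringify!(#name) }
            }
        }
        .into()
    }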

> Having to pull in 4 crates for pinning also seems wrong.

This is covered by https://news.ycombinator.com/item?id=24730280; that is, it's a transitive dependency issue.

> The async I/O feels like definite bloat.

Really depends; network calls are a textbook use case for async I/O. And there are so many futures crates specifically so that you can depend on only the bits you need.
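For example (a sketch of the intended usage pattern): a library that only needs the core traits can avoid the full facade entirely.

    # Cargo.toml of a hypothetical library crate
    [dependencies]
    futures-core = "0.3"   # just the Future/Stream traits, nearly no code
    # futures = "0.3"      # the full facade: combinators, macros, executors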


Why does hyper pull in hashbrown? Isn't it identical to std::collections::HashMap?


I imagine that it's there because the dependency predates hashbrown becoming the std implementation. hashbrown's README says

> Since Rust 1.36, this is now the HashMap implementation for the Rust standard library. However you may still want to use this crate instead since it works in environments without std, such as embedded systems and kernels.

I don't know if that use case is important to Hyper or not.


The dependency goes hyper -> h2 -> indexmap -> hashbrown, and indexmap is built on hashbrown::raw::RawTable.
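If anyone wants to trace chains like this themselves, cargo can invert the dependency tree; a sketch, assuming cargo 1.44+:

    # Show every path by which hashbrown enters the tree
    cargo tree -i hashbrown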


Looks like the indexmap -> hashbrown dependency was added recently: https://github.com/bluss/indexmap/pull/131


Thinking about crates this way is a revelation to me. Is there any tooling to make analyzing dependencies as you did easier?


That. And then we become responsible for yet more possible vulns in all those deps. Current number of cURL deps: 14

I wish it relied only on OpenSSL.


Not depending on OpenSSL is kind of the point of the original post, if I understand it correctly.

Also I find the dependency on OpenSSL one major pain in my Rust projects. When you want to build a statically linked binary you need to supply a statically built OpenSSL and if your distro doesn't come with one (like Ubuntu) you are on your own. Yes, there is a Docker container that comes with all the prerequisites but I think that's a bit heavy for my purposes.

I wish there was a single switch in Cargo.toml and every dependency would automagically use rustls.
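Today the closest thing is flipping features crate by crate; a sketch with reqwest (feature names vary per crate):

    [dependencies]
    reqwest = { version = "0.10", default-features = false, features = ["rustls-tls"] }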


Containers are becoming the way to build stuff, partly for that reason.

I think any dependency adds a level of burden, but some things are better delegated to a library. I think crypto is a good case. BTW, I think OpenSSL is not the only TLS lib for curl.


> Containers are becoming the way to build stuff, partly for that reason.

This sits wrong with me, but thinking about it: I rewrote the sentence about containers in my comment three times before posting it and it still doesn't sound compelling. Maybe you have a point here.


You don't have to use OpenSSL. For example, Windows already includes Schannel, so you can build cURL with Schannel and avoid OpenSSL:

https://daniel.haxx.se/blog/wp-content/uploads/2020/09/curl-...


So this also gets you different behaviour, which may or may not be what you want, depending.

Specifically if you use SChannel, you get the CA roots from Microsoft's CA Root programme, whereas ordinarily you'll end up with (some derivative of) the Mozilla CA root programme.

You also get the local policy root overrides. So for example in many corporate networks with a middlebox ensuring employees don't look at porn, the middlebox is trusted according to Windows Group Policy. Now your Curl program works the same way as Internet Explorer does, if the site is trusted in IE then it's trusted in Curl.

On the other hand, this means that the SChannel-enabled Curl trusts different things than the Curl on platforms with OpenSSL. Maybe this new setup works "fine" in SChannel Curl, but only when you try from a Linux box do you discover that your new site doesn't work at all any more without Microsoft's trust list; which explains the thousands of new tickets filed by (mostly Linux-using) customers whose product just mysteriously broke even though it looked fine on your Windows test machine, and you've just closed a dozen of those tickets as WORKSFORME...


There is, but for me `cargo tree` has always been sufficient so far.
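For example, from a crate's directory:

    # Full dependency tree, minus dev-dependencies
    # (which consumers of the crate never pull in)
    cargo tree -e no-dev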


https://www.github.com/mimoo/dephell


Sorry www.github.com/mimoo/cargo-dephell


cargo-crev is one


While I don't love the proliferation of dependencies, from a risk perspective the raw number of dependencies isn't always the right metric.

Looking at the authors and publishers numbers from https://github.com/rust-secure-code/cargo-supply-chain it's clear a lot of these are maintained by the same set of trusted folks.


This is true today, but will it remain true?


You could argue the same thing for many big monolithic C projects though. How many of the original authors/maintainers are left in OpenSSL or the Linux kernel?

My main worry about Rust dependencies is not so much the number, it's that it's still a fairly young ecosystem that hasn't stabilized yet, packages come and go fairly quickly even for relatively basic features. For instance for a long time lazy_static (which is one of the dependencies listed here) was the de-facto standard way of dealing with global data that needed an initializer. Apparently things are changing though, I've seen many people recommend once_cell over it (I haven't had the opportunity to try it yet).

Things like tokio are also moving pretty fast, I wouldn't be surprised if something else took over in the not-so-far future.

It's like that even for basic things: a couple of years ago, for command line parsing in an app, I used "argparse". It did the job. Last week I had to implement argument parsing for a new app; at first I thought about copy/pasting my previous code, but I went on crates.io and noticed that argparse hadn't been updated in 2 years, and apparently the "go to" argument parsing lib was now "clap". So I used clap instead. Will it still be used and maintained two years from now? Who knows.


I switched ripgrep to clap 4 years ago. And that was well after clap had already become the popular "go to" solution.

Some parts of the ecosystem are more stable than others. That's true. And it takes work to know which things are stable and which aren't.

And yet, some things just take a longer time to improve. lazy_static has been stable and unchanged for a very long time and it works just fine. You don't need to switch to once_cell if you don't want to. lazy_static isn't going anywhere. The real change here, I think, is that we're hoping to get a portion of once_cell into std so that you don't need a dependency at all for it.
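(For anyone unfamiliar, the once_cell style looks roughly like this; the map here is just an illustration:)

    use once_cell::sync::Lazy;
    use std::collections::HashMap;

    // A global initialized on first access, no macro required
    static HOSTS: Lazy<HashMap<&'static str, &'static str>> = Lazy::new(|| {
        let mut m = HashMap::new();
        m.insert("localhost", "127.0.0.1");
        m
    });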

The async ecosystem is definitely moving more quickly because it just hasn't had that much time to stabilize. If you're using async in Rust right now then you're probably an early adopter and you'll want to make sure you're okay with the costs that come with that.


Interesting that I missed clap when I wrote that program a few years ago, then. In my defence, "argparse" is a lot more explicit than "clap" for such a library. Also, argparse's last update was 2 years ago, so there's been quite a bit of overlap.

I guess what I'm saying is that it's another problem with the current package ecosystem: you often end up finding multiple packages purporting to do what you need, and it can be tricky to figure out which one you want. As an example, if you want a simple logging backend, the log crate currently lists 6 possibilities: https://crates.io/crates/log

I picked "simple_logger" basically at random.
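(Since log is just a facade, the backend is one call at startup; an API sketch, the exact call varies by simple_logger version:)

    use log::info;

    fn main() {
        simple_logger::init().unwrap(); // install the backend once
        info!("hello from the log facade");
    }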


> it's an other problem with the current package ecosystem: you often end up finding multiple packages purporting to do what you need, and it can be tricky to figure out which one you want

I'm trying to remember the last language I've used where people didn't say that.

Hmm... clojure? Nop.

Javascript? Nop nop nop.

Python? Hahaha I can't even remember all the package managers: virtualenv, venv, pipenv, poetry, ...


Seems like an unavoidable problem unless you buy into a curated ecosystem. Like, yeah, the cost of a decentralized ecosystem is that you have to do your due diligence on which crate to use, if any. (For example, I don't even bother with a log helper crate because it just isn't necessary for simple cases.)


Well yes if they're published as part of the same project as lots of these are. In C/C++ you wouldn't do this because consuming a library is a pain so you want to minimise the number of dependencies. In Rust, what would be 1 library in C often gets broken up into a few that are published together in order to allow people to depend on only the functionality they need.


It's also worth pointing out that once a version is published to crates.io, it can't be altered, specifically to prevent social engineering attacks. If you're worried about it, that means you can audit the frozen codebase for any given version from a top-level crate down through the dependencies, and once that trust is established, it can't be leveraged for a silent dependency change later on, which can only happen through a version update on the end-user's side.


What if I audit something down a few levels and find it lacking - how do I force update everything to not use the bad version?



C/C++ suffers from severe wheel reinvention due to a lack of package management, but do you see people cautioning the use of programs written in that language for that reason?

The STL package contains lots of things, but most of them are pretty ugly to use and quite a few of them are quite complicated to use because they are generalized for as many use cases as possible. There are so many authors out there that decide they can do memory management manually, write their own thread or process pools, implement sorting in a different way, rewrite basic algorithms for list operations like mapping, filtering, zipping, reducing...

Simply counting the number of dependencies isn't a great indicator of dependency bloat. There are extremes on both ends: no deps --> I know everything better and reimplemented the world, and thousands of deps --> look ma, I put a one-liner in a package! One should not judge too quickly.


https://wiki.alopex.li/LetsBeRealAboutDependencies

> These complaints are valid, but my argument is that they’re also not NEW, and they’re certainly not unique to Rust ... The only thing new about it is that programmers are exposed to more of the costs of it up-front.


Hm, this response essentially says "other languages have this problem too, so deal with it".

While thats true, it completely misses the point. The point is not that dependencies exist, or even that a package might have many dependencies.

The point is, I have found many times that Rust (and NPM) don't care about or even consider the impact of a large number of dependencies, and often take no steps to mitigate or reduce that number.

As others said, some features could be split off into other crates. Maybe someone only needs HTTP, or maybe they need HTTPS but no async. Or maybe they don't need logging. With Hyper and others you just have to build everything whether you want it or not.


> As others said, some features could be split off into other crates.

I agree with this, but that's not what your original post says. Or at least, it’s not what I understood from reading it. :)

> The point is, Rust (and NPM)

and C, in many real-world cases, which is why the above post matters.


When I find a project that is a handful of .c files and a Makefile, they almost always compile and run. Sometimes with warnings, because the features used in the code are deprecated, but usually without too much fanfare.


> they almost always compile and run

Same in Rust. Actually, it's quite a bit better than in C. The only time Rust projects fail to compile is when they pull in some C library and something there (like configure.ac) messed up. :D

And if this C project does anything interesting, it pulls in a bunch of C libraries that came precompiled with your OS, and might be stale and contain unpatched security vulns.

C/C++ developers pointing at other languages about dependency hell is a curiosity.


> other languages have this problem too, so deal with it

This is the primary reason I try to avoid projects built with npm. Fucking dependency hell. If the project hasn't been actively maintained in the last 3 months your chances of getting it to work drop precipitously.


Github and npm are both graveyards filled with dead JS libraries. They make it too easy to litter the universe with sub-par orphaned software. And somehow, it's up to each individual to filter it all out. You have critical software such as React sitting next to mountains of bad nonsense code. And they are all on equal footing.

People love to trash Perl on HN. But among many other things that Perl devs understood, they deeply understood issues that come up with dependencies. Most CPAN modules are namespaced, have unit tests, and unit tests run when modules are installed. Not only that, the people behind CPAN understood that it is a community effort and you, as a library author, have certain responsibilities to your community.

https://pause.perl.org/pause/query?ACTION=pause_04about

None of that exists in npm. We have scopes in npm, and that's about it. CPAN makes npm look like a child's toy.


Can you point out any projects that have gone off the rails because of dependencies? I mean otherwise it's just "what if" syndrome.


> The only thing new about it is that programmers are exposed to more of the costs of it up-front.

That's funny, because it's only "new" if your experiences primarily lie in newer languages and communities. There's a lot of criticism of C, but one thing it does is make dependencies pretty explicit. Some say that's good, some say that's bad, I guess it can be both at different times.


I don't believe that's accurate in general.

Let's say you open up a new C codebase: What are its dependencies? You'll have to hunt through its README (hopefully it's up to date!), other build instructions, maybe CMake, maybe some custom build system, etc.

What version of the dependencies does it use? If the code has been vendored, you at least know what code it's using - but where do you look for updates to that code? Do you manually go out to wherever it was copied from now and then and look for updates? If the code isn't vendored, then how was it installed? From the package manager? If so, what operating system and version were used during development? If it's not from a package, it might have been downloaded and installed manually? Again, where was it downloaded from? Where was it installed? What options did it use when it was compiled?

How do you handle transitive dependencies? Probably by hand. How well documented are they?

C suffers plenty of dependency issues.


But each one of these dependencies is pretty consciously and manually added, most of the time. In new code, to introduce a new one usually requires thought, and that creates a culture of caution.

Also if you are dynamic linking, ldd(1) can give you a pretty good picture.


And a simple `cargo tree` will show you the tree of dependencies for a crate, nicely formatted.


This. Take even CURL as an example: try to list its dependencies and you'll see how much harder it is.


libc, that's it. Every other dependency for curl is optional...


Try it. If you do not use any "optional" dependencies it becomes pretty limited, almost useless for anything serious (e.g. zlib, ssl).


Zlib is pretty small.

TLS implementations are often giant hairballs, but ones that many things depend on. You can think of it as a somewhat "system level" dependency.


Being small is not the question. Being there is. Otherwise we wouldn't be here discussing all of hyper's dependencies in detail, as some of these are a lot smaller than zlib.


I am late to reply. What I meant to say is that zlib is a small, self-contained dependency, suitable for static linking, with no major dependencies of its own, just math. A lot of more modern libraries and languages tend to have their small dependencies pull in a massive hydra of dependencies.


"It has always been that way." doesn't seem to be a helpful statement here.


I mean, in some sense you're right, but in another, the point is that it's not a regression.


True.

Also, I could imagine Rust's type system raises the bar on the number of dependencies that can be managed before everything breaks down. So a lib with 100 deps in NPM isn't the same as a lib with 100 deps in Cargo.


If you lock to specific versions, I don't think it differs much.

Edit: I mean if you don't use ~ or ^ in your node dependencies. Just explicit versions.
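In Cargo terms (the version number here is only an example):

    [dependencies]
    # `=` opts out of semver ranges entirely; `cargo update`
    # will never float this dependency forward
    serde = "=1.0.118"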


Yes, but people often throw out empty criticism without offering solutions or alternatives, and it gets annoying after a while; it's just a complaint rather than a healthy critique. As programmers we've heard them all before, I guarantee it, so why add another one to the pile?


Saying "The people complaining are right, but they've been right a long time" isn't a great endorsement of the situation.


I mean, you're just repeating a sibling comment, but if development has been this way for a long time, it's on the folks who are suggesting the new way to get out there and prove that it's a viable model for software development. It appears that most real-world, actually used software works like this.

I am all about improving the world, don't get me wrong, but saying "hey this software works just like all the other software" isn't really the insult that you seem to think that it is.


> saying "hey this software works just like all the other software" isn't really the insult that you seem to think that it is.

Well, it's common knowledge that most existing software is complete and utter crap, as evidenced by the fact that our first thought upon hearing that a particular piece of software is no longer being updated is not "oh good, it is (probably) finished and we can rely on it", but rather "oh no, now the innumerable defects no doubt still latent in it will remain unfixed". So "this software is just as bad as all the other software" is, while not a very grave insult in a relative sense, still quite damning in absolute terms.


It depends. If it's in a github repo and there isn't a massive backlog of issues for a software that hasn't been updated in a while, I might think that.

One good thing about stat counters for packages, combined with GitHub for issue tracking, is that you can kind of tell.

It does take some level of due diligence and isn't easy. But neither is anything relying on, say, system-installed libraries in C projects.

I'd rather have the package managers than not.


Can you give even a single example of:

- a significant (e.g., at least as complex as wget) software project,

- that has been unmaintained (no updates, code has the same MD5/etc hash),

- with a significant userbase (not sure exactly how to define that one),

- for a significant amount of time (at least five years),

- which is generally regarded as finished and bug-free (not in need of further development) rather than abandoned?

Because I can't think of a single one, and the only ones that even come close are video games where the known bugs were co-opted into gameplay features. The general consensus seems to be that any system that doesn't have automatic updates running is de-facto insecure (which, since every update mechanism I've heard of can introduce new code (ie new security vulnerabilities), means any system whatsoever is insecure).

(I don't quite disagree with the tacit assertion that actually getting things right on - if not the first try - then at least one of the first thirty or so is an extremely, maybe even unreasonably, high standard, but it manifestly is a standard that basically all existing nontrivial software projects fail to meet.)



5 years is a relatively rough one... in terms of libraries, I come across a lot that are 2+ years old that are feature complete and work. In terms of applications, there are a couple other responses in this thread, but the specific focus in reference was really on libraries themselves, which shouldn't be as complex as wget in general.



It seems to me Rust folks have developed this habit of deflecting blame with shallow commentary. One of the good examples is the reasoning around compiler slowness: sometimes it's LLVM, or it's the large amount of optimization, or it's not really slow compared to C++, and so on.

They could have said it straight: "Guys, highly optimized, safe compilation of a medium-size project will be in the range of 20-30 min." That would be a great and honest way to deal with it. Instead we get: oh, we have reduced compile times from 26 minutes to 21 minutes, so that's a 19% improvement in just one year, and there is more to come. Now, that is hard work and great, but I am sure Rust committers understand that when people say fast compile times they are most likely comparing to Go etc., which would be under a minute for most mid-size projects. And that is very likely not going to happen.

The same now goes for the Cargo dependency situation. Cargo and NPM are ideologically in agreement that people in general should just pull from the package manager instead of rewriting even a little bit of code. And again, instead of owning it, there will be a list of shallow reasoning: others do it too, reusing is better than rewriting, cargo is so awesome that pulling a crate is much easier, and so on.


Your argument seems to be "people see problems and other people explain why those problems exist" and somehow you frame it like it's a bad thing.


No, my point is that Rust fans try to win very narrow technical arguments even when they should clearly know the discussion is about the big picture. And yes, that seems like a bad thing to me.


On an internet forum when someone brings up a topic lots of people will respond with different opinions about that topic. You can act like this is somehow specific to the Rust community, but I don't think it is.

TBH you seem to have a really odd bias against Rust; you repeatedly take something about it that's positive and try to spin it as negative, possibly even for years? Maybe examine that.


Rust core leadership treats most technical problems as marketing problems.


Most of these are maintained as sets of crates under the same project/maintainers. For example, everything starting with futures comes from one repo, everything starting with tokio plus mio (plus some others) is under the tokio-rs GitHub organization, all the Windows bindings packages are from the same repo, etc.

Plus some of the dependencies are also dependencies of the standard library (hashbrown, cfg-if, libc).


I've recently taken a different stance on this. The issue isn't the number of dependencies, it's the reliability of said dependencies on the following fronts:

- Secure. Does this contain malicious code, exploits, or otherwise known bugs that could be fixed but aren't? This of course is hard, and will never be perfect. There are static security scans, though, especially in a language like Rust, that should be able to verify what the code does before it's published for consumption via cargo. NPM is trying to do something similar. This isn't foolproof and more sophisticated exploits will always exist, but getting the low-hanging and mid-level fruit should be within reach, which is a net win.

- Is it quality? This is beyond just secure: does it provide real utility value? One thing we always hear is DRY, which may mean sometimes you consume a lot of dependencies, since the problem space you work in involves a lot of things, so why re-invent every single fix if something exists that you can glue together to start making an impact in your problem domain? I don't think this is an issue, especially if #1 is true.

So I don't know, I think it's fine to have a lot of dependencies; I think the justification for those dependencies is often related to the complexity of the work involved. I'd expect a cURL replacement to have quite a few dependencies, since it's complicated software with lots of edge cases, so for instance not re-inventing an HTTP parser is a great idea.

Now if only we all shared this sentiment as developers in terms of upstreaming contributions back, too. The more we contribute and share with each other, the more productive we can be.

Of course, sometimes package managers do a terrible job at ensuring some sort of base quality, around security or otherwise, and that's never great. So it's important to be aware of the trade-offs you make when you source your dependencies.

By no means does this excuse developers from not understanding their dependency tree, either. It's really the opposite.


All these packages provide important pieces of functionality, which, I suppose, mostly cannot be omitted.

Either you depend on others' work for that, or you roll your own. Choose your poison.


A quick survey gave me:

* tracing is only present for logging purposes. Does curl need it? It should be configurable.

* itoa is only present for performance purposes, and only seems to be used by the server.

* It seems that a bunch of projects in the dependency tree use pin-project which is heavyweight and instead could use pin-project-lite. Some already do, which creates both being used, so you are worse off than just with pin-project alone...

* hyper contains code for both http servers and clients. Even if the dependencies are the same (and as seen above they are not), having to compile the server code for the client means an increase in compile time. It would be cleaner to provide separate server and client flags to make it possible to turn one off.
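Something like this in a consumer's Cargo.toml (feature names hypothetical, just sketching the idea):

    [dependencies]
    hyper = { version = "*", default-features = false, features = ["client"] }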


The latter is scheduled for the next release https://github.com/hyperium/hyper/issues/2223


There’s a third option if the package is so important: put it in the standard library.

Technically it’s still a dependency but the standard library is maintained with a standard that is rarely matched by third party libraries, and can dramatically simplify the ecosystems’ dependency graph.


Rust's standard library is intentionally kept small, since it's not always obvious at first what the best solution is, and the stability guarantee makes it the wrong place for evolving, diverse, or opinionated APIs.

E.g. the async ecosystem lets you pick between runtimes of different complexity and tradeoffs. Std only picked up the essential traits that allow other crates to interoperate with each other and the language itself to define async functions.


hopefully at some point they start including more batteries


> with a standard that is rarely matched by third party libraries

I mean, you can go both ways with this. Standard libraries are significantly more difficult to work on than third party libraries, and I've seen a lot of code in standard libraries that is objectively worse than ecosystem equivalents because of it.


I agree, there's no silver bullet, only tradeoffs and our appetites for them.

From looking at the list of crates above, I see a lot that are often part of the stdlib of other languages, such as HTTP, concurrency/nio primitives, and logging.

My perception of Rust's not including such fundamental primitives in the standard library is that Rust is still very much experimental, and the ecosystem values tinkering and experimenting with new ideas. The cost of that is increased dependency hell, especially over time. There's obviously huge value in experimentation for the industry, but it makes me hesitate to use Rust for projects where I just need to get stuff done, and stay done.


HTTP libraries are a prime example of where many, many standard libraries are considered old and crufty, and there are much better ecosystem libraries that end up being far more widely used.

You may have that perception, and that is fine, but it's not likely to be a thing that changes significantly, even when Rust is quite old. There's just not a lot of advantage to being in the standard library, and numerous downsides.


We should be more nuanced than that. There are also many standard libraries where the HTTP implementation is the standard. Why?

> There's just not a lot of advantage to being in the standard library, and numerous downsides.

Look at those huge lists of dependencies and the complaints of Cargo dependency hell. That's the downside. Every node in your dependency graph has overhead for everyone involved, and it's even worse when it's something as fundamental as HTTP.


Yes, there are downsides to every approach here. That's just life.

I write a lot of Rust that never touches HTTP.


Isn't that exactly why you'd use a package like hyper that wraps the pieces together for you?

I'm less experienced with Rust, but with node, there are many times I'll use a specific dependency over another because it's already in the dependency tree.

Aside, it's rough actually trying to keep node dependencies in check in a project. Especially in web UI projects using npm.


As inconvenient as it is, I tend to agree. Having worked with C#/.NET, where almost everything is in the box, and Node, where very little is in the box, and a minuscule amount of Rust, which is more towards the latter, I prefer the latter.

Now, I am somewhat surprised that, say, tokio or similar hasn't made it into the box yet, but it allows for much greater experimentation.

Aside: I wouldn't be surprised to see MS generate a massive suite of libraries if they shift more internals development to Rust. Not sure if it'll be good/bad or otherwise.


Go disagrees with you


Currently the mpsc channel in the stdlib is worse than the one found in crossbeam (according to the author[0]). That's one data point.

[0] https://internals.rust-lang.org/t/scaling-back-my-involvemen...


I also wonder if Hyper is really the right tool for the job here. When I was looking into HTTP/S crates, I decided against Hyper because it seemed to require bringing in a runtime for HTTPS, and the async nature of hyper did not seem necessary for my totally synchronous CLI tool. For something like CURL it seems like you would just want the leanest, simplest synchronous HTTP implementation possible


I think that it is the right tool.

1. CURL without https seems insufficient nowadays.

2. CURL could be improved by running multiple downloads at once. I'm not sure the curl command line utility can do it, but certainly libcurl.so has this ability; it allows client code to work with multiple connections.

3. Any application having a UI could benefit from async: input/output and the main task are async by nature. For example, curl might want to show progress/status on the terminal despite a stalled connection.

So maybe Hyper is too much code for curl, but it is arguable that it is not.


1. As to your first point, I totally agree CURL needs to support HTTPS. My point is that Hyper needs a runtime for HTTPS, and it doesn't necessarily make sense for CURL to have a runtime.

2. I'm not sure that CURL should necessarily support multiple concurrent downloads. It could also be argued it's more UNIX-y to make it just do one thing and allow the caller to run multiple CURL processes at the same time.

3. You wouldn't necessarily need async to achieve these goals. You could easily have a synchronous http implementation which allows for printing to the console between receiving chunks of data from the network. And if you really didn't want to block in order to show a "spinning" activity indicator or something, you could still achieve it with threads.

I think ultimately you'd have to decide based on the relative cost of including an entire runtime vs. just launching a second thread.


Hyper only brings in a single-threaded runtime, so it's not much of a runtime at all. That is to say, driving a future returned from Hyper in a blocking fashion and then dropping it is all that's required. Once you drop the client, the runtime will be dropped too. I usually take issue with runtimes due to the added complexity and the lack of clarity about resource usage - I'm mostly worried about superfluous memory usage and some rogue threads going off and doing a quest and a half doing god knows what, hogging or otherwise interfering with my application's threads. I don't think that's possible in this case. Do you know if there are any other concerns that I should be worried about when bringing in something that has a runtime?
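(Concretely, the blocking pattern described above looks roughly like this; a minimal sketch assuming the tokio 0.2 / hyper 0.13-era APIs this thread is about:)

    use hyper::{Client, Uri};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Single-threaded runtime, built and dropped within main
        let mut rt = tokio::runtime::Builder::new()
            .basic_scheduler()
            .enable_all()
            .build()?;
        let uri: Uri = "http://example.com".parse()?;
        let resp = rt.block_on(async { Client::new().get(uri).await })?;
        println!("status: {}", resp.status());
        Ok(())
    } // client and runtime are dropped here; nothing outlives the call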


> You could easily have a synchronous http implementation which allows for printing to the console between receiving chunks of data from the network. And if you really didn't want blocking to have a "spinning" activity indicator or something, you could still achieve it with threads.

One could do a spinning indicator, but not a STALLED label. To get that, one needs to restart read(2) every once in a while, and there we arrive at an implementation with complexity on par with async. Things become even more interesting if a program wants to process user input asynchronously. The UNIX way is to send signals, but it is just plain ugly. dd from coreutils lets you use signals to trigger it to print progress; it is a very inconvenient way to do it.
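A sketch of that timed-read loop with std only (buffer size and timeout are arbitrary), which shows the bookkeeping starting to pile up:

    use std::io::{ErrorKind, Read};
    use std::net::TcpStream;
    use std::time::Duration;

    fn download(mut stream: TcpStream) -> std::io::Result<()> {
        stream.set_read_timeout(Some(Duration::from_secs(2)))?;
        let mut buf = [0u8; 4096];
        loop {
            match stream.read(&mut buf) {
                Ok(0) => return Ok(()), // connection closed
                Ok(_n) => { /* feed _n bytes to the HTTP parser */ }
                // A timed-out read reports WouldBlock on Unix,
                // TimedOut on Windows
                Err(e) if matches!(e.kind(), ErrorKind::WouldBlock | ErrorKind::TimedOut) => {
                    eprintln!("STALLED")
                }
                Err(e) => return Err(e),
            }
        }
    }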

> I think ultimately you'd have to decide based on the relative cost of including an entire runtime vs. just launching a second thread.

I'm not so sure. A runtime for user-space context switching is very small. I did it for educational purposes at some point in the past in C. It is an operation like: save registers, switch stacks, restore registers, and jump to another thread. If you have more than two threads, then you need some kind of structure to store all the contexts and to decide which one to run next. Add some I/O code (like epoll) to track the state of file descriptors, and you are done. One could do it without async, but it wouldn't become much smaller, because it would be the same logic; just instead of stack switching, the program would recreate stack frames.


Tadaa: curl already supports --parallel to download many URLs simultaneously...


One better than multiple parallel downloads: aria2 can even be a long-running daemon process that does downloads on demand via a websocket API: https://aria2.github.io/manual/en/html/aria2c.html#rpc-inter...


The curl easy API perform call is just a blocking wrapper around the curl multi async interface AFAIK, so I actually think it makes sense.


IMO "how many packages are the dependencies broken into" is a far less useful question than "how many maintainers have commit access to the dependency subtree".

The latter is a better question because:

* It's directly connected to your security posture.

* It's a stable metric across languages with different norms about module size.


I count 14: https://github.com/hyperium/hyper/blob/master/Cargo.toml#L22...

bytes, futures-core, futures-channel, futures-util, http, http-body, httpdate, httparse, h2, itoa, tracing, pin-project, tower-service, tokio, want


Those are the direct dependencies. They have dependencies of their own. What counts is the entire DAG traversal including all transitive dependencies.


I count 67 --> 42 (after removing the dev only deps)

https://gist.github.com/seg-lol/0d22cf5002f890305cfd094f9ed1...

*edit, passed in -e no-dev to `cargo tree`, thanks @est31 for the suggestion


The number is smaller than that. You need to pass -e no-dev to cargo tree to filter out the dev-dependencies (which are only relevant for hyper development).


Thanks!



Some of these are tiny, and a substantial part are platform-specific, so you will never need all of them for a single build.


A quick check of cargo-geiger shows many hundreds of unsafe invocations in the dependencies of hyper. I think it's hard to argue that some Rust HTTP library is irrefutably safer when you've thrown out so many of the static guarantees of the language and replaced them with "dude, trust me".
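(For anyone who wants to reproduce this, the invocation is just:)

    # After `cargo install cargo-geiger`, from the crate root:
    cargo geiger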


It's fine, how often do you have to rebuild curl?



