
Replace hyper with a lighter-weight framework #2040

Open
ignopeverell opened this issue Nov 28, 2018 · 13 comments

Comments

@ignopeverell
Contributor

ignopeverell commented Nov 28, 2018

Hyper accounts for close to half of our external dependencies. We need to start weeding those out as much as possible, or at least get good control over them (see #2026).

Suggestions as a base, both with far fewer dependencies:

https://github.com/tomaka/rouille
https://github.com/steveklabnik/simple-server (would have to add TLS support)

Note that we could potentially create a fork of those and replace some of their dependencies with ours (same logging framework, serde, etc.). I think @hashmap and @garyyu have looked a bit at hyper replacements; any feedback?
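For a sense of what the lighter option looks like, here is a minimal sketch of serving a single endpoint with rouille. This is hypothetical: the route and port are illustrative, the rouille version is assumed, and it is not grin's actual API code (and, as noted above, TLS would still need to be handled separately).

```rust
// Hypothetical sketch, assuming a recent rouille (Cargo.toml: rouille = "3").
// The route and port are illustrative, not grin's actual API surface.
fn main() {
    rouille::start_server("127.0.0.1:3413", move |request| {
        match request.url().as_str() {
            // A single JSON status endpoint standing in for the node API.
            "/v1/status" => rouille::Response::text(r#"{"status":"ok"}"#),
            _ => rouille::Response::empty_404(),
        }
    });
}
```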

@hashmap
Contributor

hashmap commented Nov 29, 2018

I don't have a straight answer. Hyper has a lot of deps, mostly because it's built on top of tokio. At the same time it's pretty bare-bones in terms of functionality; basically it's like the standard HTTP package in Go. It's not a framework for sure; a lot of frameworks are built on top of it. So it sounds like a win to remove it, right?

At the same time, hyper is the de facto standard HTTP package at the moment, and the same goes for tokio to be fair. The chances of hitting a serious issue with it are much lower than with a simple crate maintained by two contributors (I don't mean the links above, just an example).

So it's like choosing an OS for a server that runs just one service, e.g. DNS: Linux versus a simple custom OS or a unikernel. In theory the latter has a smaller attack surface; in practice I'm not so sure.

@antiochp
Member

Agree with @hashmap here.
It's one thing to have thousands of dependencies; it's another to have a dependency on something like hyper, which itself has lots of dependencies but also has a lot of eyes on it.

How much of our code has a dependency on hyper?
Is it just the api and wallet crates?

@ignopeverell
Contributor Author

I'm not disputing that hyper has a lot of eyes on it, but it does have a lot of dependencies. How many eyes are on this one?

hyper > tokio > tokio-fs > tokio-threadpool > crossbeam-deque > crossbeam-utils > cfg-if

I don't mean to hate on hyper, but the shorter and shallower the tree, the easier it will be to manage. And yes, our dependency on hyper is just to serve a couple of handfuls of endpoints to at most 5 clients...

@antiochp
Member

ok - agree with both @hashmap and @ignopeverell 😀

@hashmap
Contributor

hashmap commented Mar 5, 2019

I was skeptical about scalability requirements for the grin node, but atm I'm not so sure anymore. I think it's already important to have a performant solution by default. I value simplicity, but a simple impl of Stratum showed its limits pretty quickly, and our p2p layer is quite CPU intensive for the same reasons. In the first month many exchanges and pools integrated Grin, so I suppose scalability is quite important to them. I have many issues with tokio, but I have to admit it's the most performant IO solution we have atm.

@ignopeverell
Contributor Author

Is there anything showing that the current p2p layer is CPU intensive, and what the source of that is?

@DavidBurkett
Contributor

@ignopeverell I never got around to investigating the cause, but there does appear to be a direct correlation between the number of peers and CPU usage, at least on the Mac version. #2594

@hashmap
Contributor

hashmap commented Mar 5, 2019

@ignopeverell CPU load grows linearly with the number of peers. Granted, IO may not be responsible for that, but perf shows that we spend most of the time calling Thread::sleep (see the file attached). Our p2p layer is a heavy user of it, and we have pretty aggressive sleep intervals, like 1ms.

[Screenshot attached: perf profile, 2019-03-05 19:41]
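For illustration, here is a minimal sketch of the polling-with-sleep pattern being described; it is hypothetical, not grin's actual p2p code, but it shows why a ~1ms interval is expensive per peer.

```rust
use std::io::Read;
use std::net::TcpStream;
use std::thread;
use std::time::Duration;

// Hypothetical sketch of a per-peer read loop that polls a non-blocking
// socket and sleeps between attempts. With a ~1ms interval, each idle
// connection still wakes its thread roughly 1000 times per second, so
// CPU usage grows with the number of peers even when nothing is read.
fn read_loop(mut stream: TcpStream) -> std::io::Result<()> {
    stream.set_nonblocking(true)?;
    let mut buf = [0u8; 4096];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => return Ok(()),           // peer closed the connection
            Ok(n) => handle_bytes(&buf[..n]), // hypothetical message handler
            Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => {
                // Nothing to read yet: sleep briefly, then poll again.
                thread::sleep(Duration::from_millis(1));
            }
            Err(e) => return Err(e),
        }
    }
}

fn handle_bytes(_bytes: &[u8]) {}
```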

@DavidBurkett
Contributor

Ah, that's your issue. Easy fix: Don't use nanosleep. You don't need that kind of precision anyway.

@garyyu
Contributor

garyyu commented Mar 9, 2019

Easy fix: Don't use nanosleep.

nanosleep is what Rust uses on Linux; it's not controlled by us.
Maybe we can take a look at whether it's feasible to remove the Thread::sleep calls in the p2p threads.

@hashmap
Contributor

hashmap commented Mar 9, 2019

I don't think the problem is nanosleep per se, but the fact that we call it too often. We could increase the timeouts, sacrificing some latency. The only way to remove it entirely is to switch to epoll events, which is what mio (and tokio on top of it) does.
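For context, a minimal sketch of the event-driven alternative being described, assuming a recent mio (0.8-style API); the address, token, and accept loop are illustrative only.

```rust
// Hypothetical sketch: epoll-style readiness events via mio instead of
// sleep-based polling. Assumed Cargo.toml:
// mio = { version = "0.8", features = ["os-poll", "net"] }
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

const LISTENER: Token = Token(0);

fn main() -> std::io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);

    let addr: std::net::SocketAddr = "127.0.0.1:3414".parse().unwrap();
    let mut listener = TcpListener::bind(addr)?;
    poll.registry()
        .register(&mut listener, LISTENER, Interest::READABLE)?;

    loop {
        // Block until the OS reports readiness; the thread does no work
        // (and burns no CPU) while connections are idle.
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == LISTENER {
                // Drain pending connections; accept() stops returning Ok
                // once there is nothing left to take.
                while let Ok((_stream, peer)) = listener.accept() {
                    println!("accepted connection from {}", peer);
                }
            }
        }
    }
}
```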

@yeastplume
Member

Is this likely to be addressed? We're fairly tied into hyper at this stage, and I still don't believe there are any hugely compelling alternatives.

@quentinlesceller
Member

I'd say this is unlikely to get addressed in the near term, but it's probably something to keep in mind (or add somewhere in a nice-to-have list).
