This repository has been archived by the owner on Aug 19, 2022. It is now read-only.

Update README.md #78

Closed
wants to merge 16 commits into from
30 changes: 15 additions & 15 deletions README.md
File descriptors are an important resource that uses memory (and
computational time) at the system level. They are also a scarce
resource, as typically (unless the user explicitly intervenes) they
are constrained by the system. Exhaustion of file descriptors may
render the application incapable of operating (e.g., because it is
unable to open a file). This is important for libp2p because most
operating systems represent sockets as file descriptors.

### Connections

Connections are a higher-level concept endemic to libp2p; in order to
communicate with another peer, a connection must first be
established. Connections are an important resource in libp2p, as they
consume memory, goroutines, and possibly file descriptors.
uses a buffer.
## Limits

Each resource scope has an associated limit object, which designates
limits for all [basic resources](#basic-resources). The limit is checked every time some
resource is reserved and provides the system with an opportunity to
constrain resource usage.
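The check can be pictured with a toy scope. This is a hypothetical sketch of the idea, not the library's implementation; the real resource scope interface is richer (for instance, its memory reservation also takes a priority):

```go
package main

import "fmt"

// memoryScope is a toy model of a resource scope with a memory limit;
// it is illustrative only, not the library's type.
type memoryScope struct {
	used  int64
	limit int64
}

// ReserveMemory checks the limit before committing the reservation,
// mirroring the "check on every reservation" behavior described above.
func (s *memoryScope) ReserveMemory(n int64) error {
	if s.used+n > s.limit {
		return fmt.Errorf("resource limit exceeded: %d + %d > %d", s.used, n, s.limit)
	}
	s.used += n
	return nil
}

func main() {
	s := &memoryScope{limit: 1 << 20} // 1 MB budget (arbitrary)
	fmt.Println(s.ReserveMemory(512 << 10)) // fits within the limit
	fmt.Println(s.ReserveMemory(768 << 10)) // error: would exceed the limit
}
```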

There are separate limits for each class of scope, allowing for
multiresolution and aggregate resource accounting. As such, we have
limits for the system and transient scopes, default and specific
limits for services, protocols, and peers, and limits for connections
used to initialize a fixed limiter as shown above) by calling the `Scale` method.
The `Scale` method takes two parameters: the amount of memory and the number of file
descriptors that an application is willing to dedicate to libp2p.

These amounts will differ between use cases. A blockchain node running on a dedicated
server might have a lot of memory, and dedicate 1/4 of that memory to libp2p. On the
other end of the spectrum, a desktop companion application running as a background
task on a consumer laptop will probably dedicate significantly less than 1/4 of its system
For example, calling `Scale` with 4 GB of memory will result in a limit of 384 f

The `FDFraction` defines how many of the file descriptors are allocated to this
scope. In the example above, when called with a file descriptor value of 1000,
this would result in a limit of 1000 (1000 * 1) file descriptors for the system scope.
Contributor

Suggested change
this would result in a limit of 1000 (1000 * 1) file descriptors for the system scope.
this would result in a limit of 1256 (256 + 1000 * 1) file descriptors for the system scope.

Contributor Author
@marten-seemann: are you sure? Looking at https://github.com/libp2p/go-libp2p-resource-manager/blob/master/limit_defaults.go#L332, I don't see us adding a base.

I think it's a good thing we're not adding a base because in that case we'd be using even more FDs than we allocated to libp2p. For example, it seemed odd to me reading this example that we said we had 1000 FDs for go-libp2p but then our limit was being set to 1256.

Contributor
It's 1k (and 376 for the conns), tested with:

```go
func TestReadmeExample(t *testing.T) {
	scalingLimits := ScalingLimitConfig{
		SystemBaseLimit: BaseLimit{
			ConnsInbound:    64,
			ConnsOutbound:   128,
			Conns:           128,
			StreamsInbound:  512,
			StreamsOutbound: 1024,
			Streams:         1024,
			Memory:          128 << 20,
			FD:              256,
		},
		SystemLimitIncrease: BaseLimitIncrease{
			ConnsInbound:    32,
			ConnsOutbound:   64,
			Conns:           64,
			StreamsInbound:  256,
			StreamsOutbound: 512,
			Streams:         512,
			Memory:          256 << 20,
			FDFraction:      1,
		},
	}

	limitConf := scalingLimits.Scale(4<<30, 1000)

	require.Equal(t, limitConf.System.Conns, 376)
	require.Equal(t, limitConf.System.FD, 1000)
}
```
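Working backwards from those numbers, the scaling rule appears to be base plus increase per GB of memory beyond the 128 MB base config, with file descriptors taken as a plain fraction of the total. A self-contained sketch of that arithmetic (an inference from the test above, not the library's code):

```go
package main

import "fmt"

// scaleLimit models the inferred rule: the limit grows by increasePerGB
// for every GB of memory beyond the 128 MB base configuration.
func scaleLimit(base, increasePerGB int, memoryBytes int64) int {
	extra := memoryBytes - 128<<20
	if extra < 0 {
		extra = 0 // below 128 MB the base configuration is used
	}
	return base + int(int64(increasePerGB)*extra>>30)
}

// scaleFD models the FDFraction rule: no base is added.
func scaleFD(fraction float64, numFD int) int {
	return int(fraction * float64(numFD))
}

func main() {
	// Reproduces the test: Conns base 128, increase 64/GB, 4 GB memory.
	fmt.Println(scaleLimit(128, 64, 4<<30)) // 376
	fmt.Println(scaleFD(1, 1000))           // 1000
}
```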


Note that we only showed the configuration for the system scope here, equivalent
configuration options apply to all other scopes as well.

By default the resource manager ships with some reasonable scaling limits and
makes a reasonable guess at how much system memory you want to dedicate to the
go-libp2p process. For the default definitions see [`DefaultLimits` and
`ScalingLimitConfig.AutoScale()`](./limit_defaults.go).

### Tweaking Defaults

If the defaults seem mostly okay, but you want to adjust one facet, you can
simply copy the default struct object and update the field you want to change. You can
apply changes to a `BaseLimit`, `BaseLimitIncrease`, and `LimitConfig` with
`.Apply`.
tweakedDefaults.ProtocolBaseLimit.Apply(BaseLimit{
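The overlay semantics can be sketched self-contained. This is a simplified model of `BaseLimit.Apply` based on its description in `limit.go` (zero-valued fields are overwritten by the argument's values); the field values are arbitrary illustrations:

```go
package main

import "fmt"

// BaseLimit mirrors a few fields of the library's BaseLimit struct
// (a simplified sketch, not the full type).
type BaseLimit struct {
	Streams int
	Conns   int
	Memory  int64
}

// Apply overwrites every zero-valued field of l with the corresponding
// value from l2 — the overlay behavior assumed from the doc comment.
func (l *BaseLimit) Apply(l2 BaseLimit) {
	if l.Streams == 0 {
		l.Streams = l2.Streams
	}
	if l.Conns == 0 {
		l.Conns = l2.Conns
	}
	if l.Memory == 0 {
		l.Memory = l2.Memory
	}
}

func main() {
	tweaked := BaseLimit{Streams: 500} // override just one facet
	tweaked.Apply(BaseLimit{Streams: 1024, Conns: 128, Memory: 128 << 20})
	fmt.Println(tweaked.Streams, tweaked.Conns) // 500 128
}
```

Explicitly set fields survive, while everything left at zero falls back to the defaults.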
### How to tune your limits

Once you've set your limits and configured monitoring (see [Monitoring](#monitoring) below), you can tune your
limits better. The `blocked_resources` metric will tell you what was blocked
and for what scope. If you see a steady stream of these blocked requests it
means your resource limits are too low for your usage. If you see a rare sudden
spike, this is okay and it means the resource manager protected you from some
These errors occur whenever a limit is hit. For example, you'll get this error if
you are at your limit for the number of streams you can have, and you try to
open one more.

If you're seeing a lot of "resource limit exceeded" errors, take a look at the
`blocked_resources` metric for some information on what was blocked. Also take
a look at the resources used per stream, and per protocol (the Grafana
Dashboard is ideal for this) and check if you're routinely hitting limits or if
routinely.
Once you have limits set, you'll want to monitor to see if you're running into
your limits often. This could be a sign that you need to raise your limits
(your process is more intensive than you originally thought) or that you need
to fix something in your application (surely you don't need over 1000 streams?).

There are OpenCensus metrics that can be hooked up to the resource manager. See
`obs/stats_test.go` for an example on how to enable this, and `DefaultViews` in
or any other OpenCensus supported platform.

There is also an included Grafana dashboard to help kickstart your
observability into the resource manager. Find more information about it at
[here](./obs/grafana-dashboards/README.md).

## Allowlisting multiaddrs to mitigate eclipse attacks

implements `net.Error` and is marked as temporary, so that the
programmer can handle it with a backoff retry.
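A self-contained sketch of that pattern follows. The `limitErr` type and `withBackoff` helper are hypothetical stand-ins, not part of the library; only the `net.Error`/`Temporary()` check mirrors what the text describes:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// limitErr is a stand-in for the resource manager's error: it
// implements net.Error and reports itself as temporary.
type limitErr struct{}

func (limitErr) Error() string   { return "resource limit exceeded" }
func (limitErr) Timeout() bool   { return false }
func (limitErr) Temporary() bool { return true }

// withBackoff retries op when it fails with a temporary net.Error,
// sleeping a little longer after each attempt.
func withBackoff(op func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		var ne net.Error
		if !errors.As(err, &ne) || !ne.Temporary() {
			return err // permanent failure: don't retry
		}
		time.Sleep(time.Duration(i+1) * 10 * time.Millisecond)
	}
	return err
}

func main() {
	calls := 0
	err := withBackoff(func() error {
		calls++
		if calls < 3 {
			return limitErr{} // first two calls hit the limit
		}
		return nil
	}, 5)
	fmt.Println(err, calls) // <nil> 3
}
```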

## Usage

This package provides a limiter implementation that applies fixed limits:
```go
limiter := NewFixedLimiter(limits)
```
2 changes: 2 additions & 0 deletions limit.go
```go
func (l *BaseLimit) Apply(l2 BaseLimit) {
	// …
}

// BaseLimitIncrease is the increase per GB of system memory.
// Memory is in bytes. Values greater than 1<<30 likely don't make sense.
// FDFraction is expected to be >= 0 and <= 1.
type BaseLimitIncrease struct {
	Streams        int
	StreamsInbound int
```
2 changes: 1 addition & 1 deletion limit_defaults.go
```go
func (cfg *LimitConfig) Apply(c LimitConfig) {
	// …
}

// Scale scales up a limit configuration.
// memory is the amount of memory in bytes that the stack is allowed to consume,
// for a full node it's recommended to use 1/8 of the installed system memory.
// If memory is smaller than 128 MB, the base configuration will be used.
//
```