How to limit goroutines? #839

Open
luodw opened this issue May 26, 2023 · 9 comments

@luodw

luodw commented May 26, 2023

On some machines with a bad disk, when p2p distributes a big file and the p2p process's page cache is full (memory is limited with a cgroup), the process creates many goroutines and threads, because most goroutines are blocked in read/write syscalls and those reads/writes are slow. This results in high load.

So does torrent support limiting the number of goroutines?
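
To illustrate the kind of limit I mean, here is a minimal sketch (not anything torrent exposes; limitedWriteAt and the semaphore size of 16 are made up) that caps how many goroutines can be inside a disk write syscall at once, which also bounds the OS threads the runtime has to create for them:

package main

import (
	"os"
	"sync"
)

// A buffered channel acts as a counting semaphore, so at most cap(diskSem)
// goroutines are blocked in a write syscall at any moment.
var diskSem = make(chan struct{}, 16)

func limitedWriteAt(f *os.File, p []byte, off int64) (int, error) {
	diskSem <- struct{}{}        // acquire a slot; blocks while 16 writes are in flight
	defer func() { <-diskSem }() // release the slot
	return f.WriteAt(p, off)
}

func main() {
	f, err := os.CreateTemp("", "limited-write-demo")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		i := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			// 100 goroutines are started, but only 16 are ever inside WriteAt.
			_, _ = limitedWriteAt(f, []byte("x"), int64(i))
		}()
	}
	wg.Wait()
}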

@anacrolix
Owner

Could your problem be the same as #820, which was just recently fixed?

@luodw
Author

luodw commented Jun 1, 2023

Could your problem be the same as #820, which was just recently fixed?

Maybe not. Our problem happens when the disk is busy: the Go runtime creates a lot of threads, because many goroutines are blocked in syscalls like read or write and those syscalls return slowly. I think torrent should support limiting the number of goroutines in order to limit the number of threads.
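
As a way to confirm this, here is a minimal sketch (plain runtime/pprof, nothing from torrent; the 5-second interval is arbitrary) that logs the goroutine count next to the number of OS threads the runtime has created:

package main

import (
	"log"
	"runtime"
	"runtime/pprof"
	"time"
)

// logRuntimeStats periodically logs the goroutine count and the number of OS
// threads the runtime has created (threadcreate counts threads ever created,
// not currently live ones).
func logRuntimeStats(interval time.Duration) {
	for range time.Tick(interval) {
		log.Printf("goroutines=%d threads-created=%d",
			runtime.NumGoroutine(),
			pprof.Lookup("threadcreate").Count())
	}
}

func main() {
	// In the real process this would run in a goroutine next to the p2p work.
	logRuntimeStats(5 * time.Second)
}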

@anacrolix
Owner

Interesting, how many torrents do you have and how many connections for each? I can't imagine a situation where you have more than several torrents, with at most 20-25 connections actively downloading at once. That's still only several hundred goroutines blocked on reads or writes. Do you have a goroutine dump I can look at? Here is how to produce one if you're not sure: #820 (comment)
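
For reference, a common way to take one (a sketch; the linked comment describes the exact steps) is to expose the standard net/http/pprof handlers and fetch the goroutine profile with debug=2:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

// Expose the standard pprof endpoints, then take a full goroutine dump while
// the problem is happening with
//
//	curl 'http://localhost:6060/debug/pprof/goroutine?debug=2' > goroutines.txt
//
// debug=2 prints every goroutine with its full stack trace.
func main() {
	// ... start the torrent client and the rest of the application here ...

	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}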

@luodw
Author

luodw commented Jun 2, 2023

Interesting, how many torrents do you have and how many connections for each? I can't imagine a situation where you have more than several torrents, with at most 20-25 connections actively downloading at once. That's still only several hundred goroutines blocked on reads or writes. Do you have a goroutine dump I can look at? Here is how to produce one if you're not sure: #820 (comment)

We split a big file into several fixed-size chunks, and every chunk is downloaded as a single torrent. So for a 3 GB file with a chunk size of 100 MB, there will be 30 torrents.

When it happens again, I will try to dump the goroutine stacks.

@anacrolix
Owner

Hm, yeah the limit configuration available does not cater to very large torrent counts nicely. If your file grows, do you have to rebuild all those chunks? You might just want to use a single torrent for this. Multiple torrents for a single file will not be faster, unless you intend to reuse those chunks for different files, or append to the file.

@SemiAccurate

@anacrolix you seriously regard 30 torrents in an anacrolix/torrent client as a "very large torrent count"?

Other clients can easily handle thousands of torrents. Don't you think the heavy goroutine load is what's straining anacrolix/torrent?

I mean, it is an interesting concurrency model, though it does not appear to scale as well as other approaches (which are admittedly more difficult to program), or even as well as async/await.

@tsingson

tsingson commented Jul 21, 2023

limit goroutines?

try github.com/sourcegraph/conc/pool

example code

// SingleTask requests one contiguous range of pieces of a torrent.
type SingleTask struct {
	t       *torrent.Torrent
	log     *logger.Logger
	ctx     context.Context
	cancel  context.CancelFunc
	begin   int
	end     int
	minRate int64
	debug   bool
}

func NewSingleTask(t *torrent.Torrent, begin, end int, minRate int64, ctx context.Context, cf context.CancelFunc, debug bool, log *logger.Logger) *SingleTask {
	return &SingleTask{
		t:       t,
		begin:   begin,
		end:     end,
		ctx:     ctx,
		cancel:  cf,
		minRate: minRate,
		log:     log,
		debug:   debug,
	}
}

// download returns a pool task that requests pieces [begin, end) of the torrent.
func (b *SingleTask) download(tid int64) func(ctx context.Context) error {
	return func(ctx context.Context) error {
		if b.t.Complete.Bool() {
			b.log.Debug("torrent is complete")
			return nil
		}

		b.t.DownloadPieces(b.begin, b.end)
		return nil
	}
}

goroutine pool



mg := "magnet:?xt=urn:btih:aa9d1cc9ee3ccba9ef9a037fd1e219f682fdbb57"

...

client, err := torrent.NewClient(cfg)

...
t, err := client.AddMagnet(mg)

...
	minThreads := 16 // or runtime.NumCPU()

	btContext, btCancel := context.WithCancel(context.Background())

	// At most minThreads tasks run concurrently; the remaining tasks queue
	// inside the pool instead of each getting its own goroutine.
	p := pool.New().
		WithMaxGoroutines(minThreads).
		WithContext(btContext).
		WithFirstError().
		WithCancelOnError()

	b := NewSingleTask(t, 0, 0, s.minRate, btContext, btCancel, s.debug, s.log)

	p.Go(func(ctx context.Context) error {
		return b.speedLowStop(tid)
	})

	// Split the piece range into pages of 100 pieces; each page becomes one
	// pool task.
	total := t.NumPieces()
	pageSize := 100
	page := (total + pageSize - 1) / pageSize // ceil(total / pageSize)
	for i := 0; i < page; i++ {
		begin := i * pageSize
		end := (i + 1) * pageSize
		if end > total {
			end = total
		}
		b1 := NewSingleTask(t, begin, end, s.minRate, btContext, btCancel, s.debug, s.log)
		p.Go(b1.download(tid))
	}

	err = p.Wait()


It works in my project.
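
If you prefer not to add another dependency, a similar sketch is possible with golang.org/x/sync/errgroup and its SetLimit (the limit, page size, and the downloadPaged name are illustrative; the speedLowStop watcher and tid from the snippet above are left out):

package main

import (
	"context"
	"log"

	"github.com/anacrolix/torrent"
	"golang.org/x/sync/errgroup"
)

// downloadPaged mirrors the paging in the snippet above but bounds concurrency
// with errgroup.SetLimit instead of conc/pool.
func downloadPaged(ctx context.Context, t *torrent.Torrent, pageSize, limit int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(limit) // at most `limit` tasks run at once; the rest queue

	total := t.NumPieces()
	for begin := 0; begin < total; begin += pageSize {
		begin, end := begin, begin+pageSize
		if end > total {
			end = total
		}
		g.Go(func() error {
			if err := ctx.Err(); err != nil {
				return err // another task failed or the context was cancelled
			}
			t.DownloadPieces(begin, end) // request this range of pieces
			return nil
		})
	}
	return g.Wait() // all ranges have been requested (not yet downloaded)
}

func main() {
	client, err := torrent.NewClient(torrent.NewDefaultClientConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	t, err := client.AddMagnet("magnet:?xt=urn:btih:aa9d1cc9ee3ccba9ef9a037fd1e219f682fdbb57")
	if err != nil {
		log.Fatal(err)
	}
	<-t.GotInfo() // wait for metadata so NumPieces is known

	if err := downloadPaged(context.Background(), t, 100, 16); err != nil {
		log.Fatal(err)
	}
	client.WaitAll() // block until the torrent has actually finished downloading
}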

@SemiAccurate

limit goroutines?

try github.com/sourcegraph/conc/pool
...
goroutine pool
...
It works in my project.

@tsingson having a goroutine pool kinda defeats the whole purpose of goroutines!

They were supposed to be akin to cheap, lightweight threads that you could use everywhere. But if they're now considered a scarce resource to be conserved in a pool, then something is deeply wrong, either in the overall concept or in the language runtime implementation.

@MrGlp

MrGlp commented Aug 16, 2023

mark
