
Seeding 1000 swarms with Libtorrent #3475

Closed · synctext opened this issue Feb 21, 2018 · 4 comments

@synctext (Member)

Explore the performance bounds of Libtorrent for credit mining.

For performance and code quality we would like to leave all the heavy lifting/seeding inside Libtorrent.
Add 1000 swarms and see what happens. As a next step, put 900 of them in stop mode and check the effective upload rate given the DHT and other overhead. What is the performance of a session resume (restart)?

Bypass the default of 10 concurrent active downloads and put the rest in a waiting queue. The key is to verify that Libtorrent does not crash, hog memory, thrash the disk, or spawn too many threads.
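
Below is a minimal sketch (not Tribler code) of what such an experiment could look like with the libtorrent Python bindings. The torrent directory, the limits, and the exact flag/setting names are assumptions and may differ between libtorrent 1.1 and 1.2:

```python
# Sketch: add 1000 swarms, let libtorrent's queueing keep only a limited number
# active, then take 900 out of auto-management and stop them.
import glob
import time
import libtorrent as lt

ses = lt.session()
ses.apply_settings({
    'active_downloads': 10,   # the default-ish limit the issue wants to tune
    'active_seeds': 100,      # everything beyond this stays in libtorrent's queue
    'active_limit': 110,
    'alert_mask': lt.alert.category_t.status_notification
                | lt.alert.category_t.error_notification,
})

# Add 1000 swarms without blocking the calling thread.
for path in glob.glob('test_torrents/*.torrent')[:1000]:
    ses.async_add_torrent({'ti': lt.torrent_info(path), 'save_path': 'data/'})

time.sleep(10)  # crude: give the add_torrent_alerts time to arrive

# "Stop mode" for 900 of them: auto-managed torrents would be resumed by the
# queueing system, so clear auto-management before pausing.
handles = ses.get_torrents()
for h in handles[100:]:
    h.unset_flags(lt.torrent_flags.auto_managed)  # 1.2+; on 1.1: h.auto_managed(False)
    h.pause()

# Measure the effective upload of the remaining swarms while DHT etc. keeps running.
total_up = sum(h.status().upload_rate for h in handles[:100])
print('aggregate upload rate of the active swarms: %d B/s' % total_up)
```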

@synctext added this to the Backlog milestone Feb 21, 2018
@arvidn commented Feb 22, 2018

There's a tool in the libtorrent repo called connection_tester. I use it for various load testing. It can:

  • generate a large number of test torrents, to have something to load up
  • generate a specific test torrent where the piece hashes are computed over a pattern of bytes, which can be used with the two features below.
  • act as many peers uploading data to a test torrent (where the content is generated according to a pattern, so there's no disk involved)
  • act as many peers downloading data from a test torrent (without doing any hash checking)
  • generate a file with the pattern of bytes that a torrent client can use to seed a test torrent

I use it to load 100'000 torrents in client_test.
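
On the loading side, a rough sketch of bulk-adding such generated torrents and confirming each add through alerts, assuming the .torrent files sit in a test_torrents/ directory and the libtorrent Python bindings are used rather than client_test (alert field names are taken from the 1.2 bindings and may vary):

```python
# Sketch: asynchronously add a large batch of generated test torrents and count
# the add_torrent_alerts to know when libtorrent has processed them all.
import glob
import libtorrent as lt

ses = lt.session()
ses.apply_settings({'alert_mask': lt.alert.category_t.status_notification
                                | lt.alert.category_t.error_notification})

torrent_files = glob.glob('test_torrents/*.torrent')
for path in torrent_files:
    # async_add_torrent returns immediately; the result arrives as an alert
    ses.async_add_torrent({'ti': lt.torrent_info(path), 'save_path': 'data/'})

added = 0
while added < len(torrent_files):
    ses.wait_for_alert(1000)  # milliseconds
    for a in ses.pop_alerts():
        if isinstance(a, lt.add_torrent_alert):
            added += 1
            if a.error.value() != 0:
                print('add failed:', a.message())
print('libtorrent processed %d add requests' % added)
```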

@qstokkink (Contributor)

Related to #684.

@egbertbouman (Member)

Thanks for the tips @arvidn! I'll definitely take a look at this tool.

For now, I'll start by improving performance on the Tribler side. Some things that I found so far:

  • We're using add_torrent instead of async_add_torrent, so adding a lot of downloads at the same time will likely block the Twisted reactor thread.
  • We're using stats_alert instead of post_torrent_updates, so we query libtorrent even when the status of a torrent hasn't changed (a sketch of the alert-driven alternative follows this list).
  • There are a lot of unnecessary calls to handle.status().
  • The GUI can't seem to handle a lot of downloads being added at the same time.
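
For the second and third points, a small sketch of the alert-driven approach with post_torrent_updates(), assuming the libtorrent Python bindings (the loop structure is illustrative, not Tribler's actual event loop):

```python
# Sketch: ask libtorrent to post a state_update_alert that only carries the
# torrents whose status actually changed, instead of calling handle.status()
# on every torrent on every tick.
import time
import libtorrent as lt

ses = lt.session()
ses.apply_settings({'alert_mask': lt.alert.category_t.status_notification})
# ... torrents are added elsewhere ...

while True:
    ses.post_torrent_updates()   # request a single state_update_alert
    time.sleep(1)
    for a in ses.pop_alerts():
        if isinstance(a, lt.state_update_alert):
            changed = a.status   # list of torrent_status, changed torrents only
            total_up = sum(st.upload_rate for st in changed)
            print('%d torrents changed, %d B/s upload among them'
                  % (len(changed), total_up))
```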

@egbertbouman (Member)

The Tribler-side issues should be resolved, so I'll close the issue.
