AllChannel load balancing/robustness experiment #513

Open
devos50 opened this issue Mar 1, 2022 · 0 comments


devos50 commented Mar 1, 2022

Continuation of the discussion in Tribler/py-ipv8#979.

Future ToDo, after 2022 summer: resurrect AllChannel experiment. The AllChannels content dissemination community test has "unreasonable effectiveness". By plotting 1000 lines into a single plot a healthy experiment or fault shows up. Many bugs have been shows to exist using this approach. Its basically an end-to-end performance analysis, peer neighbourhood bias, and unhealthy overlay in general. AllChannels used 25 figures to depict the code health. One expert glace is sufficient to detect some new introduced bug.

Preferably, this should be an automatic trigger. I remember that a student worked on combining the statistics of the last X runs and computing the standard deviation of various metrics to detect anomalies; a rough sketch of that idea is included below.
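
A minimal sketch of such a standard-deviation-based check, assuming each run produces a flat dictionary of scalar metrics (the metric names and history format here are hypothetical):

```python
# Minimal sketch of the automatic trigger: compare the latest run against the
# mean and standard deviation of the previous X runs. The metric names and the
# history format (one dict of scalar metrics per run) are hypothetical.
from statistics import mean, stdev

def detect_anomalies(history, latest, num_sigmas=3.0):
    """Return the metrics in `latest` that deviate more than `num_sigmas`
    standard deviations from their mean over the runs in `history`."""
    anomalies = {}
    for metric, value in latest.items():
        past = [run[metric] for run in history if metric in run]
        if len(past) < 2:
            continue  # not enough history to estimate a spread
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            if value != mu:  # any change from a previously constant metric
                anomalies[metric] = (value, mu, sigma)
        elif abs(value - mu) > num_sigmas * sigma:
            anomalies[metric] = (value, mu, sigma)
    return anomalies

# Example: a sudden jump in total dissemination time gets flagged.
history = [{"dissemination_time": t} for t in (41.0, 39.5, 40.2, 40.8, 39.9)]
latest = {"dissemination_time": 55.3}
print(detect_anomalies(history, latest))
```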
