
excessive network load by /proc/$pid/net/dev #143

Closed
milahu opened this issue Jun 20, 2024 · 1 comment
milahu commented Jun 20, 2024

/proc/$pid/net/dev shows about 1000x more traffic than rqbit says

can the DHT overhead be so bad?
is rqbit vulnerable to attacks?
is rqbit flooding the trackers?

I'm getting the network load from the rqbit API with rqbit-list.sh from #142,
and with rqbit-stats.sh from /proc/$pid/net/dev - see below

rqbit running for 12h40m on a 1Gbit/s connection

$ speedtest --simple
Ping: 2.291 ms
Download: 1045.50 Mbit/s
Upload: 959.81 Mbit/s

$ ./rqbit-list.sh
total up 1.416562TiB 10.11MiB/s down 688.649911MiB 0.00MiB/s

$ ./rqbit-stats.sh
1718869844,1.155361PiB,155.365246TiB
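For scale, the two "up" figures differ by roughly 850x, so "about 1000x" is the right ballpark. A quick check, assuming both numbers use binary prefixes:

```shell
# net/dev reports 1.155361 PiB out, rqbit reports 1.416562 TiB up
awk 'BEGIN { printf "%.0fx\n", (1.155361 * 1024) / 1.416562 }'
# prints 835x
```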

to compare, this is qBittorrent,
running for 3d17h on a 100+40Mbit/s connection

the qBittorrent GUI says: uploaded 1.2TiB + downloaded 85GiB

$ speedtest --simple
Ping: 32.376 ms
Download: 90.49 Mbit/s
Upload: 30.84 Mbit/s

$ ./qbittorrent-stats.sh 
1718870926,4.454655TiB,813.896183GiB

I'm running rqbit with

rqbit --disable-upnp --tcp-min-port 6000 --tcp-max-port 65535 server start

maybe the port range is too large?

rqbit-stats.sh
#!/usr/bin/env bash

# monitor network traffic of a process

# https://unix.stackexchange.com/a/6914/295986

iface=eth0

pid=$(pgrep rqbit)

IN=0; OUT=0; TIME=0

get_traffic() {
  # print "unixtime,rx_bytes,tx_bytes" for $iface, then split on commas
  t=$(awk -v iface="$iface" '$0 ~ iface":" { printf("%s,%d,%d\n", strftime("%s"), $2, $10); }' "/proc/$pid/net/dev")
  IN=${t#*,}; IN=${IN%,*}
  OUT=${t##*,}
  TIME=${t%%,*}
}

while true
do
  get_traffic

  # why are these numbers off by factor 1000?
  # DHT overhead?!

  # $ ./scripts/rqbit-list.sh
  # total up 1.416562TiB 10.11MiB/s down 688.649911MiB 0.00MiB/s

  # out,in = upload,download
  echo "$TIME,$(numfmt --to=iec-i --format=%.6f <<< "$OUT")B,$(numfmt --to=iec-i --format=%.6f <<< "$IN")B"
  # 1718869844,1.155361PiB,155.365246TiB

  sleep 1
done

milahu commented Jun 20, 2024

aah. nevermind. I skipped the fine print of rqbit-stats.sh:

https://unix.stackexchange.com/a/6914/295986

I don't think this is actually a per-process counter;
I think it's just the total interface count from the process' point of view.
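That's easy to verify: any two processes in the same network namespace see the exact same table in /proc/$pid/net/dev, so it cannot be a per-process counter. A minimal sketch (the `list_ifaces` helper is made up for this demo):

```shell
#!/usr/bin/env bash
# /proc/$pid/net/dev is per network namespace, not per process:
# any two processes in the same namespace list identical interfaces.
list_ifaces() { awk -F: '/:/ { gsub(/ /, "", $1); print $1 }' "/proc/$1/net/dev"; }

sleep 5 & child=$!            # an unrelated process in the same namespace
mine=$(list_ifaces $$)
theirs=$(list_ifaces "$child")
kill "$child"

[ "$mine" = "$theirs" ] && echo "same namespace-wide table"
# prints: same namespace-wide table
```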

using /proc/$pid/io gives better results:

rqbit-stats.sh
#!/usr/bin/env bash

# monitor network traffic of a process
# https://unix.stackexchange.com/a/6929/295986

# this is hard without root access
# and without extra capabilities: cap_net_admin, cap_net_raw
# which are also required by nethogs
# so we use total io as a proxy for network io

set -eu

pid=$(pgrep rqbit)

while true; do

  # import rchar, wchar, read_bytes, write_bytes, ... as shell variables
  eval "$(sed 's/: /=/' "/proc/$pid/io")"

  time=$(date --utc +%s)

  # payload traffic: up 1.450163TiB + down 688.649911MiB

  # 1718872971,853.397302MiB,-87.720849GiB
  #out=$((wchar - write_bytes)); in=$((rchar - read_bytes))

  # 1718873132,108.031573GiB,1.829787TiB
  #out=$wchar; in=$rchar

  # 1718873187,107.232708GiB,1.915926TiB
  #out=$write_bytes; in=$read_bytes

  # 1718873228,1.916148TiB,107.256169GiB
  #out=$read_bytes; in=$write_bytes

  # 1718873309,1.830446TiB,108.136342GiB
  out=$rchar; in=$wchar

  out=$(numfmt --to=iec-i --format=%.6f <<< "$out")B
  in=$(numfmt --to=iec-i --format=%.6f <<< "$in")B

  echo "$time,$out,$in"

  sleep 1

done
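A note on the /proc/$pid/io semantics behind this: rchar/wchar count every byte passed through read()/write()-style syscalls, including sockets, pipes, and /dev/null, while read_bytes/write_bytes only count I/O that reaches the block layer. That's why rchar/wchar can track network traffic at all. A small sketch showing wchar growing for writes that never touch storage:

```shell
#!/usr/bin/env bash
# wchar counts bytes passed to write(), even when they go to /dev/null;
# write_bytes (block-layer I/O) would not grow from this.
wchar_of() { awk '/^wchar/ { print $2 }' "/proc/$1/io"; }

before=$(wchar_of $$)
printf '%100000s' '' > /dev/null   # builtin printf: this shell writes 100000 bytes
after=$(wchar_of $$)

delta=$((after - before))
[ "$delta" -ge 100000 ] && echo "wchar grew by at least 100000 bytes"
```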

milahu closed this as completed Jun 20, 2024