High CPU usage #132

Closed · menzo2003 opened this issue Feb 22, 2023 · 6 comments

@menzo2003
I noticed that the CPU usage of the fr24 docker container has been at 100% of one core for a while now. I remember it was around 20% before.

Here is some info; if you need more, please let me know.

Process: "qemu-arm-static /usr/local/bin/fr24feed" uses 99% CPU.

The docker image runs on an x86 system.
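To confirm which process is burning the CPU, assuming the container is named fr24feed, you can check from the host:

docker top fr24feed
docker stats --no-stream fr24feed

docker top lists the processes running inside the container, and docker stats shows the container's overall CPU share.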

Log:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 00-libsecomp2: executing...
[cont-init.d] 00-libsecomp2: exited 0.
[cont-init.d] 01-fr24feed: executing...
WARNING: Setting timezone via TZ is not supported in this container. fr24feed requires the container has a timezone of GMT (+0).
fr24feed version: 1.0.34-0
[cont-init.d] 01-fr24feed: exited 0.
[cont-init.d] 02-show-architecture.sh: executing...

Hardware information:
Machine: x86_64
Processor: unknown
Platform: unknown

[cont-init.d] 02-show-architecture.sh: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
2023-02-22 09:55:51 | [fr24feed ASCII-art startup banner]
2023-02-22 09:55:51 | 23-02-22 09:55:51.684 [I][http-server.cpp:270] [httpd]Server started, listening on :::8754
2023-02-22 09:55:52 | [i]PacketSenderConfiguration::fetch_config(): Yoda configuration for this receiver is disabled
2023-02-22 09:55:52 | [d]TLSConnection::ctor(): Enable verify_peer in production code!
2023-02-22 09:55:57 | [feed][d]fetching configuration
2023-02-22 09:55:57 | [feed][c]Max range AIR: 350.0nm
2023-02-22 09:55:57 | [feed][c]Max range GND: 100.0nm
2023-02-22 09:55:57 | 23-02-22 09:55:57.798 [I][crxstats.cpp:588] [stats]Stats thread started
2023-02-22 09:55:57 | [feed][c]Timestamps: optional
2023-02-22 09:55:57 | 23-02-22 09:55:57.801 [I][receiver_ac_sender.cpp:137] Stopping ReceiverACSender threads for feed
2023-02-22 09:55:57 | 23-02-22 09:55:57.804 [D][receiver_ac_sender.cpp:141] Stop called on non-running network thread for feed
2023-02-22 09:55:57 | 23-02-22 09:55:57.804 [I][receiver_ac_sender.cpp:96] Configured ReceiverACSender: 185.218.24.22:8099,185.218.24.23:8099,185.218.24.24:8099, feed: EHAM351, send_interval: 5s/1s, max age: 15s, send metadata: true, mode: 1, filtering: true
2023-02-22 09:55:57 | 23-02-22 09:55:57.814 [I][receiver_ac_sender.cpp:36] Network thread connecting to 185.218.24.22:8099 for feed EHAM351

@ssahlender

I have exactly the same issue:

$ docker version
Client:
 Version:           23.0.1
 API version:       1.42
 Go version:        go1.20
 Git commit:        a5ee5b1dfc
 Built:             Sat Feb 11 13:58:04 2023
 OS/Arch:           linux/amd64
 Context:           default

Server:
 Engine:
  Version:          23.0.1
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.20
  Git commit:       bc3805a0a0
  Built:            Sat Feb 11 13:58:04 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.6.18
  GitCommit:        2456e983eb9e37e47538f59ea18f2043c9a73640.m
 runc:
  Version:          1.1.4
  GitCommit:
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ uname -a
Linux hostname 6.1.12-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 14 Feb 2023 22:08:08 +0000 x86_64 GNU/Linux

@MaxWinterstein

This seems to be Docker v23 related, possibly their change to the default ulimit handling.

This has come up across other repos as well; see e.g. MaxWinterstein/homeassistant-addons#151 and all the related issues.
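The difference is easy to see directly; a minimal check, assuming an alpine image is handy:

# File descriptor limit under the Docker v23 defaults:
docker run --rm alpine sh -c 'ulimit -n'

# Same check with an explicit limit:
docker run --rm --ulimit nofile=1024 alpine sh -c 'ulimit -n'

On Docker v23 the first command can print an enormous number, and software that walks the whole file descriptor table (as emulators like qemu-arm-static apparently do) will spin; the second prints 1024.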

@luciodaou

I'm having the same issue, out of nowhere.

It began a few days ago.

@bawachhe

bawachhe commented Sep 4, 2023

Hello, I am a programmer. Probably.

Considering that the solution in the linked issue from the HomeAssistant addon community is to add a ulimit, here's how you would do it for this docker image only.

For those just running with the docker command, add --ulimit nofile=1024, like so (direct copy/edit from this repo's README.md):

docker run \
 -d \
 --rm \
--ulimit nofile=1024 \
 --name fr24feed \
 -e BEASTHOST=beasthost \
 -e FR24KEY=xxxxxxxxxxx \
 -p 8754:8754 \
 ghcr.io/sdr-enthusiasts/docker-flightradar24:latest
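After recreating the container, you can verify the new limit actually applies and that CPU has dropped (container name assumed to be fr24feed):

docker exec fr24feed sh -c 'ulimit -n'
docker stats --no-stream fr24feed

The first command should print 1024, and the CPU% column of the second should be back in the low single digits.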

Or, for the docker-compose method, add this to your docker-compose.yml:

    ulimits:
      nofile: 1024

Full container definition (copied/edited from the gitbook, but the README.md version is fine too):

  fr24:
    image: ghcr.io/sdr-enthusiasts/docker-flightradar24:latest
    ulimits:
      nofile: 1024
    tty: true
    container_name: fr24
    restart: always
    ports:
      - 8754:8754
    environment:
      - BEASTHOST=ultrafeeder
      - FR24KEY=${FR24_SHARING_KEY}
    tmpfs:
      - /var/log

I'll have to run this for a few days and see if it affects reporting to fr24, but so far CPU usage on the fr24feed process is basically idle (0.0%, sometimes 0.1%) and my poor CPU has finally cooled down to a steady 45°C. The HomeAssistant folks seem happy with it as well; there have been no further comments on their issue for three months.

@luciodaou

My issue was solved after removing a hard drive that logs were being written to (I had mapped /var to it).

@kx1t
Member

kx1t commented Feb 14, 2024

There hasn't been any activity on this item, and the code has undergone substantial changes since this bug was reported. I am closing the issue; if you still have issues with CPU usage on the latest build, feel free to reopen and add comments.

@kx1t closed this as completed Feb 14, 2024