Jetson Nano Unknown decoder 'h264_nvmpi' #1175
@htilly I see that you have this running, so any insights on the device to pass on docker-compose or anything else would be greatly appreciated.
This will likely require adding an option to frigate's ffmpeg build.
Ah ok, so the package I mentioned would need to be used.
Likely just needs to be incorporated here: https://github.com/blakeblackshear/frigate/blob/master/docker/Dockerfile.ffmpeg.aarch64
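A hedged sketch of what incorporating it there might look like. The clone URL, build steps, and patch filename below are assumptions based on the jetson-ffmpeg project's README, not the actual contents of Dockerfile.ffmpeg.aarch64:

```dockerfile
# Hypothetical fragment for Dockerfile.ffmpeg.aarch64: build the
# jetson-ffmpeg multimedia-API wrapper library, then patch and
# configure ffmpeg with nvmpi support. Paths and the patch file
# name are assumptions and should be checked against the upstream repo.
RUN git clone https://github.com/jocover/jetson-ffmpeg.git /opt/jetson-ffmpeg \
    && mkdir /opt/jetson-ffmpeg/build && cd /opt/jetson-ffmpeg/build \
    && cmake .. && make -j"$(nproc)" && make install && ldconfig

# Apply the nvmpi patch before ffmpeg's ./configure so the
# h264_nvmpi decoder/encoder is compiled in.
RUN cd /opt/ffmpeg \
    && git apply /opt/jetson-ffmpeg/ffmpeg_nvmpi.patch \
    && ./configure --enable-nvmpi \
    && make -j"$(nproc)" && make install
```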
And before I start work, master should be my base?
No.
Ok, I'll rebase. And just to confirm, for testing: I build Dockerfile.ffmpeg.aarch64, then update the base with the ARCH ARG and my image, then build that, then build Dockerfile.aarch64.
Mostly. Take a look at the makefile. It may clarify some things.
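The chain described above might be sketched as Makefile targets like these. The target names, file paths, and image tags are hypothetical illustrations of the ordering only, not the project's actual Makefile:

```makefile
# Hypothetical sketch of the build ordering only -- the real targets
# live in the project's Makefile and may differ.
ffmpeg-aarch64:
	docker build -f docker/Dockerfile.ffmpeg.aarch64 -t frigate-ffmpeg:aarch64 .

base-aarch64: ffmpeg-aarch64
	docker build -f docker/Dockerfile.base --build-arg ARCH=aarch64 -t frigate-base:aarch64 .

frigate-aarch64: base-aarch64
	docker build -f docker/Dockerfile.aarch64 -t frigate:aarch64 .
```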
Oh cool, I haven't used that before.
Hey @blakeblackshear, I am running into issues, as it is looking for the following: set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,-rpath-link=/usr/local/cuda/lib64") I opened an issue on the other project to see if there is another way this can work and I'll let you know.
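One hedged workaround sketch, assuming the failure is that /usr/local/cuda/lib64 does not exist inside the build container: make the hard-coded path configurable and point it at wherever the CUDA libraries actually live. The variable name and the example path below are assumptions, not part of the upstream project:

```cmake
# Hypothetical: replace the hard-coded CUDA library path with a cache
# variable, so a container without the /usr/local/cuda symlink can
# point -rpath-link at the real (JetPack-specific) location.
set(CUDA_LIB_DIR "/usr/local/cuda-10.2/targets/aarch64-linux/lib"
    CACHE PATH "Directory containing the CUDA runtime libraries")
set(CMAKE_SHARED_LINKER_FLAGS
    "${CMAKE_SHARED_LINKER_FLAGS} -Wl,-rpath-link=${CUDA_LIB_DIR}")
```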
If it requires a separate base image, I would rather have a separate image like we do for amd64nvidia.
Are there any libraries I can remove? Just trying to see if I can clean up the image a little, but I am not sure if I will break anything. I assume anything listed as a video encoder here I can remove?
I wouldn't remove anything.
https://github.com/e1z0/jetson-frigate I've done some research on it as well.
Thanks so much, gonna see if I can roll this into what I am building here.
I'm afraid that, as is, the repo does build the ffmpeg binary, but doesn't work on the Jetson Nano as per this issue.
So I finally got back on this, hoping I'll be able to close this out thanks to e1z0's help.
@blakeblackshear the patch file that's needed, where should it live in this repo? Should it just be in the docker folder, or is there somewhere else more appropriate?
I would put it in the docker folder.
Came to update. |
@e1z0 @KillahB33 Is the built image hosted somewhere? I'm struggling to build jetson-frigate as well. @blakeblackshear Do you think an official Frigate aarch64nvidia image for the Jetson Nano is in the cards? I have an M.2 Coral TPU on the way, but if NVDEC, NVENC and potentially even the GPU can take care of decoding, the Nano + Coral could be a powerful hardware option.
It's difficult for me to work through it without the hardware on hand. These ffmpeg builds are difficult to get right. I'm not sure it's a popular enough device for me to invest the time, but definitely open to PRs. |
The small Jetson Nano is comparatively cheap and quite AI-capable on its own.
I've been looking for this for some time, since I'm running a couple of Jetson Nanos. Is it possible to take inspiration from a similar project? https://github.com/roflcoopter/viseron/tree/master/docker/jetson-nano
I'm interested in supporting any testing as needed; I have the same hardware setup as @KillahB33 (Jetson Nano + m2 Coral), and have messed around a little trying to get the hwaccel options going with @e1z0's repo but haven't made it very far yet. |
Others seem to use the Nano for AI with Docker as well. Some details for fiddling:
Apologies y'all, I would love to help but I can't seem to locate the image that I was using from e1z0, or find the comment where he mentions it. If you have more than 4 1080p cameras I would also suggest you look for something else, as my Nano couldn't handle it even after the hwaccel was set up.
Too bad, error at the end, after installing ffmpeg from deb sources inside the container. edit: here goes the patch to ffmpeg... if anybody sets something up, I will test it.
I wanted to test performance/hw-accel on the host OS with ffmpeg from the repo. No hwaccel to be seen in jtop or tegrastats... 3 tasks running as a test on the host OS (beside the old container):
|
GStreamer on Jetson! Is a translation from ffmpeg RTSP to GStreamer possible?

Try ffmpeg test streaming:
ffmpeg -vsync drop -c:v h264_nvv4l2dec -rtsp_transport tcp -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -stimeout 5000000 -i rtsp:https://admin:[email protected]/Streaming/channels/103 -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an -r 5 -s 1280x720 -f null -

Get the cam per SSH on the Jetson display:
DISPLAY=:1 gst-launch-1.0 rtspsrc location=rtsp:https://admin:[email protected]:554/Streaming/channels/103 latency=200 drop-on-latency=true ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink -e
Seems painful to compile ffmpeg for the Nano.
There is hope... now somebody has to look at their source to pick up the trick. edit: big mistake... edit2: installing Shinobi's TensorFlow plugin freezes the Jetson completely... will revert back to a fresh image.
Don't count the Jetson Nano out; dive into DeepStream.
The Jetson Nano is capable of h264/h265 hardware decoding for a significant number of streams.
Really cool (diving into DeepStream).
Some help with DeepStream, TensorRT and YOLO on the Nano ->
Hello, new to this thread, but I have a few questions I'm hoping someone can answer without me having to try 1000 things. I have an i5 laptop that I'm running Home Assistant on. I have the 4 GB Jetson Nano installed with the base image and Docker installed. The only Docker container that's running is DeepStack for detections. Is there an approved/easy way to install Frigate in a separate container to use as an NVR, or do I have to go through a bunch of hoops? I don't really need Frigate to do any serious detections. It would be nice if it could detect a person first, and if that's true I can trigger DeepStack for facial recognition. But I didn't know if it can be installed that way.
There are a few others doing that on here. You have home assistant so should be easy to do it with some node red setups. |
hi there, what do you think about this approach? |
GREAT! |
I managed to get FFMPEG working for me. Basically I used https://github.com/Metric-Void/jetson-ffmpeg-docker for FFMPEG, but I had to modify it a bit to get the entrypoints and install locations correct. I also had to modify Frigate a bit so that it was able to work with Python 3.6 (as the Nvidia docker container is still based on Ubuntu 18.04, which doesn't really support Python 3.8). I'm just in the process of cleaning up some of the code, but I should have some stuff pushed up soon. But I doubt I'll be able to incorporate it as a pull request, as it modifies too many core files. Hopefully once Nvidia releases a JetPack image based on 20.04 (hopefully next year), it should be a lot simpler. I'm getting pretty consistent use of NVDEC, with sporadic NVENC usage, according to jtop:
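For anyone replicating this, the decoder would be wired into Frigate's config roughly as below. This is a sketch only: the camera name, credentials, and surrounding input args are placeholders, and `h264_nvmpi` is only present in an ffmpeg built with the jetson-ffmpeg patch:

```yaml
# Hypothetical Frigate config fragment selecting the nvmpi hardware
# decoder for all camera inputs. Adjust args to your cameras.
ffmpeg:
  input_args: -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -rtsp_transport tcp -c:v h264_nvmpi

cameras:
  front:  # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp:https://user:[email protected]:554/stream  # placeholder URL
          roles:
            - detect
```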
NVDEC usage from ffmpeg looks FANTASTIC... did you stress the RTMP decoding with more than 10 cams? (In a second pipeline, maybe GStreamer could also be made to work on the Nano for Frigate.)
I haven't really stressed it too much. I've just got 6 cameras, and don't use RTMP (although it was enabled). All seemed to be running fine, with no artifacts or anything. |
8 cams seems possible with good performance.. |
I've got a pretty good setup now for testing as well. Just set up a Linux server in ESXi 7 and was able to pass through a GeForce 790. Inference speeds for 5 streams are under 100 ms, but I don't think nvidia-smi shows it being utilized. I'm also waiting to hear back from the Double Take and DeepStack devs, as my DeepStack install on my Jetson Nano quit being recognized by Double Take. Although it is still able to process commands through its API, so I'm pretty sure it's on the Double Take side of things. But those 4 pieces of software together are so very powerful, that is including Home Assistant.
Hey guys,
This has been added as a community supported build for 0.13 |
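For anyone landing here now, a minimal docker-compose sketch for a Jetson deployment. The image tag below is an assumption (community TensorRT/Jetson images are tagged per JetPack version); verify it and the device mappings against the current Frigate documentation:

```yaml
# Hypothetical compose fragment -- check the image tag and runtime
# settings against the Frigate community-build docs before use.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp4  # assumed tag
    runtime: nvidia          # expose the Jetson GPU/NVDEC to the container
    shm_size: "128mb"
    restart: unless-stopped
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
    ports:
      - "5000:5000"
```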
Describe the bug
Basically I am running this on a Jetson Nano, and I have compiled ffmpeg with the jocover/jetson-ffmpeg package.
I have also confirmed that it is properly exposed by running ffmpeg -encoders | grep 264, and I see it there.
Frigate says that it is an unknown decoder.
Version of frigate
0.8.4-5043040
Config file
Frigate container logs
Frigate stats
FFprobe from your camera
Screenshots
Video is all green with this option on
Computer Hardware
Camera Info:
Not relevant