Error response from daemon: failed to listen to abstract unix socket "/containerd-shim/moby/<uuid>/shim.sock": listen unix /containerd-shim/moby/<uuid>/shim.sock: bind: address already in use: unknown #643
Comments
I've got the same issue in my project...
Same problem here with Docker updates that don't restart the containers. Running … Edit: Gave myself a +1 a few months later because I had the same issue and found my own answer as a solution…
I've got the same problem here and I can't remove it, unfortunately.
Any update on this? I too am hitting it.
The solution that worked for me was to destroy the container and create a new one with the same volume from the old one.
Try finding the docker process and killing it; that should resolve the issue:
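A minimal sketch of what that suggestion amounts to; the pattern, PID, and container name below are placeholders, not values from the original comment:
# Locate the stale containerd-shim process and its full command line
pgrep -af containerd-shim
# Kill the one whose command line references the socket from the error, then start the container again
sudo kill -9 <pid>
docker start <container>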
I also have this problem whenever the docker package in Ubuntu gets updated. Not sure whether this is a problem with the packaging or with docker itself.
@chenz-svsarrazin Thanks! An apt update + upgrade worked and I didn't need to recreate the container. (Ubuntu 18.04.2 LTS + Docker version 18.09.7, build 2d0083d).
Reproduced on Ubuntu 18.10, not sure what caused it but all my servers/containers randomly went down. Could have been an update.
There was the 18.09.7 update a few days ago (security update) which restarted the Docker service and, for me, brought down four web servers and corrupted one database. Regular start-up didn't work due to these errors.
kill -9 $(netstat -lnp |grep containerd-sh |awk '{print $9}'|cut -d / -f 1)
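On systems without net-tools, an ss-based equivalent should work; this variant is a sketch and was not part of the original comment:
# List listening unix sockets with their owning processes and filter for containerd-shim
sudo ss -xlp | grep containerd-shim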
I tried killing the PID and downgrading Docker, but it didn't help me.
Solution which worked for me.
We just had the same issue after updating the docker package to version docker-ce-19.03.3-3.el7.x86_64, on CentOS Linux release 7.7.1908 (Core). Exactly the same as in the first post, but killing the docker PID did not work for us, and neither did restarting Docker; only a reboot of the entire server solved the problem. Any more news on this issue? It is really scary that this can happen to our production services.
Same issue, couldn't run after the update. Error: Version …
Same issue here. In my case, it helped to downgrade to an earlier docker version and then restart the system (just restarting docker did not help). No need to redeploy/remove existing containers. Example for Ubuntu Xenial:
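The commenter's commands were not captured in the extract; a sketch of such a downgrade on Xenial, where the exact version string is an assumption and should be taken from the madison output:
# List the Docker versions available from the apt repository
apt-cache madison docker-ce
# Install a specific older version (example string only), then reboot
sudo apt-get install docker-ce=5:18.09.7~3-0~ubuntu-xenial docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial containerd.io
sudo reboot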
I am seeing the same issue after updating to Docker version 19.03.4. I cannot reboot this Debian machine without a lot of hassle. I wish I hadn't upgraded Docker. Captain Hindsight advice: pin the Docker version. You wouldn't expect this from the non-edge channel.
This one saved my day! Thanks.
@thaJeztah Is this the same as other issues where it was some packaging-related problem?
This SO post seems to indicate this may be an issue with the Ubuntu Snap package and that the following may resolve it:
# Remove snap installation, any prior Docker installations
sudo snap remove docker
sudo apt-get remove docker docker-engine docker.io
# Install latest Docker.io version
sudo apt-get update
sudo apt install docker.io
# Run Docker on startup
sudo systemctl start docker
sudo systemctl enable docker
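A quick follow-up check (not part of the SO post) to confirm the daemon is back and the containers restarted:
docker --version
systemctl status docker
docker ps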
Based on the comments it seems this happens on 19.03.3 and 19.03.4, and we had someone reproduce it on Xenial with 19.03.8 as well, but I was NOT able to reproduce it with the following:
Add the apt repository if you don't already have it:
Install Docker CE 19.03.8 (latest) explicitly:
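The actual commands are missing from the extract above; on Xenial they would look roughly like the following, where the repository setup and the exact version string are assumptions based on Docker's usual install instructions rather than the maintainer's literal commands:
# Add the apt repository if you don't already have it
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
sudo apt-get update
# Install Docker CE 19.03.8 explicitly
sudo apt-get install docker-ce=5:19.03.8~3-0~ubuntu-xenial docker-ce-cli=5:19.03.8~3-0~ubuntu-xenial containerd.io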
Does anyone know what the actual reason for this issue is?
Still getting a similar issue, but with version 27.1.1.
I run containers using the "restart always" policy, but in some situations (the trigger is unclear to me at this point), a subset of containers fail to be restarted by the docker daemon.
In this example, I have a bunch of services that all have (almost) identical configs, and a random subset of service containers is suddenly down (after days of running fine):
Other container instances of the service are running fine (and are restarted every once in a while):
When I try (for testing) to manually restart the container that the daemon failed to restart automatically, this fails:
Investigating the problem, I found that the unix socket mentioned above does not exist on the filesystem, but the error message says "already in use", so I searched via lsof:
So, indeed, the socket is in use, but not on the filesystem... which makes me wonder if the process (PID 37032) actually removed it, but didn't properly close it (yet?) while shutting down?
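The lsof output itself is not included in the extract; a sketch of the kind of query described (abstract unix sockets have no filesystem entry and, depending on the lsof version, show up with a leading @ in the NAME column):
# List unix domain sockets and filter for the shim socket path
sudo lsof -U | grep '/containerd-shim/moby'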
Stracing the process shows that it's currently waiting on a mutex:
with no other behavior.
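The strace output is not included either; attaching along these lines would show the futex wait (37032 is the PID reported by lsof above):
# Attach to the running shim and follow its threads; a process blocked on a mutex shows futex() calls that never return
sudo strace -f -p 37032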
To test further, I decided to kill the process that's supposed to provide the unix socket, and now I can start the container successfully:
Expected behavior
Docker restart policy "always" always restarts a container.
Actual behavior
Docker restart policy "always" randomly fails after a service has been running for longer periods of time (maybe because containerd does not correctly terminate/release the unix socket).
Steps to reproduce the behavior
I have not been able to trigger the problem in a reproducible way, but I have seen dozens of instances over weeks of running services. Interestingly, it happens on different services that use completely unrelated images (aside from the fact that they share a common Debian-based base image).
Output of docker version:
Output of docker info:
Physical host, under constant and high load. The containers that show the problem have memory limits in place using docker-compose:
Note that I'm using a private container registry, which is why I decided to replace the image data with my-image.
The only potentially-related bug I managed to find online is this:
moby/moby#38726