Unable to connect via ftp client #82
I've been able to get past the initial connection issue (We had a PUBLIC_HOST env var instead of a PUBLICHOST env var). But, I'm still having connection issues:
Even though I've opened up all the appropriate ports on my load balancers, I'm not able to issue a command. It actually did issue a command once, but I wasn't able to get it to repeat. It only ever seems to work on a single outbound port, 30008. Using FileZilla, I get the following:
Any ideas? Thanks.
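As a general note (not taken from the thread itself): when an FTP client like FileZilla logs in fine but then stalls, the culprit is usually the passive-mode `227` reply, in which the server advertises an IP address and port the client can't actually reach, for instance because a load balancer only forwards one port of the passive range. The reply encodes the data port as two bytes. A minimal sketch, using a made-up reply, of decoding it by hand:

```shell
#!/usr/bin/env bash
# Hypothetical 227 reply; a real one appears in FileZilla's message log.
pasv='227 Entering Passive Mode (172,17,0,2,117,72)'

# Extract the six comma-separated numbers inside the parentheses.
nums=${pasv#*(}; nums=${nums%)*}
IFS=',' read -r h1 h2 h3 h4 p1 p2 <<< "$nums"

# The advertised data port is p1 * 256 + p2.
echo "Server advertises ${h1}.${h2}.${h3}.${h4}, data port $(( p1 * 256 + p2 ))"
# → Server advertises 172.17.0.2, data port 30024
```

If the advertised address is a private container IP, or the computed port falls outside the range forwarded by the load balancer, transfers hang exactly as described above.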
Hi Matt,
Hi there!
I think there are some problems with the newer commits:
Hi @TemaSM, would you be able to paste the docker logs output for this?
@stilliard here's the log from the docker container:
Thanks @TemaSM. I've just noticed in your example above that a few of the ports are incorrect:
I think these should be:
This could be causing the error, hope this helps.
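The corrected values themselves are elided above, but the usual shape of the fix for this image is making the published container ports and the passive port range agree. A hedged sketch (the range, hostname, and options here are illustrative placeholders, not the poster's real values):

```shell
# Sketch only: the passive range 30000-30009 and hostname are illustrative.
# The -p range Docker publishes must cover the same ports pure-ftpd
# announces in its 227 passive-mode replies.
docker run -d --name ftpd_server \
    -p 21:21 \
    -p 30000-30009:30000-30009 \
    -e "PUBLICHOST=ftp.example.com" \
    stilliard/pure-ftpd
```

If the published range and the announced range disagree, the client can connect on port 21 but every directory listing or transfer times out.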
@stilliard thanks, it was my bad 😃
That's excellent, glad it's working for you. Pretty weird about that variable; going by the run.sh file it shouldn't have any effect for you, as in your example above you're passing "-p 30000:30025" directly, which should override it. I'll keep an eye out for any reports of this, but if it happens to you again, could you check the logs again to see if anything else shows up, please? Thank you.
@mateodelnorte is this resolved for you now too?
@stilliard I ended up using sftp to suit my purposes. But thanks!
@mateodelnorte that's cool, glad it's all solved anyway :)
Hi @stilliard: it seems I am having the same trouble. But I think it is not directly linked to this project; I think it is because Docker networking is not forwarding the real client IP address. I am still looking for a solution in swarm mode. Any ideas?
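One workaround often suggested for the swarm source-IP problem (an assumption on my part, not something confirmed in this thread) is to publish the ports in host mode, which bypasses the ingress routing mesh and therefore preserves the real client address, at the cost of tying the service to the nodes it runs on. A sketch with placeholder values:

```shell
# Sketch: mode=host publishing skips the ingress mesh, so pure-ftpd sees
# the real client IP. Service name, hostname, and port are illustrative;
# each passive-range port would need its own --publish entry.
docker service create --name ftpd \
    --publish mode=host,target=21,published=21 \
    --publish mode=host,target=30000,published=30000 \
    -e "PUBLICHOST=ftp.example.com" \
    stilliard/pure-ftpd
```

With the default (ingress) publishing mode, connections arrive source-NATed from the mesh, which matches the symptom described above.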
I'm not sure if other users are still experiencing this issue. Here's what I experienced: I was attempting to get this container set up for our development team so that they could upload a WordPress website. This was on our production server, but not yet publicly available; all of my testing was done locally. After initial deployment, this is the error I was seeing in FileZilla Client.
Upon further inspection of the last Response item, its IP address is different from the IP address in the second Status item. ftp.mydomain.org doesn't exist yet on CloudFlare; I'm strictly working locally right now (I modified my hosts file). We do, however, have *.mydomain.org pointing to our Azure load balancer, which has the same IP address as the last Response item. I logged into the container and ran:
and modified the hosts file to point ftp.mydomain.org to 10.10.127.213.60. FileZilla connected instantly to the container. It seems that Pure-FTPd looks up the IP address of my PUBLICHOST using standard DNS, along with the list of passive ports to be used. I remember having this issue with Pure-FTPd years ago (way before containers were a thing) and remember switching to vsftpd because I couldn't figure it out then. I will also be testing the host IP address and the public IP address for PUBLICHOST and will report my findings. I hope this helps with some headaches.

I've experienced similar issues before with other packages where I needed to edit the hosts file in the container for things to work. However, I couldn't pass a modified /etc/hosts file into the container without getting a bunch of startup errors. Anyone have a workaround for this? Also, is there any way to get Pure-FTPd to use the PUBLICHOST domain name in the response instead of a DNS-returned IP address? I imagine this was by design, to ensure that a user isn't redirected to a different server by someone tampering with the client's DNS, but that's a risk I'm willing to take considering all of the other safeguards we have in place.

I'm using the hardened version, and this is the out-of-the-box run.sh output logged at startup.
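On the /etc/hosts question above: rather than bind-mounting a modified hosts file, Docker's `--add-host` flag appends an entry to the container's /etc/hosts at start-up, which avoids fighting the file Docker manages itself. A sketch (the hostname, IP, and passive range here are placeholders, not the poster's real values):

```shell
# Sketch: pin the name used as PUBLICHOST to a chosen IP inside the
# container, so any in-container resolution of it bypasses external DNS.
docker run -d --name ftpd_server \
    --add-host "ftp.example.com:203.0.113.10" \
    -p 21:21 \
    -p 30000-30009:30000-30009 \
    -e "PUBLICHOST=ftp.example.com" \
    stilliard/pure-ftpd
```

This achieves the same effect as the manual hosts-file edit described above, but survives container restarts.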
Reporting back from my previous comment. Using either the host IP address or the public IP address as the PUBLICHOST works. I obviously needed to ensure that my passive range was being forwarded to the host IP address when connecting from outside. Additionally, I also modified the hosts file on my client machine to point ftp.mydomain.org at my public IP, while using my public IP as the PUBLICHOST env variable, and I was able to connect successfully as well.
Hi there. Thanks for putting this repo together. I'm having issues using this image in a docker stack, deployed in a docker swarm.
I've got a docker stack file that looks like the following:
Our pureftpd service spins up just as expected and seems to run fine. When I attempt to connect to it via an ftp client, though, I run into some trouble:
Any ideas as to why I'm not able to open a connection? Should PUBLICHOST be the publicly resolved hostname of my ftp server, or something else?
Note: since my pureftpd container is in a docker swarm, it's essentially behind two load balancers, an AWS ELB and a docker-flow-proxy. Are there any gotchas when deploying pure-ftpd behind a load balancer?
Thanks.
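When debugging a setup like the one described above, it can help to watch the raw passive-mode exchange from outside the load balancers; curl's verbose mode prints the server's replies, showing exactly which IP and port are being advertised back to clients. A sketch (hostname and credentials are placeholders):

```shell
# Sketch: list the FTP root verbosely to inspect the passive-mode reply.
# --disable-epsv forces the older PASV form of the reply, which includes
# the advertised IP address rather than just a port.
curl -v --disable-epsv --user bob:secret ftp://ftp.example.com/
```

If the `227 Entering Passive Mode (...)` line in the output names a private or wrong address, PUBLICHOST (or the DNS record it resolves through) is the place to look.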