Refactor of custom configurations #173
Hello @AquaeAtrae, here is my hypothesis. When using autoconf, the configuration generator (which executes the is_custom_conf function) runs inside the autoconf container. Since the modsec-confs and modsec-crs-confs volumes are not mounted on the autoconf container, is_custom_conf won't find anything. I've made a PR on your project; don't hesitate to test it and tell me if it works. If that's the culprit, then it's undocumented behavior and we should update all the examples and documentation related to autoconf to avoid the same bug happening to others.
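A minimal sketch of why such a check would come up empty inside the autoconf container (the function name comes from the report above, but this body is my guess, for illustration only):

```python
import os

def is_custom_conf(path):
    """Return True if path is a directory containing at least one file.

    Hypothetical reconstruction for illustration; the real is_custom_conf
    lives in the bunkerized-nginx generator code.
    """
    return os.path.isdir(path) and any(
        os.path.isfile(os.path.join(path, name)) for name in os.listdir(path)
    )

# Inside the autoconf container, a path like /modsec-confs is never
# mounted, so the directory is absent and a check like this returns
# False -- no custom confs are picked up even though they exist on the
# nginx container's side of the volume.
```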
Thanks @fl0ppy-d1sk, Yes, that would explain it. I tested your PR (mounting the volumes to both nginx and autoconf) and confirmed it solved the Templator.py problem. So yes... as currently designed, any custom confs that adjust nginx must be duplicated on both the nginx and autoconf services. Perhaps better than simply documenting the need for duplication, might it make more sense to:
If I understand correctly, Docker has no problem bind mounting volumes (e.g. modsec-confs) within other volumes (autoconf). It automatically sorts these from shallowest to deepest path (sortMounts). I believe that would provide both nginx and autoconf with a consistent workspace for templating.

I suspect the Templator.py script may need to be adjusted to work around these existing files. Currently, it seems to fail to generate any nginx confs when an /etc/nginx/<FIRST_SERVER>/ folder contains an existing file. Presumably it would also fail upon seeing pre-existing custom conf folders there. I'd suggest that Templator.py generate each conf only if that file does not already exist. That way, users could effectively replace and override even these generated confs if needed.

Alternatively, it would be great if all site-specific customizations could somehow be made from that site's own service, its metadata, and its volume mounts. I'm less clear on how those mounts could securely be read by autoconf and Templator.py, but you may have an idea. For security, we'd just want to be sure that each service could only affect its own FIRST_SERVER confs.

Side note: have you seen docker-gen? I haven't played with it yet, but it looks to have similar aims as your autoconf service. It may be worth comparing against or even adding support for.
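The "generate unless the file already exists" suggestion could look something like this, a sketch only (write_conf is a hypothetical helper, not the real Templator.py API):

```python
import os

def write_conf(dest, content):
    """Write a generated conf only if the user has not supplied one.

    Sketch of the suggested Templator.py behavior: skip generation when
    the target file already exists, so a user-provided file overrides
    the generated one. Returns True if the file was written.
    """
    if os.path.exists(dest):
        return False  # keep the user's pre-existing file untouched
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "w") as f:
        f.write(content)
    return True
```

With this rule, a file dropped into /etc/nginx/<FIRST_SERVER>/ ahead of time would simply win over the template output instead of breaking generation.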
I'd propose the following layout pattern specifically...
To make this work as suggested, we could pull these in from...
Everything would remain within autoconf's reach, and even generated confs could be overwritten if needed.
Hello @AquaeAtrae, Sorry for the late reply, I was busy working on the v1.3.0. I will look deeper into your suggestions when I have time. But clearly, making everything self-contained within /etc/nginx looks like a good idea. It could resolve #163 too.
@fl0ppy-d1sk No worries. Glad you like the idea. :)
Hi @fl0ppy-d1sk I look forward to seeing how the new BunkerWeb system will work. I know you are looking to release the beta soon. I was hoping to flesh out my vision more in code before bothering you, but the ideas may be worth earlier consideration for your upcoming BunkerWeb release. I wanted to revisit this particular issue and see if, by chance, we might see this fundamental structure of special folders well organized from the start. But also, I ran into another complication with multi-site projects that could perhaps be further improved. I know how hard it can be to refactor structural changes like these once a design is cloned out and used by others.

Last week, I wanted to separate the docker-compose.yml files for the reverse proxy / autoconf from each of the multiple sites so that they could be started and stopped independently. You provided a sample of how this can work in your reply to #129, which was encouraging. While this works with ROOT_SITE_SUBFOLDER and its html files, I cannot find a similar provision to separate the special folders for a multi-site project. I think I may be able to trick the system into working using symlinks, but it's not ideal.

Additionally, we're currently forced to tie each site's subfolder to a single SERVER_NAME, and the two must be identical. If one or more of these sites were designed to listen as either multiple SERVER_NAMEs or a wildcard, that does not appear to be possible. In my case, I would like each site to listen on both its public production domain and a local development domain (like example.com, *.example.com and example.localhost). So far I've tried to use an additional docker-compose-override.yml to rewrite the SERVER_NAME for localhost, but two problems prevent this alone from working. First, I must still rename each SUBFOLDER_NAME to match. Second, I think I'm seeing that the overridden autoconf labels referencing SERVER_NAME are being appended to the list rather than replaced.
Ideally, each service, including its site-specific special folders, could be stored entirely within its own subfolder, which wouldn't necessarily have to be below the reverse proxy project folder. Each might be its own git repo. And using autoconf, additional services could be added or removed later on without interrupting other existing services. In addition to decoupling each multi-site service, we should also be able to decouple the SERVER_FOLDER from the SERVER_NAME variable, freeing us to listen on more than one host name, including wildcards. Also, it would be nice if the default site (or one of the services) could still listen and respond for any unmatched SERVER_NAMEs not otherwise defined.
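Decoupling folder from hostname essentially means a small lookup from the request's Host header to a site folder. A sketch of what I have in mind, with hypothetical names (resolve_site, the "default" catch-all folder) and shell-style wildcard matching standing in for nginx's own server_name logic:

```python
from fnmatch import fnmatch

def resolve_site(host, sites):
    """Map a request Host header to a site folder.

    sites maps folder -> list of server names (wildcards allowed).
    Unmatched hosts fall through to a catch-all "default" folder,
    mirroring the proposed ENABLE_CATCHALL behavior.
    """
    for folder, names in sites.items():
        if any(fnmatch(host, pattern) for pattern in names):
            return folder
    return "default"

# One folder can serve several hostnames, including wildcards:
sites = {
    "example-project": ["example.com", "*.example.com", "example.localhost"],
}
resolve_site("dev.example.com", sites)  # matches the wildcard entry
resolve_site("unknown.test", sites)     # falls back to the catch-all
```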
Here's a quick attempt to illustrate what I envision might work. I haven't tested it much or added the necessary code. Again, I'm steering toward more conventional path names, but I understand if you have reasons otherwise.
docker-compose.yml (with no site-specific dependencies such as their special folders):

version: "3"
services:
  myproxy:
    image: bunkerity/bunkerized-nginx
    restart: always
    ports:
      - 80:8080
    volumes:
      - ./web-files:/www/web-files # default static HTML content shown when no matching SERVER_NAME among multi-site services
      - ./nginx:/etc/nginx/custom:ro # accessible by autoconf, one mount can include all default customizations
        # includes defaults...
        # ./nginx/server.d:/etc/nginx/custom/server.d # was /server-confs
        # ./nginx/http.d:/etc/nginx/custom/http.d # was /http-confs
        # ./nginx/modsec.d:/etc/nginx/custom/modsec.d # was /modsec-confs
        # ./nginx/modsec-crs.d:/etc/nginx/custom/modsec-crs.d # was /modsec-crs-confs
        # ./nginx/letsencrypt:/etc/nginx/letsencrypt:ro # was /letsencrypt
      - autoconf:/etc/nginx
    environment:
      - SERVER_NAME= # must be left blank if you don't want to set up a "static" conf
      - ENABLE_CATCHALL=yes # or enable the catch-all domain for any non-matching SERVER_NAME
      - MULTISITE=yes
    labels:
      - "bunkerized-nginx.AUTOCONF"
    networks:
      - mynet
  myautoconf:
    image: bunkerity/bunkerized-nginx-autoconf
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - autoconf:/etc/nginx
    depends_on:
      - myproxy
volumes:
  autoconf:
networks:
  mynet:
    name: mynetname

service1.yml (your basic example in #129):

version: "3"
services:
  service1:
    image: tutum/hello-world
    networks:
      mynet:
        aliases:
          - service1
    labels:
      - "bunkerized-nginx.SERVER_NAME=service1.example.com"
      - "bunkerized-nginx.USE_REVERSE_PROXY=yes"
      - "bunkerized-nginx.REVERSE_PROXY_URL=/"
      - "bunkerized-nginx.REVERSE_PROXY_HOST=https://service1"
networks:
  mynet:
    external:
      name: mynetname

service2.yml (with its own custom security):

version: "3"
services:
  service2:
    image: wordpress:fpm-alpine
    networks:
      mynet:
        aliases:
          - service2
    volumes:
      - ./www/example-project/web-files:/var/www/html
      - ./www/example-project/web-files:/www/example-project/web-files
      - autoconf:/etc/nginx
      - ./www/example-project/nginx:/etc/nginx/example-project/custom:ro
        # includes...
        # ./www/example-project/nginx/server.d:/etc/nginx/example-project/custom/server.d # was /server-confs
        # ./www/example-project/nginx/http.d:/etc/nginx/example-project/custom/http.d # was /http-confs
        # ./www/example-project/nginx/modsec.d:/etc/nginx/example-project/custom/modsec.d # was /modsec-confs
        # ./www/example-project/nginx/modsec-crs.d:/etc/nginx/example-project/custom/modsec-crs.d # was /modsec-crs-confs
        # ./www/example-project/nginx/letsencrypt:/etc/nginx/example-project/letsencrypt:ro # was /letsencrypt
    environment:
      - WORDPRESS_DB_HOST=db1
      - WORDPRESS_DB_NAME=wp
      - WORDPRESS_DB_USER=user
      - WORDPRESS_DB_PASSWORD=db-user-pwd # replace with a stronger password (must match MYSQL_PASSWORD)
    labels:
      - "bunkerized-nginx.SERVER_NAME=service2.example.com www.example.com *.example.org *.example.localhost"
      - "bunkerized-nginx.SUBFOLDER_NAME=./www/example-project"
      - "bunkerized-nginx.USE_REVERSE_PROXY=yes"
      - "bunkerized-nginx.REVERSE_PROXY_URL=/"
      - "bunkerized-nginx.REVERSE_PROXY_HOST=https://service2"
      - "bunkerized-nginx.USE_MODSECURITY=yes"
      - "bunkerized-nginx.USE_MODSECURITY_CRS=yes"
      - "bunkerized-nginx.ROOT_SITE_SUBFOLDER=web-files"
      - "bunkerized-nginx.REMOTE_PHP=service2"
      - "bunkerized-nginx.REMOTE_PHP_PATH=/var/www/html"
  db1:
    image: mariadb
    volumes:
      - ./www/example-project/db-data:/var/lib/mysql
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1 # not recommended if you later need to mysql_upgrade the tables along with this image
      - MYSQL_DATABASE=wp
      - MYSQL_USER=user
      - MYSQL_PASSWORD=db-user-pwd # replace with a stronger password (must match WORDPRESS_DB_PASSWORD) here or within...
    networks:
      - mynet
volumes:
  autoconf:
networks:
  mynet:
    external:
      name: mynetname

As you noted in #129, it's important to wait for nginx to finish starting before trying to start the services.
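That wait step could be scripted with a small readiness probe, a sketch under the assumption that nginx is reachable on a TCP port (wait_for, the host, and the port are all placeholders for your deployment):

```python
import socket
import time

def wait_for(host, port, timeout=60.0, interval=1.0):
    """Block until host:port accepts a TCP connection, or raise TimeoutError.

    Illustrative helper for the "wait for nginx before starting the
    services" step; not part of bunkerized-nginx itself.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Success means the listener is up; close immediately.
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            time.sleep(interval)  # not up yet, retry until the deadline
    raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")

# e.g. wait_for("myproxy", 8080) before `docker-compose -f service1.yml up`
```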
Now that BunkerWeb is released, I am trying to envision the least painful path to decouple domains/hostnames from paths and multisite variables. A few things complicate this. In particular, I'm struggling with the SERVER_NAME setting and the various ways it has been used in multisite contexts. I'll try to describe something as close to viable as I've found, and would welcome any ideas on how we might address backward compatibility with global SERVER_NAME definitions for multisite scenarios.

GOALS
Whenever a SERVICE_NAME is declared, all service-specific folders would move under it, as shown with arrows below. SERVICE_SUBFOLDER could be set to "" (blank) so that no path is forced to depend on the local hostname(s) defined by SERVER_NAME.
Thoughts? Ideas? One aspect may be particularly inelegant: a conflict would remain between GOAL 7 and GOAL 1 when SERVER_NAME is used as in this simple multisite example. To expose nginx's server_name directive fully (GOAL 7), I suppose another format of SERVER_NAME could be set and detected by BunkerWeb. But it's not elegant, and I fear some of this may have painted us into a corner.
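To make the "another format of SERVER_NAME" idea concrete, here is one invented possibility, purely illustrative and NOT an existing BunkerWeb format: the legacy space-separated list still parses one name per site, while an optional "=" suffix attaches extra hostnames (wildcards included) to a site.

```python
def parse_server_name(value):
    """Parse a hypothetical extended SERVER_NAME format.

    Legacy form:    "app1.example.com app2.example.com"
    Extended form:  "app1.example.com=*.example.com,example.localhost"
    Returns {primary site: [all names for its server_name directive]}.
    """
    sites = {}
    for token in value.split():
        site, _, extras = token.partition("=")
        names = [site] + [n for n in extras.split(",") if n]
        sites[site] = names
    return sites
```

Because tokens without "=" behave exactly as today, existing multisite configurations would keep working unchanged, which is the backward-compatibility constraint I mentioned above.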
Description
Templator.py fails to include custom modsec confs on multisite when using autoconf
How to reproduce
I've reproduced this from scratch several times now so I'm fairly confident it's a bug. If not, I must be making the same silly mistake each time. In the process of my troubleshooting, I came up with PR 172 fixing the wordpress example (updated).
I adapted the wordpress example into a multisite docker-compose.yml with an identical docker-compose-autoconf.yml version for comparison. They both use the same custom rules and parameters, all of which work properly (including WordPress Site Health Status) but not when using autoconf. The only workaround is to disable ModSecurity.
My test repo can be cloned from https://github.com/AquaeAtrae/bunkerized-multisite-autoconf-test
Templator.py should detect (via is_custom_conf) and include both /confs/site/modsecurity-rules.conf#L65 and #L77.
At one point, I believe I saw L65 but not L77 with the same docker-compose-autoconf.yml which makes me wonder if this isn't some kind of race condition. I don't understand autoconf well enough yet to say really.
Logs
After starting docker-compose-autoconf.yml and browsing to https://app2.localhost/wp-admin/site-health.php, I find the results blocked (ModSecurity false positives) despite the exemption provided in /modsec-confs/app2.localhost/wordpress.conf. When I inspect the site's modsecurity-rules.conf, I see the Templator failed to actually include these custom rules.