Refactor of custom configurations #173

Open
AquaeAtrae opened this issue Jul 29, 2021 · 8 comments
Labels: bug (Something isn't working), v2

AquaeAtrae commented Jul 29, 2021

Description
Templator.py fails to include custom modsec confs on multisite when using autoconf

How to reproduce
I've reproduced this from scratch several times now, so I'm fairly confident it's a bug. If not, I must be making the same silly mistake each time. While troubleshooting, I came up with PR 172, which fixes the wordpress example (updated).

I adapted the wordpress example into a multisite docker-compose.yml with an identical docker-compose-autoconf.yml version for comparison. Both use the same custom rules and parameters, all of which work properly (including the WordPress Site Health Status checks) in the plain multisite setup, but not when using autoconf. The only workaround is to disable ModSecurity.

My test repo can be cloned from
https://github.com/AquaeAtrae/bunkerized-multisite-autoconf-test

Templator.py should detect (via is_custom_conf) and include both custom conf sections at /confs/site/modsecurity-rules.conf#L65 and L77.

At one point, I believe I saw the L65 include but not the L77 one with the same docker-compose-autoconf.yml, which makes me wonder if this isn't some kind of race condition. I don't understand autoconf well enough yet to say for sure.

Logs
After starting docker-compose-autoconf.yml and browsing to https://app2.localhost/wp-admin/site-health.php, I find the requests blocked (ModSecurity false positives) despite the exemption provided in /modsec-confs/app2.localhost/wordpress.conf. When I inspect the site's modsecurity-rules.conf, I see that the Templator failed to include these custom rules:

# enable response body checks
SecResponseBodyAccess On
SecResponseBodyMimeType text/plain text/html text/xml application/json
SecResponseBodyLimit 524288
SecResponseBodyLimitAction ProcessPartial

# log usefull stuff
SecAuditEngine RelevantOnly
SecAuditLogType Serial
SecAuditLog /var/log/nginx/modsec_audit.log

# include OWASP CRS configuration

include /opt/owasp/crs.conf

# custom CRS configurations before loading rules (exclusions)



# include OWASP CRS rules
include /opt/owasp/crs/*.conf


# custom rules after loading the CRS
AquaeAtrae added the bug (Something isn't working) label on Jul 29, 2021
fl0ppy-d1sk (Member) commented

Hello @AquaeAtrae,

Here is my hypothesis.

When using autoconf, the configuration generator (which executes the is_custom_conf function) is running inside the autoconf container. Since the modsec-confs and modsec-crs-confs volumes are not mounted on the autoconf container, is_custom_conf won't find anything.

I've made a PR on your project; don't hesitate to test it and tell me if it works. If that's the culprit, then it's undocumented behavior, and we should update all the examples and documentation related to autoconf so others don't run into the same bug.
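For reference, the change in the PR boils down to something like the excerpt below (service names and host paths are illustrative, following the wordpress example): mount the custom conf folders on both containers so is_custom_conf can find them when the configuration generator runs inside the autoconf container.

services:
  mynginx:
    image: bunkerity/bunkerized-nginx
    volumes:
      # ... other mounts from the wordpress example ...
      - ./modsec-confs:/modsec-confs:ro          # custom ModSecurity confs
      - ./modsec-crs-confs:/modsec-crs-confs:ro  # custom CRS exclusions

  myautoconf:
    image: bunkerity/bunkerized-nginx-autoconf
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./modsec-confs:/modsec-confs:ro          # the same mounts duplicated here so the
      - ./modsec-crs-confs:/modsec-crs-confs:ro  # config generator (is_custom_conf) sees them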

AquaeAtrae (Author) commented

Thanks @fl0ppy-d1sk,

Yes, that would explain it. I tested your PR (mounting the volumes to both nginx and autoconf) and confirmed it solved the Templator.py problem. So yes... as currently designed, any custom confs that adjust nginx must be duplicated on both the nginx and autoconf services.

Perhaps better than simply documenting the need for duplication, might it make more sense to:

  1. relocate any custom conf files under /etc/nginx/ (global) or /etc/nginx/<FIRST_SERVER>/ so everything is contained within the autoconf mount
  2. symlink old locations for backwards compatibility

If I understand correctly, docker has no problem bind mounting volumes (e.g. modsec-confs) within other volumes (autoconf). It automatically sorts these from shallowest to deepest path (sortMounts). I believe that would provide both nginx and autoconf with a consistent workspace for templating.

I suspect the Templator.py script may need to be adjusted to work around these pre-existing files. Currently, it seems to fail to generate any nginx confs when a /etc/nginx/<FIRST_SERVER>/ folder already contains a file, and presumably it would also fail when it finds pre-existing custom conf folders there. I'd suggest that Templator.py generate each conf unless that file already exists; that way, users could effectively replace and override even these generated confs if needed.

Alternatively, it would be great if all site-specific customizations could somehow be made from that site's own service, via its metadata and volume mounts. I'm less clear on how those mounts could be read securely by autoconf and Templator.py, but you may have an idea. For security, we'd just want to be sure that each service could only affect its own FIRST_SERVER confs.

Side note: have you seen docker-gen? I haven't played with it yet, but it looks to have aims similar to your autoconf service. It may be worth comparing against or even adding support for.


AquaeAtrae commented Jul 30, 2021

I'd propose the following layout pattern specifically...

volumes:
  - autoconf:/etc/nginx
  - ./web-files:/www/web-files # default site
  - ./nginx:/etc/nginx/custom:ro   # accessible by autoconf, one mount can include all customizations 
    # /nginx/conf.d:/etc/nginx/custom/conf.d   # was /server-confs
    # /nginx/modsec.d:/etc/nginx/custom/modsec.d   # was /modsec-confs
    # /nginx/modsec-crs.d:/etc/nginx/custom/modsec-crs.d   # was /modsec-crs-confs
  - ./app1.example.com/web-files:/www/app1.example.com/web-files:ro
  - ./app1.example.com/nginx:/etc/nginx/app1.example.com/custom:ro

To make this work as suggested, is_custom_conf() would test /etc/nginx/custom/**/*.conf, and the Templator.py system would put its generated confs in place with one final step that overwrites them with the customizations:
\cp -rf /etc/nginx/custom/* /etc/nginx/

Now, we could pull these in from...

include /etc/nginx/conf.d/*.conf
include /etc/nginx/modsec.d/*.conf
include /etc/nginx/modsec-crs.d/*.conf

Everything would remain within autoconf's reach and even generated confs could be overwritten if needed.

fl0ppy-d1sk (Member) commented

Hello @AquaeAtrae,

Sorry for the late reply, I was busy working on v1.3.0. I will look deeper into your suggestions when I have time. But clearly, making everything self-contained within /etc/nginx looks like a good idea. It could resolve #163 too.

fl0ppy-d1sk changed the title from "[BUG] Templator.py fails to include custom modsec confs on multisite when using autoconf" to "Refactor of custom configurations" on Aug 24, 2021
AquaeAtrae (Author) commented

@fl0ppy-d1sk No worries. Glad you like the idea. :)


AquaeAtrae commented May 10, 2022

Hi @fl0ppy-d1sk

I look forward to seeing how the new BunkerWeb system will work. I know you are looking to release the beta soon. I was hoping to flesh out my vision more in code before bothering you, but the ideas may be worth earlier consideration for your upcoming BunkerWeb release.

I wanted to revisit this particular issue and see if, by chance, this fundamental structure of special folders might be well organized from the start. But I also ran into another complication with multi-site projects that could perhaps be improved further. I know how hard it can be to refactor structural changes like these once a design has been cloned out and used by others.

Last week, I wanted to separate the docker-compose.yml files for the reverse proxy / autoconf from each of the multiple sites so that they could be started and stopped independently. You provided a sample of how this can work in your reply to #129, which was encouraging. While this works with ROOT_SITE_SUBFOLDER and its html files, I cannot find a similar provision to separate the special folders for a multi-site project. I think I may be able to trick the system into working using symlinks, but it's not ideal.

Additionally, we're currently forced to have each site's subfolder listen as only one SERVER_NAME, which must be identical to the folder name. If one or more of these sites were designed to listen as either multiple SERVER_NAMEs or a wildcard, that does not appear to be possible. In my case, I would like each site to listen on both its public production domain and a local development domain (like example.com, *.example.com and example.localhost). So far I've tried to use an additional docker-compose.override.yml to rewrite the SERVER_NAME for localhost, but two problems prevent this alone from working. First, I must still rename each SUBFOLDER_NAME to match. Second, I think I'm seeing that the overridden autoconf labels referencing SERVER_NAME are being appended to the list rather than replaced.
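For illustration, the override I attempted looked something like this (the service name and hostname are placeholders):

version: "3"

services:
  app2:
    labels:
      # intended to swap the production hostname for a local development one per machine,
      # but the SERVER_NAME label seems to end up appended to the existing list rather
      # than replacing it, and the site's subfolder still has to be renamed to match
      - "bunkerized-nginx.SERVER_NAME=app2.localhost"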

Ideally, each service, including its site-specific special folders, could be stored entirely within its own subfolder, which wouldn't necessarily have to be below the reverse proxy project folder. Each might be its own git repo. And using autoconf, additional services could be added or removed later on without interrupting other existing services. In addition to decoupling each multi-site service, we should also be able to decouple the SERVER_FOLDER from the SERVER_NAME variable, freeing us to listen on more than one hostname, including wildcards. Also, it would be nice if the default site (or one of the services) could still listen and respond to any unmatched SERVER_NAMEs not otherwise defined.


AquaeAtrae commented May 10, 2022

Here's a quick attempt to illustrate what I envision might work. I haven't tested it much or added the necessary code. Again, I'm steering toward more conventional path names, but I understand if you have reasons otherwise.

  1. The custom confs within the nginx folders would all be included (default or multi-sites) via autoconf.
  2. LetsEncrypt certificates could also be stored there so they become portable.
  3. Proposing a new variable like ENABLE_CATCHALL that would enable the default static HTML for non-matching hostnames or, perhaps, could refer to a particular service (assuming it's online).
  4. Proposing a new variable like SUBFOLDER_NAME to decouple the folder name from the hostnames each multi-site server listens for.
  5. SERVER_NAME would allow for multiple entries or wildcard hostnames.
  6. Multi-site services could be split into multiple, independent docker-compose YAML files, started and stopped independently via autoconf.

docker-compose.yml (with no site-specific dependencies such as their special folders) :

version: "3"

services:

  myproxy:
    image: bunkerity/bunkerized-nginx
    restart: always
    ports:
      - 80:8080
    volumes:
      - ./web-files:/www/web-files   # default static HTML content shown when no matching SERVER_NAME among multi-site services
      - ./nginx:/etc/nginx/custom:ro   # accessible by autoconf, one mount can include all default customizations
        # includes defaults...
        # ./nginx/server.d:/etc/nginx/custom/server.d   # was /server-confs
        # ./nginx/http.d:/etc/nginx/custom/http.d   # was /http-confs
        # ./nginx/modsec.d:/etc/nginx/custom/modsec.d   # was /modsec-confs
        # ./nginx/modsec-crs.d:/etc/nginx/custom/modsec-crs.d   # was /modsec-crs-confs
        # ./nginx/letsencrypt:/etc/nginx/letsencrypt:ro  # was /letsencrypt
      - autoconf:/etc/nginx
    environment:
      - SERVER_NAME= # must be left blank if you don't want to setup "static" conf
      - ENABLE_CATCHALL=yes # or can enable the catchall domain for any non-matching SERVER_NAME
      - MULTISITE=yes
    labels:
      - "bunkerized-nginx.AUTOCONF"
    networks:
      - mynet

  myautoconf:
    image: bunkerity/bunkerized-nginx-autoconf
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - autoconf:/etc/nginx
    depends_on:
      - myproxy

volumes:
  autoconf:

networks:
  mynet:
    name: mynetname

service1.yml (your basic example in #129) :

version: "3"

services:

  service1:
    image: tutum/hello-world
    networks:
      mynet:
        aliases:
          - service1
    labels:
      - "bunkerized-nginx.SERVER_NAME=service1.example.com"
      - "bunkerized-nginx.USE_REVERSE_PROXY=yes"
      - "bunkerized-nginx.REVERSE_PROXY_URL=/"
      - "bunkerized-nginx.REVERSE_PROXY_HOST=https://service1"

networks:
  mynet:
    external:
      name: mynetname

service2.yml (with its own custom security) :

version: "3"

services:

  service2:
    image: wordpress:fpm-alpine
    networks:
      mynet:
        aliases:
          - service2
    volumes:
      - ./www/example-project/web-files:/var/www/html
      - ./www/example-project/web-files:/www/example-project/web-files
      - autoconf:/etc/nginx
      - ./www/example-project/nginx:/etc/nginx/example-project/custom:ro
        # includes...
        # ./www/example-project/nginx/server.d:/etc/nginx/example-project/custom/server.d   # was /server-confs
        # ./www/example-project/nginx/http.d:/etc/nginx/example-project/custom/http.d   # was /http-confs
        # ./www/example-project/nginx/modsec.d:/etc/nginx/example-project/custom/modsec.d   # was /modsec-confs
        # ./www/example-project/nginx/modsec-crs.d:/etc/nginx/example-project/custom/modsec-crs.d   # was /modsec-crs-confs
        # ./www/example-project/nginx/letsencrypt:/etc/nginx/example-project/letsencrypt:ro  # was /letsencrypt
    environment:
      - WORDPRESS_DB_HOST=db1
      - WORDPRESS_DB_NAME=wp
      - WORDPRESS_DB_USER=user
      - WORDPRESS_DB_PASSWORD=db-user-pwd       # replace with a stronger password (must match MYSQL_PASSWORD)
    labels:
      - "bunkerized-nginx.SERVER_NAME=service2.example.com www.example.com *.example.org *.example.localhost"
      - "bunkerized-nginx.SUBFOLDER_NAME=./www/example-project"
      - "bunkerized-nginx.USE_REVERSE_PROXY=yes"
      - "bunkerized-nginx.REVERSE_PROXY_URL=/"
      - "bunkerized-nginx.REVERSE_PROXY_HOST=https://service2"
      - "bunkerized-nginx.USE_MODSECURITY=yes"
      - "bunkerized-nginx.USE_MODSECURITY_CRS=yes"
      - "bunkerized-nginx.ROOT_SITE_SUBFOLDER=web-files"
      - "bunkerized-nginx.REMOTE_PHP=service2"
      - "bunkerized-nginx.REMOTE_PHP_PATH=/var/www/html"

  db1:
    image: mariadb
    volumes:
      - ./www/example-project/db-data:/var/lib/mysql
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=1  # not recommended if you later need to run mysql_upgrade on the tables along with this image
      - MYSQL_DATABASE=wp
      - MYSQL_USER=user
      - MYSQL_PASSWORD=db-user-pwd  # replace with a stronger password (must match WORDPRESS_DB_PASSWORD) here or within...
    networks:
      - mynet

volumes:
  autoconf:

networks:
  mynet:
    external:
      name: mynetname

As you noted in #129, it's important to wait for nginx to finish starting before trying to start the services.


AquaeAtrae commented Jul 4, 2022

Now that BunkerWeb is released, I am trying to envision the least painful path to decouple domains/hostnames from paths and multisite variables. A few things complicate this. In particular, I'm struggling with the SERVER_NAME setting and the various ways it has been used in multisite contexts.

I'll try to describe something as close to viable as I've found and would welcome any ideas of how we might address backward compatibility with global SERVER_NAME definitions for multisite scenarios.

GOALS

  1. Backwards compatibility with BunkerWeb 1.4's initial structure by default.
  2. Each app can have its own custom config files and security policies.
  3. Each app is completely stored within its own subfolder (perhaps its own repo) from which nginx can still serve static files if needed (e.g. php-fpm).
  4. Apps can be added / removed using autoconf without interrupting other live apps.
  5. The app's "service name" (folder & variable prefix) may be decoupled from its domain / hostname(s). Folders and variable names can remain unchanged when deployed across different machines like production, ci, dev1, dev2. Each machine needs only its own docker-compose.override.yml to set different SERVER_NAME domain name(s) locally.
  6. Multiple (sub)domain names or wildcard (sub)domains could reference a single app such as a Drupal multi-site installation.
  7. Multisite systems could still show a default site for any unmatched (sub)domain requested.

Whenever a SERVICE_NAME is declared, all service-specific folders would move under it, as shown with arrows below. SERVICE_SUBFOLDER can be set to "" (blank) so that no path is forced to depend on the local hostname(s) defined by SERVER_NAME.

Setting             Default             Context     Multiple    Description
--------------      ------------------  ----------  ----------  ----------------------------------
SERVICE_NAME                            multisite   no          Mounted path under data/ containing server-specific files. Each multisite server can have its own folder and configs. Environment or label variables with matching prefixes are applied specifically to this server. 
SERVER_SUBFOLDER   {server_name}/       multisite   no          Mounted path under data/{service_name}/www/ where nginx serves files from. 
SERVER_NAME         www.example.com     multisite   no          List of the virtual hosts served by bunkerweb. If applied to a specific multisite server, nginx wildcard domain names are allowed. When a single hostname is set, environment or label variables with matching prefixes are applied specifically to this server. 
     ...along with these unchanged but related settings...
SERVE_FILES         yes                 multisite   no          Serve files from the local folder.
ROOT_FOLDER                             multisite   no          Root folder containing files to serve (/opt/bunkerweb/www/{server_name} if unset).

bw-data/
├── cache/
│   ├── asn.mmdb
│   ├── asn.mmdb.md
│   ├── blacklist/
│   │   ├── IP.list
│   │   ├── IP.list.md
│   │   ├── USER_AGENT.list
│   │   └── USER_AGENT.list.md
│   ├── bunkernet/
│   ├── country.mmdb
│   ├── country.mmdb.md
│   ├── customcert/
│   ├── selfsigned/
│   └── whitelist/
├── plugins/
├─> default/
│   ├── configs/
│   │   ├── default-server-http/
│   │   ├── default-server-stream/
│   │   ├── http/
│   │   ├── modsec/
│   │   ├── modsec-crs/
│   │   ├── server-http/
│   │   ├── server-stream/
│   │   └── stream/
│   ├── letsencrypt/
│   │   └── renewal-hooks/
│   │       ├── deploy/
│   │       ├── post/
│   │       └── pre/
│   ├── www/
│   │   └── index.php
│   └── db/
├─> app1/
│   ├── configs/
│   │   ├── default-server-http/
│   │   ├── default-server-stream/
│   │   ├── http/
│   │   ├── modsec/
│   │   ├── modsec-crs/
│   │   ├── server-http/
│   │   ├── server-stream/
│   │   └── stream/
│   ├── letsencrypt/
│   │   └── renewal-hooks/
│   │       ├── deploy/
│   │       ├── post/
│   │       └── pre/
│   ├── www/
│   │   └── index.php
│   └── db/
└─> app2/
    ├── configs/
    │   ├── default-server-http/
    │   ├── default-server-stream/
    │   ├── http/
    │   ├── modsec/
    │   ├── modsec-crs/
    │   ├── server-http/
    │   ├── server-stream/
    │   └── stream/
    ├── letsencrypt/
    │   └── renewal-hooks/
    │       ├── deploy/
    │       ├── post/
    │       └── pre/
    ├── www/
    │   └── index.php
    └── db/

Thoughts? Ideas?

One aspect may be particularly inelegant: a conflict would remain between GOAL 7 and GOAL 1 when SERVER_NAME is used as in this simple multisite example. To expose nginx's server_name directive fully (GOAL 7), I suppose another format of SERVER_NAME could be set and detected by BunkerWeb. But it's not elegant, and I fear some of this may have painted us into a corner.

      - SERVER_NAME=app1{{ app1A.example.com *.app1B.example.com mail.* ~^(?<user>.+)\.example\.net$ }} app2.example.com
      - app1_REMOTE_PHP=myapp1
      - app1_REMOTE_PHP_PATH=/app
      - app2.example.com_REMOTE_PHP=myapp2
      - app2.example.com_REMOTE_PHP_PATH=/app
