
[Bug]: Supabase containers keep restarting due to authentication-related error #2696

Closed
JohnGeek-dev opened this issue Jun 26, 2024 · 43 comments · Fixed by #2789

@JohnGeek-dev

JohnGeek-dev commented Jun 26, 2024

Description

When attempting to deploy Supabase using Coolify v4.0.0-beta.306, the process fails, and the logs of the containers indicate an authentication-related error.

Do note that it works fine on v4.0.0-beta.297 except for the fact that:

  1. Minio Createbucket container fails to run and exits.
  2. Supabase Rest and Realtime Dev show running (unhealthy).

Minimal Reproduction (if possible, example repository)

  1. Upgrade to Coolify v4.0.0-beta.306.
  2. Attempt to deploy Supabase.
  3. Observe the failure in the deployment process. Several containers would keep restarting.
  4. Check logs of the failed containers.

Exception or Error

No response

Version

v4.0.0-beta.306

@MedLeon

MedLeon commented Jun 26, 2024

I have the same error. And some user on Discord (Moritz) also seems to have it.

@olsoda

olsoda commented Jun 26, 2024

Same here. It’s been one thing or another with Supabase on the last few beta releases

@Mortalife

Mortalife commented Jun 26, 2024

There was only one small change (since 297) to the template which I wouldn't have expected to cause the issue:
v4.0.0-beta.297...v4.0.0-beta.306 search supabase

So I can only presume there's some issue with parsing/injecting env variables?

@MauruschatM

Same for me, supabase doesn't work due to the supabase-db module. The supabase_admin user won't be created, I think

@Mortalife

For me the supabase-db boots, but supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.

@Mortalife

If I were a betting man, I'd say it was this commit: 1266810

@MauruschatM

If I were a betting man, I'd say it was this commit: 1266810

No, it already didn't work on Monday

@MauruschatM

For me the supabase-db boots, but supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.

Yes, because the supabase_admin user won't be created. You can see this inside of the supabase-db logs

@Mortalife

If I were a betting man, I'd say it was this commit: 1266810

No, it already didn't work on Monday

Fair, I read through it in more detail and if I were a betting man, I'd have lost money! Haha
Double-checked the envs passed to the containers and they're correct. So my hypothesis was incorrect.

@Skeyelab

I am also experiencing this.

@Mortalife

Mortalife commented Jun 26, 2024

I've figured out the issue, can replicate and mitigate.

Coolify is overriding the POSTGRES_HOST value in the compose file with the POSTGRES_HOST environment variable, even though the compose file has a hard-coded value set.

You can resolve the issue by renaming the environment variable to some other name like POSTGRES_HOSTNAME, changing all instances of the POSTGRES_HOST parameter inside the docker-compose, and then deleting POSTGRES_HOST after saving.

Issue:
Postgres runs the init scripts before the network connection is ready by connecting directly to the socket, which is why the POSTGRES_HOST=/var/run/postgresql env variable on supabase-db is set to a socket path.

When the env is incorrectly overridden, the value is set to supabase-db, which resolves over the docker network, which isn't initialised yet and also can't use local (socket) access.

I might still be on to win my bet. 😂

Refs:

docker-library/postgres#941
https://raw.githubusercontent.com/docker-library/postgres/master/15/bullseye/docker-entrypoint.sh
https://github.com/supabase/postgres/blob/develop/migrations/db/migrate.sh
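The rename described above can be sketched as a one-line sed. The file below is a minimal stand-in for the Coolify template, so paths, service names, and contents are illustrative assumptions, not the real template:

```shell
# Minimal stand-in for the compose file (illustrative, not the real template).
cat > /tmp/compose-demo.yml <<'EOF'
services:
  supabase-db:
    environment:
      - POSTGRES_HOST=/var/run/postgresql
  supabase-analytics:
    environment:
      - DB_HOSTNAME=${POSTGRES_HOST:-supabase-db}
EOF

# Rename only the ${POSTGRES_HOST...} interpolations; the hard-coded socket
# path on supabase-db stays untouched because it has no "{" before it.
sed -i 's/{POSTGRES_HOST\b/{POSTGRES_HOSTNAME/g' /tmp/compose-demo.yml

grep 'DB_HOSTNAME' /tmp/compose-demo.yml
```

In a real Coolify install you would make the equivalent edit through the "Edit Compose File" panel rather than on disk.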

@MMTE
Contributor

MMTE commented Jun 26, 2024

I've figured out the issue, can replicate and mitigate.

@Mortalife seems you are right. I was guessing maybe other services were loading sooner than the DB, but I believe it's an environment conflict as you mentioned.
I hope supabase makes its deployment process more robust in the future; it's a little tricky now.

@MauruschatM

I've figured out the issue, can replicate and mitigate.

@Mortalife seems you are right. I was guessing maybe other services were loading sooner than the DB, but I believe it's an environment conflict as you mentioned. I hope supabase makes its deployment process more robust in the future; it's a little tricky now.

How would they sell their cloud services if self-hosting was that easy? It's just marketing, and it has to be somehow possible. But they don't want the masses to self-host supabase.

@yyassif

yyassif commented Jun 27, 2024

I am having the same issues too, even after removing the analytics service.
Error: FATAL: 28P01: password authentication failed for user "supabase_admin"

@Torwent

Torwent commented Jun 27, 2024

@Mortalife solution works! Thank you!

@agalev

agalev commented Jun 30, 2024

@Mortalife This worked for me as well, thank you for the fix!

@MedLeon

MedLeon commented Jun 30, 2024

The fix works for me as well, but "Minio Createbucket" does not start.
Did that work for you with this fix, @Torwent & @agalev ?

@Mortalife

Mortalife commented Jun 30, 2024

The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev ?

It didn't start before this issue.

To clarify, it shouldn't be running. It runs once to ensure the minio server has the default bucket created that's used by the storage server.

https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1067-L1071

It creates the stub bucket and then exits. It has no restart policy set.

The stub bucket is used by the storage server here: https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1104

@olsoda

olsoda commented Jun 30, 2024

That workaround seemed to work for me, however supabase-rest is still unhealthy and in the API Docs of the dashboard, says public isn't accessible ...

@Mortalife

That workaround seemed to work for me, however supabase-rest is still unhealthy and in the API Docs of the dashboard, says public isn't accessible ...

I don't experience that problem. I would double check you've replaced all of the POSTGRES_HOST variable instances and there aren't any extra spaces etc where there shouldn't be. If it still remains, it might be worth removing the supabase db volume and restarting.

@MMTE
Contributor

MMTE commented Jul 1, 2024

@Mortalife
do you mind making that a pull request? I mean, is there any other configuration that must be considered, or was this hard-coded POSTGRES_HOST in postgresql the only problem?
If so, maybe we can make a PR and mark this issue as fixed?

@Mortalife

I think I'd rather the env variables be correctly parsed than put a PR up for this workaround. PRs don't seem to be approved with much velocity, so it won't change things immediately regardless.

@MMTE
Contributor

MMTE commented Jul 1, 2024

I understand. Personally, I had a lot of difficulty deploying supabase instances as separate projects; coolify at least made it easy. On the other hand, supabase is also under development, so maybe we have a lot of breaking changes coming.

@Torwent

Torwent commented Jul 1, 2024

The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev ?

I'm pretty sure that's not meant to be running. It runs once, the very first time you start things up, to create the MinIO bucket, and never runs again AFAIK.

@deozza

deozza commented Jul 1, 2024

Hello @Mortalife and sorry to bother you.

I just ran into this issue and found your solution.

Could you please clarify a bit what needs to be changed? I don't understand where and which values are causing the issue.

As I understood it: in the .env file, I need to add a new parameter called POSTGRES_HOSTNAME with supabase-db as its value, and replace all iterations of POSTGRES_HOST in the docker-compose .yml file with POSTGRES_HOSTNAME? Am I right or did I miss the point?

@Mortalife

Hello @Mortalife and sorry to bother you.

I just ran into this issue and found your solution.

Could you please clarify a bit what needs to be changed? I don't understand where and which values are causing the issue.

As I understood it: in the .env file, I need to add a new parameter called POSTGRES_HOSTNAME with supabase-db as its value, and replace all iterations of POSTGRES_HOST in the docker-compose .yml file with POSTGRES_HOSTNAME? Am I right or did I miss the point?

Correct; once you've done that, remove POSTGRES_HOST from the .env, then restart.

@deozza

deozza commented Jul 1, 2024

Sorry again, this is surely an error between the chair and the keyboard, but my analytics service is still failing to start, due to that password authentication failed for user "supabase_admin" error from the supabase-analytics service.

Here is my docker-compose.yml file:

services:
  supabase-kong:
    image: 'kong:2.8.1'
    entrypoint: 'bash -c ''eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'''
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - SERVICE_FQDN_SUPABASEKONG
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - KONG_DATABASE=off
      - KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml
      - 'KONG_DNS_ORDER=LAST,A,CNAME'
      - 'KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth'
      - KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=160k
      - 'KONG_NGINX_PROXY_PROXY_BUFFERS=64 160k'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'DASHBOARD_USERNAME=${SERVICE_USER_ADMIN}'
      - 'DASHBOARD_PASSWORD=${SERVICE_PASSWORD_ADMIN}'
    volumes:
      -
        type: bind
        source: ./volumes/api/kong.yml
        target: /home/kong/temp.yml
  supabase-studio:
    image: 'supabase/studio:20240514-6f5cabd'
    healthcheck:
      test:
        - CMD
        - node
        - '-e'
        - "require('http').get('http:https://127.0.0.1:3000/api/profile', (r) => {if (r.statusCode !== 200) process.exit(1); else process.exit(0); }).on('error', () => process.exit(1))"
      timeout: 5s
      interval: 5s
      retries: 3
    depends_on:
      supabase-analytics:
        condition: service_healthy
    environment:
      - HOSTNAME=0.0.0.0
      - 'STUDIO_PG_META_URL=http:https://supabase-meta:8080'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'DEFAULT_ORGANIZATION_NAME=${STUDIO_DEFAULT_ORGANIZATION:-Default Organization}'
      - 'DEFAULT_PROJECT_NAME=${STUDIO_DEFAULT_PROJECT:-Default Project}'
      - 'SUPABASE_URL=http:https://supabase-kong:8000'
      - 'SUPABASE_PUBLIC_URL=${SERVICE_FQDN_SUPABASEKONG}'
      - 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
      - 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
      - 'AUTH_JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - 'LOGFLARE_URL=http:https://supabase-analytics:4000'
      - NEXT_PUBLIC_ENABLE_LOGS=true
      - NEXT_ANALYTICS_BACKEND_PROVIDER=postgres
  supabase-db:
    image: 'supabase/postgres:15.1.1.41'
    healthcheck:
      test: 'pg_isready -U postgres -h 127.0.0.1'
      interval: 5s
      timeout: 5s
      retries: 10
    depends_on:
      supabase-vector:
        condition: service_healthy
    command:
      - postgres
      - '-c'
      - config_file=/etc/postgresql/postgresql.conf
      - '-c'
      - log_min_messages=fatal
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=/var/run/postgresql
      - 'PGPORT=${POSTGRES_PORT:-5432}'
      - 'POSTGRES_PORT=${POSTGRES_PORT:-5432}'
      - 'PGPASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - 'PGDATABASE=${POSTGRES_DB:-postgres}'
      - 'POSTGRES_DB=${POSTGRES_DB:-postgres}'
      - 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
      - 'JWT_EXP=${JWT_EXPIRY:-3600}'
    volumes:
      - 'supabase-db-data:/var/lib/postgresql/data'
      -
        type: bind
        source: ./volumes/db/realtime.sql
        target: /docker-entrypoint-initdb.d/migrations/99-realtime.sql
      -
        type: bind
        source: ./volumes/db/webhooks.sql
        target: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
      -
        type: bind
        source: ./volumes/db/roles.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-roles.sql
      -
        type: bind
        source: ./volumes/db/jwt.sql
        target: /docker-entrypoint-initdb.d/init-scripts/99-jwt.sql
      -
        type: bind
        source: ./volumes/db/logs.sql
        target: /docker-entrypoint-initdb.d/migrations/99-logs.sql
      - 'supabase-db-config:/etc/postgresql-custom'
  supabase-analytics:
    image: 'supabase/logflare:1.4.0'
    healthcheck:
      test:
        - CMD
        - curl
        - 'http:https://127.0.0.1:4000/health'
      timeout: 5s
      interval: 5s
      retries: 10
    restart: unless-stopped
    depends_on:
      supabase-db:
        condition: service_healthy
    environment:
      - LOGFLARE_NODE_HOST=127.0.0.1
      - DB_USERNAME=supabase_admin
      - 'DB_DATABASE=${POSTGRES_DB:-postgres}'
      - 'DB_HOSTNAME=${POSTGRES_HOSTNAME:-supabase-db}'
      - 'DB_PORT=${POSTGRES_PORT:-5432}'
      - 'DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
      - DB_SCHEMA=_analytics
      - 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
      - LOGFLARE_SINGLE_TENANT=true
      - LOGFLARE_SINGLE_TENANT_MODE=true
      - LOGFLARE_SUPABASE_MODE=true
      - LOGFLARE_MIN_CLUSTER_SIZE=1
      - 'POSTGRES_BACKEND_URL=postgresql:https://supabase_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}'
      - POSTGRES_BACKEND_SCHEMA=_analytics
      - LOGFLARE_FEATURE_FLAG_OVERRIDE=multibackend=true

And here is my .env file:

ADDITIONAL_REDIRECT_URLS=
API_EXTERNAL_URL=http:https://supabase-kong:8000
DISABLE_SIGNUP=false
ENABLE_ANONYMOUS_USERS=false
ENABLE_EMAIL_AUTOCONFIRM=false
ENABLE_EMAIL_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true
ENABLE_PHONE_SIGNUP=true
FUNCTIONS_VERIFY_JWT=false
IMGPROXY_ENABLE_WEBP_DETECTION=true
JWT_EXPIRY=3600
MAILER_SUBJECTS_CONFIRMATION=
MAILER_SUBJECTS_EMAIL_CHANGE=
MAILER_SUBJECTS_INVITE=
MAILER_SUBJECTS_MAGIC_LINK=
MAILER_SUBJECTS_RECOVERY=
MAILER_TEMPLATES_CONFIRMATION=
MAILER_TEMPLATES_EMAIL_CHANGE=
MAILER_TEMPLATES_INVITE=
MAILER_TEMPLATES_MAGIC_LINK=
MAILER_TEMPLATES_RECOVERY=
MAILER_URLPATHS_CONFIRMATION=/auth/v1/verify
MAILER_URLPATHS_EMAIL_CHANGE=/auth/v1/verify
MAILER_URLPATHS_INVITE=/auth/v1/verify
MAILER_URLPATHS_RECOVERY=/auth/v1/verify
PGRST_DB_SCHEMAS=public
POSTGRES_DB=postgres
POSTGRES_HOSTNAME=supabase-db
POSTGRES_PORT=5432
SECRET_PASSWORD_REALTIME=
SERVICE_FQDN_SUPABASEKONG=http:https://supabasekong-d4kgsgk.xxx.xxx.xxx.xxx.sslip.io/
SMTP_ADMIN_EMAIL=
SMTP_HOST=
SMTP_PASS=
SMTP_PORT=587
SMTP_SENDER_NAME=
SMTP_USER=
STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project

As you recommended, I removed POSTGRES_HOST from the .env file and added POSTGRES_HOSTNAME. And I changed the use of POSTGRES_HOST in docker-compose.yml to POSTGRES_HOSTNAME.

Also, here is what I got when I tried to manually log into postgres inside the supabase-db service:

$ psql -U supabase_admin -W
Password: 
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL:  password authentication failed for user "supabase_admin"

@Mortalife

@deozza Try stopping the stack, removing the associated _supabase-db-data volume and restarting the stack.

You can find the volume by running docker volume ls and looking for the one named <the_random_stack_string>_supabase-db-data, then running docker volume rm <name>.

For example, my random stack string (the one that prefixes my url etc.) is rwkg84s, so my volume is rwkg84s_supabase-db-data and I would run docker volume rm rwkg84s_supabase-db-data.

Once you've done that you should be able to start the service again and hopefully the migrations will run correctly.
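As a sketch of the naming convention described above (the actual rm is commented out because it is destructive and needs a running Docker daemon; rwkg84s is just the example prefix from this comment):

```shell
# Derive the volume name from the stack's random prefix (example value).
STACK_PREFIX=rwkg84s
VOLUME="${STACK_PREFIX}_supabase-db-data"
echo "$VOLUME"    # → rwkg84s_supabase-db-data

# With the stack stopped, this would remove it so the init scripts re-run:
# docker volume rm "$VOLUME"
```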

@deozza

deozza commented Jul 1, 2024

This worked perfectly for me. For future reference, here are the steps I took to resolve it:

  1. first deploy the stack via coolify
  2. wait for the deployment to fail
  3. stop all containers
  4. in the environment variable panel, or directly in the .env file on the server:
    a. replace the POSTGRES_HOST variable with POSTGRES_HOSTNAME
  5. in the service stack panel, click on "edit compose file", or edit the docker-compose.yml file directly on the server:
    a. replace all uses of the POSTGRES_HOST variable with POSTGRES_HOSTNAME
  6. on the server, use docker compose down --volumes to remove the old db config
  7. deploy the stack again
  8. it should work
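Step 4 above can be scripted. This sketch operates on a made-up .env in /tmp rather than the real Coolify-managed file, so the path and contents are illustrative assumptions:

```shell
# Illustrative .env (not the real Coolify-managed file).
cat > /tmp/supabase-demo.env <<'EOF'
POSTGRES_DB=postgres
POSTGRES_HOST=supabase-db
POSTGRES_PORT=5432
EOF

# Rename the variable; anchoring on "=" keeps other POSTGRES_* keys untouched.
sed -i 's/^POSTGRES_HOST=/POSTGRES_HOSTNAME=/' /tmp/supabase-demo.env

grep 'HOSTNAME' /tmp/supabase-demo.env
```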

@olsoda

olsoda commented Jul 1, 2024

That workaround seemed to work for me, however supabase-rest is still unhealthy and in the API Docs of the dashboard, says public isn't accessible ...

I don't experience that problem. I would double check you've replaced all of the POSTGRES_HOST variable instances and there aren't any extra spaces etc where there shouldn't be. If it still remains, it might be worth removing the supabase db volume and restarting.

Tried removing the volumes after double-checking the host values... Rest is still listed as unhealthy, and it also says the public schema is still not available for me.

@diegofino15

diegofino15 commented Jul 4, 2024

Hello, I face the same exact problem, but none of the solutions provided worked for me.
After replacing all the POSTGRES_HOST with POSTGRES_HOSTNAME and removing the volumes, upon restart the supabase-db successfully creates the supabase_admin role, but it has no password assigned?

Here are the logs of thesupabase_db :

The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with this locale configuration:
  provider:    libc
  LC_COLLATE:  C.UTF-8
  LC_CTYPE:    C.UTF-8
  LC_MESSAGES: en_US.UTF-8
  LC_MONETARY: en_US.UTF-8
  LC_NUMERIC:  en_US.UTF-8
  LC_TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok
Success. You can now start the database server using:
    pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start.... done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/init-scripts
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/migrate.sh
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00-schema.sql
CREATE ROLE
REVOKE
CREATE SCHEMA
CREATE FUNCTION
REVOKE
GRANT
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000000-initial-schema.sql
CREATE PUBLICATION
CREATE ROLE
ALTER ROLE
CREATE ROLE
CREATE ROLE
GRANT ROLE
CREATE SCHEMA
CREATE EXTENSION
CREATE EXTENSION
CREATE EXTENSION
CREATE ROLE
CREATE ROLE
CREATE ROLE
CREATE ROLE
GRANT ROLE
GRANT ROLE
GRANT ROLE
GRANT ROLE
GRANT
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
GRANT
ALTER ROLE
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER ROLE
ALTER ROLE
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000001-auth-schema.sql
CREATE SCHEMA
CREATE TABLE
CREATE INDEX
CREATE INDEX
COMMENT
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
COMMENT
CREATE TABLE
COMMENT
CREATE TABLE
CREATE INDEX
COMMENT
CREATE TABLE
COMMENT
INSERT 0 7
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
GRANT
CREATE ROLE
GRANT
GRANT
GRANT
ALTER ROLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000002-storage-schema.sql
CREATE SCHEMA
GRANT
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
ALTER DEFAULT PRIVILEGES
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
ALTER TABLE
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE TABLE
CREATE ROLE
GRANT
GRANT
GRANT
ALTER ROLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER FUNCTION
ALTER FUNCTION
ALTER FUNCTION
ALTER FUNCTION
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/00000000000003-post-setup.sql
ALTER ROLE
ALTER ROLE
CREATE FUNCTION
CREATE EVENT TRIGGER
COMMENT
CREATE FUNCTION
COMMENT
DO
CREATE ROLE
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
GRANT
/docker-entrypoint-initdb.d/migrate.sh: running /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
psql: error: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql: Permission denied
PostgreSQL Database directory appears to contain a database; Skipping initialization
172.31.0.7 2024-07-04 09:06:44.563 UTC [48] supabase_admin@postgres FATAL:  password authentication failed for user "supabase_admin"
172.31.0.7 2024-07-04 09:06:44.563 UTC [48] supabase_admin@postgres DETAIL:  User "supabase_admin" has no password assigned.
	Connection matched pg_hba.conf line 89: "host  all  all  172.16.0.0/12  scram-sha-256"

In the logs it says that the role doesn't have a password, so I ran alter role supabase_admin with password [password] on the supabase-db, and now supabase-analytics connects to it but throws a new error:

(These are the logs of the supabase_analytics)

08:54:24.570 [notice] Application logflare exited: Logflare.Application.start(:normal, []) returned an error: shutdown: failed to start child: Logflare.SystemMetricsSup
    ** (EXIT) shutdown: failed to start child: Logflare.SystemMetrics.AllLogsLogged
        ** (EXIT) an exception was raised:
            ** (Postgrex.Error) ERROR 42P01 (undefined_table) relation "system_metrics" does not exist
    query: SELECT s0."id", s0."all_logs_logged", s0."node", s0."inserted_at", s0."updated_at" FROM "system_metrics" AS s0 WHERE (s0."node" = $1)
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
                (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:828: Ecto.Adapters.SQL.execute/6
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:19: Ecto.Repo.Queryable.all/3
                (ecto 3.10.3) lib/ecto/repo/queryable.ex:151: Ecto.Repo.Queryable.one/3
                (logflare 1.4.0) lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex:20: Logflare.SystemMetrics.AllLogsLogged.init/1
                (stdlib 4.3.1) gen_server.erl:851: :gen_server.init_it/2
                (stdlib 4.3.1) gen_server.erl:814: :gen_server.init_it/6
{"Kernel pid terminated",application_controller,"{application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 132,message => nil,postgres => #{code => undefined_table,file => <<\"parse_relation.c\">>,line => <<\"1392\">>,message => <<\"relation \\"system_metrics\\" does not exist\">>,pg_code => <<\"42P01\">>,position => <<\"89\">>,routine => <<\"parserOpenTable\">>,severity => <<\"ERROR\">>,unknown => <<\"ERROR\">>},query => <<\"SELECT s0.\\"id\\", s0.\\"all_logs_logged\\", s0.\\"node\\", s0.\\"inserted_at\\", s0.\\"updated_at\\" FROM \\"system_metrics\\" AS s0 WHERE (s0.\\"node\\" = $1)\">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,\"lib/ecto/adapters/sql.ex\"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,\"lib/ecto/adapters/sql.ex\"},{line,828}]},{'Elixir.Ecto.Repo.Queryable',execute,4,[{file,\"lib/ecto/repo/queryable.ex\"},{line,229}]},{'Elixir.Ecto.Repo.Queryable',all,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,19}]},{'Elixir.Ecto.Repo.Queryable',one,3,[{file,\"lib/ecto/repo/queryable.ex\"},{line,151}]},{'Elixir.Logflare.SystemMetrics.AllLogsLogged',init,1,[{file,\"lib/logflare/system_metrics/all_logs_logged/all_logs_logged.ex\"},{line,20}]},{gen_server,init_it,2,[{file,\"gen_server.erl\"},{line,851}]},{gen_server,init_it,6,[{file,\"gen_server.erl\"},{line,814}]}]}}}}},{'Elixir.Logflare.Application',start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,logflare,{{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetricsSup',{shutdown,{failed_to_start_child,'Elixir.Logflare.SystemMetrics.AllLogsLogged',{#{'__exception__' => true,'__struct__' => 'Elixir.Postgrex.Error',connection_id => 132,message => nil,postgres => #{code => undefined_table,file => <<"parse_relation.c">>,line => <<"1392">>,message => <<"relation \"system_metrics\" does not exist">>,pg_code => <<"42P01">>,position => <<"89">>,routine => <<"parserOpenTable">>,severity => <<"ERROR">>,unknown => <<"ERROR">>},query => <<"SELECT s0.\"id\", s0.\"all_logs_logged\", s0.\"node\", s0.\"inserted_at\", s0.\"updated_at\" FROM \"system_metrics\" AS s0 WHERE (s0.\"node\" = $1)">>},[{'Elixir.Ecto.Adapters.SQL',raise_sql_call_error,1,[{file,"lib/ecto/adapters/sql.ex"},{line,913},{error_info,#{module => 'Elixir.Exception'}}]},{'Elixir.Ecto.Adapters.SQL',execute,6,[{file,"lib/ecto/adapters/sql.ex"},{line,828}]},{'Elixir.Ecto.Repo.Q
Crash dump is being written to: erl_crash.dump...done
LOGFLARE_NODE_HOST is: 127.0.0.1
08:54:27.231 [info] Starting migration
08:54:27.547 [error] Could not create schema migrations table. This error usually happens due to the following:
  * The database does not exist
  * The "schema_migrations" table, which Ecto uses for managing
    migrations, was defined by another library
  * There is a deadlock while migrating (such as using concurrent
    indexes with a migration_lock)
To fix the first issue, run "mix ecto.create" for the desired MIX_ENV.
To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create", both for the desired MIX_ENV. Alternatively you may
configure Ecto to use another table and/or repository for managing
migrations:
    config :logflare, Logflare.Repo,
      migration_source: "some_other_table_for_schema_migrations",
      migration_repo: AnotherRepoForSchemaMigrations
The full error report is shown below.
** (Postgrex.Error) ERROR 3F000 (invalid_schema_name) no schema has been selected to create in
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:913: Ecto.Adapters.SQL.raise_sql_call_error/1
    (elixir 1.14.4) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:1005: Ecto.Adapters.SQL.execute_ddl/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:738: Ecto.Migrator.verbose_schema_migration/3
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:552: Ecto.Migrator.lock_for_migrations/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:428: Ecto.Migrator.run/4
    (ecto_sql 3.10.1) lib/ecto/migrator.ex:170: Ecto.Migrator.with_repo/3
    nofile:1: (file)

I really tried everything that was said in this discussion and in others online, but could not get it to work...

@gBusato

gBusato commented Jul 5, 2024

Also having the issue

@roddutra

roddutra commented Jul 6, 2024

@diegofino15 and @gBusato, I can confirm that @Mortalife (here) and @deozza (here) instructions worked for me (thank you both).

To reiterate, make sure to:

  1. stop the services first
  2. delete the <uuid>_supabase-db-data volume as per @Mortalife 's instructions
  3. in Coolify's Environment Variables section, rename the POSTGRES_HOST variable to POSTGRES_HOSTNAME
  4. ⚠️ in Coolify's Service Stack > Edit Compose File, rename all instances of the POSTGRES_HOST variable to POSTGRES_HOSTNAME and save (this might make step 3 redundant but I didn't test it)
  5. restart the service

I had missed Step 4 and the stack just recreated that POSTGRES_HOST Environment Variable.

Lukyrouge3 added a commit to Lukyrouge3/coolify that referenced this issue Jul 9, 2024
Based on solution here:
coollabsio#2696
Tested and working!
@Mortalife

Mortalife commented Jul 10, 2024

@andrasbacsai Can we still get a fix for statically set docker-compose environment variables being overwritten when there's a dynamic variable with the same name? The PR you merged mitigates it, but there's still an issue AFAIK.

@andrasbacsai
Member

@andrasbacsai Can we still get a fix for statically set docker-compose environment variables being overwritten when there's a dynamic variable with the same name? The PR you merged mitigates it, but there's still an issue AFAIK.

Does this have a separate GH issue? I need to understand the context (with an example if possible).

@Mortalife

@andrasbacsai Can we still get a fix for statically set docker-compose environment variables being overwritten when there's a dynamic variable with the same name? The PR you merged mitigates it, but there's still an issue AFAIK.

Does this have a separate GH issue? I need to understand the context (with an example if possible).

I think this thread gives a good indication of what the issue is. Set a docker-compose env value to a hard-coded value, create a dynamic variable of the same name and use it elsewhere, and observe that the hard-coded value is overwritten.

In one container, set the following:

environment:
  - DB_HOST=${POSTGRES_HOST:-supabase-db}

In the same compose on another container set the following:

environment:
  - POSTGRES_HOST=/var/run/postgresql

Observe that POSTGRES_HOST is not set to /var/run/postgresql but to supabase-db
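The `${VAR:-default}` resolution at play can be reasoned about with plain shell parameter expansion, which uses the same syntax as compose interpolation. This is a sketch of the expansion semantics only, not of Coolify's parser:

```shell
# With the variable unset, the fallback default wins:
unset POSTGRES_HOST
echo "${POSTGRES_HOST:-supabase-db}"   # → supabase-db

# Once something injects POSTGRES_HOST into the environment (as Coolify does
# from its env panel), every expansion picks up the injected value instead:
export POSTGRES_HOST=/var/run/postgresql
echo "${POSTGRES_HOST:-supabase-db}"   # → /var/run/postgresql
```

The bug here is the reverse direction: the injected dynamic value clobbers a value the compose file had hard-coded, rather than the hard-coded value taking precedence.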

@Mortalife

It can also be replicated with a simpler flow:
Create a service with this docker-compose:

  test:
    image: hashicorp/http-echo
    environment:
      - DB_HOST=value/two

Create an env called DB_HOST with some other value.
Save the docker-compose, i.e. "I just made a change".
Observe that DB_HOST is overridden with that other value.

@Geczy
Sponsor Contributor

Geczy commented Jul 10, 2024

duplicate; #2713

@andrasbacsai
Member

It can also be replicated with a simpler flow: Create a service with this docker-compose:

  test:
    image: hashicorp/http-echo
    environment:
      - DB_HOST=value/two

Create an env called DB_HOST with some other value. Save the docker-compose, i.e. "I just made a change". Observe that DB_HOST is overridden with that other value.

This will be fixed in the upcoming version.

@andrasbacsai andrasbacsai linked a pull request Jul 10, 2024 that will close this issue
@Mortalife

Final comment: if you're experiencing this issue with an installation created before v4.0.0-beta.308, you will need to follow the steps above or recreate the supabase service to pick up the new docker-compose handler.

@Geczy
Sponsor Contributor

Geczy commented Jul 11, 2024

What if we created a supabase service in 296? Can we update to 308 and not have to recreate it?

@Mortalife

What if we created a supabase service in 296? Can we update to 308 and not have to recreate it?

I don't believe you can; my existing services still experience the issue above, but newly created ones worked as expected. I suspect that has something to do with the v1/v2 docker-compose parsing I saw in the commits, i.e. I think existing services still use v1, which is the one that doesn't work.
