
Problems when trying to use non default schema #674

Open
Na3blis opened this issue Apr 3, 2024 · 3 comments
Na3blis commented Apr 3, 2024

I'm trying to get the FHIR server to work on a schema other than `public`, and I'm not having much luck. I've tried many different combinations of properties; my current set is:

    spring:
      main:
        allow-circular-references: true
      datasource:
        platform: postgres
        driver-class-name: org.postgresql.Driver
        username: ${DB_USERNAME}
        password: ${DB_PASSWORD}
        hikari:
          maximum-pool-size: 2
      flyway:
        schemas: hapi2      
        fail-on-missing-locations: false
      jpa:
        database: postgresql 
        database-platform: org.hibernate.dialect.PostgreSQL94Dialect      
        properties:
          hibernate:
            default_schema: hapi2
            hbm2ddl:
              auto: update
            format_sql: true
            show_sql: true
        hibernate:  
          dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgres94Dialect  
          ddl-auto: update
          jdbc:
            batch_size: 20
          hbm2ddl:
            auto: update

Essentially, I'm trying to run two instances of HAPI so I can simulate two separate hospitals. For the first instance I didn't change any settings, and everything was set up in the `public` schema. For the second instance I just wanted to create everything in a new schema, which I named `hapi2`. With the config above it sort of works, but a lot doesn't: the tables get created, but none of the sequences are. I had to create the sequences manually just to get past that and load the /Patient resource. That's where the next part of the problem shows up: some queries go to the `hapi2` schema, but others go to `public`, so I end up getting the Patients from the first instance of HAPI.
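
A sketch of that manual sequence workaround (the sequence names and the increment of 50 are assumptions on my part; list the real ones from what the first instance created in `public` before copying anything):

```sql
-- Find the sequence names the first instance created in public:
--   SELECT sequence_name FROM information_schema.sequences
--   WHERE sequence_schema = 'public';
-- Then recreate them in the new schema. Names and increment below are
-- examples; Hibernate's pooled optimizer typically uses INCREMENT BY 50.
CREATE SEQUENCE hapi2.seq_resource_id START WITH 1 INCREMENT BY 50;
CREATE SEQUENCE hapi2.seq_resource_history_id START WITH 1 INCREMENT BY 50;
```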

Here are two queries showing up in my logs; you can see that some have the `hapi2` schema prefix, whereas others do not:

    select
        rsv1_0.pid,
        rsv1_0.res_deleted_at,
        rsv1_0.res_encoding,
        rsv1_0.fhir_id,
        rsv1_0.res_version,
        rsv1_0.has_tags,
        rsv1_0.partition_id,
        rsv1_0.prov_request_id,
        rsv1_0.prov_source_uri,
        rsv1_0.res_published,
        rsv1_0.res_text,
        rsv1_0.res_id,
        rsv1_0.res_text_vc,
        rsv1_0.res_type,
        rsv1_0.res_ver,
        rsv1_0.res_updated 
    from
        ( SELECT
            h.pid               as pid,
            r.res_id            as res_id,
            h.res_type          as res_type,
            h.res_version       as res_version,
            h.res_ver           as res_ver,
            h.has_tags          as has_tags,
            h.res_deleted_at    as res_deleted_at,
            h.res_published     as res_published,
            h.res_updated       as res_updated,
            h.res_text          as res_text,
            h.res_text_vc       as res_text_vc,
            h.res_encoding      as res_encoding,
            h.PARTITION_ID      as PARTITION_ID,
            p.SOURCE_URI        as PROV_SOURCE_URI,
            p.REQUEST_ID        as PROV_REQUEST_ID,
            r.fhir_id         as FHIR_ID      
        FROM
            HFJ_RESOURCE r     
        INNER JOIN
            HFJ_RES_VER h 
                ON r.res_id = h.res_id 
                and r.res_ver = h.res_ver    
        LEFT OUTER JOIN
            HFJ_RES_VER_PROV p 
                ON p.res_ver_pid = h.pid  ) rsv1_0 
    where
        rsv1_0.res_id

    select
        s1_0.pid,
        s1_0.created,
        s1_0.search_deleted,
        s1_0.expiry_or_null,
        s1_0.failure_code,
        s1_0.failure_message,
        mi1_0.search_pid,
        mi1_0.pid,
        mi1_0.search_include,
        mi1_0.inc_recurse,
        mi1_0.revinclude,
        s1_0.last_updated_high,
        s1_0.last_updated_low,
        s1_0.num_blocked,
        s1_0.num_found,
        s1_0.preferred_page_size,
        s1_0.resource_id,
        s1_0.resource_type,
        s1_0.search_param_map,
        s1_0.search_query_string,
        s1_0.search_query_string_hash,
        s1_0.search_type,
        s1_0.search_status,
        s1_0.total_count,
        s1_0.search_uuid,
        s1_0.optlock_version 
    from
        hapi2.hfj_search s1_0 
    left join
        hapi2.hfj_search_include mi1_0 
            on s1_0.pid=mi1_0.search_pid 
    where
        s1_0.search_uuid=?
Any help getting these two instances to run together with separate data sources would be appreciated. Thanks
XcrigX (Contributor) commented Apr 3, 2024

That's interesting regarding the sequences. I tried a similar setup on MS SQL Server and hit the same problem: when JPA was detecting/creating the database objects, it detected that the sequences already existed (in the other schema) and did not create them. I had assumed this was a bug in the MSSQL JDBC driver or something like that; apparently not.
I wonder if this is a problem somewhere in HAPI, or something in JPA or Hibernate?

FWIW, the way I worked around it was to create separate DB users for the two instances - each with ONLY permissions on one db schema. This prevented the second instance from being able to see the sequences in the first schema.
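
In Postgres, that per-instance-user workaround might look something like this (role name and password are just examples):

```sql
-- Second instance gets its own login role that only sees its own schema
CREATE ROLE hapi2_user LOGIN PASSWORD 'changeme';
CREATE SCHEMA hapi2 AUTHORIZATION hapi2_user;
-- Keep it from seeing the first instance's objects; depending on your
-- Postgres version you may also need REVOKE ... FROM PUBLIC
REVOKE ALL ON SCHEMA public FROM hapi2_user;
ALTER ROLE hapi2_user SET search_path = hapi2;
```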

Na3blis commented Apr 3, 2024

So, testing around some more, it looks like I'm actually able to get the sequences created, as long as the schema exists before I stand up HAPI. That gets me a little farther, but HAPI still tries to access `hfj_resource` from the default schema and exits out:

    Caused by: org.postgresql.util.PSQLException: ERROR: relation "hfj_resource" does not exist
Here's my updated application.yaml:

    spring:
      main:
        allow-circular-references: true
      datasource:
        platform: postgres
        driver-class-name: org.postgresql.Driver
        username: admin
        password: admin
        url: jdbc:postgresql://db:5432/hapi
      flyway:
        schemas: hapi2
        enabled: false
        fail-on-missing-locations: false
      jpa:
        database: postgresql 
        database-platform: org.hibernate.dialect.PostgreSQL94Dialect
        properties:
          hibernate:
            dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgresDialect
            search:
              enabled: false
            default_schema: hapi2
      batch:
        job:
          enabled: false
    hapi:
      fhir:
        ### This is the FHIR version. Choose between DSTU2, DSTU3, R4, or R5
        fhir_version: R4
        server_address: http://localhost:8080/fhir
        cors:
          allow_Credentials: true
          allowed_origin:
            - '*'
        fhirpath_interceptor_enabled: false
        filter_search_enabled: true
        graphql_enabled: true
        binary_storage_enabled: true
        bulk_export_enabled: true
        allow_cascading_deletes: true
        allow_contains_searches: true
        allow_external_references: true
        allow_multiple_delete: true
        allow_override_default_search_params: true
        allow_placeholder_references: true
        auto_create_placeholder_reference_targets: false
        cql_enabled: true
        default_encoding: JSON
        default_pretty_print: true
        default_page_size: 1000
        enable_repository_validating_interceptor: false
        enable_index_missing_fields: false
        enforce_referential_integrity_on_delete: false
        enforce_referential_integrity_on_write: false
        etag_support_enabled: true
        expunge_enabled: true
        daoconfig_client_id_strategy: null
        client_id_strategy: ALPHANUMERIC
        tester:
            home:
              name: Local Tester
              server_address: 'http://localhost:8080/fhir'
              refuse_to_fetch_third_party_urls: false
              fhir_version: R4
            global:
              name: Global Tester
              server_address: "http://localhost:8080/fhir"
              refuse_to_fetch_third_party_urls: false
              fhir_version: R4

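One other thing that might be worth trying (I haven't verified it against HAPI): the PostgreSQL JDBC driver accepts a `currentSchema` URL parameter that sets the connection's `search_path`, so unqualified table references like `hfj_resource` would resolve to `hapi2` instead of `public`:

```yaml
spring:
  datasource:
    # currentSchema sets the search_path for every connection,
    # so unqualified names resolve to hapi2 rather than public
    url: jdbc:postgresql://db:5432/hapi?currentSchema=hapi2
```
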
And my compose file:

    services:
      fhir:
        container_name: fhir
        image: "hapiproject/hapi:latest"
        ports:
          - "8080:8080"
        configs:
          - source: hapi
            target: /app/config/application.yaml
        depends_on:
          - db

      db:
        image: postgres
        restart: always
        environment:
          POSTGRES_PASSWORD: admin
          POSTGRES_USER: admin
          POSTGRES_DB: hapi
        ports:
          - "5432:5432"
        volumes:
          - ~/dev/hapi-db:/var/lib/postgresql/data

    configs:
      hapi:
        file: ./hapi.application.yaml

I just had to start it up once, create the `hapi2` schema, and restart; then it creates the tables and sequences. But later it tries to use the `public` schema and exits, because in this new test instance my first HAPI server isn't running with its data already in `public`:
[Screenshot from 2024-04-03 18-16-45]
[Screenshot from 2024-04-03 18-17-57]
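
The schema pre-creation step can also be done up front against the `db` container, instead of the start-once-then-restart dance:

```sql
-- Run once against the hapi database before first starting the fhir container
CREATE SCHEMA IF NOT EXISTS hapi2;
```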

Na3blis commented Apr 17, 2024

I ended up just creating a new database in my Postgres instance, since any schema other than `public` seems to break things all over.
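
For anyone landing here, that separate-database approach is just:

```sql
-- One database per HAPI instance; each keeps its tables in its own public schema
CREATE DATABASE hapi2 OWNER admin;
```

and pointing the second instance at `jdbc:postgresql://db:5432/hapi2`, with no `default_schema` or `schemas` overrides needed.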
