services/horizon/docker/ledgerexporter: deploy ledgerexporter image as service #4490

Merged

Changes from 1 commit
#4483: document k8s deployment env setup, remove test network hardcoded assumption
sreuland committed Aug 1, 2022
commit 2c8589cd50fabb832623bc3242a5c98d3820ef04
1 change: 1 addition & 0 deletions services/horizon/docker/ledgerexporter/Dockerfile
@@ -27,6 +27,7 @@ RUN apt-get update && apt-get install -y stellar-core=${STELLAR_CORE_VERSION}
RUN apt-get clean

ADD captive-core-pubnet.cfg /
ADD captive-core-testnet.cfg /

ADD start /
RUN ["chmod", "+x", "start"]
16 changes: 16 additions & 0 deletions services/horizon/docker/ledgerexporter/ledgerexporter.yml
@@ -20,12 +20,28 @@ data:
# and stop process with error that ledger 3 is not <= expected ledger of 2.
START: "3"
END: "0"

# can only have CONTINUE or START set, not both.
#CONTINUE: "true"
WRITE_LATEST_PATH: "true"
CAPTIVE_CORE_USE_DB: "true"

# configure the network to export
HISTORY_ARCHIVE_URLS: "https://history.stellar.org/prd/core-live/core_live_001,https://history.stellar.org/prd/core-live/core_live_002,https://history.stellar.org/prd/core-live/core_live_003"
NETWORK_PASSPHRASE: "Public Global Stellar Network ; September 2015"
# can refer to canned cfg's for pubnet and testnet which are included on the image
# `/captive-core-pubnet.cfg` or `/captive-core-testnet.cfg`.
# If exporting a standalone network, then mount a volume to the pod container with your standalone core's .cfg,
# and set full path to that volume here
CAPTIVE_CORE_CONFIG: "/captive-core-pubnet.cfg"

# example of testnet network config.
# HISTORY_ARCHIVE_URLS: "https://history.stellar.org/prd/core-testnet/core_testnet_001,https://history.stellar.org/prd/core-testnet/core_testnet_002"
# NETWORK_PASSPHRASE: "Test SDF Network ; September 2015"
# CAPTIVE_CORE_CONFIG: "/captive-core-testnet.cfg"

# provide the url for the external s3 bucket to be populated
# update the ledgerexporter-pubnet-secret to have correct aws key/secret for access to the bucket
ARCHIVE_TARGET: "s3:https://horizon-ledgermeta-pubnet"
sreuland (Contributor Author):

I've configured the horizon-ledgermeta-pubnet bucket in the same AWS account as Batch, with bucket owner enforced (ACLs disabled) and an inline policy that defines allowed/disallowed statements, following AWS recommendations.

Contributor:

You mean you changed the existing bucket settings?

Unfortunately, the S3-writing (HistoryArchive) code assumes ACLs are enabled (because it writes them). That's why they were enabled.

I am not against the change, but then we should also change the writing code.

sreuland (Contributor Author):

I haven't changed any existing bucket permissions, but ACL usage looks fairly constrained: only one place in the S3 writing code sets the object ACL to include public read during put, which I removed, leaving a comment on one potential way to migrate off that. There are several other places where S3 object writes (puts) are done with the S3 upload manager, and those don't specify ACLs.

So the remaining question is how many existing buckets could have been written to by this routine in historyarchive/s3_archive.go. With just the removal of the ACL here, puts still work against the existing buckets, but the objects won't have public read until the bucket permissions are updated to disable ACLs and a policy is added with an Allow Everyone/Public Read statement.

sreuland (Contributor Author):

I updated the approach and added an ACL config option for S3 in 452b20c, so the client can work with buckets in either permissions configuration.

sreuland (Contributor Author):

@2opremio, I extended the permissions on horizon-ledgermeta-prodnet-test and horizon-index: I added a policy granting write to EC2 (Batch access) and to an IAM user (k8s access), plus a PublicRead rule.

I noticed a slight difference in their main config: horizon-ledgermeta-prodnet-test had ACLs enabled/bucket owner preferred, while horizon-index had ACLs disabled/bucket owner enforced. The policy applies on top of either.
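
The exact statements applied to these buckets aren't shown in the thread; as a rough illustration only, a bucket policy of the shape being described (a public-read rule plus write grants for the Batch/EC2 role and the k8s IAM user) could look like the following CloudFormation-style YAML fragment, where the bucket name and principal ARNs are placeholders:

# Hypothetical sketch only; the real bucket names, ARNs, and statements differ.
Resources:
  LedgerMetaBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: horizon-ledgermeta-prodnet-test            # placeholder
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # PublicRead rule: anyone can GET objects, replacing the per-object
          # public-read ACL the archive writer used to set.
          - Sid: PublicRead
            Effect: Allow
            Principal: "*"
            Action: "s3:GetObject"
            Resource: "arn:aws:s3:::horizon-ledgermeta-prodnet-test/*"
          # Write access for Batch (EC2 instance role) and the k8s IAM user.
          - Sid: WriterAccess
            Effect: Allow
            Principal:
              AWS:
                - "arn:aws:iam::111111111111:role/batch-ec2-role"        # placeholder
                - "arn:aws:iam::111111111111:user/ledgerexporter-k8s"    # placeholder
            Action:
              - "s3:PutObject"
              - "s3:GetObject"
              - "s3:ListBucket"
            Resource:
              - "arn:aws:s3:::horizon-ledgermeta-prodnet-test"
              - "arn:aws:s3:::horizon-ledgermeta-prodnet-test/*"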

---
apiVersion: v1
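
The CAPTIVE_CORE_CONFIG comment above says to mount a volume with your standalone core's .cfg when exporting a standalone network. The pod spec itself is truncated in this diff, so the following is only a sketch of what that wiring could look like; the image tag, ConfigMap names, passphrase, archive URL, and paths are placeholders rather than values from ledgerexporter.yml:

# Hypothetical sketch: mounting a standalone core config into the exporter pod
# and pointing CAPTIVE_CORE_CONFIG at it.
apiVersion: v1
kind: Pod
metadata:
  name: ledgerexporter-standalone
spec:
  containers:
    - name: ledgerexporter
      image: stellar/ledgerexporter:latest               # placeholder image tag
      envFrom:
        - configMapRef:
            name: ledgerexporter-config                  # placeholder ConfigMap name
      env:
        # Entries in env override matching keys from envFrom, so the three
        # network settings can be swapped out for the standalone network here.
        - name: NETWORK_PASSPHRASE
          value: "Standalone Network ; February 2017"    # example passphrase
        - name: HISTORY_ARCHIVE_URLS
          value: "http://standalone-history-archive:1570"  # placeholder
        - name: CAPTIVE_CORE_CONFIG
          value: "/standalone-config/stellar-core.cfg"
      volumeMounts:
        - name: standalone-core-cfg
          mountPath: /standalone-config
          readOnly: true
  volumes:
    - name: standalone-core-cfg
      configMap:
        name: standalone-core-cfg                        # holds stellar-core.cfg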
7 changes: 3 additions & 4 deletions services/horizon/docker/ledgerexporter/start
@@ -6,20 +6,19 @@ END="${END:=0}"
CONTINUE="${CONTINUE:=false}"
# Writing to /latest is disabled by default to avoid race conditions between parallel container runs
WRITE_LATEST_PATH="${WRITE_LATEST_PATH:=false}"

# config defaults to pubnet core, any other network requires setting all 3 of these in container env
NETWORK_PASSPHRASE="${NETWORK_PASSPHRASE:=Public Global Stellar Network ; September 2015}"
HISTORY_ARCHIVE_URLS="${HISTORY_ARCHIVE_URLS:=https://s3-eu-west-1.amazonaws.com/history.stellar.org/prd/core-live/core_live_001}"
CAPTIVE_CORE_CONFIG="${CAPTIVE_CORE_CONFIG:=/captive-core-pubnet.cfg}"

CAPTIVE_CORE_USE_DB="${CAPTIVE_CORE_USE_DB:=true}"

if [ -z "$ARCHIVE_TARGET" ]; then
echo "error: undefined ARCHIVE_TARGET env variable"
exit 1
fi

if [ "$NETWORK_PASSPHRASE" = "Test SDF Network ; September 2015" ]; then
CAPTIVE_CORE_CONFIG="/captive-core-testnet.cfg"
fi

# Calculate params for AWS Batch
if [ ! -z "$AWS_BATCH_JOB_ARRAY_INDEX" ]; then
# The batch should have three env variables:
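
As the defaults above show, the script assumes pubnet unless the network settings are provided in the container env, and it swaps in /captive-core-testnet.cfg by itself when the testnet passphrase is set. A minimal sketch of the corresponding testnet overrides as a container env block (the ARCHIVE_TARGET bucket name here is a placeholder):

# Sketch: testnet overrides for the exporter container; values mirror the
# commented testnet example in ledgerexporter.yml above.
env:
  - name: NETWORK_PASSPHRASE
    value: "Test SDF Network ; September 2015"
  - name: HISTORY_ARCHIVE_URLS
    value: "https://history.stellar.org/prd/core-testnet/core_testnet_001,https://history.stellar.org/prd/core-testnet/core_testnet_002"
  # CAPTIVE_CORE_CONFIG can be left unset: the start script selects
  # /captive-core-testnet.cfg when it sees the testnet passphrase.
  - name: ARCHIVE_TARGET
    value: "s3:https://horizon-ledgermeta-testnet"       # placeholder bucket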