
Is it possible for the canary to use a separate deployment configuration? #555

Open
BarrieShieh opened this issue Apr 11, 2020 · 6 comments

@BarrieShieh commented Apr 11, 2020

Hi,
When we use canary deployments for our apps, the deployment sometimes fails. In that case Flagger scales down the canary pods and routes traffic back to the primary. In some cases the new canary app updates the database schema automatically on startup, and since the primary and canary share the same configuration, rolling back the canary deployment does not roll the schema change back. That can break the primary deployment, and I think it can happen whenever a bug in the canary or a misconfiguration keeps the canary from starting.
Is there any workaround for this case?

Thanks

@audrey-brightloom

I'm running into this as well. I have a pre-install hook to run db migrations. Combined with Flux and Helm-Operator, I end up with two workloads, both of which try to run the migration. One of them works; the other fails to start its init containers and eventually the entire deployment is marked as failed in Flux.

Are Jobs officially supported? I'm pretty stuck on this right now.

@BarrieShieh
Author

> I'm running into this as well. I have a pre-install hook to run db migrations. Combined with Flux and Helm-Operator, I end up with two workloads, both of which try to run the migration. One of them works; the other fails to start its init containers and eventually the entire deployment is marked as failed in Flux.
>
> Are Jobs officially supported? I'm pretty stuck on this right now.

Hi, for db migrations I use Flyway for database versioning. It prevents duplicate updates from multiple instances: if a pod sees that the database schema is already at the new version, it skips the migration on startup.
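
For illustration, here is a minimal sketch of that version-gated migration idea. This is not Flyway itself; the `schema_version` table, its `version` column, and the example DDL are made-up placeholders, and it assumes Postgres via psycopg2:

```python
# Sketch of "skip the migration if the schema is already at the new version".
# The schema_version table and the example DDL are hypothetical placeholders.
import psycopg2

TARGET_VERSION = 42  # the schema version this build of the app expects

def migrate_if_needed(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    try:
        with conn:  # commit on success, roll back on error
            with conn.cursor() as cur:
                # Lock the version row so pods starting concurrently serialize here.
                cur.execute("SELECT version FROM schema_version FOR UPDATE")
                (current,) = cur.fetchone()  # assumes the version row exists
                if current >= TARGET_VERSION:
                    return  # another pod already migrated; skip, as Flyway would
                cur.execute("ALTER TABLE orders ADD COLUMN note text")  # example DDL
                cur.execute("UPDATE schema_version SET version = %s", (TARGET_VERSION,))
    finally:
        conn.close()
```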

But this won't solve the issue when the canary is rolled back: the changed shared database schema can still break the primary.

@stefanprodan
Member

You can use a post-rollout hook and call into a service that does a migration rollback if the canary phase is Failed.
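
A rough sketch of such a rollback receiver, assuming the webhook payload carries `name`, `namespace`, and `phase` fields (check what your Flagger version actually sends); the `undo_migration` helper and the port are placeholders:

```python
# Minimal post-rollout webhook receiver: if the canary phase is "Failed",
# trigger a schema rollback. Payload field names are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def undo_migration(app: str, namespace: str) -> None:
    # Placeholder: run your "down" migration here (e.g. a SQL undo script).
    print(f"rolling back schema for {namespace}/{app}")

class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        if payload.get("phase") == "Failed":
            undo_migration(payload.get("name", ""), payload.get("namespace", ""))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Hook).serve_forever()
```

The service would then be registered in the canary analysis spec as a webhook of type `post-rollout`.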

@BarrieShieh
Author

Is there a flag to check whether the canary was promoted or rolled back?
One more thing: the primary might hit critical errors during the process. Only after the canary is promoted, or the database is rolled back on failure, can the primary return to normal, and the data written during that window can be corrupted.

@mathetake
Collaborator

How about just using pod labels to switch the application's behavior between primary and canary (for example, using separate databases for primary and canary)? We can expose the labels to the container as environment variables via fieldRef.
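
A sketch of what the application side could look like, assuming a distinguishing label were exposed to the container as an environment variable via the downward API; the `WORKLOAD_TRACK` variable and the database URL names are made up:

```python
# Pick the database based on an env var that the pod spec would populate from
# a pod label via fieldRef. The label and variable names here are hypothetical.
import os

def database_dsn() -> str:
    track = os.environ.get("WORKLOAD_TRACK", "primary")
    if track == "canary":
        return os.environ.get("CANARY_DATABASE_URL", "")
    return os.environ.get("PRIMARY_DATABASE_URL", "")
```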

@burigolucas

@mathetake, the pod labels on the primary and canary are identical, so this is currently not possible. Please check this issue: #1547
