Add bull(queue) to execute workflows (WIP) #1294
Conversation
If anybody wants to test it:
docker run --name redis -it --rm redis
docker run -it --rm --name n8n-main -p 5678:5678 -e EXECUTIONS_MODE=queue -e QUEUE_BULL_REDIS_HOST=redis --link redis:redis -v ~/.n8n:/home/node/.n8n n8nio/n8n:alpha-bull-scale n8n start --tunnel
docker run -it --rm -e QUEUE_BULL_REDIS_HOST=redis --link redis:redis -v ~/.n8n:/home/node/.n8n n8nio/n8n:alpha-bull-scale n8n worker
await job.progress(-1);
} else {
	// Job did not get started yet so remove from queue
	await job.remove();
I am not sure how this works here, but maybe we should add a try...catch block, as the job may have started in the meantime, in the window between isActive() and remove(). This could fail in a high-concurrency environment.
Yes, and there could also be other reasons why it could fail. So, as you said, we should add a try...catch and report back to the UI if it fails.
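A hedged sketch of what that guard could look like. The `QueueJob` interface and `stopJob` helper below are illustrative, not n8n's actual code; a real Bull `Job` does expose `isActive`, `progress`, and `remove` with these shapes.

```typescript
// Illustrative sketch only: mirrors the pattern discussed above, with the
// remove() call guarded so losing the race does not crash the process.
interface QueueJob {
	isActive(): Promise<boolean>;
	progress(value: number): Promise<void>;
	remove(): Promise<void>;
}

async function stopJob(job: QueueJob): Promise<string> {
	try {
		if (await job.isActive()) {
			// Job is already running: signal the worker to abort
			// via a sentinel progress value.
			await job.progress(-1);
			return 'aborted';
		}
		// Job did not get started yet, so remove it from the queue. A worker
		// may still pick it up between the isActive() check and this call,
		// which is exactly the race the try...catch covers.
		await job.remove();
		return 'removed';
	} catch (error) {
		// Report back (e.g. to the UI) instead of throwing.
		return `failed: ${(error as Error).message}`;
	}
}
```

The return value stands in for whatever reporting mechanism the UI actually uses; the point is only that both the abort path and the lost-race path resolve instead of rejecting.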
// TODO: If "loadStaticData" is set to true it has to load data new on worker

// Register the active execution
const executionId = this.activeExecutions.add(data, undefined);
In a concurrent environment we will have multiple executions with the same ID, as the current implementation uses a static class with an internal counter. Once we merge the unification of execution IDs, this probably won't be a problem anymore.
Adding this so other reviewers know that it is being solved in another branch.
Actually, this would only happen if we start multiple "starters", which is not the case at the moment. So my comment above can be ignored unless we split out the webhook interceptors.
As we merge the unification branch before we split out the webhook interceptors, that should not be an issue.
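A minimal illustration of the collision being described. This is a hypothetical sketch, not the actual ActiveExecutions class: each main process keeps its own in-memory counter, so two processes hand out identical IDs.

```typescript
// Hypothetical sketch: a per-process internal counter, as in the current
// implementation, produces colliding execution IDs across processes.
class CounterBasedExecutions {
	private nextId = 0;

	add(): string {
		// Every process starts counting from the same value, so IDs repeat.
		return String(++this.nextId);
	}
}

// Simulate two independent main processes ("starters"):
const processA = new CounterBasedExecutions();
const processB = new CounterBasedExecutions();
const idA = processA.add();
const idB = processB.add();
// Both processes hand out "1" for their first execution — a collision.
```

Moving ID generation to a shared source (as the execution-ID unification branch does) removes the per-process state that causes this.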
packages/cli/src/WorkflowRunner.ts
Outdated
// Connect to bull-queue | ||
const prefix = config.get('queue.bull.prefix') as string; | ||
const redisOptions = config.get('queue.bull.redis') as object; | ||
this.jobQueue = new Bull('jobs', { prefix, redis: redisOptions }); |
What happens if Redis is down or unreachable? We should try to detect it on startup if possible and maybe switch to regular mode if redis is unavailable?
Hm, I think in this case it would be better to exit. After all, something essential is missing, and hiding that would be bad: it would cause problems later, as the instance would operate differently and could not deliver the expected performance.
We would, however, have to test what happens if the Redis connection gets lost in between and see how we handle that.
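One hedged way to fail fast, assuming only that the queue object emits `'error'` events (which Bull queues do, forwarded from their Redis client). The handler below is a sketch, not n8n's implementation; `onFatal` is a hypothetical injectable callback.

```typescript
import { EventEmitter } from 'node:events';

// Sketch: treat a queue/Redis connection error as fatal instead of silently
// falling back to regular mode. In a real deployment `onFatal` would log
// the message and call process.exit(1); it is injectable here for testing.
function exitOnQueueError(
	queue: EventEmitter,
	onFatal: (message: string) => void,
): void {
	queue.on('error', (error: Error) => {
		onFatal(`Queue mode requires a reachable Redis instance: ${error.message}`);
	});
}
```

In practice a grace period or retry policy might be wanted before exiting, since the underlying Redis client can also emit transient errors while reconnecting, which touches exactly the "connection gets lost in between" case raised above.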
I'm worried that Redis is a single point of failure and not really scalable; it's hard to distribute Redis's load and make it highly available. I found https://github.com/wherefortravel/bull-amqp, but would it be better if we could support RabbitMQ directly?
We're all really looking forward to this merge
Got merged with #1340
Got released with [email protected]
@janober Does it require Redis to work?
Yes @maxbaluev Redis is a requirement. Full documentation can be found here: https://docs.n8n.io/reference/scaling-n8n.html |
Uses bull to execute workflows (only the ones that did not get started manually).