Logging to stdout doesn't seem to work for isolated workers #63
Thanks for reporting. I think you already pinned down the root cause. It is in these lines (basically): `jobqueue-common/Classes/Job/JobManager.php`, lines 123 to 126, in 2db88c2.
This just uses our nasty … I once created a Symfony Process abstraction for Flow (https://github.com/bwaidelich/Wwwision.BatchProcessing/blob/main/Classes/BatchProcess.php).
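The suspected mechanism can be illustrated with plain PHP. This is a hypothetical sketch of the capture-vs-passthrough difference, not the actual `JobManager` code:

```php
<?php
// Hypothetical sketch of the suspected root cause (not the library's
// actual code): if the parent starts the isolated job via exec(), the
// child's stdout is captured into an array and never reaches the
// parent's stdout -- the job's log lines silently disappear.
$cmd = 'php -r ' . escapeshellarg('echo "log line from job";');

exec($cmd, $capturedLines);
// $capturedLines now holds the child's output; nothing was printed.

// passthru() forwards the child's stdout directly to the parent's
// stdout instead, which is what stdout-based logging needs.
passthru($cmd);
```

If the isolated-execution path captures the child's output like the `exec()` call above (or discards it), switching to a pass-through style would be one direction for a fix.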
What are the obvious reasons? :)
First of all, the pricing, to my knowledge. We don't run any ETL processes on the logs that are in BigQuery, so there's really no use having them in a data warehouse. Having the logs only in BigQuery also makes searching for things a little more difficult in my experience (and that of my colleagues), as opposed to, for example, Google's LogExplorer, which also supports JSON formatting of the logs (and/or their additional data). Additionally, we're trying to move towards more or less actual twelve-factor (web) apps that are, ideally, cloud native. Logging directly to stdout (which also makes the logs available in the LogExplorer) therefore suggests itself more than writing logs to BQ and paying to write, read and store them there.
We're currently attempting to replace our loggers to log to stdout instead of writing logs to BigQuery (for obvious reasons). We're running a Flow 7.3 project (flowpack/jobqueue-common is running @3.3.0) and have two pods in parallel in our deployment: a "normal" application pod and a worker pod that runs `php /app/flow job:work --verbose` on a particular queue.

Now we have the "issue" that most of the processes in our application are executed as jobs inside this queue, and we can't seem to get any logs from jobs in this queue, at least not to stdout. The BigQuery logs seem to have worked in the past. After a short but helpful conversation with @kitsunet I was able to identify that the cause of this seems to be the fact that our workers are running with `executeIsolated: true`. Jobs/workers running in this mode don't seem to correctly redirect their "internal" stdout to the parent system's stdout.

So far I haven't been able to identify the exact location of the issue, i.e. where things go wrong. I also can't justify investing more time into this from my job's point of view. However, if someone could point me to the root of the issue, I might be able to attempt a fix in my own time after work.
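For anyone digging into this: one way a spawned worker's stdout can be made to reach the parent's stdout is to hand the parent's own `STDOUT`/`STDERR` streams to `proc_open()` as the child's descriptors. This is a hypothetical sketch, not the package's current behaviour; the command line is a stand-in (a real worker would spawn something like `php flow job:execute …`):

```php
<?php
// Hypothetical sketch: spawn an isolated sub-process whose stdout and
// stderr are inherited from the parent, so anything the child logs
// appears directly on the parent's (i.e. the pod's) stdout.
$command = ['php', '-r', 'echo "child log line\n";'];

$process = proc_open($command, [
    0 => ['pipe', 'r'], // child's stdin: a pipe we immediately close
    1 => STDOUT,        // child's stdout IS the parent's stdout
    2 => STDERR,        // child's stderr IS the parent's stderr
], $pipes);

fclose($pipes[0]);
$exitCode = proc_close($process);
```

Passing real stream resources in the descriptor spec (instead of `['pipe', 'w']`) avoids buffering the child's output in the parent entirely, which is what a twelve-factor, log-to-stdout setup wants.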