
Performance problem #27

Closed

1Map opened this issue Oct 22, 2017 · 7 comments

1Map commented Oct 22, 2017

Hi,

I use your utility a lot and it works 99% of the time on my side, except for the remaining 1%, which is actually creating a huge headache.

Example code to export from PostgreSQL to SHP, etc.:

var ogr2ogr = require('ogr2ogr');

var shapefile = ogr2ogr('PG:host=localhost port=5432 user=pguser dbname=mydb password=pgpass')
    .format(targetformat)
    .timeout(12000000)
    .options(['-lco', 'GEOMETRY_NAME=the_geom', '-sql', sql])
    .project(targetprojection)
    .skipfailures()
    .destination(filename)
    .exec(function (er, data) {
        if (er) console.error(er);
        callback();
    });

There is nothing wrong with the above code: it executes fine on one record or on thousands of records.

The problem is that the moment I run the above on a large export job (where it is busy for some time), it creates all kinds of havoc throughout the Node.js web app and clients get various errors (mostly 502 Bad Gateway). I even tried running the above inside an async series task:

var async = require("async");

// ..... 

async.series([
    function (callback) {
        // Do some stuff prior to doing ogr2ogr, then call callback();
    },
    function (callback) {
        // Do the ogr2ogr export, then call callback()
        var shapefile = ogr2ogr('PG:host=localhost port=5432 user=pguser dbname=mydb password=pgpass')
            .format(targetformat)
            .timeout(12000000)
            .options(['-lco', 'GEOMETRY_NAME=the_geom', '-sql', sql])
            .project(targetprojection)
            .skipfailures()
            .destination(filename)
            .exec(function (er, data) {
                if (er) console.error(er);
                callback();
            });
    }
    ], function (err) { //This is the final callback
        // Return success
    }
);

wavded (Owner) commented Oct 23, 2017

Hey @1Map, what Node version are you using, and are you possibly running out of memory? The 502 error is odd; I don't know what would return that. Are you going through a proxy to get to the database?
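
A quick way to check the memory angle from inside the app is Node's process.memoryUsage(); a minimal sketch (the 30-second interval is arbitrary):

// Log resident set size and heap usage every 30 seconds while an export runs.
setInterval(function () {
    var m = process.memoryUsage();
    console.log('rss=' + Math.round(m.rss / 1048576) + 'MB, heapUsed=' +
        Math.round(m.heapUsed / 1048576) + 'MB');
}, 30000);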

1Map (Author) commented Oct 23, 2017

Hi @wavded ,

I am using Node.js 6.11.4. My Node.js app uses Express and sits behind an NGINX reverse proxy. I have clients that use the app to download data as shapefiles. The moment a client starts a lengthy job, the whole web app becomes unresponsive, giving 502 errors on every other call. This is also not a memory issue, as I have enough memory.

I also use Uptime Robot to monitor my HTTP endpoint. The moment a lengthy job starts, I suddenly also receive emails from Uptime Robot informing me that my website is down.

wavded (Owner) commented Oct 23, 2017

Is the client making one large call, or multiple ones? Perhaps the thread pool is hung up waiting on ogr2ogr to return; I'm assuming the process is pegged at 100% during this time. You may want to use cluster (https://nodejs.org/api/cluster.html) to get more out of the server. If you have anything reproducible, I could check it on my side; at this point I'm guessing.
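
A minimal sketch of the cluster approach, assuming the existing Express app can be required from a module like ./app (hypothetical name):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker per CPU so a long-running export in one worker
    // does not make the whole site unresponsive.
    os.cpus().forEach(function () {
        cluster.fork();
    });
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, starting a new one');
        cluster.fork();
    });
} else {
    var app = require('./app'); // hypothetical: the existing Express app
    app.listen(3000);           // port is an example
}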

1Map (Author) commented Oct 23, 2017

Hi @wavded

No, this can be a single client making one large call to ogr2ogr, with no other clients online, and it can still get hung up.

wavded (Owner) commented Oct 23, 2017

Hmm, is the NGINX timeout too short? It seems like ogr2ogr is still processing.
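
For reference, the NGINX proxy timeout directives that usually matter here (proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout) all default to 60 seconds. A sketch of raising them for the export location only; the path, upstream, and values are examples, not the actual config:

# Inside the server block that proxies to the Node.js app (example values).
location /export {
    proxy_pass http://127.0.0.1:3000;   # hypothetical upstream for the Node app
    proxy_connect_timeout 75s;
    proxy_read_timeout    600s;
    proxy_send_timeout    600s;
}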

1Map (Author) commented Oct 23, 2017

@wavded Thanks for the help so far. Where exactly can I start looking for this in NGINX?

What I can say is the following: the app is stable when ogr2ogr is not running. The moment ogr2ogr kicks in and does a lengthy job, everything becomes unstable. The moment ogr2ogr finishes the job, everything is back to normal (without restarting any service). Also, the 502 errors (while ogr2ogr is processing) are intermittent, not constant.

Also, I create the ogr2ogr job in the background and immediately return a response to the client. It is not that the client has to wait for the job to finish to get a response.
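
For context, the request flow described above ("start the export in the background and respond right away") looks roughly like this; the route path, format string, and output path are hypothetical, not the actual app code:

var express = require('express');
var ogr2ogr = require('ogr2ogr');
var app = express();

app.post('/export', function (req, res) {
    // Kick off the ogr2ogr export in the background...
    ogr2ogr('PG:host=localhost port=5432 user=pguser dbname=mydb password=pgpass')
        .format('ESRI Shapefile')        // hypothetical target format
        .skipfailures()
        .destination('/tmp/export.shp')  // hypothetical output path
        .exec(function (er) {
            if (er) console.error('export failed:', er);
        });
    // ...and answer the client right away instead of waiting for it to finish.
    res.status(202).send('Export started');
});

app.listen(3000);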

wavded (Owner) commented Feb 14, 2020

Closing due to age, reopen if issue persists.

wavded closed this as completed on Feb 14, 2020