Are there new features planned? #7

Open
catroot opened this issue Jul 1, 2016 · 6 comments

Comments

@catroot
Contributor

catroot commented Jul 1, 2016

Hello @saghul!
Please tell us: are WebSocket and HTTPS (or even HTTP/2) support on the roadmap for this project?
Which feature will be the next step?

@saghul
Owner

saghul commented Jul 1, 2016

Hey @catroot! I'm not actively working on this project, so I'd say it's "finished", as in, I'm not planning to add new features. One thing I'd like to have, though, is multi-process support, which should be relatively easy, but I never got around to doing it.

If you do have plans for it, please let me know, send a PR and if all looks good I'll give you commit access.

@catroot
Contributor Author

catroot commented Jul 1, 2016

I've got your point. I'll think about it.

BTW, multiprocessing easily covers any single-instance server.
For example, here is my use case:

from .mywsgiapp import App


def init_worker():
    # Ignore SIGINT in the workers so only the parent handles Ctrl+C.
    import signal
    signal.signal(signal.SIGINT, signal.SIG_IGN)


if __name__ == '__main__':
    import multiprocessing as mp
    from functools import partial
    from uvwsgi import run

    # One address per CPU core, on sequential ports 5000, 5001, ...
    addresses = lambda n: [('0.0.0.0', 5000 + i) for i in range(n)]
    runx = partial(run, App)
    with mp.Pool(mp.cpu_count(), init_worker) as pool:
        try:
            pool.map(runx, addresses(mp.cpu_count()))
        except (KeyboardInterrupt, SystemExit):
            pool.terminate()
            pool.join()
            print("\nInterrupted...\n")

This code will start as many servers as there are CPU cores on the system, each on a sequential port, so it's easy to load-balance them with nginx.
I think implementing multiprocessing inside uvwsgi is the wrong way to go.
The other problem is when you need communication between the processes.
That can be solved with Redis PUB/SUB, Celery, or RQ (see the sketch below).
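
For illustration, a minimal Redis pub/sub sketch using the redis-py client; the channel name and message here are made up for the example:

import redis  # the redis-py client

r = redis.Redis(host='localhost', port=6379)

# Subscriber side: a worker listens for events from its siblings.
sub = r.pubsub()
sub.subscribe('workers')          # hypothetical channel name

# Publisher side: any worker can broadcast an event.
r.publish('workers', 'cache-invalidated')

# Poll for messages; a real worker would do this in a loop or a thread.
for _ in range(5):
    msg = sub.get_message(timeout=1.0)
    if msg and msg['type'] == 'message':
        print(msg['data'])        # b'cache-invalidated'
        break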

@saghul
Owner

saghul commented Jul 1, 2016

I think implementing multiprocessing inside uvwsgi is the wrong way to go.

I respectfully disagree :-) IMHO load balancing based on sequential port numbers is not great. We could make uvwsgi listen on a single port and do the load balancing by itself (a rough sketch of the idea follows below). Easier to configure in nginx, easier to change.

Now, even if I added this, you would still be able to use your way since there would be an option for it :-)
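
For context, the usual way a single listening port is shared across processes is the classic pre-fork pattern: the parent binds the socket once, forks workers, and the kernel spreads accepted connections among them. A minimal sketch of the idea (illustration only, not uvwsgi's actual implementation):

import os
import socket


def serve(listener, worker_id):
    # Each worker accepts on the shared listening socket.
    while True:
        conn, addr = listener.accept()
        conn.sendall(b'handled by worker %d\n' % worker_id)
        conn.close()


if __name__ == '__main__':
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('0.0.0.0', 5000))
    listener.listen(128)

    workers = os.cpu_count() or 1
    for worker_id in range(workers):
        if os.fork() == 0:            # child inherits the listening socket
            serve(listener, worker_id)
            os._exit(0)

    for _ in range(workers):          # parent just waits for the children
        os.wait()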

@catroot
Contributor Author

catroot commented Jul 1, 2016

So you think the GIL lets you scale load across cores like a charm?
The only way to truly utilize all cores under the GIL is forking. Each fork binds to its own port and uses threads internally, which is enough to saturate one core. There is no case where nginx sends only one request at a time to a single backend. So doing the load balancing inside ONE Python process, throttled by the GIL, can't utilize all the CPU power, unlike a server written in C and wrapped by Python, which handles the load in its own threads without the GIL.
How will you deal with the GIL?

@saghul
Owner

saghul commented Jul 1, 2016

I never said there would be a single Python process :-) uvwsgi would fork, using the built-in facilities in pyuv, and pass handles around.
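
To illustrate the handle-passing part with something standard, here is a rough sketch using the stdlib's multiprocessing.reduction rather than pyuv's API, so treat it as an analogy only: the parent binds the socket and ships its file descriptor to a worker over a pipe.

import multiprocessing as mp
import socket
from multiprocessing import reduction


def worker(conn):
    # Receive the duplicated fd and rebuild a socket object around it.
    fd = reduction.recv_handle(conn)
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=fd)
    client, addr = listener.accept()
    client.sendall(b'hello from the worker\n')
    client.close()


if __name__ == '__main__':
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 5000))
    listener.listen(16)

    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=worker, args=(child_conn,))
    proc.start()
    reduction.send_handle(parent_conn, listener.fileno(), proc.pid)
    proc.join()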

@catroot
Contributor Author

catroot commented Jul 1, 2016

That's up to you. I don't see much benefit in it.
But maybe a better way would be to implement a libuv worker for gunicorn, which already does the same thing, but with a choice of different worker models.

PS
See https://github.com/veegee/guv
It seems this has already been done. I'll run a stress test on it.
