
Fixed issues with running out of resources and connection timing out #27

Merged (1 commit, Apr 6, 2020)

Conversation

@nalauder (Contributor) commented Apr 5, 2020

Had issues with program quitting when encountering a Timeout exception, now handles more gracefully.

Was also running out of resources due to thread pools being created but not closed.
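
For reference, here's a minimal sketch of both fixes, assuming dnspython's `dns.resolver` and a `multiprocessing.dummy` thread pool (names are illustrative, not the exact diff):

```python
from multiprocessing.dummy import Pool as ThreadPool

import dns.exception
import dns.resolver


def lookup(name):
    """Resolve one hostname; a timeout becomes 'no result' instead of
    an unhandled exception that kills the whole run."""
    try:
        dns.resolver.query(name, 'A')  # dnspython < 2.0 API
        return name
    except dns.exception.Timeout:
        return None
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None


def brute_force(names, threads=5):
    pool = ThreadPool(threads)
    try:
        results = pool.map(lookup, names)
    finally:
        pool.close()  # stop accepting new work
        pool.join()   # release worker threads and their resources
    return [r for r in results if r]
```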

@initstring (Owner) commented

Thank you so much for contributing! This is great, I didn't know about the need to close the thread pools.

I do have a question on this. I wonder if silently passing on a DNS timeout might cause someone to be unaware that all their queries are failing.

See this recent ticket, where it seems the user had DNS resolution issues in general: #23

Do you think it might be better to send an alert in the UI when a timeout occurs? I don't run into timeouts myself. If you add a print statement to one of your runs, how many timeouts do you generally see?

Could your timeouts be related to the resource exhaustion, i.e. so many DNS queries pending at once?

This looks like a good addition either way, though I may add a console alert on timeouts while allowing the program to continue.
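
A hypothetical way to instrument that count, just a sketch using dnspython (not code from this PR):

```python
import threading

import dns.exception
import dns.resolver

TIMEOUTS = 0
LOCK = threading.Lock()  # lookups run concurrently in the thread pool


def lookup(name):
    """Stand-in for the real check; counts and reports each timeout."""
    global TIMEOUTS
    try:
        dns.resolver.query(name, 'A')  # dnspython < 2.0 API
        return name
    except dns.exception.Timeout:
        with LOCK:
            TIMEOUTS += 1
            print('[!] DNS timeout on {} (total so far: {})'.format(name, TIMEOUTS))
        return None
```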

@initstring initstring changed the base branch from master to dev April 6, 2020 20:29
@initstring initstring merged commit 8e2381b into initstring:dev Apr 6, 2020
@initstring (Owner) commented

I've merged the changes to a dev branch but tweaked slightly: https://github.com/initstring/cloud_enum/tree/dev

Would you be able to test and let me know your results? Curious how many timeouts you are getting.

Basically, I added an alert to the console and also lowered the timeout threshold to 10 seconds. A DNS query that takes 10 seconds seems like it should reasonably be treated as failed, but let me know if you think otherwise.
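
For reference, lowering the threshold in dnspython looks roughly like this (a sketch; the actual dev-branch change may differ):

```python
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.timeout = 10   # seconds to wait on any single nameserver
resolver.lifetime = 10  # total seconds before the query raises dns.exception.Timeout

answer = resolver.query('example.com', 'A')  # dnspython < 2.0 API
```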

Thanks again! Your contribution is greatly appreciated!!!

@nalauder (Contributor, Author) commented Apr 6, 2020

Thanks for the inclusion.

I like your idea better with the alert to the user; it makes the failure visible and useful rather than silent. That would help in cases where users get different results from different hosts, since they can actually see the errors.

I'm not so sure the resource limits and the timeouts are related, though. I suspect it has more to do with the Python garbage collector not cleaning up old pools quickly, whereas pool.close() cleans a pool up as soon as it's finished. It's possible the timeouts kept each pool alive longer, so the garbage collector held off removing it in case it got reused. Either way, I haven't seen the resource errors since the fix went in, and I was running on a host with limited resources and an older Python version.
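
That deterministic cleanup can also be expressed with a context manager, which tears the pool down as soon as the block exits instead of waiting for the garbage collector (illustrative sketch, Python 3):

```python
from multiprocessing.dummy import Pool as ThreadPool


def lookup(name):
    return name  # stand-in for the real DNS check


names = ['a.example.com', 'b.example.com']

# Exiting the with-block calls terminate() on the pool, so the worker
# threads are released immediately rather than whenever GC runs.
with ThreadPool(5) as pool:
    results = pool.map(lookup, names)
```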

In a classic testing move, I am unable to reproduce the issue! So something else may have changed on my end since I actually made the fix (about a week ago).

Big fan of the tool!

@initstring (Owner) commented

Thanks @nalauder !!

I'm keeping the pool.close() in, as it definitely makes sense. I'll merge to master soon.
