
Use port scan timing data to influence service detection parallelism #55

Open
dmiller-nmap opened this issue Feb 5, 2015 · 0 comments


dmiller-nmap commented Feb 5, 2015

In scan_engine.cc, the ideal parallelism for the service scan is set based on the -T timing template and the --min-parallelism and --max-parallelism settings. Unfortunately, this may be lower or higher than what the network can actually support, so it would be better to use timing info gathered during the port scan phase to influence this number.
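
Here is a rough sketch (not the actual Nmap API; the `PerHostTiming` struct, `final_cwnd` field, and function name are assumptions for illustration) of how the service scan could seed its parallelism from congestion data learned during the port scan instead of from the timing template alone:

```c++
#include <algorithm>

// Hypothetical summary of what the port scan learned about one target.
struct PerHostTiming {
  double final_cwnd;    // congestion window at the end of the port scan (probes in flight)
  int timedout_probes;  // probes that never received a response
};

static int choose_service_scan_parallelism(const PerHostTiming &t,
                                            int min_par, int max_par,
                                            int template_default) {
  // Start from what the port scan measured rather than the -T template alone.
  int desired = static_cast<int>(t.final_cwnd);

  // A port-scan "slot" is a packet in flight, but a service-scan slot is a
  // full TCP connection, so scale the estimate down conservatively.
  desired /= 2;

  // Fall back to the timing-template default if the port scan data is unusable.
  if (desired <= 0)
    desired = template_default;

  // Respect the user's explicit --min-parallelism/--max-parallelism bounds.
  return std::min(std::max(desired, min_par), max_par);
}
```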

Here are some progressive improvements that could be made:

  1. Choose desired_par based on port scan timing results.
  2. Tune desired_par empirically so that it accurately reflects what the network can handle: the port scan's idea of parallelism is the number of packets in transit, while the service scan's is the number of connections.
  3. Adjust timeouts and parallelism dynamically during the scan based on timed-out connections (a sketch of this idea follows the list).
  4. Introduce per-host parallelism to account for slow targets without slowing down the entire scan.
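
As a sketch of item 3, parallelism could be adjusted with an AIMD-style scheme, shrinking on connection timeouts and growing slowly on successful probes, similar in spirit to the congestion control the port scan engine already does per packet. None of these names come from the Nmap source; they are illustration only:

```c++
#include <algorithm>

// Hypothetical controller that tracks the current service-scan parallelism.
class ServiceScanCongestion {
public:
  ServiceScanCongestion(int initial, int min_par, int max_par)
      : current_(initial), min_(min_par), max_(max_par) {}

  // A probe completed (response received or service matched): additive increase.
  void on_success() {
    if (++successes_ % 4 == 0)                 // grow only every few successes
      current_ = std::min(current_ + 1, max_);
  }

  // A probe's connection timed out: multiplicative decrease.
  void on_timeout() {
    current_ = std::max(current_ / 2, min_);
  }

  int parallelism() const { return current_; }

private:
  int current_, min_, max_;
  int successes_ = 0;
};
```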

Maybe we can take cues from scan_engine_connect and @Deetah's Nsock-based port scanning GSoC 2014 project to see how to do timing with our Nsock-based service scan.
