
NSE completion time and percentage estimates are inaccurate #701

Open
dmiller-nmap opened this issue Feb 24, 2017 · 3 comments

Comments

@dmiller-nmap

https://twitter.com/truekonrads/status/835221937005576193

This is a difficult problem without a clear solution. The way we currently display NSE completion estimates is based on the number of threads completed versus the number begun. But because threads can spawn new ones, and some take a long time by design, it's really hard to give a realistic estimate of how long before NSE is done running.
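
To illustrate why the thread-count ratio misleads (a toy sketch of the idea, not Nmap's actual code):

```lua
-- Toy model of the current estimate: threads completed over threads begun.
local function progress(completed, begun)
  if begun == 0 then return 0 end
  return completed / begun
end

print(progress(90, 100))  --> 0.9, looks 90% done
-- But if the 10 remaining threads each spawn 5 new ones,
-- the estimate moves backwards:
print(progress(90, 150))  --> 0.6
```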

This issue is mostly a reminder that this is a problem. Designing a solution could take a while.

@dmiller-nmap (Author)

Some brainstorming:

  • What if scripts could provide an estimate of the number of connections they will make, or the number of seconds they will sleep? NSE could then track actual connection and sleep events against that estimate (see the sketch after this list).
  • A common case is one script holding things up at the end: for example, the user ran --script ftp-* and didn't realize "ftp-brute" was included. When we print the stats at the end, maybe we could check and print the names of the remaining scripts if there are fewer than 5 or so? There's nothing the user can do at that point, but it could be helpful for reporting slow scripts or avoiding them in the future.
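
A minimal sketch of the first idea, assuming a hypothetical connection_estimate field that scripts could declare and engine hooks that NSE would need to grow (none of this exists today):

```lua
-- Everything here is hypothetical: NSE has no connection_estimate
-- field and no such engine hooks.

-- In a script, next to description and categories:
connection_estimate = 64   -- "I expect to open roughly 64 connections"

-- In the engine:
local budgets, opened = {}, {}   -- script name -> estimate / actual count

local function on_connect(script)            -- invoked per connect event
  opened[script] = (opened[script] or 0) + 1
end

local function script_progress(script)
  local budget = budgets[script]
  if not budget or budget == 0 then
    return nil                               -- script gave no estimate
  end
  return math.min((opened[script] or 0) / budget, 1.0)
end
```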

@Varunram

Adding to your suggestion: we could keep a counter of the number of threads spawned and, based on the estimated number of connections each script makes, provide a (somewhat) fair estimate of the time remaining (see the sketch below). I don't know whether it would be possible to estimate the number of connections for all scripts, though.
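
A sketch of how those two pieces could combine into a time-remaining estimate; the function and its inputs are illustrative, not NSE's API:

```lua
-- Project time remaining from the average time per completed
-- connection so far. Illustrative only.
local function eta_seconds(elapsed, conns_done, conns_expected)
  if conns_done == 0 then return nil end     -- no rate data yet
  if conns_expected <= conns_done then return 0 end
  return (elapsed / conns_done) * (conns_expected - conns_done)
end

-- 40 of an estimated 100 connections done after 120 seconds:
print(eta_seconds(120, 40, 100))  --> 180.0, about three more minutes
```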

@djcater

djcater commented Mar 2, 2017

Certainly it would be nice to know which scripts are holding up the end of the scan. I've found that using --script-timeout=15m is a good solution: it gives the scripts time to finish on their own, but enforces a hard cutoff if for some reason they get stuck.
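
For reference, that looks like this on the command line (the target and script selection here are only examples):

```
nmap --script "ftp-*" --script-timeout=15m scanme.nmap.org
```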
