Starting Nmap 7.94SVN ( https://nmap.org ) at 2023-10-26 01:14 UTC
************************INTERFACES************************
DEV (SHORT) IP/MASK TYPE UP MTU MAC
eth0 (eth0) 169.254.50.24/16 ethernet up 1500 00:50:56:9D:9A:84
eth0 (eth0) 10.2.154.84/16 ethernet up 1500 00:50:56:9D:9A:84
lo (lo) 127.0.0.1/8 loopback up 65536
lo (lo) ::1/128 loopback up 65536
**************************ROUTES**************************
DST/MASK DEV METRIC GATEWAY
10.2.154.1/32 eth0 100
10.2.154.0/24 eth0 0
169.254.0.0/16 eth0 0
0.0.0.0/0 eth0 100 10.2.154.1
::1/128 lo 0
::1/128 lo 256
We are seeing this on multiple machines across multiple hypervisors. Duplicate entries in the exclude list will sometimes, but not always, cause the program to use all the memory the system will give it. We are going to work around it, but we would love to see a fix in the next release if possible.
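One possible workaround until a fix lands (our sketch, not something from the report; the addresses are placeholders) is to deduplicate the exclude list before handing it to nmap:

```shell
# Deduplicate a comma-separated exclude list before passing it to nmap
EXCLUDES="192.168.1.1/32,192.168.1.1/32,192.168.1.2/32"
DEDUPED=$(printf '%s\n' "$EXCLUDES" | tr ',' '\n' | sort -u | paste -sd, -)
echo "$DEDUPED"   # → 192.168.1.1/32,192.168.1.2/32
# then: ./nmap --exclude "$DEDUPED" <targets>
```

Note that `sort -u` also reorders the entries, which is harmless for an exclude list.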
I built from the SVN head with ./configure && make debug, but the bug also shows up with the 7.94 version given on your website (rpm), converted to a deb using alien.
It seems like it may be a bug in the trie implementation.
Using gdb, if you attach after the program has started (it seems like it won't complete, as above; it starts to fill memory quickly), it seems to be doing a calloc to create a new trie_node. With a breakpoint set in the loop in trie_split, continuing many times appears to keep hitting that breakpoint.
It appears that it keeps trying to split the trie many, many times; it is still in this function 5 seconds after program start. I'm not very familiar with the trie data structure, but I suspect it is the cause of the bug. If I had to guess, it's just creating more and more trie nodes unnecessarily.
The command below appears not to exhibit the issue; note that it contains no duplicates.
./nmap --exclude 192.168.1.1/32,192.168.1.2/32,192.168.1.3/32 127.0.0.1
Starting Nmap 7.94SVN ( https://nmap.org ) at 2023-10-26 01:34 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00037s latency).
Not shown: 996 closed tcp ports (reset)
PORT STATE SERVICE
22/tcp open ssh
873/tcp open rsync
[... omitted intentionally.]
Nmap done: 1 IP address (1 host up) scanned in 1.82 seconds
Describe the bug
Duplicate entries in the exclude list cause nmap to allocate memory very quickly until the system runs out.
To Reproduce
You may have to run it multiple times; if it takes more than 5 seconds, you've probably hit the bug.
dmesg
will tell you that the OOM killer got it.
Expected behavior
Program finishes without using all the memory.
Version info (please complete the following information):
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
Additional context
Seems to not happen on Nmap 7.80.
Nmap version 7.80 ( https://nmap.org )
Platform: x86_64-pc-linux-gnu
Compiled with: liblua-5.3.3 openssl-1.1.1d nmap-libssh2-1.8.2 libz-1.2.11 libpcre-8.39 libpcap-1.9.1 nmap-libdnet-1.12 ipv6
Compiled without:
Available nsock engines: epoll poll select