Ever since the first dark web monitoring services became available around 2005, consumers of such services have often asked: why aren’t these websites being taken down? After all, the sites that comprise the dark web are platforms and tools for illegal activities.
The answer, which used to satisfy most, was that these sites are intelligence sources, and taking them down means the criminals will congregate somewhere else – somewhere that may not be known to those who monitor them.
These sites are intelligence sources for both law enforcement and security vendors; without them, there is less intelligence to prevent fraud, recover credentials, and reveal the true identities of criminals.
Law enforcement’s main goal is to apprehend criminals, and since building a case takes time, dark web boards can be a treasure trove of leads and relevant evidence – one that keeps growing as criminals post new content. It makes sense not to touch a board until agencies are ready to make their move and apprehend the bigger players on a given site, usually through an international operation involving multiple local agencies.
For security vendors, takedowns are simply operationally expensive. If a site goes down, it may come back somewhere else in a different board format, forcing the vendor to develop new crawlers. Alternatively, it may return protected by a new bot detection service that must be circumvented, or worse – not come back at all.
These are all good reasons, supporting the notion that dark web monitoring should be performed with as little disruption as possible. However, there is a case to be made for the opposite strategy – disrupting the dark web as much as possible – and unlike in the early days of dark web monitoring, that option is hardly discussed at all.