Passive crawling from external sources support #139
Comments
@longnguyenhuynh Katana is primarily an active web crawler; however, thanks to your feature suggestion, passive sources will be added soon.
@ehsandeep, could you detail how passive URLs should be handled? For example, are they only retrieved and listed?
It'd be cool if you could use passive sources to seed the spider: if a page shows up in a passive source but currently isn't linked to anywhere, so the active spider wouldn't find it, the passive record could be spidered from and new pages could potentially be found.
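To make the seeding idea concrete, here is a minimal sketch (not katana code) that pulls passively observed URLs for a domain from urlscan.io and prints them as candidate crawl seeds. The response fields and the idea of pushing these onto the crawler's queue are assumptions for illustration only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// urlscanResponse models only the fields needed from the urlscan.io
// search API response (assumed shape, trimmed for brevity).
type urlscanResponse struct {
	Results []struct {
		Page struct {
			URL string `json:"url"`
		} `json:"page"`
	} `json:"results"`
}

// fetchPassiveURLs queries urlscan.io for URLs observed for the given domain.
func fetchPassiveURLs(domain string) ([]string, error) {
	resp, err := http.Get("https://urlscan.io/api/v1/search/?q=domain:" + domain)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var data urlscanResponse
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		return nil, err
	}

	urls := make([]string, 0, len(data.Results))
	for _, r := range data.Results {
		urls = append(urls, r.Page.URL)
	}
	return urls, nil
}

func main() {
	// Passively discovered URLs could be pushed onto the crawler's queue
	// alongside the user-supplied targets; here we just print them.
	seeds, err := fetchPassiveURLs("example.com")
	if err != nil {
		panic(err)
	}
	for _, u := range seeds {
		fmt.Println("seed:", u)
	}
}
```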
@Mzack9999 the issue is now updated with details. @fail-open good idea, and something to consider/work on after the passive support implementation.
in naabu we use …
Please describe your feature request:
Add the ability to get passive URLs / endpoints from -

- Virus Total
- URLScan

CLI Options -
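As a rough sketch of how the proposed options might surface in the help output (the descriptions are assumptions; only -passive and -passive-source come from this request):

```console
PASSIVE:
   -passive                 enable passive endpoint discovery from external sources
   -passive-source string   passive source(s) to use, single or comma-separated
```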
JSON Output -
Example run:
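For illustration, an invocation with the proposed flags might look like this (a sketch; the target URL and the source identifiers urlscan/virustotal are placeholders):

```console
# seed the crawl from a single passive source
katana -u https://example.com -passive -passive-source urlscan

# or from multiple comma-separated sources
katana -u https://example.com -passive -passive-source urlscan,virustotal
```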
Note:

- -passive-source option can specify single or multiple (comma-separated) sources.
- All sources are used by default with the -passive flag.

Describe the use case of this feature:
Katana is missing some important URLs compared to other crawler tools.