Greylost TODO

This looks dumb rendered in GitHub. Open with org-mode for better results.

get basic PoC working

timestamps

sort responses before adding to bloom filter

Queries with multiple responses aren’t guaranteed to return records in the same order each time they are resolved. Sort the responses before adding them to the bloom filter so the same answer set isn’t counted dozens of times just because it arrived out of order.
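
A minimal sketch of the canonicalization, assuming a filter object with an add() method; the names are illustrative, not greylost’s actual API:

#+BEGIN_SRC python
def add_response(bloom_filter, query_name, answers):
    # Sort the answer records so ("1.2.3.4", "5.6.7.8") and
    # ("5.6.7.8", "1.2.3.4") canonicalize to the same element.
    canonical = query_name + "|" + ",".join(sorted(answers))
    bloom_filter.add(canonical)
#+END_SRC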

baseline timer

Don’t alert on new queries until N time has passed. This lets the software baseline normal DNS traffic before generating alerts.
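
Roughly the idea, with a hypothetical BASELINE_SECONDS standing in for N:

#+BEGIN_SRC python
import time

BASELINE_SECONDS = 3600  # hypothetical N; would come from a CLI flag
START_TIME = time.time()

def past_baseline():
    # Suppress "new query" alerts until the baseline window has elapsed.
    return time.time() - START_TIME >= BASELINE_SECONDS
#+END_SRC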

argparse for interface, promisc, etc

logging

HUP signal reopens log files.
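
A sketch of the usual daemon pattern, with a hypothetical log path; on SIGHUP the file is closed and reopened so logrotate can take the old one:

#+BEGIN_SRC python
import signal

LOG_PATH = "/var/log/greylost/dns.log"  # hypothetical path
log_fp = open(LOG_PATH, "a")

def handle_hup(signum, frame):
    # Close and reopen the file at the same path so a rotated file
    # doesn't keep receiving writes through a stale handle.
    global log_fp
    log_fp.close()
    log_fp = open(LOG_PATH, "a")

signal.signal(signal.SIGHUP, handle_hup)
#+END_SRC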

daemonize

finish IPv6 in pypacket

investigate pypacket alternatives

offline mode?

This might not work great; I don’t know if pcaps keep timestamps in a manner I can utilize.

Abandoning this idea. Might do a different toolset to analyze pcaps.

https://www.elvidence.com.au/understanding-time-stamps-in-packet-capture-data-pcap-files/

add mmh3 to requirements.txt

Using mmh3 for hashing should speed things up a bit.
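
For illustration, a common way to use mmh3 in a bloom filter is double hashing, deriving all k bit positions from two murmur3 hashes; this is not necessarily how greylost derives its indexes:

#+BEGIN_SRC python
import mmh3  # pip install mmh3

def bloom_indexes(element, num_hashes, filter_size):
    # Kirsch-Mitzenmacher double hashing: derive all k bit positions
    # from two murmur3 hashes instead of k separate hash functions.
    h1 = mmh3.hash(element, 0)
    h2 = mmh3.hash(element, 1)
    return [(h1 + i * h2) % filter_size for i in range(num_hashes)]
#+END_SRC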

Splunk/ELK

Add examples of how to ingest this data. Don’t really have to add any code for this…

ignore list for bloom filter

McAfee is making a ton of random-looking resolutions. We know this particular case is benign, so add a feature to ignore these queries.
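
A minimal sketch of the check; the suffixes are hypothetical, and the real list would presumably come from a config file or CLI flag:

#+BEGIN_SRC python
# Hypothetical ignore list; loaded from configuration in practice.
IGNORE_SUFFIXES = (".mcafee.com.",)

def should_ignore(query_name):
    # Skip queries matching the ignore list before they reach the filter.
    return query_name.lower().endswith(IGNORE_SUFFIXES)
#+END_SRC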

cli flags to set logfile paths

ability to save/reload filter (for reboots/restarts)
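
One straightforward approach is to pickle the filter, assuming the filter object is picklable; the path is hypothetical:

#+BEGIN_SRC python
import pickle

FILTER_PATH = "/var/lib/greylost/filter.pkl"  # hypothetical path

def save_filter(bloom_filter, path=FILTER_PATH):
    with open(path, "wb") as fp:
        pickle.dump(bloom_filter, fp)

def load_filter(path=FILTER_PATH):
    with open(path, "rb") as fp:
        return pickle.load(fp)
#+END_SRC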

log in pcap format

test on authoritative DNS server

remove repetitive patterns

cli flags to enable/disable specific logs (all, not dns, …)

webhook alerts

For really important events, send a webhook alert. Closing this: alerting should be done via Splunk or ELK instead.

TimeFilter currently stores decimal timestamps. Look into storing ints instead.

Since we don’t need that precision, storing integers would save space in RAM and on disk when the filter is pickled.
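
The change amounts to something like this:

#+BEGIN_SRC python
import time

# Whole seconds are plenty of precision here; an int pickles smaller
# and uses less RAM than a full-precision decimal timestamp.
timestamp = int(time.time())
#+END_SRC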

pid file watchdog script for crontab

handle out of memory issues gracefully

Currently, if there’s not enough RAM, it throws a MemoryError and crashes. Catch these exceptions, and add a way to calculate how much RAM a filter of a given size will require.
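
The RAM estimate can use the standard bloom filter sizing formula, m = -n * ln(p) / (ln 2)^2 bits for n elements at false-positive rate p; a sketch with MemoryError handling:

#+BEGIN_SRC python
import math

def filter_bits(capacity, false_positive_rate):
    # Standard sizing formula: m = -n * ln(p) / (ln 2)^2 bits.
    return math.ceil(-capacity * math.log(false_positive_rate)
                     / (math.log(2) ** 2))

def estimate_megabytes(capacity, false_positive_rate):
    return filter_bits(capacity, false_positive_rate) / 8 / 1024 / 1024

try:
    bit_field = bytearray(filter_bits(10_000_000, 0.001) // 8 + 1)
except MemoryError:
    print("Not enough RAM; a filter this size needs about %.1f MB"
          % estimate_megabytes(10_000_000, 0.001))
#+END_SRC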

cleanup: are _functions necessary?

use syslog when daemonized: service starts, stops, signals received, …
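
A sketch using the stdlib SysLogHandler; the /dev/log socket path is the Linux default and differs on other platforms:

#+BEGIN_SRC python
import logging
import logging.handlers

syslog = logging.handlers.SysLogHandler(address="/dev/log")  # Linux default
logger = logging.getLogger("greylost")
logger.addHandler(syslog)
logger.setLevel(logging.INFO)

logger.info("greylost started")  # likewise for stops and signals received
#+END_SRC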

config file

systemd and init scripts to start as a service

rotate pcap files?

Alerting for resolutions of known-bad domains

  • http.kali.org
  • start.parrotsec.org

ability to pull in from feeds

This might be worthy of an entirely new tool: pull in multiple feed sources and store them in a format that can be used universally.

shared bloom filter when using multiple resolvers

This will be another project, but has other potential use cases:

  • NSRL
  • known bad malware hashes
  • is a password known to be in a breach?
  • known good hashes for WordPress, Drupal, Joomla, …

example HTTP API:

  • /add?filter=name_here&element=element_goes_here
  • /lookup?filter=name_here&element=element_goes_here
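
A minimal stdlib-only sketch of such a service, with a plain Python set standing in for each shared bloom filter:

#+BEGIN_SRC python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

filters = {}  # name -> set(); stand-in for real shared bloom filters

class FilterAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        params = parse_qs(url.query)
        name = params.get("filter", [""])[0]
        element = params.get("element", [""])[0]
        if url.path == "/add":
            filters.setdefault(name, set()).add(element)
            body = b"added\n"
        elif url.path == "/lookup":
            body = b"true\n" if element in filters.get(name, set()) else b"false\n"
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), FilterAPI).serve_forever()
#+END_SRC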

add malicious domains to the blocklist when used with dnsmasq

detect DNS protocol abuses

  • weird TXT/NULL records
  • reallylongsubdomaintosqueezeineverypossiblebyte.whatever.com
  • hex/baseN encoded stuff: aabbccddeeff.whatever.com
  • volume
  • not DNS at all; they are just sending data over port 53
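
Some of these heuristics are easy to sketch; the thresholds below are hypothetical guesses, not tuned values:

#+BEGIN_SRC python
import math
import re

HEX_LABEL = re.compile(r"^[0-9a-fA-F]{16,}$")  # hypothetical minimum length

def shannon_entropy(label):
    # High entropy in a label suggests encoded or compressed payload data.
    if not label:
        return 0.0
    probs = [label.count(c) / len(label) for c in set(label)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_tunneling(query_name):
    first_label = query_name.split(".")[0]
    return (len(first_label) > 52          # near the 63-octet label cap
            or HEX_LABEL.match(first_label) is not None
            or shannon_entropy(first_label) > 4.0)  # hypothetical cutoff
#+END_SRC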

setup.py

log to socket

Splunk and ELK can receive input from a TCP or UDP socket. Add an option to ship logs in this manner. This may be useful when operating as a sensor with limited resources; a sketch follows the list below.

Nice to have:

  • encryption
  • compression
  • maintain integrity if networking fails
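
A minimal sketch of the basic shipping, one JSON record per line over TCP, a framing both Splunk and Logstash TCP inputs accept; host and port are hypothetical, and the nice-to-haves above would layer on top:

#+BEGIN_SRC python
import json
import socket

def ship_log(record, host="127.0.0.1", port=5514):
    # One JSON object per line; opening a connection per record keeps the
    # sketch simple, but a real sender would hold the socket open.
    line = (json.dumps(record) + "\n").encode()
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line)

ship_log({"event": "dns_query", "name": "example.com.", "qtype": "A"})
#+END_SRC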

interactive mode

command prompt w/ readline and whatnot.

ability to toggle settings.

ability to query/add elements to ignore/malware lists

highlight output
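
A skeleton using the stdlib cmd module, which picks up readline line editing where available; the commands are hypothetical:

#+BEGIN_SRC python
import cmd

class GreylostShell(cmd.Cmd):
    prompt = "greylost> "

    def do_ignore(self, arg):
        """ignore <domain> -- add a domain to the ignore list"""
        print("would add %r to the ignore list" % arg)

    def do_quit(self, arg):
        """quit -- exit interactive mode"""
        return True

if __name__ == "__main__":
    GreylostShell().cmdloop()
#+END_SRC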