
[Bug] MineMeld_1_0 observable not reaching destination #773

Closed
mparis1 opened this issue May 23, 2020 · 5 comments
Labels
category:bug Issue is related to a bug

Comments

@mparis1

mparis1 commented May 23, 2020

Describe the bug
Submitting an observable (IP) from TheHive to the MineMeld_1_0 responder runs successfully; however, the indicator does not get added to the target indicator_list (no errors returned).

To Reproduce
Steps to reproduce the behavior:

  1. Have a fresh Palo Alto MineMeld instance running on Docker.
  2. Create an indicator list in Palo Alto MineMeld and add it as a miner.
  3. Ensure you have MineMeld_1_0 as a responder; you may need to mount the responders volume in your container (see the docker-compose below) or in your base installation.
  4. Enable the MineMeld_1_0 responder in your Cortex instance with the required parameters.
  5. From your TheHive instance, run an IP observable through the MineMeld_1_0 responder (a direct Cortex API variant is sketched after this list).
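
To take TheHive out of the loop when reproducing, the responder can also be invoked directly against the Cortex API. A minimal Python sketch; the /api/responder/<id>/run route and the payload shape are assumptions modeled on how TheHive drives Cortex responders, so verify against your Cortex version's API docs:

import requests

CORTEX_URL = "http://localhost:9001"   # matches the port mapping in the compose file below
API_KEY = "REDACTED"                   # a Cortex API key allowed to run jobs
RESPONDER_ID = "..."                   # id Cortex assigns to the enabled MineMeld_1_0 responder

# Run the responder directly, bypassing TheHive entirely.
resp = requests.post(
    f"{CORTEX_URL}/api/responder/{RESPONDER_ID}/run",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"data": "203.0.113.10", "dataType": "ip", "tlp": 2},
)
resp.raise_for_status()
print(resp.json())  # returns a job descriptor; check its status in the Cortex job history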

Expected behavior
Currently, when performing this operation, the observable gets submitted to the job queue in Cortex; however, it does not get posted to the MineMeld list.
The expectation is that the indicator, once submitted by TheHive, will be posted to the defined list.

Complementary information
[three screenshots omitted]

Work environment

  • Client OS: Ubuntu 18.04
  • Server OS: Ubuntu 18.04
  • Browser type and version:
  • Cortex version: 3.0.0 RC3
  • Cortex Analyzer/Responder name: MineMeld_1_0
  • Cortex Analyzer/Responder version: 1.0

Possible solutions

  • The issue may be certificate-related, in that a valid CA-signed certificate is needed (a sketch of the verification behavior follows this list).
  • The issue may be related to Cortex 3.0.0-RC3; however, this is the only stable Docker image I could find.
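
On the certificate point: with the requests library, a self-signed certificate does not by itself make a call fail when verification is disabled; it only triggers urllib3's InsecureRequestWarning. A minimal sketch of both behaviors (the URL and bundle path are illustrative, not the responder's actual code):

import requests

MINEMELD_URL = "https://minemeld.example.local"  # hypothetical MineMeld instance

# verify=False skips certificate validation; urllib3 emits the
# InsecureRequestWarning seen in Cortex job logs, but the request still succeeds.
requests.get(MINEMELD_URL, verify=False)

# With a CA-signed or explicitly trusted certificate, point verify at the
# CA bundle instead and the warning goes away.
requests.get(MINEMELD_URL, verify="/etc/ssl/certs/ca-certificates.crt")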

Additional context
This is the docker compose of my deployment:

version: "3.2"
services:
elasticsearch:
image: elasticsearch:6.8.9
environment:
- http.host=0.0.0.0
- discovery.type=single-node
- thread_pool.index.queue_size=100000
- thread_pool.search.queue_size=100000
- thread_pool.bulk.queue_size=100000
ulimits:
nofile:
soft: 65536
hard: 65536
cortex:
image: thehiveproject/cortex:3.0.0-RC3
depends_on:
- elasticsearch
ports:
- "0.0.0.0:9001:9001"
volumes:
- "/home/ubuntu/cortex/Cortex-Analyzers/responders:/opt/Cortex-Analyzers/responders"
thehive:
image: thehiveproject/thehive:latest
depends_on:
- elasticsearch
- cortex
ports:
- "0.0.0.0:9000:9000"
#volumes:
#- "/home/ubuntu/thehive/application.conf:/etc/thehive/application.conf"
command: --cortex-port 9001 --cortex-key [key]

mparis1 added the category:bug label on May 23, 2020
@weslambert
Contributor

Can you share your config, redacting as necessary? Have you checked TheHive/Cortex logs for clues?

@mparis1
Author

mparis1 commented May 24, 2020

Thank you for responding so quickly, Wes. I am confident this is something on the Cortex side, as the action to run the job from TheHive to Cortex does successfully kick off the responder.

When tailing the Cortex application log (/var/log/cortex/application.log) in the container during execution, the following is observed:

2020-05-24 00:32:15,147 [INFO] from org.thp.cortex.services.JobSrv in application-akka.actor.default-dispatcher-11 - Job cache is disabled
2020-05-24 00:32:16,568 [INFO] from org.thp.cortex.services.AuditActor in application-akka.actor.default-dispatcher-2 - Register new listener for job sGUYRHIBXsWc-cIGL8ou (Actor[akka://application/temp/$i])
2020-05-24 00:32:16,663 [INFO] from org.thp.cortex.services.AuditActor in application-akka.actor.default-dispatcher-2 - Job sGUYRHIBXsWc-cIGL8ou has be updated (JsDefined("InProgress"))
2020-05-24 00:32:16,663 [INFO] from org.thp.cortex.services.ProcessJobRunnerSrv in application-akka.actor.default-dispatcher-11 - Execute /opt/Cortex-Analyzers/responders/Minemeld/minemeld.py in /opt/Cortex-Analyzers/responders, timeout is 30 minutes
2020-05-24 00:32:16,820 [INFO] from org.thp.cortex.services.ProcessJobRunnerSrv in Thread-52 - Job sGUYRHIBXsWc-cIGL8ou: /usr/local/lib/python3.5/dist-packages/urllib3/connectionpool.py:1004: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
2020-05-24 00:32:16,821 [INFO] from org.thp.cortex.services.ProcessJobRunnerSrv in Thread-52 - Job sGUYRHIBXsWc-cIGL8ou: InsecureRequestWarning,
2020-05-24 00:32:18,678 [INFO] from org.thp.cortex.services.AuditActor in application-akka.actor.default-dispatcher-11 - Job sGUYRHIBXsWc-cIGL8ou has be updated (JsDefined("Success"))
2020-05-24 00:32:18,680 [INFO] from org.thp.cortex.services.JobSrv in application-akka.actor.default-dispatcher-2 - Job sGUYRHIBXsWc-cIGL8ou has finished with status Success

It shows as successful, but with warnings about SSL. After invoking the job from the TheHive case, I confirmed via tcpdump that calls were made over 443 from the Cortex server to the MineMeld server, and that the IPv4 list is in place.
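
For anyone reproducing this without tcpdump in the container, the same TCP-level reachability check can be done from Python (the hostname is hypothetical):

import socket

# Confirm the Cortex host can open a TCP connection to MineMeld on 443.
with socket.create_connection(("minemeld.example.local", 443), timeout=5):
    print("TCP 443 reachable")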

This was validated via the Job History on Cortex as well:
[screenshot: Cortex job history]

Cortex-Responder Configuration:
[screenshot: Cortex responder configuration]

MineMeld IPv4 Node Configuration:

[screenshot: MineMeld IPv4 node configuration]

I'm not sure what I'm missing. I am running MineMeld version 0.9.68.

Currently, our workflow workaround is to use this script to manually upload indicators to MineMeld: https://live.paloaltonetworks.com/t5/minemeld-articles/uploading-list-of-indicators-to-minemeld/ta-p/162242 (a rough sketch of what it does follows below).

We're looking to replace that with your responder!
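
For reference, that script essentially posts each indicator to MineMeld's config API in a loop. A rough Python sketch of the idea; the /config/data/<node>_indicators/append route, its query parameters, and the payload fields are assumptions modeled on the linked article, so adjust for your MineMeld version:

import requests

MINEMELD_URL = "https://minemeld.example.local"  # hypothetical instance
MINER = "thehive_ipv4"                           # name of the target miner node
AUTH = ("admin", "REDACTED")

for ioc in ["203.0.113.10", "203.0.113.11"]:
    # Append one indicator to the miner's local database.
    r = requests.post(
        f"{MINEMELD_URL}/config/data/{MINER}_indicators/append",
        params={"h": MINER, "t": "localdb"},
        auth=AUTH,
        json={"indicator": ioc, "type": "IPv4", "share_level": "red", "ttl": "disabled"},
        verify=False,  # lab setup with a self-signed certificate
    )
    r.raise_for_status()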

Really appreciate your help in looking into this issue.

@mparis1
Author

mparis1 commented May 24, 2020

A quick addition to my previous response: it does seem that the responder is trying to submit an update to the list, and the behavior is repeatable.

See the tail of /opt/minemeld/log/minemeld-engine.log below.

IP submissions sent from TheHive appear at the start of each polling cycle:
[screenshot: minemeld-engine.log entries]

Next thing I did was manually add an indicator to that list and sure enough, the behavior is identical to TheHive calls.

[screenshot: matching minemeld-engine.log entries]

Definitely a head scratcher.

@mparis1
Author

mparis1 commented May 24, 2020

Hi Wes,

Wanted to pass along an update; we can close this issue out. The issue was that the list in MineMeld needs to use the class stdlib.localDB.
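
For anyone hitting the same wall, a quick sanity check that the target node really is a localDB miner before pointing the responder at it. A Python sketch, where the /status/minemeld route and the field names are assumptions based on what the MineMeld WebUI displays:

import requests

MINEMELD_URL = "https://minemeld.example.local"  # hypothetical instance
AUTH = ("admin", "REDACTED")

# List engine nodes with their classes; the target list should be a
# localDB miner (built from the stdlib.localDB prototype), otherwise
# API-submitted indicators never land in it.
r = requests.get(f"{MINEMELD_URL}/status/minemeld", auth=AUTH, verify=False)
r.raise_for_status()
for node in r.json().get("result", []):
    print(node.get("name"), node.get("class"))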

I would highly recommend updating the documentation to ensure that when a user configures a list, they use this class within MineMeld.

Best Regards,

mparis1

mparis1 closed this as completed on May 24, 2020
@weslambert
Contributor

weslambert commented May 25, 2020

Hi @mparis1, I can't recall if this information was already in the documentation for an example Minemeld node, but we'll certainly consider calling this out, if needed. Glad to hear your issue is resolved, and you are able to use the responder as intended!
