Add kubernetes enum module #14
Conversation
},
'Actions' => [
  ['all', { 'Description' => 'enumerate all resources' }],
  ['version', { 'Description' => 'enumerate version' }],
This ends up printing both the Kubernetes version as well as the msfconsole version. It turns out that multiple command dispatchers can receive the same cmd_ call. Still pondering a nicer solution here 🤔
Does the unknown_command handler that executes the actions need to be updated to return :handled? https://github.com/rapid7/metasploit-framework/blob/b11237fea02f1c7a6776e35fedb6fb5df70a1033/lib/rex/ui/text/dispatcher_shell.rb#L500-L545
cmd_status will already be set to :handled as we go down this path rather than the unknown_command route.
But it turns out that the guard for breaking out of the loop doesn't take the cmd_status result into consideration, so it continues to loop over all dispatchers.
I can update that guard clause, but I don't know the ramifications of that change just yet. I'm also on the fence about shadowing the default msfconsole 'version' command, unless we also provide an alias such as msfversion as a fallback 🤔
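As a rough, self-contained sketch of the idea (the names loosely mirror dispatcher_shell.rb, but this is not the framework's actual code), a guard on cmd_status would stop the remaining dispatchers from also answering the same command:

# Illustrative only: two toy dispatchers that both respond to 'version'.
dispatcher_stack = [
  ->(cmd) { cmd == 'version' ? (puts('Kubernetes version banner'); :handled) : nil },
  ->(cmd) { cmd == 'version' ? (puts('msfconsole version banner'); :handled) : nil }
]

def run_single(dispatcher_stack, cmd)
  dispatcher_stack.each do |dispatcher|
    cmd_status = dispatcher.call(cmd)
    # Without this check on cmd_status, every dispatcher gets a chance to
    # respond, which is why both version banners currently get printed.
    break if cmd_status == :handled
  end
end

run_single(dispatcher_stack, 'version') # prints only the first banner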
Changes look great, I just left a couple of comments.
One thing I did notice while testing, however, is that when I misconfigured the module and it couldn't connect to Kubernetes because my RPORT was incorrect, the module completed its run with just a bunch of ApiErrors. It took me a minute to realize it was not connecting, rather than the TOKEN or some other issue. Would it be possible to either 1) validate the service is up and running before doing everything (maybe with enum_version), or 2) raise a different exception when it's an HTTP-level problem, like the connection failing, that could then be handled differently to inform the user more specifically what the problem is?
Bad RPORT Output
msf6 auxiliary(cloud/kubernetes/enum_kubernetes) > run
[*] Running module against 192.168.159.31
[*] version failure - Kubernetes ApiError
[+] Enumerating namespaces
[*] namespace failure - Kubernetes ApiError
Namespaces
==========
# name
- ----
No rows
[-] No namespaces available. Attempting the current token's namespace and common namespaces: default, dev, staging, production, kube-node-lease, kube-lease, kube-system
[+] Namespace 0: default
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 1: dev
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 2: staging
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 3: production
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 4: kube-node-lease
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 5: kube-lease
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[+] Namespace 6: kube-system
[*] auth failure - Kubernetes ApiError
[*] pod failure - Kubernetes ApiError
[*] secret failure - Kubernetes ApiError
[*] Auxiliary module execution completed
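For what it's worth, a rough sketch of option 2 could look something like this (enum_version, the ApiError class path, and the fail_with / Failure::Unreachable usage are assumptions about the module and framework context here, not the PR's actual code):

def check_api_reachable!
  enum_version
rescue ::Rex::ConnectionError, ::Errno::ECONNREFUSED => e
  # Wrong RHOST/RPORT or the service is down: bail out early with a clear
  # message instead of printing an ApiError for every namespace and resource.
  fail_with(Failure::Unreachable, "Could not connect to the Kubernetes API: #{e.message}")
rescue Kubernetes::Client::ApiError
  # The API answered but rejected the request (e.g. an invalid TOKEN);
  # keep going, since the other API calls might still be authorized.
  vprint_status('Version check failed, continuing with the remaining enumeration')
end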
I've added a separate error to handle this, it'll now spit out:
Decided not to bail out early on failing the version check, in case there are glitches in the matrix and the other API calls would have worked.
vprint_error("Unable to store #{loot_name} as a valid ssh_key pair") | ||
end | ||
|
||
path = store_loot('id_rsa', 'text/plain', nil, json, loot_name) |
This writes an empty file out for me. When I debug it, I see that json is nil at this point.
It looks like this should maybe be:
path = store_loot('id_rsa', 'text/plain', nil, json, loot_name)
path = store_loot('id_rsa', 'application/json', nil, secret, loot_name)
Thanks! Should be fixed 👍
Yeah, so I see you switched it to path = store_loot('id_rsa', 'text/plain', nil, data, loot_name). In my testing environment, this writes a 15-byte binary string to disk, which is definitely not a text document. The file utility fails to identify what it is. In my case, private_key is also nil because Net::SSH::KeyFactory.load_data_private_key threw an exception.
I'm not really sure what the user should do with this if it's not actually a private key. I think it should either be marked as a binary blob, or marked as JSON with the full secret object written to disk, or nothing should be stored if it's not actually a usable private key as determined by Net::SSH::KeyFactory.load_data_private_key.
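A rough sketch of what that could look like (illustrative only; data, secret, and loot_name are assumed from the surrounding module code):

require 'net/ssh'

begin
  # Pass false for ask_passphrase so an encrypted key doesn't prompt the user.
  Net::SSH::KeyFactory.load_data_private_key(data, nil, false)
  # The data parsed as a private key, so storing it as text is reasonable.
  path = store_loot('id_rsa', 'text/plain', nil, data, loot_name)
rescue StandardError => e
  # Not a usable private key: store the full secret object as JSON instead
  # (or skip storing entirely, per the options above).
  vprint_error("Secret is not a usable private key (#{e.class}), storing raw JSON instead")
  path = store_loot('kubernetes_secret', 'application/json', nil, secret.to_json, loot_name)
end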
Closing to open a pull request in the Rapid7 Repository
Adds support for a new Kubernetes enum module. Similar to rapid7#15733, the user must have a Kubernetes JWT token and access to the Kubernetes REST API through some means (either direct or through a compromised pod). A session on an existing pod can be used to configure a few options, including the JWT token and the RHOST / RPORT options. Some changes were made to the RHOSTS and SESSION validation code to honor instances in which they are marked as optional: when a value is provided, validation still takes place as before, but when the option is marked as optional and left blank, validation is skipped to allow the module to run.
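A minimal, self-contained sketch of that validation behaviour (the helper name and arguments are illustrative, not the framework's actual option API):

def skip_optional_validation?(required:, value:)
  # Skip only when the option is optional AND left blank; anything the user
  # actually supplied is still validated as before.
  !required && value.to_s.strip.empty?
end

skip_optional_validation?(required: false, value: '')          # => true  (skip validation)
skip_optional_validation?(required: false, value: '10.0.0.5')  # => false (validate)
skip_optional_validation?(required: true,  value: '')          # => false (validate, and fail)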
Verification
After setting up a Kubernetes cluster locally/remotely, the following scenarios should work:
The following commands should work:
run
namespaces
namespaces name=kube-public
auth
auth output=json
secrets
pods
pod
pod namespace=default name=redis-7fd956df5-sbchb
pod namespace=default name=redis-7fd956df5-sbchb output=json
pod namespace=default name=redis-7fd956df5-sbchb output=table
version
As well as pivoting through a compromised container:
Additional context is in the module documentation.