
feat: Update KubeArmor to use OCI hooks instead of depending on container runtime socket #1714

Open

wants to merge 4 commits into main from the hook branch

Conversation

AbdelrahmanElawady
Contributor

Purpose of PR?:
This PR utilizes OCI hooks to get container details inside KubeArmor instead of relying on the container runtime socket. It depends on a few Kubernetes annotations to get the container name, pod name, namespace name, and AppArmor profile (AppArmor appears to be the only one strictly required out of those, but further testing is needed).
This PR also updates snitch to configure the hooks on the host (currently only CRI-O is supported).
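For context, an OCI hook binary receives the container state as JSON on stdin when the runtime invokes it. A minimal sketch of how such a hook could forward that state to KubeArmor over a Unix socket is below; the socket path, message shape, and operation value are illustrative assumptions, not necessarily what this PR implements.

package main

import (
	"encoding/json"
	"net"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// The runtime passes the container state (ID, PID, bundle, annotations)
	// on stdin, as defined by the OCI runtime spec.
	var state specs.State
	if err := json.NewDecoder(os.Stdin).Decode(&state); err != nil {
		os.Exit(1)
	}

	// Hypothetical socket path; the PR builds its own path from kubearmorDir.
	conn, err := net.Dial("unix", "/run/kubearmor/ka.sock")
	if err != nil {
		os.Exit(1)
	}
	defer conn.Close()

	// Forward the state so KubeArmor can resolve the container, pod,
	// namespace and AppArmor profile from the annotations.
	_ = json.NewEncoder(conn).Encode(struct {
		Operation string      `json:"operation"`
		State     specs.State `json:"state"`
	}{Operation: "create", State: state})
}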

Fixes #1390

Does this PR introduce a breaking change?
No, the goal is to provide the same functionality as mounting the container runtime socket, without the security concerns of doing so.

If the changes in this PR are manually verified, list down the scenarios covered:
Tested with the Getting Started example for now, but more testing will be done.

Additional information for reviewer? :

  • Containers created before KubeArmor was deployed are still not handled.

Checklist:

  • Bug fix. Fixes #
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • PR Title follows the convention of <type>(<scope>): <subject>
  • Commit has unit tests
  • Commit has integration tests

@AbdelrahmanElawady force-pushed the hook branch 3 times, most recently from 9d510ed to 86c5478 on April 8, 2024 at 20:41
@AbdelrahmanElawady force-pushed the hook branch 2 times, most recently from dab1324 to 4e65d92 on April 24, 2024 at 21:06
@AbdelrahmanElawady
Contributor Author

Containers created before KubeArmor started are now handled by a simple detached process that waits for KubeArmor to start and then sends all previously created containers.
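Roughly, the idea is something like the sketch below: poll until KubeArmor's socket accepts connections, then replay the previously captured container events. Paths, types, and the function name here are made up for illustration.

package main

import (
	"net"
	"time"
)

// replayPrevious blocks until the KubeArmor socket accepts connections,
// then sends every container event captured before KubeArmor started.
func replayPrevious(socketPath string, previous [][]byte) error {
	var conn net.Conn
	var err error
	for {
		conn, err = net.Dial("unix", socketPath)
		if err == nil {
			break
		}
		time.Sleep(time.Second) // KubeArmor not up yet, keep waiting
	}
	defer conn.Close()

	for _, msg := range previous {
		if _, err := conn.Write(append(msg, '\n')); err != nil {
			return err
		}
	}
	return nil
}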

@AbdelrahmanElawady
Contributor Author

Also, to try out this PR you only need to build this code and run it on a CRI-O Kubernetes cluster; the operator, together with snitch, will take care of setting up the cluster.

@AbdelrahmanElawady AbdelrahmanElawady marked this pull request as ready for review April 24, 2024 21:40
@achrefbensaad (Member) left a comment


1/2

Comment on lines 29 to 30
listenPath := filepath.Join(kubearmorDir, "ka.sock")
_ = os.Remove(listenPath) // in case kubearmor crashed and the socket wasn't removed
Member

Suggested change
listenPath := filepath.Join(kubearmorDir, "ka.sock")
_ = os.Remove(listenPath) // in case kubearmor crashed and the socket wasn't removed
_ = os.Remove(listenPath) // in case kubearmor crashed and the socket wasn't removed
listenPath := filepath.Join(kubearmorDir, "ka.sock")

Member

just reordering for better clarity

Contributor Author

hmm, listenPath is defined in the first line, so that wouldn't work?

Comment on lines 29 to 30
listenPath := filepath.Join(kubearmorDir, "ka.sock")
_ = os.Remove(listenPath) // in case kubearmor crashed and the socket wasn't removed
Member

you need to handle the error case

for {
	conn, err := socket.Accept()
	if err != nil {
		log.Fatal(err)
Member

Are we really sure we want to stop the entire execution if a single connection is not handled?

Contributor Author
@AbdelrahmanElawady Apr 25, 2024

The reason for some of the fatal error handling is that if we just logged the error and continued, some containers might go unnoticed for the lifetime of KubeArmor. Do you think it is better to leave KubeArmor running while not monitoring some containers, or to restart KubeArmor and get all containers from the start (using some form of exiting)?
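For comparison, the non-fatal variant being discussed would look roughly like this (handleConn is a placeholder name), at the cost of possibly leaving the container behind a failed connection unmonitored:

for {
	conn, err := socket.Accept()
	if err != nil {
		// Log and keep serving instead of exiting; the container behind
		// this connection may stay unmonitored until it is reported again.
		log.Printf("failed to accept connection: %v", err)
		continue
	}
	go handleConn(conn)
}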

	return
}
if err != nil {
	log.Fatal(err)
Member

same comment as above

Member

Please handle this in other parts of the code as well.

Comment on lines 69 to 73
data := struct {
	Operation string          `json:"operation"`
	Detached  bool            `json:"detached"`
	Container types.Container `json:"container"`
}{}
Member

Define the struct outside the function, at the top of the file.
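For illustration, that would look something like the package-level declaration below (hookMessage is a made-up name), with the function then using data := hookMessage{}:

type hookMessage struct {
	Operation string          `json:"operation"`
	Detached  bool            `json:"detached"`
	Container types.Container `json:"container"`
}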

return
}

if data.Operation == "create" {
Member

Better to use enums
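Go has no enum keyword, so in practice this would be a set of typed constants, roughly as sketched below (names are illustrative):

type operation string

const (
	opCreate operation = "create"
	opDelete operation = "delete"
)

// usage: if operation(data.Operation) == opCreate { ... }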

dm.ContainersLock.Lock()
if _, ok := dm.Containers[container.ContainerID]; !ok {
	dm.Containers[container.ContainerID] = container
	dm.ContainersLock.Unlock()
Member

Better to unlock once outside the if-else block rather than unlocking at each branch
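That is, something along these lines (a simplified sketch of the same logic with a single unlock):

dm.ContainersLock.Lock()
if _, ok := dm.Containers[container.ContainerID]; !ok {
	dm.Containers[container.ContainerID] = container
} else {
	// handle the already-known container here
}
dm.ContainersLock.Unlock() // unlock once, after the whole if-else block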

}

func (h *crioHandler) close() {
	_ = h.conn.Close()
Member

handle error
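One option, assuming the callers can consume an error return instead of ignoring it, would be roughly the following sketch (not the PR's actual code):

func (h *crioHandler) close() error {
	if err := h.conn.Close(); err != nil {
		log.Printf("failed to close CRI-O connection: %v", err)
		return err
	}
	return nil
}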

Comment on lines 47 to 59
if err != nil {
	return nil, err
}
Member

Are we sure we want to quit at the first error?

Comment on lines 101 to 106
type containerInfo struct {
	SandboxID   string     `json:"sandboxID"`
	Pid         int        `json:"pid"`
	RuntimeSpec specs.Spec `json:"runtimeSpec"`
	Privileged  bool       `json:"privileged"`
}
Member

move this to the top of the file

Successfully merging this pull request may close these issues.

feat: Leverage OCI Hooks for Container Events