OCI Ampere Support #1084

Closed
3 tasks done
rksharma95 opened this issue Feb 2, 2023 · 4 comments
Comments

rksharma95 (Collaborator) commented Feb 2, 2023

Feature Request

Support for OCI Ampere needs to be validated. Validation should cover:

  • Is enforcement supported? Which LSM is used?
  • Is Audit/Observability supported?
  • Paste the output of karmor probe (see the command sketch below)
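
A quick way to gather these data points (a sketch, assuming kubectl access to the cluster, the karmor CLI installed, and a shell on the worker node for the LSM check):

# Node OS, kernel version, and architecture (OCI Ampere nodes report aarch64)
kubectl get nodes -o wide

# On the worker node: LSMs active in the running kernel (look for bpf/apparmor/selinux)
cat /sys/kernel/security/lsm

# KubeArmor's own summary of the environment (active LSM, enforcement support, postures)
karmor probe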
@rksharma95 rksharma95 added the enhancement New feature or request label Feb 2, 2023
@rksharma95 rksharma95 self-assigned this Feb 2, 2023
rksharma95 (Collaborator, Author) commented:

Currently having some issues deploying KubeArmor on an OCI Ampere-based cluster:

kubectl get pods -A
NAMESPACE     NAME                                             READY   STATUS             RESTARTS      AGE
kube-system   coredns-69d7b9d5f4-6gg27                         1/1     Running            2             18h
kube-system   csi-oci-node-9h6bd                               1/1     Running            2             18h
kube-system   kube-dns-autoscaler-d8996d8b8-lq6sv              1/1     Running            2             18h
kube-system   kube-flannel-ds-v4p74                            1/1     Running            4             18h
kube-system   kube-proxy-s7r2m                                 1/1     Running            2             18h
kube-system   kubearmor-annotation-manager-7fc8d9b964-g8lhf    1/2     CrashLoopBackOff   4 (48s ago)   2m36s
kube-system   kubearmor-host-policy-manager-5644c558d8-grn7f   0/2     CrashLoopBackOff   8 (52s ago)   2m42s
kube-system   kubearmor-policy-manager-5cc6867465-7h4v5        0/2     CrashLoopBackOff   8 (62s ago)   2m42s
kube-system   kubearmor-qzxl5                                  0/1     CrashLoopBackOff   4 (4s ago)    2m44s
kube-system   kubearmor-relay-6fddb8865b-5bjhf                 1/1     Running            0             2m44s
kube-system   proxymux-client-2p4bn                            1/1     Running            2             18h
Pod logs
kubectl logs -n kube-system kubearmor-qzxl5
Defaulted container "kubearmor" out of: kubearmor, init (init)
2023-02-03 03:18:17.665419	INFO	Build Time: 2023-02-02 09:08:00.067521323 +0000 UTC
2023-02-03 03:18:17.665601	INFO	Arguments [cluster:default coverageTest:false criSocket: defaultCapabilitiesPosture:audit defaultFilePosture:audit defaultNetworkPosture:audit enableKubeArmorHostPolicy:false enableKubeArmorPolicy:true enableKubeArmorVm:false gRPC:32767 host:oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-0 hostDefaultCapabilitiesPosture:audit hostDefaultFilePosture:audit hostDefaultNetworkPosture:audit hostVisibility:default k8s:true logPath:none lsm:bpf,apparmor,selinux seLinuxProfileDir:/tmp/kubearmor.selinux visibility:process,file,network,capabilities]
2023-02-03 03:18:17.665708	INFO	Configuration [{Cluster:default Host:oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-0 GRPC:32767 LogPath:none SELinuxProfileDir:/tmp/kubearmor.selinux CRISocket: Visibility:process,file,network,capabilities HostVisibility:default Policy:true HostPolicy:true KVMAgent:false K8sEnv:true DefaultFilePosture:audit DefaultNetworkPosture:audit DefaultCapabilitiesPosture:audit HostDefaultFilePosture:audit HostDefaultNetworkPosture:audit HostDefaultCapabilitiesPosture:audit CoverageTest:false LsmOrder:[]}]
2023-02-03 03:18:17.665735	INFO	Final Configuration [{Cluster:default Host:oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-0 GRPC:32767 LogPath:none SELinuxProfileDir:/tmp/kubearmor.selinux CRISocket: Visibility:process,file,network,capabilities HostVisibility:none Policy:true HostPolicy:true KVMAgent:false K8sEnv:true DefaultFilePosture:audit DefaultNetworkPosture:audit DefaultCapabilitiesPosture:audit HostDefaultFilePosture:audit HostDefaultNetworkPosture:audit HostDefaultCapabilitiesPosture:audit CoverageTest:false LsmOrder:[bpf apparmor selinux]}]
2023-02-03 03:18:17.666470	INFO	Initialized Kubernetes client
2023-02-03 03:18:17.666517	INFO	Started to monitor node events
2023-02-03 03:18:17.666558	INFO	GlobalCfg.Host=oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-0, KUBEARMOR_NODENAME=10.0.10.245
2023-02-03 03:18:18.667393	INFO	Node Name: oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-0
2023-02-03 03:18:18.667452	INFO	Node IP: 10.0.10.245
2023-02-03 03:18:18.667479	INFO	Node Annotations: map[alpha.kubernetes.io/provided-node-ip:10.0.10.245 csi.volume.kubernetes.io/nodeid:{"blockvolume.csi.oraclecloud.com":"10.0.10.245","fss.csi.oraclecloud.com":"10.0.10.245"} flannel.alpha.coreos.com/backend-data:{"VNI":1,"VtepMAC":"5e:2d:86:ff:e9:81"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.0.10.245 kubearmor-policy:audited kubearmor-visibility:none node.alpha.kubernetes.io/ttl:0 oci.oraclecloud.com/compartment-id:ocid1.tenancy.oc1..aaaaaaaatfk4q3erag6x5cnhdebcgyjs43rtntiiheudddb22jiovdqznfdq oci.oraclecloud.com/node-pool-id:ocid1.nodepool.oc1.ap-mumbai-1.aaaaaaaahgnlrq7w2prhfhua7daead5sxbail3wsbdphlizkknddk6kgcxrq volumes.kubernetes.io/controller-managed-attach-detach:true]
2023-02-03 03:18:18.667526	INFO	OS Image: Oracle Linux Server 8.7
2023-02-03 03:18:18.667537	INFO	Kernel Version: 5.15.0-6.80.3.1.el8uek.aarch64
2023-02-03 03:18:18.667544	INFO	Kubelet Version: v1.25.4
2023-02-03 03:18:18.667551	INFO	Container Runtime: cri-o://1.25.1-111.el8
2023-02-03 03:18:18.668051	INFO	Initialized KubeArmor Logger
2023-02-03 03:18:18.669553	INFO	Initializing eBPF system monitor
2023-02-03 03:18:18.669593	INFO	eBPF system monitor object file path: /opt/kubearmor/BPF/system_monitor.bpf.o
2023-02-03 03:18:19.862321	INFO	Initialized the eBPF system monitor
2023-02-03 03:18:21.014721	INFO	Initialized KubeArmor Monitor
2023-02-03 03:18:21.014795	INFO	Started to monitor system events
2023-02-03 03:18:21.017682	INFO	Supported LSMs: capability,yama,selinux,bpf
2023-02-03 03:18:21.638004	ERROR	opening lsm LSM(enforce_proc)#156: errno 524
github.com/kubearmor/KubeArmor/KubeArmor/log.Err
	/usr/src/KubeArmor/KubeArmor/log/logger.go:97
github.com/kubearmor/KubeArmor/KubeArmor/feeder.(*Feeder).Errf
	/usr/src/KubeArmor/KubeArmor/feeder/feeder.go:491
github.com/kubearmor/KubeArmor/KubeArmor/enforcer/bpflsm.NewBPFEnforcer
	/usr/src/KubeArmor/KubeArmor/enforcer/bpflsm/enforcer.go:94
github.com/kubearmor/KubeArmor/KubeArmor/enforcer.selectLsm
	/usr/src/KubeArmor/KubeArmor/enforcer/runtimeEnforcer.go:101
github.com/kubearmor/KubeArmor/KubeArmor/enforcer.NewRuntimeEnforcer
	/usr/src/KubeArmor/KubeArmor/enforcer/runtimeEnforcer.go:153
github.com/kubearmor/KubeArmor/KubeArmor/core.(*KubeArmorDaemon).InitRuntimeEnforcer
	/usr/src/KubeArmor/KubeArmor/core/kubeArmor.go:259
github.com/kubearmor/KubeArmor/KubeArmor/core.KubeArmor
	/usr/src/KubeArmor/KubeArmor/core/kubeArmor.go:433
main.main
	/usr/src/KubeArmor/KubeArmor/main.go:44
runtime.main
	/usr/local/go/src/runtime/proc.go:250
2023-02-03 03:18:21.638470	INFO	Error Initialising BPF-LSM Enforcer, Cleaning Up
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11b81cc]

goroutine 1 [running]:
github.com/kubearmor/KubeArmor/KubeArmor/enforcer/bpflsm.(*BPFEnforcer).DestroyBPFEnforcer(0x40058d6080)
	/usr/src/KubeArmor/KubeArmor/enforcer/bpflsm/enforcer.go:169 +0xfc
github.com/kubearmor/KubeArmor/KubeArmor/enforcer.selectLsm(0x40058ae2a0, {0x400022be30?, 0x11344bc?, 0x40058840d0?}, {0x40058ae270?, _, _}, {_, _, _}, ...)
	/usr/src/KubeArmor/KubeArmor/enforcer/runtimeEnforcer.go:105 +0x318
github.com/kubearmor/KubeArmor/KubeArmor/enforcer.NewRuntimeEnforcer({{0x16fcc93, 0x7}, {0x400011fd40, 0x29}, {0x40004a11f0, 0xb}, 0x40003f6270, 0x40003f62a0, {0x40003a4a00, 0x16, ...}, ...}, ...)
	/usr/src/KubeArmor/KubeArmor/enforcer/runtimeEnforcer.go:153 +0x268
github.com/kubearmor/KubeArmor/KubeArmor/core.(*KubeArmorDaemon).InitRuntimeEnforcer(...)
	/usr/src/KubeArmor/KubeArmor/core/kubeArmor.go:259
github.com/kubearmor/KubeArmor/KubeArmor/core.KubeArmor()
	/usr/src/KubeArmor/KubeArmor/core/kubeArmor.go:433 +0x7cc
main.main()
	/usr/src/KubeArmor/KubeArmor/main.go:44 +0x2a8
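
Note on the error above: errno 524 is ENOTSUPP, i.e. the kernel refused to attach the BPF-LSM program. A rough way to check the BPF-LSM prerequisites on the node (a sketch, assuming bpftool and the kernel config file are available there):

# "bpf" must be listed among the active LSMs
cat /sys/kernel/security/lsm

# The kernel must be built with BPF LSM support
grep CONFIG_BPF_LSM /boot/config-$(uname -r)

# Probe whether the kernel exposes the lsm program type at all
bpftool feature probe kernel | grep -i "program_type lsm"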

$ kubectl logs -n kube-system kubearmor-annotation-manager-7fc8d9b964-g8lhf
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, manager
I0203 03:15:52.857202       1 main.go:190] Valid token audiences: 
I0203 03:15:52.857284       1 main.go:262] Generating self signed cert as no cert is provided
I0203 03:16:07.456386       1 main.go:311] Starting TCP socket on 0.0.0.0:8443
I0203 03:16:07.457371       1 main.go:318] Listening securely on 0.0.0.0:8443

$ kubectl logs -n kube-system kubearmor-host-policy-manager-5644c558d8-grn7f
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, kubearmor-host-policy-manager
exec ./kube-rbac-proxy: exec format error

$ kubectl logs -n kube-system kubearmor-policy-manager-5cc6867465-7h4v5
Defaulted container "kube-rbac-proxy" out of: kube-rbac-proxy, kubearmor-policy-manager
exec ./kube-rbac-proxy: exec format error
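
The exec format error suggests the kube-rbac-proxy image being pulled has no arm64 variant, so an amd64 binary is being started on the aarch64 node. One way to verify (a sketch, assuming a docker CLI with manifest support; crane or skopeo work similarly):

# List the platforms published for the image; arm64 should appear for Ampere nodes
docker manifest inspect gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0 | grep -A2 '"architecture"'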

rksharma95 (Collaborator, Author) commented Feb 8, 2023

Currently, BPF-LSM programs are not supported on the arm64 platform, so KubeArmor cannot support enforcement on OCI Ampere; at this point only Observability/Audit mode is supported.

Enforcement Supported: No
Audit/Observability Supported: Yes
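
To sanity-check audit mode, a minimal Audit policy can be applied. This is only a sketch, assuming the wordpress workload (label app=wordpress, namespace wordpress-mysql) used in the telemetry sample below; the policy name is hypothetical:

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-apt                 # hypothetical name
  namespace: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress
  process:
    matchPaths:
    - path: /usr/bin/apt
  action: Audit                   # report matching executions instead of blocking them

Running /usr/bin/apt inside the pod should then surface as an Audit event in karmor log.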

karmor probe
Found KubeArmor running in Kubernetes

Daemonset :
 	kubearmor 	Desired: 1	Ready: 1	Available: 1	
Deployments : 
 	kubearmor-relay	Desired: 1	Ready: 1	Available: 1	
Containers : 
 	kubearmor-68dll                               	Running: 1	Image Version: kubearmor/kubearmor:latest               	
 	kubearmor-annotation-manager-7fc8d9b964-9n4qq 	Running: 2	Image Version: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0	
 	kubearmor-host-policy-manager-5644c558d8-nfrb7	Running: 2	Image Version: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0	
 	kubearmor-policy-manager-5cc6867465-6xwf5     	Running: 2	Image Version: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0	
 	kubearmor-relay-6fddb8865b-n8d8h              	Running: 1	Image Version: kubearmor/kubearmor-relay-server:latest  	
Node 1 : 
 	OS Image:                 	Oracle Linux Server 8.7       	
 	Kernel Version:           	5.15.0-6.80.3.1.el8uek.aarch64	
 	Kubelet Version:          	v1.25.4                       	
 	Container Runtime:        	cri-o://1.25.1-111.el8        	
 	Active LSM:               	                              	
 	Host Security:            	false                         	
 	Container Security:       	false                         	
 	Container Default Posture:	audit(File)                   	audit(Capabilities)	audit(Network)	
 	Host Default Posture:     	audit(File)                   	audit(Capabilities)	audit(Network)
KubeArmor Telemetry
kubectl exec -it -n wordpress-mysql   wordpress-787f45786f-hzfp4 -- bash
root@wordpress-787f45786f-hzfp4:/var/www/html# ls
index.php    readme.html      wp-admin            wp-comments-post.php  wp-config.php  wp-cron.php  wp-links-opml.php  wp-login.php  wp-settings.php  wp-trackback.php
license.txt  wp-activate.php  wp-blog-header.php  wp-config-sample.php  wp-content     wp-includes  wp-load.php        wp-mail.php   wp-signup.php    xmlrpc.php

karmor log --logFilter system --operation process
local port to be used for port forwarding kubearmor-relay-6fddb8865b-n8d8h: 32767 
== Log / 2023-02-08 11:11:55.669589 ==                                                                                                                                          
ClusterName: default                                                                                                                                                            
HostName: oke-cc2xs7urxaa-nddk6kgcxrq-snkwcxc5thq-1                                                                                                                             
Labels: app=wordpress                                                                                                                                                           
ContainerName: wordpress                                                                                                                                                        
ContainerID: bcf855aa4e263220ec0a88215126386b38c1fee6d1f4ae0b4b1af42c3b4cd342                                                                                                   
ContainerImage: docker.io/library/wordpress:4.8-apache@sha256:6216f64ab88fc51d311e38c7f69ca3f9aaba621492b4f1fa93ddf63093768845                                                  
Type: ContainerLog                                                                                                                                                              
Source: /bin/bash                                                                                                                                                               
Resource: /bin/ls -A                                                                                                                                                            
Operation: Process                                                                                                                                                              
Data: syscall=SYS_EXECVE                                                                                                                                                        
Result: Passed                                                                                                                                                                  
HostPID: 2.720641e+06                                                                                                                                                           
HostPPID: 2.720623e+06                                                                                                                                                          
PID: 6                                                                                                                                                                          
PPID: 1                                                                                                                                                                         
ParentProcessName: /bin/bash                                                                                                                                                    
ProcessName: /bin/ls  
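
The same relay stream can be filtered for other operation types as well; for example, file accesses from the container (a usage sketch with the same karmor CLI, not part of the output above):

karmor log --logFilter system --operation file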

daemon1024 (Member) commented:

Handled and updated the support matrix.

github-project-automation (bot) moved this from In Progress to Done in the v0.9 backlog, Feb 24, 2023
nyrahul (Contributor) commented Jul 27, 2023

An Oracle kernel team member has provided a response regarding the possible issue. Quoting it verbatim:

On 7/27/23, 4:34 AM, "Alan Maguire" <xxxx@xxxxx> wrote:


hi folks

the background here is that the aarch64 platform until very recently did not support the BPF trampoline, which is the basis for BPF tracing programs, fentry, fexit and fmodify_return. The latter two in particular are used in BPF LSM. We have a long-standing bug to backport this functionality to UEK7 once it lands upstream:

Bug 34405795 - [UEK-7-U1] bpf: backport arm64 trampoline support from upstream

This would benefit BPF tracing also. Support upstream is very fresh; the patches landed within the last few months.

The challenge with this backport is that we end up bringing in a lot of other pieces, but last time I did a proof-of-concept on the in-progress patches it did seem tractable.

However, as a first step, I wonder if it would be feasible to get our partners to check if a LUCI-based kernel resolves the issues they currently face? This would help ensure that the blocking issues are resolved by arm64 BPF trampoline support. I will also start looking into any other potential missing pieces as I haven't explored BPF LSM much yet.

Thanks!

Alan
