Trivy 0.29.0 produced an error while trying to scan k8s deployment #2349

Closed
przemolb opened this issue Jun 17, 2022 · 12 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
target/kubernetes Issues relating to kubernetes cluster scanning

Comments

@przemolb

Description

Trivy doesn't scan a k8s deployment.

What did you expect to happen?

A report with vulnerabilities found in a deployment.

What happened instead?

> trivy k8s -s HIGH,CRITICAL -n mynamespace deployment/myapp
2022-06-17T11:09:33.242+0100    FATAL   failed getting k8s cluster: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Output of run with --debug:

> trivy --debug k8s -s HIGH,CRITICAL -n mynamespace deployment/myapp
2022-06-17T11:12:02.437+0100    DEBUG   Severities: HIGH,CRITICAL
2022-06-17T11:12:02.438+0100    FATAL   failed getting k8s cluster: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Output of trivy -v:

Version: 0.29.0
Vulnerability DB:
  Version: 2
  UpdatedAt: 2022-06-16 12:07:33.727709033 +0000 UTC
  NextUpdate: 2022-06-16 18:07:33.727708733 +0000 UTC
  DownloadedAt: 2022-06-16 14:32:39.967455234 +0000 UTC
@przemolb przemolb added the kind/bug Categorizes issue or PR as related to a bug. label Jun 17, 2022
@josedonizetti josedonizetti self-assigned this Jun 17, 2022
@josedonizetti
Contributor

@przemolb Thank you for reporting. It is having a problem authenticating. Do you mind posting which kubectl version you are using? And perhaps upgrading it if it is too far behind? Thank you!

@przemolb
Author

przemolb commented Jun 17, 2022

I use kubectl v1.22.4.
It works with Trivy 0.28 (which I have to use atm).

@josedonizetti josedonizetti added the target/kubernetes Issues relating to kubernetes cluster scanning label Jun 20, 2022
@josedonizetti
Contributor

@przemolb how are you configuring your kubectl? Is it configured automatically for some cloud provider (aws, gke, etc.)?
I'm trying to reproduce the error locally.

@przemolb
Author

I use aws for this. It is a standard configuration - nothing fancy.

@r8474

r8474 commented Jun 28, 2022

Having the same issue.

Trivy version: 0.29.2
Kubectl version: 1.21.0

@sinner-

sinner- commented Jul 18, 2022

Just ran into this issue on trivy 0.30.0 with kubectl v1.22.6-eks-7d68063 and confirmed there is no issue on trivy 0.28.0.

kubectl configured with aws eks update-kubeconfig --region region-code --name cluster-name
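
For reference, the exec stanza that command writes into ~/.kube/config looks roughly like this on my setup (account ID redacted; note the v1alpha1, which matches the FATAL message):

users:
- name: arn:aws:eks:region-code:<REDACTED>:cluster/cluster-name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - --region
        - region-code
        - eks
        - get-token
        - --cluster-name
        - cluster-name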

@josedonizetti josedonizetti added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jul 22, 2022
@josedonizetti josedonizetti added this to the v0.31.0 milestone Jul 22, 2022
@josedonizetti josedonizetti modified the milestones: v0.31.0, v0.32.0 Aug 15, 2022
@josedonizetti
Contributor

Can someone having this issue help me debug it? I created an EKS cluster, and was able to scan it without any issue.

@sinner-

sinner- commented Sep 3, 2022

Hey @josedonizetti thanks for taking a look at this.

EKS cluster (k8s version 1.22) created with the following CloudFormation template

---
AWSTemplateFormatVersion: '2010-09-09'

Resources:
  EKSIAMRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action:
            - 'sts:AssumeRole'
      RoleName: ekswksclusterrole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy

  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: "192.168.0.0/16"
      EnableDnsSupport: true
      EnableDnsHostnames: true

  InternetGateway:
    Type: "AWS::EC2::InternetGateway"

  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC

  PublicRoute:
    DependsOn: VPCGatewayAttachment
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PublicSubnet01:
    Type: AWS::EC2::Subnet
    Properties:
      MapPublicIpOnLaunch: true
      CidrBlock: "192.168.0.0/24"
      AvailabilityZone: !Select 
        - 0
        - Fn::GetAZs: !Ref 'AWS::Region'
      VpcId: !Ref VPC

  PublicSubnet01RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet01
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet02:
    Type: AWS::EC2::Subnet
    Properties:
      MapPublicIpOnLaunch: true
      CidrBlock: "192.168.1.0/24"
      AvailabilityZone: !Select 
        - 1
        - Fn::GetAZs: !Ref 'AWS::Region'
      VpcId: !Ref VPC

  PublicSubnet02RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet02
      RouteTableId: !Ref PublicRouteTable

  ControlPlaneSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster communication with worker nodes
      VpcId: !Ref VPC

  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: testeks
      RoleArn: !GetAtt EKSIAMRole.Arn
      Version: "1.22"
      ResourcesVpcConfig:
        EndpointPublicAccess: false
        EndpointPrivateAccess: true
        SecurityGroupIds:
        - !Ref ControlPlaneSecurityGroup
        SubnetIds:
        - !Ref PublicSubnet01
        - !Ref PublicSubnet02
    DependsOn: [EKSIAMRole, PublicSubnet01, ControlPlaneSecurityGroup]

  NodeInstanceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      Path: /

  NodeInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: /
      Roles:
        - !Ref NodeInstanceRole

  EKSNodegroup:
    DependsOn: [EKSCluster, NodeInstanceProfile]
    Type: 'AWS::EKS::Nodegroup'
    Properties:
      ClusterName: testeks
      NodeRole: !GetAtt NodeInstanceRole.Arn
      ScalingConfig:
        MinSize: 1
        MaxSize: 1
        DesiredSize: 1
      Subnets:
        - !Ref PublicSubnet01
        - !Ref PublicSubnet02

kubectl installed from https://s3.us-west-2.amazonaws.com/amazon-eks/1.22.6/2022-03-09/bin/linux/amd64/kubectl

Trivy 0.31.2 installed using trivy_0.31.2_Linux-64bit.rpm (same issue with any version after 0.28.0, e.g. the originally mentioned 0.29.0).

Running

$ trivy k8s --report summary testeks
2022-09-03T02:37:05.325Z	FATAL	failed getting k8s cluster: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Uninstalling that version and installing trivy_0.28.0_Linux-64bit.rpm then running:

[trivy@ip-192-168-1-246 ~]$ trivy k8s --report summary 
195 / 195 [-------------------------------------------------------------------------------------------------------] 100.00% 15 p/s

Summary Report for arn:aws:eks:<REDACTED>:<REDACTED>:cluster/testeks
┌─────────────┬──────────────────────────────────┬─────────────────────────┬────────────────────┬───────────────────┐
│  Namespace  │             Resource             │     Vulnerabilities     │ Misconfigurations  │      Secrets      │
│             │                                  ├────┬────┬─────┬────┬────┼───┬───┬───┬────┬───┼───┬───┬───┬───┬───┤
│             │                                  │ C  │ H  │  M  │ L  │ U  │ C │ H │ M │ L  │ U │ C │ H │ M │ L │ U │
├─────────────┼──────────────────────────────────┼────┼────┼─────┼────┼────┼───┼───┼───┼────┼───┼───┼───┼───┼───┼───┤
│ kube-system │ DaemonSet/aws-node               │ 10 │ 34 │ 133 │ 6  │ 13 │   │ 4 │ 8 │ 15 │   │   │   │   │   │   │
│ kube-system │ DaemonSet/kube-proxy             │ 13 │ 27 │ 14  │ 73 │    │   │ 2 │ 5 │ 7  │   │   │   │   │   │   │
│ kube-system │ Service/kube-dns                 │    │    │     │    │    │   │   │ 2 │    │   │   │   │   │   │   │
│ kube-system │ Deployment/coredns               │    │ 2  │  2  │ 1  │ 4  │   │   │ 4 │ 3  │   │   │   │   │   │   │
│ default     │ Service/kubernetes               │    │    │     │    │    │   │   │ 1 │    │   │   │   │   │   │   │
│             │ PodSecurityPolicy/eks.privileged │    │    │     │    │    │   │   │ 1 │    │   │   │   │   │   │   │
└─────────────┴──────────────────────────────────┴────┴────┴─────┴────┴────┴───┴───┴───┴────┴───┴───┴───┴───┴───┴───┘
Severities: C=CRITICAL H=HIGH M=MEDIUM L=LOW U=UNKNOWN

works fine.

Upgrading the cluster to k8s version 1.23 still has the same issue.

When you say

"I created an EKS cluster, and was able to scan it without any issue."

can you provide more details? How was the cluster created (console/API/CloudFormation)? Which cluster version, Trivy version, kubectl version, and kubectl source?
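
If it helps narrow things down, one quick thing to compare between our setups (my assumption being that the difference lives in the generated kubeconfig rather than in Trivy itself) is the exec apiVersion:

$ grep -A 1 'exec:' ~/.kube/config
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1

On my machine this prints v1alpha1, consistent with the FATAL message above.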

@sinner-

sinner- commented Sep 9, 2022

@josedonizetti hello, any update?

@josedonizetti josedonizetti removed this from the v0.32.0 milestone Sep 15, 2022
@bbodenmiller
Contributor

bbodenmiller commented Nov 16, 2022

Same issue here. k8s 1.22. Cluster set up via the AWS UI. Tried deleting and resetting the kubeconfig, and that didn't help either.

$ aws eks update-kubeconfig --region us-gov-west-1 --name cluster-name
$ trivy -v
Version: 0.34.0
$ trivy k8s --report summary cluster
2022-11-16T14:12:13.382-0800	FATAL	failed getting k8s cluster: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
$ trivy k8s -n kube-system --report summary all
2022-11-16T14:18:20.755-0800	FATAL	failed getting k8s cluster: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

If it makes a difference, there are no pods in my default namespace, but there are in other namespaces.

$ kubectl get pods
No resources found in default namespace.

@chen-keinan
Contributor

@bbodenmiller I'm not sure if this is still an issue or how you have progressed with it, but here are my two cents:

Causes of Error

The error message indicates that the client is trying to use an API version, client.authentication.k8s.io/v1alpha1, that is not supported. This can be due to a few reasons:

  • You are using an outdated version of kubectl: the API version supported by your kubectl may not be compatible with the version of your Kubernetes cluster.
  • Your cluster has been upgraded to a newer version of the Kubernetes API that is not compatible with your kubectl version.

How to Resolve the Error

To resolve this error, you have a few options:

  • Upgrade your kubectl version: this ensures that your kubectl is compatible with the API version of your Kubernetes cluster.
  • Update your resource definition files to use an older API version that is compatible with your kubectl version.
  • Upgrade your Kubernetes cluster: this ensures that the API version supported by your cluster is compatible with your kubectl version.
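
In this thread the failing exec plugin is the aws CLI (see the update-kubeconfig commands above), so my assumption is that the tool worth upgrading first is the AWS CLI rather than kubectl itself: older releases write client.authentication.k8s.io/v1alpha1 into the kubeconfig, while newer ones (roughly AWS CLI 1.24.0 / v2.7.x and later, from memory, please verify) write v1beta1. A minimal sketch under that assumption:

$ aws --version
# upgrade the CLI with your package manager of choice, e.g. for the v1 CLI:
$ pip install --upgrade awscli
# regenerate the kubeconfig so the newer apiVersion is written out
$ aws eks update-kubeconfig --region <region-code> --name <cluster-name>

A common stopgap is to bump the apiVersion in place instead:

$ sed -i 's|client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1beta1|' ~/.kube/config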

@chen-keinan
Contributor

Closing the issue as there is nothing further to update here.
