
chore(deps): update dependency rancher/rke2 to v1.29.4+rke2r1 #4471

Merged: 1 commit into main on May 1, 2024

Conversation

uniget-bot

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| rancher/rke2 | patch | 1.29.3+rke2r1 -> 1.29.4+rke2r1 |

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

rancher/rke2 (rancher/rke2)

v1.29.4+rke2r1: v1.29.4+rke2r1

Compare Source

This release updates Kubernetes to v1.29.4.

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
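For instance, the retained token could be supplied through the standard RKE2 config file when joining an additional server node (a sketch only; the server URL below is a placeholder, and 9345 is the default RKE2 supervisor port):

```yaml
# /etc/rancher/rke2/config.yaml on the joining node
# (placeholder URL; token copied from an existing server)
server: https://existing-server.example.com:9345
token: <value of /var/lib/rancher/rke2/server/token>
```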

Changes since v1.29.3+rke2r1:

  • Update channel server (#​5631)
  • Enable apiserver to access updated encryption-config.json (#​5604)
  • Delete epic github action (#​5626)
  • Remove kube-proxy static pod manifest during agent bootstrap (#​5619)
  • Properly handle files and sockets in extra mounts (#​5621)
  • Bump flannel version (#​5638)
    • Fix flannel bug to work in cluster with taints
  • Improve how flannel-windows reserves an IP for kube-proxy vip (#​5661)
  • Add doc on building multi-arch images (#​5670)
  • Add kine support (#​5540)
  • Reenable Unit Testing in GitHub Actions (#​5676)
  • Overhaul integration testing (#​5679)
  • Bump ingress-nginx to 1.9.6 (#​5671)
  • Rework and fix nightly install tests (#​5692)
  • Update flannel to v0.25.0 (#​5708)
  • Fix Windows path setting (#​5698)
  • Update to Cilium v1.15.3 (#​5713)
  • Bump K3s version for 2024-04 release cycle (#​5714)
  • Calico and canal update (#​5712)
  • Check if the kube-proxy VIP was already reserved (#​5705)
    • Flannel in windows checks if a VIP was already reserved
  • Update flannel to v0.25.1 (#​5747)
  • Fix subcommand mapping for rke2 certificate (#​5750)
  • Bump harvester-cloud-provider v0.2.3 (#​5694)
  • Bump RKE2 CCM image tag (#​5751)
  • Bump metrics-server version (#​5660)
    • Bump metrics server version to v0.7.1 and start using scratch as its base image
  • Update to Cilium v1.15.4 (#​5764)
  • Bump vsphere csi chart to 3.1.2-rancher300 and add snapshotter image (#​5755)
  • Vsphere csi bump (#​5801)
  • Update Kubernetes to v1.29.4 (#​5799)
  • Bump K3s version for v1.29 to pull through etcd-snapshot save fixes (#​5816)
  • Bump K3s version for dbinfo fix (#​5822)
  • Updated Calico and Flannel to fix ARM64 build (#​5825)
  • Update rke2-canal to v3.27.3-build2024042301 (#​5834)
  • Use the newer Flannel chart (#​5842)
  • Bump metrics-server chart to restore legacy label (#​5849)

Charts Versions

| Component | Version |
|---|---|
| rke2-cilium | 1.15.400 |
| rke2-canal | v3.27.3-build2024042301 |
| rke2-calico | v3.27.300 |
| rke2-calico-crd | v3.27.002 |
| rke2-coredns | 1.29.002 |
| rke2-ingress-nginx | 4.9.100 |
| rke2-metrics-server | 3.12.002 |
| rancher-vsphere-csi | 3.1.2-rancher400 |
| rancher-vsphere-cpi | 1.7.001 |
| harvester-cloud-provider | 0.2.300 |
| harvester-csi-driver | 0.1.1700 |
| rke2-snapshot-controller | 1.7.202 |
| rke2-snapshot-controller-crd | 1.7.202 |
| rke2-snapshot-validation-webhook | 1.7.302 |

Packaged Component Versions

| Component | Version |
|---|---|
| Kubernetes | v1.29.4 |
| Etcd | v3.5.9-k3s1 |
| Containerd | v1.7.11-k3s2 |
| Runc | v1.1.12 |
| Metrics-server | v0.7.1 |
| CoreDNS | v1.11.1 |
| Ingress-Nginx | nginx-1.9.6-hardened1 |
| Helm-controller | v0.15.9 |

Available CNIs

| Component | Version | FIPS Compliant |
|---|---|---|
| Canal (Default) | Flannel v0.25.1, Calico v3.27.3 | Yes |
| Calico | v3.27.3 | No |
| Cilium | v1.15.4 | No |
| Multus | v4.0.2 | No |

Helpful Links

As always, we welcome and appreciate feedback from our community of users.

v1.29.4-rc4+rke2r1: v1.29.4-rc4+rke2r1

Compare Source

v1.29.4-rc3+rke2r1: v1.29.4-rc3+rke2r1

Compare Source

v1.29.4-rc2+rke2r1: v1.29.4-rc2+rke2r1

Compare Source

v1.29.4-rc1+rke2r1: v1.29.4-rc1+rke2r1

Compare Source


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.


@nicholasdille-bot left a comment


Auto-approved because label type/renovate is present.


github-actions bot commented May 1, 2024

🔍 Vulnerabilities of ghcr.io/uniget-org/tools/rke2:1.29.4-rke2r1

📦 Image Reference: ghcr.io/uniget-org/tools/rke2:1.29.4-rke2r1
digest: sha256:bb0bd53529d9a6b66ead5273fcca29b8c354f54f5d64b39099f7bcc7dd4a2e98
vulnerabilities: critical: 0, high: 1, medium: 3, low: 0
platform: linux/amd64
size: 35 MB
packages: 317
critical: 0 high: 1 medium: 0 low: 0 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc 0.35.0 (golang)

pkg:golang/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/[email protected]

high 7.5: CVE-2023-47108 Allocation of Resources Without Limits or Throttling

Affected range: <0.46.0
Fixed version: 0.46.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

The grpc Unary Server Interceptor in opentelemetry-go-contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go

```go
// UnaryServerInterceptor returns a grpc.UnaryServerInterceptor suitable
// for use in a grpc.NewServer call.
func UnaryServerInterceptor(opts ...Option) grpc.UnaryServerInterceptor {
```

out of the box adds the labels

  • net.peer.sock.addr
  • net.peer.sock.port

that have unbound cardinality. It leads to the server's potential memory exhaustion when many malicious requests are sent.

Details

An attacker can easily flood the peer address and port for requests.

PoC

Apply the attached patch to the example and run the client multiple times. Observe how each request will create a unique histogram and how the memory consumption increases during it.

Impact

In order to be affected, the program has to configure a metrics pipeline, use UnaryServerInterceptor, and not filter client IP addresses and ports via middleware, proxies, etc.

Others

It is similar to already reported vulnerabilities.

Workaround for affected versions

As a workaround to stop being affected, a view removing the attributes can be used.

The other possibility is to disable grpc metrics instrumentation by passing otelgrpc.WithMeterProvider option with noop.NewMeterProvider.
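As a sketch of that second workaround (assuming the otelgrpc option names and the go.opentelemetry.io/otel/metric/noop package as published upstream; not verified against the exact versions shipped in this image):

```go
package main

import (
	"google.golang.org/grpc"

	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"go.opentelemetry.io/otel/metric/noop"
)

func main() {
	// Passing a no-op MeterProvider disables the gRPC metrics
	// instrumentation (and with it the unbounded peer attributes),
	// while leaving any tracing configuration untouched.
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(otelgrpc.UnaryServerInterceptor(
			otelgrpc.WithMeterProvider(noop.NewMeterProvider()),
		)),
	)
	_ = srv
}
```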

Solution provided by upgrading

In PR #4322, to be released with v0.46.0, the attributes were removed.

critical: 0 high: 0 medium: 1 low: 0 golang.org/x/net 0.17.0 (golang)

pkg:golang/golang.org/x/[email protected]

medium 5.3: CVE-2023-45288 Uncontrolled Resource Consumption

Affected range: <0.23.0
Fixed version: 0.23.0
CVSS Score: 5.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L
Description

An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames. Maintaining HPACK state requires parsing and processing all HEADERS and CONTINUATION frames on a connection. When a request's headers exceed MaxHeaderBytes, no memory is allocated to store the excess headers, but they are still parsed. This permits an attacker to cause an HTTP/2 endpoint to read arbitrary amounts of header data, all associated with a request which is going to be rejected. These headers can include Huffman-encoded data which is significantly more expensive for the receiver to decode than for an attacker to send. The fix sets a limit on the amount of excess header frames we will process before closing a connection.

critical: 0 high: 0 medium: 1 low: 0 gopkg.in/square/go-jose.v2 2.6.0 (golang)

pkg:golang/gopkg.in/square/[email protected]

medium: CVE-2024-28180

Affected range: >=0
Fixed version: Not Fixed
Description

An attacker could send a JWE containing compressed data that used large amounts of memory and CPU when decompressed by Decrypt or DecryptMulti.

critical: 0 high: 0 medium: 1 low: 0 github.com/docker/docker 25.0.4+incompatible (golang)

pkg:golang/github.com/docker/[email protected]+incompatible

medium 5.9: CVE-2024-29018 Incorrect Resource Transfer Between Spheres

Affected range: >=25.0.0, <25.0.5
Fixed version: 25.0.5
CVSS Score: 5.9
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N
Description

Moby is an open source container framework originally developed by Docker Inc. as Docker. It is a key component of Docker Engine, Docker Desktop, and other distributions of container tooling or runtimes. As a batteries-included container runtime, Moby comes with a built-in networking implementation that enables communication between containers, and between containers and external resources.

Moby's networking implementation allows for creating and using many networks, each with their own subnet and gateway. This feature is frequently referred to as custom networks, as each network can have a different driver, set of parameters, and thus behaviors. When creating a network, the --internal flag is used to designate a network as internal. The internal attribute in a docker-compose.yml file may also be used to mark a network internal, and other API clients may specify the internal parameter as well.

When containers with networking are created, they are assigned unique network interfaces and IP addresses (typically from a non-routable RFC 1918 subnet). The root network namespace (hereafter referred to as the 'host') serves as a router for non-internal networks, with a gateway IP that provides SNAT/DNAT to/from container IPs.

Containers on an internal network may communicate between each other, but are precluded from communicating with any networks the host has access to (LAN or WAN) as no default route is configured, and firewall rules are set up to drop all outgoing traffic. Communication with the gateway IP address (and thus appropriately configured host services) is possible, and the host may communicate with any container IP directly.

In addition to configuring the Linux kernel's various networking features to enable container networking, dockerd directly provides some services to container networks. Principal among these is serving as a resolver, enabling service discovery (looking up other containers on the network by name), and resolution of names from an upstream resolver.

When a DNS request for a name that does not correspond to a container is received, the request is forwarded to the configured upstream resolver (by default, the host's configured resolver). This request is made from the container network namespace: the level of access and routing of traffic is the same as if the request was made by the container itself.

As a consequence of this design, containers solely attached to internal network(s) will be unable to resolve names using the upstream resolver, as the container itself is unable to communicate with that nameserver. Only the names of containers also attached to the internal network are able to be resolved.

Many systems will run a local forwarding DNS resolver, typically present on a loopback address (127.0.0.0/8), such as systemd-resolved or dnsmasq. Common loopback address examples include 127.0.0.1 or 127.0.0.53. As the host and any containers have separate loopback devices, a consequence of the design described above is that containers are unable to resolve names from the host's configured resolver, as they cannot reach these addresses on the host loopback device.

To bridge this gap, and to allow containers to properly resolve names even when a local forwarding resolver is used on a loopback address, dockerd will detect this scenario and instead forward DNS requests from the host/root network namespace. The loopback resolver will then forward the requests to its configured upstream resolvers, as expected.

Impact

Because dockerd will forward DNS requests to the host loopback device, bypassing the container network namespace's normal routing semantics entirely, internal networks can unexpectedly forward DNS requests to an external nameserver.

By registering a domain for which they control the authoritative nameservers, an attacker could arrange for a compromised container to exfiltrate data by encoding it in DNS queries that will eventually be answered by their nameservers. For example, if the domain evil.example was registered, the authoritative nameserver(s) for that domain could (eventually and indirectly) receive a request for this-is-a-secret.evil.example.

Docker Desktop is not affected, as Docker Desktop always runs an internal resolver on an RFC 1918 address.

Patches

Moby releases 26.0.0-rc3, 25.0.5 (released) and 23.0.11 (to be released) are patched to prevent forwarding DNS requests from internal networks.

Workarounds

  • Run containers intended to be solely attached to internal networks with a custom upstream address (--dns argument to docker run, or API equivalent), which will force all upstream DNS queries to be resolved from the container network namespace.
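That workaround can be sketched in Compose form (service and network names here are hypothetical, and 192.0.2.53 is a documentation-range placeholder for a real upstream resolver):

```yaml
# docker-compose.yml: container on an internal network with a pinned
# upstream DNS server, so lookups resolve from the container namespace
# instead of being forwarded via the host loopback resolver.
services:
  worker:
    image: alpine
    dns:
      - 192.0.2.53
    networks:
      - backend
networks:
  backend:
    internal: true
```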


github-actions bot commented May 1, 2024

PR is clean and can be merged. See https://github.com/uniget-org/tools/actions/runs/8904091929.

@github-actions bot merged commit 12eb93b into main on May 1, 2024
9 of 10 checks passed
@github-actions bot deleted the renovate/rancher-rke2-1.29.x branch on May 1, 2024 at 01:40