
Error when trying to modify a proxy device after last update: "Couldn't identify AppArmor cache directory" #1205

Closed
3 tasks done
tarruda opened this issue Sep 9, 2024 · 4 comments · Fixed by #1206


tarruda commented Sep 9, 2024

Required information

  • Distribution: Raspbian
  • Distribution version: bookworm (debian 12)
  • The output of "incus info" or if that fails:
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instance_oci
- clustering_groups_config
- instances_lxcfs_per_instance
- clustering_groups_vm_cpu_definition
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: pi
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - aarch64
  - armv6l
  - armv7l
  - armv8l
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB9zCCAX2gAwIBAgIQDYYXX6jHTs1izt/a1VQIGDAKBggqhkjOPQQDAzAvMRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMRIwEAYDVQQDDAlyb290QHJwaTQwHhcN
    MjMxMTEyMjMwODUxWhcNMzMxMTA5MjMwODUxWjAvMRkwFwYDVQQKExBMaW51eCBD
    b250YWluZXJzMRIwEAYDVQQDDAlyb290QHJwaTQwdjAQBgcqhkjOPQIBBgUrgQQA
    IgNiAATYDM6T0ceyYQfm+03A6L24j5dQ+8bORsVYhHxNqG6Wt8HC2Ehj23Xk0McY
    t/hrTWyxUd9DKzBhHGtOGvCLak8+vw0zikPKHRhnEgnfVDIgdTptYRSAmcXowSDQ
    AED9VXajXjBcMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAM
    BgNVHRMBAf8EAjAAMCcGA1UdEQQgMB6CBHJwaTSHBH8AAAGHEAAAAAAAAAAAAAAA
    AAAAAAEwCgYIKoZIzj0EAwMDaAAwZQIwN+1GWa7yUXEDaaywZoyK+an+fmDslJUm
    smyTPf8jqnISD68wOEVIUqZQbP+w1huvAjEA2RcCYko0M8Q7Z789b7j4/W+Y2hOP
    L6JKw/6VEqgRVMI7OuSNPGqjeaND12JDGeAM
    -----END CERTIFICATE-----
  certificate_fingerprint: 355d7c06072983624cabddefc9d483b51419dc727debd3a0e7de1ab9e6d7bd7b
  driver: qemu | lxc
  driver_version: 9.0.2 | 6.0.1
  firewall: nftables
  kernel: Linux
  kernel_architecture: aarch64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "false"
    unpriv_fscaps: "true"
  kernel_version: 6.6.47+rpt-rpi-v8
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: rpi4
  server_pid: 981
  server_version: "6.5"
  storage: btrfs
  storage_version: "6.2"
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: btrfs
    version: "6.2"
    remote: false

Issue description

After the last update (IIRC I was on 0.7, now on 6.5; I also updated a lot of other packages, including the kernel), I'm unable to make changes to proxy devices due to "Couldn't identify AppArmor cache directory". For example, I have a container called "dante-socks5-proxy". This is what happens when I try to modify it:

$ incus config edit dante-socks5-proxy  # make any change to one of the proxy devices, in this case I tried to modify listen port from "pac-server"
Config parsing error: Failed to remove device "pac-server": Couldn't identify AppArmor cache directory
Press enter to open the editor again or ctrl+c to abort change

Steps to reproduce

Not sure how to reproduce, except that all containers I had in Incus 0.7 were affected by this after upgrading to 6.5. Apparently I can modify the configuration as long as I don't touch the proxy devices.

Information to attach

  • Container log (incus info NAME --show-log)
$ incus info  dante-socks5-proxy --show-log
Name: dante-socks5-proxy
Status: STOPPED
Type: container
Architecture: aarch64
Created: 2024/09/07 10:28 -03
Last Used: 2024/09/08 21:07 -03

Log:

lxc dante-socks5-proxy 20240909000749.312 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:165 - newuidmap binary is missing
lxc dante-socks5-proxy 20240909000749.313 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:171 - newgidmap binary is missing
lxc dante-socks5-proxy 20240909000749.326 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:165 - newuidmap binary is missing
lxc dante-socks5-proxy 20240909000749.327 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:171 - newgidmap binary is missing
lxc dante-socks5-proxy 20240909000749.331 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1897 - No such file or directory - Failed to fchownat(16, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc dante-socks5-proxy 20240909000749.331 WARN     cgfsng - ../src/lxc/cgroups/cgfsng.c:fchowmodat:1897 - No such file or directory - Failed to fchownat(16, memory.reclaim, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )
lxc dante-socks5-proxy 20240909000757.953 WARN     attach - ../src/lxc/attach.c:get_attach_context:478 - No security context received
lxc dante-socks5-proxy 20240909000757.955 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:165 - newuidmap binary is missing
lxc dante-socks5-proxy 20240909000757.955 WARN     idmap_utils - ../src/lxc/idmap_utils.c:lxc_map_ids:171 - newgidmap binary is missing
  • Container configuration (incus config show NAME --expanded)
architecture: aarch64
config:
 boot.autostart: "true"
 image.architecture: arm64
 image.description: Debian bookworm arm64 (20240906_05:24)
 image.os: Debian
 image.release: bookworm
 image.serial: "20240906_05:24"
 image.type: squashfs
 image.variant: default
 volatile.base_image: ec865857d048deea3488c2c16f401e93eca764234f9ba98d2be2a3426d87020f
 volatile.cloud-init.instance-id: 5d2b376a-c112-44bf-a6d8-f1cd89dbce9c
 volatile.eth0.hwaddr: 00:16:3e:2b:66:fe
 volatile.idmap.base: "0"
 volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
 volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
 volatile.last_state.idmap: '[]'
 volatile.last_state.power: STOPPED
 volatile.uuid: f6bf763f-e711-477e-8b74-1f0478321e65
 volatile.uuid.generation: f6bf763f-e711-477e-8b74-1f0478321e65
devices:
 eth0:
   name: eth0
   network: incusbr0
   type: nic
 pac-server:
   connect: tcp:12.12.12.210:80
   listen: tcp:192.168.1.254:8080
   nat: "true"
   type: proxy
 root:
   path: /
   pool: default
   type: disk
 socks5:
   connect: tcp:12.12.12.210:1080
   listen: tcp:192.168.1.254:1080
   type: proxy
ephemeral: false
profiles:
- default
stateful: false
description: ""
  • Main daemon log (at /var/log/incus/incusd.log)
time="2024-09-08T20:59:08-03:00" level=warning msg=" - Couldn't find the CGroup memory controller, memory limits will be ignored"
time="2024-09-08T20:59:08-03:00" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
time="2024-09-08T20:59:12-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=casaos-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T20:59:12-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pihole-dns driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T20:59:12-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pihole-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T20:59:12-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=wgeasy-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T20:59:12-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=wireguard driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T20:59:13-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pac-server driver=proxy err="br_netfilter kernel module not loaded" instance=dante-socks5-proxy project=default
time="2024-09-08T21:05:58-03:00" level=warning msg="AppArmor support has been disabled because of lack of kernel support"
time="2024-09-08T21:05:58-03:00" level=warning msg=" - AppArmor support has been disabled, Disabled because of lack of kernel support"
time="2024-09-08T21:05:58-03:00" level=warning msg=" - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored"
time="2024-09-08T21:05:58-03:00" level=warning msg=" - Couldn't find the CGroup memory controller, memory limits will be ignored"
time="2024-09-08T21:05:58-03:00" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
time="2024-09-08T21:06:02-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=casaos-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T21:06:02-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pihole-dns driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T21:06:02-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pihole-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T21:06:02-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=wgeasy-web driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T21:06:03-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=wireguard driver=proxy err="br_netfilter kernel module not loaded" instance=casaos project=default
time="2024-09-08T21:07:49-03:00" level=warning msg="IPv4 bridge netfilter not enabled. Instances using the bridge will not be able to connect to the proxy listen IP" device=pac-server driver=proxy err="br_netfilter kernel module not loaded" instance=dante-socks5-proxy project=default
time="2024-09-08T21:09:55-03:00" level=warning msg="AppArmor support has been disabled because of lack of kernel support"
time="2024-09-08T21:09:55-03:00" level=warning msg=" - AppArmor support has been disabled, Disabled because of lack of kernel support"
time="2024-09-08T21:09:55-03:00" level=warning msg=" - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored"
time="2024-09-08T21:09:55-03:00" level=warning msg=" - Couldn't find the CGroup memory controller, memory limits will be ignored"
time="2024-09-08T21:09:55-03:00" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
time="2024-09-08T21:18:04-03:00" level=warning msg="AppArmor support has been disabled because of lack of kernel support"
time="2024-09-08T21:18:04-03:00" level=warning msg=" - AppArmor support has been disabled, Disabled because of lack of kernel support"
time="2024-09-08T21:18:04-03:00" level=warning msg=" - Couldn't find the CGroup hugetlb controller, hugepage limits will be ignored"
time="2024-09-08T21:18:04-03:00" level=warning msg=" - Couldn't find the CGroup memory controller, memory limits will be ignored"
time="2024-09-08T21:18:04-03:00" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"

Let me know if you need any more information that might be relevant to this error.

stgraber added a commit to stgraber/incus that referenced this issue Sep 9, 2024
stgraber added a commit to stgraber/incus that referenced this issue Sep 9, 2024
jalbstmeijer (Contributor) commented:

I assume this will be fixed in 6.6?

Is there an older version to go back to that will work until then?

incus network delete natbr0
Error: Couldn't identify AppArmor cache directory


tarruda commented Sep 11, 2024

@jalbstmeijer the fix was merged but not released yet.

Until a fix is released, the only way to bypass the error is to activate AppArmor. If you are using Raspbian, you can enable it by appending "apparmor=1 security=apparmor" to /boot/firmware/cmdline.txt and then rebooting.
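For reference, the append can be scripted; this is a minimal sketch that operates on a local copy of the file with illustrative sample content (on a real Pi you would point CMDLINE at /boot/firmware/cmdline.txt, run with sudo, and reboot afterwards):

```shell
# Sketch: append the AppArmor kernel parameters to cmdline.txt, which must
# remain a single line. Operating on a local copy with sample content here;
# on a real system, set CMDLINE=/boot/firmware/cmdline.txt and use sudo.
CMDLINE=./cmdline.txt

# Illustrative stand-in for the real kernel command line.
printf 'console=serial0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait\n' > "$CMDLINE"

# Append only if not already present, keeping everything on one line.
if ! grep -q 'security=apparmor' "$CMDLINE"; then
  sed -i 's/$/ apparmor=1 security=apparmor/' "$CMDLINE"
fi

cat "$CMDLINE"
```

After rebooting, the daemon log should no longer report "AppArmor support has been disabled because of lack of kernel support".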

stgraber added a commit that referenced this issue Sep 14, 2024

acidvegas commented Sep 24, 2024

@stgraber got a temporary fix? I am on void linux and they take forever to upstream shit with incus...

stgraber (Member) commented:

Nope, the main options are:

  • Revert to previous Incus release (may need a DB revert too)
  • Install AppArmor
  • Cherry-pick the fix
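For the cherry-pick route, the git mechanics look like this. The sketch below uses a throwaway local repo with empty commits standing in for a real lxc/incus clone (the tag name and commit messages are illustrative; in practice you would clone lxc/incus, check out the 6.5 release tag, cherry-pick the fix commit from PR #1206, and rebuild):

```shell
# Sketch of the cherry-pick workflow using a throwaway sandbox repo in place
# of a real lxc/incus checkout; all commits and names here are illustrative.
set -e
rm -rf /tmp/cherry-demo && mkdir /tmp/cherry-demo && cd /tmp/cherry-demo
git init -q
git config user.name demo
git config user.email demo@example.com

git commit -q --allow-empty -m "release v6.5"                       # stand-in for the release
git tag v6.5
git commit -q --allow-empty -m "apparmor: fix cache dir detection"  # stand-in for the fix
FIX=$(git rev-parse HEAD)

git checkout -q -b backport v6.5          # branch from the release tag
git cherry-pick --allow-empty "$FIX"      # apply just the fix on top
git log --oneline
```

The resulting "backport" branch carries the release plus only the fix, which you can then build and install in place of the packaged binary.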

bketelsen pushed a commit to bketelsen/incus that referenced this issue Feb 4, 2025