No volumes: section. docker-compose: no declaration was found in the volumes section #17

Closed
vigeland opened this issue May 18, 2021 · 18 comments

Comments

@vigeland

I tested it with Portainer as an example.
Because there is no volumes section, docker-compose raises an error:
docker-compose: no declaration was found in the volumes section

Output:
version: "3"
services:
portainer:
container_name: portainer
entrypoint:
- /portainer
environment:
- PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
hostname: 744e2324ef35
image: portainer/portainer-ce
ipc: private
logging:
driver: json-file
options: {}
networks:
- bridge
ports:
- 8000:8000/tcp
- 9000:9000/tcp
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer_data:/data
working_dir: /
networks:
bridge:

The yml file is missing something like this:

volumes:
  portainer_data:
    external: true
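
For illustration, a minimal sketch (not the script's actual code) of how the named volumes referenced by a service could be collected into the top-level volumes: declaration that docker-compose expects. The service_volumes list and the path-prefix check are assumptions made up for this example; pyaml is used only because the script already depends on it.

import pyaml

# Hypothetical service-level volume list, as in the generated output above.
service_volumes = [
    "/var/run/docker.sock:/var/run/docker.sock",
    "portainer_data:/data",
]

# Entries whose left-hand side is not a path are named volumes and need a
# top-level declaration; "external: true" reuses the volume already on the host.
top_level_volumes = {
    entry.split(":")[0]: {"external": True}
    for entry in service_volumes
    if not entry.startswith(("/", "./", "~"))
}

pyaml.p({"volumes": top_level_volumes})
# volumes:
#   portainer_data:
#     external: true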

@Red5d
Owner

Red5d commented May 18, 2021

If you run "docker inspect portainer", what does the volume/bind-mount section look like for that container?

@vigeland
Author

For clarification: "Output" means the output from the autocompose script, and the missing part is needed to re-import it with docker-compose up -d. Without those three lines I get the error.

portainer_data is a volume:
docker volume ls
DRIVER    VOLUME NAME
local     portainer_data

Do you mean these parts?
"HostConfig": {
"Binds": [
"portainer_data:/data:rw",
"/var/run/docker.sock:/var/run/docker.sock:rw"
],
"ContainerIDFile": "",
...
"Mounts": [
{
"Type": "volume",
"Name": "portainer_data",
"Source": "/var/lib/docker/volumes/portainer_data/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
...
"Image": "portainer/portainer-ce",
"Volumes": {
"/data": {},
"/var/run/docker.sock": {}
},

@Red5d
Owner

Red5d commented Aug 7, 2021

I think this has been fixed in some recent updates. Try using the latest image below and see if that fixes the issue.

docker pull ghcr.io/red5d/docker-autocompose:latest

@nikolas-digitalBabylon

I think this has been fixed in some recent updates. Try using the latest image below and see if that fixes the issue.

docker pull ghcr.io/red5d/docker-autocompose:latest

Hi, I actually just checked, and I see the same issue. The volumes section is missing.

@damntourists

damntourists commented Oct 13, 2021

I'm having the same issue. See below:

brett@portainer:~$ cat .bash_aliases 
autocompose() {
	docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose:latest "$@"
}
brett@portainer:~$ 
brett@portainer:~$ docker ps | grep nzb
a08cfbbcd6fc   ghcr.io/linuxserver/nzbget:latest   "/init"   2 weeks ago   Up 2 weeks   6789/tcp   nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc
brett@portainer:~$ 
brett@portainer:~$ 
brett@portainer:~$ docker inspect a08cfbbcd6fc
[
    {
        "Id": "a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f",
        "Created": "2021-09-29T08:57:31.791392313Z",
        "Path": "/init",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 7498,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-09-29T08:57:37.783289223Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:9652129585d1f7fceb72b883ac5e4846d08548d763041c6aba5e4b33ca6012eb",
        "ResolvConfPath": "/var/lib/docker/containers/a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f/hostname",
        "HostsPath": "/var/lib/docker/containers/a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f/hosts",
        "LogPath": "/var/lib/docker/containers/a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f/a08cfbbcd6fcebf2fbb391ec697d419b14a476f7f2f93da2eb52050b02ee743f-json.log",
        "Name": "/nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "default",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "Mounts": [
                {
                    "Type": "bind",
                    "Source": "/home/brett/portainer_volumes/nzbget",
                    "Target": "/config"
                },
                {
                    "Type": "bind",
                    "Source": "/mnt/nas/sync/nzb",
                    "Target": "/downloads"
                }
            ],
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/906b2b39d474bf53ca8a79273d398780da5d827a4ec2816d8b0f727d5a64bd31-init/diff:/var/lib/docker/overlay2/fd4696ec720e9738cbc8eebebd379c1b301dbab0768aacbca9ae3d02e2a23910/diff:/var/lib/docker/overlay2/a5befe114dc1506ac23b6ae50053a66265703c55dd6260c06d2b63e471afa29e/diff:/var/lib/docker/overlay2/1e57731f1cde312f07892c46cc1f09f9fb9a46425bab71adb5c8e533ec05bf75/diff:/var/lib/docker/overlay2/a3cb23de53798ac0dc748bc97efe2dcbe8a393a9622b9711c367ecf67c6f7bf4/diff:/var/lib/docker/overlay2/39ee23ccf283c26860079f07c5b4f62239f5fca60379cd5d0ee642c6236f582c/diff:/var/lib/docker/overlay2/9b2e02a6812f36576724d7acb55948cbffe8fdb857a2b7ec2a50070bbbf30ec6/diff:/var/lib/docker/overlay2/156189c432ce4aab5cd0229725cc1126d528e9393f494dedcbc6525e71f472ff/diff:/var/lib/docker/overlay2/b2b61cb710ce01532fc9bec52e5786be94db6a8061b40a4031eb6038bdce960e/diff",
                "MergedDir": "/var/lib/docker/overlay2/906b2b39d474bf53ca8a79273d398780da5d827a4ec2816d8b0f727d5a64bd31/merged",
                "UpperDir": "/var/lib/docker/overlay2/906b2b39d474bf53ca8a79273d398780da5d827a4ec2816d8b0f727d5a64bd31/diff",
                "WorkDir": "/var/lib/docker/overlay2/906b2b39d474bf53ca8a79273d398780da5d827a4ec2816d8b0f727d5a64bd31/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/home/brett/portainer_volumes/nzbget",
                "Destination": "/config",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/nas/sync/nzb",
                "Destination": "/downloads",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "a08cfbbcd6fc",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "6789/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PGID=1000",
                "PUID=1000",
                "TZ=America/Los_Angeles",
                "UMASK_SET=022",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "PS1=$(whoami)@$(hostname):$(pwd)\\$ ",
                "HOME=/root",
                "TERM=xterm"
            ],
            "Cmd": null,
            "Image": "ghcr.io/linuxserver/nzbget:latest@sha256:e9408703e4378f61d7c4dc0948620f3b2a5b2c00cd87b82c55e84f8634affcc8",
            "Volumes": {
                "/config": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/init"
            ],
            "OnBuild": null,
            "Labels": {
                "build_version": "Linuxserver.io version:- v21.0-ls68 Build-date:- 2021-01-18T03:05:18+00:00",
                "com.docker.stack.namespace": "nzb",
                "com.docker.swarm.node.id": "9b5wk469txhr8jyz4tojovr2n",
                "com.docker.swarm.service.id": "otnm7k37nl3ytcqd9ch51vpx4",
                "com.docker.swarm.service.name": "nzb_nzbget",
                "com.docker.swarm.task": "",
                "com.docker.swarm.task.id": "c6cwttfcl9jedclfxo5y7hkrc",
                "com.docker.swarm.task.name": "nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc",
                "maintainer": "thelamer"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "ae09667add3d71e55ba730283815c876fdf951fa8b51828d53bf074c28753091",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "6789/tcp": null
            },
            "SandboxKey": "/var/run/docker/netns/ae09667add3d",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "ingress": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.0.5"
                    },
                    "Links": null,
                    "Aliases": [
                        "a08cfbbcd6fc"
                    ],
                    "NetworkID": "tjp5c3mbtbivv3812uopx3bh4",
                    "EndpointID": "651d6c8030a6e66b178f5b5d3a48b4a4ec80d83bd3f1516c147f32be1ea01f72",
                    "Gateway": "",
                    "IPAddress": "10.0.0.5",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:00:00:05",
                    "DriverOpts": null
                },
                "nzb_default": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.1.3"
                    },
                    "Links": null,
                    "Aliases": [
                        "a08cfbbcd6fc"
                    ],
                    "NetworkID": "zy9mfxdwq0jype0bdb9oh69ms",
                    "EndpointID": "b93af2c0af7d38bc96d0e26c88a15f9d86bc9161871dd927684ce08f98776757",
                    "Gateway": "",
                    "IPAddress": "10.0.1.3",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:00:01:03",
                    "DriverOpts": null
                }
            }
        }
    }
]
brett@portainer:~$ 
brett@portainer:~$ autocompose a08cfbbcd6fc
Unable to find image 'ghcr.io/red5d/docker-autocompose:latest' locally
latest: Pulling from red5d/docker-autocompose
a0d0a0d46f8b: Pull complete 
ba51967de001: Pull complete 
5ed3eaf4d331: Pull complete 
8fec21e4ed42: Pull complete 
5cb87e8bc1a2: Pull complete 
4a3acfe240f1: Pull complete 
2ccee4582b0f: Pull complete 
d23dfe695497: Pull complete 
Digest: sha256:ce8c5d9f929f0c2c4b4c7a04f6fb7fb96905481169f2ca2e6a5ed37eb6d2cfcf
Status: Downloaded newer image for ghcr.io/red5d/docker-autocompose:latest
version: "3"
services:
  nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc:
    container_name: nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc
    entrypoint:
      - /init
    environment:
      - PGID=1000
      - PUID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=022
      - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - 'PS1=$(whoami)@$(hostname):$(pwd)\$ '
      - HOME=/root
      - TERM=xterm
    expose:
      - 6789/tcp
    hostname: a08cfbbcd6fc
    image: ghcr.io/linuxserver/nzbget:latest@sha256:e9408703e4378f61d7c4dc0948620f3b2a5b2c00cd87b82c55e84f8634affcc8
    ipc: private
    labels:
      build_version: 'Linuxserver.io version:- v21.0-ls68 Build-date:- 2021-01-18T03:05:18+00:00'
      com.docker.stack.namespace: nzb
      com.docker.swarm.node.id: 9b5wk469txhr8jyz4tojovr2n
      com.docker.swarm.service.id: otnm7k37nl3ytcqd9ch51vpx4
      com.docker.swarm.service.name: nzb_nzbget
      com.docker.swarm.task: ""
      com.docker.swarm.task.id: c6cwttfcl9jedclfxo5y7hkrc
      com.docker.swarm.task.name: nzb_nzbget.1.c6cwttfcl9jedclfxo5y7hkrc
      maintainer: thelamer
    logging:
      driver: json-file
      options: {}
    networks:
      - nzb_default
      - ingress
networks:
  ingress:
    external: true
  nzb_default:
    external: true
brett@portainer:~$ 

@damntourists

damntourists commented Oct 14, 2021

@Red5d

Following up: I forked this repo and tested it on my end; perhaps this would fix it?

At https://github.com/Red5d/docker-autocompose/blob/master/autocompose.py#L66

        'volumes': cattrs['HostConfig']['Binds'],

change to:

        'volumes': cattrs['HostConfig']['Binds'] or
                   [f'{m["Source"]}:{m["Target"]}' for m in cattrs['HostConfig']['Mounts']],

I don't have any examples that include the read/write mode, so I'm unsure if that exists under the Mounts section.
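
For what it's worth, a rough, self-contained sketch of how the read/write mode could be included. It reads the top-level "Mounts" array from docker inspect (which carries an "RW" flag in the output above) rather than HostConfig → Mounts; the field names are assumptions based on that output and not verified against every Docker version.

def mounts_to_volumes(mounts):
    """Build compose-style volume strings from a container's "Mounts" entries."""
    volumes = []
    for m in mounts:
        # Named volumes use their name; bind mounts use the host path.
        source = m.get("Name") if m.get("Type") == "volume" else m.get("Source")
        entry = f'{source}:{m["Destination"]}'
        if not m.get("RW", True):  # append :ro for read-only mounts
            entry += ":ro"
        volumes.append(entry)
    return volumes

# Example with the two bind mounts from the inspect output above:
print(mounts_to_volumes([
    {"Type": "bind", "Source": "/home/brett/portainer_volumes/nzbget",
     "Destination": "/config", "RW": True},
    {"Type": "bind", "Source": "/mnt/nas/sync/nzb",
     "Destination": "/downloads", "RW": True},
]))
# ['/home/brett/portainer_volumes/nzbget:/config', '/mnt/nas/sync/nzb:/downloads']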

@Red5d
Owner

Red5d commented Oct 16, 2021

@damntourists thanks, I've got a really busy weekend, but I'll check on this after. If you've verified that it works, feel free to submit a pull request to change that item and I'll review/merge it.

@Sapemeg

Sapemeg commented Oct 19, 2021

@Red5d will you be merging that fix soon?

@acdoussan
Contributor

Running into this same problem. Is there an open PR for this?

acdoussan pushed a commit to acdoussan/docker-autocompose that referenced this issue Aug 13, 2022
Red5d pushed a commit that referenced this issue Aug 13, 2022
* fix volume export based off of #17 (comment)

* remove unneeded space

* export volumes in addition to networks

* fix syntax error

* actually fix syntax errors

Co-authored-by: Adam Doussan <[email protected]>
@d-EScape
Contributor

d-EScape commented Aug 20, 2022

I ran into a similar problem. The latest version seems to ignore binds and the read-only flag.
I just started experimenting and needed to figure out how compose and this script even work, so sorry, no fork or completed/tested PR, but hopefully this is still helpful.

As far as I understand, we need a volume list returned in the values dict for volumes and binds, plus a volumes dict stating which volumes already exist and should be reused (are 'external'). Type 'bind' should use its source instead of a name in the compose file (only volumes have names).

The code below seems to do that, at least for my containers (both volumes and binds, and some read-only). It also adds the ":ro" when needed.

EDIT: I also made the reuse of existing volumes optional with a --createvolumes command line argument.

#! /usr/bin/env python3
import datetime
import sys, argparse, pyaml, docker
from collections import OrderedDict


def list_container_names():
    c = docker.from_env()
    return [container.name for container in c.containers.list(all=True)]


def main():
    parser = argparse.ArgumentParser(description='Generate docker-compose yaml definition from running container.')
    parser.add_argument('-a', '--all', action='store_true', help='Include all active containers')
    parser.add_argument('-v', '--version', type=int, default=3, help='Compose file version (1 or 3)')
    parser.add_argument('cnames', nargs='*', type=str, help='The name of the container to process.')
    parser.add_argument('-c', '--createvolumes', action='store_true', help='Create new volumes instead of reusing existing ones')
    args = parser.parse_args()

    container_names = args.cnames
    if args.all:
        container_names.extend(list_container_names())

    struct = {}
    networks = {}
    volumes = {}
    for cname in container_names:
        cfile, c_networks, c_volumes = generate(cname, createvolumes=args.createvolumes)

        struct.update(cfile)

        if c_networks is None:
            networks = None
        else:
            networks.update(c_networks)

        if c_volumes is None:
            volumes = None
        else:
            volumes.update(c_volumes)

    render(struct, args, networks, volumes)


def render(struct, args, networks, volumes):
    # Render yaml file
    if args.version == 1:
        pyaml.p(OrderedDict(struct))
    else:
        ans = {'version': '"3"', 'services': struct}

        if networks is not None:
            ans['networks'] = networks

        if volumes is not None:
            ans['volumes'] = volumes

        pyaml.p(OrderedDict(ans))


def is_date_or_time(s: str):
    for parse_func in [datetime.date.fromisoformat, datetime.datetime.fromisoformat]:
        try:
            parse_func(s.rstrip('Z'))
            return True
        except ValueError:
            pass
    return False


def fix_label(label: str):
    return f"'{label}'" if is_date_or_time(label) else label


def generate(cname, createvolumes=False):
    c = docker.from_env()

    try:
        cid = [x.short_id for x in c.containers.list(all=True) if cname == x.name or x.short_id in cname][0]
    except IndexError:
        print("That container is not available.", file=sys.stderr)
        sys.exit(1)

    cattrs = c.containers.get(cid).attrs


    # Build yaml dict structure

    cfile = {}
    cfile[cattrs['Name'][1:]] = {}
    ct = cfile[cattrs['Name'][1:]]

    default_networks = ['bridge', 'host', 'none']

    values = {
        'cap_add': cattrs['HostConfig']['CapAdd'],
        'cap_drop': cattrs['HostConfig']['CapDrop'],
        'cgroup_parent': cattrs['HostConfig']['CgroupParent'],
        'container_name': cattrs['Name'][1:],
        'devices': [],
        'dns': cattrs['HostConfig']['Dns'],
        'dns_search': cattrs['HostConfig']['DnsSearch'],
        'environment': cattrs['Config']['Env'],
        'extra_hosts': cattrs['HostConfig']['ExtraHosts'],
        'image': cattrs['Config']['Image'],
        'labels': {label: fix_label(value) for label, value in cattrs['Config']['Labels'].items()},
        'links': cattrs['HostConfig']['Links'],
        #'log_driver': cattrs['HostConfig']['LogConfig']['Type'],
        #'log_opt': cattrs['HostConfig']['LogConfig']['Config'],
        'logging': {'driver': cattrs['HostConfig']['LogConfig']['Type'], 'options': cattrs['HostConfig']['LogConfig']['Config']},
        'networks': {x for x in cattrs['NetworkSettings']['Networks'].keys() if x not in default_networks},
        'security_opt': cattrs['HostConfig']['SecurityOpt'],
        'ulimits': cattrs['HostConfig']['Ulimits'],
# the line below would not handle type bind
#        'volumes': [f'{m["Name"]}:{m["Destination"]}' for m in cattrs['Mounts'] if m['Type'] == 'volume'],
        'mounts': cattrs['Mounts'], #this could be moved outside of the dict. will only use it for generate
        'volume_driver': cattrs['HostConfig']['VolumeDriver'],
        'volumes_from': cattrs['HostConfig']['VolumesFrom'],
        'entrypoint': cattrs['Config']['Entrypoint'],
        'user': cattrs['Config']['User'],
        'working_dir': cattrs['Config']['WorkingDir'],
        'domainname': cattrs['Config']['Domainname'],
        'hostname': cattrs['Config']['Hostname'],
        'ipc': cattrs['HostConfig']['IpcMode'],
        'mac_address': cattrs['NetworkSettings']['MacAddress'],
        'privileged': cattrs['HostConfig']['Privileged'],
        'restart': cattrs['HostConfig']['RestartPolicy']['Name'],
        'read_only': cattrs['HostConfig']['ReadonlyRootfs'],
        'stdin_open': cattrs['Config']['OpenStdin'],
        'tty': cattrs['Config']['Tty']
    }

    # Populate devices key if device values are present
    if cattrs['HostConfig']['Devices']:
        values['devices'] = [x['PathOnHost']+':'+x['PathInContainer'] for x in cattrs['HostConfig']['Devices']]

    networks = {}
    if values['networks'] == set():
        del values['networks']
        assumed_default_network = list(cattrs['NetworkSettings']['Networks'].keys())[0]
        values['network_mode'] = assumed_default_network
        networks = None
    else:
        networklist = c.networks.list()
        for network in networklist:
            if network.attrs['Name'] in values['networks']:
                networks[network.attrs['Name']] = {'external': (not network.attrs['Internal']),
                                                   'name': network.attrs['Name']}
#     volumes = {}
#     if values['volumes'] is not None:
#         for volume in values['volumes']:
#             volume_name = volume.split(':')[0]
#             volumes[volume_name] = {'external': True}
#     else:
#         volumes = None
        
    # handles both the returned values['volumes'] (in c_file) and volumes for both, the bind and volume types
    # also includes the read only option
    volumes = {}
    mountpoints = []
    if values['mounts'] is not None:
        for mount in values['mounts']:
            destination = mount['Destination']
            if not mount['RW']:
                destination = destination + ':ro'
            if mount['Type'] == 'volume':
                mountpoints.append(mount['Name'] + ':' + destination)
                if not createvolumes:
                    volumes[mount['Name']] = {'external': True}    #to reuse an existing volume ... better to make that a choice? (cli argument)
            elif mount['Type'] == 'bind':
                mountpoints.append(mount['Source'] + ':' + destination)
        values['volumes'] = mountpoints
    if len(volumes) == 0:
        volumes = None
    values['mounts'] = None #remove this temporary data from the returned data


    # Check for command and add it if present.
    if cattrs['Config']['Cmd'] is not None:
        values['command'] = cattrs['Config']['Cmd']

    # Check for exposed/bound ports and add them if needed.
    try:
        expose_value = list(cattrs['Config']['ExposedPorts'].keys())
        ports_value = [cattrs['HostConfig']['PortBindings'][key][0]['HostIp']+':'+cattrs['HostConfig']['PortBindings'][key][0]['HostPort']+':'+key for key in cattrs['HostConfig']['PortBindings']]

        # If bound ports found, don't use the 'expose' value.
        if (ports_value != None) and (ports_value != "") and (ports_value != []) and (ports_value != 'null') and (ports_value != {}) and (ports_value != "default") and (ports_value != 0) and (ports_value != ",") and (ports_value != "no"):
            for index, port in enumerate(ports_value):
                if port[0] == ':':
                    ports_value[index] = port[1:]

            values['ports'] = ports_value
        else:
            values['expose'] = expose_value

    except (KeyError, TypeError):
        # No ports exposed/bound. Continue without them.
        ports = None

    # Iterate through values to finish building yaml dict.
    for key in values:
        value = values[key]
        if (value != None) and (value != "") and (value != []) and (value != 'null') and (value != {}) and (value != "default") and (value != 0) and (value != ",") and (value != "no"):
            ct[key] = value

    return cfile, networks, volumes


if __name__ == "__main__":
    main()

d-EScape added a commit to d-EScape/docker-autocompose that referenced this issue Aug 20, 2022
One PR that includes my suggestions for Red5d#17 and some new ones:

The -all option would not work because every iteration over container_names could set 'networks' and 'volumes' to None, even if a previous container had a network. Later iterations could then not add a network, because it was no longer a dict, resulting in an exception.

The code might need some cleaning up. I left some comments and old pieces (commented out) to explain to @Red5d what I did and why. Since I am new to this script and the docker-compose format I might have overlooked something. Please check.
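
As a side note, a minimal, self-contained sketch (not the actual PR) of the accumulation behaviour being described: keeping the shared dicts intact across containers and only collapsing empty ones to None at the end avoids the exception. The per_container data below is invented purely for illustration.

# Hypothetical per-container results; the second container has no extra networks
# or named volumes, which previously reset the shared dicts to None.
per_container = [
    ({"web": {}}, {"backend": {"external": True}}, {"web_data": {"external": True}}),
    ({"worker": {}}, None, None),
]

services, networks, volumes = {}, {}, {}
for cfile, c_networks, c_volumes in per_container:
    services.update(cfile)
    networks.update(c_networks or {})  # keep what earlier containers contributed
    volumes.update(c_volumes or {})

print(networks or None)  # {'backend': {'external': True}} rather than an exception
print(volumes or None)   # {'web_data': {'external': True}}
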
@d-EScape
Contributor

I have made a PR of the code above, including some other modifications/fixes.

Red5d added a commit that referenced this issue Aug 20, 2022

(from @d-EScape)

One PR that includes my suggestions for #17 and some new ones:

The -all option would not work because every iteration over container_names could set 'networks' and 'volumes' to None, even if a previous container had a network. Later iterations could then not add a network, because it was no longer a dict, resulting in an exception.

The code might need some cleaning up. I left some comments and old pieces (commented out) to explain to @Red5d what I did and why. Since I am new to this script and the docker-compose format I might have overlooked something. Please check.

Co-authored-by: d-EScape <[email protected]>
@Red5d
Owner

Red5d commented Aug 20, 2022

@d-EScape, thanks! The PR wasn't actually submitted to the repo, but I created one and merged it. I also made an additional commit just now to increase the compose file version number to 3.6, which is necessary for the output compose file to pass validation with "docker-compose config", given the network/volume capabilities recently added to the script.

@Anaerin

Anaerin commented Mar 3, 2023

This appears to still be happening with named volumes. For instance:

version: "3"
services:
  grafana:
    cap_add:
      - AUDIT_WRITE
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - FSETID
      - KILL
      - MKNOD
      - NET_BIND_SERVICE
      - NET_RAW
      - SETFCAP
      - SETGID
      - SETPCAP
      - SETUID
      - SYS_CHROOT
    cap_drop:
      - AUDIT_CONTROL
      - BLOCK_SUSPEND
      - DAC_READ_SEARCH
      - IPC_LOCK
      - IPC_OWNER
      - LEASE
      - LINUX_IMMUTABLE
      - MAC_ADMIN
      - MAC_OVERRIDE
      - NET_ADMIN
      - NET_BROADCAST
      - SYSLOG
      - SYS_ADMIN
      - SYS_BOOT
      - SYS_MODULE
      - SYS_NICE
      - SYS_PACCT
      - SYS_PTRACE
      - SYS_RAWIO
      - SYS_RESOURCE
      - SYS_TIME
      - SYS_TTY_CONFIG
      - WAKE_ALARM
    container_name: grafana
    entrypoint:
      - /run.sh
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_NAME=Main Org.
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Viewer
      - PATH=/usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - GF_PATHS_CONFIG=/etc/grafana/grafana.ini
      - GF_PATHS_DATA=/var/lib/grafana
      - GF_PATHS_HOME=/usr/share/grafana
      - GF_PATHS_LOGS=/var/log/grafana
      - GF_PATHS_PLUGINS=/var/lib/grafana/plugins
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    hostname: 44f1f25d3954
    image: grafana/grafana:latest
    ipc: shareable
    mac_address: 02:42:ac:11:00:02
    networks:
      - inter-conteiner network
    ports:
      - 3010:3000/tcp
    restart: unless-stopped
    user: grafana
    volumes:
      - grafana-storage:/var/lib/grafana
    working_dir: /usr/share/grafana
networks:
  'inter-conteiner network':
    external: true

Note the lack of a top-level volumes section despite a named volume (grafana-storage) being used.

@Red5d
Owner

Red5d commented Mar 3, 2023

@Anaerin , am I looking at the wrong thing? The compose file in your comment has a volumes section with a "grafana-storage" volume 5-6 lines from the bottom.

@Anaerin

Anaerin commented Mar 3, 2023

Yes, that's for the container. The volume should then be declared in a separate top-level "volumes" section, and as this is a named volume, it should be marked as "external". For example:

version: "3"
services:
  grafana:
    container_name: grafana
    entrypoint:
      - /run.sh
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
    hostname: 44f1f25d3954
    image: grafana/grafana:latest
    ipc: shareable
    mac_address: 02:42:ac:11:00:02
    networks:
      - inter-conteiner network
    ports:
      - 3010:3000/tcp
    restart: unless-stopped
    user: grafana
    volumes:
      - grafana-storage:/var/lib/grafana
    working_dir: /usr/share/grafana
networks:
  'inter-conteiner network':
    external: true
volumes:
  grafana-storage:
    external: true

@Red5d
Owner

Red5d commented Mar 3, 2023

Can you provide the "docker inspect" output for your container?

@Anaerin

Anaerin commented Mar 3, 2023

grafanainspection.txt

@Anaerin

Anaerin commented Mar 3, 2023

And now, after moving over from aufs to overlay2, it's working. That's odd.

@Red5d Red5d closed this as completed Apr 24, 2023