[bootkit] purpose of func metaDataQuery and container_uuid #110

Closed
jacobweinstock opened this issue Apr 5, 2022 · 0 comments · Fixed by #149
Labels
kind/support: Categorizes issue or PR as a support question. triage/discuss: Indicates a PR or issue that requires discussion.

Comments

@jacobweinstock
Member

Does anyone know the context/purpose of the metaDataQuery function? It looks like it queries the metadata server (Hegel) and pulls out just the id field, then uses that value as an env var, container_uuid, when starting the Tink-worker. A quick grep of the Tink codebases doesn't turn up any references to this container_uuid field.

Also, this id string looks to be the same as the worker_id in /proc/cmdline, so this feels like something that could potentially be removed from bootkit entirely?

CC @thebsdbox
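
For context, a minimal sketch of what the function appears to do, based on the behavior described above. The function signature, struct, and Hegel endpoint URL are illustrative assumptions, not the actual bootkit code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Sketch of the described behavior: query Hegel for instance metadata
// and return only the top-level "id" field.
func metaDataQuery(metadataURL string) (string, error) {
	resp, err := http.Get(metadataURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var md struct {
		ID string `json:"id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
		return "", err
	}
	return md.ID, nil
}

func main() {
	// Hypothetical Hegel endpoint; the sandbox serves metadata like the
	// JSON shown below.
	id, err := metaDataQuery("http://192.168.56.4:50061/metadata")
	if err != nil {
		fmt.Println("metadata query failed:", err)
		return
	}
	// bootkit then injects this value as container_uuid=<id> into the
	// tink-worker container's environment.
	fmt.Println("container_uuid =", id)
}
```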

This is from the sandbox:
"id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94"
worker_id=0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94

metadata (formatted for readability)

{
    "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
    "metadata": {
        "facility": {
            "facility_code": "onprem",
            "plan_slug": "c2.medium.x86",
            "plan_version_slug": ""
        },
        "instance": {},
        "state": "provisioning"
    },
    "network": {
        "interfaces": [
            {
                "dhcp": {
                    "arch": "x86_64",
                    "ip": {
                        "address": "192.168.56.43",
                        "netmask": "255.255.255.0"
                    },
                    "mac": "08:00:27:9e:f5:3a"
                },
                "netboot": {
                    "allow_pxe": true,
                    "allow_workflow": true
                }
            }
        ]
    }
}

/proc/cmdline (formatted for readability)

ip=dhcp
modules=loop,squashfs,sd-mod,usb-storage
alpine_repo=http://192.168.56.4:8080/misc/osie/current/repo-x86_64/main
modloop=http://192.168.56.4:8080/misc/osie/current/modloop-x86_64
tinkerbell=http://192.168.56.4
syslog_host=192.168.56.4
parch=x86_64
packet_action=workflow
packet_state=provisioning
docker_registry=192.168.56.4
grpc_authority=192.168.56.4:42113
grpc_cert_url=http://192.168.56.4:42114/cert
instance_id=
registry_username=admin
registry_password=Admin1234
packet_base_url=http://192.168.56.4:8080/workflow
worker_id=0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94
packet_bootdev_mac=08:00:27:9e:f5:3a
facility=onprem
plan=c2.medium.x86
manufacturer=
slug=
initrd=initramfs-x86_64
console=tty0
console=ttyS1,115200
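
For comparison, here's a minimal sketch of reading the same UUID straight from /proc/cmdline, which is what makes the extra Hegel round trip look redundant. The helper name is hypothetical, not existing bootkit code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// workerIDFromCmdline scans /proc/cmdline for the worker_id=<uuid>
// parameter shown above. Illustrative helper, not part of bootkit.
func workerIDFromCmdline() (string, error) {
	raw, err := os.ReadFile("/proc/cmdline")
	if err != nil {
		return "", err
	}
	for _, kv := range strings.Fields(string(raw)) {
		if strings.HasPrefix(kv, "worker_id=") {
			return strings.TrimPrefix(kv, "worker_id="), nil
		}
	}
	return "", fmt.Errorf("worker_id not found in /proc/cmdline")
}

func main() {
	id, err := workerIDFromCmdline()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(id) // 0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94 in the sandbox above
}
```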
jacobweinstock added the kind/support and triage/discuss labels on Apr 5, 2022
mergify bot closed this as completed in #149 on Oct 20, 2022
mergify bot added a commit that referenced this issue on Oct 20, 2022
## Description


This removes unused code that makes network calls to Hegel, and updates the linter version.

## Why is this needed

Fixes: #110

## How Has This Been Tested?

## How are existing users impacted? What migration steps/scripts do we need?

## Checklist:

I have:

- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade