
Missing manifests in .build #104

Closed
vsoch opened this issue Mar 6, 2024 · 9 comments · Fixed by #118
Labels: build tools (Improvements to the build scripts or pipelines), docs (Documentation improvements)

Comments

vsoch commented Mar 6, 2024

I'm trying to follow the install instructions:

https://appliedcomputing.io/simkube/docs/intro/installation.html

And when I generate the binaries in .build there are no manifests:

$ ls
cargo  debug  skctl  sk-ctrl  sk-driver  sk-tracer
$ ls debug/
build  deps  examples  incremental  skctl  skctl.d  sk-ctrl  sk-ctrl.d  sk-driver  sk-driver.d  sk-tracer  sk-tracer.d
$ ls debug/examples/
# empty

Do you have updated instructions for using this?

vsoch commented Mar 6, 2024

Ah looks like I need:

 make k8s

But I just hit a bug with that; let me see if I can figure it out. That step is in the docs but very easy to miss; I actually found it by looking at the Makefile!

vsoch commented Mar 6, 2024

Here is the full bug:

Updating dependencies
Resolving dependencies... (2.8s)

Package operations: 15 installs, 0 updates, 0 removals

  - Installing cattrs (23.2.3)
  - Installing importlib-resources (6.1.2)
  - Installing publication (0.0.3)
  - Installing python-dateutil (2.9.0.post0)
  - Installing typeguard (2.13.3)
  - Installing typing-extensions (4.10.0)
  - Installing jsii (1.95.0)
  - Installing constructs (10.3.0)
  - Installing ordered-set (4.1.0)
  - Installing cdk8s (2.68.46)
  - Installing deepdiff (6.7.1)
  - Installing mypy-extensions (1.0.0)
  - Installing simplejson (3.19.2)
  - Installing fireconfig (0.4.0 24770a8)
  - Installing mypy (1.8.0)

Writing lock file
cd k8s && JSII_SILENCE_WARNING_UNTESTED_NODE_VERSION=1 CDK8S_OUTDIR=/home/vanessa/Desktop/Code/flux/flux-k8s/examples/simkube/simkube/.build/manifests BUILD_DIR=/home/vanessa/Desktop/Code/flux/flux-k8s/examples/simkube/simkube/.build poetry run ./main.py
Traceback (most recent call last):
  File "/home/vanessa/Desktop/Code/flux/flux-k8s/examples/simkube/simkube/k8s/./main.py", line 4, in <module>
    import fireconfig as fire
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/fireconfig/__init__.py", line 6, in <module>
    from cdk8s import App
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/cdk8s/__init__.py", line 41, in <module>
    from ._jsii import *
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/cdk8s/_jsii/__init__.py", line 13, in <module>
    import constructs._jsii
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/constructs/__init__.py", line 43, in <module>
    from ._jsii import *
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/constructs/_jsii/__init__.py", line 13, in <module>
    __jsii_assembly__ = jsii.JSIIAssembly.load(
                        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_runtime.py", line 55, in load
    _kernel.load(assembly.name, assembly.version, os.fspath(assembly_path))
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/__init__.py", line 299, in load
    self.provider.load(LoadRequest(name=name, version=version, tarball=tarball))
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/providers/process.py", line 354, in load
    return self._process.send(request, LoadResponse)
           ^^^^^^^^^^^^^
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_utils.py", line 23, in wrapped
    stored.append(fgetter(self))
                  ^^^^^^^^^^^^^
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/providers/process.py", line 349, in _process
    process.start()
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/providers/process.py", line 260, in start
    self._process = subprocess.Popen(
                    ^^^^^^^^^^^^^^^^^
  File "/home/vanessa/anaconda3/lib/python3.11/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/vanessa/anaconda3/lib/python3.11/subprocess.py", line 1950, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'node'
Exception ignored in: <function _NodeProcess.__del__ at 0x7f835a01f240>
Traceback (most recent call last):
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/providers/process.py", line 228, in __del__
    self.stop()
  File "/home/vanessa/Desktop/Code/compspec/compspec/env/lib/python3.11/site-packages/jsii/_kernel/providers/process.py", line 291, in stop
    assert self._process.stdin is not None
           ^^^^^^^^^^^^^
AttributeError: '_NodeProcess' object has no attribute '_process'
make: *** [build/k8s.mk:7: k8s] Error 1

vsoch commented Mar 6, 2024

Please tell me it isn't trying to subprocess to nodejs? 😆 😭

vsoch commented Mar 6, 2024

Oh god, it is, lol: https://github.com/cdk8s-team/cdk8s. Can you provide example manifests that don't require Node.js installed on my machine? That's a huge ask.
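For context on the traceback above: jsii, the runtime bridge cdk8s is built on, spawns a `node` subprocess at import time, which is why the stack bottoms out in `subprocess.Popen` with `FileNotFoundError` for `'node'`. A minimal preflight check, sketched as a hypothetical helper (`node_available` is not part of SimKube or cdk8s):

```python
import shutil


def node_available() -> bool:
    """Return True when a `node` executable is on PATH.

    jsii launches node via subprocess.Popen, which raises
    FileNotFoundError: [Errno 2] No such file or directory: 'node'
    when it is missing, as in the traceback above.
    """
    return shutil.which("node") is not None


if not node_available():
    print("node not found on PATH; `make k8s` (cdk8s/jsii) will fail")
```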

vsoch commented Mar 6, 2024

For others who hit this (and don't have node/poetry, etc.), you can also build in a container. I did:

# note that the simkube clone where I've already done make is in the PWD
docker run -it --entrypoint bash -v $PWD/:/code  node:bookworm

apt-get update && apt-get install -y python3-poetry
make k8s

And then (given the volume mount) you should get the manifests in the same directory on the host! I haven't tested beyond that; hoping it works, because this tool looks really cool.

drmorr0 added the build tools and docs labels Mar 6, 2024
drmorr0 commented Mar 6, 2024

@vsoch Yes, we need proper helm charts or something (see #97), I just haven't gotten around to it. Thanks for posting your workaround, and for your comments on the docs. I will see if I can get an update to the docs out later today, and maybe pull your dockerfile change into the makefile.

Let me know if you have other questions or need any help getting it working, I'd love to hear how it works for you!

vsoch commented Mar 6, 2024

Thanks @drmorr0! I got everything running last night, but for some reason the pods never get out of Pending:

$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
cert-manager         cert-manager-5b54fc556f-cks6v                1/1     Running   0          11h
cert-manager         cert-manager-cainjector-7d8b6cf7b9-2pbrr     1/1     Running   0          11h
cert-manager         cert-manager-webhook-7d4744b5ff-n4c4g        1/1     Running   0          11h
kube-system          coredns-76f75df574-svcp7                     1/1     Running   0          12h
kube-system          coredns-76f75df574-z9kp2                     1/1     Running   0          12h
kube-system          etcd-kind-control-plane                      1/1     Running   0          12h
kube-system          kindnet-4qrsh                                1/1     Running   0          12h
kube-system          kindnet-96g4f                                1/1     Running   0          12h
kube-system          kindnet-gdq2l                                1/1     Running   0          12h
kube-system          kindnet-h486s                                1/1     Running   0          12h
kube-system          kindnet-k56nm                                1/1     Running   0          12h
kube-system          kindnet-mwwcm                                1/1     Running   0          12h
kube-system          kindnet-rljsd                                1/1     Running   0          12h
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          12h
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          12h
kube-system          kube-proxy-88fss                             1/1     Running   0          12h
kube-system          kube-proxy-c42dn                             1/1     Running   0          12h
kube-system          kube-proxy-dd94c                             1/1     Running   0          12h
kube-system          kube-proxy-g7swj                             1/1     Running   0          12h
kube-system          kube-proxy-gqjkj                             1/1     Running   0          12h
kube-system          kube-proxy-vvxf5                             1/1     Running   0          12h
kube-system          kube-proxy-xhf5s                             1/1     Running   0          12h
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          12h
kube-system          kwok-controller-dff54c87c-sffrd              1/1     Running   0          12h
local-path-storage   local-path-provisioner-7577fdbbfb-bp4c8      1/1     Running   0          12h
monitoring           alertmanager-main-0                          2/2     Running   0          11h
monitoring           alertmanager-main-1                          2/2     Running   0          11h
monitoring           alertmanager-main-2                          2/2     Running   0          11h
monitoring           blackbox-exporter-6b5475894-k87rl            3/3     Running   0          11h
monitoring           grafana-64c4f67f5b-nktll                     1/1     Running   0          11h
monitoring           kube-state-metrics-65474fb4c6-5mvdt          3/3     Running   0          11h
monitoring           node-exporter-7zdsh                          2/2     Running   0          11h
monitoring           node-exporter-hp84w                          2/2     Running   0          11h
monitoring           node-exporter-k86dn                          2/2     Running   0          11h
monitoring           node-exporter-kpr6q                          2/2     Running   0          11h
monitoring           node-exporter-npvl8                          2/2     Running   0          11h
monitoring           node-exporter-x48h7                          2/2     Running   0          11h
monitoring           node-exporter-zcr47                          2/2     Running   0          11h
monitoring           prometheus-adapter-74894c5547-l942k          1/1     Running   0          11h
monitoring           prometheus-adapter-74894c5547-nl4hw          1/1     Running   0          11h
monitoring           prometheus-k8s-0                             2/2     Running   0          11h
monitoring           prometheus-k8s-1                             2/2     Running   0          11h
monitoring           prometheus-operator-5575b484df-xjnwp         2/2     Running   0          11h
simkube              sk-ctrl-depl-fb96c967f-gqrnl                 0/1     Pending   0          11h
simkube              sk-tracer-depl-b8cf6c6bc-28x9f               0/1     Pending   0          11h

I tried adding two workers to my kind cluster and it didn't help. Here is the kind config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 8080
    hostPort: 8080
    protocol: TCP
  - containerPort: 4242
    hostPort: 4242
    protocol: TCP
  - containerPort: 4243
    hostPort: 4243
    protocol: TCP
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data

And the issue is that the pods say they can't be scheduled:

Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tracer-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sk-tracer-tracer-config
    Optional:  false
  kube-api-access-ghql8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              type=kind-worker
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  6s (x144 over 11h)  default-scheduler  0/7 nodes are available: 7 node(s) didn't match Pod's node affinity/selector. preemption: 0/7 nodes are available: 7 Preemption is not helpful for scheduling.

I'm going to try removing the selector for the kind-worker.
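The FailedScheduling event above points at the nodeSelector (`type=kind-worker`): a stock kind cluster does not label its workers, so no node can match. As an alternative to removing the selector, a hedged sketch (assuming the kind config above) adds the label to each worker entry:

```yaml
# Sketch only: label each worker in the kind Cluster config so the
# sk-ctrl / sk-tracer nodeSelector (type=kind-worker) can match.
# The extraMounts paths are copied from the config above.
- role: worker
  labels:
    type: kind-worker
  extraMounts:
    - hostPath: /tmp/fluence-node-data
      containerPath: /data
```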

drmorr0 commented Mar 6, 2024

This is my kind config:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
nodes:
  - role: control-plane
    image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
    labels:
      type: kind-control-plane
  - role: worker
    image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
    labels:
      type: kind-worker
    extraMounts:
      - hostPath: /home/drmorr/tmp/kind-node-data
        containerPath: /data

The kind-worker label is added by hand. Also note you'll need the containerdConfigPatches section if you're pulling from a local docker registry.

I'll add this to the docs as well.

drmorr0 commented Jun 17, 2024

@vsoch OK sorry this took wayyyyyyyyyy too long, but SimKube now has publicly-available docker images on quay.io and supports kustomize, so no more nodejs required to deploy.

I would love to hear if you ever made any progress with the tool and/or if there's anything I can help with!
