helm-dashboard pod crashing due to resource usage #317

Open
chitudorin opened this issue Apr 23, 2024 · 0 comments

I have deployed helm-dashboard using the Helm chart. After adding a repository and trying to install a chart (while browsing different versions of that chart to see the manifest), the pod's resource usage immediately spikes and the pod dies. When I removed the memory limits once, I saw it spike up to 8 GB of memory usage.
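
For context, the dashboard itself was installed roughly with the chart's usual commands (release name and namespace match the output below; the repo URL is the one I recall from the project README, so treat the exact commands as an approximation):

# helm repo add komodorio https://helm-charts.komodor.io   # URL from memory, verify against the README
# helm repo update
# helm upgrade --install helm-dashboard komodorio/helm-dashboard -n helm-dashboard --create-namespace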

I am running the latest RKE2 version, installed using the tarball method.

# kubectl version
Client Version: v1.28.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.12+rke2r1

Hardware info:

CPU: Intel Core i7-7700HQ
RAM: 32GB
Disk Usage: 50% out of 240GB

After install, resource usage is normal:

# kubectl -n helm-dashboard top pods
NAME                              CPU(cores)   MEMORY(bytes)   
helm-dashboard-64495d4658-kdxmq   169m         39Mi

After adding the Bitnami Helm charts repository (https://charts.bitnami.com/bitnami) and attempting to install a chart while browsing different versions to install, I see the memory spike (an equivalent CLI command for adding the repository is noted after the output):

# kubectl -n helm-dashboard top pods
NAME                              CPU(cores)   MEMORY(bytes)   
helm-dashboard-64495d4658-kdxmq   877m         546Mi
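
For completeness, the same repository can also be added with the helm CLI (in my case it was added through the dashboard UI):

# helm repo add bitnami https://charts.bitnami.com/bitnami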

And it eventually crashes:

# kubectl -n helm-dashboard top pods
error: Metrics not available for pod helm-dashboard/helm-dashboard-64495d4658-kdxmq, age: 4m41.264431643s
# kubectl -n helm-dashboard describe pod helm-dashboard-64495d4658-kdxmq
Events:
  Type     Reason                  Age                  From                     Message
  ----     ------                  ----                 ----                     -------
  Warning  FailedScheduling        5m15s                default-scheduler        0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Warning  FailedScheduling        4m37s                default-scheduler        0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled               4m35s                default-scheduler        Successfully assigned helm-dashboard/helm-dashboard-64495d4658-kdxmq to nirod
  Normal   SuccessfulAttachVolume  4m24s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-3e6974d4-aba1-4298-8f94-c52dbeac8200"
  Normal   Pulling                 4m23s                kubelet                  Pulling image "komodorio/helm-dashboard:1.3.3"
  Normal   Pulled                  3m45s                kubelet                  Successfully pulled image "komodorio/helm-dashboard:1.3.3" in 38.062571434s (38.062606493s including waiting)
  Warning  Unhealthy               3m43s                kubelet                  Liveness probe failed: Get "http://10.42.0.49:8080/status": dial tcp 10.42.0.49:8080: connect: connection refused
  Normal   Created                 38s (x2 over 3m45s)  kubelet                  Created container helm-dashboard
  Normal   Started                 38s (x2 over 3m45s)  kubelet                  Started container helm-dashboard
  Normal   Pulled                  38s                  kubelet                  Container image "komodorio/helm-dashboard:1.3.3" already present on machine
  Warning  Unhealthy               36s (x5 over 3m44s)  kubelet                  Readiness probe failed: Get "http://10.42.0.49:8080/status": dial tcp 10.42.0.49:8080: connect: connection refused

I tried removing the resource limits, and the pod reached 8 GB of memory usage at one point:

[screenshot: pod memory usage peaking at around 8 GB]
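
For reference, removing the limits was done by overriding the chart's resources value with an empty map, roughly like this (the resources key is the common chart convention; the exact override I used may have differed):

# cat > values-no-limits.yaml <<'EOF'
resources: {}
EOF
# helm upgrade helm-dashboard komodorio/helm-dashboard -n helm-dashboard -f values-no-limits.yaml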
