
Stage: Experimental
Compute Resources

Best practices for managing compute resources for containers.

CPU Requests vs Limits

See "Understanding resource limits in Kubernetes: CPU time".

  • CPU Requests: use the cpu.shares mechanism. A "soft" limit, used by the scheduler to place Pods on nodes. Under contention, CPU time is divided proportionally to shares.
  • CPU Limits: use cpu.cfs_period_us and cpu.cfs_quota_us (CFS bandwidth control). A "hard" limit: if exceeded, the process is throttled.

Conclusion

  • Setting a CPU request sets cpu.shares (proportional share).
  • Setting a CPU limit sets cpu.cfs_quota_us (hard ceiling).
  • Recommendation: omitting CPU limits is generally safe and lets Pods use otherwise idle CPU time; requests still guarantee each Pod its proportional share under contention.
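As a sketch of this recommendation, a container can set a CPU request (plus a memory request and limit) while omitting the CPU limit; the names and values below are illustrative, not prescribed:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-app              # illustrative name
  spec:
    containers:
      - name: app
        image: registry.example.com/app:1.0   # illustrative image
        resources:
          requests:
            cpu: "250m"            # sets cpu.shares: scheduling + proportional share
            memory: "256Mi"
          limits:
            memory: "256Mi"        # memory limit kept; no CPU limit, so no CFS throttling

With no `limits.cpu`, the container is never throttled by the CFS quota but still competes fairly via its shares.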

Nodes and Resources

Each node has resources like CPU, memory, and storage.

  • Capacity: Total resources the node has.
  • Allocatable: Resources available for Pods (Capacity minus System Reserved).

Reservations (System Reserved)

Natron configures system-reserved resources similar to OpenShift/AKS best practices:

  • cpu: "150m"
  • memory: "300Mi"
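These reservations correspond to the kubelet's systemReserved setting; a minimal KubeletConfiguration fragment with the values above would look like:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  systemReserved:
    cpu: "150m"       # reserved for OS daemons, not schedulable by Pods
    memory: "300Mi"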

Storage

Filesystems

The kubelet tracks the following filesystems for ephemeral storage accounting:

  • nodefs: /var/lib/kubelet/ (logs, emptyDir not backed by memory).
  • imagefs: /var/lib/containerd/ (Container writable layers/images).

Eviction

The kubelet evicts Pods when node resources fall below the hard eviction thresholds:

  • memory.available < 100Mi
  • nodefs.available < 10%
  • imagefs.available < 15%
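These thresholds can be made explicit in the kubelet configuration; a fragment assuming the defaults listed above:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    imagefs.available: "15%"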

Image Garbage Collection

  • LowThresholdPercent: 80% (garbage collection frees images until disk usage falls below this)
  • HighThresholdPercent: 85% (disk usage above this triggers deletion of unused images)
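Both thresholds map directly to kubelet configuration fields:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  imageGCLowThresholdPercent: 80    # GC target: free images until usage is below this
  imageGCHighThresholdPercent: 85   # GC trigger: start deleting above this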

Ephemeral Storage

Local Ephemeral Storage

Local ephemeral storage is currently tracked on the root disk.

  • Note: OS layouts that mount a separate disk to /var/lib/kubelet will not report ephemeral storage correctly.
  • Root Disk: 10GiB.
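Containers can declare their ephemeral storage needs explicitly, so the scheduler and the eviction manager can account for them; a sketch with illustrative values:

  resources:
    requests:
      ephemeral-storage: "500Mi"   # considered during scheduling
    limits:
      ephemeral-storage: "1Gi"     # exceeding this gets the Pod evicted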

Generic Ephemeral Volumes

Alternatively, use Generic Ephemeral Volumes to give a Pod a scratch volume backed by a PVC, avoiding root disk usage.

  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "scratch-storage-class"
            resources:
              requests:
                storage: 1Gi

MaxPods

Kubernetes nodes have a default maximum of 110 pods per node.
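The limit can be adjusted through the kubelet's maxPods setting; the fragment below simply restates the default:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  maxPods: 110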
