EKS with Kubernetes 1.35: what in-place resize and image volumes change in real operations

Amazon EKS added Kubernetes 1.35 support in January 2026. The release matters because it changes how teams handle CPU/memory tuning and artifact delivery.


Last updated: 2/25/2026

Executive summary

On January 28, 2026, AWS announced Amazon EKS support for Kubernetes 1.35. At first glance, this can look like a standard version upgrade. In practice, two capabilities have direct impact on day-2 operations:

  • In-place Pod CPU/memory resource updates reduce the need to restart workloads just to adjust sizing.
  • Image volumes allow mounting OCI artifacts (for example model/data bundles) as volumes, avoiding ad-hoc side-loading patterns.

For platform teams, this release is less about "new features" and more about reducing operational friction where it hurts most: scaling behavior, rollout stability, and runtime data delivery.

Why this release is strategically relevant

Most production incidents around Kubernetes upgrades are not caused by missing API objects. They are caused by behavioral mismatches in workloads and platform assumptions.

Kubernetes 1.35 affects exactly these assumptions:

  • Resource changes can happen with less disruption for supported paths.
  • Data artifacts can be distributed through OCI-native mechanisms instead of custom init/download logic.
  • Traffic and topology-related enhancements increase placement and routing control in dense clusters.

When those controls are mapped to SLO-driven operations, teams can cut both toil and tail-latency instability.

In-place resource updates: where it helps, where it does not

The value proposition is simple: eliminate restarts whose only purpose is to apply new resource requests and limits.
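As a sketch of what this looks like at the manifest level (field names follow the upstream in-place resize feature; the pod and image names are placeholders), a container can declare how each resource reacts to a resize:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-server                            # hypothetical workload
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.4     # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
      # resizePolicy controls whether changing each resource restarts
      # the container. NotRequired permits an in-place update; memory
      # decreases in particular may still force a restart.
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: RestartContainer
```

Upstream, the resize itself is applied through the pod's resize subresource (for example via kubectl patch with --subresource resize) rather than by editing the pod spec directly; verify the exact invocation against the Kubernetes version you actually run.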

Operational wins

  • Faster response to load profile shifts.
  • Lower disruption for latency-sensitive services.
  • Fewer rollout events triggered only for resource tuning.

Important limits

  • Not every resource mutation path behaves identically across workloads.
  • Policy, autoscaler, and admission controls still determine what can be changed safely.
  • Teams need clear guardrails to avoid "continuous resizing" turning into noisy operations.

Treat in-place resize as a precision tool. Without policy boundaries, it can become another source of configuration drift.
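One way to express such boundaries is ordinary admission policy. For example, a LimitRange (a standard Kubernetes object, shown here with illustrative namespace and values) caps what any container, including a resized one, can request:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resize-guardrails
  namespace: payments          # hypothetical namespace
spec:
  limits:
    - type: Container
      # Container specs must stay inside these bounds at admission
      # time, which bounds how far "continuous resizing" can drift.
      min:
        cpu: "100m"
        memory: "128Mi"
      max:
        cpu: "4"
        memory: "8Gi"
      maxLimitRequestRatio:
        cpu: "4"
```

Whether a given resize path is re-validated against LimitRange depends on the Kubernetes version in use; treat this as a sketch of the guardrail idea, not a guarantee, and confirm behavior in a test cluster.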

Image volumes: OCI as a data distribution path

Kubernetes 1.35 continues the image volume evolution (now beta and enabled by default), so OCI artifacts can be mounted directly as volumes.

For teams running AI and data-heavy services, this can simplify recurring pain points:

  • Shipping medium-size model/data assets without rebuilding main app images.
  • Reducing custom startup scripts that download artifacts on boot.
  • Reusing registry governance and provenance controls already in place for container images.
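A minimal sketch of the pattern (the artifact reference, image names, and mount path are hypothetical; the image volume source follows the upstream feature shape):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
    - name: server
      image: registry.example.com/inference:2.0   # placeholder app image
      volumeMounts:
        - name: model-weights
          mountPath: /models        # artifact contents appear here read-only
  volumes:
    - name: model-weights
      # The OCI artifact is pulled and mounted much like an image,
      # reusing registry auth, signing, and caching already in place.
      image:
        reference: registry.example.com/models/llm-weights:v3
        pullPolicy: IfNotPresent
```

Because the artifact is versioned and pulled like any other image, rolling a new model version becomes a tag change rather than a change to startup scripting.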

A key implementation detail is runtime compatibility: image volumes require a compatible container runtime stack (for example containerd v2.1+ in upstream guidance). This should be part of pre-upgrade validation, not discovered in production.

Upgrade blueprint for EKS clusters

  1. Build an inventory of workloads that frequently change CPU/memory allocations.
  2. Map workloads that currently use init-container download flows for runtime artifacts.
  3. Validate cluster/node runtime compatibility and add conformance tests for resize and image volume paths.
  4. Run canary node groups with representative workloads and SLO monitors.
  5. Roll out in waves with explicit rollback criteria by business criticality.
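Step 4 can be expressed with whatever provisioning tool the platform already uses. As one hedged example, an eksctl-style managed node group (cluster name, region, instance type, and sizes are all placeholders) that only explicitly tolerating canary workloads can land on:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: prod-cluster            # placeholder cluster name
  region: us-east-1             # placeholder region
  version: "1.35"
managedNodeGroups:
  - name: canary-k8s-135
    instanceType: m6i.large
    desiredCapacity: 2
    labels:
      rollout-wave: canary
    # The taint keeps general workloads off these nodes; canary
    # deployments add a matching toleration plus SLO monitors
    # before the rollout widens.
    taints:
      - key: rollout-wave
        value: canary
        effect: NoSchedule
```

Check the eksctl schema for your version before relying on exact field names; the point is the pattern of isolating representative workloads on upgraded nodes behind an explicit taint.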

This rollout discipline is what separates "cluster upgraded" from "platform actually improved".

Engineering and cost implications

| Dimension | Before 1.35 adoption pattern | With disciplined 1.35 adoption |
| --- | --- | --- |
| Resource tuning | Frequent restart-oriented adjustments | More targeted, lower-disruption tuning |
| Artifact distribution | Custom fetch scripts and side channels | OCI-native distribution model |
| Ops load | High manual intervention | Better standardization potential |
| Risk profile | Hidden behavior drift during peak load | More explicit policy and test boundaries |
The financial effect is usually indirect but real: fewer avoidable incidents, less engineer time spent on runbook churn, and better infrastructure utilization.

Conclusion

EKS with Kubernetes 1.35 is worth attention because it addresses two hard production problems: safe runtime resizing and predictable artifact delivery.

The question for platform leaders is not whether Kubernetes 1.35 has good features. The question is whether your operating model can convert these features into measurable reliability and cost outcomes.

Teams frequently discover that these changes require platform engineering maturity beyond a simple eksctl upgrade cluster command. That gap is where many real-world modernization projects succeed or stall.
