Our experience running an AI workload in Kubernetes – Part 2 <em>Limitations & Pitfalls of our solution with RayCluster CRD</em>
In this part of our series, we share the challenges we faced running Ray Serve Deployments in production using the RayCluster CRD. Along the way, we tackled issues such as ephemeral head nodes, RayCluster's autoscaling quirks, and the limitations of rolling updates. If you're curious about bridging the gap between traditional Kubernetes workloads and the unique demands of AI applications on Ray, this post dives deep into using the RayCluster CRD in K8s.

Our experience running an AI workload in Kubernetes – Part 1 <em>Lift & Shift Ray applications to K8s</em>

In this post, we share our hands-on experience helping our client, Mixedbread, run their AI applications on Kubernetes using the KubeRay Operator. During the migration from a hyperscaler to a multi-cloud environment powered by claudie.io, we cut infrastructure costs by 70% while tackling challenges around RayCluster resilience and Ray Serve Deployments.
