Kubernetes Learning Week Series 8

Mastering Kubernetes Networking: A Journey into Cloud-Native Packet Management

https://otterize.com/blog/mastering-kubernetes-networking-otterize-s-journey-in-cloud-native-packet-management

This article delves into Kubernetes networking, highlighting its unique approach to managing networks through the Container Network Interface (CNI) and the importance of network policies for security. It discusses how Otterize automates the creation of these policies based on actual traffic patterns, simplifying management in cloud-native environments and enhancing security.

Key Points:

  • Kubernetes abstracts network management to improve scalability and simplicity, offering flexibility in network solutions through the Container Network Interface (CNI).

  • Pod networking in Kubernetes assigns a unique IP to each Pod, simplifying communication across the cluster without complex routing.

  • The service network uses ClusterIP to provide stable access to Pods, with kube-proxy managing traffic routing via iptables, IPVS, or nftables.

  • NodePort and LoadBalancer services allow external access to Kubernetes clusters, while Ingress Controllers offer more advanced traffic management.

  • Network policies in Kubernetes provide fine-grained control over Pod communication, which is critical for isolation and security in multi-tenant environments.

  • A zero-trust security model is recommended, starting with a deny-all policy and explicitly allowing necessary communication (a minimal policy sketch follows this list).

  • The article provides a detailed walkthrough of Kubernetes networking, illustrating traffic flow from the user to the application.

  • Otterize simplifies the creation of network policies by mapping actual traffic patterns, streamlining management processes, and aligning with CI/CD pipelines.

  • Otterize’s automation helps maintain a secure and adaptive Kubernetes environment, reducing the attack surface and supporting continuous compliance.
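
To make the deny-all starting point concrete, here is a minimal sketch (an illustration, not code from the article) that builds a default deny-all NetworkPolicy with the Kubernetes Go types and prints it as a manifest; the namespace and object names are placeholders.

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// An empty podSelector matches every Pod in the namespace; listing both
	// policy types with no allow rules denies all ingress and egress traffic.
	denyAll := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "default-deny-all", Namespace: "demo"}, // placeholder names
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []networkingv1.PolicyType{
				networkingv1.PolicyTypeIngress,
				networkingv1.PolicyTypeEgress,
			},
		},
	}

	// Render the manifest so it can be applied with kubectl or committed to Git.
	out, err := yaml.Marshal(denyAll)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

Specific allow rules, such as the ones Otterize derives from observed traffic patterns, would then be layered on top of this baseline.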


How Kubernetes Selects Pods to Delete During Scale-Down

https://rpadovani.com/k8s-algorithm-pick-pod-scale-in

This article explores how Kubernetes determines which Pods to delete during scale-down operations—a process that is not thoroughly documented. The author investigates the source code to explain the logic, focusing on Kubernetes version v1.30.0-alpha.0. The article details the role of ReplicaSets in managing Pod scaling, the ranking and sorting criteria used to decide which Pods to delete, and the impact of the pod-deletion-cost annotation.

Key Points:

  • Kubernetes does not delete Pods randomly during scale-down; users can influence this decision with the pod-deletion-cost annotation.

  • Scale-down refers to reducing the number of Pods in a Deployment, which can be done manually or automatically through an autoscaler.

  • The ReplicaSet controller manages the logic for scaling down Pods, using a ranking system and sorting rules to decide which Pods to delete.

  • Related Pods, including all Pods owned by the same ReplicaSet, are considered for ranking.

  • The ranking step assigns each Pod a rank based on how many related Pods share its node, so Pods co-located with other related Pods are preferred for deletion.

  • The sorting logic involves eight criteria to determine the order of Pod deletion, including Pod scheduling, phase, readiness, and pod-deletion-cost.

  • The pod-deletion-cost feature allows users to assign a cost to Pod deletion, influencing its deletion priority (see the sketch after this list).

  • The article summarizes the criteria for sorting Pods and highlights the importance of the deletion order.
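
As a rough illustration of the pod-deletion-cost point above (not taken from the article), the sketch below uses client-go to patch the controller.kubernetes.io/pod-deletion-cost annotation onto a Pod; the kubeconfig path, namespace, and Pod name are placeholder assumptions.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; in-cluster config would also work.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Pods with a higher pod-deletion-cost are treated as more expensive to
	// remove, so the ReplicaSet controller prefers deleting cheaper Pods first.
	patch := []byte(`{"metadata":{"annotations":{"controller.kubernetes.io/pod-deletion-cost":"1000"}}}`)

	// Placeholder namespace and Pod name.
	pod, err := client.CoreV1().Pods("demo").Patch(
		context.TODO(), "my-app-7c9f8d-abcde", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("patched %s: pod-deletion-cost=%s\n", pod.Name, pod.Annotations["controller.kubernetes.io/pod-deletion-cost"])
}
```

The same annotation can be set interactively with kubectl annotate pod <name> controller.kubernetes.io/pod-deletion-cost=1000.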


Is Your Kubernetes Probe Configuration Correct?

https://medium.com/@juliorenner123/k8s-probes-done-wrong-184d238b3883

Kubernetes probes are critical for monitoring the health of applications running in Kubernetes. However, incorrect configurations can cause more problems than they solve. This article discusses the three types of probes: Startup, Readiness, and Liveness, explaining their purposes and potential pitfalls. The Startup probe delays other checks until the application is ready, while the Readiness probe determines whether the container can handle traffic without terminating it on failure. The Liveness probe checks whether the container is running properly and restarts it if necessary. Through real-world scenarios, the article highlights common mistakes, such as using the same endpoint for multiple probes and relying on dependencies that can cause unnecessary restarts or downtime. It recommends understanding probe behaviors and avoiding dependency checks to prevent exacerbating existing issues.

Key Points:

  • Kubernetes probes are essential for application health checks but can cause issues if misconfigured.

  • The Startup probe holds back the readiness and liveness checks until the application has started, preventing premature failures.

  • The Readiness probe determines whether a container should receive traffic; a failure removes it from service endpoints without restarting the container.

  • The Liveness probe checks if a container is functioning properly and restarts it if it fails.

  • Misusing probes, such as reusing endpoints or checking dependencies, can lead to downtime and instability.

  • A proper understanding of probe behavior is crucial to avoid worsening existing problems; a minimal configuration sketch follows this list.
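
The following sketch shows one common way to declare the three probes using the Kubernetes Go types; it is an illustration rather than the article's example, and the endpoints, ports, and timings are assumed placeholders. Note the three separate endpoints, in line with the advice against reusing one endpoint for every probe.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

// httpProbe builds an HTTP GET probe against the given path and port.
func httpProbe(path string, port int) *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: path, Port: intstr.FromInt(port)},
		},
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}
}

func main() {
	container := corev1.Container{
		Name:  "app",                 // placeholder container name
		Image: "example/app:latest",  // placeholder image

		// Startup: holds back the other probes until the app has started.
		// A generous FailureThreshold x PeriodSeconds covers slow startups.
		StartupProbe: httpProbe("/startupz", 8080),

		// Readiness: failures only remove the Pod from Service endpoints.
		// Keep it free of dependency checks to avoid cascading outages.
		ReadinessProbe: httpProbe("/readyz", 8080),

		// Liveness: failures restart the container, so use a separate,
		// cheap endpoint rather than reusing the readiness one.
		LivenessProbe: httpProbe("/livez", 8080),
	}
	// Allow up to 30 x 10s = 5 minutes for startup before giving up.
	container.StartupProbe.FailureThreshold = 30

	out, err := yaml.Marshal(container)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```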


When Kubernetes and Go Don’t Work Well Together

http://lalatron.hashnode.dev/when-kubernetes-and-go-don't-work-well-together

This article discusses a specific issue encountered when using Go in a Kubernetes environment, where the Go runtime is unaware of the container’s memory limits, leading to out-of-memory (OOM) errors. The author describes the troubleshooting process and explores how Go’s garbage collector contributes to the problem. A solution involving the GOMEMLIMIT environment variable is proposed to better align Go’s memory management with Kubernetes constraints.

Key Points:

  • Go is unaware of Kubernetes container limits, causing OOM errors when memory usage exceeds the set limits.

  • API endpoints triggered Kubernetes Pods to restart due to exceeding memory limits, despite sufficient RAM allocation.

  • Go’s garbage collector expands heap memory without considering container limits, resulting in OOM errors.

  • The GOMEMLIMIT environment variable helps manage memory usage by setting a soft limit for Go’s garbage collector (a short sketch follows this list).
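
Below is a small sketch (not from the article) showing how GOMEMLIMIT interacts with a Go program: the runtime reads the environment variable on startup, and runtime/debug.SetMemoryLimit exposes the same soft limit programmatically. The 100 MiB figure is an arbitrary placeholder.

```go
package main

import (
	"fmt"
	"os"
	"runtime/debug"
)

func main() {
	// Since Go 1.19 the runtime honours GOMEMLIMIT (e.g. "100MiB") as a soft
	// memory limit: the garbage collector works harder as the limit is
	// approached instead of letting the heap grow past the container's limit.
	fmt.Println("GOMEMLIMIT from environment:", os.Getenv("GOMEMLIMIT"))

	// The same knob is available from code; a negative argument reads the
	// current value without changing it. 100 << 20 bytes (100 MiB) is an
	// arbitrary example value.
	prev := debug.SetMemoryLimit(100 << 20)
	fmt.Printf("memory limit changed from %d to %d bytes\n", prev, debug.SetMemoryLimit(-1))
}
```

In a Deployment, GOMEMLIMIT is usually injected as a container environment variable set somewhat below the Pod's memory limit to leave headroom for non-heap memory; the downward API's resourceFieldRef on limits.memory is one common way to derive the value automatically.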


Managing 100 Kubernetes Clusters with Cluster API

https://techblog.citystoragesystems.com/p/managing-100s-of-kubernetes-clusters

To efficiently manage hundreds of Kubernetes clusters, the core infrastructure team at City Storage Systems transitioned to Cluster API, significantly automating cluster configuration and management. This shift cut cluster setup time and enabled a smooth migration to Microsoft Azure, despite initial challenges with support for their hosted Kubernetes distribution. The team leveraged Kubernetes operators and GitOps to achieve full automation, improving reliability and operational efficiency, and plans to scale further by automating more processes and increasing the number of clusters.

Key Points:

  • City Storage Systems uses Cluster API to automate Kubernetes cluster management, reducing setup time from 1.5 weeks to less than a day.

  • The transition included migrating over 80 clusters to Microsoft Azure, doubling the number of managed clusters.

  • Cluster API’s scalability and operator model allow efficient management of multi-tenant clusters (a small sketch of the underlying resource model follows this list).

  • Initial challenges, such as limited support for hosted Kubernetes distributions, were resolved through collaboration with Microsoft.

  • Automation includes leveraging custom Kubernetes operators for cluster creation and workload preparation.

  • Node pool management has been automated to streamline operations during upgrades and replacements.

  • The team plans to manage over 500 clusters by further automating processes and minimizing manual intervention.
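
As a rough sketch of the resource model Cluster API introduces (an illustration under assumed names, not the team's actual tooling), the snippet below lists the Cluster objects registered in a management cluster using the dynamic client; the kubeconfig path is a placeholder.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path to the management cluster's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/mgmt-kubeconfig")
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(config)

	// Cluster API represents each workload cluster as a Cluster custom
	// resource in the management cluster, which is what makes fleet-wide
	// automation with operators and GitOps possible.
	clusterGVR := schema.GroupVersionResource{
		Group:    "cluster.x-k8s.io",
		Version:  "v1beta1",
		Resource: "clusters",
	}

	list, err := client.Resource(clusterGVR).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Items {
		fmt.Printf("%s/%s\n", c.GetNamespace(), c.GetName())
	}
}
```

Because each workload cluster is just a declarative resource in the management cluster, operators and GitOps pipelines can reconcile fleets of this size without per-cluster manual steps.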


Kubernetes Learning Week Series 7