Kubernetes Learning Week Series 9
How to Keep Docker Containers Running for Debugging
This article provides several methods to keep Docker containers running for debugging and troubleshooting purposes.
Key Points:
Why Docker containers exit immediately after startup.
How to keep a container running by adding a foreground process to the Docker entrypoint.
Four ways to keep a container running with the docker run command: an interactive shell session, tail -f /dev/null, sleep infinity, and a keep-alive command in the entrypoint (see the sketch after this list).
How to keep a Kubernetes Pod running by adding a custom command to the container specification.
Use cases where keeping a container running is essential, such as testing/developing Docker images, system troubleshooting, and Kubernetes cluster debugging.
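The techniques above boil down to one-liners. A minimal sketch, assuming generic ubuntu/busybox images (container and Pod names are placeholders; pick whichever variant fits the session):

```shell
# 1. Interactive shell: the container lives as long as the shell does
docker run -it --name dbg-shell ubuntu /bin/bash

# 2. tail -f /dev/null: a no-op foreground process that never exits
docker run -d --name dbg-tail ubuntu tail -f /dev/null

# 3. sleep infinity: the same idea with an even simpler process
docker run -d --name dbg-sleep ubuntu sleep infinity

# 4. Keep-alive command supplied through the entrypoint
docker run -d --name dbg-entry --entrypoint sleep ubuntu infinity

# Kubernetes equivalent: override the container command in the Pod spec
kubectl run dbg-pod --image=busybox --restart=Never --command -- sleep infinity

# With the container held open, exec in to debug
docker exec -it dbg-sleep /bin/bash
kubectl exec -it dbg-pod -- sh
```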
DIY: Create Your Own Cloud Using Kubernetes (Part 1)
https://kubernetes.io/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/
This article discusses the author’s experience in developing a Kubernetes-based cloud platform, highlighting the open-source projects used and the challenges faced when running Kubernetes on bare metal.
Key Points:
Kubernetes Ecosystem:
The author emphasizes that Kubernetes can efficiently manage tenant clusters without relying on additional complex systems like OpenStack.
Cloud vs. Bare Metal:
Cloud Kubernetes: Simplifies operations as cloud providers handle infrastructure management, allowing users to focus on application deployment.
Bare Metal Kubernetes: More complex due to the need to manage networking, storage, and load balancing within the cluster.
Challenges:
The article outlines the difficulties of updating and maintaining bare-metal installations compared to cloud environments.
Talos Linux:
The author prefers Talos Linux for its ability to create system images with the necessary kernel modules, simplifying the deployment process.
Deployment Techniques:
PXE booting is recommended for delivering system images to nodes, and scripts are provided on GitHub for quickly deploying Kubernetes.
GitOps Practices:
Tools like ArgoCD and FluxCD are suggested for managing deployments and updates declaratively across multiple clusters (a sketch follows this list).
Future Topics:
This article introduces a series that will cover topics such as networking, storage, and virtual machine management in Kubernetes.
In conclusion, the article promotes the Cozystack project, which aims to provide a repeatable and reliable Kubernetes environment.
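To make the GitOps point concrete, a minimal ArgoCD Application might look like the following; the repository URL, path, and names are hypothetical placeholders, not taken from the article or from Cozystack:

```shell
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tenant-cluster-apps                 # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config   # placeholder repo
    targetRevision: main
    path: clusters/tenant-1                 # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:        # keep the cluster converged to what Git declares
      prune: true
      selfHeal: true
EOF
```

With automated sync enabled, drift between the cluster and the Git repository is corrected without manual intervention, which is the declarative multi-cluster property the article is after.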
How does etcd achieve high availability and strong consistency through the Raft protocol?
Key Points:
etcd uses the Raft protocol to detect leader failures quickly and elect a new leader from the follower nodes, keeping the service highly available.
Raft divides time into terms and guarantees that at most one leader is elected per term through its leader election process, preventing data inconsistency (see the etcdctl sketch after this list).
The Raft leader replicates log entries to follower nodes and guarantees log consistency through the commit process: an entry is committed only after a majority of nodes have accepted it.
Raft ensures safety through election rules, the leader completeness property, the append-only principle, and the log matching mechanism, preventing data loss and inconsistency even in the event of a leader crash.
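These properties are directly observable on a running cluster. A quick check with etcdctl (endpoint addresses are placeholders):

```shell
# Show per-member status, including the current leader and Raft term/index
ETCDCTL_API=3 etcdctl \
  --endpoints=https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379 \
  endpoint status --write-out=table

# The table includes IS LEADER, RAFT TERM, and RAFT INDEX columns.
# After killing the leader, rerunning the command should show a new
# leader and a higher term, reflecting a fresh election.
```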
Can Kubernetes Pods attach to multiple networks?
This article discusses how Kubernetes allows configuring additional networks for Pods and virtual machines using the Multus CNI plugin, NMState Operator, and OpenShift Virtualization. It provides an overview of Kubernetes networking concepts, explains the role of CNI plugins, and demonstrates how to set up extra network interfaces with Multus CNI and NMState Operator.
Key Points:
Kubernetes Network Model: Defines how containers, Pods, and nodes communicate.
CNI Plugins: Used to configure network interfaces within Linux containers.
Multus CNI: Enables configuring additional networks beyond the default Pod network.
NMState Operator: Manages physical network interfaces on nodes declaratively.
OpenShift Virtualization: Allows hosting and managing virtualized workloads on the same platform as containerized workloads.
Collaboration of Tools: Multus CNI, NMState Operator, and OpenShift Virtualization work together to support multiple network interfaces for Pods and virtual machines.
Use Cases for Additional Networks: Include network segmentation, specialized traffic handling, static IP addressing, and transitioning workloads from virtualization to containerization.
Configuration Demonstration: The article shows how to configure additional bridged networks using the NMState Operator and how to create network attachment definitions with Multus CNI (a minimal sketch follows this list).
Integration with Pods and VMs: Both Pods and virtual machines can be configured to use extra network interfaces, with options for assigning static IP addresses or utilizing DHCP.
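To illustrate how the pieces fit together, a hedged sketch of the bridge-plus-attachment pattern; the bridge name, physical NIC, addresses, and resource names are hypothetical, not copied from the article:

```shell
kubectl apply -f - <<'EOF'
# NMState: declaratively bring up a Linux bridge on the nodes
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy
spec:
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1          # placeholder physical interface
---
# Multus: define an extra network backed by that bridge
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br1-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br1",
    "ipam": { "type": "static" }
  }'
EOF

# Attach a Pod to the extra network via the Multus annotation,
# requesting a static IP on the secondary interface (placeholder address)
kubectl run multi-net-pod --image=busybox --restart=Never \
  --overrides='{"metadata":{"annotations":{"k8s.v1.cni.cncf.io/networks":"[{\"name\": \"br1-net\", \"ips\": [\"192.168.1.10/24\"]}]"}}}' \
  --command -- sleep infinity
```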
How to Dynamically Adjust Container CPU in Kubernetes?
https://medium.com/@mathieuces/how-to-calculate-cpu-for-containers-in-k8s-dynamically-47a89e3886eb
This article discusses methods for dynamically adjusting container CPU resources in Kubernetes, focusing on the InPlacePodVerticalScaling feature gate. It outlines three strategies (a resize sketch follows the list):
Always Use 80% of CPU:
For example, an application using only 50% of its CPU allocation is resized so that its current usage lands at 80% of the new allocation.
This method reserves 20% as a safety margin, effective for downsizing but potentially problematic for upsizing.
Exponential Growth to 80% CPU:
Downsizing works as in the first strategy.
For upsizing, it uses a formula to exponentially increase CPU based on current usage, allowing significant increases when nearing full capacity.
Target 100% Usage:
Adjusts to ensure 100% CPU utilization during downsizing.
For upsizing, it monitors CPU pressure (microseconds lost due to insufficient CPU) to determine necessary increases, though this approach presents practical challenges.
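Whichever target is chosen, the mechanics are the same: patch the running Pod's container resources in place. A hedged sketch, assuming the InPlacePodVerticalScaling feature gate is enabled (pod/container names and values are placeholders):

```shell
# Target-80% rule of thumb: new request = observed usage / 0.8
# e.g. observed usage of 400m  ->  new request of 500m

kubectl patch pod my-app --patch '{
  "spec": {
    "containers": [{
      "name": "app",
      "resources": {
        "requests": { "cpu": "500m" },
        "limits":   { "cpu": "1" }
      }
    }]
  }
}'

# On newer clusters that expose resizing as a Pod subresource, the same
# patch is sent with: kubectl patch pod my-app --subresource resize --patch '...'
```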
Conclusion:
Dynamic resizing is a significant advancement in Kubernetes, promising reduced resource waste. Tools like Kondense are recommended for implementing these strategies. The article highlights ongoing developments in this area and the potential for future automation of CPU resizing.
Explanation of Kubectl Port Forwarding Streams
https://blog.kftray.app/kubectl-port-forward-flow-explained?showSharer=true
This article provides a detailed explanation of the kubectl port-forward command in Kubernetes, covering the entire process from initialization to data transmission.
Key Points:
Shares detailed information about the kubectl port-forward command, as the official documentation does not comprehensively explain the process in one place.
The article covers the following sections: initialization, authentication and authorization, retrieval of Pod information, establishment of the port-forwarding session, configuration of iptables for port forwarding, and SPDY session for port forwarding.
The port-forwarding process starts when the user executes the kubectl port-forward command (see the sketch after this list).
The CLI sends a request to the Kubernetes API server to validate the user’s token and verify permissions.
The CLI retrieves basic details about the target Pod by sending a GET request to the Kubernetes API server.
The CLI initiates a port-forwarding session by sending a POST request to the Kubernetes API server, which switches the protocol to SPDY.
The Kubernetes API server instructs the kubelet to configure iptables for port forwarding.
The user sends requests via SPDY streams to interact with the application running in the Pod.
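To ground the flow, the user-facing side of those steps (pod name, ports, and namespace are placeholders):

```shell
# The user starts the forwarder; kubectl authenticates, looks up the
# Pod, then negotiates the protocol upgrade to SPDY on the connection
kubectl port-forward pod/my-pod 8080:80 --namespace default

# In another terminal: traffic to localhost:8080 is carried over the
# SPDY streams to port 80 inside the Pod
curl http://localhost:8080/

# Under the hood, kubectl POSTs to the Pod's portforward subresource:
#   POST /api/v1/namespaces/default/pods/my-pod/portforward
```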