Day Two with Kubernetes: Finding the Right Requests and Limits | StormForge (2023)

Kubernetes is a powerful tool that helps DevOps teams work more efficiently and smoothly, which is why more and more people are using it every day. Most developers have little trouble creating clusters and planning workloads. But when it comes to tackling the ongoing task of optimizing your deployments for performance and resource efficiency, that's a different story.

In this post, we look at some common Kubernetes challenges on day two and how machine learning technology can help developers overcome them.

Kubernetes has caught on and has become very popular with DevOps teams, and for good reason. Developers and engineers love it because it offers easier ways to build more flexible and scalable applications and infrastructure. In theory, it also helps developers ensure their apps work reliably and contribute to a positive user/customer experience.


Kubernetes also helps developers reduce time to market with new applications or features. Its flexible, microservices-based approach and better use of resources reduce friction losses in the development process. It also helps development teams respond to business needs, breaks down monolithic applications, and paves the way for moving more operations to the cloud.

No wonder, then, that according to the Cloud Native Computing Foundation (CNCF), this orchestration system continues to lead the container field. In the CNCF Survey 2020, 91% of respondents said they use Kubernetes, and 83% of those said they use it in production. With adoption and usage stats like these, it's safe to say that Kubernetes is here to stay.

What are the Kubernetes Day 2 challenges?

While the Kubernetes advantage is enticing, this container orchestration system has its challenges.

One area where development teams often encounter challenges is the day two phase. Of course, none of these phases is a literal single day; in reality, each spans weeks, months, or even years. Day two follows day zero (requirements, design, and prototyping) and day one (build, test, and deployment).

In the day two phase, the application runs in production. That means it needs all the "care and feeding" that comes with production assets, including performance monitoring, troubleshooting and remediation, maintenance and upgrades, security and compliance testing, and more. On day two, teams also become familiar enough with their new deployment to start tweaking and tuning things to achieve the desired balance of performance and cost.

But there's a catch, and it's the source of day two hurdles for many developers. Kubernetes provides controls for changing things like application settings to increase performance or reduce cloud costs; however, these controls are not automated. And given the dynamic nature of many Kubernetes workloads, it's simply not possible to make the right changes manually, fast enough to get the desired result.

Developers can sit at a console all day analyzing Prometheus metrics to make educated guesses about what resources their containers need. But beyond a container or two, this isn't a viable strategy, and developers usually have higher priorities. Instead, they need automated tools that are fast enough to keep up with development and that understand which configuration changes affect performance and resource consumption.
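For a sense of what that manual analysis looks like, a developer might run PromQL queries like the following against the standard cAdvisor metrics and eyeball the results (the `namespace` and `container` label values here are hypothetical):

```promql
# Peak working-set memory for one container over the last 7 days
max_over_time(container_memory_working_set_bytes{namespace="shop", container="api"}[7d])

# Average CPU usage in cores over the last 7 days (5-minute rate, sampled every 5 minutes)
avg_over_time(rate(container_cpu_usage_seconds_total{namespace="shop", container="api"}[5m])[7d:5m])
```

Turning numbers like these into requests and limits, per container and per workload, is exactly the kind of repetitive work that doesn't scale by hand.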

The good news for DevOps teams is that new machine learning-based solutions are replacing manual configuration management. Let's take a look at one of these Day Two Kubernetes problems and how machine learning can help developers solve it.




The essentials of Kubernetes requests and limits

One of the goals of day two is to ensure containers have the right amount of resources to work best for the organization. That means making sure containers have enough resources that they don't crash or get throttled, but also aren't overprovisioned, sending cloud costs skyrocketing. Without automation tools, it is very difficult for developers to strike this balance accurately.

A common factor in developers' resource decisions is the importance of an application or service to the business. An application's place on the mission-critical scale often determines which Quality of Service (QoS) class developers assign to it. There are three QoS classes in Kubernetes: the highest is Guaranteed, the middle is Burstable, and the lowest is BestEffort.

When a Kubernetes node comes under resource pressure, especially from non-compressible resources like memory or disk space, the kubelet can evict pods to keep the node stable. In this case, Kubernetes evicts pods with lower QoS classes first. BestEffort pods specify no resource requests or limits and are treated as lowest priority. Burstable pods request a minimum amount of resources and set a resource cap; if they exceed their requests, they become candidates for eviction. Only as a last resort does Kubernetes evict pods with the Guaranteed QoS class. (Even these pods can be killed during normal operation if they exceed their resource limits.)

QoS classes are determined by two main controls: requests and limits. Essentially, these are the knobs developers use to increase or decrease the amount of a resource, such as CPU or memory, allocated to a container. Requests specify guaranteed resources for a container: when a container requests a resource, Kubernetes schedules it only onto a node that has the requested capacity available. Limits, on the other hand, set a "do not exceed" ceiling beyond which Kubernetes will not let the container's resource consumption grow.
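As an illustrative sketch (the names and values are hypothetical), the following pod spec would land in the Burstable class because its requests are lower than its limits; setting requests equal to limits for every container would make the pod Guaranteed, and omitting both would make it BestEffort:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app              # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:              # floor the scheduler guarantees
        cpu: "250m"          # a quarter of a CPU core
        memory: "256Mi"
      limits:                # ceiling the kubelet enforces
        cpu: "500m"
        memory: "512Mi"
```

The assigned class can be confirmed with `kubectl get pod web-app -o jsonpath='{.status.qosClass}'`.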


Other workload-specific configuration options that developers can control, such as JVM settings for Java applications, can be specified in the pod in a variety of ways, including environment variables, configuration files mounted as volumes, and the container's command-line arguments. Changing these settings, e.g. adjusting the heap size or garbage-collection parameters, can have a major impact on application performance.
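For instance, JVM flags are often injected through an environment variable; `JAVA_TOOL_OPTIONS` is picked up automatically by the JVM at startup (the names and values below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app                  # hypothetical name
spec:
  containers:
  - name: app
    image: eclipse-temurin:17-jre
    env:
    - name: JAVA_TOOL_OPTIONS     # read by the JVM on startup
      value: "-Xms128m -Xmx384m -XX:+UseG1GC"
    resources:
      limits:
        memory: "512Mi"           # leave headroom above -Xmx for non-heap memory
```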

In the early days of deploying an application or service, there is usually some leeway in configuration settings. Developers who, for obvious reasons, don't want their apps to fail often tend to overprovision them. But as the Kubernetes deployment grows and new applications with different behaviors and resource requirements are added, performance and cost can get out of hand.

This is when developers really need to tune their apps' settings. It's also when they realize they don't have the tools to do it effectively, accurately, and quickly. Without insight into the impact of changing configuration settings, developers are left to guesswork.

Until relatively recently, developers didn't have a better alternative. That has changed with the emergence of new solutions that leverage artificial intelligence and machine learning to provide smarter, more effective, and automated ways to optimize applications running in Kubernetes environments. The StormForge platform is one such solution.

StormForge Kubernetes optimization features

The StormForge platform's Kubernetes optimization features automatically analyze, optimize, and refine cloud-native application settings. This intelligent automation helps development teams ensure their applications and services consistently meet their stability, performance, and cost goals. StormForge's machine learning engine continuously monitors applications and services, measuring how they respond to configuration changes in real time. With each experiment it learns more, homing in on the optimal configuration for the stated performance and cost goals. It also automatically shows developers the trade-offs associated with configuration changes, recommends optimal settings, and makes it easy for developers to download and apply those recommendations.


The day two phase of Kubernetes deployments should not involve developers trying their luck at guessing the best configuration options. Nor should they spend it cleaning up operational messes that stem from wrong guesses. Instead, it should be a time when cloud-native applications and services run mostly on autopilot, automatically adapting to changes - in the environment, business goals, or whatever else shifts.

In other words, it should be a time when an organization realizes all the benefits of Kubernetes and developers can work on more urgent and interesting projects than chasing configuration changes all day.

The StormForge platform can make this goal a reality. Request a demo or check out our Kubernetes performance and optimization testing to see for yourself.



How do you determine resource limits in Kubernetes?

Resource units in Kubernetes

Limits and requests for CPU resources are measured in cpu units. In Kubernetes, 1 CPU unit is equivalent to 1 physical CPU core, or 1 virtual core, depending on whether the node is a physical host or a virtual machine running inside a physical machine. Fractional requests are allowed.
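Fractional CPU is expressed in millicpu (millicores), where `1000m` equals one CPU. A minimal fragment (values illustrative):

```yaml
resources:
  requests:
    cpu: "100m"   # 0.1 CPU core (100 millicores)
  limits:
    cpu: "1"      # one full core; equivalent to "1000m"
```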

What are the minimum resource requirements for Kubernetes?

If a container's memory request is set to 256MB, for example, the application can use more than 256MB, but Kubernetes guarantees a minimum of 256MB to the container. Limits, on the other hand, define the maximum amount of resources the container can consume.

What are the scalability limits for Kubernetes?

More specifically, Kubernetes is designed to accommodate configurations that meet all of the following criteria: No more than 110 pods per node. No more than 5,000 nodes. No more than 150,000 total pods.

What are Kubernetes' default resource limits?

Kubernetes doesn't provide default resource limits out of the box. This means that unless you explicitly define limits, your containers can consume unlimited CPU and memory. You can, however, define a LimitRange in a namespace: pods deployed after that LimitRange, without their own CPU or memory limits, will have its defaults applied to them automatically.
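A namespace-level LimitRange that supplies such defaults might look like this sketch (name, namespace, and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits   # hypothetical name
  namespace: dev         # hypothetical namespace
spec:
  limits:
  - type: Container
    default:             # default limits for containers that set none
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:      # default requests for containers that set none
      cpu: "100m"
      memory: "128Mi"
```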

Which command can be used to find resource limits?

You can test resource limits by running the ulimit command and restarting the affected processes.
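For example, in a standard POSIX shell (`ulimit` is a shell builtin):

```shell
# Print all resource limits for the current shell session
ulimit -a

# Print just the maximum number of open file descriptors
ulimit -n
```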

Should you use limits in Kubernetes?

Best practices. In very few cases should you be using limits to control your resources usage in Kubernetes. This is because if you want to avoid starvation (ensure that every important process gets its share), you should be using requests in the first place.
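Following that advice, a container spec might set requests for both resources but cap only memory, leaving CPU unlimited so the container can burst into idle capacity without being throttled (a common pattern; values illustrative):

```yaml
resources:
  requests:
    cpu: "250m"        # guaranteed share used for scheduling
    memory: "256Mi"
  limits:
    memory: "256Mi"    # memory is non-compressible, so cap it
    # no cpu limit: excess CPU is shared rather than throttled
```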



