Comparing different methods to reduce the cost of computing resources for cloud-based development.
With the rise of the DevOps paradigm and the container-orchestration technology Kubernetes, more and more developers are working with Kubernetes not only for deployment but also during development. This brings up the question of how to enable developers to do this without spending excessive amounts of money on computing resources, especially in public clouds. In this post, I will compare different ways of dealing with this issue.
Local or Cloud Development
First, you need to decide which Kubernetes environment developers should use. In principle, there are two options: using the local resources of the developer’s computer or using cloud resources. While local development is an option for simpler applications and for individuals or smaller teams, cloud environments are also feasible for larger teams and very complex applications. For a comparison of the different approaches to providing developers with access to Kubernetes, refer to my post about this topic.
The Easy Way: Local Development
The easiest way to save costs for computing resources is to simply use the resources that are available on local computers. If you can run your application in a local k8s environment, e.g. with tools such as minikube, this does not cost you anything (ignoring marginally higher electricity bills if the computers are used more intensely). However, all developers might need fairly powerful computers that will be heavily used, which can mean spending money on better hardware.
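As a quick sanity check before committing to this approach, a sketch like the following (assuming a Linux machine where `nproc` and `free` are available) shows whether a developer’s computer has enough capacity for a local cluster; the minikube resource sizes are example values:

```shell
# Check local capacity before committing to local Kubernetes development.
echo "CPUs available: $(nproc)"
echo "Memory total:   $(free -h | awk '/^Mem:/ {print $2}')"

# If the machine is big enough, start a local cluster
# (requires minikube to be installed; the sizes are examples):
# minikube start --cpus=4 --memory=8192
```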
Since local development is limited in terms of computing resources, it can only be used for simpler applications. It is also better suited for small teams, because Kubernetes has to be set up and maintained on each computer, which leads to a lot of maintenance overhead in larger teams. However, simple apps and small teams usually do not cause high costs in a cloud environment either, which reduces the urgency of the problem in general.
Overall, this approach is of course very cheap, making it a possible solution for teams without large resource needs and also attractive for hobby programmers who want to experiment with Kubernetes. For businesses, however, it is often simply not feasible.
The Real Problem: Cloud-Based Development
Many companies are already using cloud-based development, i.e. they give developers access to the cloud to run and test their software. This can happen for two reasons: Sometimes, they simply have to because they need cloud computing resources that are not available on local computers. This is especially the case for larger applications, microservice architectures, or applications that require GPUs, such as machine learning apps. The second reason is that businesses actively want to use cloud development, e.g. because it is easier to maintain for teams or because developers without k8s knowledge can use it easily and thus work more productively.
As described in my other post, there are two possibilities to give developers access to a Kubernetes cluster: giving each developer their own cluster or sharing one cluster among many developers. Giving each developer their own cluster has several disadvantages. One of them is that it uses resources inefficiently, as the core Kubernetes components have to run many times in parallel and cannot be shared. Sharing a cluster is therefore more efficient in terms of computing resources, which is why I assume this approach in the following.
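To make the shared-cluster setup concrete, a minimal sketch is to give every developer an isolated namespace on the one cluster; the name dev-alice below is a placeholder:

```shell
# Sketch: one namespace per developer on a single shared cluster.
# "dev-alice" is a placeholder name.
cat > dev-alice-ns.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice
  labels:
    purpose: development
EOF
# Apply it to the shared cluster (requires kubectl access):
# kubectl apply -f dev-alice-ns.yaml
```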
However, even a single shared cluster can require quite a lot of resources, especially if multiple developers develop complex software on it in parallel. There are different ways of dealing with this issue:
Approaches to Deal With High Cloud Computing Cost For Development
1. Just Pay For It
The first solution is to simply pay the cost of the cloud resources. This sounds trivial, but it can still be a good option in some cases. These include cases where you have a lot of cloud credits from public cloud providers (in some cases $100,000 or even more), so you do not actually “pay” anything. Such credits are often available for startups, but even companies without credits may choose to simply pay the price without optimizing anything if other issues are more important. For example, imagine a startup that wants to release a new product as quickly as possible and thus does not want to deal with anything besides product development. Just paying for the cloud resources also makes sense if the associated cost is not very high in absolute terms, so an investment in optimization would not pay off.
However, this simplistic approach is obviously not optimized in any way and can become very expensive in the long run.
2. Establish Resource Limits
The second approach is to limit resource usage, either with technical limits or with rules that forbid using more than a pre-specified amount of resources. This sets a maximum cap on your cost, so you can be sure that no more than this amount is spent every month. Technical enforcement of resource limits can be achieved by connecting your Kubernetes cluster to DevSpace Cloud, where you can centrally set limits for individual users and namespaces in a GUI.
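If you prefer plain Kubernetes over a GUI, the same idea can be sketched with a standard ResourceQuota object per developer namespace; the namespace name and the limits below are example values you would tune to your team:

```shell
# Sketch: cap what a single developer namespace may request.
# Namespace name and limit values are examples.
cat > quota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-alice
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
EOF
# Apply it to the cluster (requires kubectl access):
# kubectl apply -f quota.yaml
```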
With this approach, it is important to set the limit to the right amount: If it is too high, you may still spend too much; if it is too low, it slows down the development flow. If you only set rules for how developers should deal with the cloud resources, you also need to make sure that everybody on the team abides by these rules, as they otherwise become useless.
While this solution can save you some cost and will set a cap on your cloud bill, it is still not optimized. In some situations, more resources might be needed than allowed, e.g. to test a new feature, while in other situations, tasks might have been executed with fewer resources than allowed.
Overall, this approach is easy to implement and a good starting point, with the main advantage that your total costs become predictable. It does not optimize your workflows, but it can be combined with one of the next two solutions.
DevSpace Cloud is available as a SaaS solution or on-premise. In either case, you simply connect your Kubernetes cluster and can invite additional users to it, whose permissions and limits can be set in a graphical UI.
3. Shut Off Namespaces Manually
A third approach is to instruct all developers to delete or scale down their namespaces as soon as they do not need them anymore and to restart or scale them up when they resume their work. This can save a lot of cost because most of the time, e.g. at night, on weekends, on holidays, or during meetings, the computing resources are not needed, but you still have to pay for them. Thanks to containers and Kubernetes, it is usually possible to delete containers during development and restart them later. With this approach, your cost savings can be significant and, at the same time, developers are not limited in any way, as they can still scale up as far as they want when needed.
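One way to make this routine less error-prone is a tiny helper script that each developer runs before leaving and after returning. This is a sketch assuming a namespace called dev-alice and kubectl access to the shared cluster:

```shell
# Sketch: pause/resume helper for a developer namespace.
# "dev-alice" is a placeholder; the kubectl calls require cluster access.
cat > pause-dev.sh <<'EOF'
#!/bin/sh
# Usage: ./pause-dev.sh 0   (pause before leaving)
#        ./pause-dev.sh 1   (resume with one replica each)
REPLICAS="${1:-0}"
kubectl scale deployment --all --replicas="$REPLICAS" -n dev-alice
kubectl scale statefulset --all --replicas="$REPLICAS" -n dev-alice
EOF
chmod +x pause-dev.sh
```

Resuming with one replica each is a simplification; an application that needs specific replica counts would have to record them before scaling down.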
However, this solution also comes with some disadvantages: First, restarting the application after every shut-off costs some time. Second, this solution relies on the developers actually doing it. Unfortunately, scaling down and shutting off resources is a task that is easily forgotten, especially when sudden events happen, such as a spontaneous meeting right before the weekend. It can also be seen as an “annoying” task, as the developers see no direct benefit from doing it; it rather means waiting when they want to continue working. Therefore, it might make sense to incentivize them to do this task reliably, which in turn means additional effort for management.
4. Automatically Pause Namespaces
Related to the third solution is the approach of automatically pausing namespaces after some time of inactivity. This can be done with DevSpace Cloud: It automatically scales Kubernetes ReplicaSets, Deployments, and StatefulSets to zero, sets the pod count in the resource quota to 0, and terminates all existing pods in the namespace.
It is possible to configure the period of inactivity after which a namespace is paused, so you can decide whether namespaces should already pause during shorter breaks and meetings or only over weekends and holidays. This configuration can be set at the level of individual namespaces, users, or the whole cluster. As long as the user runs commands such as devspace logs or devspace enter, they signal to DevSpace Cloud that the namespace is still active, so it will not go into sleep mode. When the developer starts working with a paused namespace again, it automatically resumes.
If you configure this sleep mode appropriately, it becomes a “smart” approach to reducing your cloud computing cost, as it takes the characteristics of your application and the working patterns of individual developers into account. As a result, you can save up to 70% of your cloud cost without limiting developers in their work. It is also fairly easy to implement: Developers do not need to adapt their normal workflows; the cluster just has to be connected to DevSpace Cloud and the sleep mode configured once in its graphical user interface.
Cloud computing costs can be a hurdle for the adoption of development with Kubernetes. However, whether to develop in the cloud is a rather strategic decision, and cost should not be the main reason for or against it, especially as there are ways to reduce the associated cost to a large degree.
Which approach to dealing with cloud costs is best for you depends on your specific situation and on the urgency of the issue. If you are developing locally, you will not face the issue at all, and even with cloud development, it might be right not to act immediately if your costs are not very high and other issues have higher priority. In the long run, however, it usually pays off to save computing resources. To do so, you can set strict limits for the developers, ask them to shut down their environments, or pause their namespaces automatically. Especially a combination of a technical limit and automatically paused namespaces can save you a lot of cost and put a hard cap on your bill while not affecting the developers’ workflow. Both can be achieved by connecting your Kubernetes cluster to DevSpace Cloud.