What Is Missing in Kubernetes?

As Kubernetes becomes the de facto solution for container orchestration, more and more people expect it to become the orchestrator of the data center. For example, ZDNet predicted that Kubernetes will rule the hyperscale data center in 2018. In a little over four years, the project born at Google seems poised to change everything. Tracing its roots to Google's Borg, Kubernetes is well designed for running web services. With StatefulSets becoming stable in 1.9, it can also manage stateful applications such as databases and message queues. To conquer enterprise data centers, however, it still has several missing pieces.

In the data centers of large corporations (e.g., banks, pharmaceutical companies, and energy companies), there is a variety of workloads such as HPC (high-performance computing), HPA (high-performance analytics), and batch jobs. Compared to them, web services use only a small portion of compute resources. Unfortunately, Kubernetes has so far been weak at orchestrating these workloads.

HPC

There are many kinds of HPC workloads. For simplicity, let's consider only Monte Carlo simulation here. It is a simple use case but consumes a lot of compute time in many enterprise data centers. A typical Monte Carlo simulation involves millions of tasks with complicated dependencies. The scheduling algorithm is generally task driven. Since each task runs for only seconds, low scheduling latency is critical. In contrast, the median Kubernetes pod startup latency on a large cluster can be as long as 25 seconds, about 80% of which is spent deploying container images. Although one may argue that a local cache of Docker images should help, frequent releases of new versions and images are the norm with agile development, so hiccups or even outright choking happen regularly.
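
To see why this matters, here is a rough back-of-envelope sketch in Python. The task runtime is an assumed illustrative value; the 25-second startup latency and 80% image-pull share are the figures cited above.

```python
# Back-of-envelope sketch of why pod startup latency hurts short HPC tasks.
# task_runtime_s is an assumption; the other two numbers come from the text above.
task_runtime_s = 5.0       # a typical Monte Carlo task runs for seconds
pod_startup_s = 25.0       # median pod startup latency on a large cluster
image_pull_share = 0.8     # ~80% of startup goes to deploying the image

# If every task is scheduled as a fresh pod, useful work is a small fraction
# of the wall-clock time each pod occupies a node.
utilization = task_runtime_s / (task_runtime_s + pod_startup_s)
print(f"effective utilization: {utilization:.0%}")                          # ~17%
print(f"time lost to image pulls per pod: {pod_startup_s * image_pull_share:.0f}s")
```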

Even worse, thousands of machines will request the Docker image from the registry server simultaneously when an HPC job starts. The central registry is not only a bottleneck but may also not survive the heavy volume of requests. A distributed registry is a better approach. For example, in NERSC's Shifter project, Docker images are converted to tgz files and transferred to the Lustre parallel distributed file system.
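
A quick, hedged estimate makes the registry problem concrete; the cluster size, image size, and registry bandwidth below are assumptions chosen only for illustration.

```python
# Rough sketch of the load on a central registry when a large HPC job starts.
# All three numbers are illustrative assumptions, not measurements.
nodes = 2000          # machines pulling the image at job start
image_gb = 2.0        # assumed compressed image size in gigabytes
registry_gbps = 10.0  # assumed registry network bandwidth in gigabits/s

total_gbits = nodes * image_gb * 8
seconds_to_serve = total_gbits / registry_gbps
print(f"best case to serve all pulls from one registry: {seconds_to_serve / 60:.0f} minutes")
# Spreading the images across a parallel file system (as Shifter does with
# Lustre) distributes this load across many servers instead of one endpoint.
```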

HPA

Since Spark 2.3, we can submit Spark jobs to Kubernetes. However, the current integration takes a static resource allocation approach. When submitting a job, the user needs to configure the number of executors, which reserves the resources from Kubernetes for the whole lifetime of the application. Note that a Spark application often runs several or many Spark jobs, which are decomposed into stages and tasks for scheduling. Each job and stage generally has a different number of tasks and requires a different amount of resources, yet the user has to allocate the maximum number of executors up front. This static allocation will certainly waste a lot of CPU time and RAM.
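
For illustration, here is a minimal sketch of such a submission using the documented Spark-on-Kubernetes options. The API server address, application name, class, image, and jar path are placeholders, and the script assumes spark-submit is on the PATH; the point is only that spark.executor.instances is fixed once for the entire application.

```python
# Minimal sketch of submitting a Spark job to Kubernetes (Spark 2.3 style)
# with static resource allocation. Endpoint, image, and paths are placeholders.
import subprocess

executors = 10  # reserved from Kubernetes for the whole lifetime of the app

cmd = [
    "spark-submit",
    "--master", "k8s://https://kube-apiserver.example.com:6443",   # placeholder
    "--deploy-mode", "cluster",
    "--name", "monte-carlo-pricing",                               # hypothetical app
    "--class", "com.example.MonteCarloPricing",                    # hypothetical class
    # The executor count is fixed up front; Spark keeps these pods booked even
    # when later stages have far fewer tasks to run.
    "--conf", f"spark.executor.instances={executors}",
    "--conf", "spark.executor.memory=4g",
    "--conf", "spark.executor.cores=2",
    "--conf", "spark.kubernetes.container.image=registry.example.com/spark:2.3.0",  # placeholder
    "local:///opt/jobs/monte-carlo.jar",                           # placeholder jar
]
subprocess.run(cmd, check=True)
```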

Batch Jobs

Kubernetes's batch job support is extremely simple: basically, run to completion. However, enterprise batch jobs are far more complicated than that. For example, a batch job may execute in parallel across many hundreds or even thousands of nodes, using a message-passing library to synchronize state. It may also require specialized resources like GPUs or access to limited software licenses. Organizations may enforce policies around what types of resources can be used by whom, to ensure projects are adequately resourced and deadlines are met. Therefore, capabilities like array jobs, configurable priority and preemption, user-, group-, or service-based quotas, and a variety of other features are mandatory. There is a SIG project, kube-batch, working on a batch scheduler for Kubernetes, but its roadmap and expected GA date are not available yet.
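
For reference, here is a minimal sketch of that run-to-completion interface using the Kubernetes Python client; the namespace, image, and command are placeholders. Beyond a parallelism/completions pair, none of the enterprise features above is expressible here.

```python
# Minimal sketch of Kubernetes' built-in batch support: a run-to-completion Job
# with a parallelism/completions pair and nothing more. Image, namespace, and
# command are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig is available

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="mc-batch"),
    spec=client.V1JobSpec(
        parallelism=100,    # pods running at once
        completions=1000,   # total successful pods required
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="worker",
                        image="registry.example.com/mc-worker:latest",  # placeholder
                        command=["python", "run_task.py"],              # placeholder
                    )
                ],
            )
        ),
    ),
)
client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
# Notably absent: array-style per-task parameters, configurable priority and
# preemption, and user/group/service quotas that enterprise schedulers provide.
```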

IBM Is Missing a Disruptive Innovation in Its Own Hands

With $80B in revenue but a streak of 21 consecutive quarters of declining revenue, IBM has a big appetite for growth. The hunger for growth drives IBM to invest billions in its AI business, namely the IBM Watson Group. However, it also drives IBM away from a disruptive innovation even though it is in its own hands. The disruptive innovation in the spotlight is Watson for Oncology. Continue reading

How to Kill Bad Projects

It is an open secret how hard it is to kill projects in development. In the Harvard Business Review article “Why Bad Projects Are So Hard to Kill”, professor Isabelle Royer says that many projects are hard to kill because of a “fervent and widespread belief among managers in the inevitability of their projects’ ultimate success.” The desire to believe in something is primal. The excitement and exuberance associated with a project typically originate with the project champion, whose unyielding conviction that the project will succeed is often based on a hunch rather than on strong evidence. The champion’s exuberance spreads because others also want to believe, especially if the champion is charismatic and well networked within the company. Continue reading

The Future Business Model of Payroll

                 ADP              PayPal
Money Movement   $1.7 trillion    $354 billion
Revenue          $12.21 billion   $11.27 billion
Profit           $1.75 billion    $1.42 billion
Market Cap       $43.3 billion    $59.7 billion

  • ADP revenue includes full HCM services besides payroll.

Notice something here? ADP moves far more money than PayPal, but it makes less revenue on money movement (once you exclude the revenue from its other HCM services). It has a smaller market cap, too. Why? Well, ADP is in the solution-shop and value-adding-process business, while PayPal is a facilitated network. Continue reading

Choosing The Best Tools

India is one of the few nations that can buy military equipment from both the Western world and Russia. When building its destroyers, India takes advantage of this to install the best sensors from multiple countries on its ships. However, this choosing-the-best-tool-for-each-problem approach is an engineering nightmare. It is extremely challenging to make sensors from Russia, Italy, France, India, and elsewhere work smoothly together due to various compatibility issues.

The issues are not in each module itself. Essentially, every large engineering project is an integration effort. We can easily lose the big picture when we focus on the performance attributes of each module. So be careful the next time your architect shows you a system architecture like the one below.

Disruptive Innovation: When and Where?

In the theory of disruptive innovation, Clayton Christensen argues that incumbent companies introduce new and improved products year after year through sustaining innovations, which eventually overshoot the performance that some customers can use, because companies innovate faster than customers’ lives change. Overshooting creates opportunities for firms to change the basis of competition in order to earn above-average profits. After functionality and reliability have become good enough, for example, the next dimensions of competition could be convenience, customization, and price. Continue reading

Risk Aversion and Sunk Cost Fallacy

In his book Misbehaving, Richard H. Thaler tells an interesting story. In a class on decision-making for a group of executives from a company in the print media industry, Thaler posed a scenario: suppose you were offered an investment opportunity for your division that will yield one of two payoffs. After the investment is made, there is a 50% chance it will make a profit of $2 million and a 50% chance it will lose $1 million. When Thaler asked who would take on this project, only three of the twenty-three executives said they would. Then he asked the CEO how many of the projects he would want to undertake (assuming all projects were independent, i.e., the success of one was unrelated to the success of the others); the answer was all of them! Continue reading
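
To make the CEO's reasoning concrete, here is a small sketch of the arithmetic; the payoffs and the independence assumption are taken from the story itself, and the count of 23 projects simply mirrors the number of executives in the room.

```python
# Sketch of the expected-value arithmetic behind the CEO's answer.
# Payoffs and probabilities are from the story; projects are assumed independent.
from math import comb

p_win, win, loss = 0.5, 2_000_000, -1_000_000
n = 23  # one project per executive in the room

ev_single = p_win * win + (1 - p_win) * loss   # $500,000 expected per project
ev_portfolio = n * ev_single                   # $11.5 million expected in total

# Probability the whole portfolio loses money: fewer than 8 wins out of 23,
# since 8 wins already nets 8*2M - 15*1M = +$1M.
p_portfolio_loss = sum(comb(n, k) * p_win**k * (1 - p_win)**(n - k) for k in range(8))
print(f"EV per project: ${ev_single:,.0f}")
print(f"EV of all {n} projects: ${ev_portfolio:,.0f}")
print(f"chance the portfolio loses money: {p_portfolio_loss:.1%}")  # ~4.7%
```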

Agile Software Development: The Chinese Navy

The best demonstration of agile software development is probably the modernization of the Chinese Navy. Following a “Run Swiftly in Small Steps” strategy, the Chinese Navy has undergone a stunning modernization push that puts it near parity with the US Navy. Look below at how the Chinese Navy has steadily improved each class of its destroyers in shorter and shorter cycles. They are the grand masters of agile development. Continue reading