Vivek Haldar

CPU pipelines and the structure of work

I want to expand on what I meant in the following tweet from a few days ago.

I think the modern CPU is an excellent parable for the nature of modern work itself. In a sense, it is the purest expression of Taylorism.

(Prerequisite: none of this will make sense without a basic knowledge of computer architecture and how CPUs are structured at a high level as a series of stages, called a pipeline. The canonical reference is Computer Architecture: A Quantitative Approach, by Hennessy and Patterson.)

  1. The goals of structuring the work are efficiency (using resources optimally), latency (minimizing the time it takes from start to finish), and throughput (maximizing the work completed per unit time).

  2. In order to achieve these goals, work is structured into a sequence of stages, with each stage performing a narrow, well-defined sub-task that is relatively simple (there is a runnable sketch of such a pipeline after this list).

  3. Work should not pile up. Inventory is anathema. There is a well-defined and bounded buffer between stages to hold intermediate results.

  4. Latency and throughput are limited by the slowest stage in the pipeline. Thus, it is important that each stage be roughly equal in complexity and complete its work in about the same time. This also helps minimize temporary inventory (point 3).

  5. Specialization is key. As a corollary of all the above points, each stage is naturally pushed to a narrow specialization.

  6. Complexity in the overall system is acceptable, and indeed often inevitable, as long as each individual stage stays simple: the rules above naturally gravitate toward a deep pipeline with a large number of stages.
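
To make these mechanics concrete, here is a minimal sketch of such a pipeline. It's in Go only because channels map neatly onto bounded inter-stage buffers; the three stages and their timings are invented for illustration, not modeled on any real CPU. Each stage runs concurrently, the channel capacity caps the inventory between stages (point 3), and the deliberately slow middle stage gates throughput (point 4).

```go
package main

import (
	"fmt"
	"time"
)

// stage models one pipeline stage: it pulls an item from in,
// spends d "working" on it, and passes it downstream on out.
func stage(d time.Duration, in <-chan int, out chan<- int) {
	for item := range in {
		time.Sleep(d) // stand-in for the stage's narrow sub-task
		out <- item
	}
	close(out) // propagate shutdown to the next stage
}

func main() {
	const n = 50
	const buf = 4 // bounded buffer between stages: inventory is capped

	src := make(chan int, buf)
	mid := make(chan int, buf)
	end := make(chan int, buf)
	out := make(chan int, buf)

	// Three concurrent stages with deliberately unequal work;
	// the 3ms middle stage is the bottleneck.
	go stage(1*time.Millisecond, src, mid)
	go stage(3*time.Millisecond, mid, end)
	go stage(1*time.Millisecond, end, out)

	start := time.Now()
	go func() {
		for i := 0; i < n; i++ {
			src <- i
		}
		close(src)
	}()
	for range out { // drain completed items
	}
	elapsed := time.Since(start)

	// Steady-state cost per item approaches the slowest stage (3ms),
	// not the sum of the stages (5ms) or their average.
	fmt.Printf("%d items in %v (~%v per item)\n", n, elapsed, elapsed/n)
}
```

Run it and the measured per-item time converges on the slowest stage's 3ms, not the 5ms sum of all stages. Shrink that bottleneck stage and the whole pipeline speeds up, which is exactly the pressure toward balanced stages described in point 4.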