Comparative Human Advantage

Back in 1817 David Ricardo published a very influential theory on an interesting question: why trade at all, and in particular why trade when one country is better than the other at producing everything?

He gave an example of England and Portugal, in a world where there were just two goods, wine and cloth. In England it took 100 person-hours to make one unit of cloth, and 120 to make one unit of wine. The Portuguese, on the other hand, took 90 hours to make a unit of cloth and 80 to make a unit of wine. England is worse at making both wine and cloth, so why trade? Why doesn’t Portugal just make everything for itself?

Well, it turns out that while England lacked the famed Portuguese efficiency, it was way worse at wine than it was at cloth. England could trade one unit of English cloth for one unit of Portuguese wine, which meant the wine cost them (effectively) 100 person-hours versus the 120 they would have needed to make it themselves: a clear win! But Portugal won too: by focusing on wine rather than cloth, they could trade 80 hours of work (for the wine) for cloth that would have cost them 90 hours to make.
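
If it helps to see Ricardo’s arithmetic laid out, here is a minimal sketch of the opportunity-cost calculation, using only the figures above (the code itself is just an illustration):

```python
# Ricardo's figures: person-hours to make one unit of each good.
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

for country, cost in hours.items():
    # Opportunity cost: how much of one good you give up to make a unit of the other.
    cloth_in_wine = cost["cloth"] / cost["wine"]   # wine forgone per unit of cloth
    wine_in_cloth = cost["wine"] / cost["cloth"]   # cloth forgone per unit of wine
    print(f"{country}: 1 cloth costs {cloth_in_wine:.2f} wine, "
          f"1 wine costs {wine_in_cloth:.2f} cloth")

# England:  1 cloth costs 0.83 wine, 1 wine costs 1.20 cloth
# Portugal: 1 cloth costs 1.12 wine, 1 wine costs 0.89 cloth
#
# England gives up less wine per unit of cloth (0.83 < 1.12), and Portugal gives up
# less cloth per unit of wine (0.89 < 1.20), so each should specialize and trade,
# even though Portugal is absolutely better at both goods.
```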

Ricardo described this as a comparative advantage: by leaning into their relative specialties, countries could benefit from trade, even if they are generally more efficient than their competitors. This was a clever insight, globalization happened, and we eventually ended up with Temu.

Of course, things are never quite as simple as economists’ models (much to the annoyance of economists the world over), and within his own lifetime there were some interesting wrinkles. Sticking with the textiles theme, one of them happened to weavers: the people who took thread and turned it into fabric. There was a period, shortly before Ricardo published his theory, that some call the Golden Age of the handloom weaver. Spinning, turning raw material into thread, had been mechanized thanks to the Spinning Jenny, which made yarn cheaply available. Weavers became the bottleneck in turning that yarn into saleable cloth. They worked from home, controlled their own schedules, and made excellent money while doing so.

What changed next was the power loom[1]. Using the hand loom required dexterity and practice to master the shuttle and the weave, but the power loom just needed someone to mind it and occasionally unjam things. Weavers’ earnings collapsed from around 20 shillings a week in 1800 to 8 shillings by 1820. The power loom turned yarn into cloth efficiently and cheaply, without the need for years of deep skill and practice.

Ricardo was, at the end of his life, right there to observe the start of this transition, and in the third edition of his book On the Principles of Political Economy and Taxation he added a chapter titled “On Machinery”. Comparative advantage says that if a machine comes along that is better at some job, humans should move to where they are comparatively better (like fixing the machine). Ricardo realized that machinery could increase the profit of the factory owner while decreasing the gross income of workers: it shifted returns from labor to capital. The power loom took the primary asset of the weavers, their dexterity and practice, and made it economically irrelevant.

This feels worth discussing because in many ways software engineering has been going through a Golden Age of the handloom coder, particularly in the post-pandemic expansion from 2020 to 2022, when it was a very, very valuable skill indeed.

While SWE wages have yet to collapse to shillings, there has been a definite cooling through rounds of layoffs and shifts toward capital expenditure, accelerated by the adoption of strong coding models. Generating syntactically correct code has become way cheaper, and the bottleneck in shipping code to production is moving from writing it to proving it is correct. There is still a huge amount that hasn’t changed: identifying requirements, making choices between implementation paths, and thinking about the overall system. But slinging code is quickly becoming a different job. The primary beneficiaries so far are those selling the pythonic power looms: the big labs and the key tooling and hardware providers.

In my own direct experience, coding assistance went from being a somewhat niche interest that required regular selling to VPs to keep them investing in it, to a top-level company mandate with accompanying metrics. The question I have found myself discussing with many smart engineers recently is: are we the weavers, or, you know, is everyone a weaver? Is this another industrial revolution like steam or electricity, or something perhaps even larger?

Steve Newman of the Golden Gate Institute of AI[2] wrote up one of the best “maybe it’s different this time” posts I’ve read in a while, and not just because it involves robots mining Ceres[3].

https://secondthoughts.ai/p/the-unrecognizable-age “Presenting the case the future will be unrecognizable”

“I spend a lot of time in this blog arguing that AI’s near-term impact is overestimated, to the point where some people think of me as an AI skeptic. I think that predictions of massive change in the next few years are unrealistic. But as the saying goes, we tend to overestimate the effect of a technology in the short run, and underestimate it in the long run. Today, I’m going to address the flip side of the coin, and present a case that the long-term effect of AI could be very large indeed.”

The core of Newman’s argument is that AI is the first technology we have developed that could, potentially, be more adaptive than we are. As a way of illustrating this, let’s stick with what everyone comes to this blog for: 19th-century weavers.

Despite all of the above automation, weavers still had a role in more complex or limited-run designs where the expense and effort of setting up a power loom didn’t make sense. Then the Jacquard loom made the design flexible: you specified the design by punching holes in a card[4] and the loom wove it. The comparative advantage shifted away from weaving entirely, into designing and encoding. Pattern designers, as card punchers, became some of the first programmers of mechanical systems. The unique human advantage was adaptability: we added a level of flexibility, and humans then adapted to work above that level.

Newman argues that AI is a cognitive loom: the power loom replaced dexterity and practice, and the Jacquard loom made the machine flexible and adaptable, but someone still needed to punch the cards. Humans adapted and learned new skills. Newman’s point is that AI might be able to learn those new skills faster than we can.

“My point is simply that once AI crosses some threshold of adaptability and independence, there will be paths around the traditional barriers to change. And then things will really start to get weird.”

This doesn’t inherently invalidate the idea of comparative advantage, but it might make it practically irrelevant if the market value of the human advantage drops below the cost of subsistence. If a future AGI’s opportunity cost is tiny, maybe there just isn’t enough left over for humans when it comes to matters of substance.
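
To make that worry concrete, here is a purely hypothetical sketch (the hours, the machine-hour price, and the goods “code” and “judgment” are all invented for illustration, not a forecast): comparative advantage still hands the human a specialty, but the price that specialty can fetch is capped by what it would cost the AGI to just do the task itself.

```python
# Hypothetical hours per unit of output (invented numbers, purely for illustration).
hours = {
    "Human": {"code": 10.0,   "judgment": 1.0},    # human is *comparatively* better at judgment
    "AGI":   {"code": 0.0001, "judgment": 0.0002}, # AGI is absolutely better at both
}

# Comparative advantage still assigns the human to judgment...
human_oc = hours["Human"]["judgment"] / hours["Human"]["code"]  # 0.1 units of code forgone
agi_oc   = hours["AGI"]["judgment"] / hours["AGI"]["code"]      # 2.0 units of code forgone
assert human_oc < agi_oc  # ...so trade is still mutually beneficial on paper.

# ...but nobody pays more for a unit of human judgment than it would cost the AGI
# to produce one itself. If machine time costs, say, $1 per machine-hour (made up),
# a unit of judgment is worth at most:
machine_hour_cost = 1.00                                        # hypothetical $/machine-hour
price_ceiling = hours["AGI"]["judgment"] * machine_hour_cost    # $0.0002 per unit of judgment
human_wage_ceiling = price_ceiling / hours["Human"]["judgment"] # per hour of human work
print(f"Human wage ceiling: ${human_wage_ceiling:.4f}/hour")    # well below subsistence
```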

Comparative advantage is, fundamentally, about tradeoffs. Technology is our great lever of progress for removing some of those tradeoffs, but historically we have always run into more. Even if we were out mining asteroids with robots and building giant data centers autonomously, there would still not be infinite compute, and there would still not be infinite time. There will always be some set of tradeoffs that have to be made, some range of competing options to choose between.

What is valuable or notable in that environment can look markedly different. To look at the Victorians again, the art world was significantly affected by the advent of photography, as (within certain bounds) it effectively solved realism. Artists responded by developing Impressionism: the comparative advantage they retained was subjectivity and emotional context. Even the most opium-enhanced Victorian futurist would have had to be lucky to predict Cubism from reading about William Henry Fox Talbot.

Humans do seem to me to have a comparative advantage in some areas, particularly:

  • Reality
  • Desires

We are grounded as creatures in the world, not in textual or video inputs. We evolved in the world, and are richly adapted to it, in ways that are not always obvious, even to ourselves.

We also tend to view intelligence as being coupled to wanting things, because creatures notably less intelligent than us seem to want things, and we certainly have any number of desires. It might be true that an AGI will want things, but it’s not clear that it must be true. I feel even more confident that on the way to AGI we will build some pretty powerful systems that don’t really “want things” in the same way we do: they may be agentic, but they are not truly agents with goals absent human input.

Since we are already living in part of that future, I asked Gemini what it thought might be the human comparative advantage. As I hoped, it told me I was absolutely right:

“Since we (AIs) are designed to serve human intent, the scarcest resource for us is accurate data on human preference. If you can predict what humanity will value in 10 years (e.g., “Will we value privacy or convenience more?”), that information would be incredibly valuable to a superintelligence trying to optimize its resources.”

In a world of tradeoffs there will still have to be choices, and many of those choices are not easily or observably optimizable. Our ability to be in the world and have preferences might be the most valuable aspect of us after all. Maybe the role of the software engineer of the future, or perhaps of people of the future, isn’t so much doing the work, or even managing the work, as curating the work.

One example of that kind of activity is DJing: a DJ creates a vibe by arranging songs based on their taste and the response of the audience. Folks choose to go to certain DJs not because they are objectively better, but because they are who they are.

This might sound a bit silly, but in practice much of modern work is not so much about doing the thing as it is about doing the thing a certain way. Still, is the future of humanity collectively making sure the vibes are right? From a certain point of view, what we have always done, collectively, is build a culture. And what is culture other than the right vibes? Perhaps our future is just a continuation of our history, with new technologies, and new tradeoffs.

  1. For a really detailed treatise on this whole idea, see Acemoglu and Johnson’s excellent article “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution, and in the Age of AI”.
  2. And one of the creators of Google Docs, among other things.
  3. Beltalowda!
  4. As an aside, this influenced various other uses of punch cards for data storage, leading to IBM and thence to the fact that your terminal defaults to an 80-character width.
