Reading the classics in other disciplines has deeply influenced my thinking. Questions that arise in one field need not be answered with inputs from that same field. We can take questions wherever we want. Other disciplines permit wandering through different territories, some of which may hold answers our own land does not.
Earlier this year, I was thinking about the ChatGPT vs Google situation. I didn’t understand why OpenAI seemed to have won, nor why ChatGPT peaked at around 300M MAUs. Why didn’t it completely replace Google?
I got an interesting idea from Darwin’s specialization of organs. Then I started noticing how a very similar thing occurred at the system level with Adam Smith’s division of labor. Both helped me build bridges toward Christensen’s ideas and find a potential answer to this question. I think this set of ideas extends beyond this case and composes a very interesting framework.
The Specialization of Organs
Living beings are optimized for procreation. In optimizing them for it, nature makes them economize on effort. Maintaining form and structure is a daunting task: the environment applies constant pressure toward dissolution. In consequence, combating this and extending organic beings’ lives consumes vast resources. Partly for this reason, genetic codes don’t urge organic beings to perform unnecessary functions, so that energy and resources can be focused on survival and procreation.
“Natural selection is continually trying to economise every part of the organization.” Darwin
On a similar note, by keeping profitable variations, a clear tendency made itself apparent to Darwin: natural selection will cause organs to specialize when specialization enhances an individual’s prospects of survival.
“For all physiologists admit that the specialization of organs, inasmuch as in this state they perform their functions better, is an advantage to each being; and hence the accumulation of variations tending towards specialization is within the scope of natural selection.”
Spreading resources thin tends to deliver worse results at the micro level. The obverse is not necessarily true, as far as my understanding goes. An organic being whose structure is not optimal for its specific circumstances should succumb to better-prepared competitors. Such beings tend to present more variability than usual. Similar observations led Darwin to notice that functions were most perfectly performed when organs were specialized in doing them. He further noticed that the higher a being stands in the scale of nature, the more specialized its organization.
“Multiple parts are variable in number and in structure, perhaps arising from such parts not having been closely specialized for any particular function, so their modifications have not been closely checked by natural selection. It follows probably from this cause, that organic beings low in the scale are more variable than those standing higher in the scale, and which have their whole organization more specialized.”
Experiments point out that much better results are achieved if an organ performs only one function than if it handles two or more. Were Nature to detect a higher degree of relevance in one of these functions, it would cause the organ to focus on that one alone.
“No physiologist doubts that a stomach adapted to digest vegetable matter alone, or flesh alone, draws most nutriment from these substances.”
I suspect it is on these inferences that Darwin bases his conclusion that organic beings standing low in the scale of nature possess more variable structures and organs. This entails that their bodies are not adapted for performing specific functions and remain somewhat diversified in nature. It reminds me of the quote that “the something of somewhere is mostly just the nothing of nowhere.” Maybe this phenomenon explains why their organs need to remain variable, for it is yet unclear to Nature what these beings’ competitive advantage is.
The Division of Labor
In The Wealth of Nations, Smith speaks about how remarkable a difference the division of labor made for society’s productivity. It implied fragmenting a complex operating process into smaller components, and smaller tasks generally embed greater simplicity. In aggregate, complexity might remain even; but at the individual level, it is astonishing how much can be removed.
Most systems where labor is divided categorically outperform those wherein single operators deal with the whole. Smith concluded this is explained by three factors: (i) the increase of dexterity in the workman; (ii) the avoidance of time lost in switching activities; and (iii) the enhanced allowance for automation.
Making a task more granular helps delineate it with far greater specificity; in consequence, it becomes easier to practice, and continuous focus on it helps master it faster. Secondly, people think by association, and every time we become aware of some variable, countless related images flood the mind. A useful metaphor is to think of people as wearing different glasses, each pair required for dealing with a certain task. Switching tasks is like repeatedly changing glasses, leading to a great loss of time at the aggregate level. Lastly, better defining an activity, a byproduct of its reduced dimension, facilitates its automation: the difficulty of creating a machine to perform a task is directly proportional to the difficulty of the task itself.
To this, I might add, Smith speaks of man’s proclivity to spend the least possible effort on disliked tasks. In consequence, when someone finds himself in this situation, he’ll try to cleverly design an automated system to do the work. Many improvements have arisen this way. The following example has stayed with me since I read Smith in March.
“In the first fire engines, a boy was constantly employed to open and shut alternately the communication between the boiler and the cylinder, according as the piston either ascended or descended. One of those boys, who loved to play with his companions, observed that, by tying a string from the handle of the valve which opened this communication to another part of the machine, the valve would open and shut without his assistance, and leave him at liberty to divert himself with his play-fellows. One of the greatest improvements that has been made upon this machine, since it was first invented, was in this manner the discovery of a boy who wanted to save his own labour.”
Jobs to Be Done and the Fragmentation of Jobs
Clayton Christensen pointed out that, in everyday life, people face problems: they encounter jobs they need to get done. To resolve them, they hire products and companies.
Individuals’ characteristics are the wrong unit of analysis. Correlations will always exist among some attributes, but those aren’t what cause a person to hire a product. It’s the problem they face and the related job they need to get done. A classic job might be physically getting from point A to point B as fast as possible. Initially, we’d walk; then we’d hire horses; then cars; now planes. The job remains the same.
What I think occurs in some industries is that, at first, there might be only one company or product. Because the product lacks a precise definition of the job it does, a broad scope of use cases arises. In consequence, people with seemingly different jobs to be done turn to this product, since it offers a good approximate solution to their problem. The addressable market for the product therefore encompasses multiple different markets, each representing a specific job to be done.
Nature found it wise to make organic beings’ organs tend towards specialization, as industry owners did with labor. I suspect both premises predict a similar pattern in industries with the characteristics described above: by focusing on a more specific job, a player will almost invariably provide a better service than the generalist.
My sense is that such an event will only occur if the specific job to be done exceeds, in aggregate, a certain threshold of addressable market. In addition, there needs to exist an appropriate distribution channel; the lack thereof would keep the improved solution from reaching its destination, killing the startup in the process.
The catalyst for this waterfall of thoughts was Google and ChatGPT. Google Search has dominated internet search for two decades, and ChatGPT posed the first proper threat to it. However, ChatGPT peaked at 200-300 million monthly active users (MAUs) within 16 months, whereas Google Search still has in excess of 3bn MAUs.
I think this is why ChatGPT captured only a part of Google’s addressable market but, in its current form, may not be able to take much more. Google Search did (and does) multiple jobs for people; ChatGPT does only one or two, but it does them much better than its competitor. Under the hypothesis that ChatGPT already serves 80-90% of the people who need this particular job done, Google Search remains the best approximate solution to all of the other jobs.
Overshooting the customer in any dimension is what gives room for disruptive innovation. It is unclear to me what specific job ChatGPT does. My sense is that the people who need this job done were previously overshot by the amount of information Google Search provided, which forced them to properly select sources and then come up with a cross-checked, decently reasoned conclusion on their own. ChatGPT removes these inconveniences, dramatically bringing down the effort and time needed to do the job.
Interdependence vs Modularity
Products are constructed from multiple components, and value chains are made of several steps. When an industry begins, a completely new solution for a job is created. Because it is novel, no other company produces any of the components, implying the new business has to do everything itself. Vertical integration, or an interdependent architecture, is required.
As industries evolve, some tend towards a modular architecture. In the most extreme example, this implies defining all the steps of the value chain, having companies focus on single steps, and letting customers assemble customized solutions. Interdependent architectures optimize for performance; modular architectures, for flexibility. Generally, when a product overshoots the customer in performance, the performance surplus can be sacrificed for customization and convenience.
The semiconductor industry is an example I find interesting in this regard. Initially, semiconductor companies had to do every part of the manufacturing process for the chip to function properly. As time went by, the process fragmented to the point that we now have different monopolies/oligopolies in each part of the value chain. I suspect this may have been caused by society’s desire and need to keep up with Moore’s law, which predicts that the number of transistors on a chip doubles roughly every two years. The complexity of each vertical, combined with the inefficiency bureaucracy brings, forced the industry to divide labor. It seems as if that was the only way to meet the demand for performance.
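To make the pace that law demands concrete, here is a back-of-the-envelope sketch (the notation is mine, not anything from Smith or Christensen): if $N_0$ is the transistor count at some starting point and $t$ is measured in years, Moore’s law amounts to

$$N(t) = N_0 \cdot 2^{t/2},$$

so over a single decade the count grows by a factor of $2^{10/2} = 32$. Sustaining that kind of compounding is exactly the performance demand that, I suspect, no single interdependent organization could meet on its own.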
Natural selection predicts that only profitable variations will be kept. I think this unequivocally applies to industries: only if a modular configuration helps address customers’ needs, namely the measure of performance they value, will the industry tend towards modularity.
“Under very simple conditions of life a high organization would be of no service.”
Similarly, an organ that previously performed two functions will start performing only one if doing so increases the being’s chances of survival. To get to that point, the organ had to be refined across multiple generations. Likewise, manufacturing processes start fragmenting once each part has a decently sized addressable market of its own. I’d be inclined to believe that steps of the process separate one by one from the interdependent system: if a step crosses the threshold sooner, it is separated earlier, as the sketch below tries to capture.
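A minimal way to formalize that intuition (again, the symbols $M_i$ and $T_i$ are my own shorthand, not Christensen’s): let $M_i(t)$ be the standalone addressable market of step $i$ at time $t$, and $T_i$ the threshold above which a specialized player in that step is viable. Step $i$ then modularizes at

$$t_i^{*} = \min\{\, t : M_i(t) \ge T_i \,\},$$

and the value chain peels apart in increasing order of $t_i^{*}$: the step whose market crosses its threshold first is the first to leave the interdependent system.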
Final Remark
This write-up is largely based on something I wrote several months ago, with some slight modifications. After going over it again, I still sense this set of ideas is remarkably sound for explaining different types of phenomena.