The ultimate reason for this is Moore's Law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance), but over time more computation becomes available... Time spent on leveraging human knowledge is time not spent on leveraging computation.
The bitter lesson is based on the historical observations that (1) AI researchers have often tried to build knowledge into their agents, (2) this always helps in the short term and is personally satisfying to the researcher, but (3) in the long run it plateaus and even inhibits further progress, and (4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
The second general point...is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of the mind, such as simple ways to think about space, objects, multiple agents, or symmetries.
We should build upon the meta-methods that can find and capture this arbitrary complexity.
We want AI agents that can discover like we can, not agents which contain what we have discovered.