Illiquid Economics: Economics with and without money

Decoupling Growth From Physical Resources

“Infinite growth is possible due to infinite avenues for expansion (not all of them resource-sapping), which is possible due to infinite ideas.” - random internet comment.

The idea that growth can be decoupled from resources is appealing, but equally ridiculous. The escape into an information-centric world gives the appearance that we can simply abandon our physical bodies and forget that we are alive in the first place.

Reducing inefficiency is insufficient to reduce resource usage to zero

There is a common misconception that efficiency increases create value and utility out of nothing, and that since the input was nothing, the process can be repeated endlessly. What this forgets is that the economic process in question takes both energy and materials as inputs and produces a useful output alongside a useless one. An increase in efficiency therefore merely represents a decrease in useless waste, and the laws of physics tell us there can be no waste-free process. Admittedly, there are scenarios where so much potential exists that decreasing inefficiency looks like infinite potential. The classic example is how semiconductors become smaller and smaller while utilizing the same quantity of silicon; the number of transistors has been growing exponentially for half a century. Surely Moore's Law will last forever? Unfortunately, reality betrays us. How are semiconductor manufacturers going to perform the miracle of needing no semiconductor (or any other material) at all? They won't.
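To put rough numbers on the silicon example, here is a toy calculation (the starting feature size and the atomic spacing are assumed round figures, not real fab data): even granting a steady halving of feature size per generation, the atomic floor is reached after only a few dozen steps.

```python
# Toy illustration: feature sizes cannot halve forever, because silicon
# atoms sit roughly 0.2 nm apart. Starting point and spacing are assumptions.
feature_nm = 10_000.0   # assumed start: a ~10 um process (early 1970s scale)
atom_nm = 0.2           # rough silicon lattice spacing
generations = 0
while feature_nm / 2 >= atom_nm:
    feature_nm /= 2
    generations += 1

print(generations, round(feature_nm, 3))  # halvings until the atomic floor
```

Fifteen halvings later the process is at the scale of individual atoms, and no amount of added efficiency removes the need for atoms.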

Growth independent of physical resources would require utility to be derived purely from information itself

Resource independence implies that the utility evaluator can take pure information as an input. That data has to be transmitted through our senses or via a machine-to-brain interface. This is a physical process, so we are still bounded by the availability of physical resources. Growth remains coupled to physical resources.

Another problem is that information can be copied, so a utility-maximizing system would need to handle goods with zero marginal cost and high upfront costs, which capitalism inherently struggles with. "Intellectual property" was a way to introduce artificial scarcity to something that was not scarce, just so that the value of information remains high relative to physically scarce resources. If basic needs are met, then a development process based on free software or open source software might end up meeting the needs of more people than a commercial development process. It is thus not obvious that "infinite growth" would occur in a way that ends up part of GDP, as no trade occurs. Our problem would then be that in the long run GDP metrics are irrelevant and don't actually matter. If growth happens but we can't measure it, it appears to be a meaningless problem to set ourselves. It makes it feel like economists are putting one specific type of interaction on a pedestal, while any other interaction does not count as growth, no matter how much people would prefer that interaction instead. The type of growth being demanded primarily exists to satisfy the insatiable demand for money and investment returns, rather than the insatiable demand for something concrete. If people were happy, economists would demand this growth in spite of the happiness.

Exploitation of weaknesses in the utility evaluation of humans

The other problem is that any non-exact utility evaluator is imperfect and has weaknesses. An information-based system whose goal is the maximization of utility could build an adversarial model of the utility evaluator and exploit those weaknesses to maximize its perceived utility.

One could build a computer program that shows people an endless stream of exactly what they want to see, so that they spend all day in front of the computer or television, but this would not be enough to produce exponential growth. Even if such growth could be considered endless, it would still be subexponential and therefore too slow.

Alternatively, the information-based system could, for instance, create a drug that completely remodels the user's brain so that they permanently feel an exponentially growing utility regardless of their current situation ("the perfect drink") [0]. The unfortunate consequence is a singularity I call "the end of all trade": utility can now be produced self-sufficiently and economic interactions cease to have any meaning, akin to a naked singularity in a black hole causing all known physical laws to break down. This does not make investors happy. In fact, the drug's creator might think he just got screwed out of the value he produced and have the crass idea that he deserves infinite payback. The logic is sound to a certain extent; the production of the drug must be paid for one way or another. Building a drug that makes humans infinitely happy and turns them into slaves willing to produce more of the drug would be in the utilitarian spirit. A good idea deserves to self-replicate, after all. I hope the sarcasm is obvious.

Alternatively, images could take the form of the basilisks from "BLIT": images containing patterns that exploit flaws in the structure of the human mind to produce a lethal reaction, "crashing the mind" [1]. Of course this is pure fiction, but let's apply it to endless exponential growth. Instead of killing the viewer, the picture reprograms the viewer's brain to increase his current utility by a multiplicative factor such as 1.05, a 5% increase. Economic exchange would then occur in the form of the viewer paying a dollar amount corresponding to the utility of the current picture. To make more economic growth possible, the viewer is paired up with another participant, and the two mutually create and buy each other's pictures. The picture being drawn does not matter, because the reprogramming makes whatever is drawn be perceived as having more utility than the previous picture. The infrastructure provider charges a 5% fee and provides infinite money to the pair via loans. The pair ends up in infinite debt to the infrastructure provider, as otherwise infinite growth would only be possible for the participants watching each other's pictures, not for the infrastructure provider, and that wouldn't be part of GDP and does not reward investors for parting with their money.
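As a toy simulation of this scheme (the one-dollar starting price and the round count are assumptions), note that the payments between the pair net out against each other, while the loaned-in debt and the provider's fee both compound at 5% per round:

```python
# Hypothetical "basilisk picture" economy: two participants buy each other's
# pictures, each perceived 5% better than the last, funded entirely by loans
# from an infrastructure provider that takes a 5% fee on every purchase.
price = 1.0          # assumed price of the first picture, in dollars
debt = 0.0           # what the pair owes the provider
fee_income = 0.0     # the provider's cut
for step in range(100):
    for _ in (0, 1):                 # both participants buy one picture
        debt += price                # each purchase is loaned into existence
        fee_income += 0.05 * price
    price *= 1.05                    # reprogrammed brains see 5% more utility

print(f"debt={debt:.2f} fee_income={fee_income:.2f}")
```

After a mere hundred rounds the pair owes thousands of times the first picture's price, which is precisely the point: the "growth" exists only as debt on the provider's books.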

We can discuss more and more of these fancy schemes, but the idea remains the same: we are somehow manipulating the human brain to perceive something, even if it doesn't exist in reality. The conclusion is that economic activity in the real world is illogical, as it is very resource intensive, and that we should all live in a simulation, either via advanced computers or through forced happiness via drugs, as these strategies minimize resource consumption.

Longtermism and the fallacy of 10^58 happy simulated human lives

There is a secular movement that is antithetical to its own name. It is called Longtermism, an extremist offshoot of Effective Altruism. Effective Altruism in its most benign form is the idea that if one wants to be philanthropic, one should not volunteer and do good for low pay, but instead specialize and maximize one's income, so that one can donate and perform philanthropy as much as possible. The basic problem it sets out is to evaluate whether it is more effective to "do it yourself" or to pay a charity. As the input to Effective Altruism is money, or more specifically dollars, the question becomes: how much good can one do per dollar, how does one maximize this ratio, and will it be more effective than "do it yourself" altruism?

So how does that work in practice? The most obvious way is to start a supervising organization that tracks the spending and outcomes of any given charity. GiveWell is such a meta-charity, a charity that donates to charities based on effectiveness. The classic high-impact charity is one delivering malaria nets free of charge to regions with a high prevalence of malaria. The likelihood of dying of malaria is low and the cost of a net is low; thousands of nets add up to a single life saved [2]. Thus there exists a number, the minimum cost to save a life, and using this number as a benchmark, you can argue about the effectiveness of "Effective Altruism". How many lives is your particular volunteer work going to save? One? The Effective Altruist counters that he donated $4,504.50 to a malaria net organization and saved 1.001 lives. He clearly "won", and he had to spend less time volunteering (zero) to boot, so he could easily save a second or third life if he worked more hours. Thus Effective Altruists are motivated by a simple idea: to optimize philanthropy.
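The arithmetic behind that benchmark is trivially simple, which is much of its rhetorical appeal. A sketch, with an assumed cost per life of $4,500 (the real GiveWell estimates vary by charity and year [2]):

```python
# Cost-per-life benchmark arithmetic: lives saved = dollars / cost per life.
cost_per_life = 4500.0        # assumed dollars per life saved (via nets)
donation = 4504.50
lives_saved = donation / cost_per_life

print(round(lives_saved, 3))  # 1.001
```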

But what if philanthropy isn't enough? What if we want to extend this to utilitarianism? Why stop at optimizing charitable giving? Why not optimize total human happiness? Longtermists do not concern themselves with the immediate short term, that is, anything less than ten thousand years from now. Given the previous blog post, sustained exponential growth is enough to colonize the observable universe in the short-term future with a paltry 0.57% return. The longtermists are happy with less; they would accept any growth rate, as long as it colonizes the entire universe as densely as possible, because they are concerned primarily with maximizing global human happiness.

The danger in longtermism lies in the fact that it is the Paperclip Maximizer problem applied to Effective Altruism. Rationalists concern themselves with the danger of runaway AI and the AI alignment problem, while simultaneously roleplaying the AI alignment problem with humans. Like any movement in the mortal realm, it needs to stay alive and justify the continuation of its existence. How do longtermists do it? Easy: longtermists consider the impact of their actions on the extreme long-term future of humanity. Even if humanity can't do it in two thousand years, the colonization of the universe is inevitable, therefore there will be an insanely large number of future people. Any decision made today therefore impacts an unimaginably large number of people. The longtermist behaves like an Effective Altruist, except with the goal of maximizing the number of happy human lives. Therefore, if the longtermist does anything to accelerate the development of humanity and prevent extinction risks, he will be responsible for saving trillions upon trillions of lives. His impact and legacy on the long-term future will be so vast as to overshadow any other moral framework. How many people did you save? Eight billion? Not enough! You are morally deficient!
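To see how even that "paltry" rate behaves over longtermist horizons, a quick compounding check (the 0.57% figure is from the previous post; the ten-thousand-year horizon is the longtermists' own cutoff for "short term"):

```python
# Compound a 0.57% annual growth rate over ten thousand years.
rate = 0.0057
years = 10_000
factor = (1 + rate) ** years   # total growth multiple over the horizon

print(f"{factor:.3e}")         # on the order of 10^24
```

A factor around 10^24 is roughly the number of stars in the observable universe, which is why anything above zero growth, sustained long enough, implies universal colonization.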
Going against Longtermism is the equivalent of killing trillions of people. Even if you do not want to personally engage in Longtermism, it is your moral duty to prevent Longtermism from disappearing and fading away! If Longtermist organizations rally to prevent human extinction, then the prevention of human extinction is contingent on the survival of the Longtermists. Therefore not only are humans not allowed to go extinct, it is inconceivable for Longtermism to go extinct! It is estimated that 10^58 people, a one followed by 58 zeros, will be alive in the long-term future. How are those numbers achieved? First of all, the numbers are completely arbitrary and made up; second, these people aren't real people. The long-term future is that crazy: nobody has a physical body anymore and everyone lives in a computer simulation, because if there is something worse than mass happiness, it is mass suffering of the same magnitude. To prevent this, the computer simulation simply does not create unhappy people to begin with. How convenient! The problem with Longtermism is that it creates an artificial problem that only it can solve, and the problem is of such importance that everything else is meaningless in retrospect.

Longtermism and infinite growth exist to justify suffering in the present day

What is particularly perplexing, then, is that the longtermists betray their own premise. It is a given that humans are present-oriented: given paths through space-time that connect the current point to another point, they prefer the shortest path. This is only reasonable; the future is uncertain, and it is not guaranteed that you end up at your chosen destination. This appears to create humans that are excessively focused on the short term and forget the long-term future. So what kind of answers does Longtermism provide? What if you care about the immediate future, the next ten thousand years? Unfortunately, the longtermists disappoint. In fact, they argue that the short term, even extended to the next ten thousand years, is irrelevant, and that any suffering in the present will be outweighed by positive utility in the long-term future. The risk of a nuclear war that does not result in extinction is irrelevant, since humanity can easily rebuild within a few thousand years. In fact, it would be better to instantly end mutually assured destruction and have all nuclear powers surrender to a single global nuclear power, as this significantly reduces the extinction risk from nuclear war. The same can be argued about infinite growth even in the immediate future, as even small growth rates spiral out to have a significant impact. The future will be great, therefore we can afford to undermine that future through extreme short-term thinking. Being excessively future-oriented eventually "wraps around" and looks like being excessively short-term oriented. In an ironic twist, the Longtermists become Shorttermists.

For anyone interested in seeing how silly Longtermism is, I recommend watching this video by Sabine Hossenfelder: https://youtu.be/B_M64BSzcRY

What if decoupling growth from physical resources was possible anyway?

If growth is perfectly decoupled from both cognitive resources and physical resources, then we run into another problem: why would this growth be bounded by something as arbitrary as an exponential function? In fact, the expectation is that all of the infinite economic growth would happen instantly. Decoupling growth from physical resources would not result in endless exponential growth. It would result in superexponential growth. What could superexponential growth look like?

What exactly is supposed to happen after the singularity?

At T+1 all growth has happened instantaneously and no more growth will occur beyond T+1 [3].
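The graph in [3] is the author's own construction; as a generic illustration of superexponential blowup, hyperbolic growth of the form y = 1/(T - t) reaches infinity at a finite time T, unlike an exponential, which never "finishes" growing:

```python
# Hyperbolic growth diverges at a finite time T: as t approaches T, the
# value explodes without bound. An exponential never does this.
T = 1.0  # the singularity's arrival time (arbitrary units)
for t in (0.0, 0.5, 0.9, 0.99, 0.999):
    y = 1.0 / (T - t)
    print(f"t={t}: y={y:.0f}")
```

Past T the function is simply undefined, which is a fair mathematical summary of "no more growth will occur beyond T+1".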

Such a singularity would imply the existence of a timeless and endless universe, in which case you are most likely a Boltzmann brain. What is a Boltzmann brain, you ask? It is a thought experiment based on the fact that thermal fluctuations are random events, and that it is possible for a random fluctuation to produce any arrangement of atoms. Given enough time in an infinitely large universe, anything that is not impossible will inevitably happen, and by simple statistics, a big bang spontaneously forming a universe that contains an earth-like planet allowing humans to exist is so unlikely that a human brain spontaneously emerging from a random collection of matter is far more probable. Yes, being born of another human is such an unlikely event that humans born through the random process would outnumber humans born through evolution and the reproduction of their parents. Why is that? Because the big bang is a highly ordered state spanning the entire universe. Imagine rolling a die for every single atom in the universe and getting a 6 every single time. Now imagine the likelihood of getting a 6 only for every atom in your brain. Sounds far more likely, doesn't it? Of course, the Boltzmann brain is a reductio ad absurdum: its purpose is to show how extrapolating a silly thought leads to bizarre conclusions, exposing an inconsistency in a theory. It is far more realistic to think that human brains only develop on earth and nowhere else.

What in reality is going to happen…

I will ruin everyone's fun and say it: there are limits to growth. No, you cannot grow endlessly simply by increasing efficiency. However, I am not going to say that these limits are unchanging. It is entirely possible for a limit to persist, perhaps for thousands of years, until suddenly a breakthrough is made and the limit rises, then stalls, and we wait again for the next rise. In other words, we at the very least have to entertain interrupted growth. For example, what if it takes a thousand years to colonize Mars? If we aren't patient enough, we could go extinct before we have terraformed our second planet!

Seeth? This is what they hath forswore thou! The crook'd growth path!

The desmos link itself can be found here [4]. It is a random layering of logistic functions with a linear component and cycles via a sine component, but with an overall tendency to keep growing linearly. There is an interesting blog post by Mark Buchanan about the limits to growth that can be considered a starting point for the mindset of the next blog posts [5].
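The exact formula lives in the desmos graph [4]; the following is only a rough sketch of the described shape, with all constants chosen arbitrarily: logistic "breakthrough" steps layered on a slow linear trend, with a sine cycle on top.

```python
import math

# A crooked growth path: limits hold for a while, then a breakthrough
# (a logistic step) lifts them, with boom-bust cycles riding on top.
def logistic(t, midpoint, height, steepness=1.0):
    """One S-shaped breakthrough centered at `midpoint`."""
    return height / (1.0 + math.exp(-steepness * (t - midpoint)))

def interrupted_growth(t):
    steps = sum(logistic(t, mid, 10.0) for mid in (20, 50, 80))  # breakthroughs
    return 0.1 * t + steps + 2.0 * math.sin(t / 5.0)             # trend + cycle

for t in range(0, 101, 25):
    print(t, round(interrupted_growth(t), 2))
```

The result plateaus between breakthroughs but trends upward overall, which is the point: growth is neither endless exponential expansion nor a hard ceiling.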

Conclusion

Decoupling growth from resources primarily serves the interests of investors and of people whose economic models fail to reflect reality. Because the picture in their minds of how the world behaves diverges so much from how the world actually works, they have decided that it is reality that is wrong and that it should be changed to fit their ideas, ideas that have no hope of ever becoming real or influencing the world in a positive way. If it is attempted anyway, then the simplest strategy is to manipulate the human brain directly, as that requires the least amount of resources. Alternatively, if perfect decoupling is possible and no resources are needed, then all growth would happen instantaneously; in fact, it would happen so quickly that the vast majority of human brains would not belong to a real human body born on earth, but rather to a so-called Boltzmann brain, possibly born with the maximization of utility already integrated into it.

[0] https://scp-wiki.wikidot.com/scp-294

[1] https://en.wikipedia.org/wiki/BLIT_(short_story), full text at http://www.infinityplus.co.uk/stories/blit.htm

[2] https://www.givewell.org/how-much-does-it-cost-to-save-a-life

[3] https://www.desmos.com/calculator/vo1mprowka

[4] https://www.desmos.com/calculator/vtgssqk6g5

[5] https://medium.com/bull-market/steaming-slowly-toward-the-limits-of-growth-cc455dccd829