Illiquid Economics
Economics with and without money

The Neoclassical Solver

The neoclassical solver is a hypothetical algorithm that computes the optimal configuration of an illiquid model derived from a typical liquid neoclassical model. Although it could be realized as software and run to make decisions in an economy, its actual realization doesn’t matter. The concept alone is enough to use it as a critique of neoclassical economics.

Why does the neoclassical solver exist?

The neoclassical solver represents an implementation of an optimal decision-making algorithm. That is, a rational agent is allowed to use its information as input to this algorithm to make a decision.

In a liquid model, there are usually only aggregate statistics and there are no “decisions”, let alone individual agents. However, as the liquid model achieves optimality trivially, the optimality of each individual decision must be guaranteed in the illiquid model. This means approximate solutions are not allowed; exact solutions are necessary, hence the name “solver” rather than “approximation scheme”.

Additionally, some form of ruleset or pattern of cognition is to be expected when making decisions. The concept of a “prophetic” oracle that simply knows the correct answer without performing any cognitive operations might be conceivable, but it is meaningless in reality, as people do have thoughts.

To not get lost in obscurity, the explanation of the concept of an oracle-driven rational actor will be postponed. In essence, the neoclassical solver exposes the fact that economies have to be implemented as mental processes and don’t simply fall from the sky, as many neoclassicals would like you to believe.

What properties does the neoclassical solver have?

Interestingly, the illiquid world is far bigger than the liquid world, simply because of the significant increase in the number of variables to consider. For this reason, even given the objective of achieving optimality, there is no guarantee that there is a unique solution or that the solution space is even finite.

This creates a conundrum. The neoclassical solver has to pick one solution out of many, whereas the liquid model simply abstracts over all configurations that lead to optimality. The liquid model will tell you how to influence the economy to reach a higher optimum, but it can’t tell you what that would actually look like in an illiquid model.

The neoclassical solver can, but it must have a subjective bias; that is, it must have some property that makes it decide between equally viable options and prefer one over the others, as there are no rational or irrational criteria to base the decision on. Bias does not necessarily mean anything negative, but if the solution space is big enough, then even when the utilitarian principle of global utility is followed, there can be local suffering, and it could even end up accidentally being considered a good thing, despite being avoidable.
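
A minimal sketch of this degeneracy, using a hypothetical two-good toy problem (the choice of scipy here is illustrative, not part of the argument): both goods yield identical utility per unit, so every split of the budget is equally optimal, and whichever point the solver returns reflects its internal tie-breaking rule rather than any rational criterion.

```python
# Degenerate toy allocation: two goods with identical utility per unit.
# Every feasible split of the budget is equally optimal, so the returned
# point reflects the solver's internal tie-breaking, i.e. its "bias".
from scipy.optimize import linprog

# maximize x1 + x2  <=>  minimize -(x1 + x2), subject to x1 + x2 <= 1
res = linprog(c=[-1, -1], A_ub=[[1, 1]], b_ub=[1])

print(res.x)     # one arbitrary point on the optimal face
print(-res.fun)  # the optimal utility is 1 no matter which point is picked
```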

Other properties of the neoclassical solver are its space and time requirements. Space refers to the working memory necessary to run the algorithm, and time refers to the number of discrete steps, each taking a fixed amount of time.

For an economy where everything is divisible and accountable down to the quantization of reality (if such a thing even exists), the expected time and space requirements are, surprisingly, polynomial. That is, the algorithm is efficient in the complexity-theoretic sense.

However, in the case of non-divisible commodities or a mix of non-divisible and divisible commodities, the expectation is that an exact neoclassical solver is NP-hard and therefore computationally infeasible for large economies.
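
A minimal sketch of the divide, assuming a hypothetical one-constraint toy economy and using scipy’s LP and MILP solvers as illustrative stand-ins: the divisible version is an ordinary linear program, while declaring the same commodities indivisible turns it into an integer program, the class that is NP-hard in general.

```python
# Same toy allocation twice: divisible commodities (LP, polynomial time)
# versus indivisible commodities (integer program, NP-hard in general).
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint

c = np.array([-3.0, -2.0])  # maximize 3*x1 + 2*x2 (scipy minimizes, so negate)
A, b = np.array([[4.0, 3.0]]), np.array([10.0])  # resource budget

divisible = linprog(c, A_ub=A, b_ub=b)
indivisible = milp(c, constraints=LinearConstraint(A, ub=b),
                   integrality=np.ones(2))

print(divisible.x, -divisible.fun)      # fractional optimum: [2.5 0. ] 7.5
print(indivisible.x, -indivisible.fun)  # integer optimum:    [1. 2.] 7.0
```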

Problems with the neoclassical solver

The neoclassical solver is a hypothetical algorithm that tells us what actions to perform to achieve optimal outcomes. However, even if it is possible to use it indirectly by building liquid models that are equivalent to it, actually running the neoclassical solver requires information from the real world. This information is hard to come by.

Dependence on perfect information

The neoclassical solver is merely an algorithm which processes a data input and produces a data output; therefore the quality of the results is utterly dependent on the quality and exhaustiveness of the input data.

Generally, more data is better, and for exact solutions only “all data” will do. It is absolutely essential to have a complete theory of everything, that is, a theory that lets us simulate the world in its entirety.

Our physicists do not have such a theory. However, even if we did have such a theory, the uncertainty principle of quantum mechanics tells us that there is a limit to the precision of measurements of multiple properties [0]. Having an accurate measurement of position implies uncertainty about the momentum. In other words, we cannot have both.

The no-cloning theorem [1] states that it is impossible to create an identical copy of an arbitrary unknown quantum state. Why is this section about quantum mechanics? Because quantum mechanics represents the boundary of the maximum possible information needed to describe the universe. If you can simulate quantum mechanics, you can simulate everything on a higher plane of existence. However, what quantum mechanics tells us is that we cannot even hope to simulate a copy of the universe down to the required precision.

Therefore, due to this lack of information about the real world, the neoclassical solver cannot be exact; it has to be an approximation. This is a devastating result, as it completely crushes the idea of “Capitalist realism” embodied by Margaret Thatcher’s famous quote, “There is no alternative”.

If the neoclassical solver can only give us an approximation, that is more than enough theoretical headroom to argue that alternative systems can exist that are better than capitalism. After all, to say that capitalism is only approximately optimal is to say that capitalism is suboptimal.

However, the lack of exact solutions does not by itself establish that capitalism is inferior to its alternatives, or that we even know about the superior alternatives in the first place. To do that, we would have to explore more realistic strategies than the neoclassical solver; by extension, all relaxations and liquid models that promise exact solutions are obsolete. Neoclassical economics centered around “Dynamic Stochastic General Equilibrium” (DSGE) models is dead.

Decentralization of the solver increases computational complexity

Another problem with the neoclassical solver is that it essentially amounts to advertising for central planning: a central neoclassical solver that is provided all information is obviously more efficient than letting every single individual simulate the entire economy.

Distributing all information about reality to all economic participants appears to be quite a logistical challenge, and it hardly needs stating that running the same program eight billion times requires more resources than running it once.

A common fallacy is to point at the “Economic Calculation Problem” as evidence that central planning doesn’t work. It is evidence that no amount of planning, at any level, whether decentralized or centralized, works. In other words, it exposes the logical impossibility of both communism and capitalism being ideal systems. The “Economic Calculation Problem” is therefore a problem for both communists and capitalists.

Arguments based on decentralizing economic decision making ignore the fact that the problem is already computationally infeasible for the central planner. If it were possible to decentralize an NP-hard algorithm and split it up among \(n\) firms or economic agents who each independently process a small part of the problem, then the centralized system could use the exact same strategy, only more efficiently.

Parallelizing a problem does not decrease its intractability, it merely speeds the process up. If anything, the argument must be that an inexact numerical online strategy is used instead and that therefore the solution must deviate from optimality to a certain extent.
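
A back-of-the-envelope illustration with hypothetical numbers: distributing a brute-force search over \(2^n\) configurations across \(p\) processors yields

\(T_{\text{parallel}}(n) = 2^n / p\),

which is still exponential in \(n\) for any fixed \(p\). With \(n = 300\) coupled binary decisions and \(p = 8 \times 10^9 \approx 2^{33}\) agents, roughly \(2^{267}\) steps remain per agent, beyond anything physically computable.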

Dependence on perfect cooperation

Even if we disregard the computational intractability, there is another problem. If there is no unique solution, how are the independent agents supposed to reach an agreement?

Each economic agent only has influence over their own body, but to know how to move that body, they need to know how other economic agents will behave. To successfully predict another agent’s behavior in response to their own actions, an agent has to consider each of their potential actions independently, as each action influences the behavior of the other agents.

Is it not absurd to think that every single human on this planet is considering the effect of his actions on the behavior of 8 billion other people, which then affect the optimality or suboptimality of this action, which then make him consider a different action, which then affects the 8 billion other people in a way that forces him to respond yet again?
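
A minimal sketch of that endless loop, using a deliberately adversarial toy game (matching pennies) as a stand-in: each agent best-responds to the other’s last choice, every response invalidates the other’s prediction, and the process cycles forever.

```python
# Best-response dynamics in matching pennies: agent A wants to match B,
# agent B wants to differ from A. Each response invalidates the other's
# prediction, so the "respond yet again" loop never settles.
a, b = 0, 1
for step in range(6):
    a = b        # A's best response: copy B's last choice
    b = 1 - a    # B's best response: differ from A's last choice
    print(step, a, b)
# prints a perpetual cycle: (1, 0), (0, 1), (1, 0), ...
```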

In the neoclassical solver, causality does not exist, only correlation! The reason for that is that in the pure neoclassical solver, all actions happen simultaneously. Ordering effects cause unsightly results. Results that give rise to eventual equilibria and delayed equilibria. Results that make capitalism look bad.

Perfect cooperation could be loosely associated with rational expectations; however, the problem here is not the optimization process or the availability of information. It is the fact that there is no unique solution in an illiquid economy, and thus all individual agents would have to somehow resolve the differences in their equilibrium solutions or risk suboptimality. If this is an information problem, then all agents would have to distribute their solutions to all other agents and then perform a resolution process to pick one solution out of the up to eight billion candidates.

It quite literally is perfect cooperation. Nobody would be so insane as to start a conflict of any kind, i.e. a dispute, an argument, or a crime. Neoclassical economists’ insistence on decentralized exact solvers results in central planning. What an amusing paradox.

The easy way out is to deny the existence of perfect cooperation and allow the existence of disagreements, conflicts and power struggles over the optimal course of economic action.

Institutions and economic structures reduce computational complexity

Given a pricing system with infinite precision, prices will capture minute details about the movement of individual atoms and smaller particles. Every problem experienced in reality can be broken down into an economic problem to be solved by finding the optimal price vector. This means that the pricing system can, in theory, solve any problem.

Since the computational costs of the neoclassical solver are not taken into account, it will look as if specialized solutions that save computational costs are a waste of resources and an obstacle to optimality. In fact, this is the go-to explanation for why governments should cease all interference with the market.

When neoclassical economists criticize specific policies and institutions, they tend to move the costs saved by institutions around in such a way that the neoclassical solver has to pay the cognitive costs. The neoclassical solver has to work harder and harder, but since cognitive costs aren’t part of the neoclassical model, the appearance of a free lunch via extreme liberalization of markets emerges.

Government administration and money as an institution are seen as unnecessary costs, since the costs they save are primarily of the cognitive kind. Centralized markets reduce the cognitive cost of spot exchanges, but since cognitive cost is considered free, there appears to be no benefit to the institution of a centralized market. Even private institutions are considered irrelevant, and theories of the firm have to be established to explain their existence.

If one inverts this idea, one has to arrive at the conclusion that a reduction in computational complexity is a good indicator of the usefulness of institutions. A lot of problems can be solved reasonably through delegation to the right party, which then possesses enough concentrated information to perform the task at low cognitive cost and low communication overhead.

The Universal solver

There might be one escape hatch, one that is quite silly, but it can be entertained. What if the universe was such a solver? We would not need a simulation, an understanding of how the universe functions, or even perfect information. We therefore would not need a central solver to exist inside of the universe. In fact, the universe solves itself.

However, this opens a can of worms. If there is in fact a global competitive equilibrium, then the universe would be attempting to raise entropy as fast as possible to achieve thermodynamic equilibrium, and once that is accomplished, no macroscopic change occurs. This is bad, this is really bad. It means the universe has a shelf life and will one day end in heat death. In fact, to be in equilibrium is to be dead.

It is also terrible in far more terrifying ways, at least for neoclassicals. To live is to be in disequilibrium, to be constantly changing. The economy of living beings is always in disequilibrium! Capitalism would be a disequilibrium economy, and so would communism. The fact that communism is less efficient at turning low-entropy energy into high-entropy energy just means that capitalism brings us to heat death faster!

Thus, thermodynamic equilibrium can hardly be what capitalists want, and therefore the universal solver is not the escape hatch they need. After all, thermodynamic equilibrium doesn’t have the properties that neoclassical economics gloats about.

Sales and purchases within the same agent, for instance, do not need to be in equilibrium under the global condition of reaching thermodynamic equilibrium. It is entirely possible for the sun to one-sidedly send its low-entropy energy to earth to be converted into high-entropy energy.

We therefore utterly depend on the “exploitation” of the sun’s resources. We don’t pay the sun; we don’t even pay the plants we harvest on earth. There is no equilibrium between man and nature. Man simply takes, takes in a way that would be considered theft and murder from another species’ perspective.

The slaughtering of animals, for example, can hardly be justified as some form of labor being mixed in such a way as to deserve both the meat and the life of the animal. The animal’s existence is the precondition for the labor to be mixed in. The animal contributed the vast majority of the labor, the human only the energy. If theft and murder are accepted under equilibrium, then equilibrium is meaningless.

Variants of the neoclassical solver

We can make simplifying assumptions in the name of tractability, right? Except none of these “assumptions” are going to make neoclassicals happy. In fact, to call them assumptions is a bit misleading.

No, they are not the absolute truth, nor are they unchallengeable.

The problem is that they are what could be considered “common sense”.

That, in fact, is the problem: they are mostly real, very reasonable approximations of the real world. The computerized version may be abused ad absurdum, but that isn’t the goal here.

Relaxation: Allow ordering of solver runs

Consider that agents can only set their own variables. We could therefore get the idea that we can simply solve for one agent at a time, right?

However, solving only one agent at a time still requires one agent to act before others, which grants the agent power to make an early decision that other agents cannot respond to and must accept.

Since everyone else has to obey his decisions, the ordering of agents is by definition also a power hierarchy, and the further down this hierarchy one sits, the more suboptimal the outcome, as one’s decisions have already been limited by the decisions of previous agents.
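
A toy sketch with hypothetical utilities: one indivisible good, two agents who each want it, and a fixed ordering. The first mover takes the good for a utility of 3, foreclosing the second agent’s better use of it worth 5.

```python
# Fixed ordering as a power hierarchy: agent 0 decides first and grabs
# the single indivisible good, even though agent 1 values it more.
utilities = [3, 5]        # hypothetical utility each agent gets from the good
available = 1             # one indivisible unit

sequential_total = 0
for agent, u in enumerate(utilities):  # the ordering itself is the hierarchy
    if available > 0:                  # each agent greedily takes what it can
        available -= 1
        sequential_total += u

joint_optimum = max(utilities)         # a joint solve assigns it to agent 1
print(sequential_total, joint_optimum) # 3 versus 5: the latecomer pays
```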

Relaxation: Do not simulate all time periods in one go

One of the most annoying limitations of the neoclassical solver is that it is inherently an offline algorithm, that is, an algorithm which solves the entire economy for all time periods at once. This is necessary under time preference theory, since making a decision in the present requires discounting its impact on future utility. We cannot make a decision in the present without knowing its ultimate outcome.
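
Concretely, in the standard time-preference formulation, a present decision is valued through a discounted stream of all future utilities,

\(U_0 = \sum_{t=0}^{\infty} \beta^t u(c_t), \quad 0 < \beta < 1\),

so the objective itself spans every time period, and truncating the horizon changes which present action is optimal.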

However, what if we restrict a solver run to only perform actions in the present, instead of simulating time out to infinity? We aren’t necessarily limited to solving a single day at a time; we can still solve a batch of time periods to optimality within our computational budget. The downside is that there will be time inconsistency and accidental suboptimality.
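
A minimal sketch of such a batched, online variant, where solve_batch is a hypothetical placeholder for any exact solver over a finite window: solve the next few periods, commit only to the first, advance, and repeat. The seams between windows are where accidental suboptimality and time inconsistency enter.

```python
# Rolling-horizon sketch: solve a window of `horizon` periods exactly,
# commit only the first period's action, then advance and re-solve.
def solve_batch(state, horizon):
    """Hypothetical placeholder for an exact solve over `horizon` periods."""
    return [state + t for t in range(horizon)]  # stand-in for an optimal plan

def rolling_horizon(state, total_periods, horizon):
    committed = []
    for _ in range(total_periods):
        plan = solve_batch(state, horizon)  # optimal only within the window
        committed.append(plan[0])           # commit the first action only
        state = plan[0] + 1                 # hypothetical state transition
    return committed

print(rolling_horizon(0, total_periods=3, horizon=5))  # [0, 1, 2]
```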

Relaxation: Allow suboptimality through problem relaxation

Instead of solving the direct problem, which is difficult, we can decide to calculate the optimal solution to an easier problem. This is not accidental suboptimality; it is intentional suboptimality. Ideas include conventional linear programming or a Lagrangian relaxation of the problem, followed by fixing the output variables to the rounded-down LP results and a final pass that solves the direct problem on what the relaxation has left over, as sketched below.
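
A minimal sketch of that relax-round-repair pipeline on a knapsack-style toy problem (the numbers are hypothetical, and scipy is merely an illustrative choice of LP solver):

```python
# Relax-round-repair: 1) LP relaxation, 2) round down to a feasible
# integer point, 3) final greedy pass on whatever budget is left over.
import numpy as np
from scipy.optimize import linprog

values = np.array([3.0, 2.0, 4.0])   # hypothetical utilities per commodity
costs = np.array([4.0, 3.0, 5.0])    # hypothetical resource costs
budget = 10.0

lp = linprog(-values, A_ub=[costs], b_ub=[budget],
             bounds=[(0, 1)] * len(values))  # relaxed 0/1 decision variables
x = np.floor(lp.x + 1e-9)                    # round down: stays feasible
leftover = budget - costs @ x
for i in np.argsort(-values / costs):        # final pass, best ratio first
    if x[i] == 0 and costs[i] <= leftover:
        x[i], leftover = 1.0, leftover - costs[i]
print(x, values @ x)                         # [1. 0. 1.] 7.0
```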

The neoclassical solver is a scientific dead end

Why do I even entertain the neoclassical solver? The reason is that it is an obvious dead end. It prevents you from studying large-scale illiquid models due to its extremely high standards on the quality of the solution. Giving up integer variables is unthinkable, but the straitjacket of optimality prevents the application of tractable approximation schemes.

When your benchmark is an exact solution, then the fact that a solution is inexact immediately disqualifies it, even though it may have more to say about the real world than the exact solution. Therefore, the neoclassical solver is an example of what has been tried and is simply not viable for anything but toy models.

Who in their right mind would think it sane to restrict oneself to the simulation of less than a dozen agents, when there are potential pathways to the simulation of hundreds or even thousands of agents, commodities, time periods and locations?

Why demand the application of supercomputers for basic everyday activity? Using an exact solver to benchmark the approximation schemes is a good idea. Insisting on an exact solver, merely for the sake of pretentious superiority, is not.

I had hopes and dreams based on taking neoclassicals by their words

I have read a lot of papers written by neoclassicals with their fancy-pants models, and I somewhat respect that they have built anything at all, considering the difficulties mentioned above, but for me it simply wasn’t enough to dumb down the problem to something that is trivial to solve, such as mandating a single obvious global optimum.

I despise the concept of representative agents and the like, but if I want to build a complex economic simulation, then how am I supposed to prove that my simulation is a better model of reality than theirs? They will obviously call me out for having worse solutions, so I tried to give them the benefit of the doubt and simply assume that there should be a way to brute-force a solution equivalent to what they are doing.

Analytical solutions are possible if you have a liquid economic model, but an illiquid model? The equivalent requires expensive solvers, and to my dismay, the problem is so difficult that I have yet to build even the simplest toy model. By that I don’t mean something comparable to what economists are doing in their research; no, I am referring to something far, far simpler. The truth is that you cannot simplify a neoclassical economy down to an integer linear program or a quadratic program. The neoclassical solver is a computational nightmare.

Mathematical optimization with equilibrium constraints

At the core of neoclassical economics is the concept of economic equilibrium. How is equilibrium expressed mathematically? In principle it is quite simple; in fact, given the mathematical expressions you are about to see, you might think that I am overstating the problem.

Consider a simple economy with the equilibrium constraint:

\(\forall A: \sum_{G} \sum_{B \neq A} p_G \cdot T_{GAB} = \sum_{G} \sum_{B \neq A} p_G \cdot T_{GBA}\).

For each agent \(A\), we sum over all commodities \(G\), which agent \(A\) sells at price \(p_G\) to agent \(B\) in quantity \(T_{GAB}\), where, of the two agent indices, the first denotes the seller and the second the buyer. The left-hand side of the equation therefore sums up the sales of agent \(A\) to the rest of the economy, and the right-hand side represents the purchases of agent \(A\) from the rest of the economy.

Simple, right? Not so fast. If this were nothing but a single equation to solve, then, given either \(p_G\) or \(T_{GAB}\) as constants, we could have solved it as a linear complementarity problem. But the reality is that we have yet to define the rest of the economy or any other constraints. When inserted into a conventional linear program, we get a new type of problem called LPCC: linear programming with complementarity constraints [2].

The general problem class MPEC stands for mathematical programming with equilibrium constraints. Now you may rightfully ask what the big deal is; it is a simple extension of a problem with a well-known solution. You see, the problem is that we have left the realm of convex optimization.
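
For concreteness, here is a sketch of the generic LPCC form as it appears in the literature [2]; the symbols are generic placeholders, not the economy’s variables from above:

\(\min_{x, y} \; c^\top x + d^\top y \quad \text{subject to} \quad Ax + By \geq f, \quad 0 \leq y \perp q + Nx + My \geq 0\),

where \(\perp\) requires, for every component \(i\), that \(y_i = 0\) or \((q + Nx + My)_i = 0\). It is this either-or disjunction in each component that breaks convexity.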

Non-convex optimization comes with no general guarantee of finding, let alone certifying, a globally optimal answer. Even the simplest possible neoclassical economy is extremely difficult to solve without the simplifying assumptions that economists make.

It’s not necessarily that they are stupid; it is simply difficult to do better than what they already do. But then the question arises: what possible relationship can it have to real life?

None.

[0] https://en.wikipedia.org/wiki/Uncertainty_principle

[1] https://en.wikipedia.org/wiki/No-cloning_theorem

[2] “Solving Linear Programs with Complementarity Constraints using Branch-and-Cut”, https://arxiv.org/pdf/1802.02941.pdf

Relevant Kurzgesagt video: https://youtu.be/UebSfjmQNvs