

Jonathan Benchimol

@Benchimolium
Economist, Research Department, Bank of Israel. Co-organizer @VIMacro_org. AE @ IREF & IF. Previously @Harvard @ESSEC @SorbonneParis1. All views are my own.

SHAME: The Islamic Republic of Iran has just been nominated to the U.N. Committee for Program and Coordination, which meets soon to shape policy on women's rights, human rights, disarmament, and terrorism prevention. ECOSOC members who backed this include: 🇬🇧🇪🇸🇨🇦🇫🇷🇩🇪🇳🇴🇳🇱🇦🇺🇨🇭🇦🇹🇫🇮

A point that is sometimes overlooked is that PDEs in physics and economics have a subtle but important difference.

When a physicist solves the Schrödinger equation (see my slide below), the potential is given. The coefficients of the equation are part of the problem statement. You pick your grid, refine your mesh, and the equation never changes on you. Better numerics give a better approximation to a fixed target.

In economics, this is not the case. Look at the Hamilton-Jacobi-Bellman equation for the neoclassical growth model (also slide below). The drift of capital depends on a derivative of the value function, the very object you are trying to solve for. The “coefficients” of the PDE are endogenous to the optimal choices of the agents. This is what @UncertainLars and Sargent referred to as the cross-equation restrictions implied by optimizing behavior.

This is what @MahdiKahou and I call the “equilibrium loop”: improving your approximation changes the policy, which changes the dynamics, which changes where in the state space the economy spends its time, which changes where your approximation needs to be accurate. You are not chasing a fixed target with a better net. Moving the net moves the target.

This has serious consequences for computation. You cannot just borrow neural network architectures from deep learning in the natural sciences. The loss function comes from equilibrium conditions, not from labeled data. The evaluation points are not given; they are regenerated each epoch from the current approximation. Ignoring this feedback is why you often get solutions that look good on a training set but fall apart in simulation.
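The loop itself fits in a few lines. Here is a minimal sketch (my illustration, not from the thread) using a discrete-time analogue for brevity: fitted value iteration for the Brock-Mirman special case of the growth model (log utility, Cobb-Douglas output, full depreciation), chosen because the exact policy k' = αβk^α is known and the fixed point can be checked. All parameter values and the linear-in-ln(k) value approximation are assumptions for the sketch. The key step is that the training points are regenerated each iteration by simulating capital under the current approximate policy: moving the net moves the target.

```python
import numpy as np

# Brock-Mirman growth model: u(c) = ln c, output f(k) = k^alpha,
# full depreciation. Exact policy: k' = alpha*beta*k^alpha.
alpha, beta = 0.3, 0.95
rng = np.random.default_rng(0)

def f(k):
    return k ** alpha

def policy(k, theta):
    # Greedy savings given V(k) ~ theta[0] + theta[1]*ln k:
    # max_{k'} ln(k^alpha - k') + beta*V(k') has the interior FOC
    # k' = s*k^alpha with s = beta*b/(1 + beta*b), b = theta[1].
    b = max(theta[1], 1e-8)          # clip so savings stay positive
    s = beta * b / (1.0 + beta * b)
    return s * f(k)

theta = np.zeros(2)
ks = rng.uniform(0.05, 0.5, 150)     # arbitrary initial training points

for _ in range(300):
    # Bellman update evaluated at the CURRENT training points
    kp = policy(ks, theta)
    targets = np.log(f(ks) - kp) + beta * (theta[0] + theta[1] * np.log(kp))
    X = np.column_stack([np.ones_like(ks), np.log(ks)])
    theta, *_ = np.linalg.lstsq(X, targets, rcond=None)

    # The equilibrium loop: regenerate training points by simulating
    # capital under the policy implied by the NEW approximation.
    k, path = 0.2, []
    for _ in range(200):
        k = policy(k, theta)
        path.append(k)
    sim = np.array(path[50:])        # drop burn-in
    # small jitter keeps the regression well conditioned once the
    # deterministic path settles near the steady state
    ks = sim * np.exp(0.05 * rng.standard_normal(sim.size))

s = beta * theta[1] / (1.0 + beta * theta[1])
print(round(s, 4), round(alpha * beta, 4))   # savings rate vs exact alpha*beta
```

Because the Bellman targets here happen to be exactly affine in ln k, the regression is exact and the savings rate converges to the analytical αβ; with a neural network and a genuinely nonlinear model, the same resampling step is where the approximation and the ergodic distribution have to converge together.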

