This book examines methods for controlling or guiding a sector of the economy that do not require all the apparatus of economic planning or rely on the vain hope of sufficiently "perfect" competition, but instead rely entirely on the self-interest of economic agents and voluntary contract. The methods involved require trial-and-error steps in real time, with the target adjusted as the results of each step become known. The author shows that the methods are equally applicable to industries that are wholly privately owned, wholly nationalized, mixed or labor-managed.

The suggestion seems to be that one can emulate the outcomes that would be produced by competitive markets -- if not something "better" -- by writing rules that, if followed, would mimic the behavior of competitive markets. The problem with that suggestion -- as I understand it -- is that someone outside the system must make the rules to be followed by those inside the system.
And that's precisely where socialist planning and regulation always fail. At some point not very far down the road, the rules will not yield the outcomes that spontaneous behavior would yield. Why? Because better rules cannot emerge spontaneously from rule-driven behavior. (It's notable that the book's index lists neither Hayek nor spontaneous order.)
Where, for instance, is there room in the socialist or regulatory calculus for a rule that allows for unregulated monopoly? Yet such an "undesirable" phenomenon can yield desirable results: its "exorbitant" profits invite competition (sometimes from substitutes) and spur innovation. (By "unregulated" I don't mean that a monopoly should be immune from laws against force and fraud, which must apply to all economic actors.)
I suppose exogenous rules are all right if you want economic outcomes that accord with those rules. But such rules aren't all right if you want economic outcomes that actually reflect the wants of consumers.
It reminds me of the Turing test:
The Turing test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of some machine), the conversation is usually limited to a text-only channel.

And so, the machine might -- sometimes -- emulate human behavior, but only in an interaction that's limited to textual conversation. And that's as far as it goes. The machine cannot be human, nor can it emulate the many, many other aspects of human behavior.
If you want to interact with a human, don't talk to a rule-based computer. If you want an economy that produces outcomes desired by humans, don't rely on an economy that's run by the equivalent of a rule-based computer. Why settle for a machine when you can have the real thing?
Of course, the whole point of socialist planning is to produce outcomes that are desired by planners. Those desires reflect planners' preferences, as influenced by their perceptions of the outcomes desired by certain subsets of the populace. The immediate result may be to make some of those subsets happier, but at a great cost to everyone else and, in the end, to the favored subsets as well. A hampered economy produces less for everyone.