Sunday, December 14, 2014

A marvel of economics…

Is how little understanding there is of the existence of the firm. The fact that there are at least four rival theories in economics explaining its existence is symptomatic of the darkness prevailing in this area.

When the average person thinks of "the economy," perhaps the first thing they think of is "businesses." But economics cannot yet explain the existence of these businesses.

Saturday, December 13, 2014

The world of physics is an abstract world and not the whole of reality

As Ed Feser notes:

"As I have emphasized many times, what physics gives us is a description of the mathematical structure of physical reality. It abstracts from any aspect of reality which cannot be captured via its exclusively quantitative methods. One reason that this is crucial to keep in mind is that from the fact that something doesn’t show up in the description physics gives us, it doesn’t follow that it isn’t there in the physical world. This is like concluding from the fact that color doesn’t show up in a black and white pen and ink drawing of a banana that bananas must not really be yellow. It both cases the absence is an artifact of the method employed, and has nothing whatsoever to do with the reality the method is being used to represent. The method of representing an object using black ink on white paper will necessarily leave out color even if it is there, and the method of representing physical reality using exclusively mathematical language will necessarily leave out any aspect of physical reality which is not reducible to the quantitative, even if such aspects are there.

"But it’s not just that such aspects might be there. They must be there. The quantitative description physics gives us is essentially a description of mathematical structure. But mathematical structure by itself is a mere abstraction. It cannot be all there is, because structure presupposes something concrete which has the structure. Indeed, physics itself tells us that the abstraction cannot be all there is, since it tells us that some abstract mathematical structures do not fit the actual, concrete material world. For example, Einstein is commonly taken to have shown that our world is not really Euclidean. This could only be true if there is some concrete reality that instantiates a non-Euclidean abstract structure rather than a Euclidean abstract structure. So, physics itself implies that there must be more to the world than the abstract structure it captures in its purely mathematical description, but it does not and cannot tell us exactly what this concrete reality is like."

Come get your agent-based modeling here!

Here.

Indra is an agent-based modeling system written in Python and available for download. I just finished coding Adam Smith's fashion model using it, and one of my students is working through Schelling, writing up his models using it. Contact me if you would like to try the system, and I will help you get going.

Friday, December 12, 2014

Programmer bleg

I realized that two of my classes were going to use a function identical in all respects except that one of them would test for amount x being greater than amount y, while the other would test for x being less than y. Right now, I have coded the function to accept a boolean parameter I call "gt", which controls an if statement determining which test the function performs. But what I really wanted to do was to pass in the operator itself.

However, generally speaking, programming languages do not accept operators as parameters to functions.

Is there a way to do this without the if statement?
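
One possibility, assuming Python (the language Indra is written in; the function and variable names below are invented for illustration): the standard library's operator module exposes the comparison operators as ordinary functions, so the comparison itself can be passed in and the if statement disappears.

    import operator

    def threshold_check(x, y, compare=operator.gt):
        """Shared logic for both uses; 'compare' is any two-argument function
        returning a bool, e.g. operator.gt, operator.lt, or a lambda."""
        return compare(x, y)

    print(threshold_check(5, 3))                        # uses operator.gt -> True
    print(threshold_check(5, 3, compare=operator.lt))   # uses operator.lt -> False

The same trick works in most languages with first-class functions: pass in a comparison function rather than a flag.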

The genius of Ken Thompson and Dennis Ritchie

How many pieces of technology developed 45 years ago are now more popular than ever?

I thought about this while running some UNIX shell commands in Linux (based on UNIX), which I am running on my Chromebook as an alternate operating system to ChromeOS (based on UNIX). So I picked up my iPhone running iOS (based on UNIX) to write this post, which I will put on Facebook, which runs on UNIX-based servers. Some of my friends will read my post on their Android phones, which run an operating system based on UNIX. Others will read it on their Macintosh computers, which run an operating system... based on UNIX.

And the really amazing thing here is that the work of Thompson and Ritchie endured several decades of ridicule before becoming the most ubiquitous piece of software in the world. And the reason for its success is intimately connected to their humility: instead of believing that they knew everything a user would want and building it into a monolithic operating system, they built a minimal framework within which it was very easy to add your own tools. They crowd-sourced the development of their operating system well before anyone had invented that term.

Agent-based modelling and the vindication of Mises

I've been reading the agent-based modelling (ABM) literature for the last week, and I am struck by its vindication of Mises's vision of economics. It turns out that to get phenomena like markets, firms, and market-clearing prices, the modelers only have to build agents that:

1) have a purpose;
2) have some idea how to achieve it, even if that idea is sub-optimal;
3) interact with other agents; and
4) face scarce resources.

Well, folks, this is nothing less than the basis of Mises's much-reviled "praxeology." Mises just lacked the tools to formalize his vision, but they are here now.
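
To make that concrete, here is a minimal sketch of such an agent (in Python; all names and numbers are my own invention for illustration, not code from any particular ABM package). It has a purpose, a crude and possibly sub-optimal rule for pursuing it, it interacts with other agents, and the resource it seeks is scarce:

    import random

    class Actor:
        """A minimal 'praxeological' agent: purposeful, fallibly rational,
        interacting with others, and facing scarcity (illustrative only)."""

        def __init__(self, name):
            self.name = name
            self.holdings = 0
            self.goal = 10                            # (1) a purpose: accumulate ten units

        def act(self, pool, others):
            # (2) a rule of thumb for pursuing the goal -- no claim of optimality
            if self.holdings < self.goal and pool["stock"] > 0:
                pool["stock"] -= 1                    # (4) the resource is scarce
                self.holdings += 1
            elif self.holdings > 0 and others:
                random.choice(others).holdings += 1   # (3) interaction with other agents
                self.holdings -= 1

    # A tiny run: five agents competing over thirty units for twenty periods.
    pool = {"stock": 30}
    actors = [Actor(f"actor-{i}") for i in range(5)]
    for period in range(20):
        for a in actors:
            a.act(pool, [b for b in actors if b is not a])
    print({a.name: a.holdings for a in actors}, "| left in pool:", pool["stock"])

The dictionary standing in for a resource pool and the give-one-away rule are arbitrary; the point is only that nothing beyond the four properties above is needed to set such a simulation running.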

Tuesday, December 09, 2014

Why is it difficult to detect bugs in agent-based models?


Rob Axtell, in his 2000 paper "Why Agents? On the Varied Motivations for Agent Computing in the Social Sciences," attributes the existence of what he calls "artifacts" (program behavior that is not part of the model being created, but a byproduct of a coding decision that was intended only to implement the model yet actually did something else as well) "partially" to the fact that, in agent models, a small amount of source code controls a large amount of "execution" code. As an example, he offers a system where millions of agents may be created and which might occupy up to a gigabyte of memory, even though the source code for the program is only hundreds of lines long.

But this explanation cannot be right, because the causal factor he is talking about does not exist. In any reasonable programming language, only the data for each object will be copied as you create multiple instances of a class. The functions in the agent-object are not copied around again and again: they sit in one place, and each agent "knows" how to get to them. What causes the huge expansion in memory usage, from the program as it sits on disk to the program running in RAM, is the large amount of data involved with these millions of agents: each one has to maintain its own state (its goal, its resources, its age, whatever is relevant to the model being executed).
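
One quick way to see this (a Python sketch; the class is invented for illustration) is that the function implementing a method is stored once, on the class, while each instance carries only its own data:

    class Agent:
        def step(self):
            return self.wealth

    a, b = Agent(), Agent()
    a.wealth, b.wealth = 5, 7

    # The code for step() exists in exactly one place, on the class object...
    print(Agent.step is type(a).step is type(b).step)   # True
    # ...while the per-agent data is what differs from instance to instance.
    print(a.step(), b.step())                            # 5 7

A million such agents mean a million copies of the data, not a million copies of the code.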

So what we really have is a small amount of code controlling a large amount of data. But that situation exists in all sorts of conventional data-processing applications: A program to email a special promotional offer to everyone in a customer database who has purchased over four items in the last year may control gigabytes of data while consisting of only a few lines of source code. So this fact cannot be the source of any additional frequency of artifacts in agent-based models.
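
To put a rough line count on that claim, the selection logic might be no more than this (a hypothetical sketch; customers, purchases, load_customers, and send_promotion are invented names standing in for whatever the real database and mail system provide):

    from datetime import datetime, timedelta

    ONE_YEAR_AGO = datetime.now() - timedelta(days=365)

    def eligible(customers):
        """Yield every customer with more than four purchases in the last year."""
        for c in customers:                          # may stream over gigabytes of records
            recent = [p for p in c.purchases if p.date >= ONE_YEAR_AGO]
            if len(recent) > 4:
                yield c

    # for customer in eligible(load_customers()):   # load_customers() is hypothetical,
    #     send_promotion(customer)                  # and so is send_promotion()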

So what is really going on here? (And I have no doubt that something is going on, since Axtell's basic point that we have to take special care to watch for these artifacts in agent-based models is surely correct.) I have done both traditional IT-type coding and agent-based modeling, and here is what I think is the main difference between the two in terms of the production of these artifacts: artifacts in both cases are the result of programming errors, but in the latter case, when you don't know what your output should be, it is very hard to distinguish them from interesting and valid results.

In most traditional data processing, it is easy to say just what the results should be: they are what your user told you they should be. (This "user", of course, may be a multitude of users, or even an imagined multitude of users you hope your new product will appeal to.) If you were asked to write the above program, which emails a special offer to customers who purchased over four items in the last year, it is easy to tell if your program is working: did those customers, and only those customers, receive the promotion, and only the promotion? Although you were writing the program to save yourself from going through the million customer records by hand and generating the emails, you can easily select a small portion of the database and check your program by hand against that portion. If it works for that portion, and it is a representative sample, you can assume it will work across all records. Or you can automate that checking itself with a test suite, which contains a certain number of cases with known correct output against which your program's results can be compared. (Of course, even this testing does not ensure the absence of bugs: there may be special cases in the full database that we omitted from our test data. Perhaps, for instance, for some customers, multiple orders were rolled into one for shipping purposes. The intention of the marketing department might be to still send them the special offer, but if we missed putting any such customers in our test cases, we may not detect that our code fails in these instances.)
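
Continuing the hypothetical sketch above, such a test suite might be as simple as this; all that matters is that the correct answer for the hand-built test data is known in advance:

    from datetime import datetime, timedelta
    from types import SimpleNamespace

    def fake_customer(n_recent, n_old):
        """Build a test customer with n_recent purchases this year and n_old much older ones."""
        now = datetime.now()
        purchases = [SimpleNamespace(date=now - timedelta(days=10)) for _ in range(n_recent)]
        purchases += [SimpleNamespace(date=now - timedelta(days=800)) for _ in range(n_old)]
        return SimpleNamespace(purchases=purchases)

    def test_eligible():
        qualifies = fake_customer(n_recent=5, n_old=0)   # five recent purchases: should get the offer
        too_few = fake_customer(n_recent=4, n_old=0)     # exactly four: should not
        old_only = fake_customer(n_recent=0, n_old=9)    # many purchases, but all old: should not
        # eligible() is the selection sketch from above
        assert list(eligible([qualifies, too_few, old_only])) == [qualifies]

    test_eligible()
    print("promotion-selection tests pass")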

But at least for Axtell's third type of agent-based model, the very reason we are writing the program is that we don't know what the results of running it ought to be. We are using the program to explore the implications of the model, in a case where we don't know beforehand what those implications will be. This is not fundamentally different from what Ricardo did with his model farm (see Mary Morgan, The World in the Model, on this point), but while Ricardo was restricted to a small number of simple cases where he could do all the necessary calculations by hand, by using a computer we can easily test millions of more complicated cases.

We hope our code implements our model, and only our model. But we can easily make a mistake through a seemingly innocuous coding decision: for instance, as Axtell notes, the order in which agents act can be important in many models. If we write a simple loop proceeding from agent 1 to agent N, we may give the agents earlier in our list a decided edge in something like grabbing resources for their own use. We might have to randomize the order in which agents act in every "period" of the run to truly capture the model. If we fail to account properly for this fact, we might mistakenly think that these agents had some superior resource-capturing feature, instead of realizing that they are only "rich" because we (arbitrarily) stuck them early in the list of agents.
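
Here is what that bias and its fix look like in miniature (a Python sketch; the agent class and the pool are my own invention for illustration):

    import random

    class Grabber:
        """Toy agent: takes one unit from a shared pool if any remain."""
        def __init__(self, name):
            self.name, self.holdings = name, 0
        def act(self, pool):
            if pool["stock"] > 0:
                pool["stock"] -= 1
                self.holdings += 1

    # Biased: the loop always runs in list order, so with 10 units and 20 agents
    # the first ten agents get everything, purely because of their list position.
    pool = {"stock": 10}
    agents = [Grabber(f"agent-{i}") for i in range(20)]
    for agent in agents:
        agent.act(pool)
    print("fixed order:   ", [a.holdings for a in agents])

    # Better: reshuffle the order of action (here once; in a real model, every
    # "period"), so no agent is systematically favored by its position.
    pool = {"stock": 10}
    agents = [Grabber(f"agent-{i}") for i in range(20)]
    order = list(agents)
    random.shuffle(order)
    for agent in order:
        agent.act(pool)
    print("shuffled order:", [a.holdings for a in agents])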

If I am correct about the main source of these artifacts, then what are we to do about the problem? Although I have just begun to think about this problem, I do have one suggestion already: we can do something similar to what is done in more traditional IT programming: examine small test cases. But since we don't know the "correct" output, the procedure to do so will be somewhat different. In our early runs of our system, we can use a very small number of agents, and proceed step-by-step through our run, with lots of debugging information available to us. This allows us to get an intuitive feel for how the agents are interacting, and perhaps spot artifacts of our coding choices early on.

But while this is a help, it falls far short of the kind of systematic checking of our code that we can achieve with test suites for more typical IT problems. Is it possible to create a more automated method of detecting artifacts? Well, at this point, all I can say is that I am thinking about it.

Thoughts on software and hardware

There is no essential difference between software and hardware except the economic difference. The people who put forth metaphors such as "hardware is like the brain and software is like our thoughts" apparently have no understanding of how computers work.

Everything that is done in software can be done in hardware. In fact, the way software works is by reconfiguring the hardware. The introduction of the programmable computer was the invention of a machine that could be endlessly reconfigured without having to actually take tools to it and physically adjust its parts. And various features of computers have at various times moved from hardware to software or the reverse: The original Macintoshes could get by with so little RAM because a lot of the operating system was actually put into the hardware. The real difference between software and hardware is that it is cheap to reconfigure software and expensive to reconfigure hardware.

So software is simply a way to cheaply and continually reconfigure an electronic machine into new states. Those states by themselves have no meaning: any state could represent an attempt to solve a differential equation, a position in a chess game, or a line of music, depending upon what its users intend it to mean. The "analysis" of a chess game by a computer could be hooked up to a synthesizer and treated as a musical composition instead.
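
A small illustration of that point (Python; the interpretations are chosen arbitrarily): the very same machine state can be read as a number, as text, or as a sequence of MIDI-style note values, and nothing in the state itself selects one of these as "the" meaning:

    state = bytes([67, 72, 69, 83, 83])    # one machine state: five bytes

    print(int.from_bytes(state, "big"))    # read as one large integer
    print(state.decode("ascii"))           # read as text: CHESS
    print(list(state))                     # read as note numbers: [67, 72, 69, 83, 83]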



The chief impetus of new political movements...

Is a rising elite trying to seize control of power from an existing elite. The ideas they use to gain their followers' commitment and enthusiasm are what Pareto called "derivations": secondary phenomena of secondary importance.

Monday, December 08, 2014

Idealism to the rescue!

Both "the right" and "the left" suffer from a one-sided focus on an aspect of poverty at the expense of the full picture. The right focuses on agency, and tends to dump the entire blame for their condition on the poor, failing to keep in mind adages like, "There but for the grace of God go I!"

The left tends to focus exclusively on circumstances, which winds up denying the poor any agency of their own and portraying them as shelter animals waiting for a good progressive to come along and adopt them.

The reality is that both views are partial truths, each of which needs the other to round out the picture.

Although Hegel was somewhat mad at times, teaching us to look at these supposedly irreconcilable divides like this was surely a great contribution to human thought.


Sunday, December 07, 2014

Not getting the concept

There are ads running now during the football games saying that "every kid has to play, 60 minutes a day."

So "play" is now a duty that has an allotted time scheduled for it in a day full of other duties.

Friday, December 05, 2014

Forgetting Mises When Doing Comparative Political Economy


In the field of Constitutional Political Economy, analysis often starts from the assumption that "political agents act to fulfill their interest just like everyone else."

But then the analysis immediately assumes that the interest of political actors consists solely of seeking monetary gain. Since most of the people I read working in this area (for instance, this post is inspired by a paper I am currently refereeing in this field) are at least passingly familiar with Austrian economics, this is a somewhat surprising assumption.

One of the things Mises was surely correct about is that "pursuing one's interest," if it is to be a priori true of all agents, must be interpreted extremely broadly. In this sense, as Mises taught us, "one's interest" must include anything that might motivate an agent to act: an ascetic's efforts to abjure all worldly goods, a hero's noble sacrifice of his life for his comrades, and a serial killer's attempts to create as much destruction and suffering as possible are all examples of agents acting in "their own interest" in this broad sense. Mises was entirely dismissive of the idea that acting in one's own interest could only mean pursuing material gain. And yet, I keep encountering papers that seem to equate the two, from people who I would think ought to know better.

When someone presented a paper at NYU equating "a political agent pursuing his interests" with his "maximizing the revenue he can draw from his position," I offered two notable examples of quite different behavior, and could have offered many more if time had permitted.

My first case was Alexander the Great: if he had merely wanted to maximize the wealth he could extract from his realm, after conquering Persia, he would have simply stopped his campaign, and enjoyed the fabulous wealth of the Persian Empire. But Alexander was obsessed with becoming the greatest warrior-king who had ever existed, and so continued eastward well beyond any point of "revenue maximization."

On the other hand, Ashoka, a king in India, converted to Buddhism (or at least began to support it strongly: there is some historical debate here) after being filled with horror at the deaths resulting from his Kalinga War. In any case, he began to promote Buddhism, erect Buddhist monuments, and do things like use his wealth to establish healthcare facilities for his subjects.

Another obvious counterexample would be Hitler: once he had acquired the Rhineland, Austria, Bohemia, and half of Poland, he had a whole lot of territory from which to draw revenue. But his racial obsession would not allow him to stop at that point, leading him to make decisions that, from a revenue-maximizing point of view, were quite insane.

In Misesian terms, all three of these rulers were "pursuing their own interest." But the interests that political agents can embrace are no less diverse than those of any other agent.

Now, I have no problem with someone creating a model that assumes political agents are "personal (monetary) revenue maximizing," and seeing what results that model yields. But the papers I have read in this field generally do not do that: they seem to simply assume that what political agents pursue must be gains in material wealth. And I do not see any warrant for that assumption.