I first learned of the learning curve when I was a newly hired analyst at a defense think-tank. A learning curve
is a graphical representation of how an increase in learning (measured on the vertical axis) comes from greater experience (the horizontal axis); or how the more someone (or something) performs a task, the better they [sic] get at it.
In my line of work, the learning curve figured importantly in the estimation of aircraft procurement costs. There was a robust statistical relationship between the cost of making a particular model of aircraft and the cumulative number of such aircraft produced. Armed with the learning-curve equation and the initial production cost of an aircraft, it was easy to estimate the cost of producing any number of the same aircraft.
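The mechanics are simple enough to sketch. Under the standard Wright's-law form of the learning curve, every doubling of cumulative output multiplies unit cost by a fixed "learning rate". Here is a minimal Python sketch; the 80-percent rate and the dollar figures are illustrative assumptions, not actual procurement data:

```python
import math

def unit_cost(first_unit_cost, n, learning_rate=0.8):
    """Wright's-law unit cost: each doubling of cumulative output
    multiplies unit cost by the learning rate (0.8 = an 80% curve)."""
    b = math.log(learning_rate, 2)   # slope exponent; negative for rates < 1
    return first_unit_cost * n ** b

def program_cost(first_unit_cost, quantity, learning_rate=0.8):
    """Total cost of producing `quantity` units, summed unit by unit."""
    return sum(unit_cost(first_unit_cost, n, learning_rate)
               for n in range(1, quantity + 1))
```

With an 80-percent curve, the second unit costs 80 percent of the first, the fourth costs 80 percent of the second, and so on, which is why the initial cost plus the curve suffices to price any production run.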
The learning curve figures prominently in tests that purport to measure intelligence. Two factors that may explain the Flynn effect — a secular rise in average IQ scores — are aspects of learning: schooling and test familiarity, and a generally more stimulating environment in which one learns more. The Flynn effect doesn’t measure changes in intelligence; it measures changes in IQ scores resulting from learning. There is an essential difference between ignorance and stupidity. The Flynn effect is about the former, not the latter.
Here’s a personal example of the Flynn effect in action. I’ve been doing The New York Times crossword puzzle online since February 18 of this year. I have completed all 170 puzzles published by TNYT from that date through today, with generally increasing ease:
The difficulty of the puzzle varies from day to day, with Monday puzzles being the easiest and Sunday puzzles being the hardest (as measured by time to complete). For each day of the week, my best time is more recent than my worst time, and the trend of time to complete is sharply downward for every day of the week (as reflected in the graph above).
I know that I haven’t become more intelligent in the last 24 weeks. And being several decades past the peak of my intelligence, I am certain that it diminishes daily, though only fractionally so (I hope). I have simply become more practiced at doing the crossword puzzle because I have learned a lot about it. For example, certain clues recur with some frequency, and they always have the same answers. Clues often have double meanings, which were hard to decipher at first, but which have become easier to decipher with practice. There are other subtleties, all of which reflect the advantages of learning.
In a nutshell, I am no smarter than I was 24 weeks ago, but my ignorance of the TNYT crossword puzzle has diminished significantly.
The title of this post is an allusion to an earlier one: “Modeling Is Not Science“. This post addresses a model that is the antithesis of science. It seems to have been extracted from the ether. It doesn’t prove what its authors claim for it. It proves nothing, in fact, but the ability of some people to dazzle other people with mathematics.
In this case, a writer for MIT Technology Review waxes enthusiastic about
the work of Alessandro Pluchino at the University of Catania in Italy and a couple of colleagues. These guys [sic] have created a computer model of human talent and the way people use it to exploit opportunities in life. The model allows the team to study the role of chance in this process.
The results are something of an eye-opener. Their simulations accurately reproduce the wealth distribution in the real world. But the wealthiest individuals are not the most talented (although they must have a certain level of talent). They are the luckiest. And this has significant implications for the way societies can optimize the returns they get for investments in everything from business to science.
Pluchino and co’s [sic] model is straightforward. It consists of N people, each with a certain level of talent (skill, intelligence, ability, and so on). This talent is distributed normally around some average level, with some standard deviation. So some people are more talented than average and some are less so, but nobody is orders of magnitude more talented than anybody else….
The computer model charts each individual through a working life of 40 years. During this time, the individuals experience lucky events that they can exploit to increase their wealth if they are talented enough.
However, they also experience unlucky events that reduce their wealth. These events occur at random.
At the end of the 40 years, Pluchino and co rank the individuals by wealth and study the characteristics of the most successful. They also calculate the wealth distribution. They then repeat the simulation many times to check the robustness of the outcome.
When the team rank individuals by wealth, the distribution is exactly like that seen in real-world societies. “The ‘80-20’ rule is respected, since 80 percent of the population owns only 20 percent of the total capital, while the remaining 20 percent owns 80 percent of the same capital,” report Pluchino and co.
That may not be surprising or unfair if the wealthiest 20 percent turn out to be the most talented. But that isn’t what happens. The wealthiest individuals are typically not the most talented or anywhere near it. “The maximum success never coincides with the maximum talent, and vice-versa,” say the researchers.
So if not talent, what other factor causes this skewed wealth distribution? “Our simulation clearly shows that such a factor is just pure luck,” say Pluchino and co.
The team shows this by ranking individuals according to the number of lucky and unlucky events they experience throughout their 40-year careers. “It is evident that the most successful individuals are also the luckiest ones,” they say. “And the less successful individuals are also the unluckiest ones.”
The writer, who is dazzled by pseudo-science, gives away his Obamanomic bias (“you didn’t build that“) by invoking fairness. Luck and fairness have nothing to do with each other. Luck is luck, and it doesn’t make the beneficiary any less deserving of the talent, or legally obtained income or wealth, that comes his way.
In any event, the model in question is junk. To call it junk science would be to imply that it’s just bad science. But it isn’t science; it’s a model pulled out of thin air. The modelers admit this in the article cited by the Technology Review writer, “Talent vs. Luck, the Role of Randomness in Success and Failure“:
In what follows we propose an agent-based model, called “Talent vs Luck” (TvL) model, which builds on a small set of very simple assumptions, aiming to describe the evolution of careers of a group of people influenced by lucky or unlucky random events.
We consider N individuals, with talent Ti (intelligence, skills, ability, etc.) normally distributed in the interval [0; 1] around a given mean mT with a standard deviation σT, randomly placed in fixed positions within a square world (see Figure 1) with periodic boundary conditions (i.e. with a toroidal topology) and surrounded by a certain number NE of “moving” events (indicated by dots), someone lucky, someone else unlucky (neutral events are not considered in the model, since they have not relevant effects on the individual life). In Figure 1 we report these events as colored points: lucky ones, in green and with relative percentage pL, and unlucky ones, in red and with percentage (100 − pL). The total number of event-points NE are uniformly distributed, but of course such a distribution would be perfectly uniform only for NE → ∞. In our simulations, typically will be NE ≈ N/2: thus, at the beginning of each simulation, there will be a greater random concentration of lucky or unlucky event-points in different areas of the world, while other areas will be more neutral. The further random movement of the points inside the square lattice, the world, does not change this fundamental feature of the model, which exposes different individuals to different amounts of lucky or unlucky events during their life, regardless of their own talent.
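To see how little machinery is involved, here is my own stripped-down caricature of such a talent-vs-luck simulation (not the authors’ code): the two-dimensional world of moving event-points is collapsed into a flat per-step chance of an event, and every parameter value is an assumption:

```python
import random

def tvl_simulation(n_agents=1000, steps=80, p_event=0.5, p_lucky=0.5,
                   mean_talent=0.6, sd_talent=0.1, seed=0):
    """Caricature of a talent-vs-luck run. One step = six months,
    so 80 steps = a 40-year career. All parameters are made up."""
    rng = random.Random(seed)
    agents = []
    for _ in range(n_agents):
        # Talent is normally distributed, clipped to [0, 1].
        talent = min(1.0, max(0.0, rng.gauss(mean_talent, sd_talent)))
        agents.append([talent, 10.0])       # equal starting capital
    for _ in range(steps):
        for agent in agents:
            if rng.random() < p_event:      # an event reaches this agent
                if rng.random() < p_lucky:  # lucky event ...
                    if rng.random() < agent[0]:  # ... exploited only if talented
                        agent[1] *= 2.0
                else:                       # unlucky event always halves capital
                    agent[1] /= 2.0
    return agents

agents = tvl_simulation()
agents.sort(key=lambda a: a[1], reverse=True)
total = sum(capital for _, capital in agents)
top20_share = sum(capital for _, capital in agents[:len(agents) // 5]) / total
```

Even this crude version yields a heavily skewed capital distribution, simply because capital changes multiplicatively. That the output resembles real-world wealth therefore says nothing about whether the inputs resemble the real world.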
In other words, this is a simplistic, completely abstract model set in a simplistic, completely abstract world, using only the authors’ assumptions about the values of a small number of abstract variables and the effects of their interactions. Those variables are “talent” and two kinds of event: “lucky” and “unlucky”.
What could be further from science — actual knowledge — than that? The authors effectively admit the model’s complete lack of realism when they describe “talent”:
[B]y the term “talent” we broadly mean intelligence, skill, smartness, stubbornness, determination, hard work, risk taking and so on.
Think of all of the ways that those various — and critical — attributes vary from person to person. “Talent”, in other words, subsumes an array of mostly unmeasured and unmeasurable attributes, without distinguishing among them or attempting to weight them. The authors might as well have called the variable “sex appeal” or “body odor”. For that matter, given the complete abstractness of the model, they might as well have called its three variables “body mass index”, “elevation”, and “race”.
It’s obvious that the model doesn’t account for the actual means by which wealth is acquired. In the model, wealth is just the mathematical result of simulated interactions among an arbitrarily named set of variables. It’s not even a multiple regression model based on statistics. (Although no set of statistics could capture the authors’ broad conception of “talent”.)
The modelers seem surprised that wealth isn’t normally distributed. But that wouldn’t be a surprise if they were to consider that wealth represents a compounding effect, which naturally favors those with higher incomes over those with lower incomes. But they don’t even try to model income.
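The compounding point is easy to demonstrate with a toy simulation. All of the numbers below are mine and purely illustrative: incomes are drawn from a symmetric (normal) distribution, the fraction saved is assumed to rise with income, and savings compound at a fixed annual return:

```python
import random
import statistics

def simulate_wealth(n=10_000, years=40, annual_return=0.05, seed=1):
    """Toy model: symmetric incomes, a savings rate that rises with
    income, and savings that compound at a fixed annual return."""
    rng = random.Random(seed)
    incomes, wealths = [], []
    for _ in range(n):
        income = max(10_000.0, rng.gauss(60_000.0, 15_000.0))
        save_rate = min(0.30, 0.02 + income / 1_000_000)  # richer save a bigger share
        wealth = 0.0
        for _ in range(years):
            wealth = wealth * (1 + annual_return) + income * save_rate
        incomes.append(income)
        wealths.append(wealth)
    return incomes, wealths

incomes, wealths = simulate_wealth()
# Compare skew: 90th percentile relative to the median for each distribution.
income_spread = statistics.quantiles(incomes, n=10)[8] / statistics.median(incomes)
wealth_spread = statistics.quantiles(wealths, n=10)[8] / statistics.median(wealths)
```

Even though income is symmetric, the 90th-percentile-to-median spread of wealth comes out wider than that of income: the saved amount grows faster than income itself, and compounding stretches the upper tail further.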
So when wealth (as modeled) doesn’t align with “talent”, the discrepancy — according to the modelers — must be assigned to “luck”. But a model that lacks any nuance in its definition of variables, any empirical estimates of their values, and any explanation of the relationship between income and wealth cannot possibly tell us anything about the role of luck in the determination of wealth.
At any rate, it is meaningless to say that the model is valid because its results mimic the distribution of wealth in the real world. The model itself is meaningless, so any resemblance between its results and the real world is coincidental (“lucky”) or, more likely, contrived to resemble something like the distribution of wealth in the real world. On that score, the authors are suitably vague about the actual distribution, pointing instead to various estimates.
I had been thinking recently about that meaningless phrase, “That’s not who we are”, and along came Bill Vallicella’s post to incite this one. As BV says, it’s a stock leftist exclamation. I don’t know when or where it originated. But I recall that it was used a lot on The West Wing, about which I say this in “Sorkin’s Left-Wing Propaganda Machine“:
I endured The West Wing for its snappy dialogue and semi-accurate, though cartoonish, depictions of inside politics. But by the end of the series, I had tired of the show’s incessant propagandizing for leftist causes….
[The] snappy dialogue and semi-engaging stories unfold in the service of bigger government. And, of course, bigger is better because Aaron Sorkin makes it look that way: a wise president, crammed full of encyclopedic knowledge; staffers whose IQs must qualify them for the Triple Nine Society, and whose wit crackles like lightning in an Oklahoma thunderstorm; evil Republicans whose goal in life is to stand in the way of technocratic progress (national bankruptcy and the loss of individual freedom don’t rate a mention); and a plethora of “worthy” causes that the West-Wingers seek to advance, without regard for national bankruptcy and individual freedom.
The “hero” of The West Wing is President Josiah Bartlet[t], who — as played by Martin Sheen — is an amalgam of Bill Clinton (without the sexual deviancy), Charles Van Doren (without the venality), and Daniel Patrick Moynihan (without the height).
Getting back to “That’s not who we are”, it refers to any policy that runs afoul of leftist orthodoxy: executing murderers, expecting people to work for a living, killing terrorists without the benefit of a jury trial, etc., etc., etc.
When you hear “That’s not who we are” you can be sure that whatever it refers to is a legitimate defense of liberty. An honest leftist (oxymoron alert) would say of liberty: “That’s not who we (leftists) are.”
The shootings yesterday and today in El Paso and Dayton have, of course, redoubled the commitment of Democrats to something called “gun control”. This is nothing more than another instance of the left’s penchant for magical thinking.
The root of the problem isn’t a lack of “gun control”, it’s a lack of self-control — a lack that has become endemic to America since the 1960s. As I say in “Mass Murder: Reaping What Was Sown“, that lack is caused by (among other things):
- governmental incentives to act irresponsibly, epitomized by the murder of unborn children as a form of after-the-fact birth control, and more widely instituted by the vast expansion of the “social safety net”
- treatment of bad behavior as an illness (with a resulting reliance on medications), instead of putting a stop to it and punishing it
- the erosion and distortion of the meaning of justice, beginning with the virtual elimination of the death penalty, continuing on to the failure to put down and punish riots, and culminating in the persecution and prosecution of persons who express the “wrong” opinions
- governmental encouragement and subsidization of the removal of mothers from the home to the workplace
- the decline of two-parent homes and the rise of illegitimacy
- the complicity of government officials who failed to enforce existing laws and actively promoted leniency in their enforcement (see this and this, for example).
It is therefore
entirely reasonable to suggest that mass murder … is of a piece with violence in America, which increased rapidly after the 1960s and has been contained only by dint of massive incarceration. Violence in general and mass murder in particular flow from the subversion and eradication of civilizing social norms, which began in earnest in the 1960s. The numbers bear me out.
Drawing on Wikipedia, I compiled a list of 317 incidents of mass murder in the United States from the early 1800s through 2017….
These graphs are derived from the consolidated list of incidents:
The vertical scale is truncated to allow for a better view of the variations in the casualty rate. In 1995, there were 869 casualties in 3 incidents (an average of 290); about 850 of the casualties resulted from the Oklahoma City bombing.
The federal assault weapons ban — really a ban on the manufacture of new weapons of certain kinds — is highlighted because it is often invoked as the kind of measure that should be taken to reduce the incidence of mass murders and the number of casualties they produce. Even Wikipedia — which is notoriously biased toward the left — admits (as of today) that “the ban produced almost no significant results in reducing violent gun crimes and was allowed to expire.”
There is no compelling contrary evidence in the graphs. The weapons-ban “experiment” was too limited in scope and too short-lived to have had any appreciable effect on mass murder. For one thing, mass-murderers are quite capable of using weapons other than firearms. The years with the three highest casualty rates (second graph) are years in which most of the carnage was caused by arson (1958) and bombing (1995 and 2013).
The most obvious implication of this analysis is found in the upper graph. The incidence of mass murders was generally declining from the early 1900s to the early 1960s. Then all hell broke loose.
I rest my case.
(See also “Reductio ad Sclopetum, or Getting to the Bottom of ‘Gun Control’“, “‘This Has to Stop’“, and “Utilitarianism vs. Liberty“, especially UTILITARIANISM AND GUN CONTROL VS. LIBERTY.)
Richard Thaler, with whom I had a nodding acquaintance many years ago, is one of my least favorite economists — and a jerk, to boot. (See, for example, “The Perpetual Nudger“, “Richard Thaler, Nobel Laureate“, “Thaler’s Non-Revolution in Economics“, “Another (Big) Problem with ‘Nudging’“, and “Thaler on Discounting“.) What the world needs isn’t a biography of the nudger-in-chief, but that’s what the world now has, no thanks to The Library of Economics and Liberty, where the mercifully brief bio is posted.
In it, the reader is treated to such “wisdom” as this:
Economists generally assume that more choices are better than fewer choices. But if that were so, argues Thaler, people would be upset, not happy, when the host at a dinner party removes the pre-dinner bowl of cashews. Yet many of us are happy that it’s gone. Purposely taking away our choice to eat more cashews, he argues, makes up for our lack of self-control.
Notice the sleight of hand by which the preferences of a few (including Thaler, presumably) are pushed front and center: “many of us are happy”. Who is “us”? And what about the preferences of everyone else, who may well comprise a majority? Thaler is happy because the host has taken an action of which he (Thaler) approves; he (Thaler) wants to tell the rest of us what makes us happy.
Thaler … noticed another anomaly in people’s thinking that is inconsistent with the idea that people are rational. He called it the “endowment effect.” People must be paid much more to give something up (their “endowment”) than they are willing to pay to acquire it. So, to take one of his examples from a survey, people, when asked how much they are willing to accept to take on an added mortality risk of one in one thousand, would give, as a typical response, the number $10,000. But a typical response by people, when asked how much they would pay to reduce an existing risk of death by one in one thousand, was $200.
Surveys are meaningless. Talk is cheap (see #5 here).
Even if the survey results are somewhat accurate, in that there is a significant gap between the two values, there is a rational explanation for such a gap. In the first instance, a person is (in theory) accepting an added risk, one that he isn’t already facing. In the second instance, the existing risk may be one that the person being asked considers to be very low, as applied to himself. The situations clearly aren’t symmetrical, so it’s unsurprising that the price of accepting a new risk is higher than the payment for reducing a possible risk.
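For scale, the gap in Thaler’s survey numbers can be restated as implied values of a statistical life, a standard back-of-the-envelope conversion (the arithmetic here is mine):

```python
# Typical survey answers reported in the bio quoted above:
wta = 10_000   # payment demanded to ACCEPT an added 1-in-1,000 mortality risk
wtp = 200      # payment offered to REDUCE an existing risk by 1 in 1,000

# Dividing a payment by a 1-in-1,000 risk change is multiplying by 1,000.
vsl_accept = wta * 1000   # implied value of a statistical life: $10,000,000
vsl_reduce = wtp * 1000   # implied value of a statistical life: $200,000
ratio = vsl_accept / vsl_reduce   # a 50-fold asymmetry
```

The 50-fold asymmetry is exactly what the argument above predicts once the two questions are recognized as asking about different situations, not the same one in mirror image.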
That’s enough of Thaler. More than enough.
Affluent “elites” are well-insulated from the consequences of the policies that they promote and enact. Sometimes the elites justify those policies because they are “good” for an amorphous aggregation (“the country”, “the people”, GDP). Thus, for example, elites favor “free trade” and “open borders” because they (supposedly) result in a net gain in GDP. That the gain is net of the losses incurred by many taxpayers and victims of crime is irrelevant to the elites.
And when elites are promoting “social justice” they favor policies that are best for a particular group, to the exclusion of other groups. It doesn’t start that way, but that’s how it ends up, because the easiest way to “make things right” for a particular group is to penalize others, as in “affirmative action”, “affordable (tax-funded) housing”, suppression of speech, etc.
You get the idea. Elites stroke their own egos in the pursuit of abstract measures of “good”, and disdain those who are harmed, calling them — among many things — “bitter clingers”, “deplorables”, and denizens of “flyover country”.
I have been guilty of elitism, but I am cured of it. I have joined the no-longer-silent majority. No-longer-silent thanks to Donald Trump — praise be!