What happens if both contestants are fix(x=0.29) and cost intervals of 0.01, 0.05, 0.10 and 0.20 are used?
If the fix(x) value is not evenly divisible by the cost increment, some inaccuracy will result. For instance, suppose that cost is being incremented by 0.05. That means that the program checks for quits at 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, etc. Since both contestants do not quit until 0.29, they will not be counted until 0.3. The program will say that a tie occurred at 0.3, not at 0.29. Thus, it will appear that they quit at a larger cost than they actually did. So the effect is that the quitting cost is always rounded up to the next cost the program checks.
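A minimal sketch of this rounding-up behavior (the function name is hypothetical, not the simulator's actual code):

```python
import math

def recorded_quit_cost(true_quit_cost, dx):
    """Cost at which a quit is counted when the program only checks at
    multiples of dx: always rounded up to the next check point.  The
    round() call guards against floating-point artifacts in the division."""
    return math.ceil(round(true_quit_cost / dx, 9)) * dx

# Both contestants quit at 0.29; only dx = 0.01 divides 0.29 evenly.
for dx in (0.01, 0.05, 0.10, 0.20):
    print(dx, round(recorded_quit_cost(0.29, dx), 2))
# -> 0.29 at dx = 0.01, 0.3 at dx = 0.05 and 0.10, 0.4 at dx = 0.20
```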
Now try the contest with one contestant at fix(x=0.45) and the other at fix(x=0.5). What do your results tell you about how the contest works (i.e., when are decisions to quit made -- at the start or end of an interval)? What do your results tell you about the relationship between the simulator's cost interval and its ability to determine accurately when a contest has actually ended? Do ties ever occur when you would predict that one or the other should win?
Same problem as above. While the "wrong party" is never credited with a win, there will be ties when one or the other should have won. Ties happen whenever the two strategists quit in the same delta x interval (see the sketch below). Thus, the larger the cost interval (delta x), the greater the chance of inadvertent ties.
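Using the hypothetical recorded_quit_cost sketch above on this fix(x=0.45) vs. fix(x=0.5) contest:

```python
# With dx = 0.10 both quits are first detected at the same check point
# (0.5), so the contest is scored a tie; with dx = 0.01 the fix(x=0.45)
# quit is detected first and fix(x=0.5) is correctly credited with a win.
for dx in (0.10, 0.01):
    a = recorded_quit_cost(0.45, dx)
    b = recorded_quit_cost(0.50, dx)
    print(dx, "tie" if a == b else "fix(0.5) wins")
```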
Try some contests when both individuals are 'var' strategists. Do you feel very certain that you know exactly when individuals quit? What does lowering the value of delta x do to your degree of confidence?
Recall that var has a constant probability of quitting. The program simply determines, at the start of each cost interval, whether or not var has quit; it does not do so at any cost in between two adjacent check points. So there is no inaccuracy with respect to quitting, except that the chance of having quit is the sum of the chances of quitting accumulated over a relatively large interval of cost, not at a single discrete cost.
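One way such a per-interval check might be implemented (a sketch assuming the exponential quitting rule with parameter V used in these exercises; the function name is hypothetical):

```python
import math, random

def var_quit_cost(V, dx):
    """A 'var' strategist checked only at interval boundaries: at each
    step it quits with the same fixed probability, so quit costs can
    only ever be recorded at multiples of dx."""
    p = 1.0 - math.exp(-dx / V)   # chance of quitting within any one interval
    cost = dx
    while random.random() > p:    # survived this interval; keep paying
        cost += dx
    return cost

print([round(var_quit_cost(V=1.0, dx=0.05), 2) for _ in range(5)])
```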
Let's imagine a real behavioral experiment involving observing some hypothetical war of attrition between two animals. Suppose that the experimenter only observes the contest at intervals (for example, she is also observing other animals, or her view is often blocked). Alternatively, suppose she observes them continuously but is using a digital clock. Will she know exactly when quits occurred? Is it possible that she might score a contest as a tie (assume she couldn't see the resource) when one or the other animal won? Is there much difference between her situation and this game?
Clearly the answer is that there is not much, if any, difference. When we record data, we always digitize it to some degree. As a result, we often take what are continuous variables and convert them into discrete ones, with some loss of accuracy. In many cases the degree of loss is slight -- it depends on the sampling rate (how often our investigator looks, or how often the program checks for a winner, i.e., delta x) relative to how quickly events unfold. So this example is meant to emphasize the inherent potential problems with sampling as much as to teach about the war of attrition.
With regard to 'var' strategists: when are quits most likely? Does changing V in a contest against a given fix(x) strategist do what you might expect to quitting times?
Quits are most likely at low costs, and decreasing V makes low-cost quits even more likely (and vice versa). This is as predicted.
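Under the exponential quitting rule used throughout these exercises, var's quitting costs are distributed as

```latex
p(x) = \frac{1}{V}\, e^{-x/V},
```

which is largest at x = 0 and falls off monotonically; since the mean quitting cost equals V, lowering V concentrates quits at even lower costs.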
What is the relationship between the number of contests run and the degree of fit between the data for cost values less than 1.9 and the theoretical curve?
The greater the number of contests run, the better the fit. This shouldn't be surprising -- quits by fix(x=1.9) only occur at x = 1.9, so they are not part of the histogram below 1.9. On the other hand, var uses a constant-probability rule for deciding whether or not to quit in any given interval. We know from experience that when events are governed by probability, they only approach the theoretical distribution after many repetitions: we expect to get 50% heads over a large number of coin flips, but we are not surprised to get 100% heads if we only flip a coin a few times. The same reasoning applies here (see the sketch below). The biological significance is that var's opponent only knows in a general sense what will happen -- it cannot predict exactly what var will do next, even if it knows it is facing a 'var' strategist.
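A quick illustration of this law-of-large-numbers point (a sketch assuming var's quits follow the exponential rule with V = 1):

```python
import math, random

V, dx = 1.0, 0.1                 # examine the first cost interval [0, 0.1)
for n_contests in (20, 200, 20_000):
    quits = [random.expovariate(1 / V) for _ in range(n_contests)]
    observed = sum(q < dx for q in quits) / n_contests
    expected = 1 - math.exp(-dx / V)   # theoretical fraction, about 0.095
    print(f"{n_contests:6d} contests: observed {observed:.3f}, expected {expected:.3f}")
# small runs scatter widely around the theoretical value; large runs settle near it
```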
At cost >= 1.9, are all the quits done by var, fix(x), or both? Explain.
At 1.9, all fix(x) individuals quit. Any var individuals who are still playing also quit (the proportion of contests in which this happens is exp(-1.9/1) = 0.15).
Who won all of the contests at costs < 1.9 or can you say? Explain.
All contests that ended before x = 1.9 were won by the fix(x=1.9) strategist. Fix individuals don't quit until x = 1.9 unless their opponents quit first; therefore, any quits before 1.9 were made by the var contestant.
When did ties occur? Explain.
The only ties that occurred happened at x = 1.9, when a var quit at exactly this cost. This would be a very unlikely event, since the chance of var quitting at any one exact cost approaches zero. On the other hand, suppose we assume that a tie is recorded whenever var quits in the same cost interval in which fix(x=1.9) quits -- that is, they don't really all quit at exactly 1.900, but somewhere between x = 1.895 and 1.905 (using our cost interval of 0.01). Then the chance of a tie equals the chance that var quits in this interval, which is exp(-1.895/1) - exp(-1.905/1) = 0.1503 - 0.1488 = 0.0015.
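In general, a var strategist's chance of quitting somewhere in an interval [a, b] is the drop in its survivor curve across that interval; the tie calculation above is just this formula with a = 1.895 and b = 1.905:

```latex
P(a \le x \le b) = e^{-a/V} - e^{-b/V},
\qquad
e^{-1.895} - e^{-1.905} \approx 0.1503 - 0.1488 = 0.0015 \quad (V = 1).
```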
Who won all the contests at cost > 1.9 or can you say? Explain.
Var won all of these contests since fix(x) quit at 1.9.
What is the relationship between the actual payoff E(var,fix(x)) (from the simulation results) and the expected (theoretical) payoff as a function of the number of contests?
The greater the number of contests, the more closely the actual payoff converges on the theoretical value.
What is the relationship between the actual payoff E(var,fix(x)) (from the simulation results) and the expected (theoretical) payoff as a function of delta x? Why is the actual payoff usually less than the predicted value?
The smaller delta x, the more closely actual and theoretical results converge. Recall from calculus that discrete calculations come to resemble continuous ones as the differences in values of the independent variable (here cost) approach zero. With the method of calculation used in this simulation, actual payoffs to var against fix(x) will tend to fall below the theoretical values: as noted earlier, quits are not registered until the next check point, so recorded costs are rounded up and payoffs are biased slightly downward.
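A sketch of both the downward bias and the convergence (the closed form here follows from the exponential quitting rule together with the payoff scheme these exercises use -- the winner gets V minus the cost at which its opponent quit, the loser pays its own cost, and ties split V; function names are hypothetical):

```python
import math, random

def sim_payoff_var_vs_fix(V, m, dx, n=100_000):
    """Monte-Carlo estimate of E(var, fix(m)) when quits are registered
    only at the next multiple of dx, i.e. recorded costs are rounded up."""
    m_rec = math.ceil(round(m / dx, 9)) * dx      # fix's recorded quit cost
    total = 0.0
    for _ in range(n):
        t = random.expovariate(1 / V)             # var's 'true' quit cost
        t_rec = math.ceil(round(t / dx, 9)) * dx  # ...rounded up to a check point
        if t_rec < m_rec:
            total += -t_rec                       # var quits first and pays
        elif t_rec > m_rec:
            total += V - m_rec                    # fix quits first; var wins
        else:
            total += V / 2 - t_rec                # same interval: scored a tie
    return total / n

V, m = 1.0, 1.9
theory = V * (2 * math.exp(-m / V) - 1)           # about -0.70 for these values
for dx in (0.2, 0.05, 0.01):
    print(dx, round(sim_payoff_var_vs_fix(V, m, dx), 3), "theory", round(theory, 3))
```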
Based on what you know about the symmetrical war of attrition, should E(fix(x=1.9), fix(x=1.9)) be greater than, less than, or equal to E(var, fix(x=1.9))? Explain.
We learned that the mixed strategy we call 'var' is an ESS against any pure strategy in the symmetrical war of attrition. We know from the Bishop-Cannings Theorem that when a mixed strategy such as 'var' is composed of all pure (fix(x)) strategies, E(var, var) = E(fix(any x), var); so for 'var' to be an ESS, we need E(var, fix(x)) > E(fix(x), fix(x)). Those are the only payoffs that matter. In every case, you will find that the simulation gives the result that E(var, fix(x)) > E(fix(x), fix(x)).
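In symbols, the two-part ESS logic being used here is:

```latex
E(\mathrm{var},\mathrm{var}) = E(\mathrm{fix}(x),\mathrm{var}) \ \text{for all } x
\ \ \text{(Bishop--Cannings)}
\quad\Longrightarrow\quad
\mathrm{var} \text{ is an ESS only if }
E(\mathrm{var},\mathrm{fix}(x)) > E(\mathrm{fix}(x),\mathrm{fix}(x)).
```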
Does it bother you that fix(x=1.9) wins most of the contests against var when V=1 -- i.e. that E(fix(x=1.9), var) > E(var, fix(x=1.9))? After all, var is supposed to be an ESS.
Here is what is most difficult about the war of attrition. Yes, fix(x) wins most of these contests (about 85% of them). But remember that the fitnesses of var and fix(x) are not determined by this interaction alone. While var may not do well against fix(x), fix(x) does very poorly against itself. For example, if fix(x=1.9) goes against an identical opponent, the expected payoff to fix(x=1.9) is -1.4. By contrast, the expected payoff to var against fix(x=1.9) is -0.7; for var vs. var (V still equaling 1.0) it is 0.
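The numbers quoted above can be checked from the closed forms implied by the model (a sketch: ties split V and both parties pay the quitting cost; the E(var, fix(x)) expression is derived from the exponential quitting density and reproduces the -0.7 above):

```python
import math

V, m = 1.0, 1.9
E_fix_fix = V / 2 - m                        # always a tie: split V, both pay m
E_var_fix = V * (2 * math.exp(-m / V) - 1)   # from the exponential quitting rule
E_var_var = 0.0                              # classic war-of-attrition result
print(round(E_fix_fix, 2), round(E_var_fix, 2), E_var_var)   # -1.4 -0.7 0.0
```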
Make both contestants 'var'. Run the simulation using different numbers of contests. Remember that the histogram output is total quits (both contestants added together) and the theoretical line is for expected quits for a single contestant playing var.
Is there good agreement between the theoretical line and the actual data? If not, explain what you think causes any differences you observe.
Here, the agreement between quits and the theoretical plot is not good. That is because either contestant can end the contest: the chance that one or the other quits in a given interval is the sum of the chances that each would quit by itself, so contest-ending quits accumulate according to (2/V)exp(-2x/V) rather than the single-contestant curve (1/V)exp(-x/V). The sketch below illustrates the effect.
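A sketch of this doubling effect (assuming both contestants' quits follow the exponential rule with V = 1):

```python
import math, random

V, n = 1.0, 50_000
# A var-vs-var contest ends at the EARLIER of the two quitting costs, so
# contest-ending quits follow (2/V)exp(-2x/V), not (1/V)exp(-x/V).
ends = [min(random.expovariate(1 / V), random.expovariate(1 / V)) for _ in range(n)]
observed = sum(e < 0.5 for e in ends) / n
predicted = 1 - math.exp(-2 * 0.5 / V)   # 0.632, vs 0.393 for a lone contestant
print(round(observed, 3), "observed;", round(predicted, 3), "predicted")
```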