5 modeling misconceptions
Part 3 of 6, Modeling for metascientists (and other interesting people).
In Part 1, I covered the misconception that models are mainly useful if you want to generate precise predictions. In fact, models have many uses, a key one being that models promote transparency.
In Part 2, I covered the misconception that models must be realistic to be useful. In fact, models are unrealistic by design: just like maps, models leave out details to provide a clearer picture of the things that really matter, and different types of maps (models) are useful for different purposes.¹ ²
In Part 3, I’ll go over a few more misconceptions. My goal here is constructive: I love that metascientists are seriously discussing theory and models, and hope that this post will, in some way, contribute to building a field that has an even better understanding and appreciation of formal theoretical work.
Misconception 1: It’s bad when a model’s results are a consequence of its assumptions.
I’m afraid…that the results struck me as a bit inevitable, given the assumptions of the model.
- Anonymous Reviewer
Models are logical machines for deducing conclusions from assumptions.³ In other words, unless there’s a mathematical error or a bug in the code, the results of a model will necessarily be a consequence of its assumptions. That’s what it means for an argument to be logically coherent.
If I were in a particularly snarky mood, I might be tempted to answer the reviewer by saying,
“OK, fair point. So would you prefer that my results didn’t logically follow from my assumptions?”
I could certainly do that. Start by assuming 4 eggs. Add 1 cup of flour, 3 apples, 1/2 cup of sugar, and a pinch of baking powder. Fold (don’t stir). Put everything in the oven at 200°C, wait for 45 minutes and…out comes a Nature paper! But that would be weird.
A more generous interpretation of “the results are inevitable, given the assumptions of the model” might be something along the lines of, “the model was set up to guarantee the conclusion that you desired.”
In other words, the reviewer is arguing that a form of confirmation bias took place. There were many ways that you could have built the model, but you chose to build it in this exact way to produce the result you wanted; and had you made different assumptions, your conclusion would have been different.
This is a more reasonable point. In fact, I think that this happens all the time.
Somebody wants to show something and builds a model to generate that result. And this model is different than the model they would have built had they wanted to know something [I first heard this phrasing from Kim Hill at ASU, I think].
If you want to show something, you make whatever assumptions are necessary to generate that outcome. If you want to know something, you make the most reasonable assumptions possible, or build a range of models with different assumptions to understand the conditions in which an outcome emerges.
The original point still stands, though: given a set of assumptions, a model’s results are inevitable. Oftentimes, models are useful precisely for this reason, by revealing which assumptions are sufficient to generate a result. Then, once the assumptions are transparent, we can more easily criticize the original idea: “well, you need to assume x, y, and z to get that result, but these assumptions are never met in the system that I study, so, meh.”
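To make the “show vs. know” contrast concrete, here’s a minimal sketch in Python. It’s a made-up toy, not any published model, and the names (`toy_model`, `benefit`, `cost`, `noise`) are placeholders: instead of hand-picking the one set of assumptions that delivers a desired result, we sweep a grid of assumptions and record which combinations are sufficient for the result to emerge.

```python
# A minimal, hypothetical sketch: sweep a toy model over a grid of assumptions
# and record which combinations are sufficient for the result of interest.
# The "model" is a stand-in inequality, not any published model.
import itertools

def toy_model(benefit, cost, noise):
    """Toy stand-in: the qualitative result emerges when benefits outweigh noisy costs."""
    return benefit > cost * (1 + noise)

benefits = [0.5, 1.0, 2.0]
costs = [0.5, 1.0, 2.0]
noises = [0.0, 0.5, 1.0]

grid = list(itertools.product(benefits, costs, noises))
sufficient = [combo for combo in grid if toy_model(*combo)]

print(f"{len(sufficient)} of {len(grid)} assumption combinations produce the result:")
for benefit, cost, noise in sufficient:
    print(f"  benefit={benefit}, cost={cost}, noise={noise}")
```

Trivial as it is, this is the “want to know” workflow: the grid, not the modeler, decides which assumptions turn out to be sufficient.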
Misconception 2: If a model shows something, then that thing must be true.
This might seem like a dumb one, but I encounter this all the time.
A model gets published that finds something — say, that “selection for high output leads to poorer methods and increasingly high false discovery rates,” ⁴ or that “scoop protection promotes larger sample sizes.” ⁵ Soon enough, people begin to interpret this result in the broadest way possible, talking about it as if it were a general theoretical truth.
The problem is, just as with empirical work (where any single experimental paradigm or operationalization of a theoretical construct can only tell us so much, and where we care a lot about the mapping between theory and operationalization),⁶ ⁷ any single theoretical model provides just one of many possible instantiations of an idea.
So what do we typically learn from a theoretical model?
Well, we learn that, given this specific way of carving up the world, these are the logical implications; we get proofs of concept and “how-possibly” scenarios ⁸ ⁹ of how phenomena may arise; and we gain a better understanding of alternative ways to carve up the world that potentially capture other aspects of the idea.
So just as we should be humble about what we can infer from any single experiment, we should typically be humble about the extent to which any single model can reveal a general theoretical truth.
For any phenomenon, there are many reasonable modeling approaches, each of which makes different assumptions, explores a different range of conditions, and so on. This means that, to overcome the limitations of any single model, we must rely on many models of the same phenomenon, what Scott Page refers to as “many-model thinking.”¹⁰
If a wide range of models converge upon the same qualitative conclusions, then we gain confidence in the generality of a result. On the other hand, if different approaches produce different results, then we learn something about the conditions in which a phenomenon arises. We can then use this information to figure out which conditions better correspond to empirical reality, or to think about counterfactual or “what would happen” scenarios.
All of this provides a strong argument for diversity: the more types of models we keep in our toolkit, the less likely we are to be blinded by the limitations of any single one.
Misconception 3: You can show anything with a model, so models are useless.
At the other extreme, there’s another risk: correctly recognizing that any one model is limited, and then throwing the baby out with the bathwater and discounting models altogether, because it’s always possible to build a model that generates some result.
Sure, “can we possibly get a result?” can be a useful question (and oftentimes the answer is yes, although it’s hard to be 100% certain without a model).
But typically, more interesting questions are “what assumptions are necessary or sufficient to get a result?” or “how large is the range of conditions in which a pattern emerges?”
Good luck answering these without formal models : )
As one example, a prominent hypothesis in developmental psychology is that when environments are unpredictable, it is evolutionarily adaptive to bet-hedge by producing offspring that vary in their level of plasticity (some offspring can adjust their phenotypes to environmental conditions, other offspring less so).
To evaluate the logic of this hypothesis, Willem Frankenhuis, Karthik Panchanathan and Jay Belsky built a model.¹¹ Their model demonstrated that the hypothesis was logically coherent, but only under restrictive conditions: environmental variation occurs temporally (not exclusively spatially), fitness effects are large, and the costs to phenotype-environment mismatch exceed the benefits of being well matched.
In other words, the model answered all of our questions:
- Can we possibly get the result that it is adaptive to produce offspring with varying levels of plasticity? Yes.
- What assumptions are necessary or sufficient to get the result? Well, we don’t know which ones are necessary (it’s just one model). But we do know which ones are sufficient: parents can produce plastic or fixed offspring, payoffs are symmetric (“safe-specialists in a safe environment attain the same fitness as danger-specialists in a dangerous environment”), environmental parameters are extrinsic, and so on.
- How large is the range of conditions in which bet-hedging via differential plasticity emerges? Not very (see the toy sketch below for one intuition about why the temporal-variation condition matters).
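To get some intuition for why the temporal-versus-spatial condition matters so much, here’s a minimal toy sketch in Python. To be clear, this is not the Frankenhuis, Panchanathan, and Belsky model, and it uses fixed specialists rather than variation in plasticity; all of the payoff numbers and variable names are made up. It only illustrates the textbook bet-hedging logic: when the environment varies across generations, long-run growth tracks the geometric mean of fitness, so a variance-reducing mixed brood can beat a specialist; when variation is purely spatial within a generation, the arithmetic mean is what counts and the advantage disappears.

```python
# A minimal toy sketch (NOT the Frankenhuis et al. model): why temporal vs.
# spatial environmental variation matters for bet-hedging. Payoffs are made up:
# a specialist gets 2.0 in its matching environment and 0.25 when mismatched;
# a 50/50 mixed brood always averages the two.
from statistics import geometric_mean

matched, mismatched = 2.0, 0.25
specialist = [matched, mismatched]               # fitness in env A vs. env B
mixed_brood = [(matched + mismatched) / 2] * 2   # half the brood matches either way

# Temporal variation: the whole population faces one (random) environment per
# generation, so long-run growth tracks the GEOMETRIC mean across generations.
print("Temporal variation (geometric mean fitness):")
print(f"  specialist : {geometric_mean(specialist):.3f}")
print(f"  mixed brood: {geometric_mean(mixed_brood):.3f}")

# Spatial variation: both environments occur in every generation, so expected
# growth tracks the ARITHMETIC mean, and reducing variance buys nothing.
print("Spatial variation (arithmetic mean fitness):")
print(f"  specialist : {sum(specialist) / 2:.3f}")
print(f"  mixed brood: {sum(mixed_brood) / 2:.3f}")
```

The made-up numbers don’t matter; what matters is that the same payoffs produce opposite verdicts depending on whether variation is temporal or spatial, which is exactly the kind of condition a model forces you to make explicit.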
Of course, after the model is built, there’s always room for the scientist who came up with the hypothesis to proclaim, “Vindication! I was right all along!”
But that’s not really fair.
Presumably, the original claim was meant to be general: the claim, “this idea is big, important, and will make you happier, healthier, and wealthier”¹² is a lot sexier than, “under a restrictive set of conditions that are unlikely to occur, this thing makes sense.”
This reminds me a bit of the “hidden moderator” debates that are now commonplace in psychology, due to the growing number of direct replications. In many cases, when a finding fails to replicate, the original authors argue that the failure was due to a hidden moderator. Of course, this is always a possibility. But at the very least, the replication demonstrates that the target phenomenon is relevant in a narrower range of conditions than was originally claimed.
Misconception 4: Models can’t test hypotheses.
You say at the outset that you set out to test an hypothesis, but…you use simulation rather than experimentation…
- Anonymous Reviewer
This is sort of true.
It’s true in the sense that models aren’t about empirical reality. As Hanna Kokko reminds us, “models do not investigate nature. Instead, they investigate the validity of our own thinking, i.e. whether the logic behind an argument is correct.”¹
But models can test the logic of hypotheses.
For example, say that I propose the hypothesis, “x follows from a and b.” You model my hypothesis and find, “um, no, x doesn’t follow from a and b.” You’ve sort of tested my hypothesis, haven’t you? I posited that a conclusion follows from some assumptions, when in fact it does not. Pre-modeling, we thought that the hypothesis was plausible. Post-modeling, we think it’s less plausible (and indeed, the hypothesis is false, at least in that specific form).
If that’s not “testing,” then I’m not sure what is.
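As a cartoon of what testing the logic of a hypothesis can look like, here’s a deliberately silly sketch in Python (a hypothetical example, not a real model): we encode the assumptions and the conclusion as Boolean functions and brute-force every case to check whether the conclusion actually follows. Real models use equations or simulations rather than truth tables, but the spirit is the same.

```python
# A cartoon of testing the logic of a hypothesis: does the conclusion really
# follow from the stated assumptions? (Hypothetical example, not a real model.)
from itertools import product

def follows(premises, conclusion, n_vars):
    """True if the conclusion holds in every case where all premises hold."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            print(f"Counterexample: {values}")
            return False
    return True

# Hypothesized: "x follows from assumption 1 ('a implies x') and assumption 2 ('a or b')."
premises = [lambda a, b, x: (not a) or x,   # assumption 1: a implies x
            lambda a, b, x: a or b]         # assumption 2: a or b
conclusion = lambda a, b, x: x

print(follows(premises, conclusion, n_vars=3))
# Prints a counterexample (a=False, b=True, x=False): the conclusion does not follow.
```

Pre-modeling, the hypothesis sounded plausible; post-brute-force, we have a counterexample in hand.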
Think back to the model of differential plasticity in Misconception 3. The model found that differential plasticity could evolve, but only in a narrow range of conditions. For empiricists, this type of result is useful: no need to waste time collecting data in all those other conditions. Non-empirical testing of this sort makes empirical research more efficient, and is a regular part of scientific inquiry in other fields.¹³ For examples from biology and physics, take a look at “Are theoretical results ‘Results’?”¹⁴
Misconception 5: A model must generate novel results to be useful.
None of the results surprise, given how the model was constructed.
- Anonymous Reviewer
Hopefully, this misconception is obvious by now.
But just in case: models have many functions.¹⁵ Sure, generating novel results is nice. But just as we have begun to appreciate that methodologically sound empirical studies are worth publishing regardless of their results (e.g., as a registered report¹⁶), we might also consider extending this appreciation to theoretical models.
Leo’s Epilogue
That’s it for me.
For the next 3 posts, I have asked a few modelers from the evolutionary sciences to provide their perspectives on this series and on modeling in general.
Although these scholars do not necessarily represent the entire spectrum of views on modeling, each of them has a richer understanding than I do. I am grateful for their contributions and look forward to reading all of the smart things that they have to say.
With a bit of luck, this series will contribute to a deeper understanding of theoretical modeling in our field, and will provide a broader perspective on the role that models play in science in general.
Perhaps this is overly optimistic, but maybe, just maybe, one day you’ll be working on a model and have a discussion that goes something like:
Hey, didn’t that Leo guy write something about how this was a misconception?
Ah, yeah, you’re right! Great that you pointed that out. Let’s hold off and think about how to fix this. Science!
More likely though, it will go something like:
Hey, didn’t that Leo guy write something about how this was a misconception?
Eh, who cares what that guy thinks anyways. We already wrote the paper.
Yeah, but he kind of has a point.
Jesus. OK, but don’t change anything. Just add a sentence about how some scientists have argued for a different perspective. And cite his blog, in case we get him as a reviewer.
Isn’t it wrong to cite someone just because they might be a reviewer?
Aww, that’s cute. Principles. When I was your age…eh, never mind. Just cite him.
- Kokko, Hanna. Modelling for field biologists and other interesting people. Cambridge University Press, 2007. p.7. https://doi.org/10.1017/CBO9780511811388
- Smaldino, Paul E. “Models are stupid, and we need more of them.” Computational social psychology (2017): 311–331. https://books.google.nl/books?hl=en&lr=&id=gjwlDwAAQBAJ&oi=fnd&pg=PA311&dq=smaldino+models+are+stupid&ots=K1MCBe1kq8&sig=QAClKYlxKvEOUjg_Mu4cZSLbPhE&redir_esc=y#v=onepage&q=smaldino%20models%20are%20stupid&f=false
- Gunawardena, Jeremy. “Models in biology: ‘accurate descriptions of our pathetic thinking’.” BMC biology 12.1 (2014): 1–11. https://doi.org/10.1186/1741-7007-12-29
- Smaldino, Paul E., and Richard McElreath. “The natural selection of bad science.” Royal Society open science 3.9 (2016): 160384. https://doi.org/10.1098/rsos.160384
- Tiokhin, Leonid, Minhua Yan, and Thomas JH Morgan. “Competition for priority harms the reliability of science, but reforms can help.” Nature human behaviour (2021): 1–11. https://doi.org/10.1038/s41562-020-01040-1
- Yarkoni, Tal. “The generalizability crisis.” Preprint, PsyArXiv (2019). https://doi.org/10.31234/osf.io/jqw35
- Landy, Justin F., et al. “Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.” Psychological Bulletin 146.5 (2020): 451. https://psycnet.apa.org/doi/10.1037/bul0000220
- Rosenstock, Sarita, Justin Bruner, and Cailin O’Connor. “In epistemic networks, is less really more?.” Philosophy of Science 84.2 (2017): 234–252. https://doi.org/10.1086/690717
- Frey, Daniel, and Dunja Šešelja. “What is the epistemic function of highly idealized agent-based models of scientific inquiry?.” Philosophy of the Social Sciences 48.4 (2018): 407–433. https://doi.org/10.1177%2F0048393118767085
- Page, Scott E. The model thinker: What you need to know to make data work for you. Hachette UK, 2018. https://books.google.nl/books?hl=en&lr=&id=4a5PDwAAQBAJ&oi=fnd&pg=PT8&dq=scott+page+model+thinker&ots=Cp340Z4kpP&sig=WwNa8c8MFeEThq93_nEvxlzKVnM&redir_esc=y#v=onepage&q=scott%20page%20model%20thinker&f=false
- Frankenhuis, Willem E., Karthik Panchanathan, and Jay Belsky. “A mathematical model of the evolution of individual differences in developmental plasticity arising through parental bet‐hedging.” Developmental science 19.2 (2016): 251–274. https://doi.org/10.1111/desc.12309
- Nettle, Daniel. “How my theory explains everything: and can make you happier, healthier, and wealthier.” Hanging on to the Edges: Essays on Science, Society and the Academic Life. Open Book Publishers, 2018. http://library.oapen.org/handle/20.500.12657/27512
- Scheel, Anne M., et al. “Why hypothesis testers should spend less time testing hypotheses.” Perspectives on Psychological Science (2020): 1745691620966795. https://doi.org/10.1177%2F1745691620966795
- Goldstein, Raymond E. “Point of View: Are theoretical results ‘Results’?.” eLife 7 (2018): e40018. https://doi.org/10.7554/eLife.40018
- Epstein, Joshua M. “Why model?.” Journal of artificial societies and social simulation 11.4 (2008): 12. http://jasss.soc.surrey.ac.uk/11/4/12.html
- Chambers, Chris, and Loukia Tzavella. “Registered reports: Past, present and future.” (2020). https://osf.io/preprints/metaarxiv/43298/download