Stabilizing simulations with asynchronous programming
There are many ways to check whether a simulation produces reasonable results beyond code-smell tests and verifying that the default assumptions are sensible (agent rationality, for example, is a strong assumption in many models). You can run the same scenario several times, or vary a single parameter, to check whether the results fluctuate too much between runs. For this task you can also employ the asynchronous programming techniques discussed previously, launching several asyncio tasks simultaneously to run the same model with slight differences.
In our case we keep the scenario of 20 agents doing business for 10 years (40 quarters), but change, for each execution, the LLM that makes the cooperate-or-cheat decision:
agents = 20
years = 10
models = ["cogito:8b","gemma3n:e4b","granite3.3:8b","qwen3:4b"]
runners = [SimRunner(agents, years...
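The pattern of launching one task per model and awaiting them together can be sketched as follows. This is a minimal, self-contained illustration, not the book's actual `SimRunner`: the `run_simulation` coroutine is a hypothetical stand-in that fakes a cooperation rate instead of calling an LLM, but the concurrency structure (one coroutine per model, all gathered at once) is the same.

```python
import asyncio
import random

async def run_simulation(model: str, agents: int, quarters: int) -> dict:
    # Hypothetical stand-in for a real simulation runner: a real one
    # would query the LLM each quarter; here we only simulate the work
    # and return a fake, deterministic-per-model cooperation rate.
    await asyncio.sleep(0.01)  # placeholder for LLM call latency
    rng = random.Random(model)  # seed with the model name for repeatability
    decisions = agents * quarters
    cooperations = sum(rng.random() < 0.6 for _ in range(decisions))
    return {"model": model, "cooperation_rate": cooperations / decisions}

async def main() -> list[dict]:
    agents, years = 20, 10
    models = ["cogito:8b", "gemma3n:e4b", "granite3.3:8b", "qwen3:4b"]
    # One task per model; asyncio.gather runs them concurrently and
    # returns results in the same order as the input list.
    tasks = [run_simulation(m, agents, years * 4) for m in models]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    for result in asyncio.run(main()):
        print(result["model"], round(result["cooperation_rate"], 3))
```

Because `asyncio.gather` preserves input order, each result can be matched back to the model that produced it, which makes comparing cooperation rates across LLMs straightforward.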