Computing The Climate
Happy Canada Day!
As a member of the Zephyr Foundation, I spend a fair bit of time thinking about transportation analysis and how to make it more open and transparent. At the same time, I am an academic who conducts computational research and thinks about how to do a better job of sharing data, code, and papers. On the paper front, I do an okay job. On the data and code… it’s a work in progress. A few years ago (Joe Castiglione tells me it was maybe 2015), there was a session at the Transportation Research Board (TRB) annual meeting on hurricane forecasting and how forecasters collaborate on model development. The implication was that we as transportation modelers could follow a similar path toward more collaborative and transparent modeling practices. My perspective on this question comes from experience in local government, consulting, and academia. I think there are two differences between hurricane and transportation demand forecasting that are relevant to this discussion (there are likely many more, but let’s focus on two to avoid a classic Hawkins multi-dimensional ramble).
The first difference is the spatial scope. Transportation demand models tend to be developed at the metropolitan or state scale, which has led to a competitive market among consulting firms. A hurricane forecast model, by contrast, is essentially a natural monopoly: the affected region is large, and competing commercial models would not be feasible because only one contract would be awarded by a federal agency. Granted, there are competing research models for hurricanes, which we do not tend to see in transportation, where competition occurs among firms but across metropolitan areas (with minimal comparison of models within a single region). Consider the excellent history of the transportation forecasting field in the United States by Konstantinos Chatzis. While it includes many prominent academics, it is predominantly a history of consultancy and commercial contracts rather than academic grants to develop general model systems (with the exception of TRANSIMS, POLARIS, and a few other examples) - note: while I love the POLARIS developers, it is still a largely commercial package in that model applications rely on grant dollars for development within a specific region and the codebase is not wholly available open access.
The second difference is the politics of transportation. I am not an expert on hurricanes and disaster management, but I think we can be fairly confident that politics has less influence on the location of hurricanes (at the risk of a digression - politics certainly has some indirect effect on the frequency and severity of hurricanes through insufficient action on anthropogenic climate change). Transportation investment is an extremely political process, particularly at the scale assessed by transportation demand models. These politics lead to distorted incentives, legal battles, and other messiness.
Related to hurricane forecasting, climate models seek to forecast climatic conditions in response to technology and policy decisions, as well as economic and health impacts, depending upon the particular model focus. Steve Easterbrook at the University of Toronto provides an excellent overview of this field in Computing The Climate. I’ve yet to shell out for the full book, but Chapters 1, 2, and 9 are available online. The book is highly readable, including a nice history and summaries of interviews with modeling teams from around the world. As stated by Steve:

> The question they were called upon to answer wasn’t whether the planet would warm in response to rising carbon dioxide emissions – by the late 1970s, that was no longer in dispute in the scientific community. The question was: how certain can we be about the numbers?
Hey, this sounds a lot like the uncertainty movement in transportation demand modeling! We’re just 50 years late to the game. One impetus for these efforts to understand climate uncertainty was a report from the JASON group (a group of top physicist advisors to the US president on nuclear security matters, named for Jason and the Argonauts - I wasn’t part of the team, being born many years after its inception). National Academy of Sciences (NAS) panels were established and many discussions were held as to how best to model climate systems and quantify uncertainty. While NAS has empaneled many committees on transportation issues, to my understanding they have not had the same collectivizing effect on our field.
One reason I like Steve’s book is that he is not a climate scientist. He is a computer scientist interested in how large teams of scientists, largely untrained in production-grade software development practices, develop complex computer programs and manage quality control. This topic is generalizable and very useful to understand! Something I found interesting from the start was the Met Office approach in the UK. Their climate model is built on the same code base as their weather model. The weather model must be run daily to provide forecasts to the UK government and public. As such, its code is constantly reviewed, optimized, and validated against observations. This software setup means the climate model code also exhibits a high level of consistency. The parallel I could draw to transportation is that we produce daily road condition reports, short-term forecasts at intersections, Waze routing recommendations, etc. However, these models are decentralized and variable; they are not a national forecast produced by one centralized agency. Within a metropolitan area, I could see a structure similar to that of the Met Office being feasible. We have loads of new data and model providers - StreetLight, INRIX, Replica, etc. - whose feeds could be ingested into a short-term forecasting and analysis software framework. That same forecasting framework could be extended to provide long-range forecasts at greater spatial aggregation and with additional modules for auto ownership and other long-term decisions. It’s not a perfectly fleshed out idea, but it is something (see the sketch below).
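To make the shared-codebase idea a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the `NetworkState` type, the `assign_traffic` core, and the two entry points are stand-ins, not any existing package - but it shows the shape of the idea: a daily forecast product and a long-range forecast product exercising the same core model code, so the long-range model inherits the constant scrutiny the daily product receives.

```python
# Hypothetical sketch: one core model shared by daily and long-range products.
# All names (NetworkState, assign_traffic, ...) are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class NetworkState:
    """Link volumes and travel times for one forecast period."""
    volumes: dict[str, float]       # link id -> vehicles/hour
    travel_times: dict[str, float]  # link id -> minutes


def assign_traffic(demand: dict[str, float], state: NetworkState) -> NetworkState:
    """Core assignment routine - the code both products share.

    A real implementation would run an equilibrium assignment; here we
    just apply a toy volume-delay relationship for illustration.
    """
    new_volumes = {link: state.volumes.get(link, 0.0) + trips
                   for link, trips in demand.items()}
    new_times = {link: state.travel_times.get(link, 1.0)
                 * (1 + 0.15 * (v / 1800.0) ** 4)
                 for link, v in new_volumes.items()}
    return NetworkState(new_volumes, new_times)


def daily_forecast(observed_counts: dict[str, float],
                   state: NetworkState) -> NetworkState:
    """Short-term product: run every day against observed data (this is where
    StreetLight/INRIX/Replica-style feeds would be ingested), keeping the
    shared core constantly reviewed and validated."""
    return assign_traffic(observed_counts, state)


def long_range_forecast(synthetic_demand: dict[str, float],
                        state: NetworkState, years: int) -> NetworkState:
    """Long-range product: the same core, stepped over many years, with room
    for slow-moving modules (auto ownership, land use) to adjust demand."""
    for _ in range(years):
        # Toy 1%/year demand growth; a real module would sit here.
        synthetic_demand = {k: v * 1.01 for k, v in synthetic_demand.items()}
        state = assign_traffic(synthetic_demand, state)
    return state
```

The design point is simply that `assign_traffic` is the only place assignment logic lives: any bug found while validating yesterday’s daily forecast is automatically a fix to the long-range model as well.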
Charney Sensitivity - You’ll have to read Steve’s book to get the full history, but Jule Charney was an early climate modeler - as in, he was tasked by John von Neumann in 1948 to build a weather forecasting program. The concept of Charney Sensitivity is simple: double the \(CO_2\) in the atmosphere and see what happens a few decades into the future. Do models agree? The first issue we would face in transportation modeling is that not all models follow an evolutionary, path-dependent simulation approach. Many models forecast population, land use, economics, and transportation with separate and loosely coupled model systems. Furthermore, many of these models simply provide forecasts for a subset of future years (say 10, 25, and 40 years into the future). It could be difficult to compare these models. One “elephant in the room” referenced above is that most models are developed by consultants. Backcasting to check the accuracy of a model or comparing your model against competitor models may not be good for business. However, we can take a different perspective, as Charney and his team did when tasked with the problem of uncertainty quantification by US President Jimmy Carter in the 1970s. While each model may provide a single output that will vary as a function of slight differences in model formulation and setup, the combination of results provides an uncertainty band on the climate (transportation) future. A toy illustration of that ensemble idea follows.
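Here is a short Python sketch of what that might look like in our field. The model names and forecast values are invented for demonstration: imagine several independently developed regional models, each run under a common “doubled demand” scenario (our analog to doubled \(CO_2\)), with the spread across their outputs reported as an uncertainty band rather than each single number being defended on its own.

```python
# Toy ensemble comparison: hypothetical regional VMT forecasts
# (millions of vehicle-miles/day) under one shared doubling scenario.
import statistics

forecasts = {
    "model_a": 42.1,  # activity-based simulation (invented value)
    "model_b": 38.7,  # four-step trip-based model (invented value)
    "model_c": 45.3,  # loosely coupled land use + travel model (invented value)
    "model_d": 40.2,  # another independent implementation (invented value)
}

values = list(forecasts.values())
mean = statistics.mean(values)
spread = statistics.stdev(values)

# Report the ensemble as a band, in the spirit of the Charney report:
# no single model is "the" answer; the agreement (or not) is the finding.
print(f"Ensemble mean: {mean:.1f} million VMT/day")
print(f"Uncertainty band (mean ± 1 sd): {mean - spread:.1f} to {mean + spread:.1f}")
print(f"Full range across models: {min(values):.1f} to {max(values):.1f}")
```

Nothing about this requires the models to share code or even a common structure; it only requires that they run the same scenario and publish comparable outputs, which is precisely the part our consultant-driven market makes hard.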