Earthquake science’s imminent ML-driven paradigm shift?

Updated: Nov 14, 2019

In layman's terms, what do you do?

Arnaud Mignan “tries to understand past catastrophes, their physics and statistics, to hopefully improve the forecast and mitigation of future, potentially damaging, events.”


“My main expertise is on earthquakes but I have projects in all sorts of risks, from asteroid impact to cyber-risk via domino effects. I also look at geo-energy and carbon capture projects and their potential risks in the wider context of climate change.”


He gave a simple example from research of his a few years ago: an exercise in ‘reasoned imagination’ with high school natural-science teachers, illustrating how an earthquake can cascade into extreme consequences.

(Mignan et al. (2016), 'Using reasoned imagination to learn about cascading hazards: a pilot study', Disaster Prevention and Management, https://www.emerald.com/insight/content/doi/10.1108/DPM-06-2015-0137/full/html)

He references one particular scenario from the research.


"A large aftershock triggered a dike breach, which led to flooding. When receding, the flood in turn provoked a landslide on an unstable slope, which cut vital sections of the infrastructure networks of the area. With no water available, multiple gas leaks and roadblocks limiting access to first responders, fires quickly propagated. It led to a major industrial accident, the interruption of its business activities, and in consequence to a general slowdown of the regional economy, which is highly dependent on this industry. In this situation, riots and lootings followed.”


From an earthquake to riots. Preventing that sounds good.



I mentioned a recent Medium post of his, in which he argued that ML models may not be as effective in his line of work, catastrophe risk modeling, as is often suggested. (It was actually this post that first drew my attention to Arnaud, precisely because it was critical of machine learning.)


“I'm a huge fan of machine learning. I just meant that far simpler ML techniques may often do as well as - if not better than - complex, fancy models.”

He goes on to describe a general trend in science of making things more complex than they need to be, a point with which I think many readers will agree.


“It’s not always justified. It might be easier to sell a complex model to a high-impact journal than [to sell] a boring logistic regression. Although we did just that in Nature [with his colleague Marco Broccardo]! It is also tempting to surf on the AI wave and oversell fancy ML models to clients.”
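His broader point is easy to demonstrate. Below is a minimal sketch (mine, not his Nature study) that pits a plain logistic regression against a gradient-boosted ensemble on a small, noisy, synthetic ‘event / no event’ dataset; in data-poor settings like this, an honest cross-validated comparison often shows a much smaller gap than the extra complexity would suggest.

```python
# Illustrative only: synthetic, imbalanced "event / no event" data, not real catastrophe records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# A small, noisy dataset, loosely mimicking the data-poor settings common in risk work.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           flip_y=0.1, weights=[0.8, 0.2], random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    # Cross-validated ROC AUC: a fair way to compare a simple baseline with a fancier model.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:>20}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```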


To illustrate his point, he references a recent NYT article arguing against ‘high tech disaster response’. The basic takeaway is that the approaches to risk in machine learning and in catastrophe risk modeling are misaligned, and that getting it wrong in risk management costs lives.


https://www.nytimes.com/2019/08/09/us/emergency-response-disaster-technology.html



I asked if he sees this as a general problem in catastrophe risk modeling: machine learning touted as able to solve something it can't.


He believes that, as with computer science, the main issue is “to understand how to deal with the model bias-variance trade-off”.


“I’m a proponent of Occam's Razor and First Principles but I find it ironic that complex models are easier to build - since they provide more flexibility (variance) to the detriment of rules (model bias) that one would have to define otherwise.”
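The trade-off he refers to is the textbook one, and a toy example (unrelated to his data) makes the irony concrete: a rigid low-degree polynomial underfits noisy observations (high bias), while a very flexible high-degree polynomial chases the noise (high variance), and the flexible model is indeed the easier one to ‘build’, since you barely have to specify any rules.

```python
# Toy bias-variance illustration: a few noisy points, rigid vs. flexible polynomial fits.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)                             # clean underlying signal

for degree in (1, 3, 11):
    coeffs = np.polyfit(x, y, degree)      # higher degree = more flexibility (variance)
    y_hat = np.polyval(coeffs, x_test)
    mse = np.mean((y_hat - y_true) ** 2)   # error against the clean signal
    print(f"degree {degree:>2}: test MSE = {mse:.3f}")
```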


I find it interesting that in this particular field, itself a spin-off of the study of intelligence, growing complexity does appear to yield solid progress.


“So far, many firms in the risk business (such as the insurance industry) have tried to develop an AI-centric view on risk, but deep learning requires huge amounts of data and I'm not sure we have enough.”


“Risks evolve over time and extreme events are too rare to be correctly represented in existing databases. So, ML's potential is huge but it must be intertwined with physical and engineering discoveries.”


I ask what he thinks will have the greatest potential impact for ML in his line of work, and where our efforts should be focused.

“The area of reinforcement learning likely has the greatest potential, as it is well suited for real-life conditions. It is by definition ‘decision-making’ but optimized at its best.”

What he’s saying makes me think that machine learning fractures ‘decision making’ itself into sub-decision increments. In a sense, ‘utopia’ in machine learning is when there are no decisions left to be made. Everything is organised, handled, sent, delivered and even perhaps experienced at a sub-decision level.


“Reinforcement learning should therefore be applied at all stages of the risk process, from optimization of data acquisition to risk mitigation and urban planning.”


“Although it has a long history in real-life optimization problems, not so in catastrophe risk. Algorithmic risk governance is one direction I'm taking and my first project was on the site optimization of geothermal plants - depending on highly uncertain seismic feedback and stakeholder risk aversion. But there is no reinforcement learning in there yet.”


(Mignan et al. (2019), 'Including seismic risk mitigation measures into the Levelized Cost Of Electricity in enhanced geothermal systems for optimal siting', Applied Energy, https://www.sciencedirect.com/science/article/pii/S0306261919301230)
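To make the idea concrete, here is a deliberately toy, hypothetical sketch of what reinforcement-learning-style decision-making could look like in a risk setting. The states, costs and probabilities are invented for illustration, and, as he notes above, none of his current work uses reinforcement learning yet.

```python
# Hypothetical toy example: tabular Q-learning for a "monitor vs. mitigate" decision
# under a drifting hazard level. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 3, 2       # hazard level: 0 low, 1 elevated, 2 high; action: 0 monitor, 1 mitigate
MITIGATION_COST, EVENT_LOSS = 1.0, 20.0

def step(state, action):
    """Return (next_state, reward) under invented hazard dynamics."""
    if state == 2 and action == 0 and rng.random() < 0.5:
        return 0, -EVENT_LOSS    # unmitigated high hazard: the event hits, hazard resets
    reward = -MITIGATION_COST if action == 1 else 0.0
    next_state = int(min(2, max(0, state + rng.choice([-1, 0, 1]))))  # hazard drifts randomly
    return next_state, reward

# Plain tabular Q-learning with epsilon-greedy exploration.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
state = 0
for _ in range(20000):
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Learned policy per hazard level (0=monitor, 1=mitigate):", Q.argmax(axis=1))
```

Even in a caricature like this, the learned policy typically ends up paying the mitigation cost only when the hazard is high, which is the kind of cost-aware, sequential decision-making he has in mind.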


Perhaps this field is ready for innovation?


I ask him what he thinks is the most exciting thing in machine learning.


“Without hesitation, reinforcement learning as illustrated by DeepMind in 2014 when an algorithm was able to play Atari games like no human being could.”