A New Paradigm in Decision Making?

Paper: Quantifying Topological Uncertainty in Fractured Systems using Graph Theory and Machine Learning

Authors: Gowri Srinivasan, Jeffrey D. Hyman, David A. Osthus, Bryan A. Moore, Daniel O’Malley, Satish Karra, Esteban Rougier, Aric A. Hagberg, Abigail Hunter & Hari S. Viswanathan

Geophysics problems are as difficult as Nobel Prize-winning physics problems.

Dr. Jérôme A. R. Noir

This quote from Dr. Jérôme Noir has stayed with me throughout my career. The idea: while physicists face extreme mathematics, they also have extremely precise data for unknown phenomena; geoscientists, by contrast, must find vital solutions for known phenomena using just a few data points scattered across a planet. With so little data, how can complex problems in geoscience be solved? And how do we assess the risk of being wrong? An uncertainty quantification framework recently developed by researchers at Los Alamos National Laboratory uses machine learning to help geoscientists arrive at quality decisions using limited data.

One example of a problem with limited data is fluid flow through fractured rock, which is important for assessing geothermal resources, drainage requirements for tunnels, and groundwater usage and remediation. Here, the behavior we wish to predict is at the meter-to-hectare scale. Behavior at this large scale is governed by properties at every level, down to the micrometer scale. But because we cannot take apart a mountain (or a reservoir) and put all the pieces through a 3D scanner, this key information is only available statistically, from a limited number of samples.

By quantifying the uncertainty of our final results, we can estimate the range of likely outcomes and the risks associated with them. However, such an uncertainty quantification requires us to run many additional models with a variety of small changes to the input parameters (which we base on those limited samples). This used to be so computationally intensive that only a few models were run, and then decisions had to be made using that limited information. I write “used to”, as we can now do much better using the recent framework from Los Alamos.
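
To make this concrete, here is a minimal sketch of that kind of Monte Carlo uncertainty propagation in Python. The flow_model function, the input distributions, and all numerical values are hypothetical placeholders of my own, not the models or data from the paper; in a real study, each call would be a full simulation that can take hours on a supercomputer.

```python
import numpy as np

rng = np.random.default_rng(42)

def flow_model(permeability, fracture_density):
    """Hypothetical stand-in for an expensive flow simulation.
    Returns a scalar quantity of interest (illustrative only)."""
    return 1.0 / (permeability * fracture_density)

# Input distributions inferred from a handful of field samples (invented values).
permeability = rng.lognormal(mean=-12.0, sigma=0.5, size=1000)
density = rng.normal(loc=2.0, scale=0.3, size=1000)

# Propagate the input uncertainty through the model, one run per sampled input.
outcomes = np.array([flow_model(k, d) for k, d in zip(permeability, density)])

# Summarize the range of likely outcomes and the associated risk.
print("median:", np.median(outcomes))
print("5th-95th percentile:", np.percentile(outcomes, [5, 95]))
```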

This framework combines machine learning with a variety of modelling approaches to increase the accuracy of uncertainty quantification at the same, or even lower, cost. The trick is to use a simplification in the modelling that vastly reduces the computational cost, and then to correct the result using knowledge obtained from more expensive models. These expensive models, referred to as high-fidelity models, are highly accurate but need supercomputers to run. The lowest-fidelity methods, which require only milliseconds on a laptop, focus on the most impactful aspect of these flow problems: the connectivity between the fractures.
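
The general recipe can be sketched in a few lines of Python. Everything below is a toy stand-in of my own making: the paper uses graph-based simulations of fracture networks and more sophisticated machine-learning models, whereas this sketch uses synthetic numbers and a plain linear regression, purely to illustrate the idea of correcting a cheap model with a handful of expensive runs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Thousands of cheap, low-fidelity estimates of the quantity of interest
# (synthetic numbers standing in for graph-based simulations).
low_fi = rng.lognormal(mean=0.0, sigma=0.4, size=5000)

# A small number of cases are also run through the expensive high-fidelity
# simulator, giving paired (low, high) results to learn from (synthetic here).
paired_low = low_fi[:40]
paired_high = paired_low * 1.3 + rng.normal(0.0, 0.05, size=40)

# Learn a correction that maps low-fidelity output towards high-fidelity output.
correction = LinearRegression().fit(paired_low.reshape(-1, 1), paired_high)

# Apply the learned correction to the entire cheap ensemble.
corrected = correction.predict(low_fi.reshape(-1, 1))

print("uncorrected 95th percentile:", np.percentile(low_fi, 95))
print("corrected   95th percentile:", np.percentile(corrected, 95))
```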

The beauty of the framework lies in how it combines these high-, intermediate-, and low-fidelity models. The total computational cost is optimized so that enough high- and intermediate-fidelity models are sampled to produce the required correction for the vast number of results from the lowest-fidelity models. This approach allows a thorough estimation of the actual distribution of outcomes, instead of relying on assumed distributions.
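
As a back-of-the-envelope illustration of that budget-splitting idea, the snippet below divides a fixed compute budget across the three tiers: a few expensive runs to anchor the correction, with the remainder poured into the cheapest tier. The per-run costs and the split are invented for illustration; the actual framework optimizes this allocation rather than fixing it by hand.

```python
# Illustrative per-run costs in CPU-hours (invented numbers, not from the paper).
cost = {"high": 500.0, "intermediate": 5.0, "low": 0.001}
budget = 20_000.0  # total CPU-hours available

# A simple fixed split: most of the spend anchors the correction,
# while the cheap tier supplies the sheer number of samples.
fractions = {"high": 0.5, "intermediate": 0.2, "low": 0.3}

# Number of runs each tier can afford under this split.
runs = {tier: round(budget * frac / cost[tier]) for tier, frac in fractions.items()}
print(runs)  # {'high': 20, 'intermediate': 800, 'low': 6000000}
```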

So why does my title have a question mark? Getting better information for the same cost should be enough incentive to propel such frameworks to widespread adoption. Unfortunately, there is a human factor at play in decision making. Not only do we need to use such frameworks, but we also need to make good use of their results when making decisions, and we humans need training to overcome our instinctive biases. We tend to be better at judging whether something is a definite yes or no than whether it is 74.3% or 75.1% likely. Sometimes, even our blood-sugar levels can influence our decisions.

Although the framework designed by the Los Alamos team cannot help us overcome all our failings, it does provide major support. Instead of providing petabytes of information to be integrated into the final decision, the framework optimizes its results towards specific decision-making criteria. We mere humans can then focus on the vital aspects of our problem, with their related uncertainties, while still having a vast array of models to dive into for specific details.

For geoscience projects, this increase in clarity could both allow earlier identification of risks and opportunities, and make it easier to convince investors/governments to fund the projects. In that sense, we may be on the brink of a vast improvement in decision making and I look forward to seeing where we go from here.

A New Paradigm in Decision Making? by Alex Hobé is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
