Created by Wendy Ware, Brenton Crawford and Tom Carmichael.
Why geometallurgy still matters, and how it is finally starting to deliver on its promise.
Ask a roomful of mining professionals what geomet means — and listen to the debates begin.
Process mineralogy? OBK? Block models? Ore classes? Testing workflows? A mindset? How do you pronounce that? Who cares?
The word shows up in strategy decks, annual reports, investor briefings, dashboards – even plant survey reports. Sometimes it’s front and centre. More often, it’s buried in appendices or dropped as a buzzword – unexamined, undefined and lacking substance.
That ambiguity is not harmless. When a concept as important as this loses definition, it loses force. It loses momentum. And geomet cannot afford that – not now. Because just as mining becomes more complex, more scrutinised, and more data-saturated, geomet should be stepping up, rather than fading into the background.
This article pulls it back into focus, discussing where geomet came from, how it has evolved and why the shift from testwork to prediction is not just a technical nice-to-have – it is essential for an industry on the cusp of transformation.
From field notes to frameworks
Long before anyone uttered the word geometallurgy, geologists were scribbling ash values into coal logs, and plant metallurgists were puzzling over stubborn ore zones that refused to float or grind as expected. The goal was always the same: to understand how rocks behave – not just where they came from or their defining features. In many ways, geomet was already being practised. It just did not have a name.
That changed in 1969, when Frank McQuiston and L.J. Bechaud Jr. introduced the term geometallurgy in a chapter on metallurgical sampling and testing for mine development purposes. Writing in the Seeley W. Mudd Series on Surface Mining, they emphasised the importance of understanding geology before initiating mine development – highlighting how geological variation can influence metallurgical performance. Decades before the field was formally recognised, this was an early call for integration; a signpost for what geomet would eventually become.
Figure: First documented use of the word “Geo-metallurgy” (McQuiston and Bechaud, 1969).
The 1980s and 1990s were not just a turning point for digital technology; they also marked a quiet but critical shift in how the mining industry understood rocks. Many of the foundational ideas behind geometallurgy were seeded during this time.
This was the era when computers became embedded in the technical backbone of mining. Tools like the SGS Genesis suite – SectCAD, PolyCAD, and Geostat – emerged as some of the earliest geological modelling software built for supercomputers. At the same time, automated mineralogy systems like QEMSCAN and MLA started gaining traction. Backed by CSIRO and JKTech, and adopted by companies such as Anglo American and Rio Tinto, they opened up new ways to quantify mineralogical complexity.
These technologies enabled a fundamental reframing: rocks were not just mined, they were modelled, measured, and managed.
Several commodities quietly led the charge:
- Coal: Long before “geomet” became a recognised term, coal operations were routinely logging variables like ash, yield, and calorific value, and tying them into spatial models for mine planning. It was geomet by practice, if not by name.
- Heavy mineral sands: In the 1990s, workflows began linking mineral assemblage, grain size, and processing behaviour directly to resource models – a shift that blurred the boundary between geology and metallurgy.
- Copper: At Codelco, Pedro Carrasco and his team built a structured framework for metallurgical prediction, underpinned by sound sampling theory, robust testwork, and tight integration with mine planning.
- Olympic Dam: Under Kathy Ehrig’s leadership, the operation developed what is still considered a global benchmark in mineralogical integration, combining detailed characterisation with predictive models of plant behaviour.
And there were undoubtedly others: teams and individuals who worked quietly, yet critically, behind the scenes. These early efforts – sometimes ad hoc, sometimes deliberate – all shared a common shift in mindset: they treated rock not just as a material to be mined, but as a system to be understood.
By the early 2000s, adoption began to accelerate. Research papers, conference series, and formal course offerings ramped up — and over the next two decades, geomet would be formalised, taught, and scaled across the mining world.
Figure: Analysis of publication data via Google Scholar.
Great science, late insight: where traditional geomet falls short
Most geomet programs are designed to support life-of-asset planning – the long-range NPV-driven strategies that shape cut-off grades, mine sequencing, recovery forecasts, and throughput profiles. They are built on discipline, structure, and collaboration across geology, metallurgy, and planning.
The typical workflow looks like this:

Figure: Traditional end-to-end geometallurgical process flow.
- Drilling and sampling: Usually embedded within resource definition programs – designed to serve both geological and metallurgical objectives.
- Ore characterisation: Mineralogical and metallurgical testwork defines how the rock behaves under processing conditions.
- Modelling and estimation: Test data are extrapolated spatially – often via domaining and kriging – into the block model.
- Planning and scheduling: Block-level metrics like throughput and recovery inform sequencing and life-of-asset valuation.
- Reconciliation: Plant performance is compared to modelled estimates – closing the loop and recalibrating expectations.
This system underpins long-term planning and investment decisions. But it is slow, expensive, and often disconnected from design gate reviews or day-to-day operations.
The risks are clear:
- Cadence mismatch when planning cycles run annually or longer, limiting responsiveness.
- Sparse testwork where limited spatial density leaves gaps in vertical and horizontal rock knowledge.
- Transitional boundaries when domains are treated as hard lines, even when geology is gradational or complex.
- Siloed data that’s fragmented across teams, tools, and update cycles.
- Technological lag as legacy testwork and models lose relevance over time.
- Operational risk when delays in reconciliation weaken ore control and short-term decision-making.
And the cost can be substantial:
- $100M–$500M in rework or retrofits
- $50M–$200M in delayed production
- $100M+ in lost value from poor ore control
Traditional geomet often lags behind operational reality. And as orebodies become harder to mine – and ESG expectations intensify – that lag becomes a liability.
Still, traditional geomet remains essential. It is our most complete record of how ore actually behaves in real-world systems. And it is the foundation predictive geomet must now learn from – faster, cheaper, and more adaptively.
Predictive geomet – smarter, faster and future-ready
So, what do we do when the question shifts, and the traditional geomet model cannot keep up?
We could do nothing. We could try to re-use the model anyway. We could abandon geomet altogether. Or we could start again – redrill, resample, retest – and risk missing the window entirely.
Or… we could do something smarter.
That is where predictive geomet enters the picture.
It is not just a new workflow. It is a new way of thinking; a system of systems designed to understand how rocks behave, and to do it in time to act.
This shift is being driven by real change: orebodies are getting deeper, lower grade, more variable. ESG expectations are rising. And digital capabilities are expanding fast, from real-time sensing and cloud storage to machine learning and automation. We now have more data than ever – and more pressure to make decisions with it.
Traditional geomet remains vital, especially as ground truth. But it is no longer enough on its own. The cadence is too slow, the resolution too low, and the insight often arrives after the moment has passed.
Predictive geomet changes the game. It uses structured data from across the value chain to predict how rocks will behave – at the resolution and frequency needed for real decisions.
Primary and response: the materials science viewpoint
At its heart, predictive geomet builds on a simple but powerful idea: that what a rock is (its mineralogy, chemistry, and texture) determines how it behaves (recovery, throughput, acid generation, and more). This relationship sits at the centre of how we model, predict, and ultimately manage variability across the mining value chain.
This way of thinking traces back to the Primary-Response Framework proposed by Coward et al. (2009) – a foundational concept that helped define how geomet practitioners think about variable classification. It distinguishes between primary variables (those intrinsic to the rock, like mineralogy and chemistry) and response variables (those that describe how that rock behaves under specific processes, like recovery or throughput).
The real power of the framework lies in how it informs sampling, modelling, and interpretation, helping teams reduce bias, understand scale effects, and build more robust spatial models across the full project lifecycle.
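The distinction is easy to carry into software. Below is a minimal, illustrative Python sketch (all variable names are hypothetical, not from any real project) of why the classification matters in practice: primary variables such as grades are typically additive and can safely be estimated with linear methods, while response variables like grindability generally are not.

```python
from dataclasses import dataclass
from enum import Enum

class VariableClass(Enum):
    PRIMARY = "primary"      # intrinsic to the rock: mineralogy, chemistry, texture
    RESPONSE = "response"    # process-dependent behaviour: recovery, throughput

@dataclass(frozen=True)
class GeometVariable:
    name: str
    var_class: VariableClass
    additive: bool  # does the variable average linearly when samples are combined?

# Illustrative catalogue – response variables are typically non-additive,
# which matters when upscaling point samples to block estimates.
CATALOGUE = [
    GeometVariable("pyrite_pct", VariableClass.PRIMARY, additive=True),
    GeometVariable("cu_grade", VariableClass.PRIMARY, additive=True),
    GeometVariable("bbwi_kwh_t", VariableClass.RESPONSE, additive=False),
    GeometVariable("flot_recovery_pct", VariableClass.RESPONSE, additive=False),
]

def additive_variables(catalogue):
    """Variables that are safe to estimate with linear methods such as ordinary kriging."""
    return [v.name for v in catalogue if v.additive]

print(additive_variables(CATALOGUE))  # ['pyrite_pct', 'cu_grade']
```

Tagging variables this way at the data-model level forces the sampling and estimation choices the framework calls for, rather than leaving them to convention.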
We can even draw a parallel with the materials science tetrahedron – structure, processing, properties, and performance – which maps how a material behaves based on what it comprises and how it is handled. Deagen et al. (2022) recently revisited this concept, showing how digital systems can turn the tetrahedron into a dynamic, predictive tool – one that learns and adapts over time.
Figure: Predictive Geomet Tetrahedron — adapted from the classic materials science model to show how rock characterisation relates to performance across the mining value chain.
At the centre of predictive geomet is rock characterisation – how we define a material, whether in situ (spatial) or after it has been moved, blended, stockpiled, or processed (spatio-temporal, or simply temporal).
Surrounding this are the four traditional nodes of the materials science tetrahedron – structure, processing, properties, and performance – each capturing a different aspect of what the rock is and how it responds under various conditions.
In geomet terms, the primary descriptors – mineralogy, chemistry, and texture – define the rock’s structure. The response variables – recovery, throughput, grindability, acid-forming potential – emerge from how that structure interacts with processing.
Predictive geomet models that relationship. It does not replace testwork; it learns from it. Historical geomet testwork becomes the training and validation set. Once trained, the model can use sensor-rich data – from logging, scanning, assay pulps and more – to forecast rock behaviour in space and time.
The result? Models that evolve with every blast, every drillhole, every new dataset.
No more waiting for quarterly reconciliations or annual updates; predictive geomet brings the rock model closer to real-time reality.
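To make the training idea concrete, here is a minimal, self-contained sketch on synthetic data – not any real deposit, and with invented mineral names and coefficients – in which a linear least-squares model learns a primary-to-response mapping from “historical testwork” and then scores new sensor-derived mineralogy:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "historical testwork": primary descriptors (mineral abundances, %)
# paired with a measured response (flotation recovery, %).
n = 200
chalcopyrite = rng.uniform(0.5, 4.0, n)
pyrite = rng.uniform(1.0, 10.0, n)
clay = rng.uniform(0.0, 15.0, n)
recovery = 70 + 5.0 * chalcopyrite - 0.8 * pyrite - 1.2 * clay + rng.normal(0, 1.5, n)

# Fit a linear primary -> response model by least squares.
X = np.column_stack([np.ones(n), chalcopyrite, pyrite, clay])
coef, *_ = np.linalg.lstsq(X, recovery, rcond=None)

# New "sensor-rich" samples (e.g. FTIR mineralogy from assay pulps) can now be
# scored without waiting for new flotation testwork.
new = np.array([[1.0, 2.5, 3.0, 5.0]])   # intercept, cpy, py, clay
pred = new @ coef
print(f"predicted recovery: {pred[0]:.1f}%")
```

In practice the model form would be richer (non-linear, with uncertainty estimates), but the structure is the same: testwork is the ground truth, sensors provide the high-frequency inputs.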
Two tried and tested examples: FTIR and MWD
Say you have a robust geomet model – BBWi, DWi, recovery curves – but the resolution’s too coarse for daily decisions. More testing is not viable, and chip samples will not cut it for metallurgical testing.
Now layer in FTIR (Fourier Transform Infrared), a rapid sensor that captures mineralogy directly from existing assay pulps. Mineralogy is a primary variable. Recovery is a response. With historical testwork as ground truth, you can train a model to predict recovery at ore control resolution, no resampling required.
Take it a step further by combining FTIR with MWD (Measure While Drilling) data. Together, they offer high-frequency proxies for mineralogy, texture, and hardness – key drivers of fragmentation potential, throughput and energy consumption. The same logic applies: real-time sensing, trained on historical testwork, predicting how rock will actually behave.
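As a toy illustration of fusing the two sensor streams (synthetic data, hypothetical feature names and thresholds), the sketch below classifies blasthole intervals into hardness classes with a simple nearest-neighbour vote over combined FTIR mineralogy and MWD drilling-response features:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy training set: each row fuses FTIR-derived mineralogy with an MWD drilling
# response for one blasthole interval. Labels are hardness classes from
# historical comminution testwork (0 = soft, 1 = hard).
n = 120
quartz = np.concatenate([rng.uniform(10, 30, n // 2), rng.uniform(35, 60, n // 2)])
penetration = np.concatenate([rng.uniform(0.8, 1.4, n // 2), rng.uniform(0.3, 0.7, n // 2)])
X = np.column_stack([quartz, penetration])   # quartz %, penetration rate (m/min)
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Standardise so both sensors contribute comparably to the distance metric.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

def knn_predict(x_new, k=5):
    """Classify a new interval by majority vote of its k nearest training intervals."""
    d = np.linalg.norm(Xs - (x_new - mu) / sigma, axis=1)
    nearest = y[np.argsort(d)[:k]]
    return int(np.round(nearest.mean()))

# A quartz-rich, slow-drilling interval should be flagged hard.
print(knn_predict(np.array([50.0, 0.5])))  # 1 = hard
```

Real deployments would use richer feature sets and calibrated models, but the pattern holds: every drilled hole becomes both a prediction input and, once reconciled, a new training point.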
This is not just a theoretical shift. These methods are already being used – in real deposits, on real data – to deliver a level of responsiveness traditional workflows cannot match.
That is predictive geomet. Built on what you already have. Adaptable to what you need next.
A predictive system of systems
Now zoom out.
Imagine all the workflows that feed into rock-based decisions: resource modelling, ore control, environmental planning, even tailings.
Each one involves:
- Characterisation: sensors, drilling, scanning, testing
- Modelling: building and updating spatial predictions
- Application: feeding that data into decisions
Predictive geomet sits above these workflows – learning from them, linking them, and adding predictive power.
Animation: System-of-systems framework for predictive geomet, connecting rock characterisation to real-time decision-making.
This digital layer does not replace the physical layers – it augments them.
It brings adaptability to slower workflows. It helps close the loop between testwork and operations. And it transforms rock from something we react to, into something we can proactively manage.
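One way to picture the characterise–model–apply loop is in code. The hypothetical sketch below wires a single ore-control workflow into that three-step cycle; every name, number and threshold is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    """One rock-based workflow: characterise -> model -> apply."""
    name: str
    characterise: Callable[[], dict]   # sensors, drilling, scanning, testing
    model: Callable[[dict], dict]      # build or update a prediction
    apply: Callable[[dict], str]       # feed predictions into a decision
    state: dict = field(default_factory=dict)

    def cycle(self) -> str:
        obs = self.characterise()
        self.state = self.model(obs)   # the model evolves with every new dataset
        return self.apply(self.state)

# Hypothetical ore-control workflow wired into the loop.
ore_control = Workflow(
    name="ore_control",
    characterise=lambda: {"ftir_clay_pct": 12.0},
    model=lambda obs: {"predicted_recovery": 90 - 1.5 * obs["ftir_clay_pct"]},
    apply=lambda m: "divert to stockpile" if m["predicted_recovery"] < 75 else "send to mill",
)

print(ore_control.cycle())  # 'divert to stockpile'
```

The predictive layer is the fact that each workflow's `model` step can learn from the others' data; the physical workflows underneath are unchanged.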
Let me leave you with this
Geomet is not just a single workflow. It is a logic stack – a way of thinking that builds toward better decisions.
- You start with the system. What decision are you trying to support? What is the timing, the risk, the value at stake?
- Then you bring in the people. Align the workflows. Connect the teams. Make sure data flows to where decisions are made, not just where it is collected.
- And only then do you ask: What do we actually need to know about the rock to make this decision? Not everything. Just what matters. When it matters.
And that is where data science changes the game. It lets us take what we already know – and extend it. Faster, broader, smarter.
That is how we move from tradition to prediction.
Not by throwing anything out, but by building on it, with purpose.
Coming soon
This is the second in a geomet series exploring how better rock knowledge supports better decisions across how we discover, define, operate, process, and rehabilitate in mining.
Next up: Geomet meets data science. Brenton Crawford and I will unpack how the tools of modern machine learning are transforming geomet workflows. We will explore how unsupervised learning, model pipelines, and new approaches to uncertainty are reshaping not just what we know about the rock, but how we act on it.
If geomet is about rock-based decision-making, then data science is what makes those decisions sharper, faster, and more scalable.
References
Beniscelli, J., 2011. Geometallurgy – Fifteen Years of Developments in Codelco: Pedro Carrasco Contributions. In: Proceedings of the First AusIMM International Geometallurgy Conference, Brisbane, QLD, 5–7 September 2011. Brisbane: AusIMM, pp. 3–7.
Coward, S., Vann, J., Dunham, S. and Stewart, M., 2009. The primary-response framework for geometallurgical variables. In: Proceedings of the Seventh International Mining Geology Conference, pp. 109–113.
Deagen, M.E., Brinson, L.C., Vaia, R.A. and Schadler, L.S., 2022. The materials tetrahedron has a “digital twin”. MRS Bulletin, 47(4), pp. 379–388.
Ehrig, K., McPhie, J. and Kamenetsky, V., 2012. Geology and mineralogical zonation of the Olympic Dam iron oxide Cu-U-Au-Ag deposit, South Australia.
McQuiston, F.W. Jr. and Bechaud, L.J. Jr., 1969. Metallurgical Sampling and Testing. In: Pfleider, E.P. (ed.), Surface Mining, Seeley W. Mudd Series. New York: The American Institute of Mining, Metallurgical, and Petroleum Engineers, pp. 103–121.