Mining operations are complex environments. Even for something as “simple” as surface or open-pit mining, there is a whole host of variables to consider. Small changes in blasting operations, haul truck speed, stockpiling, and equipment can have a significant impact on throughput and operating cost. Furthermore, all of these components are inter-related, making second-order effects difficult to identify, let alone correct.
On top of this, many companies treat different parts of their mining operations as independent units. It’s not uncommon for a large mine to have independent planners looking after blasting, primary crusher throughput, secondary and ball mill grinding, as well as the concentration plant. A single-minded focus by any of these groups on maximizing throughput may come at the expense of other units, and ultimately at the expense of overall throughput.
Consider the flow that an ore body makes through the mine. Based on the density of the rock body being blasted, the drilling pattern, and a whole host of other factors, there will be a range of rock sizes included in the run of mine (ROM) ore. Smaller pieces will fall through the rock crusher and make their way directly to the SAG mill or ball mill. Mid-sized pieces will get processed by each crusher before arriving at the SAG mill. Other pieces… well, they just get stuck in the crusher.
The varying proportions of ROM ore sizes and their associated densities create challenges, as the dynamic nature of the ore characteristics has a direct impact on the amount of time each piece of ore spends in each stage of the comminution circuit. In other words, with each new load of ore entering the system there is a new “optimal” configuration of machine settings for the entire circuit that minimizes energy consumption and maximizes throughput. Furthermore, other variables such as the speed at which various stockpiles change size, the original size of these stockpiles, maintenance timing, etc. all have an impact on the overall effectiveness of the system.
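To make the idea of a per-load “optimal” configuration concrete, here is a toy sketch in Python. Everything in it is an assumption for illustration: the energy model, the coefficients, and the speed range are invented, not calibrated to any real mill.

```python
# Toy re-optimization of a single machine setting per incoming ore load.
# The energy model and all coefficients are invented for illustration.

def energy_per_ton(mill_speed, ore_hardness, feed_size_mm):
    """Hypothetical energy model: harder, coarser ore raises the baseline
    cost, and the best mill speed shifts with feed size."""
    sweet_spot = 0.70 + 0.001 * feed_size_mm   # fraction of critical speed
    base = ore_hardness * feed_size_mm * 0.05
    return base + 40 * (mill_speed - sweet_spot) ** 2

def best_speed(ore_hardness, feed_size_mm):
    """Grid-search the speed setting that minimizes energy for this load."""
    candidates = [s / 100 for s in range(60, 91)]
    return min(candidates, key=lambda s: energy_per_ton(s, ore_hardness, feed_size_mm))

# A coarser feed pushes the optimum speed up in this toy model.
print(best_speed(ore_hardness=3.0, feed_size_mm=120))  # → 0.82
print(best_speed(ore_hardness=3.0, feed_size_mm=60))   # → 0.76
```

The point is not the numbers but the shape of the problem: each new load implies a fresh search over settings, which is exactly what static, per-shift configurations cannot do.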
And how do most mines manage this complex orchestration? Independently. Each team (blasting, crushing, grinding, and maintenance) works diligently to achieve its targets and KPIs, but these groups generally run independently of one another.
We haven’t even factored in unexpected events, like bad weather or Fridays. Clearly, this is not optimal.
Not surprisingly, comminution is a huge expense. After all, we’re crushing rocks here. Interestingly enough, the crushing itself isn’t always the most expensive part. Crushing machinery, particularly the SAG mill, is massive, and much of the expense of running the mill comes from powering the movement of this giant mass of steel. Running it half empty, or at any suboptimal level, is a huge waste of energy.
Inventory (various ore stockpiles) has an associated cost as well. Stockpiling too much increases working capital, while stockpiling too little increases the risk of running out of feedstock for downstream operations. Given that the SAG mill often shows up as one of the last milling processes, shutting down or running half-empty just because stockpiles got too low is expensive.
Ore size also has an impact on the comminution circuit in terms of wear and tear on equipment. Grinding and milling excessively large pieces wears out not only your equipment but also consumables like liners. On the other hand, smaller ore starts to clog things up. Even the SAG mill’s ball charge needs to be adjusted based on the size and hardness of the ore it’s working on. Load in too many grinding balls and the balls simply wear each other out prematurely.
Operating a mine mill requires balancing a dynamic set of tradeoffs. Each of these tradeoffs will impact the overall operating conditions of the comminution circuit, which means that every other variable now faces a new set of conditions to be optimized against.
Given that mining is such a dynamic, complex environment, it’s clear that the ideal situation would be to constantly adjust every component as conditions change. And when you think about it, each employee does this daily. Planners adjust to the variables they can measure during each planning cycle, managers and superintendents make changes to the mill as they see fit, and even workers apply their skill to improve output at their machines.
That said, it’s difficult, if not impossible for everyone to coordinate this continuously, particularly across the entire operation. At best, planners work according to their planning cycles and superintendents work according to weekly or daily shifts.
To optimize against all variables all the time, we would need to look at a system that could manage the processing and calculations associated with balancing top-line objectives against systemic constraints and ever-changing conditions.
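As a minimal illustration of balancing a top-line objective against systemic constraints, the sketch below picks the highest mill feed rate that a hypothetical crusher capacity plus an allowable stockpile drawdown can sustain over a shift. All figures are assumptions; a production system would use a proper solver and live data.

```python
# A minimal "objective vs. constraints" sketch: maximize mill throughput
# subject to crusher capacity and a stockpile floor. All numbers are
# hypothetical stand-ins, not real plant parameters.

CRUSHER_MAX_TPH = 900      # crusher output capacity (tons/hour)
MILL_MAX_TPH = 1100        # mill capacity (tons/hour)
STOCKPILE_TONS = 4000      # current stockpile level
STOCKPILE_FLOOR = 2500     # don't draw below this over the shift
SHIFT_HOURS = 8

def best_mill_rate():
    """Highest mill feed rate sustainable for the whole shift:
    crusher output plus the permitted stockpile drawdown per hour."""
    max_drawdown_tph = (STOCKPILE_TONS - STOCKPILE_FLOOR) / SHIFT_HOURS
    return min(MILL_MAX_TPH, CRUSHER_MAX_TPH + max_drawdown_tph)

print(best_mill_rate())  # → 1087.5 (crusher-limited plus 187.5 t/h of stockpile draw)
```

In a real circuit there are many more variables and the constraints themselves shift hour to hour, which is precisely why this balancing act needs to be computed continuously rather than once per planning cycle.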
What other benefits could come from continuous optimizations?
Such an auto-optimization system (let’s call it Optimizer) would have benefits beyond throughput alone.
With such an Optimizer system, we should also anticipate a better understanding of overall milling capacity. Mills are designed with a particular capacity in mind, but this can change over time. Your mill might actually be able to run faster than expected, but you can only know this for sure if you try.
Testing is not easy with complex operations. Say, for example, your team runs a test batch of larger-sized ore. You analyze the results and see that the mill throughput slowed down. Obviously, this is because the larger ore takes longer to crush, right? But then you run the same test again and get a different result – the mill ran faster than on the previous run. What happened?
Multiple factors are involved in determining throughput, and although you may feel that your experiment is holding everything constant except for ore size, the reality is that, in a complex system, interactions can produce dynamic changes you did not fully anticipate. What if, for example, residue in the system from before the first test impacted your results?
By continuously tracking and monitoring a system along with all its variables, you can better understand what is changing with each run. Furthermore, if you can isolate these variables through some sort of modeling, you might be able to statistically tease out interactions that happen as a result of normal variations in operations. Two birds with one stone (pun intended).
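A toy example of the residue scenario above, with invented numbers, shows how the apparent effect of ore size can flip sign depending on a hidden variable:

```python
# Synthetic data illustrating an interaction effect: the impact of ore
# size on throughput depends on residue left in the circuit.
# All values are invented for illustration.

runs = [
    # (ore_size, residue_present, throughput_tph)
    ("large", False, 950), ("large", False, 960),
    ("large", True,  840), ("large", True,  830),
    ("small", False, 900), ("small", False, 910),
    ("small", True,  905), ("small", True,  895),
]

def mean_tph(size, residue):
    """Average throughput for runs matching this ore size and residue state."""
    vals = [t for s, r, t in runs if s == size and r == residue]
    return sum(vals) / len(vals)

# With a clean circuit, large ore ran *faster* in this toy data; with
# residue present, it ran slower. The interaction, not ore size alone,
# drives the outcome -- which is why two "identical" tests can disagree.
print(mean_tph("large", False) - mean_tph("small", False))  # → 50.0
print(mean_tph("large", True) - mean_tph("small", True))    # → -65.0
```

With enough continuously logged runs, a statistical model can estimate such interaction terms from normal operating variation, without dedicated (and expensive) experiments.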
Tests are expensive and come with opportunity cost. Passively testing by better modeling and monitoring of continuous operations is much more efficient.
Companies lacking an Optimizer-like solution will eventually reach a point where their operations are obviously not running at 100% efficiency. When productivity declines enough, they will often bring in a team of consultants who carefully examine the systems, map out processes, and generally strike fear into workers as they watch over them with their clipboards.
Following an extensive consultation project lasting months or even years, these teams come back with a clear plan to improve operations. Problem solved and money well spent, right?
Let’s think about this for a minute…
If we agree that mills continuously degrade as they wear down, then the ideal situation would be to constantly look for improvements, whether to operating inputs, maintenance schedules, or some other factor that can be easily modified.
Better yet, by constantly tracking operating conditions, and making financial calculations against the remaining operating life of capital assets, we could not only optimize for throughput; we could optimize against financial targets like revenue and capex.
The good news is that you probably already have the information you need to get started. Even without fancy IoT sensors, your mill is already tracking machine operating metrics, production volumes, ROM hardness, blast size, and so on.
With this information you can build out a simple model of the comminution circuit’s core components, along with the operating variables that act on each of them.
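As a starting point, even a few lines of code can capture the basic flow between components. The sketch below models an hour-by-hour interaction between crusher, stockpile, and mill; the component boundaries and rates are made up for illustration.

```python
# A deliberately simple, hypothetical model of the circuit: the crusher
# feeds a stockpile and the mill draws from it. Rates are invented.

from dataclasses import dataclass

@dataclass
class Circuit:
    stockpile_tons: float
    crusher_tph: float   # crusher output rate (tons/hour)
    mill_tph: float      # mill draw rate (tons/hour)

    def step_hour(self):
        """Advance one hour; the mill can't draw more than is available."""
        available = self.stockpile_tons + self.crusher_tph
        milled = min(self.mill_tph, available)
        self.stockpile_tons = available - milled
        return milled

c = Circuit(stockpile_tons=200, crusher_tph=800, mill_tph=900)
# The stockpile buffers the first hours; then the mill is crusher-limited.
print([round(c.step_hour()) for _ in range(4)])  # → [900, 900, 800, 800]
```

Even this crude model surfaces a real insight: the mill’s sustainable rate is set by the crusher plus the stockpile buffer, not by the mill’s nameplate capacity.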
Short on data? Probably not. Remember that your ERP and other data systems have a wealth of information, even if it’s not real-time. In fact, this data is probably a stranded resource because of poor interfaces and reporting mechanisms. Fear not – there are ways to access ERP data.
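As one hedged example of what that access might look like: most ERP backends can be queried with plain SQL. The table and column names below are hypothetical stand-ins (substitute your system’s actual schema or export/API), and an in-memory SQLite database plays the role of the ERP here.

```python
# Sketch: pulling historical production records out of an ERP-style
# database. Table and column names are hypothetical; an in-memory
# SQLite database stands in for the real ERP backend.

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the ERP connection
conn.execute(
    "CREATE TABLE production_log (shift_date TEXT, tons REAL, ore_hardness REAL)"
)
conn.executemany(
    "INSERT INTO production_log VALUES (?, ?, ?)",
    [("2024-01-01", 8200, 3.1), ("2024-01-02", 7900, 3.4)],
)

# Daily tonnage alongside ore hardness: raw material for the model.
rows = conn.execute(
    "SELECT shift_date, tons, ore_hardness FROM production_log ORDER BY shift_date"
).fetchall()
print(rows)
```

The point is that “stranded” ERP data usually only needs a read-only query path to become model input; real-time sensors can come later.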
In the beginning, such a model wouldn’t need to be 100% accurate. Arguably, it will never be 100% accurate. But by mapping the interactions between different components we can already get useful insight into operations. Over time, as the model gets more refined, you’ll be able to track its accuracy. And as new data sources like sensors come online, you can integrate them into the system for further refinement.
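Tracking model accuracy can start as simply as comparing predictions to actuals with a basic error metric. The numbers below are invented for illustration:

```python
# Sketch: tracking model accuracy over time with mean absolute error.
# Predicted and actual throughput values are invented for illustration.

predicted = [880, 905, 870, 910]   # model's throughput forecasts (t/h)
actual    = [900, 890, 880, 905]   # measured throughput (t/h)

mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
print(mae)  # → 12.5 (average miss, in tons/hour)
```

Logging this one number per day or per shift gives an immediate, honest signal of whether each model refinement actually helped.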
You might be surprised to find obvious improvements that have more to do with a lack of communication between teams than anything else. Communication is hard for any organization, especially as it grows larger. An Optimizer-like system can transcend departments, teams, and corporate hierarchy. No broken telephone, no politics.
When you start, you’ll want to optimize against current equipment configurations (i.e. the quick wins) with primary physical metrics like tons of ore processed. Over time other second-order metrics like revenue and operating costs can be added. Eventually you’ll be able to incorporate other external factors, like blasting operations, futures prices and even weather forecasts.
Most importantly, development of an Optimizer-like solution should lead to better visibility into your operations. The increase in information will be accompanied by improved reporting and predictions, especially if these insights are made available to the entire team.
Even a simple integrated model of your milling operations can have a big impact by connecting the different components of the comminution circuit. Mining operations are complex. Shouldn’t they be run with more than just a set of spreadsheets?
Speak to Our Experts
Connect with a 3AG Systems expert today and start your journey towards efficient and effective data management.