Measurement & Control FAQ


Measurement & Control Filtering

What is Beta Rating?

Beta rating is the most commonly used rating in industry for filters. It comes from the multi-pass method for evaluating filtration performance (ISO 16889:1999).

The Beta rating itself refers to filtration efficiency; however, it should always be used in conjunction with the absolute rating to understand what contamination is likely to be seen in the system. See the table below for guidelines.

If you know how many particles you have upstream of your filter, the ratings above allow you to calculate how many particles will appear on the downstream side.

For example, with 1,000 particles of a given size upstream and a beta ratio of 20 (95% efficiency), 50 of those particles will not be caught by the filter.
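The arithmetic above can be sketched as follows. These are hypothetical helper functions, not part of any standard tool; they simply express the ISO 16889 multi-pass relationship beta = upstream count / downstream count.

```python
def downstream_count(upstream: float, beta: float) -> float:
    """Particles of the rated size expected to pass the filter."""
    return upstream / beta

def efficiency_pct(beta: float) -> float:
    """Capture efficiency implied by a beta ratio."""
    return 100.0 * (beta - 1.0) / beta

print(downstream_count(1000, 20))  # 50.0 particles pass
print(efficiency_pct(20))          # 95.0 % efficient
```

A beta 200 element, by the same arithmetic, would pass only 5 of the 1,000 particles (99.5% efficiency).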

A beta rating does not give any indication of dirt holding capacity, nor does it account for stability or performance over time. It should also be pointed out that a filter will not catch all particles greater than its micron rating, mainly due to limitations in metrology, materials technology and cost. Note too that a beta ratio applies at the particle size for which it was measured; it is not the same across all particle size ranges. For more information on nominal & absolute ratings, see the next question.

What is the difference between nominal and absolute ratings?

Nominal ratings are micron values given to filters by the manufacturer. They relate to the typical, or average, micron rating for the filter. This does not mean that the filter will not pass particles far greater than the nominal rating. Their efficiency is less than that of an absolute filter, so you would expect a longer clean-up time with this type of filter element.

Absolute ratings give the size of the largest particle that will pass through the filter. This is a much more reliable means of assessing a filter for an application, as its performance is more repeatable. There are, however, currently no standardised test methods to determine this value.

Beta ratios are still the most commonly used method for specifying and selecting filters.

What system conditions can affect the efficiency of my filters?

The simple answer to this question is: many. Filter efficiency can be heavily affected by changes in viscosity, fluid homogeneity and electrical conductivity, to name but a few. There is currently demand from industry for more standards and norms surrounding test methods. In recent years, due to the wide range of fluids available, some sectors, such as automotive, drinking water and pharmaceutical, have created their own standards for test methods.

When choosing filters, it is important to understand the beta & absolute ratings, and thereby the largest particle size that should be present in the system. Couple this with an automatic particle counter (APC) and you can quantify the number of particles of a given size in the system and begin the process of quality control. Several filters in series will often improve the cleanliness of a system, as will exposure time.

When should I change my filters?

Logic would dictate that you change your filters when system contamination starts to rise above usable levels, and this is true to a degree, but in actuality most filters become more efficient as they fill with contaminant. The main driver for most people to change their filters will probably be flow.

As the filter becomes blocked, the flow rate through it reduces and the pressure differential across it increases. Most filters can be fitted with differential pressure indicators, which help you to identify when to change them. For optimal performance, an automatic particle counter coupled with a flow meter downstream of your filters will provide the greatest degree of accuracy.


Measurement & Control Particle Counting

How do automatic particle counters work?

Most commonly by the light extinction principle, although there are other technologies on the market. Typically a beam of light is projected through the sample fluid; when a particle blocks the light, it produces a measurable electrical signal that is proportional to the size of the particle. Couple this with a known sensing volume and the quantity of particles of each size can be determined.
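The last step, turning sized pulses into concentrations, can be sketched as below. The size channels (4, 6, 14 µm) and the 10 mL sample volume are illustrative assumptions, not values from any particular instrument.

```python
def counts_per_ml(pulses_by_size: dict[int, int], sample_volume_ml: float) -> dict[int, float]:
    """Each light-blockage pulse is sized and binned; dividing by the
    known sensed volume turns raw counts into a concentration per channel."""
    return {size: n / sample_volume_ml for size, n in pulses_by_size.items()}

print(counts_per_ml({4: 1200, 6: 300, 14: 40}, 10.0))
# {4: 120.0, 6: 30.0, 14: 4.0}  particles/mL at the 4, 6 and 14 µm channels
```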

How to conduct fair tests & analysis with APCs.

All APCs rely on statistical analysis of a volume of fluid to derive an output in an international standard format. When an APC measures fluid, it is typically sampling only a proportion of the total system volume, and therein lies a source of error per test result. Add to this the unquantifiable variation in fluid homogeneity and other factors, and quite quickly you realise that a more statistical approach is required. When conducting more than one test per day, we recommend performing the tests at set, equal intervals to paint as clear a picture as possible of how your process varies during the day, and perhaps over the course of a week or month.

On-line sampling versus Off-line sampling

The biggest difference is that off-line sampling removes fluid from the system rather than measuring in real time. On-line measurement means that you are seeing the true, real-time behaviour of the system, whereas off-line samples are exposed to a number of variables before the fluid passes through the APC, which can lead to error if care is not taken. From a practical point of view, some systems do not have test points fitted, which can force certain decisions about how to analyse the fluid.

The importance of detailed counts

Using & analysing international reporting formats exclusively does not provide a true picture of whether a system is in control or out of control.

Although all of the international formats are based on a sound scaling method, they are all sensitive to a change of just one count at any concentration. For example, an ISO code of 14 denotes that you have between 80 and 160 particles of a given size per millilitre in your system. If the concentration rises to 161, the APC will output a result of ISO 15. Conversely, if the count drops to 79, the result will be ISO 13.
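The doubling scale behind this example can be sketched as follows. This is a simplified reading of the ISO 4406 scale-number table, in which each code doubles the permissible count range (code 14 covering more than 80 up to 160 particles/mL):

```python
import math

def iso4406_code(count_per_ml: float) -> int:
    """Scale number R covers counts > 10 * 2**(R-11) up to 10 * 2**(R-10)
    particles/mL, e.g. code 14 covers >80 up to and including 160."""
    return math.ceil(math.log2(count_per_ml / 10.0)) + 10

print(iso4406_code(160))  # 14
print(iso4406_code(161))  # 15 — one extra particle, one whole code higher
print(iso4406_code(79))   # 13
```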

The question is: does a change of one particle count justify a decision to take action or not? What must also be considered is at what point the cumulative effect makes a difference to the function of the system.

Although it is easy to set limits arbitrarily, we need to understand how close we are to exceeding them. If ISO 14 is your upper limit of contamination, and at no cost should the system exceed it, then it would not be very responsible to be unknowingly operating at 99%, or even 80%, of your upper control limit (certainly not for an extended period of time).
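A detailed count makes this headroom visible, as the hypothetical check below shows. The 160 particles/mL figure is the upper bound of ISO code 14 on the ISO 4406 scale.

```python
def pct_of_limit(count_per_ml: float, limit_count_per_ml: float = 160.0) -> float:
    """How much of the control limit a detailed count is consuming."""
    return 100.0 * count_per_ml / limit_count_per_ml

print(pct_of_limit(158))  # 98.75 — still reported as "ISO 14", but almost out of control
print(pct_of_limit(100))  # 62.5  — the same ISO 14, with comfortable headroom
```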

Although international reporting formats are useful, and in a lot of cases are practically suitable, it is always good to understand the importance of detailed counts to paint a clearer picture of the situation and to set achievable control limits.

What factors can affect APC results?

When it comes to contamination monitoring, entrained air bubbles or water droplets in a fluid can cause instability in output readings, as the tiny bubbles can be “seen” by the sensor within the product. Where systems have large amounts of aeration, this can lead to a higher contamination reading than would normally be expected, and confidence in system performance can therefore be questionable. The choice between on-line and off-line sampling can also have an effect: when fluid is removed from the system, it is always possible that by removing it you are altering its natural state.

It all depends how controlled you need your system to be. Dirtier systems can typically cope with a greater variability of result and as such are not as critical in the way they need to be controlled. Where possible we would always recommend analysing fluid straight from the system for the most representative data.

What is an automatic particle counter?

Automatic particle counters (APCs) are instruments that measure the size and quantity of particulate contamination in fluids. Some products have secondary functions such as the ability to measure temperature and moisture content. They normally output results in standard international formats (AS4059E, ISO etc.), and more often than not the data from the units can be stored and retrieved for ongoing analysis of a system. They fall into two distinct categories: portable and in-line.

Automatic particle counters have existed since the 1960s. The principle on which they operate has stayed close to the original concept, but over time they have been refined through developments such as lens and light source technology. Historically, particle counting was done by fairly rigorous and time-consuming methods such as optical microscopy, which involved physically counting the particles. As time has passed and demand for this kind of data has grown, automated particle analysis has made the work far more practical and cost effective for the user.


Measurement & Control Moisture Measurement

How do moisture sensors work in oil?

Moisture sensors, be they RH% or ppm, typically operate on a capacitive method utilising a dielectric sandwiched between two metal plates. Various substances, such as air, oil and water, have specific dielectric values which allow for calibration of the sensor. For example, the dielectric value of water is 80, while that of the polymeric sensor is approximately 3. The change in the sensed dielectric allows a percentage figure to be arrived at: for example, if the dielectric is 45, the RH% will be ~58%.
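As a rough illustration only, the toy sketch below interpolates linearly between the dry polymer dielectric (~3) and that of water (80). Real sensors follow a calibrated, non-linear response curve, which is why a measured dielectric of 45 reads ~58% rather than the linear figure.

```python
def linear_rh_estimate(dielectric: float, dry: float = 3.0, water: float = 80.0) -> float:
    """Naive linear-mixing estimate of RH% from a sensed dielectric value.
    Illustrative only; production sensors use a calibrated non-linear curve."""
    return 100.0 * (dielectric - dry) / (water - dry)

print(round(linear_rh_estimate(45), 1))  # 54.5 under the linear assumption
```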

Importantly, all moisture sensors in oil are at risk of damage under prolonged exposure to free water. Currently no economic technology exists specifically for moisture sensing in liquids; however, through testing and development, sensors designed for use in air can be adapted and applied. We recommend that suitable pre-control be applied when setting alarm limits for moisture content; this will benefit both the sensor and the system. When analysing your system/process, take a number of moisture readings and apply sound statistical methods to arrive at your system capability.

What is relative humidity (RH%)?

Relative humidity (RH%) describes the amount of water vapour in a hydraulic fluid. When the vapour content increases to the point where it condenses out of the fluid, this is termed “saturation” or “free water”. Whilst in the vaporised state, the water is dissolved and of little consequence to the system. Once the fluid becomes saturated, the water exists as small droplets.

A saturated system will give an RH reading of 99–100%. Generally speaking, an RH reading of 30% to 70% is typical of a hydraulic system. The variation in reading is more often than not related to ambient temperature changes; you would expect to see higher RH readings in winter than in the summer months, for example. There is no such thing as too little water in a hydraulic system: always keep moisture levels as low as possible, and do not allow free water to exist in your processes!

What is the difference between parts per million and RH%?

If used responsibly and with the correct approach to quality control, both ppm and RH% are excellent ways to measure moisture content in hydraulic fluids. At MP Filtri Ltd we have chosen to standardise on RH%, as this provides the greatest degree of flexibility and service to our customers.

To successfully use ppm on a wide range of fluids, you would need to test and validate a saturation curve for each particular oil. Given the sheer number of fluids available in the industry, this can become a never-ending task in the laboratory. Take into account unpredictable error due to changes in fluid chemistry in the live environment and you have quite a complex problem to solve.

Outputting RH%, on the other hand, does not have this problem. Because it is a measure of the percentage of saturation, it does not need to be calibrated for specific fluids as parts per million does. As long as you are measuring temperature at the same time (built into the MP Filtri Ltd sensor technology), you can compare systems fairly, using the same datum position (saturation).


Consider an example. The saturation point of a brand new fluid sample is validated in the laboratory as 800ppm (100% RH). An engineer installs a moisture sensor on a system containing the same fluid and sets an alarm limit of 640ppm (80% RH). The process is set in motion, and the initial sensor reading is 400ppm (50% RH). Everything is OK.

Let’s now assume that real-time changes in the chemical make-up of the fluid, due to wear and tear, cause the saturation point to reduce to 420ppm while the system reading remains at 400ppm. The operator continues as normal, as the upper control limit alarm (640ppm) has not been reached. What the operator doesn’t know is that the system is now running at ~95% saturation, perilously close to free water existing in the process and well above the 80% threshold the alarm was meant to represent. Consider that for the alarm to signal at all, free water would have to exist in the system! This is a process out of control, and the only way of making it capable would be to validate the saturation point of samples taken at set intervals throughout the system’s life.

If the engineer had been using RH% from the start, the alarm would have been raised as soon as the fluid’s saturation point reduced to 500ppm, because at that point the unchanged 400ppm of water in the system represents 80% RH. The alarm limit itself remains at 80% RH throughout; only its equivalent ppm value falls as the fluid ages.
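The ppm-versus-RH% trap can be sketched numerically using the figures from the example (a hypothetical helper; real saturation points are fluid- and temperature-dependent):

```python
def rh_percent(water_ppm: float, saturation_ppm: float) -> float:
    """RH% is the dissolved water content as a fraction of the saturation point."""
    return 100.0 * water_ppm / saturation_ppm

print(rh_percent(400, 800))            # 50.0 — new fluid, well under an 80% RH alarm
print(round(rh_percent(400, 420), 1))  # 95.2 — aged fluid: a fixed 640 ppm alarm never fires
```

The same 400ppm reading swings from safe to critical purely because the saturation point has moved, which is exactly what a fixed ppm alarm cannot see.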