I'm developing firmware for a PI metal detector and am curious about others' experience with how to extract a useful signal (the amplitude of the detected target signal) while rejecting as much noise as possible.
The frontend/hardware I have is built for direct digital sampling of the coil signal: a P-FET for TX pulse generation and a two-stage amplifier with gain and offset adjustable from the MCU. The amplified signal is fed to a 14-bit, 1 MSPS ADC, which digitizes the acquired waveform. The MCU is an RP2040 (same as the RPi Pico). [ATTACH]n415445[/ATTACH]
The acquired waveforms are then analyzed by the MCU and some statistics are calculated, starting from simple ones (mean, integral) up to more complicated ones (linear/exponential curve fitting, weighted difference from a reference sample). To make life easier, I've built a GUI tool on the PC which plots the waveforms, allows configuring various signal acquisition parameters, and displays the statistics vs. time.
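Roughly, the simple per-pulse statistics look like this (a simplified C sketch; the sample count and names are illustrative, not the exact firmware):
[CODE]
// Simplified sketch of the per-pulse statistics. N_SAMPLES and the names
// are illustrative only.
#include <stdint.h>
#include <stddef.h>

#define N_SAMPLES        64      // ADC samples kept after the TX pulse
#define SAMPLE_PERIOD_US 1.0f    // 1 MSPS ADC -> 1 us per sample

typedef struct {
    float mean;       // average level of the decay tail
    float integral;   // area under the decay curve (rectangle rule)
} pulse_stats_t;

static pulse_stats_t compute_stats(const uint16_t samples[N_SAMPLES])
{
    float sum = 0.0f;
    for (size_t i = 0; i < N_SAMPLES; i++)
        sum += (float)samples[i];

    pulse_stats_t s;
    s.mean     = sum / (float)N_SAMPLES;
    s.integral = sum * SAMPLE_PERIOD_US;
    return s;
}
[/CODE]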

This is all fine and dandy, but looking at the waveforms is not the most practical UI for a metal detector, so it needs some way of processing the signal and outputting a detection amplitude (which is then indicated via audio/LED bar).
This is also the part that's less clear to me. A static/pinpointing operation mode seems more natural to me than a dynamic one (which only reacts to changes in the target signal).
As a first attempt, I'm acquiring a reference signal (on startup, after automatic offset adjustment), calculating the noise floor over a few seconds, and then comparing the mean of the reference signal to the current one: basically, if the signal differs from the reference, a target is detected, and the remaining logic of adjusting the noise floor and scaling the amplitude based on the sensitivity setting is trivial. The other statistics are plotted just to see visually which one might give better results.
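In pseudo-C, the comparison is roughly this (constants and names are illustrative):
[CODE]
// Rough sketch of the "reference vs. live" comparison. ref_mean and
// noise_floor are measured at startup; sensitivity is the user setting.
#include <math.h>

static float detection_amplitude(float live_mean, float ref_mean,
                                 float noise_floor, float sensitivity)
{
    float delta = fabsf(live_mean - ref_mean);
    if (delta <= noise_floor)
        return 0.0f;                            // inside the noise band: no target
    return (delta - noise_floor) * sensitivity; // drives the audio/LED bar
}
[/CODE]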
However, the acquired signal on my lab setup is not as stable as I would have expected: there's longish-term drift (most pronounced right after a cold startup, which I kind of expect), but there's also a degree of random noise, plus drifts on the scale of tens of seconds that appear long (tens of minutes) after startup and that I have no explanation for (ambient radio/EMI?). This creates a problem for a simple implementation, as the detector starts to indicate a detection when there shouldn't be one.
One approach I can think of is slowly updating the reference measurement (e.g. the average of the last 60 seconds or so), but I'm not sure if there are better approaches.
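What I have in mind is something like a single-pole IIR (exponential moving average) with a time constant of tens of seconds; the sketch below assumes the statistic updates roughly 100 times per second, and the constant is just a guess:
[CODE]
// Slowly tracking reference: a single-pole IIR / exponential moving average.
// alpha ~ 1/(tau * update_rate); here tau ~ 60 s at ~100 updates/s.
static float ref_mean = 0.0f;

static void update_reference(float live_mean)
{
    const float alpha = 1.0f / 6000.0f;
    ref_mean += alpha * (live_mean - ref_mean);
}
[/CODE]
One thing I'm unsure about with this is whether the update should be frozen while a detection is being indicated, so that a target held over the coil doesn't slowly get absorbed into the reference.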
Another way, of course, is ditching the static operation mode and using a dynamic one, which by design is immune to long-term drifts of the signal and should allow indicating detection right at the edge of the noise floor, but as I said, it doesn't seem natural to me.
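As I understand it, a dynamic mode boils down to high-pass filtering the statistic so that only changes (the coil moving over a target) produce output; something like the sketch below, again with made-up constants:
[CODE]
// Sketch of a dynamic/motion mode: subtract a fast-tracking average so only
// changes in the statistic show up. Constants are illustrative.
static float slow_track = 0.0f;

static float dynamic_amplitude(float live_mean)
{
    const float alpha = 1.0f / 200.0f;        // ~2 s tracking at ~100 updates/s
    slow_track += alpha * (live_mean - slow_track);
    float delta = live_mean - slow_track;     // high-pass filtered statistic
    return (delta > 0.0f) ? delta : -delta;   // magnitude of the change
}
[/CODE]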
Also, the system right now uses just one pulse length. I'm sure that with varying pulse lengths it would be possible to extract more information, but that adds another dimension to the problem, so I'd first like to get the basic operation working.
So, to sum the thread up:
1) Is comparing a reference signal to the detected signal a usable approach in PI signal processing (for a static/pinpointing mode)?
2) If so, are there any alternatives to either having the user update the reference signal or using a long-horizon filtered value as the reference?
3) Perhaps anyone has suggestions for alternative solutions beyond the ones described above? My intuition, based on theory and on observing the shape of the return signal, says that something like a weighted delta between the reference and current signals (i.e. differences in the earliest sample points carry more weight than the later ones) should be the statistic to use in the detection logic (a rough sketch of what I mean follows below), but I know very little about this domain, so I'd appreciate any suggestions!
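For illustration, the weighted delta I have in mind would be something like this (the exponential weighting and sample count are just guesses):
[CODE]
// Weighted delta between the live waveform and the stored reference.
// Early samples of the decay (where eddy-current targets differ most from
// the empty-coil reference) are weighted more heavily than later ones.
#include <math.h>
#include <stdint.h>
#include <stddef.h>

#define N_SAMPLES 64   // same illustrative sample count as above

static float weighted_delta(const uint16_t live[N_SAMPLES],
                            const uint16_t ref[N_SAMPLES])
{
    float acc = 0.0f;
    for (size_t i = 0; i < N_SAMPLES; i++) {
        float w = expf(-(float)i / 16.0f);   // weight falls off with sample index
        acc += w * fabsf((float)live[i] - (float)ref[i]);
    }
    return acc;
}
[/CODE]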