PI signal processing principles

Collapse
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • PI signal processing principles

    I'm developing firmware for a PI metal detector and am curious about others' experience with extracting a useful signal (the amplitude of the detected target signal) while rejecting as much noise as possible.

    The frontend hardware I have is built for direct digital sampling of the coil signal: a P-FET for TX pulse generation, and a two-stage amplifier with gain and offset adjustable from the MCU. The amplified signal is fed to a 14-bit 1 MSPS ADC, which digitizes the acquired waveform. The MCU is an RP2040 (the same chip as on the RPi Pico). [ATTACH]n415445[/ATTACH]

    The acquired waveforms are then analyzed by the MCU and some statistics are calculated, starting from simple ones (mean, integral) up to more complicated ones (linear/exponential curve fitting, weighted difference from a reference sample). To make life easier, I've built a GUI tool on the PC that plots the waveforms, allows configuring various signal acquisition parameters, and displays the statistics over time.
    [Screenshot: the PC GUI tool plotting acquired waveforms and statistics (2023-09-21_14-04-12.png)]
    This is all fine and dandy, but looking at waveforms is not the most practical UI for a metal detector, so the firmware needs some way of processing the signal and outputting a detection amplitude (which is then indicated via audio/LED bar).
    This is also the part that's less clear to me. A static/pinpointing operation mode seems more natural to me than a dynamic one (which reacts only to changes in the target signal).

    As a first attempt, I acquire a reference signal on startup (after automatic offset adjustment), calculate the noise floor over a few seconds, and then compare the mean of the reference signal to the current one. Basically, if the signal differs from the reference, a target is detected; the remaining logic of adjusting the noise floor and scaling the amplitude by the sensitivity setting is trivial. The other statistics are plotted just to see visually which one might give better results.
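A minimal sketch of this startup-calibration / static-detection scheme (Python for illustration only; the function names and the sigma-based threshold are my assumptions, not the actual firmware):

```python
import statistics

def calibrate(waveforms):
    """Build a reference from startup waveforms: the average of each
    waveform's mean, plus the noise floor (spread of those means)."""
    means = [sum(w) / len(w) for w in waveforms]
    ref_mean = sum(means) / len(means)
    noise_floor = statistics.pstdev(means)
    return ref_mean, noise_floor

def detect(waveform, ref_mean, noise_floor, sensitivity=3.0):
    """Return a detection amplitude: how far the current waveform's mean
    deviates from the reference, clipped to zero below the (scaled)
    noise floor. The amplitude would then drive the audio/LED bar."""
    delta = abs(sum(waveform) / len(waveform) - ref_mean)
    excess = delta - sensitivity * noise_floor
    return max(0.0, excess)
```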

    However, the acquired signal on my lab setup is not as stable as I expected. There's long-ish term drift (most pronounced right after cold startup, which I half expected), but also a degree of random noise, plus slower drifts (over tens of seconds) that persist tens of minutes after startup, for which I have no explanation (ambient radio/EMI?). This is a problem for a simple implementation, as the detector starts to indicate detections where there shouldn't be any.

    One approach I can think of is slowly updating the reference measurement (e.g. an average over the last 60 seconds or so), but I'm not sure if there are better approaches.
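The slowly-updating reference can be done as a one-pole exponential moving average rather than a literal 60-second buffer; a hypothetical sketch (names and parameters are mine, not from the firmware):

```python
class DriftingReference:
    """Exponential moving average of the per-waveform mean, with the
    smoothing factor derived from the update rate and a desired time
    constant (roughly "average of the last tau_s seconds")."""
    def __init__(self, initial, rate_hz=400.0, tau_s=60.0):
        self.value = initial
        self.alpha = 1.0 / (rate_hz * tau_s)  # small alpha -> slow retune

    def update(self, current_mean):
        # Move the reference a small fraction of the way toward the input.
        self.value += self.alpha * (current_mean - self.value)
        return self.value
```

One caveat with any auto-retuning reference: a target held stationary over the coil will eventually be absorbed into the reference and stop indicating, so it is common to gate the update off while a detection is active.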

    Another way, of course, is ditching the static operation mode for a dynamic one, which by design is immune to long-term signal drift and should allow indicating detection right at the edge of the noise floor. But as I said, it doesn't seem natural to me.

    Also, the system currently uses just one pulse length. I'm sure that varying pulse lengths would make it possible to extract more information, but that adds another dimension to the problem, so I'd first like to get the basic operation working.

    So, to sum the thread up:
    1) Is comparing a reference signal against the detected signal a usable approach in PI signal processing (for a static/pinpointing mode)?
    2) If so, are there better alternatives than either having the user update the reference signal manually, or using a long-horizon filtered signal value as the reference?
    3) Does anyone have suggestions for solutions other than those described above? My intuition, based on theory and on the shape of the return signal, says that something like a weighted delta between the reference and current signals (where the earliest sample points get more weight than later ones) should be the statistic to use in the detection logic, but I know very little about this domain, so I'd appreciate any suggestions!
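For what it's worth, the weighted-delta statistic described in 3) could look something like this (illustrative only; the geometric fall-off is just one arbitrary choice of decreasing weights):

```python
def weighted_delta(current, reference, decay=0.9):
    """Weighted difference between the current and reference decay
    curves: the earliest samples, where fast-decaying target energy
    is concentrated, get the largest weights."""
    assert len(current) == len(reference)
    w = 1.0
    total = 0.0
    for c, r in zip(current, reference):
        total += w * (c - r)
        w *= decay  # each later sample counts for less
    return total
```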


  • #2
    The vast majority of metal detectors use a dynamic (motion) mode of operation for the very reasons you have found. High-power PI designs are especially prone to drift as the TX warms up. In many VLF designs, a static mode is user-selectable as a short-term pinpoint mode; the same could be done with PI.

    You have a few options:

    1. Continuously update the reference signal. The speed at which you do this determines your "motion retune" speed.
    2. Update the ref signal when you press a (retune) button.
    3. Put the main signal through a continuous high pass filter (differentiator) and use a fixed reference afterwards. Temporarily disable the HPF if you need a static pinpoint mode. This is what most VLF/PI designs do.
    4. Put the main signal through a selectable retune algorithm and use a fixed reference afterwards. This is how the old TR detectors did it. In code, this is probably equivalent to #2.
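Option 3 above can be sketched as a one-pole "leaky baseline" high-pass; this is an illustrative sketch, not any particular detector's code:

```python
class HighPass:
    """One-pole high-pass filter: output is the input minus a slowly
    tracking baseline. Freezing the baseline disables the HPF, which
    turns motion mode into a fixed-reference static/pinpoint mode."""
    def __init__(self, alpha=0.01):
        self.alpha = alpha      # larger alpha -> faster "motion retune"
        self.baseline = None
        self.frozen = False     # set True for static/pinpoint mode

    def step(self, x):
        if self.baseline is None:
            self.baseline = x   # initialize on the first sample
        hp = x - self.baseline
        if not self.frozen:
            self.baseline += self.alpha * hp
        return hp
```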

    The F-Pulse pinpointer I designed does #2. It does a power-on calibration to establish the reference signal, which is usually very solid over time because the overall design is very low power, so there are no thermal tails to deal with. However, a press of the button recalculates the ref signal, which is useful if it does drift, or if ground creates an offset, or if you just want to ratchet in on a target.



    • #3
      What you've done is extremely similar to what I have done previously. You need to implement drift compensation for slow changes in the environment; this is effectively low-frequency noise. I would suggest:
      1. On startup, generate a large number of signals and do your calculations to build a large reference dataset, giving an accurate signal estimate.
      2. In real time, do the same thing with a smaller dataset (for responsiveness) and compare the two datasets.
      3. Update a small section of your reference dataset with a section of your real-time dataset. Effectively this is a FIFO buffer where each element contains a waveform: the oldest reference data is removed, the dataset is shifted along by one, and the newest data is inserted into the empty slot. Eventually you end up with a completely new reference dataset, with the update rate determined by you.
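A sketch of the FIFO-of-waveforms reference described in step 3 (illustrative; the class name and `maxlen` are placeholders):

```python
from collections import deque

class ReferenceFIFO:
    """Rolling reference: a deque of the most recent N accepted
    waveforms. deque(maxlen=N) drops the oldest entry automatically.
    The reference curve is the per-sample mean across the buffer."""
    def __init__(self, maxlen=64):
        self.buf = deque(maxlen=maxlen)

    def push(self, waveform):
        self.buf.append(list(waveform))

    def reference(self):
        # Average each sample position across all stored waveforms.
        n = len(self.buf)
        return [sum(col) / n for col in zip(*self.buf)]
```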
      Additionally, as Carl suggests, you should have an instant recalibrate option for when the environment or signal changes immediately rather than slowly. Otherwise your metal detector will keep detecting until the reference eventually catches up.

      Out of curiosity, what is your pulse frequency? And how does an M0+ perform with curve fitting when it doesn't have an FPU? I would have thought the performance would be quite poor.



      • #4
        Originally posted by Carl View Post
        You have a few options
        Thank you for the general overview; it's roughly in line with what I expected. Do you have an opinion on whether a pulse train of varying widths is worth implementing? I.e. I've already seen that different TX pulse widths result in different received decays, but I'm not sure whether there's more information to be found by interleaving TX pulses of multiple widths.

        Originally posted by CrizzyD View Post
        I would suggest
        And thank you for the specific suggestion. Storing the reference samples in a circular buffer seems an easy solution, considering that the RP2040 has quite a bit of RAM to spare, rather than trying to come up with something more optimal in terms of RAM efficiency.

        Originally posted by CrizzyD View Post
        Curiously, what is your pulse frequency? How does an M0+ perform with curve fitting when it doesn't have an FPU? I would have thought that the performance would be quite poor.
        Currently 400 Hz (each decay is running-averaged with the previous 15, so stats are also recalculated at 400 Hz), but that's not the maximum possible; it's actually limited by the less-than-optimal implementation of the plotting in the debug GUI on the PC side.

        The whole stat calculation takes ~1000 us, of which the linear least-squares fit takes ~180 us, the exponential least-squares fit (really, a linear fit on the logarithm of the decay) ~580 us, and the rest are simple ones like mean, integral, weighted delta etc. Since only the simple ones are actually used for detection, the fitting that takes most of the sample processing time could be skipped entirely.
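For reference, the "linear fit on the logarithm of the decay" is an ordinary least-squares line fit on ln(y); a PC-side sketch (illustrative, not the firmware's actual code):

```python
import math

def fit_exponential(samples, dt=1e-6):
    """Fit y = A * exp(-t / tau) by linear least squares on ln(y).
    ln(y) = ln(A) - t/tau is linear in t, so a standard line fit
    recovers both parameters. Assumes all samples are positive."""
    n = len(samples)
    t = [i * dt for i in range(n)]
    ln_y = [math.log(s) for s in samples]
    t_mean = sum(t) / n
    y_mean = sum(ln_y) / n
    slope = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, ln_y))
             / sum((ti - t_mean) ** 2 for ti in t))
    intercept = y_mean - slope * t_mean
    return math.exp(intercept), -1.0 / slope   # (A, tau)
```

Note this least-squares fit implicitly weights the (noisier) log of the small tail samples as heavily as the early ones, which is one reason a plain log-linear fit can disappoint on real decays.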

        It probably helps that the RP2040 has a relatively optimized software floating-point implementation stored in ROM (https://datasheets.raspberrypi.com/p...floating_point). I'm also clocking the MCU a bit faster than it's rated for, because that matches the audio timer/PWM frequency better.

        I'd say that overall, the fact that the RP2040 is a dual-core MCU and has PIO also helps (PIO is used to interface with this particular ADC and to drive the serial LED bar). Although all the core metal-detection functionality runs with acceptable jitter on a single core (especially with USB disconnected), dedicating one core to audio was beneficial for jitter-free playback of the tones that indicate detection.



        • #5
          OK, that's much better than I expected. I'm running an M4 core clocked at 180 MHz, and it takes around 10 ms to calculate the integral, median, mean and max/min values on mine; that's partially due to the number of recovery curves I'm storing, though. I did some curve fitting in MATLAB but found that there was never enough information in the curve to justify implementing it on a microcontroller: the differences between the curves were just too small when detecting smaller pieces of metal. It made more sense to stick to the simple stuff with mean and median etc. Have you found that any of the more "advanced" statistical methods have an advantage?



          • #6
            I haven't exactly quantified the difference, but the exponential fit seemed promising (in that I saw a difference between 5c and 20c EUR coins, though I'm not certain whether the difference was caused by target material or just by different amplitude at the same depth).

            I want to try using the mean or weighted delta as the main signal and mixing in the exponent as another audio frequency; perhaps that enables some rudimentary target ID.



            • #7
              Part of your problem is the coil heating up, not much, but enough.

              One way to overcome this is to fire a sine-wave burst into the coil and measure all the coil's parameters every few seconds. You can then use the results to derive compensation values to apply to the PI measurements.

              Another trick you can use is to "ripple sample". Let's say you sample the first pulse return at 8 us. You then delay by ONE WHOLE pulse period (let's say 1000 us) PLUS 10 us. You keep adding a delay of 10 us until you have sampled the signal to the extent you want, then you process your result.

              By doing the above, you can use a slower but higher-resolution ADC, and you have more time to process the signal. As a bonus, you are effectively applying the equivalent of a "drift blocking filter" to the receive signal, as the drift will be much less apparent.
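The ripple-sampling timing described above can be written down as a small schedule generator (illustrative sketch; the 8 us / 1000 us / 10 us numbers are taken from the example in the post):

```python
def ripple_schedule(first_us=8, period_us=1000, step_us=10, points=20):
    """Equivalent-time ("ripple") sampling plan: one sample per TX
    pulse, with the sample point slid step_us later each period, so a
    slow ADC still traces out the whole decay curve over many pulses.
    Returns (pulse_index, absolute_time_us, offset_within_decay_us)."""
    return [(k,
             k * period_us + first_us + k * step_us,  # absolute time
             first_us + k * step_us)                  # position on decay
            for k in range(points)]
```

The trade-off is acquisition speed: one full decay curve now takes `points` pulse periods to assemble, which is also where the drift-averaging benefit comes from.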

