Hi all,
As most of you are aware, I'm building this project here:
http://www.geotech1.com/forums/showthread.php?t=18897
One of the tasks I have set for myself is to incorporate a software-based metal detector into the design. I want to utilize the PC on the rover, and to do so, I will need to learn everything I can about the PI process, such as the one used in the Baracuda Legend, so I would really like you to beat up my comments and questions on this. Eventually I'll do the same for a VLF process, but we'll start with PI.
The main parts of the Baracuda Legend that I can see are: (and I may have missed a bunch)
1) A clock pulse circuit with a repetition rate slow enough to give good penetration into soil, yet high enough to see small objects. Does 640 Hz sound right?
2) An output circuit, driven from the clock, that drives the coil. The current is limited by the output FET, usually to under 10 A or so, and at turn-off this current produces a large flyback voltage (1300-1800 V) across the damping resistor. This part is vague to me and seems to vary with the coil resistance and damping resistors of different circuits.
3) A clipping circuit and damping resistor to measure the reflected signal and to prevent damage to the preamp stage.
4) A preamp circuit to amplify the remaining signal by 1000X or more. This amplifies noise along with signal, though. An offset adjustment is provided to shift the output signal + or -.
5) A delay circuit, referenced to the original pulse drive, that gates the preamp output into the comparator stage. It is designed to pass a band of the preamp signal, with a delay adjustment to lengthen or shorten the delay.
6) A power supply circuit with a virtual ground reference generator to provide + and - rails to the circuitry.
7) A second comparator with its input enabled only when the primary pulse is over.
8) The audio output circuit that takes the 2nd comparator's output and drives a speaker or headphones.
Do these 8 functions more or less describe the PI detector layout of the Baracuda? Am I incorrect on any of the 8 pieces, and in what way am I incorrect?
Ok. Here is my noob viewpoint on this design as applicable to using PC technology.
The PC can handle the clock pulses that drive the MOSFET through the sound card's line output: the sound card outputs a pulse, which drives the MOSFET stage. A timer can be started on the falling edge of that pulse. Multiple timers could be used if there are multiple regions of the raw data set we want to analyze; for this example, there is one data set we will analyze this way. This delay will define a sample start point and a sample stop point.
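To make the timing concrete, here is a minimal sketch of building one second of such a pulse train in software. The pulse width and output sample rate are my own placeholder assumptions, not values from the Baracuda schematic:

```python
# Sketch (assumed parameters): one second of a 640 Hz pulse train for the
# sound card's line output. SAMPLE_RATE and PULSE_WIDTH_US are placeholders.
SAMPLE_RATE = 96_000        # Hz, a common sound-card output rate
PULSE_FREQ = 640            # Hz, pulse repetition rate
PULSE_WIDTH_US = 100        # microseconds the output is held high

samples_per_period = SAMPLE_RATE // PULSE_FREQ          # 150 samples
high_samples = int(SAMPLE_RATE * PULSE_WIDTH_US / 1e6)  # 9 samples

# One period: drive high for the pulse width, then low for the remainder.
period = [1.0] * high_samples + [0.0] * (samples_per_period - high_samples)
pulse_train = period * PULSE_FREQ   # one second of pulses

# The falling edge of each pulse (index `high_samples` within a period)
# is where a software "timer" for the sample window would start counting.
```

Feeding this buffer to the card continuously would give the MOSFET drive and a known reference point for the sample-window delay.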
Even an average sound card has very decent hardware that can be used for signal processing and analysis. Most modern sound cards can do 24-bit A/D conversion at up to a 192 kHz sampling rate. However, the card's input voltage is limited, so it will have to be protected; the sound card input should be limited to about 1 V RMS (1.4 V peak).
Using a voltage divider that doesn't cut or clip any part of the waveform, the raw signal will be brought into an isolation amplifier. The amplifier will interface to the voltage divider through a matched resistor pair and will have a high CMRR to reject unwanted common-mode noise on the signal. Its output will then go into the sound card with little or no amplification.
With a 1300 V+ reflected signal, the voltage divider is scaled so that 1 V RMS at its output corresponds to 1300 V at its input. Assuming the "interesting" part of the signal is in the first volt (i.e., the range passed by the back-to-back 1N4148 clipping diodes in a traditional analog PI design), only about 1 mV of the divider output is real information.
Normally, for the 10, 12, or even 16 bits in a traditional PIC or other micro, this would be a problem - i.e., the reason for a big preamp stage.
However, for a 24-bit signal, this 1 mV is about 12,000 counts - the equivalent of roughly a 13-14 bit signal on its own.
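A quick sanity check of that count figure, assuming the card's full 24-bit range maps onto the 1.4 V peak input (how a given card actually maps voltage to counts varies, so treat this as an order-of-magnitude estimate):

```python
import math

# Assumption: 2^24 counts span 1.4 V peak. Real cards differ.
full_scale_volts = 1.4
total_counts = 2 ** 24                        # 16,777,216
counts_per_volt = total_counts / full_scale_volts
counts_per_mv = counts_per_volt / 1000        # ~11,984 counts in 1 mV
equivalent_bits = math.log2(counts_per_mv)    # ~13.5 bits of range
```

So the 1 mV of "real" signal lands at roughly 12,000 counts, or about 13.5 bits of dynamic range, which matches the figure above.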
Sample at 24-bit resolution, record over the whole time period, and save that as the RAW sample data set. The whole time period would include the initial pulse, echoes, etc., down to dead signal while waiting on the next pulse. For a 640 Hz pulse rate this period is about 1.56 ms, and much less than that would actually need sampling - 100 us or less, I'm guessing.
24-bit @ 96 kHz sampling, single channel = ~2.3 Mbit/s.
A 1-minute sample is about 16.5 MB. Modern RAM is cheap: 4 GB is normal, and even 16 GB can be had for under $100, so a modern PC can incorporate all the RAM needed. Running large-point FFTs in real time is no problem for a PC.
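The data-rate and storage arithmetic behind those numbers:

```python
# Data-rate and storage check for a 24-bit, 96 kHz, single-channel stream.
bits_per_sample = 24
sample_rate = 96_000                        # Hz, single channel
bitrate = bits_per_sample * sample_rate     # 2,304,000 bit/s ~= 2.3 Mbit/s

one_minute_bytes = bitrate * 60 / 8         # 17,280,000 bytes
one_minute_mib = one_minute_bytes / 2**20   # ~16.5 MiB per minute
```

Even an hour of continuous raw capture is under 1 GB at this rate, well within ordinary PC memory.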
Now, there is plenty of info in that raw data set. No amplification means no amplified noise. The typical analog PI unit's 1000X preamp basically takes that 1 mV signal and makes it usable, but it also amplifies noise 1000X. Hopefully the high CMRR of the isolation amplifier knocks off some of that noise - and even that may be unnecessary on a high-end sound card.
So, I now have a data set that contains the original pulse, any echoes from that pulse, and any perturbations (i.e., eddy currents from metal in the ground). The informative portion is only microseconds in duration. Call this data set #1, and copy it into data set #2.
In data set #2, take any sample higher than 12,000 (or whatever the user wants) and set it to 0. This is the software equivalent of clipping, with one difference: since everything over 12,000 is set to 0 instead of simply limited to 12,000 (a true clip), no bias is introduced. No bias means the remaining data is still referenced to 0 rather than to an average. Now we can amplify it - 500X or more - and all of it is usable data.
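That "zero anything over threshold, then amplify" step is simple enough to sketch directly. The threshold and gain are the user-adjustable values from the text; the sample values here are made up for illustration:

```python
# Sketch of the clip-to-zero-then-amplify step for data set #2.
# THRESHOLD and GAIN are the user-adjustable values; `raw` is fake data.
THRESHOLD = 12_000
GAIN = 500

def clip_and_amplify(samples, threshold=THRESHOLD, gain=GAIN):
    """Zero any sample whose magnitude exceeds the threshold, then amplify
    what remains. Zeroing (rather than limiting) the big flyback peak keeps
    the remaining data referenced to 0 instead of pulling the average up."""
    return [0 if abs(s) > threshold else s * gain for s in samples]

raw = [2_000_000, 150_000, 11_000, 800, -300, 12, 0]   # fake data set #1
data_set_2 = clip_and_amplify(raw)
# -> [0, 0, 5500000, 400000, -150000, 6000, 0]
```

The two big flyback samples vanish entirely rather than sitting pinned at the limit, so they contribute nothing to any later FFT or average.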
Now copy this into data set #3, starting at the sample-start delay time and ending at the sample-stop time.
So now I have 3 data sets of samples that I can display as-is, or do further analysis on - such as an FFT or a moving average - to get some form of a VDI. These data sets are:
#1 - Raw data, no clipping. Not much usable info here with the large peak still present.
#2 - Raw data minus any sample above 12K (or whatever the user wants), amplified. The FFT-generated VDI from data set #2 is displayed with this.
#3 - Data sampled after a user-determined delay, for a user-determined duration, amplified. The FFT-generated VDI from data set #3 is displayed with this.
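As one example of the post-processing mentioned above, a moving average over the windowed data is trivial in software. (A real VDI would need more than this; the window length and the sample values here are arbitrary choices for illustration.)

```python
# Simple moving average over a windowed data set, one of the post-
# processing options mentioned above. Window length is an arbitrary choice.
def moving_average(samples, window=4):
    """Return the running mean over a sliding window of `window` samples."""
    if len(samples) < window:
        return []
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

decay = [100, 80, 64, 51, 41, 33, 26, 21]   # fake decay-curve samples
smoothed = moving_average(decay)
# smoothed[0] is the mean of the first four samples
```

An FFT of the same window (e.g., via a standard FFT library) would be the frequency-domain counterpart for generating the VDI.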
If I displayed all three waveforms and provided start and stop sliders on the raw data set to allow refinement of the 3rd data set, would it be beneficial for the operator to look at the waveforms this way?
I'd like any thoughts on this if possible. I realize I'm very new to this in-depth a look at the metal detector and how it functions, but I'd really like to understand it better so I can use it on this project, and maybe help progress the software metal detector side of things.