You have the formula - no need for an on-line calculator. The analysis is correct up to the point where you have identified the phase angle of the trigger voltage, but you stopped there. The next step is to work out the phase angle at the point where the ADC actually executes the read, and from that what voltage the ADC should see at that time. This requires you to know the time taken to execute the read, so you can calculate the phase angle at that moment and therefore the voltage (using the inverse of the function you already have). You might assume that the time is zero for ADC1, or you could calculate it from the MCU specs for interrupt processing and instruction execution times (assuming you are running in one-shot mode, which you haven't confirmed but is probably the default for the device), plus the processing time for each subsequent ADC read.
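As a rough illustration of that calculation (Python, and the delay figure is just a placeholder - you would substitute whatever interrupt/ADC timing you work out from the datasheet):

```python
import math

def expected_adc_voltage(v_trigger, amplitude, freq_hz, delay_s):
    """Voltage the ADC should see delay_s seconds after the trigger point.
    Assumes the trigger fires on the rising side of the sine; for a
    falling-edge trigger use (pi - theta_trigger) instead."""
    theta_trigger = math.asin(v_trigger / amplitude)   # phase at the trigger point
    theta_delay = 2 * math.pi * freq_hz * delay_s      # phase accumulated during the delay
    return amplitude * math.sin(theta_trigger + theta_delay)

# Example: 0.8 V trigger on a 1.0 V, 360 Hz sine with an assumed 10 us read delay
print(expected_adc_voltage(0.8, 1.0, 360.0, 10e-6))
```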
Your graph shows a linear relationship for the four ADC readings which is exactly what you would expect if the readings are occurring sequentially rather than interleaved (which you have not confirmed, but is probably the default for the MCU). So it seems that the software correction fixed the initial problem.
Knowing the time required to get each ADC reading would give you a generalized solution, but it might be hard to measure. You can approach the problem from the opposite direction, with some loss of accuracy.
Feed the trigger signal to the ADC and compare the ADC results to the trigger voltage. It doesn't matter whether or not the trigger voltage is exactly 0.8 V because you are only trying to calculate the error in the ADC. This gives you the measurement error adjustment for each ADC. It doesn't include the timing error adjustment for ADC1, however.
The difference data you already have gives you the timing error adjustment for the remaining ADCs. The total adjustment is the measurement error plus the timing error.
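Put another way (the numbers here are invented, just to show how the two adjustments combine):

```python
actual_trigger_v   = 0.800   # trigger level as set/measured externally
adc_trigger_read_v = 0.815   # what the ADC reports when fed that same DC level
measurement_error  = adc_trigger_read_v - actual_trigger_v   # ADC error at 0.8 V

adc2_vs_adc1_v = 0.012       # difference between ADC2 and ADC1 readings (your existing data)
total_adjustment_adc2 = measurement_error + adc2_vs_adc1_v   # apply to ADC2 readings
print(total_adjustment_adc2)
```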
You might be able to use this data to discover the general function for timing error for any frequency, but if you are only testing at a few different frequencies it will be just as easy to apply separate adjustments for each.
Note that the device you are using does not have four ADCs - it has one ADC which is multiplexed amongst the four inputs. If you are trying to increase accuracy by averaging multiple readings then I think that your results are being degraded rather than enhanced by taking multiple readings per MCU, due to the very timing issue that you are trying to resolve.
Hi,
First of all, sorry for the late reply, and thank you for your guidance; it has helped a lot. I've observed that with the PICO RP-2040, when I set the amplitude of the input signal (e.g., 1.0 V, 1.1 V) close to the trigger voltage level (0.8 V), the error percentage is around 9%. However, when I increase the input signal amplitude to 1.4 V, the percentage error rises to approximately 18.5%.
Additionally, I calculated the angles for both the measured and calculated signals. At an input voltage of 1.0 V, the measured angles are 133.016, 133.259, 133.413, and 133.552 degrees, while the calculated angles are consistently around 126.870 degrees. This reflects a difference of about 6 degrees. With an increased input voltage of 1.4 V, the measured angles are 151.459, 151.635, 151.759, and 151.895 degrees, compared to calculated angles of 145.150 degrees, which increases the error in the measured voltage to about 18%.
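For reference, the calculated angles follow from the trigger level and the amplitude; a quick Python check (this assumes the trigger point is the 0.8 V crossing on the falling side of the sine, i.e. theta = 180 - arcsin(0.8 / A)):

```python
import math

for amplitude in (1.0, 1.4):
    theta = 180.0 - math.degrees(math.asin(0.8 / amplitude))
    print(amplitude, round(theta, 3))   # 1.0 -> 126.87, 1.4 -> 145.15
```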
Could you please guide me on how I could minimize this percentage error?
Thanks for your reply, and yes, I read this reply. I am getting worried because the difference I am seeing is not slight. In my case, I am getting a difference of 9% when the input voltage is 1.0 V and the trigger voltage is 0.8 V at 360 Hz, which increases to 18% when I set the input voltage to 1.4 V.
Kind regards
That's why I said to feed the trigger signal to the ADC - you need to know how accurate that ADC reading is. You need to supply the same DC voltage to whatever you are measuring the trigger voltage with (which you haven't described) and the ADC at the same time to see the ADC error. If the error varies with voltage that tells you that the ADC error is non-linear, which would not be unusual. You could repeat the measurement over the range 0 V to 1 V at 0.1 V intervals and plot the results to see the error curve, but you really only need to know what it is at the trigger voltage.
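Something along these lines, if it helps (a minimal Python sketch - read_adc_volts() is a placeholder for however you actually pull readings via LabVIEW/the Pico, not a real API):

```python
def read_adc_volts() -> float:
    # Placeholder: substitute your real ADC read here.
    raise NotImplementedError("replace with your actual ADC read")

applied = [i * 0.1 for i in range(11)]                # 0.0 V .. 1.0 V in 0.1 V steps
errors = []
for v in applied:
    input(f"Set the DC source to {v:.1f} V (confirm on your meter), then press Enter")
    errors.append(read_adc_volts() - v)               # positive = ADC reads high

for v, e in zip(applied, errors):
    print(f"{v:.1f} V applied -> error {e * 1000:+.1f} mV")
```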
Then you can apply the error amount at that trigger voltage with the AC signal to see the real difference that is due to timing - a difference that has been corrected for the known error in the ADC.
The additional calculations you have now done are not useful. You can't reduce the "error" (other than eliminating the difference between the actual trigger voltage and the ADC reading, as described above) because that "error" is the expected result of the ADC measurement being delayed until some (unknown) time after the trigger point. All you can do is work out what it is and allow for it in your calculations.
That would depend on the ultimate purpose of this experiment. But it isn't relevant to the current problem of determining what voltage is being applied to the ADC input under the existing setup (which is the way that I re-stated your objective and which you didn't disagree with).
Thanks for your reply. I am using an AC voltage signal (single source) for both the trigger and ADC channels, and I am measuring it using an oscilloscope. I am trying to show that the ADC values should be near the trigger value.
I will try your suggestion to give a DC signal to both the trigger and ADC channels and then check the ADC error. I did something similar earlier, but at that time I gave a DC signal to the ADC channels and an AC signal to the trigger.
Hey Jeff
You are a GENIUS.
Finally got a tiny bit of information out of this Hasham.
Good luck with interpreting an oscilloscope trace to the accuracy he is trying for. I think you would need one of the better ones that Tektronix or similar have to offer. It would be nice to know exactly what he has got.
Hasham also has not taken much notice of your statements about positive only signals being applied to an ADC.
I personally don't think that is ever going to happen with the devices he is using, or most anything else either.
Cheers and good luck Bob
Thanks for your reply. I am using an oscilloscope to measure the amplitude of the input so I can use it for further calculation. For data collection from ADC channels, I am using LabVIEW code.
Hi Jeff
And if he is using the comparator to generate the trigger pulse there is a delay right at the start of the process between trigger point and output pulse. It might be small in comparison to the input sine wave but these little things tend to add up.
Cheers Bob
Anyway I have just about given up on this thread.
Yes, I am using a comparator. So, if you don't mind, and I apologise if I use the wrong words, could you please guide me to an alternative option or method for providing the trigger signal? I am using an LM311 comparator…
There's no problem using a comparator, but since your discussion started out with comments about an AC signal and a trigger and you wouldn't describe your setup fully, I think there was an assumption that you were using an oscilloscope as the trigger. When I saw the comparator and trimpot I changed my mind about what you were doing and assumed you were using a meter of some sort to measure the trigger voltage setting for the comparator, in which case you simply needed to drive the ADC for testing from the same input, but now you say you are using an oscilloscope. That's the sort of confusion that is caused when you don't show the full details of what you are doing.
I think that says it all.
Several weeks now, over 2 separate posts on the same subject, and still no one has any idea (beyond guesswork) what Hasham is trying to achieve. If he is just trying to prove that an ADC measurement done in this manner (I am only guessing the "manner" bit) is not accurate, he is correct. No need to prove it, because with fixed and other delays it could not possibly be.
Anyway I personally don't feel inclined to go on persevering so am out of here.
Cheers Bob
First of all, I apologize for disturbing you. I carried out a test by supplying a DC input signal to all ADC channels, and all the DC signal values were correct. Subsequently, I switched to using an AC signal source with an input signal amplitude varying from 1.0 V to 1.4 V and a trigger signal of 0.8 V. The frequency was adjusted from 100 Hz to 360 Hz in 50 Hz steps. I observed that the percentage error increased as I raised either the frequency or the voltage amplitude.
Over the past weeks.pdf (126.0 KB)
Without knowing how you did that I would assume that the "error" you saw increase was due to how the frequency change and the timing delay affect your measuring, and was not a change in the error.
So if you know that the ADC is reporting a correct voltage measurement then you know that the difference between the reference voltage and the measured voltage should be entirely due to the timing delay (comparator delay and ADC processing delay).
Since you know the slope of the voltage curve at the point of triggering you can convert that voltage difference into a time at your reference frequency of 100 Hz.
Then you can calculate what the voltage difference should be for that same timing delay at your other frequencies. That gives the expected measured voltage at each frequency.
If you compare those calculated values to the measured values from the ADC at each frequency you have your accuracy measure, which I think is what you were originally after.
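In Python terms it's something like this (the 0.83 V figure is just an example - use your actual 100 Hz reading; the sketch assumes a rising-edge trigger, so the slope sign flips if yours triggers on the falling side):

```python
import math

V_TRIG = 0.8       # comparator trigger level (V)
AMPLITUDE = 1.0    # sine amplitude (V)

def delay_from_reading(v_measured, freq_hz):
    """Infer the timing delay from the measured-vs-trigger voltage difference,
    using the slope of the sine at the trigger point."""
    theta_trig = math.asin(V_TRIG / AMPLITUDE)
    slope = AMPLITUDE * 2 * math.pi * freq_hz * math.cos(theta_trig)   # dV/dt in V/s
    return (v_measured - V_TRIG) / slope                               # seconds

def expected_reading(delay_s, freq_hz):
    """Expected ADC reading at another frequency for the same fixed delay."""
    theta_trig = math.asin(V_TRIG / AMPLITUDE)
    return AMPLITUDE * math.sin(theta_trig + 2 * math.pi * freq_hz * delay_s)

delay = delay_from_reading(0.83, 100.0)    # e.g. ADC reports 0.83 V at 100 Hz
print(expected_reading(delay, 360.0))      # predicted ADC reading at 360 Hz
```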
As you already know the difference in measured voltage between the different ADC readings for the one MCU, you can also demonstrate that those additional readings aren't adding anything useful to your results (unless for some reason you want to calculate ADC delay separately from comparator delay).