Stream Data contains spikes to 10V when computer is under load | LabJack
Christopher Rooney

Hi there,

I've been working on a project that is supposed to replace a very old LabVIEW program. It worked as expected when I tested it on my laptop, but when I tried it on my Raspberry Pi, all of the voltage readings were off by ~10%. I take my voltage readings by using Stream Data and then averaging all of the resulting data points.

I subsequently installed it on the Windows computer that was running LabVIEW, and the voltage readings were back to normal.

However, when the computer is under load (it's a very old computer, so opening Chrome does the trick) the voltage readings are inaccurate again, in about the same way that I observe on the Pi.

To debug, I had my program save the raw stream data when it observed the inaccurate voltage reading, and the culprit is one value of 10 V among around 200 values of 0.18 V in the stream (as shown in the attachment). Thus when I average the data, I get a very inaccurate voltage measurement (0.24 V instead of 0.18 V). I could implement a filtering algorithm that discards outliers, since I expect my data to vary smoothly, but I would rather see if I can fix this issue at its source.

Any advice would be appreciated! If you feel like you need to go through my code, it's available on GitHub: https://github.com/NanoExplorer/Zeus2-cycle-box and the relevant files are probably labjack.py and HK_server.py. These two files are based on the example Python script called something like "labjack threaded streaming". I should note I'm using Python 3.

LabJack Support

You are likely seeing the end of autorecovery (the 10 V sample), which means a stream buffer overflow occurred. Stream buffer overflows occur when the application cannot read stream samples from the U6 stream buffer fast enough to keep up with the scan rate, which can happen when your system is under sufficient load.

Raw 0xFFFF samples read back as roughly 10 V. When autorecovery ends, a scan of 0xFFFF samples is inserted to separate new data from pre-autorecovery data, and returnDict["missed"] will indicate the number of scans that were missed during autorecovery. Low-level details on autorecovery are here:

https://labjack.com/support/datasheets/u6/low-level-function-reference/s...

In your code, check whether "missed" is nonzero, which indicates that a stream buffer overflow and autorecovery occurred. Then check whether that is when the scan of 10 V (raw 0xFFFF) samples occurs. Disregard those 10 V samples and account for the missed scans in their place.
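The filtering described above can be sketched in Python. This is a minimal, hypothetical helper (the function name and `DUMMY_THRESHOLD` are assumptions, not part of the LabJack API); it assumes `voltages` holds one channel's calibrated readings from a stream packet and `missed` comes from returnDict["missed"]:

```python
# Hypothetical sketch: drop the autorecovery separator scan before averaging.
# The helper name and threshold are assumptions for illustration only.

DUMMY_THRESHOLD = 9.9  # raw 0xFFFF samples read back as roughly 10 V


def clean_scan(voltages, missed):
    """Return (valid_readings, missed_scans) with separator samples removed."""
    if missed == 0:
        # No autorecovery occurred; all samples are real data.
        return list(voltages), 0
    # Autorecovery occurred: discard the ~10 V dummy samples that mark the
    # boundary between pre- and post-autorecovery data.
    valid = [v for v in voltages if v < DUMMY_THRESHOLD]
    return valid, missed


readings, lost = clean_scan([0.18, 0.18, 10.0, 0.18], missed=37)
average = sum(readings) / len(readings)  # ~0.18 V, no longer skewed by 10 V
```

A threshold just below 10 V works here because the real signal sits near 0.18 V; a signal that legitimately approaches 10 V would need the raw 0xFFFF values checked instead.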

You could try bumping up the priority of your Python application's process and see if that helps with preventing the buffer overflows when your system is under load.
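On Linux (including the Raspberry Pi), one way to raise the process priority from within the script itself is the standard-library `os.nice()` call. This is only a sketch; lowering niceness below zero normally requires root, so the helper falls back gracefully when permission is denied (on Windows you would use Task Manager or a library such as psutil instead):

```python
import os


def raise_priority(delta=-10):
    """Try to lower the process niceness (raise priority) on a Unix system.

    Returns the resulting niceness. Without sufficient privileges the
    request is denied, and the current (unchanged) niceness is returned.
    """
    try:
        return os.nice(delta)
    except PermissionError:
        return os.nice(0)  # os.nice(0) just reports the current niceness


niceness = raise_priority()
```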

Christopher Rooney

All right! You've put me on the right track.

I must've lost the error detection logic that was present in the original LabJack example script while I was modifying it. I added it back in, and sure enough returnDict['missed'] can be quite high when I'm doing other things on the PC.

I set the priority of the Python process to high (and subsequently 'realtime'), but opening Chrome still injected lots of missed samples into the stream.

Do you think the best course of action would be to just filter out all the 0xFFFF readings? I'm not very particular about how many records the LabJack returns; I'll just be averaging all of them. I would, however, like to be sure that data collection isn't too far behind. A few seconds would be tolerable, but if I'm receiving voltage measurements from 5 or 10 s ago that could become a problem.

Thanks for your help!

Christopher Rooney

After doing some more testing, setting the priority of the Python process higher definitely reduces the number of errors I'm seeing. Thanks for the idea!

LabJack Support

Do you think the best course of action would be to just filter out all the 0xFFFF readings?

If they are the 0xFFFF dummy samples of the separator scan, yes you should filter them out as they are not valid readings. If you need to account for the missed samples (say for scan/sample timing), they occurred at the separator scan of 0xFFFFs.
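Regarding the concern about data collection falling behind: each missed scan corresponds to 1/scanRate seconds of data, so summing returnDict["missed"] over a session gives a rough bound on how much data has been lost. A minimal sketch (the function name, the scan rate, and the tolerance are assumptions for illustration):

```python
def lag_seconds(total_missed_scans, scan_rate_hz):
    """Seconds of data lost to autorecovery so far.

    One missed scan spans 1/scan_rate_hz seconds, so this gives a rough
    bookkeeping figure for how stale the averaged readings may be.
    """
    return total_missed_scans / scan_rate_hz


# e.g. 2500 missed scans at 5000 scans/s is half a second of lost data
lag = lag_seconds(2500, 5000.0)  # 0.5
if lag > 5.0:  # hypothetical tolerance from the discussion above
    print("Warning: stream data may be significantly stale")
```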