3.2 - Stream Mode [U3 Datasheet]

Stream Mode Overview

The highest input data rates are obtained in stream mode, which is supported with U3 hardware version 1.21 or higher. Hardware version 1.21 started shipping in late August of 2006. Contact LabJack for information about upgrading older U3s. Stream is a continuous, hardware-timed input mode where a list of channels is scanned at a specified scan rate. The scan rate specifies the interval between the beginnings of successive scans. The samples within each scan are acquired as fast as possible.

As samples are collected, they are placed in a small FIFO buffer on the U3, until retrieved by the host. The buffer typically holds 984 samples, but the size ranges from 512 to 984 depending on the number of samples per packet. Each data packet has various measures to ensure the integrity and completeness of the data received by the host.

Since the data buffer on the U3 is very small, it uses a feature called auto-recovery. If the buffer overflows, the U3 will continue streaming but discard data until the buffer is emptied; data is then stored in the buffer again. The U3 keeps track of how many packets are discarded and reports that value. Based on the number of packets discarded, the UD driver adds the proper number of dummy samples (-9999.0) so that the correct timing is maintained.
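On the host side, those -9999.0 placeholders can be filtered out while keeping count of how much data was lost. A minimal sketch in plain Python (not driver code; the names are illustrative):

```python
# Sketch: separate real readings from the -9999.0 dummy samples the UD
# driver inserts to preserve timing after auto-recovery.
DUMMY = -9999.0

def split_valid(values):
    """Return (valid_readings, number_of_dummy_samples)."""
    valid = [v for v in values if v != DUMMY]
    return valid, len(values) - len(valid)

readings, lost = split_valid([1.23, 2.34, DUMMY, DUMMY, 3.45])
# readings -> [1.23, 2.34, 3.45]; lost -> 2 discarded samples
```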

The table below shows various stream performance parameters. Some systems might require a USB high-high configuration to obtain the maximum speed in the last row of the table. A “USB high-high” configuration means the U3 is connected to a high-speed USB2 hub which is then connected to a high-speed USB2 host. Even though the U3 is not a high-speed USB device, such a configuration does often provide improved performance.

Stream data rates over USB can also be limited by other factors such as speed of the PC and program design. One general technique for robust continuous streaming would be increasing the priority of the stream process.

The max sample rate of the U3 is 50 ksamples/second. The max scan rate depends on how many channels you are sampling per scan:

Sample => A reading from one channel.
Scan => One reading from all channels in the scan list.
SampleRate = NumChannels * ScanRate

ScanRate = SampleRate / NumChannels

For example, if streaming 5 channels at ResolutionIndex=0 and all at Range=+/-10V, the max scan rate is 10 kscans/second (calculated from 50 ksamples/second divided by 5).
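The same arithmetic as a one-line helper (a sketch; the 50 ksamples/second limit is the figure from this section):

```python
U3_MAX_SAMPLE_RATE = 50000  # samples/second, per this section

def max_scan_rate(num_channels, max_sample_rate=U3_MAX_SAMPLE_RATE):
    """ScanRate = SampleRate / NumChannels."""
    return max_sample_rate / num_channels

# 5 channels -> 10000.0 scans/second, matching the example above
```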

Table 3.2-1. Streaming at Various Resolutions

| Low-Level Res Index | UD Res Index | Max Stream (Samples/s) | ENOB (RMS) | ENOB (Noise-Free) | Noise (Counts) | Interchannel Delay (µs) |
|---------------------|--------------|------------------------|------------|-------------------|----------------|-------------------------|
| 0                   | 100          | 2500                   | 12.8       | 10                | ±2             | 320                     |
| 1                   | 101          | 10000                  | 11.9       | 9                 | ±4             | 82                      |
| 2                   | 102          | 20000                  | 11.3       | 8.4               | ±6             | 42                      |
| 3                   | 103          | 50000                  | 10.5       | 7.5               | ±11            | 12.5                    |



Full resolution streaming is limited to 2500 samples/s, but higher speeds are possible at the expense of reduced effective resolution (increased noise). The first column above is the index passed in the Resolution parameter to the low-level StreamConfig function, while the second column is the corresponding index for the Resolution parameter in the UD driver. In the UD driver, the default Resolution index is 0, which corresponds to automatic selection. In this case, the driver will use the highest resolution for the specified sample rate.

ENOB stands for effective number of bits. The first ENOB column is the commonly used “effective” resolution, and can be thought of as the resolution obtained by most readings. This data is calculated by collecting 128 samples and evaluating the standard deviation (RMS noise). The second ENOB column is the noise-free resolution, and is the resolution obtained by all readings. This data is calculated by collecting 128 samples and evaluating the maximum value minus the minimum value (peak-to-peak noise). Similarly, the Noise Counts column is the peak-to-peak noise based on counts from a 12-bit reading.
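As a sketch of the arithmetic (not LabJack code), both ENOB figures can be computed from a block of raw 12-bit count readings. Noise-free resolution is 12 minus log2 of the peak-to-peak spread, which reproduces the table values, e.g. ±11 counts is 22 counts peak-to-peak, and 12 - log2(22) ≈ 7.5 bits:

```python
import math

def enob(counts, bits=12):
    """Effective (RMS) and noise-free resolution from raw count readings."""
    n = len(counts)
    mean = sum(counts) / n
    rms = math.sqrt(sum((c - mean) ** 2 for c in counts) / n)  # RMS noise
    p2p = max(counts) - min(counts)                            # peak-to-peak
    enob_rms = bits - math.log2(rms) if rms > 0 else float(bits)
    enob_nf = bits - math.log2(p2p) if p2p > 0 else float(bits)
    return enob_rms, enob_nf

# Readings spanning +/-2 counts (4 counts peak-to-peak) give a noise-free
# resolution of 12 - log2(4) = 10 bits, as in the first row of the table.
```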

Interchannel delay is the time between successive channels within a stream scan.


With a LabJack U3-HV, can I capture a 20 kHz signal?

The U3 can scan 1 channel at 50 kscans/second, so if you do that with a 20 kHz signal you will get just 2 or 3 samples per cycle.  So the answer depends on what information you want from the signal.  If you want to display a nice looking waveform, you need many more points per cycle.

For a fluorimeter operated via LabVIEW, which would be the best DAQ to use and why?

Sounds like you have some sort of device that provides a signal you want to acquire.  We will need details about the signal.  I suggest you post on our forum and provide a link to details about the output signal from your fluorimeter.

Hello. I need to measure the time it takes to perform the conversion when I put my analog signal into one of the AIN inputs, using LabVIEW. How can I do it?

I am trying to understand the T7 data acquisition rate. If a script contains a ScansPerRead of 500 and a ScanRate of 10,000, does that mean the T7 is acquiring 50k data reads/second?

This is the U3 User's Guide, and the U3 uses the UD library on Windows.  The T7 uses the LJM library on all platforms.

UD or LJM:  SampleRate = NumChannels * ScanRate.  So if you have ScanRate = 10 kscans/second and NumChannels = 5 (NumAddresses on the T7), you are sampling at 50 ksamples/second and that is the number you use when looking at data rate limits.

LJM:  ScansPerRead is not directly related to ScanRate or SampleRate.  ScansPerRead controls how many scans you read per call to the read function.  A typical value in this example would be 5k, so that the read call would wait and return 5k scans per call (which is a 1/2 second of data).

UD:  The number of scans you retrieve per read is controlled by the value parameter in the request with ioType LJ_ioGET_STREAM_DATA (i.e. the stream read call).  See Section 4.3.7 of the U3 User's Guide.
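The relationships above can be summarized numerically (a sketch using the example values; ScansPerRead is the LJM name, and the UD equivalent is the value parameter of the LJ_ioGET_STREAM_DATA request):

```python
scan_rate = 10000                        # scans/second (hardware timed)
num_channels = 5                         # channels (addresses) per scan
sample_rate = scan_rate * num_channels   # 50000 samples/s: compare to limits
scans_per_read = 5000                    # scans returned per read call
seconds_per_read = scans_per_read / scan_rate  # 0.5 s of data per call
```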

This section has been updated with clarification on sample versus scan, as have the T7/LJM sections:





I have found that I can read and write data in stream mode at 50000 samples/s and resolution 0, causing a number of errors in each block equal to numPackets, but there are no missed readings. If no information is lost because of buffer overflow, what errors are actually occurring?

Is this low-level stream code or through the UD library?  What errorcode are you getting?

Low-level stream code. I was just looking at the number of errors per block, as given by the key 'error' to streamData.  From 5.2.12 in the user guide that might be errorcode 11, though I'm not sure.

The error key's value is a count of the errors detected in byte 11 of the StreamData response packets. Byte 11 in each packet is an errorcode. You could do something like this to print the StreamData packet error codes while streaming, to give a better idea of what is happening:

            # StreamData responses are 64-byte packets; byte 11 of each
            # packet is an errorcode. Iterate over the errorcode bytes and
            # report any nonzero values.
            for cnt, err in enumerate(r['result'][11::64]):
                errNum = ord(err)
                if errNum != 0:
                    # Error detected in this packet
                    print("Packet %d error: %d" % (cnt, errNum))

Error codes can be found here:


Keep in mind that for a 50k sample rate you may need a high-speed USB 2.0 hub as described in paragraph 4 on this page. If you cannot read the stream data from the U3 at the rate configured, that can lead to a stream buffer overflow and put the device into autorecovery mode. Autorecovery mode is documented here:


With the STREAM_AUTORECOVER_ACTIVE error you will be getting the valid data that was buffered on the U3 before the overflow occurred.

I am encountering a STREAM_SCAN_OVERLAP (errorcode 55) when writing the streamed data from 1 channel to a file, using what seems like appropriate Resolution and sampleFrequency (e.g. 10000 samples/s at resolution index 1, 2, or 3). I don't see the error for 2500 samples/s and resolution index 0. Why might this be?

I'm not sure this is relevant, but I am also manually triggering the stream (since the U3 doesn't support triggering) by monitoring a CIO channel and waiting for a TTL signal.

What are you using for software?  See if you can reproduce this using LJStreamUD.exe, and then you can tell us some settings to reproduce it ourselves.

I am using a U3 in stream mode with 6 AI channels at 1 kS/s. The buffer is read approximately every 25 ms by my LabVIEW application. I am also using eDAC and two LJUD eDO calls to issue commands every 25 ms.

For reasons unknown to me, the stream will crash with error 15 (Stream packet received out of sequence, 6015 in LabVIEW). I have also briefly seen error 1008 (7008 in LabVIEW). What are potential causes of these errors? 

My suspicion is that eDO and eDAC executions are somehow overlapping with the stream read and causing the misread.

I made an example that does just what you describe and it works fine for me.  I also made another example that does an add/go/get block rather than the 3 e-function calls, since the add/go/get is more efficient.  To get the examples post on the forum or email [email protected].

alex3:

I am using about a dozen U3s, and each U3 is reading four analog signals.  If I specify a scan frequency of X (in my case 200 Hz), what is the accuracy of the actual scan frequency?  I'm finding that we are receiving scans from most of our U3s at a frequency of 200.6 Hz.  Is there a +/- percentage bound that you can specify?

labjack support:

2 things to consider for stream scan rate accuracy:

1.  Due to integer math and rounding limitations, the actual scan rate according to the U3 clock might be different than what you requested.  Look at the return value of the START_STREAM call to see the actual scan rate.  See Section 4.3.7.

2.  The U3 has an RC clock, rather than quartz based, so the clock accuracy of the U3 is only 1.5% per Appendix A.  Our other devices have quartz based clocks which are much more accurate, such as the U6 which is 30 ppm (0.003%) per its Appendix A.
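A quick numeric check (a sketch) that the observed 200.6 Hz is within the U3's RC-clock spec:

```python
requested_hz = 200.0
observed_hz = 200.6
u3_clock_tolerance = 0.015   # +/-1.5% per the U3's Appendix A

error_fraction = (observed_hz - requested_hz) / requested_hz  # ~0.3% fast
within_spec = abs(error_fraction) <= u3_clock_tolerance       # True
```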

alex3:

I'm using the exodriver and LabJackPython.  What's the easiest way to grab the actual scan rate?  The functions described in Section 4.3.7 are Windows only.

labjack support:

The scan rate is determined by the internal stream clock frequency and the 16-bit scan interval configured by the StreamConfig low-level function/packets (the Windows driver configures stream mode with this too).

Currently LabJackPython does not store that information and there is no low-level function to read those values (only set), so you would need to modify the u3.py source code to add that functionality. So from the LabJackPython download, edit src/u3.py and in the streamConfig method add a return at the end of the function like:

        freq = freq/ScanInterval
        if SamplesPerPacket < 25:
            #limit to one packet
            self.packetsPerRequest = 1
        else:
            self.packetsPerRequest = max(1, int(freq/SamplesPerPacket))
            self.packetsPerRequest = min(self.packetsPerRequest, 48)
        #User added code
        return freq

Build and reinstall the modified u3.py file (python setup.py install). Then the streamConfig return value is the actual scan frequency. Alternatively, just display the value with print inside u3.py if you don't need a variable.

alex3:

I think this might be a duplicate post but I'm not sure.

I am using a bunch of U3's, and we've specified a scan frequency of 200Hz.  We are finding that the scan frequency is closer to 200.6Hz.  What I mean is that over a 48 hour period, we receive 0.3% more scans than expected.  What is the accuracy of the scan frequency?

labjack support:

We answered your previous question (which is the same as this one) on the day it was posted, but we just made both your posts public today. The response is above your post today.

GMcBane:

We are using U3s to collect fluorescence decay data.  The fluorescence decay looks like an exponential decay from an initially high level, similar to what you see if you put a square wave through a low-pass filter with a 1 ms RC time constant.  

The user's guide says that the low-voltage inputs are "essentially connected directly to the input of an SAR ADC".  Many such ADCs have sample-and-hold amplifiers at the input, so that the input voltage is captured when the conversion is first triggered and held steady while the ADC goes through its try-and-compare sequence. If there's no S/H at the input, then strange things can happen if the input voltage varies as the successive approximations are going on.

Does the U3's SAR ADC have this front-end sample and hold circuit?



labjack support:

I looked closely at the chip's datasheet; it mentions tracking and I can see the sample capacitors, but it does not explicitly say "track & hold".  That is, it does not clearly state that the sample capacitors are detached from the input connections during the conversion process.  I assume that is true, but we have contacted the manufacturer for confirmation.

Note that most U3 readings are oversampled.  Even at ResolutionIndex=3 each sample is oversampled 2x:


GMcBane:

> Note that most U3 readings are oversampled.  Even at ResolutionIndex=3 each sample is oversampled 2x:

I thought so, though it's not clearly stated in the documentation, and I can't tell what the oversampling factors are for other resolution index values.  (Also, you're actually oversampling a 10-bit ADC, right?)  It's also unclear what the relation is between "QuickSample" which is described in some parts of the documentation, and the resolution index which is described in the stream mode docs.

When you oversample, I presume you start a new full conversion for each subsample, with whatever input voltage you have when the new subsample begins, rather than doing a new SAR conversion on the sampled-and-held value from the first subsample.  Is that correct?  In that case, the returned value is some sort of average of the time-varying input signal across the full oversampling period, with each relevant subsample taken at the beginning of each subsample period.  For instance, if at resolution index 2 you use 4 subsamples (a guess!), and a single 10-bit SAR conversion requires 200 ns, then a single-channel sample reflects an average of the input signal at times 0, 200, 400, and 600 ns after the conversion begins, and is available for placement in the buffer after 800 ns.

Do I have this right?

labjack support:

Yes, the U3 has a 10-bit ADC chip that is fast enough, linear enough, and has the right amount of white noise, to provide good 12-bit samples with oversampling.  QuickSample and ResolutionIndex are essentially the same thing ... both control how much oversampling happens.  QuickSample is for command-response mode and ResolutionIndex is for stream mode (See Section 3.0 for clarification on those modes).

Yes, your description is correct.

Imad Lehmidi:

Stream mode provides a sample rate of 50 kHz. I have 2 questions: can I change this sample rate, and does it have the same value in command-response mode?

Thanks in advance

labjack support:

Are you asking about an example for some particular language provided by us?

Command/response is not as fast as stream mode.  See Sections 3.0, 3.1 and 3.2:



Imad Lehmidi:

I'm communicating with my U3 via LabVIEW, so for example while using the "U3 eAIN loop with chart.vi" there's an option to change the "ms per iteration". Does varying that value affect the sampling rate of the ADC?

labjack support:

Yes, in effect, but the mechanisms are different:

Command-Response:  The sampling rate is controlled by software.  For example, you make a loop in your program that uses LabVIEW timing to make the loop run every 100 ms.  Each time through the loop it reads 1 scan from your channels.  Thus the host software is controlling the sample rate.

Stream:  The sampling rate is controlled by hardware.  You tell the U3 how fast to scan a list of channels, and it then does it.
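For illustration, a software-timed command-response loop looks like this in plain Python (a sketch, not LJUD code; read_scan is a hypothetical stand-in for per-channel eAIN reads):

```python
import time

def read_scan():
    # Hypothetical stand-in for reading one value from each channel
    # (e.g. eAIN calls through the driver).
    return [0.0, 0.0]

INTERVAL = 0.100          # 100 ms per iteration, set by the host software
next_tick = time.monotonic()
scans = []
for _ in range(5):
    scans.append(read_scan())  # one command-response scan
    next_tick += INTERVAL      # like LabVIEW's "Wait Until Next ms Multiple"
    time.sleep(max(0.0, next_tick - time.monotonic()))
```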

Imad Lehmidi:

And I assume I shouldn't set the time between iterations below the loop's execution time, since in the source code I noticed the use of the "Wait Until Next ms Multiple" function. To avoid the loop taking more time to execute than the specified millisecond multiple, I think I should look at the tables at https://labjack.com/support/datasheets/u3/operation/command-response to keep my loop execution time within a safe margin of the specified iteration timing.

labjack support:

That is correct.  If concerned, I suggest you keep an eye on the actual time per iteration.