2.6.2 - Converting Binary Readings to Voltages

Converting Binary Readings to Voltages Overview

This information is only needed when using low-level functions and other ways of getting binary readings. Readings in volts already have the calibration constants applied. The UD driver, for example, normally returns voltage readings unless binary readings are specifically requested.

Following are the nominal input voltage ranges for the analog inputs. The low-voltage ranges apply to all analog inputs on the U3-LV and to AIN4-AIN15 on the U3-HV; the high-voltage ranges apply to the first four channels (AIN0-AIN3) on the U3-HV.

Table 2.6.2-1. Nominal Analog Input Voltage Ranges for Low-Voltage Channels

  Range            Max V    Min V
  Single-Ended      2.44     0
  Differential      2.44    -2.44
  Special 0-3.6     3.6      0

Table 2.6.2-2. Nominal Analog Input Voltage Ranges for High-Voltage Channels

  Range              Max V    Min V
  Single-Ended        10.3    -10.3
  Differential         N/A      N/A
  Special -10/+20     20.1    -10.3

Note that the minimum differential input voltage of -2.44 volts means that the positive channel can be as much as 2.44 volts less than the negative channel, not that a channel can measure 2.44 volts less than ground. The voltage of any low-voltage analog input pin, compared to ground, must be in the range -0.3 to +3.6 volts.

The “special” range (0-3.6 on low-voltage channels and -10/+20 volts on high-voltage channels) is obtained by doing a differential measurement where the negative channel is set to the internal Vref (2.44 volts). For low-voltage channels, simply do the low-voltage differential conversion as described below, then add the stored Vref value. For high-voltage channels, do the same thing, then multiply by the proper high-voltage slope, divide by the single-ended low-voltage slope, and add the proper high-voltage offset. The UD driver handles these conversions automatically.
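
As a rough sketch of those steps (the variable names below are placeholders for the slope, offset, and Vref values read from the calibration flash described in Section 5.4, not fixed identifiers):

    /* Low-voltage "special" 0-3.6 range: differential conversion plus the stored Vref. */
    double specialLowVoltage(double bits, double diffSlope, double diffOffset, double vref)
    {
        return (diffSlope * bits) + diffOffset + vref;
    }

    /* High-voltage "special" -10/+20 range: take the low-voltage result, scale it by the
     * high-voltage slope over the single-ended low-voltage slope, then add the HV offset. */
    double specialHighVoltage(double bits, double diffSlope, double diffOffset, double vref,
                              double hvSlope, double lvSlopeSE, double hvOffset)
    {
        double lv = (diffSlope * bits) + diffOffset + vref;
        return (lv * hvSlope / lvSlopeSE) + hvOffset;
    }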

Although the binary readings have 12-bit resolution, they are returned justified as 16-bit values, so the approximate nominal conversion from binary to voltage is:

Volts(uncalibrated) = (Bits/65536)*Span (Single-Ended)

Volts(uncalibrated) = (Bits/65536)*Span - Span/2 (Differential)

Where Span is the maximum voltage minus the minimum voltage from the tables above. Binary readings are always unsigned integers.

The actual nominal conversions are provided in the tables below, and should be used if the actual calibration constants are not read for some reason. Most applications will use the actual calibration constants (Slope and Offset) stored in the internal flash:

Volts = (Slope * Bits) + Offset
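
As a rough illustration, here is a minimal sketch of both conversions; the span values are the nominal ones from the tables above, and the function names are only illustrative:

    /* Sketch: convert a 16-bit-justified U3 binary reading to volts. */
    double nominalSingleEnded(unsigned int bits, double span)
    {
        return ((double)bits / 65536.0) * span;        /* e.g. span = 2.44 for a LV single-ended input */
    }

    double nominalDifferential(unsigned int bits, double span)
    {
        return ((double)bits / 65536.0) * span - span / 2.0;
    }

    double calibrated(unsigned int bits, double slope, double offset)
    {
        return slope * (double)bits + offset;          /* Volts = (Slope * Bits) + Offset */
    }

    /* For example, a single-ended low-voltage reading of 32768 counts:
     * nominalSingleEnded(32768, 2.44) is roughly 1.22 V. */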

Since the U3 uses multiplexed channels connected to a single analog-to-digital converter (ADC), all low-voltage channels have the same calibration for a given configuration. High-voltage channels have individual scaling circuitry out front, and thus the calibration is unique for each channel.

See Section 5.4 for detail about the location of the U3 calibration constants.

14 comments

Hello,

 

I have recently purchased a U3-HV to act as a datalogger for a number of small "shed" projects. I am currently trying to log a simple 1-5 V signal output from a thermal gas flow meter. Not being an electrical guru (I'm a chemical engineer), I find the amount of circuitry knowledge assumed in your explanations very difficult to follow. However, I am having a lot of trouble simply getting a correct signal to appear in LJLogUD.

My first issue is that there appears to be a large amount of interference in the measured signals, particularly a random cyclic interference which is different on each single-ended wire. I find not being able to directly use a differential measurement (due to it being converted directly into binary) enormously aggravating, particularly when the converted binary (i.e. by dividing by 65536) still gives me a resultant cyclic interference. Please help; I had very high hopes for this little device, but have so far found other dataloggers far more user friendly and accurate.

I think Section 2.6.3.4 would be better for this topic, although to continue in detail about your particular signal I would start a forum topic or email [email protected].

Start with all connections removed except for USB and your signal.  Connect your signal to AIN0 and GND, monitor it in the test panel in LJControlPanel or in LJLogUD, and let us know what you see.

We can't provide a single calibration for the differential high-voltage channels on the U3-HV, because the calibration depends on the common-mode voltage on both inputs, so we leave it to the user to calibrate in their actual system.  However, a differential measurement seldom makes sense with high-level signals.  The differential app note has more information.

isp:

I can't find a similar formula for the T7/T7-Pro we use.

Could you point me to the correct calculation?

The code I found in one of the examples, C_T7_TCP_Modbus_Stream/src/calibration.c, does the following:

    if(*volts < devCal->HS[gainIndex].Center)
        *volts = (devCal->HS[gainIndex].Center - rawAIN) * devCal->HS[gainIndex].NSlope;
    else
        *volts = (rawAIN - devCal->HS[gainIndex].Center) * devCal->HS[gainIndex].PSlope;
    return 0;
So it doesn't use the offset at all to calculate the value in volts.

That would be okay, but changing the range results in different voltages read from the same output.

Specifically here is what I have:

Range: 0.01: Slope: 3.1554174e-7 Center: 33342.086 Offset: -0.010516284 Raw-value: 33529 -> Volts: 5.897919e-5
Range: 0.1: Slope: 3.1550958e-6 Center: 33493.71 Offset: -0.10566833 Raw-value: 35948 -> Volts: 0.007743517
Range: 1.0: Slope: 3.1550582e-5 Center: 33508.027 Offset: -1.057076 Raw-value: 35369 -> Volts: 0.05871477
Range: 10.0: Slope: 3.1554952e-4 Center: 33510.492 Offset: -10.572616 Raw-value: 33716 -> Volts: 0.064847894
So for the 0.1 V range, a 0.0077 V error seems way too high for a 14-bit ADC - more like an 8-bit one.

I totally might be doing something wrong elsewhere, but this is the first thing I found that looks wrong.

Thank you in advance!

LabJack Support:

I think the problem you are running into is that the T7 is using the high-resolution (HR) converter, and we are applying the high-speed (HS) constants. The T7 defaults to a resolution index of 0 which is "automatic." When set to automatic the T7-Pro will use index 9 for normal command-response operations. We either need to adjust the calibration routine to use HR in place of HS, or set the AIN's resolution somewhere between 1 and 7 to force the T7-Pro to use the HS converter.
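
In code, that first option might look something like the following (a sketch only, assuming the calibration struct in the example also carries an HR set of constants parallel to HS, and that resolutionIndex is tracked by the caller):

    /* Select HS or HR constants by resolution index before converting.
     * Indices 9-12 use the T7-Pro's high-resolution converter; lower
     * indices use the high-speed converter. */
    double center, nSlope, pSlope;
    if(resolutionIndex >= 9) {
        center = devCal->HR[gainIndex].Center;
        nSlope = devCal->HR[gainIndex].NSlope;
        pSlope = devCal->HR[gainIndex].PSlope;
    } else {
        center = devCal->HS[gainIndex].Center;
        nSlope = devCal->HS[gainIndex].NSlope;
        pSlope = devCal->HS[gainIndex].PSlope;
    }
    if(rawAIN < center)
        *volts = (center - rawAIN) * nSlope;
    else
        *volts = (rawAIN - center) * pSlope;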

isp:

I posted a comment already, but forgot to ask you to confirm whether the conversion formula I am using is correct, even though it doesn't use the offset from the calibration values. You didn't write anything about it:

    if(*volts < devCal->HS[gainIndex].Center)
        *volts = (devCal->HS[gainIndex].Center - rawAIN) * devCal->HS[gainIndex].NSlope;
    else
        *volts = (rawAIN - devCal->HS[gainIndex].Center) * devCal->HS[gainIndex].PSlope;

LabJack Support:

The offset is not used. The PSlope is used when the binary reading is >= the center point and the NSlope is used when the binary reading is below the center point.

isp:

One more thing I am confused about is what is called streaming.

There are 3 (or 2.5) modes of operation:

  • Single scan in CR or command-response mode (returns all AINs as floating-point values). I am unsure if I can obtain a digital input scan together with and synchronized with it; if so, how? (We have a Gray encoder that we need to synchronize with the sensor.)
  • Buffer of scans in CR or command-response mode (returns all channels streaming as 16-bit integers, including digital inputs, so no problem here)
  • Buffer of scans in SP or spontaneous mode (pushes all channels streaming as 16-bit integers, including digital inputs, so no problem here either)

However, the streaming table at https://labjack.com/support/datasheets/t7/appendix-a-1 says that high resolution is not supported. I assume that is because 16 bits can't hold 22-bit values.

  • Am I correct that both the second and third options above (the buffered ones) return 16-bit values, and that I can therefore only use the HS conversion?
  • Is there any way of achieving 10-100 Hz sampling with HR conversion and synchronization with digital inputs?

LabJack Support:

There are two modes of communication. Command-Response (CR) and Spontaneous (SP). Command-Response is a two-way communication. The host computer sends a request and the LabJack sends a response. Spontaneous is a one-way communication. The LabJack sends packets to the host without a request. 

Stream mode defaults to spontaneous, but can be set to command-response. In CR mode stream will buffer the data in RAM and the host will have to send requests to read out the buffered data.

"Am I correct that both..." Stream mode produces 16-bit binary data.

"Is there any way of achieving 10-100Hz..." Yes. Command-Response polling can be as fast as 1 kHz. The max speed is heavily dependent on the chosen operations and communication medium. CR is not limited to a single operation. To make usage easier we provide functions that perform only a single function, such as LJM_eWriteName and LJM_eReadName. Functions such a LJM_eReadNames, LJM_eWriteNames, and LJM_eNames can read or write multiple registers in a single packet. So you can read analog inputs and digital IO in a single packet. Keeping the operations in a single packet removes timing errors due to operating systems and communications. With those errors out of the way firmware can quickly read multiple inputs with consistent timing. Jitter will be on the order of 10 µs. You can also measure the time between measurements in a packet by adding reads of the CORE_TIMER register. The core timer register counts at 40 MHz, so you can measure times with 50 ns resolution.

More detail can be found here: https://labjack.com/support/datasheets/t7/communication
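
As an illustrative sketch of that kind of polled multi-register read with the LJM library (AIN0, FIO_STATE, and CORE_TIMER are registers from the T7 Modbus map; the device selection, loop count, and output format here are arbitrary, and error handling is omitted):

    #include <stdio.h>
    #include <LabJackM.h>

    int main(void)
    {
        int handle, errAddr, i;
        const char *names[3] = {"AIN0", "FIO_STATE", "CORE_TIMER"};
        double values[3];

        LJM_OpenS("T7", "ANY", "ANY", &handle);

        for(i = 0; i < 100; i++) {
            /* One packet: analog input, FIO digital states, 40 MHz core timer. */
            LJM_eReadNames(handle, 3, names, values, &errAddr);
            printf("AIN0=%f V  FIO_STATE=%.0f  CORE_TIMER=%.0f\n",
                   values[0], values[1], values[2]);
        }

        LJM_Close(handle);
        return 0;
    }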

isp:

Your guess is wrong, but I think it points me in the right direction. I am using HS values (confirmed) because I am reading in stream/spontaneous mode with a sampling rate of 30 Hz on 1 single-ended channel at resolution index 7 and settling 100:

[ ]  READ-MODE     = :SP
[ ]  RESOLUTION    = 7
[ ]  SAMPLING-RATE = 30
[ ]  SETTLING      = 100.0d0

Unfortunately it is hard for me to confirm that this combination makes sense, as the details are somewhat scattered across the documentation. As a suggestion, it would be convenient to have a single table for each reading mode (command-response, spontaneous, single reading) covering all valid combinations of the relevant parameters, similar to https://labjack.com/support/datasheets/t7/appendix-a-1 but including all the values and maximizing precision for a given number of channels and sampling rate.

It seems to me that the combination of settings I use is invalid:

- settling time might be too low (unsure, as resolution index 7 is missing here: https://labjack.com/support/appnotes/SettlingTime)

- high gains (ranges of 0.1 and 0.01 V) are not supported with resolution index 7, as per the last table here: https://labjack.com/support/datasheets/t7/appendix-a-1

I would appreciate it if you could provide valid combinations of settings for ~10-100 Hz and ~100-1000 Hz sampling rates on 1-3 channels. One example of a valid parameter combination maximizing precision at each gain would be sufficient.

I would also like to make sure that there are no other settings I should pay attention to with regard to the 16-bit -> float conversion.

Thank you very much, I really appreciate your help!

LabJack Support:

Normally we don't use stream until we are sampling faster than 800 Hz. At 30 Hz you could use the high-resolution converter (resolution 9-12). From appendix_A I would say resolution_index 10, with settling set to 0 (auto) would work best. The high-resolution converter is much better at rejecting noise and is less prone to settling issues.

When selecting AIN settings I first determine the maximum useful gain. Gain will depend on the maximum range that you need to measure. With gain selected I determine the sampling rate I need and find the best resolution setting (table in appendix_A) that takes less time than one period of my sampling rate. From there if I suspect settling errors I will increase settling and reduce resolution if necessary. Settling errors will depend on your signal's source impedance.

For example, to measure a ±1 V signal at 100 Hz: we can set gain to 10, which is the same as saying "set the range to ±1 V." At 100 Hz our period is 10 ms, so from Appendix A we find the best resolution at the 1.0 V range that takes less than 10 ms, which is resolution 9. To check for settling issues, read a grounded channel, then read your signal multiple times in rapid succession (eReadNames). If the results converge on a value that is significantly far from the first value, then more settling can help your reading.
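
A hedged sketch of that setup with the LJM library (AIN0_RANGE, AIN0_RESOLUTION_INDEX, and AIN0_SETTLING_US are T7 registers; AIN2 is assumed here to be a channel you have wired to GND for the settling check, and error handling is omitted):

    #include <stdio.h>
    #include <LabJackM.h>

    int main(void)
    {
        int handle, errAddr, i;
        const char *cfgNames[3] = {"AIN0_RANGE", "AIN0_RESOLUTION_INDEX", "AIN0_SETTLING_US"};
        double      cfgVals[3]  = {1.0, 9, 0};    /* +/-1 V range, resolution 9, auto settling */
        const char *chkNames[4] = {"AIN2", "AIN0", "AIN0", "AIN0"};   /* grounded channel first */
        double      chkVals[4];

        LJM_OpenS("T7", "ANY", "ANY", &handle);
        LJM_eWriteNames(handle, 3, cfgNames, cfgVals, &errAddr);

        /* Settling check: if the repeated AIN0 readings drift away from the first
         * one, increase AIN0_SETTLING_US (and drop the resolution if needed). */
        LJM_eReadNames(handle, 4, chkNames, chkVals, &errAddr);
        for(i = 0; i < 4; i++)
            printf("%s = %f V\n", chkNames[i], chkVals[i]);

        LJM_Close(handle);
        return 0;
    }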

If we need three ±1.0 V channels in the above example, then the sample rate is 300 Hz, which leaves us 3.3 ms per sample. Resolution 9 takes 3.5 ms, so we need to drop down to 8. Resolution 8 is cutting it a little close; if we want to squeeze in other options we may encounter issues.

When applying calibration constants, range and converter type are the important factors. If you are reading the temperature sensor, you will need to apply the temperature slope and offset to the calculated voltage.

isp:

After looking into it more, I found out that I can't use the HR calibration because this particular physicist chose to be thrifty and bought a T7 rather than a T7-Pro.

We can try to swap it for one of the Pro devices we have; however, given that we will still own one T7 (unless you can offer an upgrade), I need to figure out how to use it. Regardless of whether I am using it in CR or SP mode, I have to perform the same calculation I asked about in the initial question to convert from a 16-bit int to a float.

I tried increasing the resolution from 7 to 8 with settling set to the default of 0, allowing the device to figure it out, but it still doesn't produce acceptable results. Here are the outputs measuring the same voltage (only for the 1 V and 10 V ranges, as 0.1 and 0.01 are N.S.):

1.0: Slope:3.1550582e-5 Center:33508.027 Offset:-1.057076 Raw: 34329 -> Float: 0.025902165
10.0: Slope:3.1554952e-4 Center:33510.492 Offset:-10.572616 Raw: 34344 -> Float: 0.263013

I tried using both CR and SP modes and don't see a difference. The results still differ by a factor of about 10, and the error for one or both ranges is much greater than the ADC noise per the specification.

Any other ideas as to why I get different results, assuming I have to use the T7 and therefore the HS constants?

LabJack Support:

What are you measuring with that test, a channel tied to ground?

We can swap out our T7 for a Pro for the normal price difference.

It is possible that the converter has been damaged. If you measure a grounded channel (or just channel 15) and the results are not close to the binary center specified in the calibration constants then the converter is indeed bad.

isp:

We are measuring the output of this sensor: http://www.sensysmagnetometer.com/en/fgm1d-4-s1.html

It is connected in single-ended mode: the sensor's output is connected to channel 0 of the T7, and the sensor's ground is connected to the T7's ground.

We are measuring rather small outputs (as per my previous post, depending on which gain gives the better estimate, it's either 0.026 V or 0.26 V) and are trying to convert them into nT. The problem is that there is a 10x difference between no amplification and 10x amplification. We'll try to switch the T7 to a T7-Pro here, and if it helps I'll contact you about an upgrade.