5.4 - Calibration Constants [U6 Datasheet] | LabJack

5.4 - Calibration Constants [U6 Datasheet]

This information is only needed when using low-level functions and other ways of getting binary readings. Readings in volts already have the calibration constants applied. The UD driver, for example, normally returns voltage readings unless binary readings are specifically requested.

Calibration Constants

The majority of the U6's analog interface functions return or require binary values. Converting between binary and voltages requires the use of calibration constants and formulas.

When using Modbus the U6 will apply calibration automatically, so voltages are sent to and read from the U6, formatted as a float.


Which Constants Should I Use?

The calibration constants stored on the U6 can be categorized as follows:

  • Analog Input
  • Analog Output
  • Current Source
  • Internal Temperature


Analog Input: The U6 has 4 gains, and the Pro version adds a 24-bit sigma-delta converter, so a total of eight calibrations are provided: one for each gain on each converter. The U6 uses multiplexed channels connected to a differential input amplifier, so single-ended and differential readings use the same calibration.

Analog Output: Only two calibrations are provided, one for DAC0 and one for DAC1.

Current Source: Two calibrations are provided, one for Iout0 and one for Iout1. Each calibration value is the actual current, in amps, measured during factory calibration. These are simply numbers; there is no related formula.
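Since the current source calibration is just the measured current in amps, a typical use is recovering a load resistance with Ohm's law. The sketch below assumes the nominal Iout0 value of 10 µA from Table 5.4-2; an actual application would substitute the factory value read from the device.

```python
IOUT0_NOMINAL = 0.00001  # nominal Iout0 current in amps (Table 5.4-2)

def resistance_from_voltage(measured_volts, cal_current=IOUT0_NOMINAL):
    """Resistance (ohms) of a load driven by the calibrated current source."""
    return measured_volts / cal_current

# 0.1 V measured across the load with the nominal 10 uA source -> 10 kOhm
r = resistance_from_voltage(0.1)
```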

Internal Temperature: This calibration is applied to a reading from channel 14 (internal temperature) after the binary reading has been converted to volts.


U6 Input Ranges

The U6 has a total of eight input ranges: four single-ended and four differential. The eight ranges are:

Table 5.4-1. Input Ranges

Range                   Max V      Min V
Single-Ended ±10V       10.1       -10.58
Single-Ended ±1V        1.01       -1.06
Single-Ended ±100mV     101 mV     -106 mV
Single-Ended ±10mV      10.1 mV    -10.06 mV
Differential ±10V       10.1       -10.58
Differential ±1V        1.01       -1.06
Differential ±100mV     101 mV     -106 mV
Differential ±10mV      10.1 mV    -10.06 mV


Note that the minimum differential input voltage of -10.58 volts means that the positive channel can be as much as 10.58 volts below the negative channel. It does not mean that the positive channel can be at +10 volts while the negative is at -10 volts; that would be a +20 volt signal, which is outside the range the U6 can measure. The voltage of any analog input pin, compared to ground, must be within -10.58 to +10.10 volts.


U6 Calibration Formulas (Analog In)

Depending on how an analog reading is obtained, either 16 or 24 bits are returned. All readings and the calibration constants are 16-bit aligned. This means that 24-bit values must be justified to 16-bit values before applying a calibration. To justify a 24-bit value to 16 bits, divide it by 256 and store the result as floating point, so that the information in the lower 8 bits is retained. The approximate nominal conversion from binary to voltage is:

Volts(uncalibrated) = (Bits/65536)*Span (Single-Ended)

Volts(uncalibrated) = (Bits/65536)*Span – Span/2 (Differential)

Binary readings are always unsigned integers. Span is the maximum voltage minus the minimum voltage from the table above. The actual nominal constants are provided in the tables below and should be used if the actual calibration constants cannot be read for some reason. Most applications will use the actual calibration constants (PositiveSlope, Offset, Center, NegativeSlope) stored in the internal flash.
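The justification step and the nominal uncalibrated conversion can be sketched as follows, using the ±10V span from Table 5.4-1 (10.1 minus -10.58):

```python
SPAN_10V = 10.1 - (-10.58)  # span of the ±10V range from Table 5.4-1

def justify_24bit(bits24):
    """Justify a 24-bit reading to a 16-bit-aligned float, keeping the low 8 bits."""
    return bits24 / 256.0

def uncal_volts(bits, span=SPAN_10V, differential=False):
    """Approximate nominal binary-to-volts conversion (no calibration applied)."""
    volts = (bits / 65536.0) * span
    if differential:
        volts -= span / 2.0
    return volts
```

For example, a differential reading of 32768 counts (mid-scale) converts to 0 volts.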

if (Bits < Center)
        Volts = (Center - Bits) * NegativeSlope
else
        Volts = (Bits - Center) * PositiveSlope
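The two-slope formula above can be sketched in a few lines. The constants here are the nominal ±10V values from Table 5.4-2; an actual application would use the constants read from the device's flash.

```python
# Nominal ±10V constants from Table 5.4-2 (actual devices use flash values)
CENTER = 33523
POSITIVE_SLOPE = 0.00031580578
NEGATIVE_SLOPE = -0.0003158058

def calibrated_volts(bits, center=CENTER,
                     pos_slope=POSITIVE_SLOPE, neg_slope=NEGATIVE_SLOPE):
    """Apply the two-slope analog input calibration to a 16-bit-aligned reading."""
    if bits < center:
        return (center - bits) * neg_slope  # NegativeSlope is itself negative
    return (bits - center) * pos_slope
```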

The Offset calibration has been provided so that the same simple formula used on the U3 and UE9 can also be used on the U6. When using the simple formula, negative values will be off by a few bits (up to 5 bits in testing, but this value has not been characterized). The simple formula is:

Volts = (Slope * Bits) + Offset
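A minimal sketch of the simple formula, again using the nominal ±10V Slope and Offset from Table 5.4-2:

```python
# Nominal ±10V constants from Table 5.4-2
SLOPE = 0.00031580578
OFFSET = -10.58695652

def simple_volts(bits, slope=SLOPE, offset=OFFSET):
    """Single-slope conversion; negative values may be off by a few bits."""
    return slope * bits + offset

# bits = 0 gives the minimum of the range, about -10.587 V
```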


U6 Calibration Formulas (Analog Out)

Writing to the U6's DACs requires that the desired voltage be converted into a binary value. To convert the desired voltage to binary, select the Slope and Offset calibration constants for the DAC being used and plug them into the following formula:

Bits = (DesiredVolts * Slope) + Offset
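A sketch of the DAC conversion using the nominal DAC0 constants from Table 5.4-2. Rounding to an integer and clamping to the unsigned 16-bit range are assumptions on my part, not part of the formula above.

```python
# Nominal DAC0 constants from Table 5.4-2
DAC0_SLOPE = 13200.0   # bits per volt
DAC0_OFFSET = 0.0

def volts_to_dac_bits(desired_volts, slope=DAC0_SLOPE, offset=DAC0_OFFSET):
    """Convert a desired output voltage to a binary DAC value."""
    bits = int(round(desired_volts * slope + offset))
    return max(0, min(65535, bits))  # clamp to the unsigned 16-bit range
```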


U6 Calibration Formulas (Internal Temp)

The internal temperature can be obtained by reading channel 14, applying the proper voltage conversion, and then using the following formula:

Temp (K) = (Volts * TemperatureSlope) + TemperatureOffset
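Using the nominal temperature constants from Table 5.4-2, the conversion looks like this (the example input voltage is illustrative only):

```python
# Nominal temperature constants from Table 5.4-2
TEMPERATURE_SLOPE = -92.379   # kelvin per volt
TEMPERATURE_OFFSET = 465.129  # kelvin

def internal_temp_kelvin(volts):
    """Convert the channel 14 voltage reading to temperature in kelvin."""
    return volts * TEMPERATURE_SLOPE + TEMPERATURE_OFFSET
```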


U6 Calibration Constants

Below are the various calibration values stored in the Mem area. Generally, when communication is initiated with the U6, ten calls are made to the ReadMem function to retrieve the first 10 blocks of memory. This information can then be used to convert all analog input readings to voltages. The high-level Windows DLL (LabJackUD) does this automatically.

Table 5.4-2. Calibration Constants

Block # Starting Byte Normal ADC Nominal Value
0 0 AIN ±10V Slope 0.00031580578
0 8 AIN ±10V Offset -10.58695652
0 16 AIN ±1V Slope 0.000031580578
0 24 AIN ±1V Offset -1.058695652
1 0 AIN ±100mV Slope 0.0000031580578
1 8 AIN ±100mV Offset -0.1058695652
1 16 AIN ±10mV Slope 0.00000031580578
1 24 AIN ±10mV Offset -0.01058695652
2 0 AIN ±10V NegativeSlope -0.0003158058
2 8 AIN ±10V Center 33523
2 16 AIN ±1V NegativeSlope -0.00003158058
2 24 AIN ±1V Center 33523
3 0 AIN ±100mV NegativeSlope -0.000003158058
3 8 AIN ±100mV Center 33523
3 16 AIN ±10mV NegativeSlope -0.0000003158058
3 24 AIN ±10mV Center 33523
Block # Starting Byte Miscellaneous Nominal Value
4 0 DAC0 Slope 13200
4 8 DAC0 Offset 0
4 16 DAC1 Slope 13200
4 24 DAC1 Offset 0
5 0 Current Output 0 0.00001
5 8 Current Output 1 0.0002
5 16 Temperature Slope -92.379
5 24 Temperature Offset 465.129
Block # Starting Byte Hi-Res ADC (U6-Pro) Nominal Value
6 0 AIN ±10V Slope 0.00031580578
6 8 AIN ±10V Offset -10.58695652
6 16 AIN ±1V Slope 0.000031580578
6 24 AIN ±1V Offset -1.058695652
7 0 AIN ±100mV Slope 0.0000031580578
7 8 AIN ±100mV Offset -0.1058695652
7 16 AIN ±10mV Slope 0.00000031580578
7 24 AIN ±10mV Offset -0.01058695652
8 0 AIN ±10V NegativeSlope -0.0003158058
8 8 AIN ±10V Center 33523
8 16 AIN ±1V NegativeSlope -0.00003158058
8 24 AIN ±1V Center 33523
9 0 AIN ±100mV NegativeSlope -0.000003158058
9 8 AIN ±100mV Center 33523
9 16 AIN ±10mV NegativeSlope -0.0000003158058
9 24 AIN ±10mV Center 33523



Format of the Calibration Constants

Each value is stored in 64-bit fixed point format (signed 32.32 little endian, 2's complement). Following are some examples of fixed point arrays and the associated floating point double values.

Table 5.4-3. Calibration Constants Format

Fixed Point Byte Array (LSB, …, MSB)    Floating Point Double
{0,0,0,0,0,0,0,0} 0
{0,0,0,0,1,0,0,0} 1
{0,0,0,0,255,255,255,255} -1
{51,51,51,51,0,0,0,0} 0.2
{205,204,204,204,255,255,255,255} -0.2
{73,20,5,0,0,0,0,0} 0.000077503
{225,122,20,110,2,0,0,0} 2.43
{102,102,102,38,42,1,0,0} 298.15
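The examples above can be reproduced with a short decoding routine: interpret the 8 bytes as a signed little-endian 64-bit integer, then divide by 2^32 to account for the 32 fractional bits.

```python
def fixed_to_double(raw8):
    """Convert 8 bytes of signed 32.32 little-endian fixed point to a float."""
    return int.from_bytes(bytes(raw8), "little", signed=True) / 2.0**32

# Matches Table 5.4-3: {0,0,0,0,1,0,0,0} decodes to 1
x = fixed_to_double([0, 0, 0, 0, 1, 0, 0, 0])
```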


Does the LJFuse software also perform the calibrated conversions when polled for values, or does this need to be taken into account by the application developer?

Cal constants are handled internally when using the AINx and DACx files in LJFuse. You don't have to do anything. Also, if you are using the Modbus files, they all deal with calibrated voltages as well.


Then I guess the same applies on Windows, i.e., from the user perspective (via the DLL interface), lines like "Which Constants Should I Use?" in the documentation apply to functions like eAIN set to return binary values in the double val parameter? Correct? And btw, how the heck is LJ_chAIN_BINARY used? It's mentioned exactly once, with no description at all, in the documentation for every LabJack model that mentions it, and never once in any forum post that I could find.


Keep in mind that this is in Section 5 about low-level communication, so it really applies when you are sending/receiving raw packets. When using the UD driver, it applies the calibration constants for you, and thus in Section 4 you see little mention of them. When you call eAIN from Section 4.2.17, it returns a calibrated voltage unless you set Binary=TRUE.

LJ_chAIN_BINARY is a special channel constant, so it is used with the put_config iotype. When you look at a constant like this, form it as a question: "Should AIN return binary?" Thus, if you set Value=TRUE=1, you will get binary values from all further analog input readings.

I will try to follow these electrically modulated Jeopardy rules.   thanks.

Sure, like Jeopardy. It is similar to the labeling of enable/disable pins on ICs. If a pin is called "enable", that means a TRUE enables the IC, while if a pin is called "disable", that means a TRUE disables the IC. Of course, they further complicate it by calling a pin "!disable" (not-disable), in which case a TRUE does not disable the IC and thus enables it.

I am using the current sources for reading precision thermistors from GE Medical. Is there a simple way to read the calibration values for these current sources - Iout0 and Iout1 - in LabVIEW? Thanks.

For the best accuracy, add a fixed series resistor such as the Y1453 mentioned in Section 2.5 of the U6 User's Guide.  To read current source values measured during factory calibration, Section 2.5 mentions getting them from LJControlPanel or reading them using iotype LJ_ioGET_CONFIG with the special channel LJ_chCAL_CONSTANTS.  I don't see a LabVIEW example, but you want to use the DLL function "LJUD_eGetS (DBL Array).vi".  IOType = "LJ_ioGET_CONFIG", Channel = the constant provided when you pass "LJ_chCAL_CONSTANTS" to the LJUD_StringToConstant function, Value = Don't Care, and x1 = array initialized with 64 doubles.