The Practical Chemist

Internal Standards – Turning Good Data Into Great Data

By Amanda Rigdon

Everyone likes to have a safety net, and scientists are no different. This month I will be discussing internal standards and how we can use them not only to improve the quality of our data, but also give us some ‘wiggle room’ when it comes to variation in sample preparation. Internal standards are widely used in every type of chromatographic analysis, so it is not surprising that their use also applies to common cannabis analyses. In my last article, I wrapped up our discussion of calibration and why it is absolutely necessary for generating valid data. If our calibration is not valid, then the label information that the cannabis consumer sees will not be valid either. These consumers are making decisions based on that data, and for the medical cannabis patient, valid data is absolutely critical. Internal standards work with calibration curves to further improve data quality, and luckily it is very easy to use them.

So what are internal standards? In a nutshell, they are non-analyte compounds used to compensate for method variations. An internal standard can be added either at the very beginning of our process to compensate for variations in both sample prep and instrument response, or at the very end to compensate only for instrument variation. In some cases, internal standards are also called ‘surrogates’; however, for the purposes of this article, I will simply use the term ‘internal standard.’

Now that we know what internal standards are, let’s look at how to use them. We use an internal standard by adding it to all samples, blanks, and calibrators at the same known concentration. By doing this, we now have a single reference concentration for all response values produced by our instrument. We can use this reference concentration to normalize variations in sample preparation and instrument response. This becomes very important for cannabis pesticide analyses that involve lots of sample prep and MS detectors. Figure 1 shows a calibration curve plotted as we saw in the last article (blue diamonds), as well as the response for an internal standard added to each calibrator at a level of 200ppm (green circles). Additionally, we have three sample results (red triangles) plotted against the calibration curve with their own internal standard responses (green Xs).

Figure 1: Calibration Curve with Internal Standard Responses and Three Sample Results

In this case, our calibration curve is beautiful and passes all of the criteria we discussed in the previous article. Let’s assume that the results we calculate for our samples are valid – 41ppm, 303ppm, and 14ppm. Additionally, we can see that the responses for our internal standards make a flat line across the calibration range because they are present at the same concentration in each sample and calibrator. This illustrates what to expect when all of our calibrators and samples are prepared correctly and the instrument performs as expected. But let’s assume we’re having one of those days where everything goes wrong, such as:

  • We unknowingly added only half the volume required for cleanup for one of the samples
  • The autosampler on the instrument was having problems and injected the incorrect amount for the other two samples

Figure 2 shows what our data would look like on our bad day.

Figure 2: Calibration Curve with Internal Standard Responses and Three Sample Results after Method Errors

We experienced no problems with our calibration curve (which is common when using solvent standard curves), so based on what we’ve learned so far, we would simply move on and calculate our sample results. The sample results this time are quite different: 26ppm, 120ppm, and 19ppm. What if these results are for a pesticide with a regulatory cutoff of 200ppm? When measured accurately, the concentration of sample 2 is 303ppm. In this example, we may have unknowingly passed a contaminated product on to consumers.

In the first two examples, we haven’t been using our internal standard – we’ve only been plotting its response. In order to use the internal standard, we need to change our calibration method. Instead of plotting the response of our analyte of interest versus its concentration, we plot our response ratio (analyte response/internal standard response) versus our concentration ratio (analyte concentration/internal standard concentration). Table 1 shows the analyte and internal standard response values for our calibrators and samples from Figure 2.


Table 1: Values for Calibration Curve and Samples Using Internal Standard

The values highlighted in green are what we will use to build our calibration curve, and the values in blue are what we will use to calculate our sample concentration. Figure 3 shows what the resulting calibration curve and sample points will look like using an internal standard.

Figure 3: Calibration Curve and Sample Results Calculated Using Internal Standard Correction

We can see that our axes have changed for our calibration curve, so the results that we calculate from the curve will be in terms of concentration ratio. We calculate these results the same way we did in the previous article, but instead of concentrations, we end up with concentration ratios. To calculate the sample concentration, simply multiply by the internal standard amount (200ppm). Figure 4 shows an example calculation for our lowest concentration sample.

Figure 4: Example Calculation for Sample Results for Internal-Standard Corrected Curve

Using the calculation shown in Figure 4, our sample results come out to be 41ppm, 302ppm, and 14ppm, which are accurate based on the example in Figure 1. Our internal standards have corrected the variation in our method because they are subjected to that same variation.
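To make the bookkeeping concrete, here is a minimal Python sketch of internal-standard-corrected calibration. The calibrator numbers are illustrative only (they are not the article’s table values); the internal standard is assumed to be spiked at 200ppm as in the example.

```python
# Illustrative calibration data -- NOT the article's actual table values.
# The internal standard (IS) is spiked into everything at 200 ppm.
IS_CONC = 200.0
cal_conc = [50.0, 100.0, 200.0, 400.0]        # calibrator concentrations (ppm)
cal_resp = [250.0, 500.0, 1000.0, 2000.0]     # analyte responses for calibrators
cal_is_resp = [980.0, 1010.0, 995.0, 1005.0]  # internal standard responses

# Build the curve from ratios: response ratio vs. concentration ratio.
x = [c / IS_CONC for c in cal_conc]                 # concentration ratios
y = [a / i for a, i in zip(cal_resp, cal_is_resp)]  # response ratios

# Ordinary least-squares line through (x, y).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

def concentration(analyte_resp, is_resp):
    """Back-calculate a sample concentration from its two responses."""
    ratio = (analyte_resp / is_resp - intercept) / slope  # concentration ratio
    return ratio * IS_CONC                                # multiply by IS amount

# Because only the ratio enters the curve, an error that scales BOTH responses
# (half the injection volume, detector drift, half the cleanup volume) cancels:
good = concentration(500.0, 1000.0)
bad = concentration(500.0 * 0.5, 1000.0 * 0.5)  # same sample, half the injection
assert abs(good - bad) < 1e-9
```

The final assertion captures the point of the figures: any variation that affects the analyte and the internal standard equally drops out of the calculated result.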

As always, there’s a lot more I can talk about on this topic, but I hope this was a good introduction to the use of internal standards. I’ve listed a couple of resources below with some good information on the use of internal standards. If you have any questions on this topic, please feel free to contact me at amanda.rigdon@restek.com.


Resources:

When to use an internal standard: http://www.chromatographyonline.com/when-should-internal-standard-be-used-0

Choosing an internal standard: http://blog.restek.com/?p=17050

The Practical Chemist

Calibration Part II – Evaluating Your Curves

By Amanda Rigdon

Despite the title, this article is not about weight loss – it is about generating valid analytical data for quantitative analyses. In the last installment of The Practical Chemist, I introduced instrument calibration and covered a few ways we can calibrate our instruments. Just because we have run several standards across a range of concentrations and plotted a curve using the resulting data, it does not mean our curve accurately represents our instrument’s response across that concentration range. In order to be able to claim that our calibration curve accurately represents our instrument response, we have to take a look at a couple of quality indicators for our curve data:

  1. correlation coefficient (r) or coefficient of determination (r2)
  2. back-calculated accuracy (reported as % error)

The r or r2 values that accompany our calibration curve are measurements of how closely our curve matches the data we have generated. The closer the values are to 1.00, the more accurately our curve represents our detector response. Generally, r values ≥ 0.995 and r2 values ≥ 0.990 are considered ‘good’. Figure 1 shows a few representative curves, their associated data, and r2 values (concentration and response units are arbitrary).

Figure 1: Representative Curves and r2 values

Let’s take a closer look at these curves:

Curve A: This represents a case where the curve perfectly matches the instrument data, meaning our calculated unknown values will be accurate across the entire calibration range.

Curve B: The r2 value is good and visually the curve matches most of the data points pretty well. However, if we look at our two highest calibration points, we can see that they do not match the trend for the rest of the data; the response values should be closer to 1250 and 2500. The fact that they are much lower than they should be could indicate that we are starting to overload our detector at higher calibration levels; we are putting more mass of analyte into the detector than it can reliably detect. This is a common problem when dealing with concentrated samples, so it can occur especially for potency analyses.

Curve C: We can see that although our r2 value is still okay, we are not detecting analytes as we should at the low end of our curve. In fact, at our lowest calibration level, the instrument is not detecting anything at all (0 response at the lowest point). This is a common problem with residual solvent and pesticide analyses where detection levels for some compounds like benzene are very low.

Curve D: This is a perfect example of a curve that does not represent our instrument response at all. A curve like this indicates a possible problem with the instrument or with sample preparation.
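For readers who want to check curve fit outside their instrument software, r2 is easy to compute by hand. This is a minimal sketch using arbitrary illustrative data; the example numbers stand in for a perfectly linear response like Curve A.

```python
def r_squared(x, y):
    """Coefficient of determination for a least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    # r2 = 1 - (residual sum of squares / total sum of squares)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# A perfectly linear response (like Curve A) gives exactly 1.0:
print(r_squared([100, 250, 500, 1000], [200, 500, 1000, 2000]))  # 1.0
```

Any deviation from the fitted line, at either end of the curve, pulls this value below 1.0.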

So even if our curve looks good, we could be generating inaccurate results for some samples. This brings us to another measure of curve fitness: back-calculated accuracy (expressed as % error). This is an easy way to determine how accurate your results will be without performing a single additional run.

Back-calculated accuracy simply plugs the area values we obtained from our calibrators back into the calibration curve to see how well our curve will calculate these values in relation to the known value. We can do this by reprocessing our calibrators as unknowns or by hand. As an example, let’s back-calculate the concentration of our 500 level calibrator from Curve B. The formula for that curve is: y = 3.543x + 52.805. If we plug 1800 in for y and solve for x, we end up with a calculated concentration of 493. To calculate the error of our calculated value versus the true value, we can use the equation: % Error = [(calculated value – true value)/true value] * 100. This gives us a % error of -1.4%. Acceptable % error values are usually ±15 – 20% depending on analysis type. Let’s see what the % error values are for the curves shown in Figure 1.

Table 1: % Error for Back-Calculated Values for Curves A – D

Our % error values have told us what our r2 values could not. We knew Curve D was unacceptable, but now we can see that Curves B and C will yield inaccurate results for all but the highest levels of analyte – even though the results were skewed at opposite ends of the curves.
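The back-calculation for Curve B can be sketched in a few lines, using the slope and intercept quoted above (y = 3.543x + 52.805):

```python
slope, intercept = 3.543, 52.805   # Curve B's fitted line from the text

def back_calc(area):
    """Invert y = slope*x + intercept to recover a concentration."""
    return (area - intercept) / slope

def pct_error(calculated, true):
    """% Error = [(calculated - true) / true] * 100"""
    return (calculated - true) / true * 100.0

conc = back_calc(1800.0)                 # the 500-level calibrator's response
print(round(conc))                       # 493
print(round(pct_error(conc, 500.0), 1))  # -1.4
```

Running every calibrator through these two functions reproduces a table of % error values like Table 1, with no additional instrument runs required.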

There are many more details regarding generating calibration curves and measuring their quality that I did not have room to mention here. Hopefully, these two articles have given you some tools to use in your lab to quickly and easily improve the quality of your data. If you would like to learn more about this topic or have any questions, please don’t hesitate to contact me at amanda.rigdon@restek.com.

The Practical Chemist

Calibration – The Foundation of Quality Data

By Amanda Rigdon

This column is devoted to helping cannabis analytical labs generate valid data right now with a relatively small amount of additional work. The topic for this article is instrument calibration – truly the foundation of all quality data. Calibration is the basis for all measurement, and it is absolutely necessary for quantitative cannabis analyses including potency, residual solvents, terpenes, and pesticides.

Just like a simple alarm clock, all analytical instruments – no matter how high-tech – will not function properly unless they are calibrated. When we set our alarm clock to 6AM, that alarm clock will sound reproducibly every 24 hours when it reads 6AM, but unless we set the correct current time on the clock based on some known reference, we can’t be sure when exactly the alarm will sound. Analytical instruments are the same. Unless we calibrate the instrument’s signal (the response) from the detector to a known amount of reference material, the instrument will not generate an accurate or valid result.

Without calibration, our result may be reproducible – just like in our alarm clock example – but the result will have no meaning unless the result is calibrated against a known reference. Every instrument that makes a quantitative measurement must be calibrated in order for that measurement to be valid. Luckily, the principle for calibration of chromatographic instruments is the same regardless of detector or technique (GC or LC).

Before we get into the details, I would like to introduce one key concept:

Every calibration curve for chromatographic analyses is expressed in terms of response and concentration. For every detector, the relationship between analyte (i.e. a compound we’re analyzing) concentration and response can be expressed mathematically – often as a linear relationship.

Now that we’ve introduced the key concept behind calibration, let’s talk about the two most common and applicable calibration options.

Single Point Calibration

This is the simplest calibration option. Essentially, we run one known reference concentration (the calibrator) and calculate our sample concentrations based on this single point. Using this method, our curve is defined by two points: our single reference point, and zero. That gives us a nice, straight line defining the relationship between our instrument response and our analyte concentration all the way from zero to infinity. If only things were this easy. There are two fatal flaws of single point calibrations:

  1. We assume a linear detector response across all possible concentrations
  2. We assume at any concentration greater than zero, our response will be greater than zero

Assumption #1 is never true, and assumption #2 is rarely true. Generally, single point calibration curves are used to conduct pass/fail tests where there is a maximum limit for analytes (e.g. residual solvent or pesticide screening). Usually, quantitative values are not reported based on single point calibrations. Instead, results are reported in relation to our calibrator, which is prepared at a known concentration relating to a regulatory limit or to the instrument’s limit of detection (LOD). Using this calibration method, we can accurately report that the sample contains less than or greater than the regulatory limit of an analyte, but we cannot report exactly how much of the analyte is present. So how can we extend the accuracy range of a calibration curve in order to report quantitative values? The answer to this question brings us to the other common type of calibration curve.
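A single point screen might be sketched like this; the calibrator response is a made-up illustrative number, and the logic is simply the pass/fail comparison described above:

```python
# Hypothetical single-point screen: one calibrator prepared at the regulatory
# limit. The response value is illustrative; we report only above/below.
LIMIT_RESPONSE = 1000.0   # detector response for the calibrator at the limit

def screen(sample_response):
    """Pass/fail only -- no quantitative value is reported."""
    return "FAIL" if sample_response > LIMIT_RESPONSE else "PASS"

print(screen(1500.0))  # FAIL
print(screen(400.0))   # PASS
```

Note that the sample reading 1500 could correspond to almost any concentration above the limit; without more calibrators, we have no basis for putting a number on it.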

Multi-Point Calibration:

A multi-point calibration curve is the most common type used for quantitative analyses (i.e. analyses where we report a number). This type of curve contains several calibrators (at least three) prepared over a range of concentrations. This gives us a calibration curve (often a straight line) defined by several known references, which more accurately expresses the response/concentration relationship of our detector for that analyte. When preparing a multi-point calibration curve, we must be sure to bracket the expected concentration range of our analytes of interest, because once our sample response values move outside the calibration range, the results calculated from the curve are not generally considered quantitative.
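A multi-point workflow can be sketched as follows; the five calibrator points are illustrative numbers, and the range check enforces the bracketing requirement described above:

```python
# Hypothetical 5-point curve (illustrative numbers, roughly linear response).
cal_conc = [10.0, 50.0, 100.0, 250.0, 500.0]   # calibrator concentrations
cal_resp = [35.0, 180.0, 355.0, 890.0, 1770.0]  # detector responses

# Ordinary least-squares fit of response vs. concentration.
n = len(cal_conc)
mx, my = sum(cal_conc) / n, sum(cal_resp) / n
slope = sum((x - mx) * (y - my) for x, y in zip(cal_conc, cal_resp)) / \
        sum((x - mx) ** 2 for x in cal_conc)
intercept = my - slope * mx

def quantitate(area):
    """Report a value only when the response is bracketed by the calibrators."""
    if not (min(cal_resp) <= area <= max(cal_resp)):
        raise ValueError("response outside calibration range -- dilute and rerun")
    return (area - intercept) / slope

print(round(quantitate(500.0), 1))  # a mid-range response, safely bracketed
```

Samples whose responses fall above the top calibrator should be diluted and re-run rather than extrapolated, which is exactly what the range check forces.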

The figure below illustrates both kinds of calibration curves, as well as their usable accuracy range:

Calibration Figure 1

This article provides an overview of the two most commonly used types of calibration curves, and discusses how they can be appropriately used to report data. There are two other important topics that were not covered in this article concerning calibration curves: 1) how can we tell whether or not our calibration curve is ‘good’ and 2) calibrations aren’t permanent – instruments must be periodically re-calibrated. In my next article, I’ll cover these two topics to round out our general discussion of calibration – the basis for all measurement. If you have any questions about this article or would like further details on the topic presented here, please feel free to contact me at amanda.rigdon@restek.com.