This is the second of two IFST Information Statements that replace the previous Food Authenticity Testing statement. It should be read in conjunction with “Food Authenticity Testing: The role of analysis”.
Executive Summary
This paper describes the principles, different configurations, applications, strengths and limitations of some of the more common analytical techniques used in food authenticity testing:
- Mass spectrometry
- Stable isotope mass spectrometry
- DNA analysis
- Nuclear magnetic resonance spectrometry
- Spectroscopy, e.g. Near-Infrared
Generic strengths and limitations of food authenticity test methods, particularly those relating to methods comparing against reference databases of authentic samples, are discussed in “Food Authenticity Testing: The Role of Analysis”. That paper also describes the difference between targeted and untargeted analysis.
Sampling for food authenticity is critical and this paper should also be read in conjunction with IFST Sampling for Food Analysis - key considerations1.
There are a host of different analytical techniques; it is an area of continuous development (see 2024 review in Food Chemistry2) and to cover all potential types of fraud a multitude of tests and other traceability investigations are often required. This paper is not intended to be comprehensive and does not imply an IFST endorsement of any one method or application.
Once it is known that a food fraud can be detected, and that testing is in routine use, fraudsters will move on to something else. Therefore, whilst the established validated methods are needed for continued due diligence, development of new methods tends to be a rapidly moving field. There can be a necessary compromise between speed of development, publication, offering to market, and the robustness and scope of the method validation.
Due to analytical and sampling variations, bulk/major substitution of products tends to be much easier to detect and confirm by analytical measurement techniques. Should, however, a lower level of substitution be ‘detected’, it can be necessary to corroborate this finding with other sources of evidence. Nevertheless, in any instance of suspected fraud, the level of corroboration required should be assessed on a case-by-case basis, taking into account the robustness of the analytical method and any accompanying database of authentic products that was employed in the decision-making process. Defra have published a guide to this ‘Weight of Evidence’ approach3.
Many of the established, validated and documented methods were developed under publicly funded programmes such as the UK Food Authenticity programme (methods now curated online4) or the EU Food Integrity Programme. Examples of other published methods and research papers can be found in Part 1 of this Information Statement; there are thousands in total. This ‘Part 2’ gives an overview of some of the analytical techniques used in these methods and research papers.
Mass Spectrometry

The sample is introduced into the mass spectrometer either directly (e.g. vaporised by a laser or via an ambient interface such as DART, DESI or REIMS - see table 1 below), or as the outflow of a separation technique such as gas chromatography (GC-MS) or liquid chromatography (LC-MS). The molecules that are introduced are ionised (given an electric charge) and may be broken into charged fragments. They are then separated by the path or time they take to travel in a vacuum through an electromagnetic field, which in turn depends upon the mass/charge ratio (m/z) of each ion; in effect, upon its mass. The mass of a molecule, or of its characteristic fragments, is a very selective characteristic on which to base identification, but is not completely unambiguous.
The spectrometer can either be pretuned to only allow fragments of a certain m/z to pass through, or continually scanned to detect all m/z. The former configuration is more sensitive, whilst the latter provides more identification information and is used in untargeted analytical approaches. The scope of applications for mass spectrometry is highly dependent on the wide variety of possible operating mechanisms, modes and configurations. There are three key facets: the ionisation mechanism, the ion separation mechanism and the resolution settings.
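The targeted principle (screening a measured m/z against reference masses within a set tolerance) can be sketched in a few lines. This is purely illustrative: the three [M+H]+ reference masses below are standard monoisotopic values, but a real targeted method would be validated for its specific analytes, matrix and instrument.

```python
# Minimal sketch of a targeted m/z screen against a small reference library.
# Reference masses are monoisotopic [M+H]+ values; the 10 ppm tolerance is
# an illustrative figure typical of high-resolution instruments.

LIBRARY = {
    "melamine [M+H]+": 127.0727,
    "caffeine [M+H]+": 195.0877,
    "Sudan I [M+H]+": 249.1022,
}

def match_mz(measured_mz, tolerance_ppm=10.0):
    """Return library entries whose reference m/z lies within the ppm tolerance."""
    hits = []
    for name, ref_mz in LIBRARY.items():
        ppm_error = abs(measured_mz - ref_mz) / ref_mz * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append(name)
    return hits
```

An untargeted approach inverts this logic: the full scan is recorded first, and libraries (or statistical models of authentic samples) are interrogated afterwards.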
There are many ways to ionise molecules as they enter the spectrometer. Some of the more common are listed in table 1. Many can be configured to produce either positive or negative ions, giving a further layer of instrument choice.
Table 1: Common Ionisation Methods
Ionisation Technique | Use and Limitations
EI (Electron Ionisation, also called Electron Impact) | Standard technique for coupling to GC, so only applicable to volatile and thermally stable molecules. Within this, near-universal applicability for molecules in the approx. 50-1200 mass range
CI (Chemical Ionisation) | Alternative for coupling to GC. Lower energy than EI, so good for detecting unfragmented molecular ions. More limited applicability
ESI (Electrospray Ionisation) | Standard technique for coupling to LC. Near-universal applicability for molecules in the approx. 50-1400 mass range. Prone to ‘matrix effects’: unpredictable effects on the analyte signal caused by co-eluting molecules
APCI (Atmospheric Pressure Chemical Ionisation) | Alternative for coupling to LC. Lower energy than ESI, so good for detecting the unfragmented molecular ion. Less prone to matrix effects. More limited applicability
ICP (Inductively Coupled Plasma) | Standard technique for ionising elements: metals and minerals
MALDI (Matrix-Assisted Laser Desorption/Ionisation) | Direct introduction from samples spotted on a plate. Ideal for large biomolecules: proteins, DNA fragments
DART (Direct Analysis in Real Time) | Sample is held on a probe at atmospheric pressure. Simple and quick. Moderate range of applicability for molecules in the approx. 50-1400 mass range; particularly good for volatile and labile compounds. Gives potential for portable devices
DESI (Desorption Electrospray Ionisation) | Atmospheric-pressure technique that can also be coupled with LC or other separation techniques. Suitable for both the 50-1200 mass range and large molecules (e.g. proteins). Many parameters to optimise, depending on the analyte. Good for scanning surfaces
REIMS (Rapid Evaporative Ionisation Mass Spectrometry) | Direct analysis of meats and fats (originally developed for pathology/surgery) using a highly charged metal edge (an ‘ion knife’) to cut into the sample. Proprietary technology of Waters Corporation
An electromagnetic field is used to either bend ions through a certain path (magnetic sector or quadrupole spectrometers), trap them for a certain length of time (Ion Trap spectrometers), or time their travel down an acceleration tube [Time of Flight (ToF) spectrometers]. The electromagnetic fields in Trap and ToF spectrometers can be varied much faster than in quadrupoles, making them more suitable for full scan applications.
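The ToF principle reduces to basic physics: every singly charged ion gains the same kinetic energy from the accelerating voltage, so its flight time scales with the square root of its mass. A minimal sketch, using arbitrary but realistic flight-path and voltage values:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, coulombs
AMU = 1.66053906660e-27     # atomic mass unit, kg

def tof_flight_time(mz, flight_path_m=1.0, accel_voltage=20000.0):
    """Flight time (seconds) for a singly charged ion of mass mz (in amu),
    accelerated through accel_voltage then drifting along flight_path_m.
    From kinetic energy zeV = mv^2/2, flight time is proportional to sqrt(m/z)."""
    mass_kg = mz * AMU
    velocity = math.sqrt(2 * E_CHARGE * accel_voltage / mass_kg)
    return flight_path_m / velocity
```

Quadrupling the mass doubles the flight time; typical flight times are a few microseconds, which is why ToF instruments need the very fast electronics mentioned above.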
Multiple spectrometers can be used in series to increase selectivity. They can be of the same or different separation mechanisms. Once the ion emerges from the first it is broken into smaller fragments, each of which then passes down the second spectrometer. If two quadrupoles are used in series (called ‘triple quadrupole’ or MS-MS) this gives the optimum sensitivity and selectivity for targeted analysis. LC-MS-MS is the industry standard for detecting trace level adulterants such as melamine or Sudan dyes. If a quadrupole is coupled with a fast scanning spectrometer (Q-ToF or Q-Trap) this gives the potential to preselect the molecular ion in the quadrupole then identify it by the full scan of its fragments (optimum selectivity). Modern Q-ToFs can be operated either as a triple quadrupole or as a Q-ToF, giving scope for dual-stage screening and confirmatory analysis.
Even more selectivity can be achieved by adding an Ion Mobility (IM) step within the series, e.g. LC-Q-IM-ToF. Ion mobility separates using the size and physical shape of the fragment, rather than its mass. Ion mobility detectors are familiar from airport security checks. The technique is very quick and so dovetails well with the rapid electronics and computing needed for ToF full scan acquisition.
Resolution is a measure of the accuracy with which the instrument discriminates mass. Generally, the lower the resolution the more sensitive the configuration, whilst the higher the resolution the more selective. Isotopic masses are not integers: the monoisotopic mass of hydrogen (1H) is 1.00783, for example, whilst carbon-12 is exactly 12 by definition. A low-resolution configuration (quadrupole MS or MS-MS) generally discriminates to 0.1 mass unit, and so would not discriminate between 1 carbon and 12 hydrogens in a molecule (a difference of about 0.094 mass units). A high-resolution configuration (ToF, Trap) gives much better mass accuracy, and so can infer molecular structure as well as just mass. The greater the resolution the higher the selectivity, but the lower the sensitivity and the greater the risk that a small drift in mass accuracy, for example caused by a small temperature drift in the laboratory, will mean that an analyte signal is missed. Non-targeted applications generally use ToF or Trap in high resolution mode.
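The value of mass accuracy can be shown by comparing species that share the same nominal mass. The sketch below uses standard monoisotopic masses: carbon monoxide, molecular nitrogen and ethene all sit at nominal mass 28 and are indistinguishable at low resolution, but their exact masses differ enough for a high-resolution instrument to separate them and infer the elemental formula.

```python
# Monoisotopic masses (most abundant isotope) of a few common elements.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def mono_mass(formula):
    """Monoisotopic mass from a dict of element counts, e.g. {"C": 2, "H": 4}."""
    return sum(MONO[el] * n for el, n in formula.items())

co = mono_mass({"C": 1, "O": 1})    # ~27.995
n2 = mono_mass({"N": 2})            # ~28.006
c2h4 = mono_mass({"C": 2, "H": 4})  # ~28.031
# All three share nominal mass 28; only a high-resolution instrument
# (discriminating well below 0.01 mass units) can tell them apart.
```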
Mass spectrometry is transferrable technology with test methods replicable from one laboratory to another. It is widely adopted and well embedded in laboratories. It has a wide scope of application, gives confidence in identification and can be used quantitatively at very low detection levels. With full scan techniques there is scope for retrospective data mining, looking at historical data for signals that were not sought or considered significant at the time.
Equipment is expensive and generally unsuitable for point-of-use testing. High resolution, particularly, needs skilled operators and a well-controlled environment. Test methods must be developed, optimised and validated for each specific application, and in many cases need extensive sample preparation and clean-up. MS signals can easily be enhanced, suppressed or missed, if interfering compounds enter the detector at the same time as the analyte.
Stable Isotope Mass Spectrometry

Isotopes are forms of a chemical element that differ only by the number of neutrons in the nucleus. They have the same chemical properties, but different mass. Isotopes can either be radioisotopes (decay by emitting radiation) or stable (do not decay to any appreciable extent). Most elements are a mix of isotopes. Natural carbon, for example, consists of approximately 99% stable 12C, 1% stable 13C, a tiny amount of radioactive 14C, and even smaller amounts of other isotopes. Since isotopes differ in mass, they can be measured by mass spectrometry.
The proportion of minor isotopes in the environment is specific to the geographic or geological region. So, in principle, the ratio of major-to-minor isotopes could be used to tell the geographic origin of any food. An example is the ratio of 2H (also called Deuterium, or D) to 1H (hydrogen). Seawater (H2O) contains a very small natural proportion of D2O. As seawater evaporates, and falls as rainfall, the D2O is naturally fractionated. Because it is slightly heavier, the D2O tends to fall before the H2O, so a higher ratio of H to D builds up further inland, with a higher ratio of D to H at the first mountain ranges that weather fronts tend to hit.
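Isotope ratios are conventionally reported in delta notation, as a per mil (‰) deviation from an international reference standard. A minimal sketch for 2H/1H against the VSMOW (Vienna Standard Mean Ocean Water) reference ratio, which is approximately 155.76 ppm:

```python
VSMOW_D_H = 155.76e-6  # D/H ratio of the VSMOW reference standard

def delta_2h_permil(d_h_ratio):
    """Express a sample's D/H ratio in conventional delta notation
    (per mil vs VSMOW). Seawater sits near zero; inland precipitation,
    depleted in the heavier D2O, gives increasingly negative values."""
    return (d_h_ratio / VSMOW_D_H - 1.0) * 1000.0
```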
Test methods to diagnose geographic origin usually measure the isotopic ratios of hydrogen, carbon, sulphur, oxygen, nitrogen and/or strontium, typically using multivariate analysis to seek statistical patterns that are indicative of a particular geographic region.
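As a much-simplified stand-in for the multivariate statistics such methods use, the sketch below assigns a sample to whichever region's database centroid it lies nearest. The two regions, their centroid values, and the choice of just two isotope measurements are all hypothetical; real studies use several elements and proper statistical models with confidence bounds.

```python
import math

# Hypothetical centroids of (delta-2H, delta-15N) values, as might be
# derived from a reference database of authentic samples per region.
REGION_CENTROIDS = {
    "region_A": (-60.0, 4.0),
    "region_B": (-95.0, 7.5),
}

def nearest_region(sample):
    """Assign a sample to the nearest centroid by Euclidean distance:
    a toy substitute for multivariate classification."""
    return min(REGION_CENTROIDS,
               key=lambda r: math.dist(sample, REGION_CENTROIDS[r]))
```

The quality of any such assignment is bounded by the reference database, which is why database coverage is repeatedly flagged as the limiting factor below.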
Stable isotope ratios in food can also be indicative of production method differences. An example is the difference between mineral and organic fertilisers. Mineral fertilisers are synthesised from ammonia, which is ultimately manufactured from atmospheric nitrogen. This has a slightly different 14N/15N ratio to soil-based nitrogen, the source of most organic fertilisers. So, in principle, the ratio of 14N/15N in a vegetable can be used to diagnose if it was organically fertilised. In practice, this is difficult to interpret in real-world situations (see the limitations discussed below).
Isotopic differences can also be diagnostic of adulteration. For example, bees tend to forage on plants that photosynthesise using the C3 pathway, producing honey with a certain 13C/12C ratio. Cane sugar photosynthesises using the C4 pathway, which produces a different 13C/12C ratio, hence adulteration of honey with cane sugar will change the 13C/12C ratio of the honey. However, it should be noted that this method is not efficient at detecting beet sugar addition to honey, so other complementary techniques would be required in this case.
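In the idealised two-source case, the cane sugar fraction follows from simple linear mixing of the two delta-13C end-members. The end-member values below are typical literature figures chosen purely for illustration; official protocols measure an internal standard (the honey's own protein) rather than relying on fixed constants.

```python
def c4_sugar_fraction(delta_mix, delta_honey=-25.0, delta_cane=-11.0):
    """Two-end-member mixing estimate of the cane (C4) sugar fraction from
    the bulk delta-13C of a honey sample. End-member values (per mil) are
    illustrative: C3 honey near -25, cane sugar near -11."""
    return (delta_mix - delta_honey) / (delta_cane - delta_honey)

# A measured delta-13C of -18 per mil, midway between the end-members,
# would suggest roughly half the sugars came from cane.
```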
Isotope-ratios are a fundamental property and difficult, although not impossible, to disguise. Interpretation of isotope-ratio differences is easiest for foods from simple production systems with very few inputs, and predominantly local inputs e.g. tomatoes grown hydroponically with local water.
Climatic and geological regions do not conveniently match national boundaries. It would be unlikely, for example, to be able to differentiate Italian from Slovenian wine. The more granularity in origin that is sought, the more detailed and extensive the reference database of authentic samples must be. This can become the limiting factor. If there are non-local inputs into the food production system (e.g. animal feed, fertilisers) then the reference database will need re-populating whenever these change in authentically produced samples. Natural variations in the isotope ratio being measured, due to specific and unpredictable local factors such as weather and amount of precipitation in a particular year, can outweigh the difference being sought.
If there are complex inputs into a food production system (e.g. starter feed, medicated feed, drinking water, finishing feed) then it can be difficult to correlate a cause-and-effect relationship with the overall isotope ratio and the single input of interest.
DNA Analysis

DNA, which is present in all animals and plants, comprises long sequences of just four bases. The sequence of these bases, in certain regions of the DNA, is unique to the species or variety. These are called ‘coding’ regions. DNA can be located in the cell nucleus (genomic DNA) or in cell organelles (mitochondria in animals/plants, or chloroplasts in plants). Genomic DNA has more differential coding regions between closely related species but is present at relatively low copy numbers. Mitochondrial DNA is less discriminatory but is present at much higher copy numbers.
DNA exists as a pair of complementary strands. The four bases consist of two sets of partners. Each base will only pair against its own partner in the complementary strand, so the complementary strand is like a photographic negative of the original. This property of DNA means that synthetic sequences of bases can be engineered to form strands that will only latch against specific sequences of DNA. This selectivity is the underpinning basis of all DNA analysis.
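Base-pairing can be sketched directly: A pairs with T, and C with G, so a synthetic probe or primer binds only where the target contains the exact complement of its sequence. A minimal illustration:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """The complementary strand, read in the conventional 5'-to-3' direction."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def probe_binds(probe, target):
    """A probe latches onto a target strand only if that strand contains
    the exact complement of the probe's sequence."""
    return reverse_complement(probe) in target
```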
Before any analysis, intact DNA must be extracted from the sample. Extraction conditions may need to be optimised for each sample type. There are a number of proprietary extraction reagents and systems, with Qiagen having dominated the market in recent years. A significant proportion of a DNA assay cost can be the licensing cost of the extraction reagents.
PCR uses an enzyme, polymerase, to create a duplicate copy of a DNA sequence. Every heating cycle creates another copy, doubling the amount of DNA. PCR typically uses 30 - 40 heating cycles over 45 minutes, potentially creating over a billion copies from a single DNA strand.
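The exponential arithmetic of PCR is simple to sketch. The efficiency parameter is included because real assays fall somewhat short of the ideal doubling per cycle; 1.0 represents the perfect case.

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after PCR, assuming each cycle multiplies the template by
    (1 + efficiency). efficiency=1.0 is the idealised doubling case."""
    return initial_copies * (1 + efficiency) ** cycles

# A single DNA strand after 30 ideal cycles: 2**30, i.e. just over a
# billion copies, matching the figure in the text above.
```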
In order to select the strand to be copied, it is bookended by a pair of primers. These are artificial sequences of bases that have been engineered to latch against the target DNA at the start and end of the section of interest. These primers can be ‘specific’ or ‘universal’, depending on whether they are engineered to fit against coding or non-coding DNA regions. Specific primers will only latch on to coding DNA from a specific species; only DNA from this species will be amplified. Universal primers are engineered to latch onto sequences that are common to all species within a group (e.g. all land vertebrates) but bookending a coding section that contains a lot of interspecies variability.
If universal primers are used, then the section of amplified DNA should contain species-specific variations within the sequence of bases. The sequence of bases can be diagnosed by selectively including polymerase inhibitors (dideoxynucleotides) in the PCR mix; this is known as Sanger sequencing. The terminated fragments can then be separated by capillary electrophoresis and the resulting sequence compared with public databases. Such testing is used for species identification. There are a number of alternative proprietary systems for diagnosing the sequences of bases which are much more rapid; these are collectively known as Next Generation Sequencing. They include Solexa sequencing (Illumina), pyrosequencing (Roche) and SOLiD (ABI).
The species-specific sequence variations that underpin DNA barcoding also mean that the length of the DNA-fragment bookended by universal primers will differ from species to species, particularly if it is genomic DNA that has been targeted. Fragments of different lengths can then be separated by chromatography, giving a chromatogram where the retention time is indicative of the fragment length (and hence the species) and the peak area is indicative of the fragment concentration (and hence the copy numbers in the sample). This gives an approximate quantification of each species in the mix.
Specific primers are most commonly used for meat species identification. Each primer pair is selective for one species. Combinations of different primers can be used within the same PCR assay, e.g. multiple different primer pairs for beef, horse, pork, goat, chicken, turkey, and donkey all within the same mix. This gives a highly selective presence/absence test for each species.
If a fluorescent dye is incorporated into the DNA during PCR, then the progress of the amplification can be measured in real time. The more amplified DNA produced, the stronger the fluorescent signal. Monitoring the fluorescence signal against cycle number gives a characteristic flat baseline, followed by a steep increase, followed by a plateau. The steep increase corresponds to the point where DNA copy numbers rise exponentially, from too few to detect through to saturation. The number of cycles before this steep curve occurs is indicative of the number of DNA copies in the original mix. It can be calibrated in a semi-quantitative manner. Alternatively, the curve produced by a given species can be retrospectively calibrated as a percentage of the total primer-targeted DNA in a multi-species assay.
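The semi-quantitative logic can be sketched by inverting the amplification model: a sample with twice as many starting copies crosses the detection threshold one cycle earlier. The threshold copy number below is an arbitrary illustrative constant; real assays calibrate against standards of known concentration rather than an absolute model.

```python
def initial_copies(cq, threshold_copies=1e10, efficiency=1.0):
    """Back-calculate starting copy number from the quantification cycle
    (Cq), the cycle at which amplified DNA crosses a detection threshold.
    Idealised model assuming perfect doubling (efficiency=1.0)."""
    return threshold_copies / (1 + efficiency) ** cq
```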
If the target DNA has not been fully studied or characterised then it is still possible to perform PCR using a collection of generic short primers, in the hope that they will latch somewhere onto the DNA and produce PCR fragments. The pattern of random fragments produced by a given set of primers is then unique to the species, and so can be compared to an in-house reference.
Rather than amplification by PCR, DNA can also be selectively detected by the use of hybridisation probes. These are long strands of bases that have been engineered to form complementary strands with specific regions of target DNA in the sample, binding on like a strip of Velcro. The probes are labelled to enable easy detection, for example with an antigen, a radiolabel, or a fluorescent tag. One common way of using the technique is to design a fluorescent tag that is quenched when the probe binds with its target. The mix is then slowly heated, and at a certain temperature the bound strands will separate and the probe will fluoresce again. The plot of fluorescent signal against temperature (the ‘melting curve’) is characteristic of the strength of binding; it will be strongest when the bound DNA is an exact match for the probe, i.e. when the sample contains the target DNA. Hybridisation probes are particularly suited to point-of-use applications, as the equipment is small and portable, and the procedure is faster than PCR.
DNA tests are an easily transferrable technology with a wide scope of application. Because assays can be designed to be incredibly selective, there can be good confidence in identification. Although the design of primers takes a great deal of expertise, once they have been engineered they are extremely cheap to mass produce. Real-time PCR tests are cheap and high throughput once set up. Sequencing equipment has a high capital cost, but tests are now sufficiently high throughput that the unit cost is affordable as a routine assay.
DNA measurements, even when semi-quantified in terms of DNA copy numbers, are extremely difficult to relate back to the g/kg concentration in the original sample (for example g/kg adulteration of horsemeat in minced beef). The amount of DNA, particularly mitochondrial DNA, extracted from any sample is inherently variable. Different cuts of meat have different amounts of DNA. Fish is even worse; the amount of DNA that can be extracted from fish is so variable as to make quantitation meaningless.
Real-time PCR can be overly sensitive. Coupled with the lack of g/kg quantitation, this means that an interpretative line cannot be drawn between low-level species substitution/adulteration and verification of factory cleaning/cross-contamination.
Not all food sample types have intact DNA that can be extracted. Highly processed/heat treated meat products, stocks, soups and gelatins have very low amounts of viable DNA. Laboratories may need to take a try-it-and-see approach in these cases.
DNA barcoding can be applied to mixes of species, as even mixed sequences will give a database match. However, if the species of interest is only present at a small proportion (approximately less than 10%) of the mix, then it is impossible to deconvolute and match to a database.
Public DNA databases are only populated by certified competent laboratories, but there are still occasional errors, as likely to relate to inconsistent taxonomy as to errors in the sequences themselves. Some of the reference sequences were generated from specimens of animals, or plants, collected in the 1950s and put into long-term storage, and verifying these can mean going back to original taxonomical drawings, which may have been misclassified.
Nuclear Magnetic Resonance Spectrometry

Energy is applied to a nucleus (for example, in proton-NMR, to the hydrogen nuclei within organic molecules) by spinning it within a magnetic field. As the nucleus loses this energy it re-emits it at a fixed frequency. This frequency is shifted by nearby electrons in the molecule, so a molecule will emit at a number of different frequencies, each characteristic of the neighbourhood of each different excited nucleus. This overall pattern, called a spectrum, is characteristic of the molecule. Proton-NMR (the NMR mode with most food authenticity applications) is suited to identifying molecules with polar hydrogen-containing groups, particularly fats, alcohol and sugars.
The higher the resolution (stronger and higher frequency of magnetic field), the better the granularity in the spectrum, the more expensive the equipment and the more care needed in its environment and siting. Low-resolution instruments can be sited at point of use, but high-resolution instruments need well controlled laboratory environments.
An important application of high-resolution NMR, in food authenticity testing, is based upon Site-specific Natural Isotope Fractionation (SNIF-NMR) and measures the ratio of 2H/1H at each neighbourhood of the molecule. Sugars are first extracted and fermented to alcohols, which are more amenable to proton-NMR, giving a stronger signal. This approach has been used successfully, since the 1990s, for fruit juice identification and has recently been applied to other sugar-rich foods, such as honey.
High resolution SNIF-NMR is standardised and widely accepted for fruit juice authenticity, with a good robust reference database that has been built in collaboration with the juice industry.
Low resolution NMR is ideal for on-line use in factories, giving results within seconds, with little ongoing cost after the initial capital investment. It is well-accepted technology, having been used routinely for many years to measure the approximate total fat percentage in meat raw materials or products; extending its scope to applications such as distinguishing the species of frozen white blockfish by its total fat NMR spectrum is therefore just an incremental change for some factories.
Much of the SNIF-NMR technology and databases are proprietary commercial property of Eurofins and, with the exception of fruit juice, there have been industry concerns that reference databases and interpretations are not open to scrutiny or challenge. High resolution NMR requires an extremely well-controlled environment, making it unsuitable for on-site use. The equipment is also expensive.
Spectroscopy

Spectroscopy exploits the fact that distinctive parts of a molecule will absorb energy at fixed wavelengths and then re-emit energy at discrete frequencies. These frequencies are shifted by neighbouring electrons in nearby parts of the molecule. In this respect, spectroscopy is similar to NMR. The energy is provided by shining light upon the molecule. Different light frequencies are absorbed by different types of molecular structure.
The most usual light frequencies used for food authenticity testing applications are (approximately):
- IR: 1000 to 2500 nm
- NIR: 700 to 1000 nm
- UV-Visible: 10 to 700 nm
or a multiplex of different light sources. IR, for example, is absorbed and re-emitted by carbon-carbon double bonds. Molecules containing such bonds, for example unsaturated fats, give particularly characteristic IR absorbance spectra.
FT-IR is the most common implementation of infrared spectroscopy. IR spectra are derived indirectly from the changes in an interference pattern of the incident and emitted light waves as a mirror in the instrument is moved; a mathematical treatment called Fourier Transformation (FT-IR) reconstructs the spectrum, giving much improved sensitivity compared to conventional scanning spectra.
Raman5 spectroscopy is often used in combination with chemometrics to chemically profile food samples and their ingredients. It is a non-destructive technique that can be used as a fingerprinting tool to compare known authentic samples with a test sample. It has also been used to confirm the purity and identity of bulk ingredients.
When a multiplex of different wavelength sources and/or detectors are used, the technique is referred to as Spectral Imaging.
Spectroscopy is suited to point of use and on-line applications; equipment can be small, portable, and give instantaneous results. NIR, particularly, is suitable for miniaturisation into handheld scanners and even smartphone applications. NIR light also passes through glass and thin plastic, meaning that products can be examined through packaging. All spectral imaging can be configured so that it is suited for unskilled operation, giving red or green light results as outputs. Compared to mass spectrometry and NMR, the equipment is cheap.
Spectroscopy is good for checking the overall identity of a food or ingredient but is unsuited to detecting low level contaminants or adulterants. Each new application needs a specific local reference database of authentic product; for example, NIR checking of milk authenticity needs a different reference database for each local dairy collection centre.
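A fingerprint comparison of the kind described above can be sketched as a simple correlation check between a test spectrum and an authentic reference measured at the same wavelengths. Real chemometric models are far more sophisticated and matrix-specific; the 0.98 threshold here is an arbitrary illustration of a pass/fail ("green or red light") decision rule.

```python
def correlation(x, y):
    """Pearson correlation between two spectra sampled at the same wavelengths."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def is_authentic(test_spectrum, reference_spectrum, threshold=0.98):
    """Flag a sample whose fingerprint correlates poorly with the authentic
    reference. The threshold is illustrative, not a validated limit."""
    return correlation(test_spectrum, reference_spectrum) >= threshold
```

Note that this kind of check confirms overall identity only; as the text says, it cannot detect low-level adulterants, and the reference spectrum must come from a locally relevant database of authentic product.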
There are a host of different analytical techniques, only some of which have been covered in this IFST Information Statement. Each technique has its own strengths, applications, and limitations. It is important to understand these in order to interpret results. It is common to use a panel of different analytical techniques to try and detect a particular food fraud concern.
Glossary
DNA | Deoxyribonucleic acid; used generically to describe test methods that identify species based on the sequences of bases within nucleic acids
ELISA | Enzyme-Linked ImmunoSorbent Assay
GC-MS | Gas Chromatography-Mass Spectrometry
HPLC | High Performance Liquid Chromatography
HMF | Hydroxymethylfurfural, formed when honey is overheated
LC-MS | Liquid Chromatography-Mass Spectrometry
MVA | Multivariate Analysis, a statistical treatment of data clusters
m/z | The mass-to-charge ratio of an ion, the property used for identification in mass spectrometry
NIR | Near-Infrared spectroscopy
NMR | Nuclear Magnetic Resonance spectroscopy
PCR | Polymerase Chain Reaction
Raman | Raman spectroscopy
ToF | Time-of-Flight mass spectrometer
UV | Ultraviolet light absorbance spectroscopy
References
1 IFST Sampling for Food Analysis - key considerations https://www.ifst.org/resources/information-statements/sampling-food-analysis-key-considerations
2 A Vinothkanna et al “Advanced detection tools in food fraud: A systematic review for holistic and rational detection method based on research and patents” Food Chemistry 446 (2024) 138893
3 A Weight of Evidence Toolkit for Food Authenticity Investigations, https://www.gov.uk/government/news/a-weight-of-evidence-toolkit-for-food...
4 Food Authenticity Network http://www.foodauthenticity.uk/, accessed 4 January 2018
Institute of Food Science & Technology has authorised the publication of this Information Statement, entitled 'Food Authenticity Testing Part 2: Analytical Techniques' dated October 2024, replacing that of February 2019.
This updated Information Statement has been prepared and peer reviewed by professional members of IFST and approved by the IFST Scientific Committee.
The Institute takes every possible care in compiling, preparing and issuing the information contained in IFST Information Statements, but can accept no liability whatsoever in connection with them. Nothing in them should be construed as absolving anyone from complying with legal requirements. They are provided for general information and guidance and to express expert professional interpretation and opinion, on important food-related issues.