This blog is managed by Dr. Mike Bernstein, Vice President of Research and Development at Mestrelab Research.

Published on: June 22, 2016 by Mike Bernstein

I recently had the chance to catch up with qNMR “guru”, friend, and collaborator – Torsten Schönberger (BKA, Germany). After a few “Kölsch” beers, attention naturally turned to the very basic question: what is the best pulse tip angle to use for qNMR experiments?

Of course, it all comes down to collecting the best signal-to-noise ratio (SNR) in the shortest time. When learning NMR, we are all told that using the “Ernst angle” will afford the best overall result. It is therefore quite common for NMR spectrometers to be set up with a 30˚ read pulse: this provides 50% of the signal size, but less time is needed for the Mz magnetisation to return to near-equilibrium levels. Thus, if you are applying many pulses in your experiment, you can get away with a shorter pulse repetition time whilst still deriving quite good relative integral values.
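For reference, the Ernst angle follows from cos θE = exp(−TR/T1), where TR is the repetition time. A quick, illustrative sketch (not part of the original article’s analysis):

```python
import math

def ernst_angle_deg(tr: float, t1: float) -> float:
    """Optimal (Ernst) tip angle, in degrees, for repetition time tr
    and longitudinal relaxation time t1 (same units)."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

# A repetition time equal to T1 gives an Ernst angle of ~68 degrees;
# only for quite short repetition times does ~30 degrees become optimal.
print(round(ernst_angle_deg(1.0, 1.0), 1))   # 68.4
```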

With qNMR, however, things are quite different because (very nearly) complete return to equilibrium magnetisation is an absolute “must” before any read pulse.

Let me illustrate with a simple example. Say your sample’s longest T1 is 1.0 sec. Let’s also assume that we want to be quite fussy and have the M(z) at >99.9% of its equilibrium value before the next read pulse is applied. Let’s assume that the SNR that you need is achieved with 10 pulses when a 90˚ read pulse is used, and finally that the sample has been in the magnet long enough for the equilibrium magnetisation to be achieved before we start the NMR measurement.

So, if we use a 90˚ read pulse, we need to wait ca 7.0*T1 between pulses. The experiment will take 10*7 = 70s.

T1 recovery curves

If, however, we use a 30˚ read pulse then 50% of the available magnetisation will be in the X-Y plane after the pulse and contribute to the signal. We will therefore need 4X as many scans to achieve the same SNR as in the case, above. The time penalty is compounded when we consider relaxation: the time taken for the overall magnetisation to return to >99.9% of its equilibrium value is ca. 6.25*T1. The total time for the experiment is therefore 40*6.25 = 250s – a factor of 3.6 longer than the case of the 90˚ read pulse.

In reality, the longest T1 in the sample may well be 10 s (or even longer in rare cases), and then the absolute time penalty of the small tip angle becomes much larger. With T1 = 10 s we would collect the same signal in 10*70 = 700 s using a 90˚ read pulse, versus 40*62.5 = 2500 s for the small tip angle case – the same factor of 3.6, but now a difference of half an hour.

So, in general, this 3.6 penalty factor is more punishing as the T1 and required number of scans increase.
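The arithmetic above is easy to reproduce. A sketch using the article’s own figures (per-scan signal scales as sin θ, so equal SNR needs 1/sin²θ as many scans; the quoted inter-pulse delays of 7·T1 and 6.25·T1 are taken as given):

```python
import math

def scans_needed(base_scans: int, tip_deg: float) -> int:
    # Per-scan signal scales as sin(tip angle); SNR grows as sqrt(scans),
    # so matching the 90-degree SNR needs base_scans / sin(theta)^2 scans.
    return round(base_scans / math.sin(math.radians(tip_deg)) ** 2)

T1 = 1.0                                       # longest T1 in the sample (s)
time_90 = scans_needed(10, 90) * 7.0 * T1      # 10 scans x 7 s    = 70 s
time_30 = scans_needed(10, 30) * 6.25 * T1     # 40 scans x 6.25 s = 250 s
print(time_90, time_30, round(time_30 / time_90, 1))   # 70.0 250.0 3.6
```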

What if you don’t need to wait for relaxation to be complete?

Let’s consider 3 cases:

  1. 7*T1 wait between pulses – required for high precision qNMR
  2. 5*T1 wait between pulses – quite commonly used for conventional qNMR
  3. 1*T1 wait between pulses – used for counting Hs in standard NMR experiments


Wait between pulses    % of full magnetisation                Time-saving factor
                       90˚ read pulse      30˚ read pulse
7 * T1                 >99.9               >99.9              3.6
5 * T1                 >99.3               >99.7              3.4
1 * T1                 >63.2               >81.6              NA


We see that there is still a benefit to using a 90˚ read pulse when the inter-pulse time is 5*T1. But when this is 1*T1 then appreciable integration inaccuracies will be observed. For standard NMR experiments the Ernst angle rule applies.
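The percentages in the table follow from simple exponential recovery. A minimal sketch, assuming (as the table’s 30˚ entries evidently do) that the read pulse leaves a deficit in Mz – 100% for a 90˚ pulse, taken as 50% for the 30˚ rows – which relaxes back with time constant T1:

```python
import math

def recovered_fraction(n_t1: float, deficit: float) -> float:
    """Fraction of equilibrium Mz after waiting n_t1 * T1, starting from
    Mz = (1 - deficit) * M0 immediately after the read pulse."""
    return 1.0 - deficit * math.exp(-n_t1)

for n in (7, 5, 1):
    print(n,
          round(100 * recovered_fraction(n, 1.0), 1),   # 90 deg: full deficit
          round(100 * recovered_fraction(n, 0.5), 1))   # 30 deg: 50% deficit
```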

Heteronuclear qNMR

The discussion, above, assumes that when using a 90˚ read pulse all signals are very close to having their Mz magnetisation fully placed in the xy-plane. This may not be the case when measuring quantitative spectra of heteronuclei – commonly 13C, 19F, and 31P – where wide spectral widths (sw) may be required to observe all the NMR signals. To ensure that uniform excitation occurs across the entire spectral range it may be necessary to use a shorter pulse length. This can be expressed as a “bandwidth factor” (F). The rule-of-thumb is that the sw (Hz) should be less than 0.3/pulse width (in seconds), which ensures that the intensity variation will be less than 1% of the maximum intensity.
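The rule-of-thumb is easy to apply in practice; a small sketch, using the 0.3 factor quoted above:

```python
def max_sw_hz(pulse_width_s: float) -> float:
    """Rule-of-thumb maximum spectral width (Hz) for <1% intensity
    variation across the spectrum: sw < 0.3 / pulse width."""
    return 0.3 / pulse_width_s

print(max_sw_hz(10e-6))   # ~30000 Hz: a 10 us pulse covers ~30 kHz
print(max_sw_hz(30e-6))   # ~10000 Hz: a 30 us pulse covers only ~10 kHz
```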

In the plot, below, we see the excitation profiles for 10 us (90˚) and 30˚ pulses. Superimposed on this are typical chemical shift ranges for the indicated nuclei, for the case where the 1H frequency is 300 MHz. The horizontal axis is frequency offset (Hz) from the transmitter, and the vertical axis shows the fraction of maximum intensity for a square pulse.

excitation profiles

Of course, there are excellent practical solutions such as using shaped pulses (CHORUS [1], chirp, etc.) – which have considerably improved overall performance for qNMR in these large sw cases. A further consideration when spectra with large sweep widths are acquired is that signals that are significantly off-resonance suffer effects that reduce their final signal intensity, and cannot be ignored. These effects and approaches to compensation are well described – see, for example, Tim Claridge’s book. [2]


When collecting 1H NMR data where accurate integrals are required, use a 90˚ read pulse.


  1. Power, J. E.; Foroozandeh, M.; Adams, R. W.; Nilsson, M.; Coombes, S. R.; Phillips, A. R.; Morris, G. A. Chem. Commun. 2016, 52, 2916–2919.
  2. Claridge, T. D. W. High-Resolution NMR Techniques in Organic Chemistry; Elsevier, 2016.

Published on: March 4, 2016 by Mike Bernstein


Quantitative NMR (qNMR) is a common theme in my blogs, and it is always interesting and fun to speak with the growing number of like-minded colleagues. Dr John Gauvin (DSM, Holland) recently gave an excellent webinar presentation where he compared the errors and accuracy that can be expected when either “sum” or “peaks” (GSD) integration is used. [1] For conventional spectral analysis GSD peak picking and integration have a lot to offer, but “Sum” integration is king for the very best results that qNMR usually demands. [2]

There are occasions when a conventional “sum” integration is spoiled by unfortunately placed impurity peaks in the region that we would like to accurately integrate. We have devised a new integration mode that we call “edited sum” (ES), which is fully described elsewhere. [3] I wish to introduce you here to this novel solution to a common, practical problem in qNMR analysis. I will show two examples where it might be applied, and how to actually do this in Mnova.

Global Spectrum Deconvolution (GSD) and peak type classification

GSD is a major capability in Mnova. In the context of qNMR and this discussion we need to remember:

  • GSD will provide peak lists along with the calculated area for each peak
  • GSD is insensitive to imperfect baseline, and to whether a peak overlaps with others
  • A resolution enhancement is achieved with GSD
  • Although GSD is not insensitive to phase mis-set, it has sufficient tolerance for minor errors – in contrast to sum integration, where very precise phase adjustment is a crucial requirement
  • GSD peak area determination performance remains constant even down to an SNR of 20

Peak type classification is an automatic procedure with GSD. Mnova attempts to categorise peaks (approximately) and specify that they are related to:

  • Compound
  • Solvent peaks – residual solvent, contaminating water
  • Small peaks from 13C satellites, and artifacts
  • Impurity compound(s)

The sum integral for a region should be very close to the summed areas of all peaks in that region, regardless of their classification.

We show, below, the 1H NMR spectrum of estradiol dissolved in DMSO-d6. GSD peak picking was applied using the default settings: average peaks, 2 fitting cycles. The acquired spectrum is shown as a black line, the GSD peaks are blue, the sum of these peaks is red, and the residual is green.



We take note of these high-performance features:

  • In the region 1.0 – 1.4 ppm there is excellent modelling of the closely spaced peaks
  • At ca 3.5 ppm we see that a compound triplet is clearly determined. The broad peaks are well modelled, too
  • Expansion: a region having small, impurity peaks can be seen to be well fitted, too

Edited sum integration

ES is a hybrid approach that uses the best features of Sum and GSD peak integration to assist with tricky integration tasks relating to quantitation.

The basic idea is to follow this work-flow:

  • Perform ES integrations of spectral regions required for analysis
  • Perform GSD peak picking in Mnova
  • In the integral regions, set the peak flags for impurity peaks to be of type “Impurity”
  • Integrals are automatically recalculated

Mnova will then sum the GSD areas of these impurity peaks, and remove their contribution from the Sum integral by applying a scaling factor. [3]
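Mnova’s exact scaling is described in ref. [3]; as a rough sketch of the idea only (a hypothetical peak list, not Mnova’s API), the Sum integral is scaled by the GSD area fraction that does not belong to impurity-flagged peaks:

```python
def edited_sum(sum_integral, gsd_peaks):
    """gsd_peaks: (area, flag) pairs from GSD peak picking in the region.
    Scales the classical Sum integral by the GSD area fraction that does
    NOT belong to peaks flagged as impurities."""
    total = sum(area for area, _ in gsd_peaks)
    wanted = sum(area for area, flag in gsd_peaks if flag != "Impurity")
    return sum_integral * (wanted / total)

# a 1050-unit Sum integral over a region where GSD attributes
# 900 units to the compound and 100 to an impurity triplet
peaks = [(900.0, "Compound"), (100.0, "Impurity")]
print(edited_sum(1050.0, peaks))   # 945.0 -> the impurity share is removed
```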

Example 1: High-precision qNMR

In the most demanding cases of qNMR, uncertainty levels <0.1% are required. [4] Considering only the signal integration aspect, wide integral limits must be chosen so that the full integral of the peak is determined by Sum integration. This wide integration region increases the likelihood that peaks from impurities, artifacts, and 13C satellites will resonate in the measurement region and adversely affect the measurement.

We see in the spectrum, below, a portion of a 1H NMR spectrum. A contaminating triplet signal is seen close to the high-field 13C satellite of the doublet.

When we determine the absolute integral by “Sum” integration (LHS) the value is actually inflated by the area of the contaminating triplet. When we mark the triplet peaks to be non-compound, and use “Edited Sum” integration we see the result on the RHS.

The integral obtained by using ES integration is equivalent to a “Sum” integral of the necessary, wide region – but without the contribution from the contaminating triplet peaks.


small impurity

Example 2: A conventional qNMR case

Consider a spectrum for quantitation at a more typical uncertainty level of ca. 2%. [5] In the spectrum, below, you see a compound doublet at lower field that is overlapping with an impurity doublet. We would like to quantify the main component.

Whilst one solution would be to use GSD-derived integrals for the compound peaks, we would prefer to keep using Sum integration for quantitation. The solution lies with ES: the high-field peaks have their type set to “Weak peaks”, and ES automatically removes their contribution from the Sum integration for the region. We see that the Absolute integral for this region is adjusted down.


big impurity

Simple test

To test the operation of ES, I used a 1H spectrum of ibuprofen acquired under quantitative conditions. The spectrum shows two high-field methyl doublets having a theoretical integral ratio of 6:3. Using conventional Sum integration I measured a ratio of 6.003:3.000 from the spectrum.

Next, a spectrum having doublets in an integral ratio of 2:1 was simulated. The simulation placed the peaks close to the ibuprofen doublets, and the spectra were added using the Mnova “Arithmetic” tool. These are shown in the figure, below.

On the LHS we can see the result of a conventional Sum integration of these 2 regions. From the absolute integrals we see a relative ratio of 5.730:3.000 – clearly very wrong!

On the RHS I used Edited Sum. The peak “type” of the simulated peaks was changed to “Impurity”. To do this, hover the mouse over the peak, and right-click. Then select “Edit peak type”, and change the Peak flags to “Impurity”.

Lastly, Integral → Options… was used to (a) change the “Calculation method” to “Edited Sum”, and (b) untick “Impurity” in the “Include” list.




The same integral regions were used, and the ratio of integrals changed to the correct value of 3.000:5.986. Differing by 0.28%, this result is reassuringly close to the numbers obtained for the starting spectrum.


ES overlap test


Mnova provides powerful processing capabilities for quantitative NMR. These have been extended with the Edited Sum integration method, which accurately integrates spectral regions with “real world” overlap problems. The functionality is available in standard Mnova, and in the qNMR and SMA plug-ins.


  3. Schoenberger, T.; Menges, S.; Bernstein, M. A.; Pérez, M.; Seoane, F.; Sýkora, S.; Cobas, C. Anal. Chem.,
  4. Schoenberger, T. Anal. Bioanal. Chem. 2012, 403 (1), 247–254.
  5. Malz, F.; Jancke, H. J. Pharm. Biomed. Anal. 2005, 38 (5), 813–823.


Published on: December 10, 2015 by Mike Bernstein

There is a rapidly growing use of NMR for quantitation (qNMR) and its extension, mixtures analysis. [1] The experiment most commonly used for the purpose is a simple 1D 1H spectrum. Whilst this suits many purposes, there are cases where this approach is insufficient or ambiguous, often as a consequence of signal overlap. This comes into sharper focus when targeted analysis is performed using mixtures of increasing complexity.

I will highlight here a few publications that point the way to some emerging technical advances that are relevant to extending the reach of targeted analysis using NMR.

qNMR blog post 2

1D NMR spectra of heteronuclei

It can be the case that a complex 1H analysis can be simplified when another nucleus is observed, such as mixtures analysis using 13C{1H} spectroscopy. [2] In fortunate cases 19F NMR (for example) may also be used to mitigate signal overlap and sometimes simplify a species to just one signal.

One critical consideration when making these experiments fully quantitative is to ensure that all resonances in the spectrum experience the same tip angle.

A 13C spectrum may cover 200 ppm, or 20 kHz using a 9.4 T system (1H at 400 MHz). One rule-of-thumb is that the spectral width should be < 1/(10*PW): if PW = 10 μs, the effective bandwidth for quantitation is only about 10 kHz – inadequate for the full 13C range. There is a long tradition of devising adiabatic pulses that have a large, uniform RF excitation band and good phase characteristics; one popular example is the CHIRP pulse. Recently Morris and co-workers described [3] the “CHORUS” pulse excitation scheme and its applicability to qNMR. CHORUS uses linear, phase-modulated, swept-frequency “chirp” pulses and is reported to have excellent excitation characteristics: “CHORUS achieves more than 99.9% excitation, with reproducibility better than 0.1%, over a bandwidth of more than 250 kHz”.
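These back-of-the-envelope numbers are easy to check. A sketch (assumed value: 13C resonates at ~100.6 MHz on a 9.4 T system):

```python
def spectral_width_hz(ppm_range: float, larmor_mhz: float) -> float:
    return ppm_range * larmor_mhz        # 1 ppm corresponds to larmor_mhz Hz

def quantitative_bandwidth_hz(pw_s: float) -> float:
    return 1.0 / (10.0 * pw_s)           # the sw < 1/(10*PW) rule-of-thumb

sw = spectral_width_hz(200.0, 100.6)     # 13C range at 9.4 T: ~20 kHz
bw = quantitative_bandwidth_hz(10e-6)    # 10 us hard pulse: ~10 kHz
print(sw, bw, bw >= sw)                  # the hard pulse cannot cover the range
```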

NMR data for quantitation using 13C, 19F observe or any nucleus that requires a wide excitation band should be acquired using a special excitation pulse rather than a standard hard pulse.


Fig. 1. 1H NMR spectra recorded over a 300 kHz range with 99.9% excitation and excellent reproducibility.[3] This would also be very adequate for quantitative 13C and 19F NMR spectrum detection in a single experiment.

Solvent saturation

There are any number of solvent saturation pulse experiments to choose from, starting perhaps with the ubiquitous “noesypr1D”. The key is to achieve a high level of solvent suppression, with minimum perturbation – saturation, phase-, or baseline distortion – of nearby signals. Adding the “perfect echo” element to a pulse sequence appears to help considerably, and has been described for WATERGATE [4a] and W5 excitation sculpting [4b].

High-dispersion NMR

Another approach to effectively extend the applicability of 1H NMR for targeted analysis is by reducing the peak overlap. So-called “Pure Shift” NMR experiments achieve this by “collapsing” all multiplets in a spectrum to singlets [5]: the equivalent of a fully broadband 1H decoupled spectrum is obtained. Whilst this prize does come at the cost of some loss in sensitivity (SNR), the end result is highly desirable. In fact, Pure Shift experiments are starting to be used in everyday, routine NMR services to determine diastereomeric excesses, impurity levels, etc.

Its potential use for multicomponent mixture analysis using a targeted analytical approach is obvious.[6] Providing that the new “singlets” in the 1H spectra can be confidently ascribed to a particular mixture component, the analysis will use the long-established, standard relationship of proportionality between normalised integrals and concentration.


Fig. 2 Conventional and Pure Shift 1H NMR spectra of a reaction mixture. Note that some signals appear to be single multiplets, but the “fully 1H decoupled” spectrum reveals them to be from different species.


The classical response to the overlap problem in NMR is to increase the dimensionality of the experiment. It therefore stands to reason that 2D NMR can be used to reduce or eliminate signal overlap in 1H NMR spectra. Whilst the 1H multiplicity is retained, significant simplification is achieved. This fact has been exploited with a number of food mixtures, and Giraudeau’s review article details the considerations for successful implementation.[1]

Whilst 2D homonuclear experiments can be of benefit, the greater utility is observed when 2D heterocorrelation experiments such as 2D 1H-13C HSQC are employed. This has been described, for example, for extra virgin olive oil analysis [7] and heparin products [8].

Having your cake and eating it!

Whilst Pure Shift and 2D experiments have significant merit, what if we could combine the two? The resulting 2D spectrum would ideally have a single peak for each 1H-13C pair because the 1H multiplicity information is suppressed. [9]


Fig 3.  Pure-shift 2D COSY spectrum of a flavonoid mixture. In the upper left are the conventional, phase-sensitive 2D COSY cross peaks, having extensively overlapping cross peaks, and (lower contour plot) the effective separation of cross peaks for closely-related species. [6a]

A Pure Shift 2D HSQC should offer the “ultimate” in resolution. However, quantitation from 2D data is complicated: the “golden rule” of NMR quantitation is typically broken, in that no single proportionality factor can be applied to all 2D cross peaks to convert volumes to component concentrations. This limitation may become a thing of the past as a result of work from Teo Parella’s laboratory in Barcelona, Spain. The family of so-called “Perfect” 2D HSQC experiments sets out to produce NMR data ideally suited to quantitation.[9]


Fig 4. selHSQMBC experiment applied to brucine.

Data analysis

Mnova offers the Simple Mixtures Analysis (SMA) tool for targeted mixtures analysis. All of the experiments described in this article can be used as input for SMA – on their own, or as part of a number of qNMR experiments performed on a single, complex mixture. This will be demonstrated in a future article.


We thank Prof. Gareth Morris and co-workers, Manchester University, for advance information and figures on CHORUS (Fig. 1).

Dr Juan A. Aguilar-Malavia, Durham University, kindly provided images and data shown in Figs 2 and 3.

Dr Teo Parella provided Fig. 4.


  1. Giraudeau, P. Magn. Reson. Chem. 2014, 52 (6), 259–272.
  2. Caytan, E.; Remaud, G. S.; Tenailleau, E.; Akoka, S. Talanta 2007, 71 (3), 1016–1021.
  3. Power, J. E.; Foroozandeh, M.; Adams, R. W.; Nilsson, M.; Coombes, S.; Phillips, A. R.; Morris, G. A. SMASH NMR Conference 2015, Baveno, Italy.
  4. (a) Adams, R. W.; Holroyd, C. M.; Aguilar, J. A.; Nilsson, M.; Morris, G. A. Chem. Commun. 2013, 49 (4), 358–360. (b) Aguilar, J. A.; Kenwright, S. J. Analyst 2015, in press. DOI: 10.1039/C5AN02121A
  5. Castañar, L.; Parella, T. Magn. Reson. Chem. 2015, 53 (6), 399–426.
  6. (a) Aguilar, J. A.; Morris, G. A.; Kenwright, A. M. RSC Adv. 2014, 4, 8278. (b) Aguilar, J. A.; Faulkner, S.; Nilsson, M.; Morris, G. A. Angew. Chemie Int. Ed. 2010, 49 (23), 3901–3903.
  7. Dais, P.; Hatzakis, E. Anal. Chim. Acta 2013, 765, 1–27.
  8. Keire, D. A.; Buhse, L. F.; al-Hakim, A. Anal. Methods 2013, 5 (12), 2984.
  9. Castañar, L.; Parella, T. In Annual Reports on NMR Spectroscopy; 2015; Vol. 84, pp 163–232.

Published on: July 23, 2015 by Mike Bernstein

Biodiesel (BD) is fuel produced from a variety of plant and animal fats and oils. Before it can be blended with petroleum diesel fuel (DF), the feedstock is subjected to a transesterification reaction, usually to generate fatty acid methyl esters. Whilst car engines can be modified to run on pure BD, it is typically blended at a level of 2-20%, which removes this need. Determining the percentage of BD (%BD) in a blend is clearly an important analytical check.

Simple Mixtures Analysis (SMA) is an Mnova plug-in for targeted mixtures analysis using NMR data. It can be configured by the user to generate almost any experiment that relies on integration data extracted from each component, and to convert these into meaningful analytical numbers.

The NMR of these fuel blends has received some attention [1]. There are several factors that make it quite simple to distinguish the BD from the petroleum diesel fuel (DF), the simplest being that BD oils uniquely have olefinic functionalities, and (typically) a methoxy singlet. Petroleum DFs, by contrast, uniquely have aromatic components, and no olefinic signals. Bearing in mind that the olefinic and aromatic signal regions of a 1H NMR spectrum are distinct and well separated, we can evaluate the best protocols for %BD determination.


In this article we will work through a simple analytical case, starting with the NMR method development, and progressing to creating and testing the SMA experiment. We will use the determination of biodiesel levels in fuel as an example.


Low-field NMR (LF-NMR) is well suited to the task of BD determination [2] because excellent signal dispersion is not required, and we are not sample limited. For this example, spectra were recorded on an Oxford Instruments Pulsar 60 MHz spectrometer. Biodiesel/Diesel fuel blends were purchased from VHG Labs and used neat. Spectra were recorded using a 30s relaxation delay, and 16 scans.

In the figure, below, we see the 1H NMR spectra of 100% BD (lower trace) and 10% BD/90% DF (upper trace). As expected, the BD sample shows prominent C=CH and OMe peaks (red boxes), and the 90% petroleum DF shows significant signal intensity in the aromatic spectral region (blue box).


Method validation

Comparison using authentic samples

With the simplest protocol we compare the integral of a signature BD region with that of an authentic sample of 100% BD. Assuming that the sample preparation, measurement conditions, and data analysis are identical, this has the potential to be a very reasonable approach. We also assume that the spectrometer shows good sample-to-sample reliability.

This analysis was performed with manual “sum” integration of the olefinic region (5.8-4.8 ppm), and the results are shown below. The absolute integral (AI) was used for this plot.


A closer analysis of the line fit shows:
Slope = 4101.3 ± 6.7
Y-intercept = -260.1 ± 263
R2 = 1.0

Finally, 5 replicates of a 2% BD solution were examined in a repeatability test. For these, an average AI value of 8022.8 was obtained, and the standard deviation was 30.7 (0.4%).

There clearly exists an excellent correlation between %BD and NMR absolute integral of the olefinic region. It follows that a simple ratio method using the 100% BD integral value is a valid approach to a very accurate and precise measurement.
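As an illustration, the fitted line can be inverted to estimate %BD from a measured AI. A sketch using the slope and intercept above (the helper function is hypothetical, not part of SMA):

```python
SLOPE, INTERCEPT = 4101.3, -260.1        # from the straight-line fit above

def percent_bd(absolute_integral: float) -> float:
    """Invert AI = SLOPE * %BD + INTERCEPT to estimate %BD."""
    return (absolute_integral - INTERCEPT) / SLOPE

# mean AI of the five 2% BD repeatability measurements
print(round(percent_bd(8022.8), 2))      # ~2.02
```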

A ratio method

If the olefinic region represents only the BD and the aromatic region only the petroleum DF, it follows that the ratio of these area integrals should give a good estimation of %BD. The possible advantage of this method is that the result would be determined from a single spectrum, rather than relying on a previously acquired reference spectrum.

For the following plot we see the ratio between the AI of the olefinic region (5.8-4.8 ppm) and the aromatic region (8.5-6.0 ppm), plotted as a function of the known %BD.


The statistics for the straight line fit are a little worse than before, but still acceptable.
Slope = 0.0313 ± 0.0005
Y-intercept = -0.026 ± 0.009
R2 = 0.994
Finally, 5 replicates of a 2% BD solution were examined in a repeatability test. For these, an average value for the ratio of 0.0424 was obtained, and the standard deviation was 0.0003 (0.9%).

The “ratio method” described above provides a very acceptable, albeit slightly less precise, method for %BD determination. This could be attributed to the compounding of errors from two integrations.

Creating the SMA experiment

In this example I will focus on the ratio method. For this analysis we need to constantly refer to the AI value for 100% BD – a value that was determined from that experiment (see chart, above).

Let us go through the steps now of creating the %BD determination in Mnova SMA. First, we need to specify the Library folder – which can be anywhere on the computer drive. In SMA, click on the “Library” icon, and navigate to the folder. If necessary, create a folder first. At this stage you will see a dialogue similar to this, where you can click on the blue “+” to add a new experiment.


Click the “Add item” button to open a blank experiment panel.

Let us start by specifying the following:

  • Experiment name (free form text)
  • Short description (free form text)
  • 1D data will be used
  • That we will use Page 1 from each document
  • The units of the determination are “%”


Every SMA experiment requires one reference compound. The value for the Reference compound’s equation is referred to as “CCF”, and has most obvious significance for concentration determinations. However, we will use a little short-cut and use this to store the AI value of 100% BD.


Note that when using fields for the first time, they may not hold any values: just type “Reference” in the appropriate free text area.

With the Reference specified, we only need to add this information to the compound table: click on the “Add new compound” button.

Next, we specify the information for the BD. We can start by simply editing the existing compound sheet. We have to specify a chemical shift range to use, and a simple equation. This can be seen in the screen shot, below.


Note the following:

  • The name has been changed
  • The “Type” is now “Compound” (although name could be used)
  • The PPM range must be specified, along with the corresponding number of nuclides (NN); the NN is not used in this instance.
  • The integration method is “Sum”, which is appropriate for a range.

The equation used to determine the BD is specified as: 100 * I1/CCF

What this means is that the AI for the olefinic region (I1) will be divided by the AI for 100% BD – which we have stored in the Reference compound equation. The result is multiplied by 100 to convert the ratio to a percentage.

Lastly, add the new compound to the Experiment, and click on the “OK” button. The experiment has been created, and will now be usable. Let’s see if it works!

To test the SMA experiment we just produced simply load the processed BD spectrum, and press the “Analyse” button to quickly obtain these results for %BD as determined by SMA:

Gravimetric %BD    %BD by SMA
 2                  1.98
 5                  4.76
10                  9.84
15                 14.96
20                 20.08
25                 24.88
30                 30.20

A straight line fit to these data had a slope of 1.007 ±0.006, and intercept of -0.179 ±0.113, with R2 = 0.9998. An excellent result!

Implementing the ratio method

The ratio method that compares the integrals of the olefinic and aromatic protons can also be quite easily implemented. This would require specifying a range each for the olefinic and aromatic protons, as we did in the validation stage. We know from the previous analysis that this equation holds:

AI(olefinic)/(AI(aromatic) + AI(olefinic)) = 0.013613*%BD – 0.054

In SMA we therefore rearrange the equation to give %BD, and use this equation:

((I1/(I2+I1)) + 0.054)/0.013613
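The rearrangement can be sanity-checked with a quick round trip (hypothetical helper functions, using the calibration numbers above):

```python
SLOPE, INTERCEPT = 0.013613, -0.054      # calibration from the validation fit

def ratio_from_bd(pct_bd: float) -> float:
    # forward model: AI(olefinic)/(AI(aromatic) + AI(olefinic))
    return SLOPE * pct_bd + INTERCEPT

def bd_from_ratio(ratio: float) -> float:
    # the SMA equation: ((I1/(I2+I1)) + 0.054) / 0.013613
    return (ratio - INTERCEPT) / SLOPE

# the rearrangement round-trips: %BD in, %BD out
for x in (5.0, 10.0, 20.0):
    assert abs(bd_from_ratio(ratio_from_bd(x)) - x) < 1e-9
print("round trip OK")
```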


This simple ratio determination conveniently provides a validated method for %BD determination, with the LF-NMR providing very acceptable quality spectra, and reliability. With the conditions for data processing now determined, it is a simple task to convert this into an SMA experiment. Thereafter, the (automated) analysis of multiple samples is a very quick and simple matter.


[1] Knothe, G. (2001). Determining the blend level of mixtures of biodiesel with conventional diesel fuel by fiber-optic near-infrared spectroscopy and 1H nuclear magnetic resonance spectroscopy. Journal of the American Oil Chemists’ Society, 78(10), 1025–1028.

[2] “Monitoring the Preparation of Biodiesel with Benchtop NMR”,


The data were acquired and kindly supplied by Dr A. Gerdova (Oxford Instruments). These, and the insightful input, are gratefully acknowledged.

Published on: June 1, 2015 by Mike Bernstein

A quiet revolution is occurring in the world of quantitation: qNMR. [1] I have written extensively on the subject in this blog [2], and there are many excellent peer-reviewed articles to consider. The big attractions for analysts are speed and convenience. You do not need to determine a response factor for every new compound, and your accuracy may be limited by your weighing skills. Furthermore, your analysis will equally apply to single compounds as well as mixtures.

For some applications, such as forensics, the highest levels of accuracy and precision are required, and a number of studies have shown that careful qNMR is well up to the task. [3] But in this case the technique becomes less of a high throughput method.

However you do qNMR, a huge part of the achievable accuracy and precision will depend on the signal integrations.

In this article I would like to look at integration in a little more detail, and show some of the steps needed for best practice. How far you wish to push qNMR depends on your attention to detail in this regard, so please read on – even if you don’t really want a “black belt” in integration! Note that there are already some good articles on integration and overlap. [4]


In these days of gradient shimming it can be easy to forget that the very best line-shape is a prize that still must be earned. Much of what you may ever want to do with NMR experimental data is predicated on the inherent signal line-shape. To assess your own and your spectrometer’s performance it may be necessary to collect a spectrum that really shows how good your inherent line-shape is. I will speak to signal acquisition below but, assuming you have everything right, you can see my point, below.

Try this quick test of your instrument’s line-shape: apply line fitting to a nice, clean singlet resonance having a good signal-to-noise ratio (SNR). Because the algorithm assumes that your signal line-shape is pure Lorentzian, Gaussian, or a mix of both, this is a good check. Be sure to look at the signal at the very base of the peak: peak broadening there is quite common.

The spectrum, below, is of caffeine in DMSO-d6. The real data line is coloured black, the simulated line is purple, and the residual between the two is coloured red. You see that the line is poorly modelled at the base of the peak. If we add the integrals of the peaks in the region we determine the total integral to be 4437. If, however, we perform a standard integration (Sum) between 3.5 and 3.1 ppm, the absolute integral is shown to be 4080. The area overestimation and apparent integral error caused by an attempted line-shape analysis of an imperfect peak is therefore 8.75%, assuming Sum integration to be a correct determination.




What other consequences of poor line-shape are there?

There are many cases when you may wish to use a form of line fitting for quantitation (see below). For example, wide lines can make it more difficult to select the full region you need for accurate integration, because you are likely to be bumping into other signals. However, having asymmetric lines or a hump under the peak just means that any line-shape simulation you may wish to attempt will give an erroneous result.

Spectrum acquisition: use a long acquisition time?

A lot has been said about the considerations when acquiring data for accurate quantitation – and justifiably so. After having looked at a lot of external data I would add one item to the list: avoid truncation artefacts by using a long acquisition time. Why not acquire 128K or 256K data points? Yes, this will adversely affect the final signal-to-noise-ratio (SNR), but I am assuming that you are not sample limited. If you do this then you will be less reliant on apodisation functions to force the signal intensity to zero at the end of the acquisition period, the common way to suppress truncation artefacts.

Some users acquire a 1H NMR spectrum with a short acquisition time, and then try to reclaim decent digitisation using forward Linear Prediction (fLP). This is not a wise strategy, as fLP works best when there are just a few signals (“coefficients”) – as is the case with 2D data. It is better just to collect an FID having lots of data points, and shorten the relaxation delay accordingly. The total experiment time is unaffected.

There is a measure of personal preference in this recommendation, and many spectra are acquired without this consideration. I advocate collecting as near to a perfect signal as possible from the sample.

Data point resolution

In much of the discussion that follows, it will become apparent that the number of data points used to describe an NMR peak has a significant impact on integration fidelity. Rather than just considering the point-to-point spacing (Hz), it is worthwhile looking at the number of data points that actually describe the peak, as this will ultimately affect any integration method.

Classical integration

We use classical (“Sum”) integration every day, but how is this performed with digital data? I can do no better than refer you to an excellent blog on the subject [4], but the basic approach is to approximate the signal to a series of rectangles, and sum those areas. It’s simple but effective.

In the spectrum, below, you see a schematised depiction of the process. The data points are shown as maroon crosses, and the software determines the area of the peak from the sum of the areas of the yellow rectangles it silently constructs.



A poorly-digitised peak can only lead to a higher inaccuracy in the Sum integral.
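As a minimal sketch of the rectangle method, assuming evenly spaced data points (the function and values are illustrative, not Mnova’s implementation):

```python
def sum_integral(intensities, delta_hz):
    """Classical 'Sum' integration: treat each data point as a rectangle of
    width delta_hz (the point-to-point spacing) and height equal to the
    point's intensity, then add up the rectangle areas."""
    return sum(intensities) * delta_hz

# A hypothetical peak described by only five data points, 0.5 Hz apart
peak = [0, 1, 2, 1, 0]
print(sum_integral(peak, 0.5))  # → 2.0
```

With so few points, a small shift of the peak relative to the sampling grid changes the result noticeably, which is the point made above about digitisation.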

Global Spectrum Deconvolution (GSD)

Whilst GSD [5] is not as complete as classical line fitting, it is an extremely fast algorithm: hundreds of lines in a spectrum can typically be fitted in a few seconds. GSD determines line position (frequency), width, height, and a “shape” parameter (kurtosis) using its proprietary fitting algorithm. (Kurtosis is similar to the Lorentzian/Gaussian factor, yet mathematically different.) From these data the integral for each peak is calculated. For many, GSD is the default peak analysis method used to report peak positions and absolute integrals.

The Mnova user can choose whether to use the GSD or Sum method to determine absolute integrals: this is a key consideration in qNMR.
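GSD’s own fitting algorithm is proprietary, but the general idea of computing an integral from fitted line parameters can be illustrated with an ordinary Lorentzian fit. This sketch uses SciPy on synthetic data; it illustrates the principle only, not Mnova’s method:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, height):
    """Pure Lorentzian line: position x0, full width at half maximum, height."""
    hw = fwhm / 2.0
    return height * hw**2 / ((x - x0)**2 + hw**2)

# Synthetic singlet (position 0.2, FWHM 0.8, height 100) with a little noise
x = np.linspace(-5.0, 5.0, 400)
rng = np.random.default_rng(0)
y = lorentzian(x, 0.2, 0.8, 100.0) + rng.normal(0.0, 0.5, x.size)

# Fit the three line parameters, then compute the area analytically:
# the integral of a Lorentzian is pi * height * half-width
(x0, fwhm, height), _ = curve_fit(lorentzian, x, y, p0=(0.0, 1.0, 90.0))
area = np.pi * height * fwhm / 2.0
```

If the real line is not a clean Lorentzian/Gaussian mixture, the fitted parameters, and hence the derived area, will be systematically wrong – exactly the line-shape problem discussed earlier.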

So, how do the methods compare?

Signal overlap

Where signal overlap is an issue, GSD is highly favoured. If you look at the spectrum of estradiol in DMSO-d6, below, you see that the H-12 triplet cannot be accurately integrated (lower trace) because of overlap with the broad water signal. GSD correctly determines the properties of the triplet peaks and broad, overlapping signal, and this is better seen in the extracted “spectrum” (upper trace) of the compound alone. GSD will provide an integral just of the triplet.


Integration accuracy

As mentioned previously, integration is at the heart of qNMR. So, are integrals derived from GSD and Sum equally accurate and precise? In the test, below, I show a spectrum of felodipine in DMSO-d6 solvent, with 8 multiplet regions selected. If we determine the normalized integrals by dividing each by the number of contributing nuclei, the integration values for all regions should be equal, within experimental error.


In the table I have summarized the results of this analysis for Sum and GSD integrals. The average of the 8 normalized values and the standard deviation (%) are shown.

                   Sum        GSD
Average            36298.7    38086.4
Standard dev (%)   0.53       2.90

We see that integrals derived from GSD show a significantly higher standard deviation than those from Sum integration, and this is a typical result. The GSD error is still acceptable for all but the most demanding integral determinations, and GSD integration is commonly used in Mnova.

GSD-derived integrals will probably be of sufficient accuracy and precision for many applications, particularly those involving mixtures. But for “ultra high-precision qNMR”, Sum integration should be used wherever possible.
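The comparison behind the table is straightforward to reproduce: normalise each region’s integral by its proton count and compute the relative standard deviation. A sketch with made-up integrals (not the felodipine values):

```python
import statistics

# Hypothetical (absolute integral, number of protons) pairs for four regions
regions = [(36300.0, 1), (72800.0, 2), (109000.0, 3), (36450.0, 1)]

# Normalise so every region should, ideally, give the same value
normalised = [integral / n_h for integral, n_h in regions]
mean = statistics.mean(normalised)
rsd_pct = 100.0 * statistics.stdev(normalised) / mean
print(f"mean = {mean:.1f}, RSD = {rsd_pct:.2f}%")
```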

Edited Sum integration

This topic will be discussed in detail in a forthcoming publication [6]. This integration method is a hybrid of Sum and GSD integration, and will be of interest to those seeking the most accurate integration/quantitation result.

The idea is to use Sum integration of peaks, with a wide integral width, throughout the spectrum. Low-level impurities, however, will inevitably cause inaccuracies if they lie within the integration limits. Although the Sum integrals of the low-level impurities cannot be accurately determined due to insufficient SNR, their GSD-derived integrals can be determined quite accurately. So the method relies on subtracting the GSD integrals of the impurities from the Sum integral of the region.


The most accurate integral for this region would therefore be:

110545.7 – (327.6 + 252.0 + 247.2 + 179.6 + 191.3 + 76.1)

= 109271.9
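In code, the correction is a single subtraction, using the values from the region shown above:

```python
region_sum = 110545.7  # Sum integral over the whole integration region
# GSD-derived integrals of the low-level impurity peaks inside the region
impurities = [327.6, 252.0, 247.2, 179.6, 191.3, 76.1]

edited_sum = region_sum - sum(impurities)
print(round(edited_sum, 1))  # → 109271.9
```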

Sensible integral regions selection for qNMR

Fundamental to qNMR in Mnova is the ability to select the regions that will contribute towards the most accurate result. With the qNMR plugin there are sophisticated “rules” that the user can instruct the software to apply, such as ignoring labile protons, disfavouring singlets, etc. In practice, it is often sufficient to ask the software to automatically select the multiplets that produce the result with the smallest RMSD.

The latter will likely cause the software to do what you would do yourself, by disfavouring the selection of “contaminated” multiplets for quantitation. Common sources of contamination will be residual solvents, degradation products, impurities, etc.
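A simplified sketch of the “smallest RMSD” selection idea: brute-force over subsets of normalised multiplet integrals and keep the subset with the lowest relative scatter. This illustrates the principle only, not Mnova’s actual rules:

```python
from itertools import combinations
import statistics

def best_regions(normalised, min_size=3):
    """Pick the subset of normalised integrals with the smallest relative
    standard deviation. A contaminated multiplet inflates the scatter of any
    subset that contains it, so it tends to be excluded automatically."""
    best, best_rsd = None, float("inf")
    for k in range(min_size, len(normalised) + 1):
        for subset in combinations(normalised, k):
            rsd = statistics.stdev(subset) / statistics.mean(subset)
            if rsd < best_rsd:
                best, best_rsd = subset, rsd
    return best

# Four clean multiplets and one contaminated by an overlapping impurity
print(best_regions([1.001, 0.998, 1.002, 0.999, 1.250]))
```

The contaminated value (1.250) never appears in the winning subset. Brute force is perfectly adequate for the handful of multiplets in a typical qNMR analysis, although it scales exponentially with their number.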


The use of high-resolution NMR for compound quantitation is a powerful technique. We see that signal integration is at the heart of its applicability and accuracy, but options exist to make the technique as reliable as is needed.


I would like to thank Dr Carlos Cobas for valuable input into this article.


  1. Simmler, C., Napolitano, J. G., McAlpine, J. B., Chen, S.-N., & Pauli, G. F. (2014). Universal quantitative NMR analysis of complex natural samples. Current Opinion in Biotechnology, 25, 51–9. doi:10.1016/j.copbio.2013.08.004
  3. Schoenberger, T. (2012). Determination of standard sample purity using the high-precision 1H-NMR process. Analytical and Bioanalytical Chemistry, 403(1), 247–54. doi:10.1007/s00216-012-5777-1
  5. Bradley, S. A., Ouyang, A., Purdie, J., Smitka, T. A., Wang, T., & Kaerner, A. (2010). Fermentanomics: monitoring mammalian cell cultures with NMR spectroscopy. Journal of the American Chemical Society, 132(28), 9531–3. doi:10.1021/ja101962c
  6. Schoenberger and collaborators, manuscript in preparation.


Published on: April 13, 2015 by Olalla Lema

Origenis – Mestrelab Press Release, worldwide distribution

Munich, Germany – 13 April 2015 – The Spanish scientific software company Mestrelab Research SL and the German Biotechnology company Origenis GmbH announced today that they have entered into a collaboration to jointly develop a set of physico-chemical property prediction plugins for Mestrelab’s software products, which Mestrelab will be responsible for marketing.

The collaboration agreement has been signed following successful proof-of-concept integration, and the path to get these tools to market is therefore expected to be short. The new partners plan to release in excess of 25 different atomic and molecular properties as part of this joint development effort. The collaboration aims to exploit the combination of Mestrelab’s large user base and track record of developing highly popular and widely adopted scientific software tools, and of Origenis’ know-how and excellence in drug design and in the use of LINGO methods to predict structural properties.

Santi Dominguez, CEO of Mestrelab, commented on the announcement:

‘We are hugely excited by this collaboration. Physico-chemical property prediction is a widely used tool in our current markets, and this has to date been a gap in our product offering. Origenis’ technology in this area is outstanding, and after extensive evaluation we are hugely impressed by its speed and accuracy. This collaboration will allow us not only to fill that gap, but to become the supplier of first-in-class tools for these applications to our growing customer base. The range of properties Origenis can provide is also very exciting, as it will result in our getting to market not only a very high quality but also a very widely applicable set of tools.

We expect to make these predictions available in our current Mnova platform, as well as in our about to be released mobile device, web and SaaS offering. 2015 is going to be our biggest year ever from the perspective of new products, and this collaboration is one of the most exciting opportunities I feel we have. And we expect to get to market rapidly with these tools, with the first set of plugins available no later than Q3.’

Michael Thormann, Managing Director of Origenis, commented:

‘We are very proud to expand our collaboration with Mestrelab. We join our efforts to integrate Origenis’ physico-chemical property predictions, which have a long successful internal track record, into Mestrelab’s excellent scientific software products. Our range of valuable physico-chemical property prediction tools will now be part of their user-friendly and broadly used Mnova software suite. With Mestrelab’s large user base in over 100 countries precise property prediction will then be available worldwide for scientists in Academia, Biotech, and Pharma.’

Download the pdf of the release here.

Published on: April 1, 2015 by Mike Bernstein

Octopus – “pulpo” in Galician – is a favourite dish in Galicia, the home of Mestrelab. If you were fortunate enough to come to SMASH 2013, you would surely have tasted this local delicacy. The secret to cooking pulpo is to get it soft and flavoursome – not chewy. This can be done, and there are hundreds of family recipes that prescribe how to get it “just right” – all closely guarded, of course: it’s not trivial.


The new generation of bench-top NMR spectrometers (1) promise to herald a new era of precise food analysis. Researchers have shown how these instruments can be used to show horse meat adulteration of beef (2), olive oil adulteration (3), and what’s really in those herbal supplements “for men only” (4). But in Northern Spain, nothing touches a nerve like cooking perfectly the humble octopus!

We were therefore overjoyed to receive a call from one of our small-magnet collaborators with an eye to using our new mixtures analysis product, SMA, to assist with a new project.

The data still need validation, but it appears that there is substantial promise for this hardware-software combination to take the guess-work out of cooking pulpo!

With substantial IP at stake it is not possible to reveal much detail, but that will follow. Briefly, the NMR spectra of the cooking juices clearly show the sugar profile with respect to constituents and polymerisation, and a careful study has resulted in the discovery of the exact profile that indicates soft, perfectly-cooked pulpo. Researchers believe that this scientific approach will be infallible, and a major advance on approaches based on colour and onions – current best practice.

With tapas-style cooking all the rage in trendy bars and dinner parties around the world, there is little surprise that several television “shopping channels” have shown considerable interest.

“We can envisage dieters and healthy-eating fanatics using bench-top NMR coupled with Mnova/SMA in ever-increasing numbers”, a famous TV chef, HB, speculated.

We will reveal more on this exciting topic when the IP is in place. Finally, we have not ruled out the possibility of a bespoke instrument just to do this analysis. The name has not been agreed on, but “Perfect Pulpo” has been mooted. The days of “misinformation” recipes (5) must be numbered!


(2) Jakes, W., Gerdova, a., Defernez, M., Watson, a. D., McCallum, C., Limer, E., … Kemsley, E. K. (2015). Authentication of beef versus horse meat using 60MHz 1H NMR spectroscopy. Food Chemistry, 175, 1–9. doi:10.1016/j.foodchem.2014.11.110





(References are real.)

Published on: March 19, 2015 by Mike Bernstein

It is quite common for NMR peaks or multiplets to move position (chemical shift) through the course of an arrayed experiment. Sometimes this may be undesired and need correction through an alignment procedure, but there are times when it reflects the chemistry that is occurring in the sample tube. The changes may be a consequence of pH changes – deliberate, or resulting from a proton transfer that is part of a chemical reaction. Or the chemical changes to the compounds in the tube could very well be the result of new molecules having quite different electronic environments, and therefore chemical shifts.

The Data Analysis capability in Mnova is geared towards the extraction of intensity, integral, or chemical shift information in a stacked series of related spectra. This capability finds extensive use in the study of chemical reactions using NMR, which is often simply called “Reaction Monitoring” (RM). Typically, one or more NMR spectra are recorded at regular intervals through the time course of the reaction. Such “time sliced” data then need quick information extraction – which is the task of the NMR software.

There are a host of difficulties and challenges that are inherent to this data extraction task, and Mnova provides a wide variety of capabilities to facilitate the task. Usually this can be done with minimal user input. The task of extracting data for multiplets whose chemical shifts change in the stacked array has traditionally been tackled with a largely manual process: the user would select a region that is close to what is required. The regions have “handles” which the user can drag, and thereby change the centre and width of the integral region. It is therefore possible to use these to “feed” the multiplet into the integrated region, marked in light green.

Automated peak tracking

But wouldn’t it be much better if peak integration was done perfectly and automatically? In fact the task is more complex than originally stated here because the following are quite common, and the algorithm must be tolerant to their presence:

  • Multiplets change at different rates through the array – it is not a linear process
  • Multiplets may move in different directions in one experimental array
  • Multiplets may change multiplicity
  • Peaks may change width
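A toy version of such a tracker: for each new spectrum, search a small window around the peak’s previous position and re-centre on the local maximum. This copes with nonlinear drift and direction reversals, though the real Mnova algorithm is considerably more sophisticated (it must also handle multiplicity and width changes):

```python
import numpy as np

def track_peak(spectra, start_idx, window=20):
    """Follow a peak through a stacked array of 1D spectra.

    spectra   -- 2D array, one spectrum per row (shared frequency axis)
    start_idx -- data-point index of the peak in the first spectrum
    window    -- half-width, in points, of the search window

    Returns the peak's index in each spectrum. Searching only near the
    previous position keeps the tracker locked on even when the drift is
    nonlinear or reverses direction between increments.
    """
    idx, path = start_idx, []
    for spectrum in spectra:
        lo = max(idx - window, 0)
        hi = min(idx + window + 1, len(spectrum))
        idx = lo + int(np.argmax(spectrum[lo:hi]))
        path.append(idx)
    return path

# Synthetic array: a Gaussian peak drifting back and forth across 10 spectra
x = np.arange(512)
centres = [200, 205, 212, 216, 214, 208, 203, 201, 204, 210]
spectra = np.array([np.exp(-0.5 * ((x - c) / 3.0) ** 2) for c in centres])
print(track_peak(spectra, 200))  # → [200, 205, 212, 216, 214, 208, 203, 201, 204, 210]
```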

So, let’s see how well it works!



The functionality is activated by default, but can be toggled off.

You must first select a spectrum in the stack where the peak/multiplet is present. Then, after specifying the graph you want to generate (Integrals, Concentration, etc.), drag across the multiplet of interest. That’s it! The pairs of blue arrows show the stop/start points for the click-and-drag operation that was used to select the region on the active spectrum.


Example 1: A multiplet with no “direction”

The chemistry isn’t important. What you see here are some quite complex phenomena that cause a multiplet from a labile proton at ca 5.0 ppm to take quite an unusual trajectory and alter its multiplicity.

We see that each and every spectrum in the array has an optimal integral region after the Automatic Peak Tracking was applied.




Example 2: Quick and accurate

In this example I have simply used Automatic Peak Tracking to allow me to very quickly generate curves for the seven multiplets – which all move through the course of the reaction.

Each multiplet was correctly selected with a single click and drag operation, making the analysis very quick indeed!


Example 3: pH titration

A pH titration of a histidine-containing peptide causes some quite large chemical shift changes to occur. In the example, below, we see Automatic Peak Tracking successfully applied to two signals. Chemical shift changes are relatively large, but the singlets are nevertheless automatically tracked. This is quite a difficult test!



It is possible to create an Automatic Peak Tracking algorithm that works well for a variety of real-world cases. This will significantly facilitate data extraction from more complex arrayed NMR data, further enhancing the capability of Mnova for this application.

Look out for this functionality in Mnova Ver 10.1, coming soon. If you can’t wait, drop me an email:


Published on: January 12, 2015 by sdominguez

Santi Dominguez, CEO of Mestrelab Research

As a company, Mestrelab has been quiet in the aftermath of the momentous announcement by Agilent, a few weeks ago, of the impending closure of what was Varian Inc. This announcement has dramatically changed the global NMR landscape. Those in the community who know me will realize our silence is not due to shyness, but rather the consequence of a desire to watch the story develop and to form a considered opinion about future prospects.

Many of our customers in the last few weeks have however asked us how Agilent’s announcement affects us and what it will mean for the future of our company, and in the light of these repeated questions, we feel it is time to start answering.

Of course, as a vendor-neutral provider of, amongst other things, but still mainly, NMR software, Mestrelab is interested in a competitively healthy and level NMR market. As members of the NMR community we are saddened by Varian’s demise (it is no exaggeration to call it the end of an era) and concerned about what it may mean for the future of NMR.

However, Mestrelab has always been vendor neutral. We have provided market-leading software to labs with magnets from all manufacturers in the market, and we intend to continue to do so. Many of our previous sales have been to users who have instrumentation from the leading high-field magnet manufacturer, and this should continue or even increase now that this manufacturer will be under less competitive pressure to innovate in software. In addition, there are still a number of different players in the NMR hardware market, particularly with the advent and growth of benchtop magnet manufacturers. We therefore see the challenges posed by the current situation as an opportunity, and one that can actually be very positive for a company positioned like ours, which has built a product and customer community on the foundations of product quality, customer support, and attentiveness to market needs. This will continue to be our strategy, and the new future will bring many opportunities, some of them new and greater than ever before.

Our users and prospective users should know that we are very positive when looking forward. 2014 has been our best ever year, built on the basis of independently achieved sales, and the outlook for 2015 is already looking much better. For those wondering about Mestrelab, some things are worth highlighting:

  • We are investing aggressively to bring more products and services to market, and we have just added, in the last 2 months, 15% to our headcount in R&D, as we are confident the NMR market will be hungrier than ever for new products and innovation, which will only stem from competition and the striving of different companies to gain market acceptance through differentiated and exceptional product offers.
  • 2015 will see Mestrelab release more new products than ever, with at least 6 new products focused on the needs of the chemistry community targeted for release in the coming year.
  • We will also be supporting new applications, new market segments, new technology platforms and new look product and services, which will represent very significant, even ground breaking innovation in our field.
  • We will be forming collaborations with market-leading innovators in the NMR community and sharing the new functionality with all our customers
  • Rather than decreasing our appetite for continuing innovation in software development for the NMR sector, we see increasing opportunity to develop useful software that makes our customers’ hardware purchases more productive

It is early days, and we are still looking for answers to questions, such as:

  • How can we support or be of help to Varian/Agilent users who may feel somewhat stranded by the latest developments?
  • What should our strategy be for working with high field hardware vendors going forward?
  • What will be the effects of the latest events on innovation by other parties in the NMR software field?
  • Looking ahead, is the NMR spectrometer market less likely to commit to a single instrument vendor, but spread the risk between several?
  • Etc.

And of course, as we always do, we believe in answers from the market to all these questions, so I encourage you to comment and give your opinions on all the above, and to come up with anything else which you think Mestrelab should be thinking about at present.

I will write with more details on some of these things in the next few days, but so far we are keen to give the NMR market some initial commitments in this new landscape:

  • More investment
  • More innovation
  • More creativity

Than ever before!

Published on: December 5, 2014 by Mike Bernstein


Following the light of the sun, we left the Old World. Christopher Columbus

With Mestrelab celebrating its 10th birthday, now seems an excellent time to reflect on how it all began. It’s a fascinating insight, described by the man with the initial vision: Prof. Javier Sardina of the University of Santiago de Compostela. I am sure that you will enjoy his retrospective as much as I did. Thanks Javier! Here’s to another exciting 10 years! Mike

Anger, family ties, and a very small and crowded organic synthesis lab: what do they have in common? Combined in the right proportions at a fortuitous point in time and space, they planted the seed that much later grew into MESTRELAB RESEARCH.

It was the year 1995, the spatial coordinates were those of the department of Organic Chemistry, at the University of Santiago de Compostela (at the time you would have had to use a physical map to find it. GPS and Google maps were still far off in the future), the anger was mine, and I shall tell you about family ties in a minute.

I was a relatively young associate professor then, ready and willing to give you my opinion about anything under the Sun, even if you didn’t ask for it. Yes, I was opinionated, and very passionately so. All Spaniards are, in case that you haven’t noticed. (Of course you have noticed, this is a trait very hard to miss if you have met even just one of our breed.) The only difference was that I was always right, just like the rest of my countrymen and women! So I had an opinion on off-line NMR processing software as well. And now we get to the part where I tell you about why I became very angry.

WARNING: you are now entering into politically correct territory because I have to talk about the then top of the line off-line NMR data processing software, brought to us by a well known NMR instrument manufacturer.

WARNING: we are now back to our normal, politically incorrect space-time continuum. I hope that you have enjoyed your time away from the real World. I thought that program was t***h (read as painfully complicated to use), overp****d and overprot****d (remember the goddam dongle?) – and this made me very angry.

I shall not go into details, but you get the picture, and if you have been in this business long enough, you know exactly what I am talking about. I was angry and frustrated because I could not get my hands on a piece of friendly software to process my NMR FIDs on my computer – so what? This is usually the stuff from which physicians treating gastric ulcers make their livelihoods. But not this time, due to three circumstances so completely unrelated that it is hard to believe that Destiny did not take a hand.

First, I learnt about Giuseppe Balacco’s SwaNMR program for the Mac. Remember that this was happening at a time before Google was even a twinkle in anyone’s eye, so this was a major stroke of luck. Giuseppe was working at Menarini at the time, and the company let him work on his NMR data processing program and also give it away. (I have always wondered what he actually did at Menarini to justify getting his paychecks!) But they didn’t let him set up a website to distribute it, so getting SwaNMR was a nightmare. You had to write him a letter that he could show to his bosses. An e-mail message would not do. Then he would mail a floppy disk with the program to you. Just imagine what it would take to get frequent program updates! I was only the 11th person to go through that ordeal, but it was well worth it. SwaNMR was fantastic – much better than its commercial competitor (which shall not be named).

I felt obliged by Giuseppe’s generosity towards the NMR community, and I offered to host his program on one of our University’s servers so people could easily download it without the hassles associated with snail-mail (how many of you remember this term? That just shows your age) delivery of floppy disks. He accepted and we quickly became e-mail friends and collaborators. But he wouldn’t hear of my suggestion of porting SwaNMR to Windows. He thought that Windows 3.1 was a derailed attempt to replicate the very elegant Mac OS on those unreliable Intel-hearted machines, and that he would stick to his beloved Mac with its blindingly fast math co-processor. Can you blame him?

Second, a very young graduate student, fresh from a stay at Leicester University, came to my office to ask for help in transforming (pun intended) the experimental research he did there into an M. Sc. thesis at our University. The fellow was Carlos Cobas, whom you know very well by now. You would have had no problem recognizing him back then. He was the same somewhat shy, wide-eyed, easy-going, fun-loving, very optimistic fellow that he still is. So I became Carlos’ tutor and helped him write his masters thesis and get his degree. And that was the end of my involvement in his career, I thought.

Silly me: little did I know that friendships developed before I was born were going to prevent that from happening. In case you were wondering, this is the part where I tell you about family ties. One evening at home, my late elder brother asked me, in a rather roundabout way – the trademark of Galicians, the peculiar Spanish tribe to which I belong – if I knew a certain guy named Cobas. My early warning system (the one that I use to detect if I am about to be asked for a favour or to do a chore) was evidently off that day, because I answered with a candid “yes, I do”. Then he proceeded to lecture me about how our late father and Carlos’ father had been friends for a long time.

By that time I had realized that all hope of bailing out of whatever my brother was going to ask of me was lost. So he duly proceeded to inform me that Carlos wanted to get a Ph. D. and that he had decided that I should be his advisor. In Galicia, once the families get involved, you had better comply with their “suggestions” or else – usually a very cold and painful “else”. So I did the only thing that I could to avoid shame to my family and unending coldness and pain for me: I said “of course, no problem”.

Third. This is the part where I tell you that my very small lab was overpopulated at that time. How is this relevant to this tale? Very much so, because I did not have lab space to house a single additional Ph. D. student. And yet I absolutely needed to give Carlos a research project for his Ph. D. And he would expect to be provided with the resources to carry it out. So what was I to do? At this point I have to come clean with you and confess that, fully at that time and partly still today, I was your typical, run of the mill, garden variety synthetic organic chemist. So available lab space was a sine-qua-non requirement to undertake any research project in my group, and I didn’t have any.

Well, this is that fortuitous point in time and space that I told you about at the beginning of this blog. Family ties and a very small and crowded lab had already joined forces and played their parts to put me in a tight spot. By now most of you, intelligent readers, will have guessed that anger (together with my admired Giuseppe Balacco) was going to provide the key to finding a way out of that conundrum.

What do you do when you don’t have lab space available but you still want to do research (in Chemistry, that is)? You turn computational, of course. Or, in our case, to writing scientific software.

Computers are cheap, they can be housed in a very small space, and they consume almost no resources, except creativity. And Carlos had that in spades. I was naive enough to believe that we could easily write something on a par with SwaNMR but for PCs, and that we would be damned if we could not do a better job than our commercial competitors, those that shall not be named. Thus, out of anger, family ties and lack of lab space the research topic for Carlos’ Ph. D. thesis came to be chosen. So much for strategic planning.

I still remember the first day I plugged the FFT code from Numerical Recipes into the very first DOS version of MestReC and saw the first FT spectrum. After that, it was updated to Win3.1 (16 bits), and a few days later to Win95 where it became the first NMR software for Win32. Carlos Cobas, President, Mestrelab Research

Guess what? Despite the fact that our previous experience writing software was as close to absolute zero as you can get (you probably didn’t know that), we actually did a decent job of it. But that is another story. Actually, it is the beginning of the MestRe-C story. But what I intended today was to tell you only about how the seed of what eventually grew into MESTRELAB RESEARCH was planted. I hope that I delivered on that intention.

F. Javier Sardina