Automatic Structure Verification (ASV) refers to the use of analytical equipment and software to confirm, fully automatically and without user interaction, the proposed identity of a chemical entity or group of chemical entities.
Automatic Structure Integrity Confirmation (ASIC) combines ASV with the use of analytical equipment and software to confirm, fully automatically and without user interaction, the purity or concentration of a chemical entity or group of chemical entities. ASIC can therefore be defined as ASV plus purity confirmation, since structure integrity can be understood as a combination of identity and purity.
The last few years have seen increased interest in this topic, for a number of reasons we will examine later in this publication, and at Mestrelab we have witnessed this interest through our interactions with customers and the wider community, and through our participation in conferences. SMASH 2007 saw the first talk on ASV at an NMR conference, given by Dr. Stan Sykora, and this was followed by a very well attended workshop on the topic at SMASH 2011 in Chamonix, which is planned to be repeated at SMASH 2012 in Providence. Talks on the subject at ENC, such as the one given by Dr. Gonzalo Hernandez in Miami in the last few weeks, are further indications of the industry's expectation of results in this area.
I have been presenting on ASV / ASIC continuously for the last couple of years, as Mestrelab has sustained an effort to develop a series of algorithms, combined in a software platform, designed to achieve the aims stated in the definitions above. Whilst this was already something of a hot topic when we started the project 4-5 years ago, nowadays it is definitely one of the most discussed aspects of what we do.
Structure confirmation is not new, of course. Chemists and analytical chemists have been using analytical data (NMR, LCMS, IR, etc.) for the purpose of confirming structure identities for decades. What is newer is the surge in interest in automated solutions for this purpose.
Of course, it is difficult to comprehensively outline the reasons for such growing interest in automated solutions, as there are many perspectives on this, perhaps as many as there are users interested in such systems. However, a number of common threads underlie this growing interest:
The efforts of hardware manufacturers to improve the efficiency of the analytical equipment they offer and to provide automated solutions for its use have resulted in the ability of the average laboratory to generate much larger volumes of analytical data. These large volumes of data need to be reviewed and analysed, but the speed of review and analysis has not grown at the same rate as the speed of data acquisition.
From an analytical data perspective, the approach deployed by most organizations has changed significantly in the last couple of decades. Analytical data review and analysis for structure identity and purity confirmation used to be carried out by centralized analytical laboratories, staffed by experts (analytical chemists) capable of reviewing and evaluating analytical data with speed and reliability. However, improvements and simplification of analytical equipment interfaces, combined with automation capabilities, have resulted in the advent and ubiquitous spread of the open access analytical laboratory (a separate topic in this publication), where synthetic, organic, medicinal and bio chemists typically walk up with their own samples, set up the experiments and are expected to review the analytical results. Since data analysis is not these users' area of expertise, it becomes important to support their efforts with software, to prevent mistakes and avoid large time investments in the analysis.
The advent of open access laboratories, combined with the well documented general attrition in the pharmaceutical and biotechnology industries, has resulted in significant headcount reductions in analytical departments and groups, and in a push for higher productivity from every single chemist engaged in this field.
Outsourcing of chemical research and synthesis creates a need for outsourcing customers to quality-control large volumes of data on compounds generated externally. This poses significant challenges, due both to the often large volumes of samples received at one time and to the potential lack of detailed information on the synthesis processes which produced those samples. It places further demands on already overstretched analytical departments.
A downward trend in recent years in the discovery and development of new compounds (NCEs, or New Chemical Entities), together with the much higher cost per NCE, is at present a widespread problem in the pharmaceutical industry (see Fig 1), for reasons not discussed here. It results in further pressure to limit resources and increase output in every function of discovery and development organizations, including compound analytics.
For these reasons, and possibly some others, analytical data users, whether analytical chemists or open access users, need to analyze more data with fewer resources in less time. Automation of the analysis is an obvious solution to this challenge. Hardware and software are being developed to enable a fully automatic workflow in which experiments can be carried out on samples and the results of the experiments can be evaluated to establish the correctness of the expected identity of the sample, and also its purity or concentration.
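To make the shape of such a workflow concrete, the following is a deliberately minimal sketch in Python of an ASIC-style decision: an identity check (ASV) followed by a purity check, combined into a single pass/fail verdict. Every function name, the shift-matching rule, and the thresholds here are hypothetical illustrations, not the actual algorithms implemented in any Mestrelab product.

```python
# Hypothetical sketch of an ASV/ASIC decision pipeline.
# The matching rule and thresholds are illustrative assumptions only.

def verify_identity(predicted_shifts, observed_shifts, tolerance=0.2):
    """Toy ASV check: every predicted 1H chemical shift (ppm) must have
    an observed peak within `tolerance` ppm."""
    return all(
        any(abs(p - o) <= tolerance for o in observed_shifts)
        for p in predicted_shifts
    )

def estimate_purity(compound_area, total_area):
    """Toy purity estimate from integrated peak areas (e.g. LC-UV)."""
    return compound_area / total_area

def asic_verdict(predicted_shifts, observed_shifts,
                 compound_area, total_area, purity_threshold=0.95):
    """ASIC = ASV (identity) plus a purity check, as defined above."""
    identity_ok = verify_identity(predicted_shifts, observed_shifts)
    purity = estimate_purity(compound_area, total_area)
    return identity_ok and purity >= purity_threshold

# Example: an ethanol-like shift pattern, 97% pure by peak area.
result = asic_verdict([1.2, 3.7, 2.6], [1.18, 3.69, 2.55, 7.26],
                      compound_area=97.0, total_area=100.0)
print(result)  # True: identity matches and purity exceeds threshold
```

Real systems are far more elaborate (multiplet analysis, multi-technique scoring, confidence estimates rather than booleans), but the two-stage structure, identity verification followed by purity evaluation, is the point being illustrated.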
In forthcoming articles, we will discuss automatic structure verification and integrity confirmation in more detail: their nature, their challenges, and the value the community can expect from the current state of the art in this area.