Transferring optimal parameters from a smaller reactor to a larger one. Different reactor sizes constitute different optimisation problems...
How can we warm-start scale-up reactor optimisation?
Cell biology behaves approximately the same across similar cell lines...
How can we get the most out of our cheap animal-cell data for expensive human-cell experiments?
Transferring a reactor from a site in the UK to one in Japan. The environment will change...
How can we reliably re-use the UK data in our Japan plant?
Re-using historical data from experiments that used additives from a vendor who has since been replaced...
How can we robustly adjust for new raw input materials?
TLBO is a method that allows users to learn from historical data from a different but relevant source when solving an optimisation problem.
For example, leading pharma companies such as Merck Germany use TLBO extensively to efficiently optimise chemical reactions and different cell lines.
Using TLBO enables the user to have a 'warm start' instead of a 'cold start' (starting from scratch). Thus, TLBO reduces R&D costs by shortening the optimisation time.
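To make the warm start concrete, here is a minimal Python sketch: the surrogate model is fitted on the pooled historical and new data before the next experiment is proposed. The reactor settings, yields, and Gaussian-process surrogate below are hypothetical placeholders chosen for illustration; this is not the TLBO implementation itself.

```python
# Minimal warm-start sketch: the surrogate is fitted on pooled historical + new data
# before proposing the next experiment. Illustrative simplification, not TLBO itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical data: X are process settings (e.g. temperature, feed rate), y is yield.
# "hist" comes from the related source (small reactor), "target" from the new problem.
X_hist, y_hist = rng.uniform(0, 1, (30, 2)), rng.normal(0.6, 0.1, 30)
X_target, y_target = rng.uniform(0, 1, (3, 2)), rng.normal(0.65, 0.1, 3)

# Warm start: pool both datasets so the surrogate starts with prior knowledge.
X = np.vstack([X_hist, X_target])
y = np.concatenate([y_hist, y_target])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Propose the next experiment with a simple upper-confidence-bound acquisition.
candidates = rng.uniform(0, 1, (500, 2))
mu, sigma = gp.predict(candidates, return_std=True)
next_experiment = candidates[np.argmax(mu + 2.0 * sigma)]
print("Suggested next settings:", next_experiment)
```

With the historical points in place, the model already has a rough map of the response surface before the first target experiment is run; a cold start would begin from an uninformed prior.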
As shown in Merck Germany's graph below, the more historical data you have, the faster you converge to the optimum.
However, as great as the benefits of TLBO are, in practice many factors affect its effectiveness, most notably the quality and reliability of the historical data.
How can you protect your experiments from unreliable historical data?
Input and output measurements are exposed to drift, calibration failures and misaligned standards. These issues often stay unnoticed for a long time.
Humans are imperfect, and mistakes are a daily occurrence. Errors in manual data entry or instrument use regularly lead to faulty data that goes undetected for months or years.
rTLBO is an advanced version of TLBO that automatically ignores unreliable data when learning from relevant but not identical historical data.
If the historical data is faulty, it can mislead the machine learning algorithm, leading to longer optimisation times and wasted resources (shown in Figure 1(a)).
rTLBO, by contrast, allows the user to learn robustly from the mixed pool of historical data by automatically identifying and using only the reliable points (shown in Figure 1(b)).
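The sketch below illustrates one simple way to approximate this behaviour: historical points are scored for consistency with a surrogate fitted on the trusted target data, and only the plausible ones are kept. The data, the standardised-residual cut-off, and the Gaussian-process surrogate are illustrative assumptions; the actual rTLBO method may work quite differently from this heuristic.

```python
# Toy illustration of robust transfer: score each historical point for consistency
# with the target data and keep only the plausible ones. Not the rTLBO algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical mixed-quality historical data: most points are fine,
# but a few were recorded with a miscalibrated sensor.
X_hist = rng.uniform(0, 1, (40, 2))
y_hist = 0.5 + 0.3 * X_hist.sum(axis=1) + rng.normal(0, 0.02, 40)
y_hist[:8] += 0.5                      # corrupted measurements

X_target = rng.uniform(0, 1, (6, 2))   # a handful of trusted new experiments
y_target = 0.5 + 0.3 * X_target.sum(axis=1) + rng.normal(0, 0.02, 6)

# Fit a surrogate on the trusted target data only...
kernel = Matern(nu=2.5) + WhiteKernel(noise_level=1e-3)
gp_target = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp_target.fit(X_target, y_target)

# ...then keep only historical points the target model finds plausible.
mu, sigma = gp_target.predict(X_hist, return_std=True)
z = np.abs(y_hist - mu) / np.maximum(sigma, 1e-9)
reliable = z < 3.0                     # standardised-residual cut-off
print(f"Kept {reliable.sum()} of {len(y_hist)} historical points")

# The final surrogate learns from the target data plus the filtered history.
gp_robust = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp_robust.fit(np.vstack([X_target, X_hist[reliable]]),
              np.concatenate([y_target, y_hist[reliable]]))
```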
Data quality and reliability are a challenge for every experimentation-based organisation. In reality, many of the factors that affect data reliability, such as human or technical failure, can go undetected for months or years.
rTLBO allows users to leverage the power of historical data while being protected and insured against the harm of unreliable data.
Any optimisation problem that involves a single active target site and some historical data obtained from a relevant but not identical source.
When the user is unsure about the reliability and quality of the historical data (or its relevance to the target problem).
- Data Science Lead at Top 10 Pharma Company
Interested in rTLBO IP?
We project that this method has the potential to drastically reduce experimental costs (depending on the level of unreliability in the historical data). Its commercial value depends on the expected number of faulty reactors and the severity of unreliable data measured from those reactors.
Conservatively, we estimate the average improvement from using rTLBO instead of TLBO to be between 5% and 25%. Ultimately, the business value of rTLBO will depend on each client's individual circumstances.
We don't believe in making blanket statements, so to better understand rTLBO's value for your operations, please use the commercial value calculator below.
Figure 1: Robust TLBO as an insurance against unreliable data, significantly mitigating losses due to past and future faulty experiments.
(a) shows the ideal but unrealistic scenario: an optimisation process with reliable data. In this instance, both TLBO (orange) and robust-TLBO (blue) eventually converge to the true optimum (green);
(b) shows the non-ideal but likely scenario: an optimisation process with unreliable data. In this instance, robust-TLBO performs almost 100% better than TLBO, as it automatically learns to rely only on the reliable data.