Scaling up to industrial production can be difficult, time-consuming, costly, and labor-intensive for cultivated meat companies. By developing a robust, data-rich baseline process at smaller scales, companies can use advanced modeling, such as Ark’s digital twin technology, to significantly de-risk and accelerate scale-up.
In this piece, we argue that companies should implement a robust data acquisition process by developing (1) a baseline process in a relevant context, with (2) a broad, frequent dataset that is (3) contextualized and easy to use. A strong data acquisition process ensures a robust and growing pool of data that can inform optimization decisions.
Traditional process development is complex and dynamic, making optimization difficult to achieve. Fortunately, modeling tools like computational fluid dynamics and digital twins can ease the burden of empirical experimentation. These powerful tools are well established in the biopharmaceutical industry and have saved companies months of effort and millions of dollars.
With a robust data set, advanced computational methods can discover relationships that would otherwise be missed given the dynamic and complex nature of bioprocesses. These relationships can then be leveraged to quickly optimize processes, including in real time, so that cultivated meat companies can make more meat at a lower cost.
Industrial scale differs from bench scale, which creates inherent difficulty in scale-up, but these differences can be vastly reduced by performing bench-scale experiments in dynamic, 3D systems.
Using 3D culture as early as possible is essential to gaining robust, representative experimental data. Cells behave differently in 2D (e.g., monolayer) culture compared to 3D (e.g., suspension) culture. While static, monolayer flask experiments may initially be easiest to conduct, relying on data from these experiments can be punishing in the long term. Research has shown discrepancies between culture formats with regard to growth rates, responses to stimuli, protein expression levels, and numerous other factors (source). Moreover, 3D culture recapitulates the natural structure of meat tissues. To facilitate well-plate experiments that better emulate a bioreactor environment, cells need to be adapted to suspension, aggregate, or microcarrier culture early in the R&D process.
In addition to being more representative of industrial processes, moving to larger-capacity culture systems also enables enhanced data collection across an expanded set of parameters.
A broader and more frequent data set can elucidate relationships between parameters that might previously have gone unnoticed. It can also reduce the number of future experiments required, since new questions can often be answered by previous experiments that incidentally captured useful data.
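As an illustration of how such a dataset can be mined, here is a minimal Python sketch that scans logged process parameters for strong pairwise relationships worth following up on. The file and column names are hypothetical, not tied to any specific historian or instrument:

```python
import pandas as pd

# Hypothetical export from a process historian: one row per timestamp.
df = pd.read_csv("bioreactor_run_042.csv", parse_dates=["timestamp"])

params = ["viable_cell_density", "glucose", "lactate", "ammonia",
          "dissolved_oxygen", "ph", "agitation_rpm"]

# Spearman correlation tolerates the nonlinear but monotonic trends
# common in batch culture better than Pearson correlation does.
corr = df[params].corr(method="spearman")

# Surface strong, possibly unexpected pairings for follow-up analysis.
# (Each pair appears twice, once in each order.)
pairs = corr.abs().stack()
pairs = pairs[(pairs > 0.8) & (pairs < 1.0)]
print(pairs.sort_values(ascending=False))
```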
The best way to capture more frequent and granular data is to replace offline sampling with online monitoring. For example, viable cell density and aggregation dynamics can be captured with a bio-capacitance probe instead of a cell counter. Select metabolites can be monitored with continuous analyzer probes, or a more comprehensive culture composition profile can be obtained via Raman spectroscopy. Eliminating offline sampling reduces contamination risk, increases precision and frequency, and decreases human error, all while freeing up scientist bandwidth.
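As a concrete sketch of the idea: a bio-capacitance probe reports permittivity, which scales roughly linearly with viable biovolume, so a handful of offline counts can calibrate the online signal into a continuous viable cell density estimate. The calibration points and variable names below are hypothetical:

```python
import numpy as np

# Offline reference points: probe permittivity (pF/cm) recorded at the
# moments offline samples were drawn, paired with cell counts (cells/mL).
permittivity_ref = np.array([2.1, 5.3, 9.8, 14.2])
vcd_ref = np.array([0.5e6, 1.4e6, 2.6e6, 3.9e6])

# Linear calibration: VCD ~ slope * permittivity + intercept.
slope, intercept = np.polyfit(permittivity_ref, vcd_ref, deg=1)

def vcd_from_permittivity(permittivity):
    """Estimate viable cell density from the online capacitance signal."""
    return slope * permittivity + intercept

# Apply to the continuous online signal (e.g., one reading per minute),
# turning a few offline counts into an uninterrupted VCD trace.
online_signal = np.array([2.0, 2.4, 3.1, 4.0, 5.2])
print(vcd_from_permittivity(online_signal))
```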
Ideally, parameter measurements are not only frequent but also broad. Collecting a wide array of data is the best way to ensure that optimization and scale-up decisions do not exclude critical constraining variables. For example, suppose cell line candidates are screened with growth rate as the sole performance indicator while metabolic flux data are overlooked. A great deal of effort may be spent further developing the cell line deemed “optimal” in initial experiments, only to determine later that metabolic efficiency, rather than doubling time, is the more significant factor in achieving the target unit economics.
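To make the pitfall concrete, here is a minimal sketch that scores cell line candidates on two indicators at once: specific growth rate from a log-linear fit, and average specific glucose consumption. All data and names are illustrative:

```python
import numpy as np

# Hypothetical screening data: t in hours, X in cells/mL, glc in g/L.
candidates = {
    "line_A": {"t": np.array([0.0, 24, 48, 72]),
               "X": np.array([0.5e6, 1.1e6, 2.4e6, 5.0e6]),
               "glc": np.array([4.5, 3.9, 2.8, 1.2])},
    "line_B": {"t": np.array([0.0, 24, 48, 72]),
               "X": np.array([0.5e6, 1.0e6, 1.9e6, 3.6e6]),
               "glc": np.array([4.5, 4.2, 3.7, 3.0])},
}

for name, d in candidates.items():
    # Specific growth rate mu (1/h) from the log-linear fit ln(X) = mu*t + c.
    mu = np.polyfit(d["t"], np.log(d["X"]), deg=1)[0]

    # Average specific glucose consumption q_glc (g/cell/h): glucose
    # consumed divided by the time integral of X (trapezoid rule),
    # converting cells/mL to cells/L with the factor of 1e3.
    dt = np.diff(d["t"])
    cell_hours = np.sum((d["X"][1:] + d["X"][:-1]) / 2 * dt) * 1e3
    q_glc = (d["glc"][0] - d["glc"][-1]) / cell_hours

    print(f"{name}: mu = {mu:.4f} 1/h, q_glc = {q_glc:.2e} g/cell/h")
```

Ranking candidates jointly on both metrics, rather than on growth rate alone, guards against exactly the trap described above.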
It is also beneficial to measure parameters that can become limiting at larger scales, even though they are unlikely to be an issue at small scales. For example, oxygen transfer and CO2 stripping are rarely limiting at bench scale but can become major constraints at industrial scale.
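A quick feasibility check shows why: at steady state, oxygen supply must satisfy OTR = kLa × (C* − C) ≥ OUR, so the required mass transfer coefficient grows linearly with cell density. All numbers below are illustrative assumptions:

```python
# Back-of-the-envelope estimate of the oxygen mass transfer coefficient
# (kLa) a vessel must deliver at a target cell density. Illustrative values.

q_o2 = 3.0e-13       # assumed specific O2 uptake, mol O2 per cell per hour
x_target = 2.0e7     # target viable cell density, cells/mL
c_sat = 0.21e-3      # dissolved O2 saturation with air, mol/L (~0.21 mM)
c_min = 0.06e-3      # minimum acceptable DO, mol/L (~30% of saturation)

# Culture-wide oxygen uptake rate, mol/L/h (cells/mL -> cells/L via 1e3).
our = q_o2 * x_target * 1e3

# Steady state requires OTR = kLa * (C* - C) >= OUR.
kla_required = our / (c_sat - c_min)  # 1/h

print(f"OUR = {our:.1e} mol/L/h -> required kLa ~ {kla_required:.0f} 1/h")
```

Running the same numbers at a tenfold lower bench-scale density gives a tenfold lower kLa requirement, which is why the constraint often goes unnoticed until scale-up.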
Clean and contextualized data sets enable quicker and more robust learning, maximizing the value of experiments. Ways to maximize the utility of historical data include:
- Organizing data from every experiment in a single, searchable location
- Standardizing formats and units across instruments and experiments
- Contextualizing measurements with the metadata needed to interpret them later (e.g., cell line, vessel, media, setpoints)
These recommendations help ensure that the data collected from all experiments are properly organized, formatted, and contextualized, supporting data analytics and computational modeling.
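As one sketch of what contextualization can look like in practice, each measurement can travel with the metadata needed to interpret and reuse it later. The field names and values below are hypothetical:

```python
import pandas as pd

# One row per measurement, in a long ("tidy") format, with the context
# (run, vessel, cell line, media lot, method) stored alongside the value.
records = pd.DataFrame([
    {"timestamp": "2024-05-01T08:00", "run_id": "R-042", "vessel": "STR-2L",
     "cell_line": "bovine_sat_v3", "media_lot": "M-118",
     "parameter": "glucose", "value": 4.2, "unit": "g/L",
     "method": "offline_analyzer"},
    {"timestamp": "2024-05-01T08:00", "run_id": "R-042", "vessel": "STR-2L",
     "cell_line": "bovine_sat_v3", "media_lot": "M-118",
     "parameter": "viable_cell_density", "value": 1.3e6, "unit": "cells/mL",
     "method": "capacitance_probe"},
])

# Because context travels with the data, later questions such as
# "how did media lot M-118 behave across all runs?" become simple queries.
print(records[records["media_lot"] == "M-118"])
```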
Clean and contextualized data can be used for scale-up decisions. For example, derivative parameters (e.g., mass transfer coefficients, oxygen uptake rate, carbon dioxide evolution rate, specific glucose consumption rate, specific lactate production rate, specific ammonia production rate) can be calculated from the combined online and offline data set. These derivative parameters are more direct representations of cell-, process-, or scale-specific characteristics and can be used to build cell culture models that simulate process performance under different operating conditions or at different scales. A robust data set can fundamentally change process optimization from an exercise guided by tribal knowledge and trial and error into a data-driven approach in which decisions are based on quantitative analysis and predictive modeling, ultimately enabling a deeper understanding of the process and increasing confidence in scale-up.
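As one minimal sketch of deriving such parameters, oxygen uptake rate (OUR) and carbon dioxide evolution rate (CER) can be estimated from a simplified steady-state off-gas balance. A production implementation would also correct for humidity, pressure, and inert gas balancing; all numbers here are illustrative:

```python
# Simplified steady-state off-gas balance for OUR and CER. Illustrative only.

V_liquid = 200.0     # working volume, L
F_gas = 600.0        # dry gas flow through the headspace, L/h
V_molar = 24.0       # molar volume of gas at ~20 C and 1 atm, L/mol

y_o2_in, y_o2_out = 0.2095, 0.2030    # O2 mole fraction, inlet / off-gas
y_co2_in, y_co2_out = 0.0004, 0.0062  # CO2 mole fraction, inlet / off-gas

# Oxygen uptake rate and carbon dioxide evolution rate, mol/L/h.
our = F_gas * (y_o2_in - y_o2_out) / (V_molar * V_liquid)
cer = F_gas * (y_co2_out - y_co2_in) / (V_molar * V_liquid)

# The respiratory quotient can flag metabolic shifts (e.g., lactate
# production or consumption) in real time.
rq = cer / our
print(f"OUR = {our:.2e} mol/L/h, CER = {cer:.2e} mol/L/h, RQ = {rq:.2f}")
```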
Enhancing the scope and quality of data acquisition can accelerate the industrialization of cultivated meat production. The robustness of datasets can be improved by transitioning experimental setups to dynamic, 3D culture systems that more closely resemble production-scale systems, as well as by increasing the scope and frequency of data acquisition through integrated online monitoring technologies.
Robust datasets can then be coupled with computational scale-up tools (e.g., computational fluid dynamics, digital twin technology) to move beyond empirical knowledge and rules of thumb by (A) identifying and correlating critical process parameters with key performance indicators, (B) determining the best approaches to monitor these process parameters, and (C) implementing well-engineered control strategies to drive productivity.
At Ark, we’ve developed computational models that let companies scale up their bioprocess in months instead of years while optimizing it to ensure their bioreactors produce more for less. These models simulate dynamic interactions across mass balance, gas-liquid mass transfer, ionic balance, and metabolic models.