Recording date:
Duration:
Session:
Speaker:
Co-Authors:
Abstract:
Keywords: Process control, data-driven models, chemometrics, soft sensors.
Initiatives such as Process Analytical Technology, Quality by Design and Continuous Integrated Biomanufacturing have been established to ensure the quality and safety of biopharmaceuticals. Together with further technology development, these initiatives will improve process automation, enable real-time release and ultimately yield highly controllable and robust biomanufacturing systems. One step towards this vision is statistical models that utilize data streams from multiple physical measurement devices to establish soft sensors for predictive chemometrics and hybrid modeling (Dürauer et al., 2023). To enable model predictive control, real-time information on the critical quality attributes, i.e. quantity, purity and potency, has to be related to critical process parameters. Compared to traditional control strategies, real-time analytics enables control that is more adaptive and precise, owing to the additional information on the current process state.
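To illustrate the soft-sensor idea in its simplest form (this is an illustrative sketch with synthetic data, not the authors' pipeline), a data-driven soft sensor maps online measurement signals to a critical quality attribute that cannot itself be measured in real time. The signal names (UV, pH, conductivity) and the linear model below are assumptions chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "online" signals for 200 process time points (assumed data).
# Columns stand in for UV absorbance, pH and conductivity.
X = rng.normal(size=(200, 3))

# Hypothetical ground-truth CQA (e.g. product purity): depends mainly
# on the UV and conductivity channels, plus measurement noise.
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.05, size=200)

# Fit a linear soft sensor y ≈ X @ w by least squares; in practice a
# chemometric or machine-learning model would take this role.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w.round(2))
```

The recovered weights approximate the generating coefficients, i.e. the fitted sensor attributes the CQA mostly to the informative channels while the uninformative pH channel receives a weight near zero.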
One of the goals of process modelling is better process understanding through interpretable models. In upstream processing, it was shown that random forests are suitable machine learning tools for generating soft sensors that readily identify the most informative online variables (Melcher et al., 2015). Real-time monitoring in downstream processing was successfully established in a multiple-sensor approach with extensive variable selection for interpretable statistical models (Walch et al., 2019). Recently, we could show that with deep learning methods such as convolutional neural networks, in-depth knowledge of the process data, manual variable selection and data preprocessing are not necessarily required to generate highly accurate models. We observed that the models generally exhibited dependencies on correlations that agreed with first-principles knowledge, bolstering confidence in model reliability (Medl et al., 2024). Finally, explainability methods such as the proposed permutation/occlusion feature importance for interpreting the input-output mappings of data-driven models are suitable instruments for building knowledge-based trust in machine learning models.