Thesis 2025
Conclusions: Sprays play a vital role in many industrial and scientific applications, such as fuel injection in combustion processes, pharmaceutical delivery, and so on. In order to monitor, optimize, and control such processes, accurate estimation of the droplet size distribution is crucial.
Existing algorithms for estimating droplet size distributions rely on parameterized functions, for example to detect local maxima of Mie scattering signals. This can lead to biased and inaccurate results.
In this study, we explored the potential of different neural network models for estimating droplet size distributions. The results show that deep neural networks can estimate droplet size distributions accurately. In addition to a significant improvement in estimation accuracy, such an approach does not rely on input parameters that a human operator has to choose. A pre-trained neural network can therefore play a significant role in spray characterization, which is crucial for monitoring, optimizing, and controlling numerous industrial and scientific processes.
To train the deep neural networks, two different training datasets were extracted from the original WALS images. Training dataset A consists of 1D scattering signals taken from the left and right mirror sections of the WALS images. Training dataset B contains cropped patches of the WALS images. Both datasets were used to train three deep neural network models. The results showed that training dataset B (cropped images) contains richer information than training dataset A (1D scattering signals) and therefore trains the models better. Although this result was only shown for the U-Net model, dataset B also improved the performance of the ResNet and the custom CNN.
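The two datasets can be thought of as two views of the same WALS image. The sketch below illustrates the difference, assuming a WALS image is a 2D intensity array; the region coordinates and the row-averaging used for the 1D signal are illustrative assumptions, not the thesis's actual extraction procedure:

```python
import random

# A toy "WALS image": 64 x 64 intensity values (real images are larger).
H, W = 64, 64
image = [[random.random() for _ in range(W)] for _ in range(H)]

def dataset_a_signal(img, col_range):
    """Dataset A style: collapse one mirror section to a 1D scattering
    signal by averaging intensities along each row (hypothetical)."""
    c0, c1 = col_range
    return [sum(row[c0:c1]) / (c1 - c0) for row in img]

def dataset_b_crop(img, row_range, col_range):
    """Dataset B style: keep the mirror section as a cropped 2D patch,
    preserving the spatial structure that the 1D signal discards."""
    r0, r1 = row_range
    c0, c1 = col_range
    return [row[c0:c1] for row in img[r0:r1]]

left_signal = dataset_a_signal(image, (0, 16))        # 1D, length H
left_patch  = dataset_b_crop(image, (0, H), (0, 16))  # 2D, H x 16
```

The cropped patch retains the full 2D intensity pattern of the mirror section, which is consistent with the finding that dataset B carries richer information than the averaged 1D signals.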
To address the challenge of estimating droplet size, three different CNN models, ResNet, U-Net, and a custom CNN, were trained and compared on the WALS data. The learning curves of the ResNet model show good performance: the training and validation curves decrease to a point of stability with a minimal gap between them. By contrast, the custom CNN could not fit the training dataset well, indicating that its capacity is too low to capture all the trends in the WALS data. The U-Net model learned the training dataset well but ended up with a large generalization error on the validation dataset: its minimum validation MSE loss is more than 8 times higher than that of the ResNet. This indicates overfitting, where the model performs well on the training dataset but fails to predict on new data. The improved performance of the ResNet comes from its bottleneck design of residual blocks, which increases network performance, and from its identity (shortcut) connections, which protect the network from vanishing gradients.