Design and Evaluation of a Deep Learning Approach to Quantify Immune Cell Infiltrates in Volumetric Autofluorescence Image Data
Immune cells infiltrating tissue are an important factor in inflammatory processes. An automated quantification of immune cells in a tissue volume would therefore be of great value, yet it poses a very challenging task.
In this thesis, a deep neural network is implemented to approach the problem of immune cell quantification in volumetric autofluorescence image data acquired from colon tissue with a multiphoton microscope. Deep neural networks can perform remarkably well in tasks such as image segmentation and classification, but they require large amounts of training data. When images are three-dimensional, labeling training data becomes considerably more difficult. As an intermediate step to address this issue, a 3D cell simulation framework was developed in this thesis, which can create unlimited annotated datasets resembling real 3D image stacks for training and testing neural networks. Furthermore, a deep convolutional network for automated cell classification in tissue volumes was developed based on the LinkNet architecture and evaluated using simulated datasets and real image stacks.
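The core idea of such a simulation framework can be illustrated in miniature. The sketch below is not the thesis's actual framework; all shapes, radii and noise parameters are assumptions chosen for illustration. It places spherical "cells" at random positions in a 3D array and returns both a noisy intensity stack and the matching voxel-wise ground-truth mask, which is exactly the pairing a segmentation network needs for training.

```python
import numpy as np

def simulate_cell_volume(shape=(32, 64, 64), n_cells=10, radius=3.0,
                         noise_sigma=0.05, seed=0):
    """Place spherical 'cells' at random positions in a 3D volume and
    return the noisy intensity stack plus a voxel-wise label mask."""
    rng = np.random.default_rng(seed)
    zz, yy, xx = np.indices(shape)
    volume = np.zeros(shape, dtype=float)
    labels = np.zeros(shape, dtype=np.uint8)
    for _ in range(n_cells):
        cz, cy, cx = (rng.uniform(0, s) for s in shape)
        dist2 = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        volume += np.exp(-dist2 / (2 * radius ** 2))  # soft Gaussian cell body
        labels[dist2 <= radius ** 2] = 1              # binary ground-truth mask
    volume += rng.normal(0, noise_sigma, shape)       # autofluorescence-like noise
    return volume, labels

stack, mask = simulate_cell_volume()
```

A real framework would additionally model cell shape variability, the microscope point spread function, and depth-dependent signal loss, but the pattern of generating image and annotation together is the same.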
Eshun, Marcellinus Pius
Comparison of different powder strategies for their usage in electrophotographic laser-based powder bed fusion of polymers
The generation of multi-material components by means of laser-based powder bed fusion of polymers has been predicted to offer combined functionalities. However, one crucial step within this process is the reliable and repeatable charging of the powders that are ultimately used for manufacturing the multi-material parts. Towards this goal, electrostatic powder charging methods such as mechanical powder treatment, corona charging and scorotron charging have been investigated. These investigations have shown that the poor powder charging associated with mechanical powder treatment can be compensated. However, the powder charge level and its homogeneity, which depend on the charging strategy and on material properties such as the particle size distribution (PSD) and relative permittivity, have not yet been investigated.
The work carried out in this thesis therefore aims to characterize and compare two charging strategies, corona charging and triboelectric charging, in terms of their charge level, process time, repeatability and particle transfer accuracy. For this purpose, an experimental procedure was developed. The method benefited from the ability to transport charged powder from an isolated triboelectric charging device for the relevant surface potential and powder development analyses. The results of the triboelectric charging mechanism showed that the achievable charge level agreed very well with the powder's blend homogeneity, which was correlated with the functionalization of the powder samples. In the best case, the highly functionalized powder samples showed slower potential decay, and maximum surface potential values of 883.59 ± 86.71 V and -1013.84 ± 61.39 V were obtained. Given the narrow variability achieved, it can be concluded that the triboelectric charging strategy can produce reliable and repeatable powder charging results.
By contrast, the achievable charge level under the corona charging strategy was observed to depend on the powder development system, the corona discharge process, the magnitude of the high-voltage supply and the material properties. While the powder development system and the corona discharge process generally led to a broader variability of the achievable maximum surface potential, at a corona supply voltage of 5.0 kV high surface potential values of 1049.21 ± 283.73 V and -1359.38 ± 201.50 V could be achieved for the sample with a low relative permittivity. Furthermore, an independent investigation using segregated particle size distributions showed that the PSD may not dominate charge build-up under the corona charging strategy, but it also revealed that potential loss occurs much faster in smaller particles than in larger ones. These observations suggest that a low relative permittivity and a narrow particle size distribution with a high proportion of larger particles are likely to favour high charge levels due to the associated slow charge decay.
As a proof of concept, powder samples were charged, developed onto a photoconductive plate and subsequently printed to verify the relationship between powder charging homogeneity and particle transfer accuracy. Under the triboelectric charging strategy, powder charging was achieved in just 15 s of process time with high area filling and development homogeneity, compared to the corona charging strategy, which required 24 s of process time and resulted in relatively low area filling and low development homogeneity.
Qualification of an Oscillating Optics System for DED Laser Processes
Laser hardening and cladding are material processing techniques that are both versatile and precise in their areas of application. Yet, laser machines are usually more costly than their conventional counterparts due to the rather expensive beam sources and precise optical systems. ERLAS has developed hybrid machines that can perform both processes. Nevertheless, when a different mirror or beam shape is required for the task, the change has to be executed manually.
A new processing optics is currently being developed that can hold up to four mirrors at once, enabling automated switching between processes without the need to pause the task. The mirrors can also oscillate along a rotational axis, allowing for even more processing flexibility through virtual beam shaping.
The aim of this thesis is to qualify a prototype of this hybrid machine by testing the influence of different parameters such as cooling, amplitude, frequency, laser power and others. Long-term trials are also utilized to test the durability and lifetime of the mechanical components.
The experiments have yielded positive results, proving the qualification of the oscillating optics system successful. Processing and manufacturing tests will be the subject of subsequent research and development projects.
Design and Evaluation of an Endomicroscopy Extension Module for Research Microscopes
Endomicroscopy is a technology that facilitates microscopic in vivo tissue imaging with endoscope objectives. In research applications, needle objectives made of gradient-index (GRIN) lenses with diameters below 2 mm are typically used. A prominent application example is colonoscopy of mice, which can be performed on sedated mice in a minimally invasive way. For these experiments, the endoscopic GRIN needle must be in a horizontal position so that it can be carefully inserted along the mouse colon.
In this work, an additional optical module was designed and built as an extension of research microscopes for endomicroscopy. Since the optical path of a microscope is oriented vertically at the sample, it is necessary to reflect the light beam at a right angle and to couple it between the microscope objective and the GRIN needle objective with a relay optical system. The imaging performance of the whole system was evaluated.
A relay optical system was designed with two converging lenses placed on both sides of a plane mirror. The focal points of the relay lenses were aligned with the focus of the microscope objective and the back focal plane of the endoscope lens. The evaluation showed that the relay system itself has a large spherical aberration, but when integrated with the GRIN lens, both the spherical and off-axis aberrations are limited by the small aperture and higher numerical aperture of the GRIN lens. It was therefore concluded that the extension module does not severely deteriorate the imaging performance of the entire endomicroscope system, given the limitations of the needle objective.
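For orientation, the transverse magnification of such a two-lens relay follows directly from the two focal lengths (the symbols $f_1$, $f_2$ and $M$ are introduced here for illustration; the abstract does not state the actual lens data):

```latex
% Two-lens relay: the image-side focal point of lens 1 coincides with the
% object-side focal point of lens 2; folding the path with the plane mirror
% does not change the magnification.
M = -\frac{f_2}{f_1}, \qquad M = -1 \quad \text{for } f_1 = f_2 .
```

With equal focal lengths the relay thus reimages the microscope's focal plane onto the GRIN lens's back focal plane at unit magnification (inverted), which is the standard choice when the relay should not alter the effective numerical aperture at the needle objective.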
Prediction of future research trends in Optics using Semantic Analysis and Artificial Neural Networks
The rapid growth in the number of publications in Optics over the years calls for the development of a robust methodology to store the necessary data and utilize it in the best possible way. Semantic graphs can store huge amounts of data, and the hidden relationships within it, in an efficient and easy-to-access format, thus making information management easier. Graph-based data analysis and tools powered by natural language processing have emerged rapidly in recent times. This thesis aims at utilizing such methodologies to pick topics in the field of Optics that highlight past and ongoing research. The aim is to interlink the research fields in Optics in an efficient manner in order to reveal new, surprising, and interesting research directions in this field that might not be visible otherwise. To this end, a semantic network is developed based on 35,717 papers published on arXiv under the category physics.optics, by analyzing historic patterns in the titles and abstracts of these papers. The extracted patterns are then fed to an artificial neural network to predict the chances that a pair of research topics not investigated together in the past will be investigated together in the future. After building and testing this model, it is deployed to predict personalized research topic combinations based on the interests of one specific scientist. As a test case, this idea has been implemented for a scientist at the Max Planck Institute for the Science of Light, Erlangen.
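The link-prediction task described above can be sketched on a toy scale. The snippet below is only an illustration: the five topics and their co-occurrence counts are invented, and the thesis's neural network is replaced here by the classic common-neighbours heuristic, which serves as a typical input feature for such a model.

```python
import numpy as np

# Toy co-occurrence graph over five hypothetical optics topics.
topics = ["metasurfaces", "fiber lasers", "quantum optics",
          "machine learning", "plasmonics"]
# adjacency[i, j] = number of papers in which topics i and j co-occur
adjacency = np.array([
    [0, 2, 0, 3, 4],
    [2, 0, 1, 2, 0],
    [0, 1, 0, 4, 1],
    [3, 2, 4, 0, 0],
    [4, 0, 1, 0, 0],
])

def common_neighbor_score(adj, i, j):
    """Score an unseen topic pair by the number of topics that
    co-occur with both -- a standard link-prediction feature."""
    return int(np.sum((adj[i] > 0) & (adj[j] > 0)))

# Rank all pairs that never co-occurred by their score.
unseen = [(i, j) for i in range(len(topics))
          for j in range(i + 1, len(topics)) if adjacency[i, j] == 0]
ranked = sorted(unseen, key=lambda p: -common_neighbor_score(adjacency, *p))
```

In a full system, features like this (computed on the 35,717-paper network) would form the training input of the neural network, with future co-occurrence as the prediction target.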
Optimization Potentials for the Deposition Process of Polymer Particles in the Context of Electrophotographic Powder Application for Laser-Based Powder Bed Fusion of Polymers
Electrophotographic technology has been successfully applied to transfer toner powders in typical printing applications for many years. Utilizing the electrophotographic principle in combination with additive manufacturing can generate multi-material 3D parts and reduce the consumption of powder. However, the electrostatic forces used to transfer powder do not allow the maximum coverage of the deposited powder layers to be achieved, because the electric field strength is not sufficient to overcome strong adhesive forces such as the van der Waals force while the powder particles are attached to the photoconductive plate. Therefore, the use of mechanical excitation vibrations to improve the coverage of the deposited powder layer in the context of electrophotographic applications for the laser-based powder bed fusion of polymers (PBF-LB/P) process is discussed. For this, a piezoelectric actuator control system was developed to apply mechanical vibrations to the photoconductive plate. Software was developed so that the vibration excitation of the photoconductive plate is triggered automatically during the powder deposition stage and so that the excitation parameters can be controlled remotely. The simulation results showed that the acceleration acting on the photoconductive plate depends strongly on the frequency of the vibrational impact. Vibration excitations also impart lateral acceleration to the plate, which increases false printing of the deposited powder layer. An experimental study of the excitation parameters (frequency, amplitude, shape, time duration) showed that the coverage of the deposited powder layer could be increased by 25 % with a specific set of parameters. In addition to the increased coverage, however, false printing of the deposited powder layer also increased.
Experimental research was carried out for two types of powders (PA12 and TPU), which proved the transferability of the vibration excitation method to different powders.
Defect detection on nanoimprint stamps with deep learning and a low-resolution microscope
Nanoimprint lithography is a mechanically based patterning technique in which a mold with patterns is imprinted into photoresist. The nanoimprint molds considered in this work have patterns 2-3 μm in diameter. During printing, misalignments and impurities in the mold cause defects in the photoresist. Such defects are undesired and need to be identified. A microscope with a high numerical aperture objective lens is often used to image the samples, though this leads to smaller fields of view compared to low numerical aperture lenses. Several unique fields of view are therefore required to image the entire sample. This process is time-consuming and can be prone to errors even when a mechanized scanner is used. Computational imaging techniques can help reduce the tradeoff between resolution and field of view, producing images with better quality. Algorithms are then used to look for defects in the enhanced images. The advent of deep learning in the past decade has led to algorithms that are much faster and more accurate than humans and other algorithms at image analysis tasks. This has led to the inclusion of deep learning in almost every field, including microscopy.
This thesis set out to perform segmentation of nanoimprint pillar images collected with a low-NA microscope, combining ideas from computational imaging with deep learning. To achieve this, images of the sample were collected and later annotated to create masks. The images and corresponding masks were then prepared as a dataset for training various U-Net networks. Results from training a network with just a single image were compared to results from training a network using a weighted sum of images. The results indicate that all networks find it easier to identify normal pillars in the sample, while different networks perform differently in identifying defects. Networks that use both on- and off-axis illumination images can identify defects that are missed by networks using only on-axis illumination.
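The "weighted sum of images" idea can be sketched as follows. This is a minimal illustration, not the thesis's actual preprocessing: the weight `w` and the min-max rescaling are assumptions, and real on- and off-axis acquisitions would replace the random arrays used here.

```python
import numpy as np

def fuse_illuminations(on_axis, off_axis, w=0.6):
    """Combine on- and off-axis illumination images into a single
    network input by a weighted sum, then rescale to [0, 1]."""
    fused = w * on_axis + (1.0 - w) * off_axis
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-8)

# Stand-in images; in practice these would be registered acquisitions
# of the same field of view under the two illumination modes.
on = np.random.rand(128, 128)
off = np.random.rand(128, 128)
x = fuse_illuminations(on, off)   # single-channel input for the U-Net
```

An alternative to summing is to stack the two images as separate input channels and let the network learn the weighting itself; the abstract's finding that both illuminations help suggests the off-axis image carries defect contrast the on-axis image lacks.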
Using Light for Bio-physical Analysis of Minimal Artificial Cells
Aqueous compartments encapsulated by a lipid monolayer or bilayer membrane act as the simplest form of model synthetic cell, and their production and modulation is considered the first step towards understanding biophysical phenomena of biological cells in a bottom-up approach. Microfluidic chips are among the most promising platforms for generating such model compartments reliably and in bulk. On the other hand, photoactivation of biomembranes through unsaturated lipid oxidation has gained considerable interest in the past few decades, especially in relation to its therapeutic applications in cancer therapy. The bottom-up approach to analyse, understand and control the dynamics of light-triggered activation using artificially constructed minimal model cell-like compartments, also known as lipid vesicles, is an emerging field of interest today. This master's thesis has two fundamental goals. The first is to establish methodologies and troubleshoot engineering problems for reliable and high-throughput production, modulation and characterization of simple lipid compartments using a droplet microfluidic chip set-up in the laboratory. The second goal is to explore light-triggered activation of lipid membranes by utilizing their fatty acid unsaturation in the presence of the photosensitizer chlorin e6. Regarding the first goal, this project includes building on the base protocol to generate lipid compartments, characterizing their high-throughput formation, and troubleshooting steps such as coating and testing different microfluidic designs. For the modulation of these compartments, the photosensitive hydrogel PEGDA was used in combination with controlled illumination from a digital micromirror device (DMD) to fabricate microtraps and support structures inside the microfluidic channel. This system was also characterized, and its potential to modulate and deform lipid compartments has been explored in this work.
Regarding the second goal, the experimental results show the possibility of controlling light-triggered lipid membrane deformation through controlled partial illumination of vesicles and through heterogeneity in the lipid membrane. To study controlled deformation in heterogeneous membranes, lipid vesicles with phase-separated membranes, containing lipids with different levels of unsaturation, were used to analyse the phenomenon of photo-oxidation-based membrane disruption by the photosensitizer.
Wide-Angle Diffraction in a Space Optical Instrument
In my mini thesis titled 'Boundary Diffraction Wave Formalism', I applied the stationary phase approximation to compute the far-field diffraction from an aperture. In this approximation, only the points on the aperture where the optical path length between source and observer is stationary contribute to the diffracted intensity. This approach has opened up the possibility of estimating the diffraction of light by ray tracing, by considering intermediate light sources at the stationary points on the aperture. Ray tracing simulations are performed using the new optical analysis software RayJack One by Hembach Photonik GmbH. In this master thesis, the boundary diffraction wave formalism, based on the Maggi-Rubinowicz representation of Kirchhoff's diffraction formula, is applied to a model of a mirror telescope set up in RayJack One, which mimics a space optical instrument such as the Hubble Space Telescope. The model shall include not only optical components, but also mechanical assembly structures such as stray light baffles to shade the optics from bright light sources and spiders to hold the secondary mirror.
When light from a bright source (typically the sun) outside the sun exclusion angle of the telescope illuminates the instrument, it can be diffracted at the baffle structures. Through a subsequent scattering process caused by contamination on the mirrors, the light diffracted from the vane edges of the baffles can reach the image plane. This stray light reduces the contrast and can thus make distant stars appear faint; it was therefore analyzed.
A simulation model for this process was set up within RayJack One to compute the irradiance distribution of the stray light in the image plane and to compare the result with the typical signal caused by a star of a given brightness. This work represents the first complete system for stray light analysis, including contributions from wide-angle diffraction, for a space optical instrument entirely within a single software environment.
Modeling of Thick Photoresist for Grayscale Lithography Application
Grayscale lithography uses established processes from semiconductor technology and therefore provides an ideal starting point for wafer-level optics and large-area structures. However, the development of product-specific processes for grayscale lithography is extremely demanding, costly and time-consuming. This Master's thesis aims to develop an accurate and robust model with a main emphasis on the thick-photoresist effect caused by residual solvent inside the photoresist after spin coating and prebake. To fabricate a certain target layout, different patterns are first simulated; the model is then calibrated to predict the experimental profile for a given dose and photoresist height. Once the model is calibrated with experimental data, it should predict the dose distribution (and process conditions) required to fabricate a certain target layout. The final goal is to set up a neural network and to use the calibrated model to generate data, namely free-form profiles, for deep learning applications. This makes the realization of numerous innovative products using grayscale lithography more flexible and efficient.
Using Deep Reinforcement Learning in Optimum Order Execution for Large, Mid, Small Caps and ETFs in Multiple Market Conditions
Optimum Order Execution is a famous problem in Finance. However, despite years of research, the problem does not have a satisfactory solution. Different strategies have been tested on this popular topic, yet to this day the majority of traders use statistical methods for Optimum Order Execution. The biggest issue with these statistical methods is that they can never truly understand the market, especially an unusual or stressful market.
Recently, with the increase in computational power, the popularity of Machine Learning models has increased rapidly. With Google Brain in 2012 and Facebook’s DeepFace in 2014, the application of Machine Learning (especially Deep Learning) in industries started growing fast.
Reinforcement Learning is probably the most complex form of Machine Learning, yet it has the greatest potential in practice, especially in industry. When Google's AlphaGo used Reinforcement Learning to beat human players at the board game Go, the age of Deep Reinforcement Learning began. However, many still think that Deep Reinforcement Learning will always be limited to a handful of computer games. The true potential of Deep Reinforcement Learning is heavily misjudged here.
Theoretically, Deep Reinforcement Learning can achieve superhuman performance in any sequential decision-making process, from a game of Chess to Optimum Order Execution in Finance.
In this work, Deep Reinforcement Learning is used for Optimum Order Execution. We used datasets from 6 Large Caps, 8 Mid Caps, 8 Small Caps, and 2 ETFs for this study. The trained models are tested in 3 different market conditions: the Covid period, a normal market, and Inflation+War; the first and third are stressful market conditions. We compared our results on returns and market risks with the TWAP and VWAP strategies, popular statistical strategies used by the majority of traders, from individual traders to large-scale institutions.
Our results show noteworthy improvements over TWAP and VWAP strategies in all the experiments conducted during this study.
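The TWAP and VWAP benchmarks used for comparison have standard definitions: TWAP weights every time slice equally, while VWAP weights each price by its traded volume. The price and volume series below are invented for illustration only.

```python
import numpy as np

def twap(prices):
    """Time-weighted average price: equal weight per time slice."""
    return float(np.mean(prices))

def vwap(prices, volumes):
    """Volume-weighted average price: each price weighted by its volume."""
    prices = np.asarray(prices, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    return float(np.sum(prices * volumes) / np.sum(volumes))

prices = [100.0, 101.0, 103.0, 102.0]   # hypothetical slice prices
volumes = [500, 1500, 1000, 1000]       # hypothetical slice volumes
print(twap(prices))            # 101.5
print(vwap(prices, volumes))   # 101.625, tilted toward the heavy slices
```

An execution agent (statistical or learned) is judged by how its average fill price compares with these benchmarks over the same interval, which is why they serve as the baselines in this study.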