Abstract
High image quality is crucial for cell experiments in space, where remote monitoring is required to track the progress and direction of the experiments. However, space limitations and environmental factors impose strong performance constraints on the imaging equipment, which directly degrades imaging quality and the observation of cultivated targets. Moreover, ground-based experimental analysis requires tasks such as feature extraction and cell counting, and uneven illumination can seriously hinder this computer processing. Therefore, a method called STAR-ADF is proposed. Experimental results show that the proposed method effectively removes noise, equalizes illumination, and increases the enhancement evaluation index by 12.5% compared with the original images, demonstrating a degree of robustness.
Numerous cell experiments conducted in space aim to study the rhythms of cell growth and differentiation, the biological effects of the space environment, and the impact of microgravity on cellular tissue.
In 2022, Shanshan He et al. demonstrated high-plex imaging of RNA and proteins at subcellular resolution in fixed tissue by spatial molecular imaging.
Imaging devices and detectors are subject to the unique conditions of the space environment, including strong radiation, vacuum, and temperature fluctuations, which differ markedly from conditions on Earth. These conditions can introduce intricate noise patterns and uneven illumination in the grayscale output, diminishing the quality and accuracy of the images. Such degradation has far-reaching effects on scientific observation and analysis, so it is imperative to employ suitable techniques to enhance the quality of the detector output data.
For the two image-processing tasks of highlight removal and denoising, state-of-the-art algorithms fall mainly into traditional methods and machine learning-based methods. Soleimani et al. proposed a stem-cell image-processing method for confluence estimation.
Based on the Retinex theory, we present a method called STAR-ADF for enhancing microscopic images obtained from space life science experiments. The method preserves image details while effectively addressing the challenges of brightness and noise, providing scientists with a valuable and practical tool for analyzing space experiment data. Compared with other methods, STAR-ADF demonstrates superior contrast enhancement, making image features clearer and more prominent. By applying STAR-ADF, we obtain more accurate and reliable image results, which supports the research and analysis of space life science experiments.
This study utilized a custom-designed visible-light microscopy camera equipped with a detector with a resolution of 1 294×1 024 pixels. The experimental data were derived from life science experiments conducted in space. The image datasets consist of brightfield microscopic images of mesenchymal stem cells, osteoblasts, pluripotent stem cells, liver stem cells, germ cells, human embryonic stem cells, and mouse embryonic stem cells under diverse experimental conditions. To illustrate the uneven illumination of the detector output, the relationship between the light source of the brightfield microscope and the cell culture region is shown in Fig. 1.

Fig. 1 Schematic diagram of the spatial microscope camera
图 1 空间显微相机示意图
We propose a novel method for targeted improvement of image quality, accomplished through the integration of the structure and texture aware Retinex (STAR) model with anisotropic diffusion filtering (ADF); the overall framework is shown in Fig. 2.

Fig. 2 Proposed methodological framework diagram
图 2 提出的方法框架图
In 1971, Land et al. proposed the Retinex theory, which models an observed image as the element-wise product of an illumination component and a reflectance component:

$I(x) = L(x) \cdot R(x)$ ,  (1)

where $x$ denotes a pixel of the image, and $I$, $L$, and $R$ represent the original image, the extracted light layer image, and the reflection information layer, respectively. Bright regions in the image result from the unevenness of illumination, that is, from the non-uniform brightness values expressed in the light layer. A total variation (TV) model can be used to estimate a piecewise-smooth illumination layer:
$\min_{L} \ \|L - I\|_F^2 + \alpha \|\nabla L\|_1$ ,  (2)

where $\|\nabla L\|_1$ is the total variation of the image, $\nabla = (\nabla_h; \nabla_v)$ stacks the horizontal and vertical gradient operators, and $\alpha$ is a balancing weight.
However, in detail-rich cell microscopic images, applying total variation filtering alone may excessively suppress high-frequency details. The STAR model therefore weights the gradients of the two layers with structure and texture aware maps:

$\min_{L,R} \ \|L \circ R - I\|_F^2 + \alpha \|S \circ \nabla L\|_1 + \beta \|T \circ \nabla R\|_1$ ,  (3)

where $\circ$ denotes element-wise multiplication, and the weight matrices $S$ and $T$ are computed from the exponentiated local derivatives of the image as $S = 1/(|\nabla_m I|^{\gamma_s} + \epsilon)$ and $T = 1/(|\nabla_m I|^{\gamma_t} + \epsilon)$, with $\nabla_m I$ the mean local gradient and $\gamma_s$, $\gamma_t$ the exponential parameters. The iterative solutions for the illumination component and the uniform-illumination result are then obtained in vector form:
$l^{k+1} = \big(\mathrm{Diag}(r^k \circ r^k) + \alpha\, \nabla^{\top} W_s \nabla\big)^{-1} \mathrm{Diag}(r^k)\, i$ ,  (4)

$r^{k+1} = \big(\mathrm{Diag}(l^{k+1} \circ l^{k+1}) + \beta\, \nabla^{\top} W_t \nabla\big)^{-1} \mathrm{Diag}(l^{k+1})\, i$ ,  (5)

where $i$, $l$, and $r$ denote the vectorized forms of $I$, $L$, and $R$, $s$ and $t$ are the vectorized weight maps, $W_s = \mathrm{Diag}\big(s / (|\nabla l^k| + \epsilon)\big)$ and $W_t = \mathrm{Diag}\big(t / (|\nabla r^k| + \epsilon)\big)$ re-weight the two regularization terms at iteration $k$, and $\mathrm{Diag}(\cdot)$ is the conversion of vectors into diagonal matrices. The estimated components are recovered by reshaping, $L = \mathrm{mat}(l)$ and $R = \mathrm{mat}(r)$.
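As an illustration, the alternating updates of Eqs. (4)-(5) can be sketched in a few lines of numpy. This is a simplified toy version using dense matrices on a tiny image; the parameter values, the weight construction, and all function names are our own illustrative assumptions, not the solver used in the paper:

```python
import numpy as np

def grad_op(h, w):
    """Forward-difference gradient operator as a dense (2*h*w, h*w) matrix."""
    n = h * w
    D = np.zeros((2 * n, n))
    for y in range(h):
        for x in range(w):
            k = y * w + x
            if x + 1 < w:                     # horizontal difference
                D[k, k], D[k, k + 1] = -1.0, 1.0
            if y + 1 < h:                     # vertical difference
                D[n + k, k], D[n + k, k + w] = -1.0, 1.0
    return D

def star_decompose(I, alpha=1e-3, beta=1e-4, gamma_s=0.5, gamma_t=1.5,
                   iters=10, eps=1e-3):
    """Alternately estimate illumination L and reflectance R with I ~ L*R,
    weighting the gradient penalties by exponentiated local derivatives."""
    h, w = I.shape
    i = I.ravel()
    D = grad_op(h, w)
    g = np.abs(D @ i)
    Ws = np.diag(1.0 / (g ** gamma_s + eps))   # structure-aware weights
    Wt = np.diag(1.0 / (g ** gamma_t + eps))   # texture-aware weights
    l, r = i.copy(), np.ones_like(i)
    for _ in range(iters):
        Rd = np.diag(r)
        l = np.linalg.solve(Rd @ Rd + alpha * D.T @ Ws @ D, Rd @ i)
        Ld = np.diag(l)
        r = np.linalg.solve(Ld @ Ld + beta * D.T @ Wt @ D, Ld @ i)
    return l.reshape(h, w), r.reshape(h, w)

rng = np.random.default_rng(0)
img = np.clip(np.linspace(0.2, 1.0, 64).reshape(8, 8)
              + 0.05 * rng.standard_normal((8, 8)), 0.05, 1.0)
L, R = star_decompose(img)
print(np.max(np.abs(L * R - img)))   # decomposition approximately reproduces the input
```

A practical implementation would use sparse matrices for the gradient operator and the linear solves, since a full-resolution image makes the dense systems above intractable.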
The decomposition results are then normalized to a uniform gray range:

$I_{\mathrm{norm}} = \dfrac{I - I_{\min}}{I_{\max} - I_{\min}}$ .  (6)
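A minimal sketch of this min-max normalization step (the function name and the small epsilon guard against a constant image are our own additions):

```python
import numpy as np

def normalize(img):
    """Min-max normalization of Eq. (6), mapping the image to [0, 1];
    the tiny epsilon avoids division by zero for a constant image."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-12)

x = np.array([[10.0, 60.0], [110.0, 210.0]])
y = normalize(x)
print(y.min(), y.max())   # extremes map to (approximately) 0 and 1
```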
The principle of anisotropic filtering is to adapt the smoothing strength to the local image structure, computing for each pixel a set of directional gradients and conduction weights:

$\nabla_d I_{i,j} = I_{d(i,j)} - I_{i,j}, \quad c_d = \exp\!\big[-(\nabla_d I_{i,j} / \kappa)^2\big], \quad d \in \{E, W, S, N\}$ ,  (7)

where $I_{i,j}$ is the gray value of pixel $(i,j)$ of the image to be denoised, $d$ represents the direction, with the four-neighborhood $\{E, W, S, N\}$ referring to east, west, south, and north, $\nabla_d I_{i,j}$ is the gradient in direction $d$ at that point, and the filter weight of each pixel (denoted $c_d$) is calculated according to this anisotropy metric. The filtering direction of anisotropic filtering is determined by the gradient information and noise level of the image; in general, filtering along the edges of the image better preserves edge details. According to the determined weights, the image is updated iteratively:

$I_{i,j}^{k+1} = I_{i,j}^{k} + \lambda \sum_{d \in \{E, W, S, N\}} c_d\, \nabla_d I_{i,j}^{k}$ .  (8)
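A compact numpy sketch of the diffusion scheme of Eqs. (7)-(8) on a noisy step-edge image; the parameter values and the exponential conduction function are illustrative choices:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Iterative edge-preserving smoothing in the spirit of Eqs. (7)-(8):
    conduction weights c_d shrink where directional gradients are large."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode='edge')
        # gradients toward the four neighbours: north, south, east, west
        grads = (p[:-2, 1:-1] - u, p[2:, 1:-1] - u,
                 p[1:-1, 2:] - u, p[1:-1, :-2] - u)
        # exponential conduction coefficient per direction (Eq. (7))
        flux = sum(np.exp(-(g / kappa) ** 2) * g for g in grads)
        u = u + lam * flux                    # Eq. (8) update
    return u

rng = np.random.default_rng(1)
clean = np.full((16, 16), 0.2)
clean[:, 8:] = 0.8                            # vertical step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = anisotropic_diffusion(noisy)
```

With `lam` at or below 0.25 the four-neighbour update is numerically stable; the large gradient across the step edge drives its conduction weight toward zero, so the edge survives while the flat regions are smoothed.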
The proposed method aims at a dual objective: smoothing the image pixels while retaining maximal edge information. To this end, multiple iterative filtering operations are applied to progressively diminish the noise artifacts present in the image. The output at this stage then undergoes further processing with contrast limited adaptive histogram equalization (CLAHE) to enhance the image's contrast.
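To illustrate the clip-limited idea behind CLAHE, the following sketch equalizes each tile with a clipped histogram. Unlike full CLAHE it omits the bilinear blending of neighbouring tile mappings, and all names and parameter values are our own:

```python
import numpy as np

def clip_limited_equalize(tile, clip_frac=0.02):
    """Histogram-equalize one tile, clipping bin counts to limit the
    contrast amplification and redistributing the clipped excess."""
    hist = np.bincount(tile.ravel(), minlength=256).astype(float)
    limit = max(clip_frac * tile.size, 1.0)
    excess = np.clip(hist - limit, 0.0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0
    cdf = np.cumsum(hist) / hist.sum()
    return (cdf[tile] * 255.0).astype(np.uint8)

def tile_equalize(img, tiles=4, clip_frac=0.02):
    """Apply clip-limited equalization independently on a tiles x tiles grid."""
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            sl = np.s_[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            out[sl] = clip_limited_equalize(img[sl], clip_frac)
    return out

rng = np.random.default_rng(4)
low_contrast = rng.integers(100, 141, (64, 64)).astype(np.uint8)
enhanced = tile_equalize(low_contrast)
```

The clip limit caps how steep the per-tile mapping can become, which is what keeps CLAHE from amplifying noise in nearly uniform regions.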
In addition, for objective evaluation, three key indicators are used to assess how well each method meets the task requirements: image entropy, contrast ratio (EME), and image similarity (SSIM). Firstly, entropy is utilized to quantify the information loss before and after image processing. The probability of gray level $k$ is estimated from the histogram:

$p_k = \dfrac{n_k}{M \times N}$ ,  (9)

where $M$ and $N$ represent the height and width of the image, respectively, and $n_k$ is the number of pixels with gray level $k$. The image entropy is calculated using the following formula:

$H = -\sum_{k=0}^{255} p_k \log_2 p_k$ .  (10)
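Eqs. (9)-(10) can be computed directly from the image histogram; a small sketch (the function name is ours):

```python
import numpy as np

def image_entropy(img_u8):
    """Shannon entropy of an 8-bit image, Eqs. (9)-(10): p_k is the
    fraction of the M*N pixels taking gray level k."""
    p = np.bincount(img_u8.ravel(), minlength=256) / img_u8.size
    p = p[p > 0]                  # 0 * log(0) is taken as 0
    return float(0.0 - (p * np.log2(p)).sum())

flat = np.zeros((8, 8), dtype=np.uint8)            # a single gray level
half = np.tile([0, 255], (8, 4)).astype(np.uint8)  # two equally likely levels
print(image_entropy(flat), image_entropy(half))    # -> 0.0 1.0
```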
Secondly, we utilize the Enhancement Measure Evaluation (EME) as an assessment index to quantify the improvement in image contrast achieved after processing. The image is divided into $M \times N$ small blocks, and the evaluation result is the logarithmic mean of the ratio of the largest to the smallest gray value within each block:

$\mathrm{EME} = \dfrac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} 20 \log_{10} \dfrac{I_{\max}^{m,n}}{I_{\min}^{m,n}}$ ,  (11)

where $I_{\max}^{m,n}$ represents the maximum gray value within image block $(m,n)$ and $I_{\min}^{m,n}$ is the minimum gray value within the block.
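A straightforward implementation of Eq. (11), assuming the image dimensions divide evenly into the block grid and adding a small epsilon (our choice) to keep the ratio finite:

```python
import numpy as np

def eme(img, m=4, n=4, eps=1e-4):
    """Enhancement Measure Evaluation, Eq. (11): mean over an m x n block
    partition of 20 * log10(block max / block min)."""
    h, w = img.shape
    bh, bw = h // m, w // n
    total = 0.0
    for i in range(m):
        for j in range(n):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += 20.0 * np.log10((block.max() + eps) / (block.min() + eps))
    return total / (m * n)

rng = np.random.default_rng(2)
low = 0.5 + 0.01 * rng.random((32, 32))   # low-contrast image
high = rng.random((32, 32))               # high-contrast image
print(eme(low), eme(high))                # higher contrast gives a larger EME
```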
Lastly, we adopt the Structural Similarity (SSIM) as the evaluation metric to quantify the similarity between images before and after processing. Denoting the original image and the evaluated image as $x$ and $y$, respectively, SSIM quantifies their degree of similarity as:

$\mathrm{SSIM}(x, y) = \dfrac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$ ,  (12)

where $\mu$ represents the mean value of an image, $\sigma$ represents its standard deviation, and $\sigma_{xy}$ denotes the covariance between the two images. The constants $C_1$ and $C_2$ are introduced to avoid division by zero, with $C_1 = (K_1 L)^2$ and $C_2 = (K_2 L)^2$. For 8-bit grayscale images, $L = 255$.
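A single-window version of Eq. (12) computed over the whole image; production SSIM implementations typically use a sliding Gaussian window instead, and the constants $K_1 = 0.01$, $K_2 = 0.03$ below are the values commonly used in the literature:

```python
import numpy as np

def ssim(x, y, L=255.0, k1=0.01, k2=0.03):
    """Global SSIM of Eq. (12) over the whole image; L is the dynamic
    range (255 for 8-bit gray), with C1 = (k1*L)^2 and C2 = (k2*L)^2."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(3)
a = rng.integers(0, 256, (32, 32)).astype(float)
b = np.clip(a + 10.0 * rng.standard_normal(a.shape), 0.0, 255.0)
print(ssim(a, a), ssim(a, b))   # identical images score 1, a noisy copy slightly less
```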
The STAR-ADF method is specifically designed to improve the quality and contrast of microscopic images for better visualization of cells and microstructures. The flowchart of the STAR-ADF method is illustrated in Fig. 3.
Fig. 3 Flow chart of the STAR-ADF method
图 3 STAR-ADF方法流程图
Compared with other denoising methods, anisotropic filtering is better suited to our image data according to image-characteristic analysis and preliminary trials. Anisotropic filtering is a nonlinear filtering method commonly employed for image denoising. First, the gradient intensity and orientation of each pixel in the grayscale image are calculated to capture edge and texture information. Based on this gradient information, a weight coefficient is computed that reflects the structural features surrounding each pixel. During filtering, this weight coefficient guides the adjustment of smoothing strength in different directions. A sliding window then applies a weighted combination of the pixel values in the local neighborhood of each pixel. The weight coefficients make the filter more sensitive to pixel-value variations along edges and texture directions while ensuring smoother transitions in flat regions. This preservation of edge and texture information enhances overall image quality, and the filtered image attains a higher level of visual fidelity.
In this section, we present a comprehensive performance analysis of the proposed method, evaluating its effectiveness from both subjective and objective perspectives. The test dataset consists of image data derived from the various life science experiments discussed in Part 2. The experiments are performed in MATLAB on a PC with a 3.70 GHz CPU and 16.00 GB of RAM.
Firstly, our method decomposes the image into an illumination layer and a reflection layer, as shown in Fig. 4.

Fig. 4 Decomposition images:(a) reflection layer;(b) illumination layer;(c) weighted matrix map
图 4 分解图:(a)反射层;(b)光照层分解;(c)权重矩阵示意图
The exponential parameters are determined through a series of experiments. Initially, a parameter range is defined, followed by iterative testing and evaluation of entropy values to identify the optimal parameter value. The entropy outcomes for various parameter choices are presented in Table 1.
| Entropy of reflection layer |  |  | Entropy of illumination layer |  |  |
| --- | --- | --- | --- | --- | --- |
| 6.812 1 | 6.812 1 | 6.812 1 | 5.914 3 | 5.914 0 | 5.912 4 |
| 6.815 8 | 6.815 8 | 6.815 8 | 5.932 0 | 5.931 7 | 5.930 4 |
| 6.825 5 | 6.825 5 | 6.825 5 | 5.928 1 | 5.927 9 | 5.926 5 |
The visual outcomes of our experiments are illustrated in Fig. 5.

Fig. 5 Comparison of exponential parameter results:(a) reflection layer results;(b) illumination layer results
图 5 指数参数比较结果:(a) 反射层结果;(b) 光照层结果
The histogram serves as a statistical representation of the grayscale values within the image, facilitating an evaluation of the effect of illumination uniformization in the experiment, as illustrated in Fig. 6.

Fig. 6 Illumination uniformization and denoising results:(a) the original image and the local indicator evaluation;(b) the STAR-ADF enhanced image and the local indicator evaluation;(c) the original image and the extraction;(d) the STAR-ADF enhanced image and the extraction
图 6 光照均匀化和去噪效果:(a) 原始图像和局部指标评价;(b) STAR-ADF增强后图像和局部指标评价;(c) 原始图像和提取;(d) STAR-ADF增强后图像和提取
To rigorously assess the efficacy of our proposed method, we conduct a comprehensive cross-sectional comparison, as illustrated in Fig. 7.

Fig. 7 Results of four different cell experiments by different methods
图 7 四种不同细胞实验使用不同方法得到的结果
Specifically, the MSR method produces markedly lower entropy and EME than the proposed method, indicating a loss of detail and contrast in its enhanced results.
We evaluate each method on 20 test images; the results are shown in Table 2.
| Methods | Entropy | EME | SSIM |
| --- | --- | --- | --- |
| Original | 6.941 4 | 7.237 2 | 1.000 0 |
| MSR | 5.311 4 | 2.102 2 | 0.994 7 |
| LIME | 6.225 9 | 6.734 5 | 0.993 4 |
| JieP | 4.995 9 | 2.259 0 | 0.992 5 |
| Proposed | 6.673 0 | 8.141 6 | 0.996 2 |
In this paper, we present a novel image enhancement method termed STAR-ADF, which demonstrates excellent results on images from spatial cell tissue experiments. The proposed method addresses the uneven illumination and fringe noise that commonly arise in the space environment, thereby improving the image quality required for cell tissue experiments. It enables enhanced target identification, allowing scientists to observe images more clearly and facilitating subsequent computer-based recognition. Furthermore, objective evaluation indices substantiate the method's effectiveness: the results show a significant 12.5% improvement in the contrast evaluation index compared to the original image, while essential image details are preserved. These findings provide empirical evidence of the efficacy and utility of the STAR-ADF method for enhancing the image quality of spatial cell tissue experiments.
References
LI Ying-Hui, SUN Ye-Qing, ZHENG Hui-Qiong, et al. Recent Review and Prospect of Space Life Science in China for 40 Years[J]. Chinese Journal of Space Science(李莹辉,孙野青,郑慧琼. 中国空间生命科学40年回顾与展望, 空间科学学报), 2021, 41(1): 46-67. 10.3724/sp.j.0254-6124.2021.0105
Julien I, Mazen A C, Bartosz K, et al. Artificial-Intelligence-Based Imaging Analysis of Stem Cells: A Systematic Scoping Review[J]. Biology, 2022, 11(10): 1412. 10.3390/biology11101412
He Shan-Shan, Ruchir B, Carl B, et al. High-Plex Imaging of RNA and Proteins at Subcellular Resolution in Fixed Tissue by Spatial Molecular Imaging[J]. Nature Biotechnology, 2022, 40: 1794-1806. 10.1038/s41587-022-01483-z
Soleimani S, Mirzaei M, Toncu D C. A New Method of SC Image Processing for Confluence Estimation[J]. Micron, 2017, 101(10): 206-212. 10.1016/j.micron.2017.07.013
Seonhee P, Soohwan Y, Minseo K, et al. Dual Autoencoder Network for Retinex-Based Low-Light Image Enhancement[J]. IEEE Access, 2018, 6: 22084-22093. 10.1109/access.2018.2812809
Ai S, Kwon J. Extreme Low-Light Image Enhancement for Surveillance Cameras Using Attention U-Net[J]. Sensors, 2020, 20(2): 495. 10.3390/s20020495
FU Ying, HONG Yang, CHEN Lin-Wei, et al. LE-GAN: Unsupervised Low-Light Image Enhancement Network Using Attention Module and Identity Invariant Loss[J]. Knowledge-Based Systems, 2022, 240(3): 108010. 10.1016/j.knosys.2021.108010
XU Jun, HOU Ying-Kun, REN Dong-Wei, et al. STAR: A Structure and Texture Aware Retinex Model[J]. IEEE Transactions on Image Processing, 2020, 29: 5022-5037. 10.1109/tip.2020.2974060
Land E H, McCann J J. Lightness and Retinex Theory[J]. Journal of the Optical Society of America, 1971, 61(1): 1-11. 10.1364/josa.61.000001
McCann J J. Do Humans Discount the Illuminant?[J]. International Society for Optics and Photonics, 2005, 5666: 9-16. 10.1117/12.594383
Ng M K, Wang Wei. A Total Variation Model for Retinex[J]. SIAM Journal on Imaging Sciences, 2011, 4(1): 345-365. 10.1137/100806588
Song M, Kim M. Gradient-Based Cell Localization for Automated Stem Cell Counting in Non-Fluorescent Images[J]. Tissue Engineering and Regenerative Medicine, 2014, 11: 149-154. 10.1007/s13770-014-0050-7
Finlayson G D, Drew M S, Lu Cheng. Entropy Minimization for Shadow Removal[J]. International Journal of Computer Vision, 2009, 85: 35-57. 10.1007/s11263-009-0243-z
Chao S M, Tsai D M. An Improved Anisotropic Diffusion Model for Detail- and Edge-Preserving Smoothing[J]. Pattern Recognition Letters, 2010, 31: 2012-2023. 10.1016/j.patrec.2010.06.004
Riya Gupta B, Lamba S S. An Efficient Anisotropic Diffusion Model for Image Denoising with Edge Preservation[J]. Computers & Mathematics with Applications, 2021, 93(4): 106-119. 10.1016/j.camwa.2021.03.029
Shyam L, Mahesh C. Efficient Algorithm for Contrast Enhancement of Natural Images[J]. International Arab Journal of Information Technology, 2014, 11(1): 95-102.
ZHU Meng-Qiu, YU Ling-Jie, WANG Zong-Biao, et al. Review: A Survey on Objective Evaluation of Image Sharpness[J]. Applied Sciences, 2023, 13(4): 2652. 10.3390/app13042652
WANG Wen-Cheng, WU Xiao-Jin, YUAN Xiao-Hui, et al. An Experiment-Based Review of Low-Light Image Enhancement Methods[J]. IEEE Access, 2020, 8: 87884-87917. 10.1109/access.2020.2992749
FU Xue-Yang, ZHUANG Pei-Xian, HUANG Yue, et al. A Retinex-Based Enhancing Approach for Single Underwater Image[C]// 2014 IEEE International Conference on Image Processing (ICIP), Paris, France. 2014: 4572-4576. 10.1109/icip.2014.7025927
Celik T, Tjahjadi T. Automatic Image Equalization and Contrast Enhancement Using Gaussian Mixture Modeling[J]. IEEE Transactions on Image Processing, 2012, 21(1): 145-156. 10.1109/tip.2011.2162419
Jobson D J, Rahman Z, Woodell G A. A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes[J]. IEEE Transactions on Image Processing, 1997, 6(7): 965-976.
XU Kai-Qiang, Jung C. Retinex-Based Perceptual Contrast Enhancement in Images Using Luminance Adaptation[C]// 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, 2017. 10.1109/icassp.2017.7952379
CAI Bo-Lun, XU Xiang-Min, GUO Kai-Ling, et al. A Joint Intrinsic-Extrinsic Prior Model for Retinex[C]// 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4020-4029. 10.1109/iccv.2017.431
GUO Xiao-Jie, LI Yu, LING Hai-Bin. LIME: Low-Light Image Enhancement via Illumination Map Estimation[J]. IEEE Transactions on Image Processing, 2017, 26(2): 982-993. 10.1109/tip.2016.2639450