ADAPTIVE IMAGE ENHANCEMENT MODEL FOR THE ROBOT VISION SYSTEM

Authors

  • Kyrylo Smelyakov, Software Engineering Department, Kharkiv National University of Radio Electronics (UA)
  • Anastasiya Chupryna, Software Engineering Department, Kharkiv National University of Radio Electronics (UA)
  • Denys Sandrkin, Software Engineering Department, Kharkiv National University of Radio Electronics (UA)
  • Loreta Savulioniene, Faculty of Electronics and Informatics, Vilniaus Kolegija/Higher Education Institution (LT)
  • Paulius Sakalys, Faculty of Electronics and Informatics, Vilniaus Kolegija/Higher Education Institution (LT)

DOI:

https://doi.org/10.17770/etr2023vol3.7300

Keywords:

digital image, robotics, robot vision system, image enhancement, gradational correction

Abstract

Robotics is one of the major trends in the current development of science and technology. Most modern robots and drones are equipped with their own vision system, including a video camera, which they use to capture digital photos and video streams. These data are used to analyze the situation in the camera's field of view and to determine the robot's behavior algorithm in real time. In this regard, the novelty of the paper is a special polynomial mathematical model and a method for adaptive gradational correction of a digital image. The proposed model and method can independently adjust to brightness scales and image formats and perform optimal gradational image correction under various lighting conditions, thus ensuring the efficiency of the entire subsequent image analysis cycle in the robot's vision system. In addition, the paper presents the results of numerous experiments with such gradational correction on images of various classes, under conditions of both reduced and increased illumination of the objects in the field of view. Conclusions and recommendations are given regarding the practical application of the proposed model and method.
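The abstract describes a tone-curve (gradational) correction that adapts itself to the image's brightness scale and lighting conditions. The paper's exact polynomial model is not reproduced on this page, so the sketch below is only an illustrative assumption: a power-law tone curve whose exponent is chosen from the image's mean brightness, normalized to the top of the brightness scale (`max_level`) so the same code handles 8-bit, 10-bit, or other formats. The name `adaptive_gradational_correction` and the gamma heuristic are hypothetical, not the authors' method.

```python
def adaptive_gradational_correction(pixels, max_level=255):
    """Apply an adaptive power-law tone curve to a flat list of intensities.

    pixels: iterable of intensity values in [0, max_level]
    max_level: top of the brightness scale (255 for 8-bit images)
    """
    pixels = list(pixels)
    # Mean brightness normalized to [0, 1] on the image's own scale.
    mean = sum(pixels) / (len(pixels) * max_level)
    # Assumed heuristic: dark scenes (mean < 0.5) get gamma < 1
    # (brightening), bright scenes get gamma > 1 (darkening).
    gamma = max(0.25, min(4.0, mean / 0.5))
    # The power-law curve keeps 0 and max_level fixed and is monotone,
    # so pixel ordering (scene structure) is preserved.
    return [round(max_level * (p / max_level) ** gamma) for p in pixels]
```

For a dark frame the computed gamma drops below 1 and mid-tones are lifted; for an over-lit frame it rises above 1 and mid-tones are compressed, which is the qualitative behavior the abstract attributes to the proposed model.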

References

J. P. Queralta et al., "Collaborative Multi-Robot Search and Rescue: Planning, Coordination, Perception, and Active Vision," in IEEE Access, vol. 8, pp. 191617-191643, 2020, doi: 10.1109/ACCESS.2020.3030190.

I. Enebuse, M. Foo, B. S. K. K. Ibrahim, H. Ahmed, F. Supmak and O. S. Eyobu, "A Comparative Review of Hand-Eye Calibration Techniques for Vision Guided Robots," in IEEE Access, vol. 9, pp. 113143-113155, 2021, doi: 10.1109/ACCESS.2021.3104514.

M. Aranda, Y. Mezouar, G. López-Nicolás and C. Sagüés, "Scale-Free Vision-Based Aerial Control of a Ground Formation With Hybrid Topology," in IEEE Transactions on Control Systems Technology, vol. 27, no. 4, pp. 1703-1711, July 2019, doi: 10.1109/TCST.2018.2834308.

C. Zhou, Q. Sun, K. Wang, J. Li and X. Zhang, "Simultaneous Calibration of Multiple Revolute Joints for Articulated Vision Systems via SE(3) Kinematic Bundle Adjustment," in IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 12161-12168, Oct. 2022, doi: 10.1109/LRA.2022.3189815.

Y. Luo et al., "Calibration-Free Monocular Vision-Based Robot Manipulations With Occlusion Awareness," in IEEE Access, vol. 9, pp. 85265-85276, 2021, doi: 10.1109/ACCESS.2021.3082947.

R. G. Lins and S. N. Givigi, "FPGA-Based Design Optimization in Autonomous Robot Systems for Inspection of Civil Infrastructure," in IEEE Systems Journal, vol. 14, no. 2, pp. 2961-2964, June 2020, doi: 10.1109/JSYST.2019.2960309.

Y. Wang et al., "Real-Time Underwater Onboard Vision Sensing System for Robotic Gripping," in IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1-11, 2021, Art no. 5002611, doi: 10.1109/TIM.2020.3028400.

L. Yi et al., "Reconfiguration During Locomotion by Pavement Sweeping Robot With Feedback Control From Vision System," in IEEE Access, vol. 8, pp. 113355-113370, 2020, doi: 10.1109/ACCESS.2020.3003376.

Y. Shi, W. Zhang, F. Li and Q. Huang, "Robust Localization System Fusing Vision and Lidar Under Severe Occlusion," in IEEE Access, vol. 8, pp. 62495-62504, 2020, doi: 10.1109/ACCESS.2020.2981520.

E. P. van Horssen, J. A. A. van Hooijdonk, D. Antunes and W. P. M. H. Heemels, "Event- and Deadline-Driven Control of a Self-Localizing Robot With Vision-Induced Delays," in IEEE Transactions on Industrial Electronics, vol. 67, no. 2, pp. 1212-1221, Feb. 2020, doi: 10.1109/TIE.2019.2899553.

L. Fiorini et al., "Daily Gesture Recognition During Human-Robot Interaction Combining Vision and Wearable Systems," in IEEE Sensors Journal, vol. 21, no. 20, pp. 23568-23577, 15 Oct. 2021, doi: 10.1109/JSEN.2021.3108011.

H. Cheng, Y. Wang and M. Q.-H. Meng, "A Vision-Based Robot Grasping System," in IEEE Sensors Journal, vol. 22, no. 10, pp. 9610-9620, 15 May 2022, doi: 10.1109/JSEN.2022.3163730.

G. S. Ramos, D. Barreto Haddad, A. L. Barros, L. de Melo Honorio and M. Faria Pinto, "EKF-Based Vision-Assisted Target Tracking and Approaching for Autonomous UAV in Offshore Mooring Tasks," in IEEE Journal on Miniaturization for Air and Space Systems, vol. 3, no. 2, pp. 53-66, June 2022, doi: 10.1109/JMASS.2022.3195660.

W. Bouachir, K. E. Ihou, H. -E. Gueziri, N. Bouguila and N. Bélanger, "Computer Vision System for Automatic Counting of Planting Microsites Using UAV Imagery," in IEEE Access, vol. 7, pp. 82491-82500, 2019, doi: 10.1109/ACCESS.2019.2923765.

N. H. Malle, F. F. Nyboe and E. S. M. Ebeid, "Onboard Powerline Perception System for UAVs Using mmWave Radar and FPGA-Accelerated Vision," in IEEE Access, vol. 10, pp. 113543-113559, 2022, doi: 10.1109/ACCESS.2022.3217537.

T. Liu, H. Liu, Y. -F. Li, Z. Chen, Z. Zhang and S. Liu, "Flexible FTIR Spectral Imaging Enhancement for Industrial Robot Infrared Vision Sensing," in IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 544-554, Jan. 2020, doi: 10.1109/TII.2019.2934728.

R. Chen, Z. Cai and W. Cao, "MFFN: An Underwater Sensing Scene Image Enhancement Method Based on Multiscale Feature Fusion Network," in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-12, 2022, Art no. 4205612, doi: 10.1109/TGRS.2021.3134762.

F. Ding, K. Yu, Z. Gu, X. Li and Y. Shi, "Perceptual Enhancement for Autonomous Vehicles: Restoring Visually Degraded Images for Context Prediction via Adversarial Training," in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 9430-9441, July 2022, doi: 10.1109/TITS.2021.3120075.

K. Smelyakov, A. Chupryna, D. Sandrkin and M. Kolisnyk, "Search by Image Engine for Big Data Warehouse," 2020 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), Vilnius, Lithuania, 2020, pp. 1-4, doi: 10.1109/eStream50540.2020.9108782.

L. Nie, F. Jiao, W. Wang, Y. Wang and Q. Tian, "Conversational Image Search," in IEEE Transactions on Image Processing, vol. 30, pp. 7732-7743, 2021, doi: 10.1109/TIP.2021.3108724.

K. Smelyakov, M. Hvozdiev, A. Chupryna, D. Sandrkin and V. Martovytskyi, "Comparative Efficiency Analysis of Gradational Correction Models of Highly Lighted Image," 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T), 2019, pp. 703-708, doi: 10.1109/PICST47496.2019.9061356.

Y. P. Loh and C. S. Chan, "Getting to Know Low-Light Images with the Exclusively Dark Dataset," Computer Vision and Image Understanding, vol. 178, pp. 30-42, 2019, doi: 10.1016/j.cviu.2018.10.010.

V. Vonikakis, D. Chrysostomou, R. Kouskouridas and A. Gasteratos, "Improving the Robustness in Feature Detection by Local Contrast Enhancement," in Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST '12), Manchester, United Kingdom, 16-17 July 2012, pp. 158-163.

Published

2024-01-16

How to Cite

[1] K. Smelyakov, A. Chupryna, D. Sandrkin, L. Savulioniene, and P. Sakalys, "ADAPTIVE IMAGE ENHANCEMENT MODEL FOR THE ROBOT VISION SYSTEM", ETR, vol. 3, pp. 246-251, Jan. 2024, doi: 10.17770/etr2023vol3.7300.