Last revised: 19-11-2024
Conference abstract
This study explores continuous adaptation techniques for monocular depth estimation and semantic segmentation to improve real-time scene understanding for autonomous vehicles and driver assistance systems. The proposed methods let models adjust dynamically to new information in video sequences, sustaining high performance under ongoing changes in scene appearance, lighting, and other contextual factors.

The first contribution is continuous online adaptation for monocular depth estimation, which removes the need for separate offline fine-tuning while retaining information across video frames. The method addresses data drift by continually adapting to incoming frames; experience replay stabilizes the learning process, counters overfitting to the limited diversity of recent data, and adds minimal computational overhead. Auto-masking and velocity supervision help distinguish stationary from moving objects, mitigating errors caused by inconsistent depth cues. The approach is validated in both intra-dataset and cross-dataset adaptation scenarios, showing substantial accuracy gains while preserving real-time runtime.
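To make the online-adaptation loop concrete, the sketch below shows per-frame adaptation with a small FIFO experience-replay buffer in PyTorch. This is a minimal illustration, not the authors' implementation: `TinyDepthNet`, `surrogate_loss`, and `adapt_online` are hypothetical names, and the loss is a crude placeholder for the photometric reprojection objective typically used in self-supervised depth estimation.

```python
import random
from collections import deque

import torch
import torch.nn.functional as F


class TinyDepthNet(torch.nn.Module):
    """Hypothetical stand-in for a real monocular depth network."""

    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Conv2d(3, 16, 3, padding=1)
        self.decoder = torch.nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.decoder(F.relu(self.encoder(x))))


def surrogate_loss(model, frame_t, frame_prev):
    """Crude stand-in for the self-supervised photometric reprojection
    loss: temporal consistency of consecutive disparity maps plus an
    edge-smoothness term, just enough to make the loop runnable."""
    disp_t, disp_prev = model(frame_t), model(frame_prev)
    smooth = disp_t.diff(dim=-1).abs().mean() + disp_t.diff(dim=-2).abs().mean()
    return F.l1_loss(disp_t, disp_prev) + 0.1 * smooth


def adapt_online(model, frame_stream, buffer_size=64, replay_k=2, lr=1e-4):
    """One optimizer step per incoming frame pair, mixing the newest pair
    with a few replayed pairs so single-step updates do not overfit the
    most recent scene."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    buffer = deque(maxlen=buffer_size)  # FIFO experience-replay buffer
    prev = None
    for frame in frame_stream:          # frames arrive one at a time
        if prev is not None:
            pairs = [(frame, prev)]
            pairs += random.sample(list(buffer), min(replay_k, len(buffer)))
            loss = sum(surrogate_loss(model, t, p) for t, p in pairs)
            opt.zero_grad()
            loss.backward()
            opt.step()                  # model is adapted before the next frame
            buffer.append((frame, prev))
        prev = frame


# Toy usage: random tensors standing in for a video stream.
model = TinyDepthNet()
adapt_online(model, (torch.rand(1, 3, 64, 64) for _ in range(10)))
```

Mixing each new frame pair with `replay_k` replayed pairs is what keeps the single-step updates from drifting toward the most recent scene, at the cost of only a few extra forward passes per frame.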
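The auto-masking and velocity-supervision terms can likewise be sketched in a few lines. The mask follows the common rule (popularized by Monodepth2) of discarding pixels whose unwarped, identity reprojection error is already lower than the warped one, and velocity supervision is written as a penalty tying the predicted ego-translation to the measured vehicle speed; the abstract does not specify the exact formulations, so the names and signatures below are hypothetical.

```python
import torch


def auto_mask(err_warped: torch.Tensor, err_identity: torch.Tensor) -> torch.Tensor:
    """Keep only pixels where warping the source frame through predicted
    depth and pose explains the target better than copying the source
    unchanged; failing pixels (static scenes, objects moving with the
    camera) are excluded from the photometric loss."""
    return (err_warped < err_identity).float()


def velocity_loss(pred_translation: torch.Tensor, speed: float, dt: float) -> torch.Tensor:
    """Velocity supervision: tie the norm of the predicted frame-to-frame
    ego-translation to the distance implied by the measured vehicle speed
    over the frame interval dt, fixing the metric scale."""
    return (pred_translation.norm(dim=-1) - speed * dt).abs().mean()


# Toy usage: random tensors stand in for per-pixel reprojection errors,
# and a made-up translation vector stands in for the pose network output.
err_w, err_id = torch.rand(1, 1, 8, 8), torch.rand(1, 1, 8, 8)
mask = auto_mask(err_w, err_id)          # 1.0 where the loss is applied
t = torch.tensor([[0.9, 0.0, 0.1]])      # predicted translation in meters
print(mask.mean().item(), velocity_loss(t, speed=10.0, dt=0.1).item())
```

Together these terms address the failure modes named above: the mask removes pixels whose motion the photometric loss cannot explain, while the velocity term pins down the metric scale that monocular self-supervision otherwise leaves undetermined.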