Navigation systems have become an indispensable part of everyday life, and unmanned systems likewise rely heavily on satellite-provided positioning data in many operational areas. Yet alternatives are essential: Global Navigation Satellite System (GNSS) signals are often deliberately jammed in war and crisis zones, and in environments such as power plants, caves, or underground pipeline networks, drones must be able to find their way without satellite navigation altogether.
For unmanned systems to operate safely—both autonomously and in BVLOS (Beyond Visual Line of Sight) missions—they must always know their exact position. Typically, this is achieved via satellite navigation. However, when this option is unavailable or disrupted, drones are quite literally at risk of flying blind. GNSS outages can result not only from intentional jamming or spoofing but also from natural causes. In dense forests, narrow canyons, or under the rubble of collapsed buildings, signal reception often drops suddenly. Indoors, GNSS signals may not penetrate at all. Even in densely populated urban environments, multipath reflections and interference can degrade positioning.

To navigate and orient themselves in such scenarios, drones must integrate multiple sensor systems. Instead of relying on satellite signals, they combine data from optical cameras, inertial measurement units (IMUs), LiDAR, ultrasonic sensors, and sometimes radar. However, this so-called sensor fusion is highly sensitive: differing sampling rates, environmental factors, or the failure of individual sensors can significantly impair the position estimate. Challenging environmental conditions can also push the technology to its limits.
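The sampling-rate problem mentioned above can be illustrated with a minimal sketch. The complementary filter below is a hypothetical, one-dimensional example (the function name, rates, and blending weight are all assumptions, not from the article): a fast but drifting IMU prediction is periodically corrected by slower, absolute camera position fixes. Real drone autopilots use far more capable estimators, typically extended Kalman filters over full 3-D state.

```python
# Minimal 1-D sensor-fusion sketch (hypothetical values): a complementary
# filter blends fast, drift-prone IMU dead-reckoning with slower, absolute
# camera position fixes that arrive at a lower sampling rate.

def fuse(imu_velocities, camera_fixes, dt=0.01, camera_period=10, alpha=0.98):
    """Estimate position along one axis.

    imu_velocities: velocity samples at the IMU rate (m/s)
    camera_fixes:   absolute position fixes, one per `camera_period` IMU steps
    alpha:          how strongly to trust the IMU prediction at each fix
    """
    position = 0.0
    estimates = []
    for step, v in enumerate(imu_velocities):
        position += v * dt  # dead-reckoning prediction from the IMU
        if step % camera_period == 0:
            fix = camera_fixes[step // camera_period]
            # blend: mostly the IMU prediction, nudged toward the camera fix
            position = alpha * position + (1 - alpha) * fix
        estimates.append(position)
    return estimates
```

The blending weight `alpha` captures the trade-off the article describes: trusting the IMU too much lets drift accumulate, while trusting the camera too much makes the estimate jump whenever a fix is noisy or missing.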
To meet the growing demand for unmanned systems capable of operating without GNSS, significant research and development efforts are underway. Visual navigation is currently at the forefront. This technique, known as vision-based navigation, uses camera imagery to orient the drone within its current environment. Instead of depending on satellite signals, the system analyzes optical features such as edges, patterns, and objects to estimate its own position and detect obstacles. Using SLAM (Simultaneous Localization and Mapping) algorithms, the drone builds a map of its surroundings and locates itself within that map. By integrating AI software and deep learning methods such as Convolutional Neural Networks (CNNs), the system can more quickly and accurately estimate distances and convert this information into control commands. CNNs are a special type of artificial neural network designed primarily for processing images and video, automatically identifying critical features such as edges, shapes, and complex objects.

Driven by increasingly powerful AI, advanced sensors, and edge-computing technologies, a new generation of drones is emerging—capable of navigating without external positioning aids when necessary, and thus contributing to greater autonomy for unmanned systems overall.
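The "locating itself within its own map" half of SLAM can be sketched in a heavily simplified form. The example below is a hypothetical illustration, not a real SLAM implementation (the function and the data are invented for this sketch): given a map of known landmark positions and the measured 2-D offsets to the landmarks currently in view, each observation implies a drone position, and averaging them gives a least-squares estimate for this pure-translation case.

```python
# Heavily simplified sketch of the localization step in SLAM (hypothetical
# example): each observed landmark implies a drone position, namely the
# landmark's mapped position minus the measured offset to it. Averaging
# those implied positions is the least-squares fit when only translation
# (no rotation or measurement noise model) is considered.

def locate(landmark_map, observations):
    """landmark_map:  {landmark_id: (x, y)} positions in the map frame.
    observations: {landmark_id: (dx, dy)} offsets measured from the drone
    to each visible landmark. Returns the estimated (x, y) of the drone."""
    xs, ys = [], []
    for lid, (dx, dy) in observations.items():
        lx, ly = landmark_map[lid]
        # each observation implies: drone position = landmark - offset
        xs.append(lx - dx)
        ys.append(ly - dy)
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n)
```

A full SLAM system must also do the "mapping" half—estimating the landmark positions themselves while flying—and handle camera rotation, sensor noise, and loop closure, which is what makes the problem genuinely hard.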
> This article was created in cooperation with Drones – The Drone Economy Magazine. www.drones-magazin.de