Current research on robust trajectory planning for autonomous agents aims to mitigate uncertainties arising from disturbances and modeling errors while providing safety guarantees. Existing methods primarily utilize stochastic optimal control techniques with chance constraints to maintain a minimum distance among agents with a guaranteed probability. However, these approaches rely on simplifying assumptions, such as linear system models or Gaussian disturbances, which limit their practicality in complex, realistic scenarios. To address these limitations, this work introduces a novel probabilistically robust distributed controller enabling autonomous agents to plan safe trajectories even under non-Gaussian uncertainty and nonlinear dynamics. Leveraging exact uncertainty propagation techniques based on mixed-trigonometric-polynomial moment propagation, this method transforms non-Gaussian chance constraints into deterministic ones, seamlessly integrating them into a distributed model predictive control framework solvable with standard optimization tools. Simulation results demonstrate the effectiveness of this technique, highlighting its ability to consistently handle various types of uncertainty and ensure robust and accurate path planning in complex scenarios.
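As a rough illustration of the constraint-conversion idea in the abstract above (the paper itself relies on exact mixed-trigonometric-polynomial moment propagation, not shown here), a minimum-distance chance constraint can be replaced by a deterministic condition on the first two moments of the uncertain distance using the distribution-free Cantelli bound. The sketch below is hypothetical and only illustrates this moment-based surrogate:

```python
import numpy as np

def chance_to_moment_constraint(mean_d, var_d, d_min, eps):
    """Distribution-free (Cantelli) surrogate for P(d < d_min) <= eps:
    it suffices that E[d] - kappa * std(d) >= d_min with
    kappa = sqrt((1 - eps) / eps), which needs only the first two
    moments of the uncertain inter-agent distance d."""
    kappa = np.sqrt((1.0 - eps) / eps)
    return mean_d - kappa * np.sqrt(var_d) - d_min  # satisfied iff >= 0
```

Embedding such a surrogate as an ordinary inequality constraint keeps the model predictive control problem deterministic while still bounding the violation probability for any disturbance distribution with the given moments.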
Automated Real-Time Inspection in Indoor and Outdoor 3D Environments with Cooperative Aerial Robots
Andreas Anastasiou, Angelos Zacharia, Savvas Papaioannou, Panayiotis Kolios, Christos G. Panayiotou, and Marios M. Polycarpou
In 2024 International Conference on Unmanned Aircraft Systems (ICUAS), 2024
This work introduces a cooperative inspection system designed to efficiently control and coordinate a team of distributed heterogeneous UAV agents for the inspection of 3D structures in cluttered, unknown spaces. Our proposed approach employs a two-stage innovative methodology. Initially, it leverages the complementary sensing capabilities of the robots to cooperatively map the unknown environment. It then generates optimized, collision-free inspection paths, thereby ensuring comprehensive coverage of the structure’s surface area. The effectiveness of our system is demonstrated through qualitative and quantitative results from extensive Gazebo-based simulations that closely replicate real-world inspection scenarios, highlighting its ability to thoroughly inspect real-world-like 3D structures.
Synergising Human-like Responses and Machine Intelligence for Planning in Disaster Response
Savvas Papaioannou, Panayiotis Kolios, Christos G. Panayiotou, and Marios M. Polycarpou
In 2024 International Joint Conference on Neural Networks (IJCNN), 2024
In the rapidly changing environments of disaster response, planning and decision-making for autonomous agents involve complex and interdependent choices. Although recent advancements have improved traditional artificial intelligence (AI) approaches, they often struggle in such settings, particularly when applied to agents operating outside their well-defined training parameters. To address these challenges, we propose an attention-based cognitive architecture inspired by Dual Process Theory (DPT). This framework integrates, in an online fashion, rapid yet heuristic (human-like) responses (System 1) with the slow but optimized planning capabilities of machine intelligence (System 2). We illustrate how a supervisory controller can dynamically determine in real-time the engagement of either system to optimize mission objectives by assessing their performance across a number of distinct attributes. Evaluated for trajectory planning in dynamic environments, our framework demonstrates that this synergistic integration effectively manages complex tasks by optimizing multiple mission objectives.
Hierarchical Fault-Tolerant Coverage Control for an Autonomous Aerial Agent
Savvas Papaioannou, Christian Vitale, Panayiotis Kolios, Christos G. Panayiotou, and Marios M. Polycarpou
In 12th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS 2024), 2024
Fault-tolerant coverage control involves determining a trajectory that enables an autonomous agent to cover specific points of interest, even in the presence of actuation and/or sensing faults. In this work, the agent encounters control inputs that are erroneous; specifically, its nominal control inputs are perturbed by stochastic disturbances, potentially disrupting its intended operation. Existing techniques have focused on deterministically bounded disturbances or relied on the assumption of Gaussian disturbances, whereas non-Gaussian disturbances have primarily been tackled via scenario-based stochastic control methods. However, the assumption of Gaussian disturbances is generally limited to linear systems, and scenario-based methods can become computationally prohibitive. To address these limitations, we propose a hierarchical coverage controller that integrates mixed-trigonometric-polynomial moment propagation to propagate non-Gaussian disturbances through the agent’s nonlinear dynamics. Specifically, the first stage generates an ideal reference plan by optimising the agent’s mobility and camera control inputs. The second-stage fault-tolerant controller then aims to follow this reference plan, even in the presence of erroneous control inputs caused by non-Gaussian disturbances. This is achieved by imposing a set of deterministic constraints on the moments of the system’s uncertain states.
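For intuition on the mixed-trigonometric-polynomial moment propagation mentioned above, the hypothetical sketch below propagates the mean of a unicycle-like position update through an uncertain heading using exact trigonometric moments; the Gaussian case is shown purely for brevity, since the same idea extends to other disturbance families via their characteristic functions and to higher-order moments:

```python
import numpy as np

def trig_moments_gaussian(mu, var):
    """Exact E[cos(theta)] and E[sin(theta)] for theta ~ N(mu, var),
    obtained from the characteristic function E[exp(i*theta)]."""
    damp = np.exp(-0.5 * var)
    return damp * np.cos(mu), damp * np.sin(mu)

def propagate_position_mean(x, y, v, mu_theta, var_theta, dt):
    """One-step mean of the update x+ = x + v*dt*cos(theta),
    y+ = y + v*dt*sin(theta) under an uncertain heading theta."""
    Ec, Es = trig_moments_gaussian(mu_theta, var_theta)
    return x + v * dt * Ec, y + v * dt * Es
```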
2023
Jointly-optimized Trajectory Generation and Camera Control for 3D Coverage Planning (under review)
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
This work proposes a jointly-optimized trajectory generation and camera control approach which allows an autonomous UAV agent, operating in 3D environments, to plan and execute coverage trajectories that maximally cover the surface area of a 3D object of interest. More specifically, the UAV’s kinematic and camera control inputs are jointly-optimized over a rolling finite planning horizon for the complete 3D coverage of the object of interest. The proposed controller integrates ray-tracing into the planning process in order to simulate the propagation of light-rays and thus determine the visible parts of the object through the UAV’s camera. Subsequently, this enables the generation of accurate look-ahead coverage trajectories. The coverage planning problem is formulated in this work as a rolling finite horizon optimal control problem, and solved with mixed integer programming techniques. Extensive real-world and synthetic experiments demonstrate the performance of the proposed approach.
Distributed Search Planning in 3-D Environments With a Dynamically Varying Number of Agents
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023
In this work, a novel distributed search-planning framework is proposed, where a dynamically varying team of autonomous agents cooperate in order to search for multiple objects of interest in three dimensions (3-D). It is assumed that the agents can enter and exit the mission space at any point in time, and as a result the number of agents that actively participate in the mission varies over time. The proposed distributed search-planning framework takes into account the agents’ dynamical and sensing models, and the dynamically varying number of agents, and utilizes model predictive control (MPC) to generate cooperative search trajectories over a finite rolling planning horizon. This enables the agents to adapt their decisions on-line while considering the plans of their peers, maximizing their search planning performance, and reducing the duplication of work.
Integrated Guidance and Gimbal Control for Coverage Planning With Visibility Constraints
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
IEEE Transactions on Aerospace and Electronic Systems, 2023
Coverage path planning with unmanned aerial vehicles (UAVs) is a core task for many services and applications including search and rescue, precision agriculture, infrastructure inspection and surveillance. This work proposes an integrated guidance and gimbal control coverage path planning (CPP) approach, in which the mobility and gimbal inputs of an autonomous UAV agent are jointly controlled and optimized to achieve full coverage of a given object of interest, according to a specified set of optimality criteria. The proposed approach uses a set of visibility constraints to integrate the physical behavior of sensor signals (i.e., camera-rays) into the coverage planning process, thus generating optimized coverage trajectories that take into account which parts of the scene are visible through the agent’s camera at any point in time. The integrated guidance and gimbal control CPP problem is posed in this work as a constrained optimal control problem which is then solved using mixed integer programming (MIP) optimization. Extensive numerical experiments demonstrate the effectiveness of the proposed approach.
Cooperative Receding Horizon 3D Coverage Control with a Team of Networked Aerial Agents
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2023 IEEE 62nd Conference on Decision and Control (CDC), 2023
This work proposes a receding horizon coverage control approach which allows multiple autonomous aerial agents to work cooperatively in order to cover the total surface area of a 3D object of interest. The cooperative coverage problem, which is posed in this work as an optimal control problem, jointly optimizes the agents’ kinematic and camera control inputs, while considering coupling constraints amongst the team of agents which aim at minimizing the duplication of work. To generate look-ahead coverage trajectories over a finite planning horizon, the proposed approach integrates visibility constraints into the proposed coverage controller in order to determine the visible part of the object with respect to the agents’ future states. In particular, we show how non-linear and non-convex visibility determination constraints can be transformed into logical constraints which can easily be embedded into a mixed integer optimization program.
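One standard way such logical conditions are embedded into a mixed-integer program is the big-M reformulation; the snippet below is a generic, hypothetical illustration of that device rather than the paper’s actual formulation:

```python
import numpy as np

def big_m_implication(a, x, c, b, M=1e4):
    """Encode the implication (b == 1) => a @ x <= c, with binary b, as the
    single linear constraint a @ x <= c + M * (1 - b): choosing b = 1
    enforces the half-space condition, while b = 0 relaxes it (M large)."""
    return float(np.dot(a, x)) - (c + M * (1 - b))  # feasible iff <= 0
```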
Unscented Optimal Control for 3D Coverage Planning with an Autonomous UAV Agent
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2023 International Conference on Unmanned Aircraft Systems (ICUAS), 2023
We propose a novel probabilistically robust controller for the guidance of an unmanned aerial vehicle (UAV) in coverage planning missions, which can simultaneously optimize both the UAV’s motion and camera control inputs for the 3D coverage of a given object of interest. Specifically, the coverage planning problem is formulated in this work as an optimal control problem with logical constraints to enable the UAV agent to jointly: a) select a series of discrete camera field-of-view states which satisfy a set of coverage constraints, and b) optimize its motion control inputs according to a specified mission objective. We show how this hybrid optimal control problem can be solved with standard optimization tools by converting the logical expressions in the constraints into equality/inequality constraints involving only continuous variables. Finally, probabilistic robustness is achieved by integrating the unscented transformation into the proposed controller, thus enabling the design of robust open-loop coverage plans which take into account the future posterior distribution of the UAV’s state inside the planning horizon.
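For readers unfamiliar with the unscented transformation referenced above, the following generic, textbook-style sketch (not the paper’s implementation; the dynamics f and parameters are placeholders) shows how a state mean and covariance are propagated through nonlinear dynamics via sigma points:

```python
import numpy as np

def unscented_propagate(mu, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean mu and covariance P through a nonlinear map f
    using the unscented transform (2n+1 sigma points)."""
    n = mu.size
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)           # scaled square root of P
    sigma = np.vstack([mu, mu + L.T, mu - L.T])     # sigma points as rows
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])             # transformed sigma points
    mu_y = wm @ Y                                   # propagated mean
    P_y = (Y - mu_y).T @ np.diag(wc) @ (Y - mu_y)   # propagated covariance
    return mu_y, P_y
```

Applying this repeatedly over the planning horizon yields approximate posterior state distributions that an open-loop robust plan can then constrain.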
Joint Estimation and Control for Multi-Target Passive Monitoring with an Autonomous UAV Agent
Savvas Papaioannou, Christos Laoudias, Panayiotis Kolios, Theocharis Theocharides, and Christos G. Panayiotou
In 2023 31st Mediterranean Conference on Control and Automation (MED), 2023
This work considers the problem of passively monitoring multiple moving targets with a single unmanned aerial vehicle (UAV) agent equipped with a direction-finding radar. This is in general a challenging problem due to the unobservability of the target states, and the highly non-linear measurement process. In addition to these challenges, in this work we also consider: a) environments with multiple obstacles where the targets need to be tracked as they manoeuvre through the obstacles, and b) multiple false-alarm measurements caused by the cluttered environment. To address these challenges we first design a model predictive guidance controller which is used to plan hypothetical target trajectories over a rolling finite planning horizon. We then formulate a joint estimation and control problem where the trajectory of the UAV agent is optimized to achieve optimal multi-target monitoring.
Distributed Control for 3D Inspection using Multi-UAV Systems
Angelos Zacharia, Savvas Papaioannou, Panayiotis Kolios, and Christos Panayiotou
In 2023 31st Mediterranean Conference on Control and Automation (MED), 2023
Cooperative control of multi-UAV systems has attracted substantial research attention due to its significance in various application sectors such as emergency response, search and rescue missions, and critical infrastructure inspection. This paper proposes a distributed control algorithm to generate collision-free trajectories that drive the multi-UAV system to completely inspect a set of 3D points on the surface of an object of interest. The objective of the UAVs is to cooperatively inspect the object of interest in the minimum amount of time. Extensive numerical simulations for a team of quadrotor UAVs inspecting a real 3D structure illustrate the validity and effectiveness of the proposed approach.
Model Predictive Control For Multiple Castaway Tracking with an Autonomous Aerial Agent
Andreas Anastasiou, Savvas Papaioannou, Panayiotis Kolios, and Christos G. Panayiotou
Over the past few years, a plethora of advancements in Unmanned Aerial Vehicle (UAV) technology has paved the way for UAV-based Search and Rescue (SAR) operations with transformative impact on the outcome of critical life-saving missions. This paper dives into the challenging task of multiple castaway tracking using an autonomous UAV agent. Leveraging the computing power of modern embedded devices, we propose a Model Predictive Control (MPC) framework for tracking multiple castaways assumed to drift afloat in the aftermath of a maritime accident. We consider a stationary radar sensor that is responsible for signaling the search mission by providing noisy measurements of each castaway’s initial state. The UAV agent aims at detecting and tracking the moving targets with its equipped onboard camera sensor that has limited sensing range. In this work, we also experimentally determine the probability of target detection from real-world data by training and evaluating various Convolutional Neural Networks (CNNs). Extensive qualitative and quantitative evaluations demonstrate the performance of the proposed approach.
2022
Distributed Estimation and Control for Jamming an Aerial Target With Multiple Agents
Savvas Papaioannou, Panayiotis Kolios, and Georgios Ellinas
This work proposes a distributed estimation and control approach in which a team of aerial agents equipped with radio jamming devices collaborate in order to intercept and concurrently track-and-jam a malicious target, while at the same time minimizing the induced jamming interference amongst the team. Specifically, it is assumed that the malicious target maneuvers in 3D space, avoiding collisions with obstacles and other 3D structures in its way, according to a stochastic dynamical model. Based on this, a track-and-jam control approach is proposed which allows a team of distributed aerial agents to decide their control actions online, over a finite planning horizon, to achieve uninterrupted radio-jamming and tracking of the malicious target, in the presence of jamming interference constraints. The proposed approach is formulated as a distributed model predictive control (MPC) problem and is solved using mixed integer quadratic programming (MIQP). Extensive evaluation of the system’s performance validates the applicability of the proposed approach in challenging scenarios with uncertain target dynamics, noisy measurements, and in the presence of obstacles.
Autonomous 4D Trajectory Planning for Dynamic and Flexible Air Traffic Management
Christian Vitale, Savvas Papaioannou, Panayiotis Kolios, and Georgios Ellinas
With an ever-increasing number of unmanned aerial vehicles (UAVs) in flight, there is a pressing need for scalable and dynamic air traffic management solutions that ensure efficient use of the airspace while maintaining safety and avoiding mid-air collisions. To address this need, a novel framework is developed for computing optimized 4D trajectories for UAVs that ensure dynamic and flexible use of the airspace, while maximizing the available capacity through the minimization of the aggregate traveling times. Specifically, a network manager (NM) is utilized that considers UAV requests (including start/target locations) and addresses inherent mobility uncertainties using a linear-Gaussian system, to compute efficient and safe trajectories. Through the proposed framework, a family of mathematical programming problems is derived to compute control profiles for both distributed and centralized implementations. Extensive simulation results are presented to demonstrate the applicability of the proposed framework to maximize air traffic throughput under probabilistic collision avoidance guarantees.
Multi-Agent Coordinated Close-in Jamming for Disabling a Rogue Drone
Panayiota Valianti, Savvas Papaioannou, Panayiotis Kolios, and Georgios Ellinas
Drones, including remotely piloted aircraft or unmanned aerial vehicles, have become extremely appealing over the recent years, with a multitude of applications and usages. However, they can potentially present major threats for security and public safety, especially when they fly across critical infrastructures and public spaces. This work investigates a novel counter-drone solution by proposing a multi-agent framework in which a team of pursuer drones cooperate in order to track and jam a rogue drone. Within the proposed framework, a joint mobility and power control solution is developed to optimize the respective decisions of each cooperating agent in order to best track and intercept the moving rogue drone. Both centralized and distributed variants of the joint optimization problem are developed and extensive simulations are conducted to evaluate the performance of the problem variants and to demonstrate the effectiveness of the proposed solution.
Integrated Ray-Tracing and Coverage Planning Control using Reinforcement Learning
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2022 IEEE 61st Conference on Decision and Control (CDC), 2022
In this work we propose a coverage planning control approach which allows a mobile agent, equipped with a controllable sensor (i.e., a camera) with limited sensing domain (i.e., finite sensing range and angle of view), to cover the surface area of an object of interest. The proposed approach integrates ray-tracing into the coverage planning process, thus allowing the agent to identify which parts of the scene are visible at any point in time. The problem of integrated ray-tracing and coverage planning control is first formulated as a constrained optimal control problem (OCP), which aims at determining the agent’s optimal control inputs over a finite planning horizon that minimize the coverage time. Efficiently solving the resulting OCP is however very challenging due to non-convex and nonlinear visibility constraints. To overcome this limitation, the problem is converted into a Markov decision process (MDP) which is then solved using reinforcement learning. In particular, we show that a controller which follows an optimal control law can be learned using off-policy temporal-difference control (i.e., Q-learning). Extensive numerical experiments demonstrate the effectiveness of the proposed approach for various configurations of the agent and the object of interest.
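The off-policy temporal-difference control rule mentioned in the abstract is standard Q-learning; a minimal tabular form (illustrative only, with hypothetical indexing of states and actions) is:

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One Q-learning update on a tabular action-value function
    Q[state, action], using the greedy value of the next state."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```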
UAV-based Receding Horizon Control for 3D Inspection Planning
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2022 International Conference on Unmanned Aircraft Systems (ICUAS), 2022
Nowadays, unmanned aerial vehicles or UAVs are being used for a wide range of tasks, including infrastructure inspection, automated monitoring and coverage. This paper investigates the problem of 3D inspection planning with an autonomous UAV agent which is subject to dynamical and sensing constraints. We propose a receding horizon 3D inspection planning control approach for generating optimal trajectories which enable an autonomous UAV agent to inspect a finite number of feature-points scattered on the surface of a cuboid-like structure of interest. The inspection planning problem is formulated as a constrained open-loop optimal control problem and is solved using mixed integer programming (MIP) optimization. Quantitative and qualitative evaluation demonstrates the effectiveness of the proposed approach.
2021
Towards Automated 3D Search Planning for Emergency Response Missions
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G Panayiotou, and Marios M Polycarpou
The ability to efficiently plan and execute automated and precise search missions using unmanned aerial vehicles (UAVs) during emergency response situations is imperative. Precise navigation between obstacles and time-efficient searching of 3D structures and buildings are essential for locating survivors and people in need in emergency response missions. In this work we address this challenging problem by proposing a unified search planning framework that automates the process of UAV-based search planning in 3D environments. Specifically, we propose a novel search planning framework which enables automated planning and execution of collision-free search trajectories in 3D by taking into account low-level mission constraints (e.g., the UAV dynamical and sensing model), mission objectives (e.g., the mission execution time and the UAV energy efficiency) and user-defined mission specifications (e.g., the 3D structures to be searched and minimum detection probability constraints). The capabilities and performance of the proposed approach are demonstrated through extensive simulated 3D search scenarios.
Deep Reinforcement Learning Multi-UAV Trajectory Control for Target Tracking
Jiseon Moon, Savvas Papaioannou, Christos Laoudias, Panayiotis Kolios, and Sunwoo Kim
In this article, we propose a novel deep reinforcement learning (DRL) approach for controlling multiple unmanned aerial vehicles (UAVs) with the ultimate purpose of tracking multiple first responders (FRs) in challenging 3-D environments in the presence of obstacles and occlusions. We assume that the UAVs receive noisy distance measurements from the FRs which are of two types, i.e., Line of Sight (LoS) and non-LoS (NLoS) measurements and which are used by the UAV agents in order to estimate the state (i.e., position) of the FRs. Subsequently, the proposed DRL-based controller selects the optimal joint control actions according to the Cramér–Rao lower bound (CRLB) of the joint measurement likelihood function to achieve high tracking performance. Specifically, the optimal UAV control actions are quantified by the proposed reward function, which considers both the CRLB of the entire system and each UAV’s individual contribution to the system, called global reward and difference reward, respectively. Since the UAVs take actions that reduce the CRLB of the entire system, tracking accuracy is improved by ensuring the reception of high quality LoS measurements with high probability. Our simulation results show that the proposed DRL-based UAV controller provides a highly accurate target tracking solution with a very low runtime cost.
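As a rough sketch of the kind of criterion such a CRLB-based reward builds on (simplified here to independent range-only measurements with Gaussian noise, ignoring the LoS/NLoS modeling used in the article), the Fisher information and position CRLB can be computed as follows; all names are hypothetical:

```python
import numpy as np

def range_only_crlb(target, uav_positions, sigma):
    """Fisher information J = sum_j e_j e_j^T / sigma^2 for range-only
    measurements, where e_j is the unit vector from UAV j to the target;
    returns the position CRLB trace(J^{-1})."""
    J = np.zeros((3, 3))
    for u in uav_positions:
        e = (target - u) / np.linalg.norm(target - u)
        J += np.outer(e, e) / sigma ** 2
    # J is invertible only with sufficient geometric diversity of the UAVs
    return np.trace(np.linalg.inv(J))
```

Rewarding actions that decrease this trace encourages UAV configurations from which the targets are, in principle, most accurately localizable.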
A Cooperative Multiagent Probabilistic Framework for Search and Track Missions
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
IEEE Transactions on Control of Network Systems, 2021
In this work, a robust and scalable cooperative multiagent searching and tracking (SAT) framework is proposed. Specifically, we study the problem of cooperative SAT of multiple moving targets by a group of autonomous mobile agents with limited sensing capabilities. We assume that the actual number of targets present is not known a priori and that target births/deaths can occur anywhere inside the surveillance region; thus efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges, we recursively compute and propagate in time the SAT density. Using the SAT density, we then develop decentralized cooperative look-ahead strategies for efficient SAT of an unknown number of targets inside a bounded surveillance area.
Downing a Rogue Drone with a Team of Aerial Radio Signal Jammers
Savvas Papaioannou, Panayiotis Kolios, and Georgios Ellinas
In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
This work proposes a novel distributed control framework in which a team of pursuer agents equipped with a radio jamming device cooperate in order to track and radio-jam a rogue target in 3D space, with the ultimate purpose of disrupting its communication and navigation circuitry. The target evolves in 3D space according to a stochastic dynamical model and it can appear and disappear from the surveillance area at random times. The pursuer agents cooperate in order to estimate the probability of target existence and its spatial density from a set of noisy measurements in the presence of clutter. Additionally, the proposed control framework allows a team of pursuer agents to optimally choose their radio transmission levels and their mobility control actions in order to ensure uninterrupted radio jamming to the target, as well as to avoid the jamming interference among the team of pursuer agents. Extensive simulation analysis of the system’s performance validates the applicability of the proposed approach.
3D Trajectory Planning for UAV-based Search Missions: An Integrated Assessment and Search Planning Approach
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2021 International Conference on Unmanned Aircraft Systems (ICUAS), 2021
The ability to efficiently plan and execute search missions in challenging and complex environments during natural and man-made disasters is imperative. In many emergency situations, precise navigation between obstacles and time-efficient searching around 3D structures is essential for finding survivors. In this work we propose an integrated assessment and search planning approach which allows an autonomous UAV (unmanned aerial vehicle) agent to plan and execute collision-free search trajectories in 3D environments. More specifically, the proposed search-planning framework aims to integrate and automate the first two phases (i.e., the assessment phase and the search phase) of a traditional search-and-rescue (SAR) mission. In the first stage, termed assessment-planning, we aim to find a high-level assessment plan which the UAV agent can execute in order to visit a set of points of interest. The generated plan of this stage guides the UAV to fly over the objects of interest, thus providing a first assessment of the situation at hand. In the second stage, termed search-planning, the UAV trajectory is further fine-tuned to allow the UAV to search in 3D (i.e., across all faces) the objects of interest for survivors. The performance of the proposed approach is demonstrated through extensive simulation analysis.
2020
Jointly-Optimized Searching and Tracking with Random Finite Sets
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In this paper, we investigate the problem of joint searching and tracking of multiple mobile targets by a group of mobile agents. The targets appear and disappear at random times inside a surveillance region and their positions are random and unknown. The agents have limited sensing range and receive noisy measurements from the targets. A decision and control problem arises, where the mode of operation (i.e., search or track) as well as the mobility control action for each agent, at each time instance, must be determined so that the collective goal of searching and tracking is achieved. We build our approach upon the theory of random finite sets (RFS) and we use Bayesian multi-object stochastic filtering to simultaneously estimate the time-varying number of targets and their states from a sequence of noisy measurements. We formulate the above problem as a non-linear binary program (NLBP) and show that it can be approximated by a genetic algorithm. Finally, to study the effectiveness and performance of the proposed approach we have conducted extensive simulation experiments.
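As a generic illustration of approximating a non-linear binary program with a genetic algorithm (not the paper’s implementation; the operators and parameters below are hypothetical defaults), a minimal GA over binary decision vectors looks like this:

```python
import numpy as np

def binary_ga(fitness, n_bits, pop_size=40, generations=100,
              p_cross=0.8, p_mut=0.02, seed=0):
    """Minimal genetic algorithm over binary vectors: tournament selection,
    single-point crossover and bit-flip mutation, returning the best found."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        idx = [max(rng.choice(pop_size, 2), key=lambda i: fit[i])
               for _ in range(pop_size)]
        parents = pop[np.array(idx)]
        children = parents.copy()
        # single-point crossover on consecutive pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = int(rng.integers(1, n_bits))
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation
        flip = rng.random(children.shape) < p_mut
        children[flip] ^= 1
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(fit))]
```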
Coordinated CRLB-based Control for Tracking Multiple First Responders in 3D Environments
Savvas Papaioannou, Sungjin Kim, Christos Laoudias, Panayiotis Kolios, Sunwoo Kim, Theocharis Theocharides, Christos Panayiotou, and Marios Polycarpou
In 2020 International Conference on Unmanned Aircraft Systems (ICUAS), 2020
In this paper we study the problem of tracking a team of first responders with a fleet of autonomous mobile flying agents, operating in 3D environments. We assume that the first responders exhibit stochastic dynamics and evolve inside challenging environments with obstacles and occlusions. As a result, the mobile agents probabilistically receive noisy line-of-sight (LoS), as well as non-line-of-sight (NLoS) range measurements from the first responders. In this work, we propose a novel estimation (i.e., estimating the position of multiple first responders over time) and control (i.e., controlling the movement of the agents) framework based on the Cramér-Rao lower bound (CRLB). More specifically, we analytically derive the CRLB of the measurement likelihood function which we use as a control criterion to select the optimal joint control actions over all agents, thus achieving optimized tracking performance. The effectiveness of the proposed multi-agent multi-target estimation and control framework is demonstrated through an extensive simulation analysis.
Multi-Agent Coordinated Interception of Multiple Rogue Drones
Panayiota Valianti, Savvas Papaioannou, Panayiotis Kolios, and Georgios Ellinas
In 2020 IEEE Global Communications Conference (GLOBECOM), 2020
Over the last few years there has been an unprecedented interest in unmanned aerial vehicles (UAVs). However, drones potentially pose great threats to security and public safety, especially when their malicious use involves critical infrastructures and public spaces. This work proposes a multiagent counter-drone system where a team of pursuer drones cooperate in order to track and jam multiple rogue drones. Specifically, a cooperative multi-agent approach is proposed in which the best joint mobility and power control actions of each agent are chosen so that the rogue drones are optimally tracked and jammed over time. Two variants of the joint optimization problem are developed and extensive simulations are conducted so as to evaluate the performance of the proposed approach.
Cooperative Simultaneous Tracking and Jamming for Disabling a Rogue Drone
Savvas Papaioannou, Panayiotis Kolios, Christos G. Panayiotou, and Marios M. Polycarpou
In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
This work investigates the problem of simultaneous tracking and jamming of a rogue drone in 3D space with a team of cooperative unmanned aerial vehicles (UAVs). We propose a decentralized estimation, decision and control framework in which a team of UAVs cooperate in order to a) optimally choose their mobility control actions that result in accurate target tracking and b) select the desired transmit power levels which cause uninterrupted radio jamming and thus ultimately disrupt the operation of the rogue drone. The proposed decision and control framework allows the UAVs to reconfigure themselves in 3D space such that the cooperative simultaneous tracking and jamming (CSTJ) objective is achieved, while at the same time ensuring that the unwanted inter-UAV jamming interference caused during CSTJ is kept below a specified critical threshold. Finally, we formulate this problem under challenging conditions, i.e., uncertain dynamics, noisy measurements and false alarms. Extensive simulation experiments illustrate the performance of the proposed approach.
2019
Decentralized Search and Track with Multiple Autonomous Agents
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2019 IEEE 58th Conference on Decision and Control (CDC), 2019
In this paper we study the problem of cooperative searching and tracking (SAT) of multiple moving targets with a group of autonomous mobile agents that exhibit limited sensing capabilities. We assume that the actual number of targets is not known a priori and that target births/deaths can occur anywhere inside the surveillance region. For this reason efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges we augment the classical Probability Hypothesis Density (PHD) filter with the ability to propagate in time the search density in addition to the target density. Based on this, we develop decentralized cooperative look-ahead strategies for efficient searching and tracking of an unknown number of targets inside a bounded surveillance area. The performance of the proposed approach is demonstrated through simulation experiments.
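For reference, the classical PHD prediction recursion that the abstract augments with a search density has the standard form below (spawning terms omitted; notation follows the general PHD-filter literature rather than the paper):

```latex
% D_{k-1} is the posterior intensity, \gamma_k the birth intensity,
% p_S the survival probability and f_{k|k-1} the transition density.
D_{k|k-1}(x) \;=\; \gamma_k(x) \;+\; \int p_S(\zeta)\, f_{k|k-1}(x \mid \zeta)\, D_{k-1}(\zeta)\, d\zeta
```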
Probabilistic Search and Track with Multiple Mobile Agents
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, and Marios M. Polycarpou
In 2019 International Conference on Unmanned Aircraft Systems (ICUAS), 2019
In this paper we are interested in the task of searching and tracking multiple moving targets in a bounded surveillance area with a group of autonomous mobile agents. More specifically, we assume that targets can appear and disappear at random times inside the surveillance region and their positions are random and unknown. The agents have a limited sensing range, and due to sensor imperfections they receive noisy measurements from the targets. In this work we utilize the theory of random finite sets (RFS) to capture the uncertainty in the time-varying number of targets and their states and we propose a decision and control framework, in which the mode of operation (i.e. search or track) as well as the mobility control action for each agent, at each time instance, are determined so that the collective goal of searching and tracking is achieved. Extensive simulation results demonstrate the effectiveness and performance of the proposed solution.
2017
Tracking People in Highly Dynamic Industrial Environments
Savvas Papaioannou, Andrew Markham, and Niki Trigoni
To date, the majority of positioning systems have been designed to operate within environments that have a long-term stable macro-structure with potential small-scale dynamics. These assumptions allow the existing positioning systems to produce and utilize stable maps. However, in highly dynamic industrial settings these assumptions are no longer valid and the task of tracking people is more challenging due to the rapid large-scale changes in structure. In this paper, we propose a novel positioning system for tracking people in highly dynamic industrial environments, such as construction sites. The proposed system leverages the existing CCTV camera infrastructure found in many industrial settings along with radio and inertial sensors within each worker’s mobile phone to accurately track multiple people. This multi-target multi-sensor tracking framework also allows our system to use cross-modality training in order to deal with the environment dynamics. In particular, we show how our system uses cross-modality training in order to automatically keep track of environmental changes (i.e., new walls) by utilizing occlusion maps. In addition, we show how these maps can be used in conjunction with social forces to accurately predict human motion and increase the tracking accuracy. We have conducted extensive real-world experiments in a construction site showing significant accuracy improvement via cross-modality training and the use of social forces.
2016
Poster Abstract: Efficient Visual Positioning with Adaptive Parameter Learning
Hongkai Wen, Sen Wang, Ronnie Clark, Savvas Papaioannou, and Niki Trigoni
In 2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 2016
Positioning with vision sensors is gaining popularity, since it is more accurate and requires much less bootstrapping and training effort. However, one of the major limitations of the existing solutions is the expensive visual processing pipeline: on resource-constrained mobile devices, it could take up to tens of seconds to process one frame. To address this, we propose a novel learning algorithm, which adaptively discovers the place-dependent parameters for visual processing, such as which parts of the scene are more informative, and what kind of visual elements one would expect, as it is employed more and more by the users in a particular setting. With such meta-information, our positioning system dynamically adjusts its behaviour, to localise the users with minimum effort. Preliminary results show that the proposed algorithm can reduce the cost of visual processing significantly, and achieve sub-metre positioning accuracy.
2015
Accurate Positioning via Cross-Modality Training
Savvas Papaioannou, Hongkai Wen, Zhuoling Xiao, Andrew Markham, and Niki Trigoni
In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (SenSys), Seoul, South Korea, 2015
In this paper we propose a novel algorithm for tracking people in highly dynamic industrial settings, such as construction sites. We observed both short term and long term changes in the environment; people were allowed to walk in different parts of the site on different days, the field of view of fixed cameras changed over time with the addition of walls, whereas radio and magnetic maps proved unstable with the movement of large structures. To make things worse, the uniforms and helmets that people wear for safety make them very hard to distinguish visually, necessitating the use of additional sensor modalities. In order to address these challenges, we designed a positioning system that uses both anonymous and id-linked sensor measurements and explores the use of cross-modality training to deal with environment dynamics. The system is evaluated in a real construction site and is shown to outperform state of the art multi-target tracking algorithms designed to operate in relatively stable environments.
Opportunistic Radio Assisted Navigation for Autonomous Ground Vehicles
Hongkai Wen, Yiran Shen, Savvas Papaioannou, Winston Churchill, Niki Trigoni, and Paul Newman
In 2015 International Conference on Distributed Computing in Sensor Systems (DCOSS), 2015
Navigating autonomous ground vehicles with visual sensors has many advantages - it does not rely on global maps, yet is accurate and reliable even in GPS-denied environments. However, due to the limitation of the camera field of view, one typically has to record a large number of visual experiences for practical navigation. In this paper, we explore new avenues in linking together visual experiences, by opportunistically harvesting and sharing a variety of radio signals emitted by surrounding stationary access points and mobile devices. We propose a novel navigation approach, which exploits side-channel information of co-location to thread up visually-separated experiences with short exploration phases. The proposed approach empowers users to trade travel time for manual navigation effort, allowing them to choose the itinerary that best serves their needs. We evaluate the proposed approach with data collected from a typical urban area, and show that it achieves much better navigation performance in both reachability and cost, compared with state-of-the-art approaches that use only visual information.
2014
Fusion of Radio and Camera Sensor Data for Accurate Indoor Positioning
Savvas Papaioannou, Hongkai Wen, Andrew Markham, and Niki Trigoni
In 2014 IEEE 11th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), 2014
Indoor positioning systems have received a lot of attention recently due to their importance for many location-based services, e.g. indoor navigation and smart buildings. Lightweight solutions based on WiFi and inertial sensing have gained popularity, but are not fit for demanding applications, such as expert museum guides and industrial settings, which typically require sub-meter location information. In this paper, we propose a novel positioning system, RAVEL (Radio And Vision Enhanced Localization), which fuses anonymous visual detections captured by widely available camera infrastructure, with radio readings (e.g. WiFi radio data). Although visual trackers can provide excellent positioning accuracy, they are plagued by issues such as occlusions and people entering/exiting the scene, preventing their use as a robust tracking solution. By incorporating radio measurements, visually ambiguous or missing data can be resolved through multi-hypothesis tracking. We evaluate our system in a complex museum environment with dim lighting and multiple people moving around in a space cluttered with exhibit stands. Our experiments show that although the WiFi measurements are not by themselves sufficiently accurate, when they are fused with camera data, they become a catalyst for pulling together ambiguous, fragmented, and anonymous visual tracklets into accurate and continuous paths, yielding typical errors below 1 meter.
WiFi Sensors Meet Visual Tracking For An Accurate Positioning System
Savvas Papaioannou, Hongkai Wen, Zhuoling Xiao, Andrew Markham, and Niki Trigoni
In 11th European Conference on Wireless Sensor Networks, (EWSN), Feb 2014
In this poster abstract, we propose a new positioning technique that can localize people by combining WiFi information from their mobile devices with visual tracking. We show that the proposed approach can improve visual tracking by resolving motion and appearance ambiguities, while at the same time uniquely identifying each person with their device ID.
2013
A Novel Low-Power Embedded Object Recognition System Working at Multi-Frames per Second
Antonis Nikitakis, Savvas Papaioannou, and Ioannis Papaefstathiou
One very important challenge in the field of multimedia is the implementation of fast and detailed Object Detection and Recognition systems. In particular, in the current state-of-the-art mobile multimedia systems, it is highly desirable to detect and locate certain objects within a video frame in real time. Although a significant number of Object Detection and Recognition schemes have been developed and implemented, triggering very accurate results, the vast majority of them cannot be applied in state-of-the-art mobile multimedia devices; this is mainly due to the fact that they are highly complex schemes that require a significant amount of processing power, while they are also time consuming and very power hungry. In this article, we present a novel FPGA-based embedded implementation of a very efficient object recognition algorithm called Receptive Field Cooccurrence Histograms Algorithm (RFCH). Our main focus was to increase its performance so as to be able to handle the object recognition task of today’s highly sophisticated embedded multimedia systems while keeping its energy consumption at very low levels. Our low-power embedded reconfigurable system is at least 15 times faster than the software implementation on a low-voltage high-end CPU, while consuming at least 60 times less energy. Our novel system is also 88 times more energy efficient than the recently introduced low-power multi-core Intel devices which are optimized for embedded systems. This is, to the best of our knowledge, the first system presented that can execute the complete complex object recognition task at a multi frame per second rate while consuming minimal amounts of energy, making it an ideal candidate for future embedded multimedia systems.
2012
A novel low-power embedded object recognition system working at multi-frames per second (Best Paper Award)
Antonis Nikitakis, Savvas Papaioannou, and Ioannis Papaefstathiou
In 2012 IEEE 10th Symposium on Embedded Systems for Real-time Multimedia, Mar 2012
One very important challenge in the field of multimedia is the implementation of fast and detailed Object Detection and Recognition systems. In particular, in the current state-of-the-art mobile multimedia systems, it is highly desirable to detect and locate certain objects within a video frame in real time. In this paper, we present a novel FPGA-based embedded implementation of a very efficient object recognition algorithm called Receptive Field Cooccurrence Histograms Algorithm (RFCH). Our main focus was to increase its performance so as to be able to handle the object recognition task of today’s highly sophisticated embedded multimedia systems while keeping its energy consumption at very low levels. Our low-power embedded reconfigurable system is at least 15 times faster than the software implementation on a low-voltage high-end CPU, while consuming at least 60 times less energy. Our novel system is also 88 times more energy efficient than the recently introduced low-power multi-core Intel devices which are optimized for embedded systems.
SIPE: A Sensor Information Processing Engine for Wellness Management Applications
Andreas Savvides, Savvas Papaioannou, Sokratis Kartakis, Brendan Kohler, and George Demiris
In Proceedings of the 5th ACM International Conference on Pervasive Technologies Related to Assistive Environments (PETRA), Mar 2012