Leading Research

Autonomous Driving Technology

SLAM

  • AIM-SLAM: Dense Monocular SLAM via Adaptive and Informative Multi-View Keyframe Prioritization with Foundation Model
  • SE(3)-LIO: Smooth IMU Propagation With Jointly Distributed Poses on SE(3) Manifold for Accurate and Robust LiDAR-Inertial Odometry
  • LODESTAR: Degeneracy-Aware LiDAR-Inertial Odometry with Adaptive Schmidt-Kalman Filter and Data Exploitation
  • LVI-Q: Robust LiDAR-Visual-Inertial-Kinematic Odometry for Quadruped Robots Using Tightly-Coupled and Efficient Alternating Optimization
  • DynaVINS++: Robust Visual-Inertial State Estimator in Dynamic Environments by Adaptive Truncated Least Squares and Stable State Recovery
  • DynaVINS: A Visual-Inertial SLAM for Dynamic Environments
  • AdaLIO: Adaptive LiDAR Inertial Odometry
  • STEP: State Estimator for Legged Robots Using a Preintegrated Foot Velocity Factor
  • UV-SLAM: Unconstrained Line-based SLAM Using Vanishing Points for Structural Mapping
  • NR-UIO: NLOS-Robust UWB-Inertial Odometry based on IMM and NLOS Factor Estimation
  • ALVIO: Adaptive Line Visual Inertial Odometry
  • HG-SLAM: Hierarchical Graph-based SLAM
  • G2P-SLAM: Generalized Grouping and Pruning-Based RGB-D SLAM Framework for Mobile Robots in Low-Dynamic Environments
  • GP-SLAM: Grouping Nodes and Pruning Constraints SLAM
  • DV-SLAM: Dual-sensor-based Vector-field SLAM
  • MU-SLAM: Magnetic-field-based Underground SLAM

Localization

  • Chamelion: Reliable Change Detection for Long-Term LiDAR Mapping in Transient Environments
  • SaWa-ML: Structure-Aware Pose Correction and Weight Adaptation-Based Robust Multi-Robot Localization
  • Multi-Mapcher: Loop Closure Detection-Free Heterogeneous LiDAR Multi-Session SLAM Leveraging Outlier-Robust Registration for Autonomous Vehicles
  • Quatro++: Robust Global Registration Exploiting Ground Segmentation for Loop Closing in LiDAR SLAM
  • BRM Localization: Building Ratio Map Localization for UAVs
  • GP-ICP: Ground Plane ICP for Mobile Robots
  • Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor
  • ERASOR: Egocentric Ratio of Pseudo Occupancy-based Dynamic Object Removal for Static 3D Point Cloud Map Building

Control

  • OpenHEART: Opening Heterogeneous Articulated Objects with a Legged Manipulator
  • DreamFLEX: Learning Fault-Aware Quadrupedal Locomotion Controller for Anomaly Situation in Rough Terrains
  • DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning
  • Retro-RL: Reinforcing Nominal Controller with Deep Reinforcement Learning for Tilting-Rotor Drones

Navigation

  • DreamFlow: Local Navigation Beyond Observation via Conditional Flow Matching in the Latent Space
  • CLUE: Adaptively Prioritized Contextual Cues by Leveraging a Unified Semantic Map for Effective Zero-Shot Object-Goal Navigation
  • TRG-planner: Traversal Risk Graph-Based Path Planning in Unstructured Environments for Safe and Efficient Navigation
  • Peacock Exploration: A Lightweight Exploration for UAV using Control-Efficient Trajectory
  • TRAVEL: Traversable Ground and Above-Ground Object Segmentation Using Graph Representation of 3D LiDAR Scans
  • MLCPP: Multi-Layer Coverage Path Planner
  • eARC-Theta*: extended Angular-Rate-Constrained Theta*

Physical AI

Artificial Intelligence

  • E2EGS: Event-to-Edge Gaussian Splatting for Pose-Free 3D Reconstruction
  • VIRD: View-Invariant Representation through Dual-Axis Transformation for Cross-View Pose Estimation
  • MambaGlue: Fast and Robust Local Feature Matching With Mamba
  • PIDLoc: Cross-View Pose Optimization Network Inspired by PID Controllers
  • CoCoA-Mix: Confusion-and-Confidence-Aware Mixture Model for Context Optimization
  • Contextrast: Contextual Contrastive Learning for Semantic Segmentation
  • Struct-MDC: Mesh-Refined Unsupervised Depth Completion Leveraging Structural Regularities from Visual SLAM
  • MSDPN: Multi-Stage Depth Prediction Neural Network
  • RONet: Range-Only Network

Future Robotics Technology