Robotic Arms
Application Scenarios
- Industrial: welding and spray painting in automotive manufacturing, electronics assembly, pharmaceutical production
- Service: surgical assistance, warehouse logistics
Training Methods
- Combined imitation learning and reinforcement learning
- Behavior cloning (BC): demonstration data collected via human teleoperation
- Action chunking (the ACT algorithm): mitigates compounding errors in long-horizon tasks
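At its core, behavior cloning is supervised regression on teleoperated (state, action) pairs. A minimal NumPy sketch with synthetic demonstration data (the linear policy, dimensions, and "expert" mapping are illustrative, not from any real robot):

```python
import numpy as np

# Synthetic "teleoperation" demonstrations: states -> expert actions.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 6))        # e.g. joint angles + velocities
W_expert = rng.normal(size=(6, 3))        # hidden expert mapping (demo only)
actions = states @ W_expert               # expert actions, e.g. joint torques

# Behavior cloning = fit a policy to imitate the expert (least squares here;
# in practice this would be a neural network trained with SGD).
W_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy reproduces the expert on new states.
test_state = rng.normal(size=(1, 6))
print(np.allclose(test_state @ W_policy, test_state @ W_expert, atol=1e-6))
```

Compounding error enters when the cloned policy drifts into states absent from the demonstrations; predicting chunks of future actions, as ACT does, is one way to reduce how often the policy re-queries itself from off-distribution states.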
Research Trends
- Multimodal perception fusion
- Adaptive grasping strategies
- Zero-shot transfer learning
- RT-1: zero-shot generalization across everyday manipulation tasks
Wheeled Mobile Robots
Types
- Differential-drive robots
- Delivery robots
- Autonomous vehicles
Technology Implementation
- Traditional methods: SLAM for mapping and localization; A* and Dijkstra for global planning; DWA for local planning
- Deep reinforcement learning: end-to-end visual navigation
- Imitation learning: training on human driving data
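The DWA entry above can be made concrete with a toy rollout-based local planner: sample velocity commands, forward-simulate each for a short horizon, discard rollouts that hit obstacles, and keep the command whose endpoint is closest to the goal. A minimal sketch, assuming a unicycle motion model; the sampling resolution, obstacle radius, and scenario values are all illustrative:

```python
import math

def dwa_step(pose, v_max, w_max, goal, obstacles, dt=0.1, horizon=1.0):
    """One Dynamic Window Approach step: sample (v, w) pairs, roll out
    short arcs, and pick the collision-free command whose endpoint is
    closest to the goal."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v in [v_max * i / 4 for i in range(5)]:            # sampled speeds
        for w in [w_max * (i - 4) / 4 for i in range(9)]:  # sampled turn rates
            x, y, th = pose
            ok = True
            for _ in range(int(horizon / dt)):             # forward simulate
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                if any(math.hypot(x - ox, y - oy) < 0.3 for ox, oy in obstacles):
                    ok = False                             # rollout collides
                    break
            if ok:
                cost = math.hypot(x - goal[0], y - goal[1])
                if cost < best_cost:
                    best, best_cost = (v, w), cost
    return best

# Goal straight ahead, obstacle safely off to the side: drive straight.
v, w = dwa_step((0, 0, 0), v_max=1.0, w_max=1.0, goal=(2, 0), obstacles=[(1, 0.5)])
```

A production DWA would also constrain the sampled window by acceleration limits and blend goal distance with clearance and speed terms in the cost.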
Hybrid Control
- High-level: A* path planning
- Low-level: reinforcement-learning-based local obstacle avoidance
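The high-level half of this split (A* globally, learned control locally) can be sketched as grid-based A* with a Manhattan heuristic. The 3x3 occupancy grid below is a hypothetical example:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked middle row
```

In the hybrid scheme, each waypoint of the returned path would be handed to the low-level learned policy as a local goal.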
Humanoid Robots
Control Methods
- Traditional: model-based optimal control, PID, ZMP (zero-moment-point) control
- Deep reinforcement learning: Autonomous gait learning in simulation environments
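The PID entry above is the textbook discrete controller: a control signal built from the error, its running integral, and its rate of change. A minimal sketch; the gains and the toy first-order plant are illustrative, not tuned for any real humanoid joint:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a joint angle toward 0.5 rad on a toy integrator plant.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(3000):                       # 30 s of simulated time
    angle += pid.step(0.5, angle) * 0.01    # crude first-order plant
print(round(angle, 3))
```

Real humanoid stacks layer many such loops (per-joint torque or position control) beneath the model-based balance controllers listed above.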
Development Roadmap
- Basic stage: static balance (center-of-mass offset < 2 cm)
- Intermediate stage: dynamic walking (> 100 continuous steps)
- Advanced stage: complex manipulation (success rate > 90%)
Unmanned Aerial Vehicles (Drones)
Control Methods
- Traditional: PID control, MPC (model predictive control)
- Reinforcement learning: high-speed gate traversal (drone racing), dynamic obstacle avoidance
- Imitation learning: Hover control, trajectory following
Technical Challenges
- High-frequency decision making (100Hz+)
- Ultra-low latency (<10ms)
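Together, these two constraints amount to a fixed-rate control loop with a per-iteration latency budget. A minimal sketch of that structure; the 100 Hz rate and 10 ms deadline mirror the figures above, while the policy stub and function names are placeholders:

```python
import time

def control_loop(policy, rate_hz=100, deadline_ms=10.0, steps=20):
    """Run `policy` at a fixed rate, counting iterations that blow the
    latency budget (missed deadlines mean degraded or unsafe flight)."""
    period = 1.0 / rate_hz
    missed = 0
    for _ in range(steps):
        start = time.perf_counter()
        policy()                                        # sense -> compute -> actuate
        elapsed = time.perf_counter() - start
        if elapsed * 1e3 > deadline_ms:
            missed += 1                                 # deadline overrun
        time.sleep(max(0.0, period - elapsed))          # hold the 100 Hz rate
    return missed

# A trivially cheap policy should never miss the 10 ms budget.
missed = control_loop(lambda: sum(range(1000)))
```

On a real flight controller this loop would run under a real-time scheduler, and a learned policy must keep its forward pass (plus sensor I/O) inside the same budget.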