
# Xiaqi X02
## Product Overview

The general-purpose humanoid robot **Xiaqi X02** is designed according to human factors engineering principles, bringing each joint closer to natural human movement and delivering a more natural, comfortable interaction experience. It is suited to marketing and customer service, exhibition explanation, retail navigation, front-desk reception, business consultation, and similar scenarios.

Xiaqi X02 offers multimodal interaction, including multi-turn voice dialogue, rich natural expressions, and natural body movements. It supports accurate face tracking, lip-movement detection, and sound source localization, as well as multilingual interaction and Chinese dialects. The robot navigates autonomously across indoor surfaces such as marble, carpet, and wooden floors, and can operate in dynamic indoor environments of up to 100,000 m².

---

## Core Technologies

### Anthropomorphic Design & Human-Centric Engineering

* Based on human factors engineering for a natural interaction experience
* Custom appearance development cycle ≤ 2 months for diverse scenarios
* Modular humanoid skeleton structure supports personalized customization

### Intelligent Interaction & Smooth Experience

* Task-based scenario programming supporting multimodal interaction (voice, motion, etc.)
* Integration with external terminals for collaborative interaction
* Fusion of sound source localization, face/lip detection, and body motion tracking for precise responses
* Natural speech and body language for a human-like service experience

### Path Planning & Reliable Mobility

* Global optimal path planning with continuous motion optimization
* Joint system supports thousands of hours of stable operation
* Agile, flexible, and reliable movement
* Intelligent obstacle avoidance with three-layer safety monitoring (business/system/hardware) and strong anti-interference capability

---

## Product Highlights

### Plug-and-Play, Minimal Maintenance

* Modular hardware design with pre-installed embodied interaction software
* Easy configuration with the built-in dgrobot task executor for one-click action deployment
* Remote monitoring, diagnostics, and automatic updates and upgrades

### Multimodal Emotion Intelligence

* 3D emotion analysis across tone, expression, gaze, behavior, and spatial context
* Accurate recognition of customer emotions and intent
* Real-time emotional adaptation for more attentive and efficient service
* Built-in emotion analysis library for enhanced understanding and expression

### AI-Powered Intelligent Decision Making

* Embodied large model enables generalized motion execution for complex scenario tasks
* Multimodal large models support multi-channel information processing for smoother interaction
* LLM + RAG architecture builds an enterprise knowledge base for professional consulting
* Performance: TTFT ≤ 2 s (time to first token), TPOT ≤ 50 ms (average time per output token)
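
The two latency targets above can be made concrete with a small sketch. This is not Xiaqi's measurement code; the function names, the request timestamp, and the token-arrival times below are hypothetical, assuming only that you record a timestamp (in seconds) when the request is sent and when each output token arrives.

```python
# Hypothetical sketch of the two LLM latency metrics quoted above:
# TTFT (time to first token) and TPOT (time per output token).

def ttft(request_time: float, token_times: list[float]) -> float:
    """Delay from sending the request to receiving the first token."""
    return token_times[0] - request_time

def tpot(token_times: list[float]) -> float:
    """Average gap between consecutive tokens after the first
    (decode phase only, which is why the first token is excluded)."""
    gaps = [b - a for a, b in zip(token_times, token_times[1:])]
    return sum(gaps) / len(gaps)

# Illustrative numbers: request at t = 0.0 s, first token after 1.5 s,
# then one token every 40 ms.
times = [1.5 + 0.04 * i for i in range(6)]
print(ttft(0.0, times))       # 1.5  -> within the TTFT <= 2 s target
print(round(tpot(times), 3))  # 0.04 -> within the TPOT <= 50 ms target
```

Note that TTFT and TPOT measure different phases: TTFT is dominated by prompt processing (prefill), while TPOT reflects steady-state generation speed, so a system can meet one target and miss the other.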