Robot Brain Technology

Robot Brain: Full-stack technology from chips to applications

We decompose the "Robot Brain" into five layers: Chip, Platform, Models, Modules, and Applications, building a complete ecosystem that embodied developers can use directly.

Embodied AI SoC

Grounded in the requirements of embodied scenarios, we design chips featuring multimodal processing, real-time responsiveness, low power consumption, and a rich set of interfaces.

Why develop our own Embodied AI SoC?

General-purpose edge chips can support early-stage experimentation, but once we begin seriously considering large-scale deployment, long-term operation, and cost structures, embodied robots reveal very specific requirements at the chip level.

  • Processing multiple modalities simultaneously, such as vision, language, and motion
  • Operating stably over long periods within limited volume and thermal constraints
  • Meeting stringent constraints on real-time performance and safety
  • Ensuring headroom for future algorithm updates

With these factors in mind, we chose to design our own dedicated SoC for embodied robots. We build the unique computational requirements of embodied systems and long-term capacity planning into our chip design process from the very start.

Key Points in the Chip Layer

At the chip layer, we primarily focus on system-level requirements and constraints.

Requirements for Compute Capability and Task Distribution

  • Approximate compute-demand ratios among perception, decision making, and control tasks in embodied systems
  • The coordination between online inference and on-device learning/fine-tuning

Storage and Dataflow Patterns

  • The internal routing of multimodal data within the robot brain
  • How to prioritize and guarantee latency sensitive components within the decision making pathway

Power Consumption and Thermal Design Constraints

  • Typical power-budget allocation across diverse long-duration operating scenarios
  • Design principles for ensuring safe and stable long-duration operation under varying physical environments

Interfaces Between the Software Stack and the SoC

  • Designing stable high-level interfaces so that training and model export on the platform side remain seamlessly connected with execution on the SoC over the long term.
  • Maximizing the computational performance of the dedicated SoC through low-level optimization paths, without rigidly locking in or constraining the upper-layer software.

These system-level requirements and constraints will continue to guide the design and evolution of our Embodied AI SoC.

The Role of Embodied AI SoC

For HanabiAI, the embodied AI SoC is not merely a single chip; it is the foundational pivot that supports our entire technology stack.

Downward: Collaboration with the manufacturing ecosystem, packaging partners, and module partners

Upward: Alignment with typical task loads in platform training and evaluation

Lateral: Providing compute and power consumption boundaries for the evolution of the model and module layers

Embodied AI Compute Platform

Connecting HPC, AI, and Embodied Workflows

Introducing Platform

Platform is a general-purpose computing platform built for universities, research institutions, and R&D teams, serving as a foundation for integrated management and scheduling of diverse computing tasks.

  • Conventional HPC simulations and numerical computing
  • Machine learning / deep learning training and inference
  • Large-scale batch processing and data preprocessing
  • Simulation and reinforcement learning tasks for embodied robots

It is an integrated HPC & AI computing platform for research and engineering, heavily optimized for embodied scenarios yet not limited to them.

Platform: From General HPC & AI to Embodied Development

Built on a general-purpose computing foundation, it provides a more efficient simulation, training, and evaluation experience tailored for embodied systems.

Integrated Submission and Management of Tasks

Whether for HPC simulations or AI training, tasks can be submitted, queued, and monitored through a unified entry point, eliminating the need for users to handle different environments or scripts for each system.
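As an illustration only (this is not the platform's actual API; every name here is hypothetical), a unified entry point can normalize both HPC and AI workloads into one submission record before routing them to a scheduler:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Job:
    """One submission record shared by HPC and AI workloads."""
    name: str
    kind: str                      # e.g. "hpc" or "ai"
    command: str
    resources: Dict[str, int] = field(default_factory=dict)

class UnifiedQueue:
    """Hypothetical single entry point: submit, queue, and monitor."""
    def __init__(self) -> None:
        self._queue: List[Job] = []
        self._status: Dict[str, str] = {}

    def submit(self, job: Job) -> str:
        # In a real system this would dispatch to Slurm, Kubernetes, etc.
        self._queue.append(job)
        self._status[job.name] = "queued"
        return job.name

    def status(self, name: str) -> str:
        return self._status.get(name, "unknown")

q = UnifiedQueue()
q.submit(Job("cfd-run", "hpc", "mpirun ./solver", {"cpus": 64}))
q.submit(Job("policy-train", "ai", "python train.py", {"gpus": 8}))
```

The point of the sketch is that one record type covers both job kinds, so users never touch backend-specific scripts directly.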

Multi-Cluster and Multi-Environment Management

It supports multiple clusters, different partitions, and cloud resources, allowing users to view resource utilization and task queues through a single unified interface.
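A minimal sketch of what such a unified view could look like (the cluster names and fields below are invented for illustration, not taken from the platform):

```python
class Cluster:
    """Hypothetical snapshot of one cluster, partition, or cloud pool."""
    def __init__(self, name: str, total_gpus: int, used_gpus: int, queued: int):
        self.name = name
        self.total_gpus = total_gpus
        self.used_gpus = used_gpus
        self.queued = queued

def unified_view(clusters: list) -> dict:
    """Fold heterogeneous resources into one utilization/queue summary."""
    return {
        c.name: {
            "utilization": c.used_gpus / c.total_gpus,
            "queued": c.queued,
        }
        for c in clusters
    }

view = unified_view([
    Cluster("campus-a100", 64, 48, 12),   # on-premise partition (example)
    Cluster("cloud-burst", 32, 8, 0),     # elastic cloud pool (example)
])
```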

Basic Observability and Log Aggregation

It centralizes the visualization of task states, resource usage, and key logs, improving research reproducibility and making operations and troubleshooting easier.
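To make the idea concrete, here is a toy aggregator, assuming tasks push their own state, usage, and log lines to one place (all names and log messages are hypothetical):

```python
import collections
import time

class Observer:
    """Hypothetical aggregator: one place for states, usage, and key logs."""
    def __init__(self) -> None:
        self.states: dict = {}
        self.usage: dict = {}
        self.logs = collections.defaultdict(list)

    def report(self, task: str, state: str, cpu_pct: int, line: str = None):
        # Each report overwrites the current state/usage and appends the log.
        self.states[task] = state
        self.usage[task] = cpu_pct
        if line is not None:
            self.logs[task].append((time.time(), line))

    def summary(self) -> dict:
        """One consolidated view, suitable for a dashboard."""
        return {t: (self.states[t], self.usage[t]) for t in self.states}

obs = Observer()
obs.report("sim-001", "running", 87, "episode 120 done")
obs.report("train-a", "failed", 0, "out of memory")
```

Timestamped log lines kept alongside the task record are what make a failed run reproducible and debuggable after the fact.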

Large-Scale Parallel Simulation and Scenario Management

It provides unified management of embodied-AI simulation tasks and scene data, supporting simultaneous generation and playback of multiple scenes.
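A sketch of the parallel-generation-plus-replay pattern, assuming each scene is driven by its own seed so a run can be replayed deterministically (the scene function is a stand-in, not the platform's simulator):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_scene(scene_id: int, seed: int) -> dict:
    """Hypothetical stand-in for one simulated scene rollout."""
    rng = random.Random(seed)          # per-scene RNG makes replay exact
    steps = rng.randint(100, 500)
    return {"scene": scene_id, "steps": steps, "seed": seed}

def run_batch(n_scenes: int, base_seed: int = 0) -> list:
    """Generate many scenes in parallel; identical seeds give identical replays."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_scene, i, base_seed + i)
                   for i in range(n_scenes)]
        return [f.result() for f in futures]

batch = run_batch(4)
```

Because every scene owns its seed, "playback" is simply re-running the batch with the same `base_seed`.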

Reinforcement Learning / Imitation-Based Learning Training Operations

It orchestrates reinforcement learning and imitation-based learning training runs together with the simulation tasks and scenario data they depend on.

Closed-Loop Policy Evaluation and Sim-to-Real Validation

It centrally performs policy evaluation, comparative replay, and transfer validation, shortening the cycle of “simulation → real-world testing → re-iteration.”
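The closed loop above can be sketched as a gap check: evaluate the same policy on simulated and real episodes, and trigger another iteration when real-world performance lags. The thresholds and toy policy below are illustrative assumptions, not HanabiAI's actual criteria:

```python
def evaluate(policy, episodes) -> float:
    """Hypothetical scorer: mean success over recorded episodes."""
    return sum(policy(ep) for ep in episodes) / len(episodes)

def sim_to_real_gap(policy, sim_eps, real_eps) -> float:
    """Positive gap means the policy does better in simulation."""
    return evaluate(policy, sim_eps) - evaluate(policy, real_eps)

def needs_iteration(policy, sim_eps, real_eps,
                    max_gap: float = 0.1, min_real: float = 0.8) -> bool:
    """Close the loop: re-iterate when real-world results lag simulation."""
    gap = sim_to_real_gap(policy, sim_eps, real_eps)
    return gap > max_gap or evaluate(policy, real_eps) < min_real

# Toy policy that only succeeds on "easy" episodes.
def policy(ep):
    return 1.0 if ep == "easy" else 0.0

sim = ["easy"] * 9 + ["hard"]               # 0.9 success in simulation
real = ["easy"] * 6 + ["hard"] * 4          # 0.6 success in the real world
```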

Embodied Intelligence Models

Coordination of Policy, Perception, and Memory with Continuous Learning Iteration.

Model Diagram

Research Directions for Models

  • How to represent policies in a way that enables smooth transfer between simulation and real-world systems
  • How to coordinate multimodal perception and long-term memory within embodied environments
  • How to maintain consistent robot-brain logic across different modules and robot embodiments through a unified interface

At the model layer, we do not aim for an all-purpose, general large-scale model. Instead, we focus on targeted research specialized for embodied scenarios.
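One way to picture the unified-interface direction listed above is a shared perceive/act contract that every embodiment implements; the class names, observation keys, and patrol logic below are hypothetical, offered only as a sketch:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict

class BrainPolicy(ABC):
    """Hypothetical unified interface: the same robot-brain logic is reused
    as long as each embodiment maps into this contract."""

    @abstractmethod
    def perceive(self, observations: Dict[str, Any]) -> Dict[str, Any]:
        """Fuse multimodal observations (vision, language, proprioception)."""

    @abstractmethod
    def act(self, state: Dict[str, Any]) -> Dict[str, float]:
        """Emit embodiment-agnostic commands, adapted downstream per robot."""

class PatrolPolicy(BrainPolicy):
    """Toy implementation: stop when an obstacle is close, else cruise."""

    def perceive(self, observations):
        return {"obstacle": observations.get("lidar_min", 1e9) < 0.5}

    def act(self, state):
        return {"linear": 0.0 if state["obstacle"] else 0.5, "angular": 0.0}

p = PatrolPolicy()
cmd = p.act(p.perceive({"lidar_min": 0.2}))
```

Keeping the contract small is what lets different modules and robot bodies swap in without touching the shared brain logic.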

Embodied AI Module Kit

We package the robot brain as an integrable computing module, enabling flexible connection with a wide variety of service robot embodiments.

Simplify the development and validation of embodied systems

  • Standardized interfaces and extensibility
  • Architecture designed with long-term evolution in mind
  • A runtime foundation optimized for maintainability and diagnostics

Building on our team’s experience in SoC mass production and module design/manufacturing, we plan our products from the start with a clear design principle: a smooth transition from development kits to mass-production modules.

Model Diagram

Embodied AI Robot

Using patrol and guided-tour scenarios as foundational use cases, we validate our full-stack technological capabilities and build service application templates for embodied robots.

Robot Diagram

The Directions We Are Validating

  • Patrol and monitoring in campuses, business parks, and public spaces
  • Guidance and accompaniment support in venues such as exhibition halls and hospitals
  • Feasibility of multi-robot coordination and scheduling

We hope that embodied technologies will deliver greater practical value and provide stronger generalization across a wide range of real world scenarios.