
Edge-Cloud Integrated Solution

Unified time-series data platform from vehicle to cloud — production-validated by Li Auto.

30–40× CAN data compression
800K points/sec on-vehicle
1M+ production vehicles
Zero-ETL edge-to-cloud sync

Features

  • Reduces network traffic by up to 97% through intelligent edge-cloud sync

  • Cuts cloud storage costs by up to 98% with object storage optimization

  • Enables real-time edge computing for enhanced performance and data privacy

  • Provides comprehensive data management with built-in governance and monitoring

Architecture

  • GreptimeDB Edge

    GreptimeDB Edge is optimized for resource-constrained edge environments, enabling local data processing and analytics with high compression. Direct synchronization to cloud object storage reduces bandwidth costs while maintaining query capabilities across edge and cloud.

  • GreptimeDB on the Cloud

    Based on GreptimeDB Enterprise's cloud-native architecture with compute-storage disaggregation, optimized for edge-cloud scenarios. Combines SQL analytics with broad ecosystem compatibility (MySQL, monitoring protocols, visualization tools). Built on object storage for elastic scalability and cost efficiency, with optimized edge-cloud architecture for high-performance data ingestion and processing at scale.

  • GreptimeDB Edge Manager

    A unified control plane that streamlines edge-cloud data operations with integrated model management, quality monitoring, and task orchestration capabilities.

Frequently Asked Questions

Real questions from automotive engineering teams.

How much CPU and memory does the vehicle-side database use? Will it impact vehicle apps?

Validated vehicle-side benchmarks (test hardware: Qualcomm 8295 automotive-grade SoC):

  • Idle: 0–1% of one CPU core
  • 350,000 points/s: ~3% of one core (0.75K DMIPS), ~132MB memory
  • 700,000 points/s: ~5.7% of one core (1.425K DMIPS), ~135MB memory
  • 800,000 points/s: <10% CPU, <300MB memory (with compression enabled)
  • Peak CPU: never exceeds 15%

In Li Auto's production deployment, memory is configured at roughly 100MB (case study). Resources can be capped via database config and OS-level limits, and a monitoring API exposes CPU and memory in real time. Heavy analytical queries are time-sliced and auto-aborted when they would push CPU too high, so navigation and cockpit apps stay responsive.
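The time-slicing behavior described above can be sketched as a cooperative loop that meters CPU time per slice and aborts once a budget is exhausted. `run_query_sliced`, the budget numbers, and the slice granularity below are illustrative assumptions, not the product's actual scheduler:

```python
# Hypothetical sketch: run a heavy query as bounded slices and abort it
# when the cumulative CPU budget is spent, so foreground apps stay responsive.
import time

class QueryAborted(Exception):
    pass

def run_query_sliced(slices, cpu_budget_ms, slice_ms=5):
    """Run query slices; abort once the CPU budget is exhausted."""
    spent_ms = 0.0
    results = []
    for work in slices:
        start = time.process_time()
        results.append(work())                       # one bounded slice of query work
        spent_ms += (time.process_time() - start) * 1000
        if spent_ms > cpu_budget_ms:
            raise QueryAborted(f"budget {cpu_budget_ms}ms exceeded")
        time.sleep(slice_ms / 1000)                  # yield CPU between slices
    return results

# Cheap slices complete well inside the budget.
print(run_query_sliced([lambda: 1, lambda: 2], cpu_budget_ms=50))  # prints [1, 2]
```

A real implementation would meter per-thread CPU and respect OS-level limits (e.g. cgroups), but the abort-on-budget shape is the same.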

How effective is data compression? What does it save on storage and bandwidth?

Vehicle bus CAN data compresses 30–40× versus traditional ASC files. Millisecond signal data shrinks an additional 30–50% on top of the customer's existing compression. On-vehicle logs compress to roughly 1/30 of the raw log size — about 50% smaller than traditional compression algorithms. In Li Auto's deployment, cloud storage cost dropped to about 20% of the prior solution and upload bandwidth was cut by over 50%, saving tens of millions in cumulative cloud costs (case study). Compression strategy is per-table — high-frequency telemetry and cold logs can use different codecs.
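As a back-of-envelope illustration of what the 30–40× ratio means at fleet scale, the per-vehicle raw volume and fleet size below are assumed numbers, not figures from the case study:

```python
# Illustrative savings arithmetic using the quoted 30-40x compression range.
raw_gb_per_vehicle_day = 10      # assumed raw CAN/ASC volume per vehicle
fleet = 100_000                  # assumed fleet size
ratio = 35                       # midpoint of the quoted 30-40x range

raw_tb_day = raw_gb_per_vehicle_day * fleet / 1000
compressed_tb_day = raw_tb_day / ratio
print(f"raw: {raw_tb_day:.0f} TB/day, compressed: {compressed_tb_day:.1f} TB/day")
# prints: raw: 1000 TB/day, compressed: 28.6 TB/day
```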

How do you avoid data loss under weak network, disconnection, or power loss?

Three layers of reliability:

  • Custom export policies: schedule uploads by network state, table priority, or business rule. Files spool locally when offline and resume from the breakpoint after recovery.
  • Per-table WAL: enable WAL on critical signals so writes survive a crash. Underlying log-structured storage keeps data on disk, not memory only.
  • Power-loss aware: a system hook flushes data to disk before shutdown. File-level integrity checks plus retry and supplemental upload guarantee eventual consistency.
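The spool-and-resume behavior in the first bullet can be sketched roughly as follows; the `Uploader` class, chunk size, and method names are hypothetical, not the real export API:

```python
# Sketch of spool-and-resume upload: data queues locally while offline, and
# upload restarts from the recorded byte offset after the network recovers.
class Uploader:
    def __init__(self):
        self.spool = []          # payloads with their resume offsets
        self.sent = []           # chunks acknowledged by the cloud

    def enqueue(self, payload):
        self.spool.append({"payload": payload, "offset": 0})

    def flush(self, online, chunk=4):
        """Upload in chunks; the persisted offset is the resume breakpoint."""
        if not online:
            return                               # stay spooled while offline
        for item in self.spool:
            while item["offset"] < len(item["payload"]):
                part = item["payload"][item["offset"]:item["offset"] + chunk]
                self.sent.append(part)
                item["offset"] += len(part)      # breakpoint survives restarts
        self.spool = [i for i in self.spool if i["offset"] < len(i["payload"])]

u = Uploader()
u.enqueue(b"telemetry-batch-001")
u.flush(online=False)            # offline: nothing leaves the vehicle
u.flush(online=True)             # back online: resume from the breakpoint
print(b"".join(u.sent))          # prints b'telemetry-batch-001'
```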

How does it handle timestamp jumps and multi-domain clock sync?

Two timestamps per row: the wall-clock time supplied by the writer (which can jump if RTC or GPS corrects) and a monotonic uptime timestamp that only increases. The cloud joins them in SQL to reconstruct a consistent timeline. At ingestion, timestamps from different ECUs are normalized against a chosen reference (typically GPS or NTP), so cross-domain traces align on the same axis.
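A minimal sketch of the dual-timestamp repair described above, with illustrative values: when the wall clock steps backwards after an RTC or GPS correction, the monotonic uptime delta rebuilds a consistent timeline.

```python
# Repair wall-clock jumps using the monotonic uptime column.
def reconstruct(rows):
    """rows: list of (wall_clock_ms, uptime_ms); returns corrected wall times."""
    fixed = []
    for wall, up in rows:
        if fixed and wall < fixed[-1][0]:
            # Wall clock jumped backwards: extrapolate from the monotonic delta.
            prev_wall, prev_up = fixed[-1]
            wall = prev_wall + (up - prev_up)
        fixed.append((wall, up))
    return [wall for wall, _ in fixed]

rows = [(1000, 10), (1010, 20), (900, 30), (910, 40)]  # RTC stepped back 110ms
print(reconstruct(rows))  # prints [1000, 1010, 1020, 1030]
```

In production this join runs in SQL on the cloud side rather than in application code, but the correction logic is the same shape.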

Can I run SQL, UDFs, or stream processing on the vehicle? What about privacy computing and fault diagnosis?

Yes. The vehicle-side database ships a full SQL engine with low-overhead point queries and supports user-defined functions for custom business logic. Common patterns:

  • Cloud-issued queries: push an SQL query down to the vehicle, run it on raw local data, return only the result — keeps PII on the vehicle and cuts upload volume.
  • Streaming alerts: window functions (sliding/tumbling) detect consecutive anomaly conditions for fault diagnosis. The same stream engine runs at the edge and in the cloud.
  • Rule-based anomaly detection: configure detection rules locally; matched events trigger automatic upload with the surrounding data window.
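The consecutive-anomaly pattern in the streaming-alert bullet can be sketched without any window-function machinery; the threshold, run length, and signal values below are illustrative:

```python
# Raise an alert only after n consecutive samples breach a threshold,
# the pattern the sliding/tumbling window functions implement on a stream.
def consecutive_breaches(samples, threshold, n):
    """Yield the index at which a run of n consecutive breaches completes."""
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run == n:
            yield i

temps = [80, 96, 97, 82, 95, 96, 97, 98]
print(list(consecutive_breaches(temps, threshold=90, n=3)))  # prints [6]
```

On the vehicle, a matched event like this would trigger automatic upload of the surrounding data window.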

What operating systems, protocols, and tools does it integrate with?

  • OS: Android (validated in Li Auto's production fleet across Android versions), Linux, macOS, ARM edge devices. Written in Rust, so any POSIX platform with a supported Rust target is reachable.
  • Protocols: MySQL, PostgreSQL, JDBC, Prometheus Remote Write.
  • Visualization & ops: Grafana data source, native logcat collector for Android.
  • Vehicle ↔ cloud: same database core on both ends — vehicle-side files are read directly by the cloud, no secondary write needed.

See the Integrations Overview for the full list.

How is it licensed and priced? How much can it save end-to-end?

Edge devices (including vehicles) are licensed by device count. The cloud side is licensed per node.

End-to-end savings come from three places: 30–40× compression cuts storage and bandwidth; vehicle-side aggregation removes the need for separate stream-processing engines like Flink in the cloud; and the vehicle-cloud isomorphic architecture skips the secondary write into cloud storage. Li Auto's deployment (case study) dropped cloud storage cost to about 20% of the prior solution and cut upload bandwidth by over 50%.

For a tailored quote, contact us.

How costly is migration from an existing in-house solution? How long does a PoC take?

Vehicle-side migration cost is generally low: GreptimeDB supports a wide range of open protocols and offers edge-optimized ingestion paths for high-throughput telemetry. The cloud side uses standard SQL, so existing analytics and BI tools connect directly. Overall migration cost is manageable, and typical PoC timelines run 1 to 3 months — the exact effort depends on the specifics of the customer's existing solution.

When to choose Edge-Cloud Solution

Building automotive or IoT applications

Specialized edge-cloud integration with automotive-grade reliability

Contact Us

Need edge computing with cloud analytics

Unified API across edge and cloud environments


Want to minimize network and storage costs

Up to 97% network reduction and up to 98% storage cost savings


Require real-time processing with data privacy

Local edge processing with intelligent sync

