Biweekly
2025-12-15

MySQL/PG Compatibility & Scanner Metrics Boost | Greptime Biweekly Report - No. 72

Enhanced MySQL and PostgreSQL compatibility for better ecosystem integration. Optimized TSID generation performance by replacing mur3 with the faster fxhash algorithm. Added detailed scanner metrics for improved query performance monitoring.

Summary

Development period: 2025-12-01 - 2025-12-14

GreptimeDB v1.0.0-beta.3 is now available! This release brings simplified caching config, manifest file caching for faster metadata access, improved PromQL correctness & histogram handling, and better PG/MySQL/Grafana compatibility. We recommend upgrading to this version.

Together with our global community of contributors, GreptimeDB continues to evolve and flourish as a growing open-source project. We are grateful to each and every one of you.

Below are the highlights from recent commits:

  • Enhanced MySQL and PostgreSQL compatibility for better ecosystem integration
  • Optimized TSID generation performance by replacing mur3 with the faster fxhash algorithm
  • Added detailed scanner metrics for improved query performance monitoring

We recommend that users running previous versions upgrade to the latest version for an improved experience.

Contributors

Over the past two weeks, 9 contributors merged a total of 18 PRs.

Highlights of Recent PRs

db#7315 feat!: improve mysql/pg compatibility

This PR significantly enhances GreptimeDB's MySQL and PostgreSQL protocol compatibility by implementing key database functions and improving client tool integration. The changes include MySQL's IF() function with proper truthiness semantics, PostgreSQL compatibility functions (obj_description, col_description, etc.), MySQL-compatible SHOW TABLES column naming, session-level warning management with SHOW WARNINGS support, and an enhanced INFORMATION_SCHEMA.PARTITIONS table structure. Together these changes improve support for JDBC connectors and tools such as StarRocks, DBeaver, and standard MySQL/PostgreSQL clients.
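To give a concrete feel for the MySQL-style truthiness rule behind IF(), here is a minimal Rust sketch. The Value enum, the string-coercion shortcut, and the function names are illustrative only and are not taken from the GreptimeDB code; the actual implementation in db#7315 works on the engine's own value types.

```rust
/// A minimal sketch of MySQL-style truthiness for IF(cond, a, b), using a
/// simplified value model. This is illustrative, not GreptimeDB's code.
#[derive(Debug, Clone)]
enum Value {
    Null,
    Int(i64),
    Float(f64),
    Str(String),
}

/// MySQL treats a condition as true when it is non-NULL and compares
/// unequal to 0; strings are coerced through their numeric prefix.
fn is_truthy(v: &Value) -> bool {
    match v {
        Value::Null => false,
        Value::Int(i) => *i != 0,
        Value::Float(f) => *f != 0.0,
        // Rough coercion: "1abc" -> 1 (true), "abc" -> 0 (false).
        Value::Str(s) => {
            let prefix: String = s
                .chars()
                .take_while(|c| c.is_ascii_digit() || *c == '-' || *c == '+' || *c == '.')
                .collect();
            prefix.parse::<f64>().map(|n| n != 0.0).unwrap_or(false)
        }
    }
}

/// IF(cond, then_val, else_val) with the truthiness rule above.
fn mysql_if(cond: &Value, then_val: Value, else_val: Value) -> Value {
    if is_truthy(cond) { then_val } else { else_val }
}

fn main() {
    // IF(0, 'a', 'b') -> 'b'; IF('1', 'a', 'b') -> 'a'; IF(NULL, 'a', 'b') -> 'b'
    println!("{:?}", mysql_if(&Value::Int(0), Value::Str("a".into()), Value::Str("b".into())));
    println!("{:?}", mysql_if(&Value::Str("1".into()), Value::Str("a".into()), Value::Str("b".into())));
    println!("{:?}", mysql_if(&Value::Null, Value::Str("a".into()), Value::Str("b".into())));
}
```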

db#7336 feat: add more verbose metrics to scanners

This PR enhances GreptimeDB's query performance analysis capabilities by adding comprehensive metrics collection to scanners, including detailed statistics for index operations (inverted, bloom filter, fulltext), SST file scanning, parquet data fetching, and metadata cache performance. The verbose metrics are collected during "EXPLAIN ANALYZE VERBOSE" operations to provide developers and administrators with granular insights into query execution bottlenecks without impacting normal query performance.
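The sketch below illustrates the general idea of verbose-only metric collection: counters are updated and reported only when a scan runs in verbose mode, so ordinary queries skip the bookkeeping. The struct fields, type names, and report format are hypothetical and do not mirror the exact metrics added in db#7336.

```rust
use std::time::{Duration, Instant};

// A hypothetical, simplified set of per-scan counters; the field names are
// illustrative and not the exact metrics introduced in db#7336.
#[derive(Debug, Default)]
struct VerboseScanMetrics {
    index_apply_elapsed: Duration, // inverted / bloom filter / fulltext filtering
    scan_elapsed: Duration,        // reading and decoding SST (parquet) data
    sst_files_scanned: usize,
    parquet_bytes_fetched: u64,
}

// The scanner only pays the bookkeeping cost when the query runs under
// EXPLAIN ANALYZE VERBOSE, so normal queries are unaffected.
struct Scanner {
    verbose: bool,
    metrics: VerboseScanMetrics,
}

impl Scanner {
    fn new(verbose: bool) -> Self {
        Self { verbose, metrics: VerboseScanMetrics::default() }
    }

    fn apply_index_filters(&mut self) {
        let timer = self.verbose.then(Instant::now);
        // ... evaluate inverted / bloom filter / fulltext indexes here ...
        if let Some(start) = timer {
            self.metrics.index_apply_elapsed += start.elapsed();
        }
    }

    fn scan_sst_file(&mut self, bytes_fetched: u64) {
        let timer = self.verbose.then(Instant::now);
        // ... fetch parquet row groups and decode record batches here ...
        if let Some(start) = timer {
            self.metrics.sst_files_scanned += 1;
            self.metrics.parquet_bytes_fetched += bytes_fetched;
            self.metrics.scan_elapsed += start.elapsed();
        }
    }

    // Rendered into the EXPLAIN ANALYZE VERBOSE output; `None` otherwise.
    fn report(&self) -> Option<String> {
        self.verbose.then(|| format!("{:?}", self.metrics))
    }
}

fn main() {
    let mut scanner = Scanner::new(true);
    scanner.apply_index_filters();
    scanner.scan_sst_file(4096);
    if let Some(report) = scanner.report() {
        println!("{report}");
    }
}
```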

db#7326 feat: implement a cache for manifest files

This PR implements a write cache for manifest files in GreptimeDB, storing them under cache/object/manifest/ to improve performance when accessing these critical metadata files. The implementation includes automatic cleanup of empty directories and maintains the original file path structure rather than using hashed paths, ensuring manifest file integrity while providing caching benefits.
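As a rough illustration of the "original path structure rather than hashed paths" point, here is a small Rust sketch that maps a manifest path to a local location under cache/object/manifest/. The cache root and the example manifest path are made up for the example; the actual write cache in db#7326 additionally performs the automatic cleanup of empty directories mentioned above.

```rust
use std::path::{Path, PathBuf};

// A minimal sketch, assuming a local `cache_root` directory. This is an
// illustration of the path-mapping idea only, not the real cache code.
fn manifest_cache_path(cache_root: &Path, manifest_path: &str) -> PathBuf {
    // Keep the original manifest path layout under cache/object/manifest/
    // instead of a hashed file name, so cached files mirror remote storage.
    cache_root
        .join("cache")
        .join("object")
        .join("manifest")
        .join(manifest_path.trim_start_matches('/'))
}

fn main() {
    // The manifest path below is made up purely for illustration.
    let root = Path::new("/var/lib/greptimedb");
    let cached = manifest_cache_path(root, "region_dir/manifest/00000000000000000001.json");
    println!("{}", cached.display());
}
```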

db#7316 perf(metric-engine)!: Replace mur3 with fxhash for faster TSID generation

This PR optimizes TSID (Time Series ID) generation in GreptimeDB's metric engine by replacing the mur3 hash algorithm with fxhash and adding a fast-path optimization: for rows without null values, the label name hashes are pre-computed and only the label values are hashed per row. The changes deliver a 5-6x performance improvement for typical use cases. This is a breaking change, since the new hash algorithm produces different TSID values than before. The impact on users is minimal, however: only queries whose time range spans the point where the TSID calculation method switched may be affected, and users can upgrade directly.
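Below is a minimal Rust sketch of the fast-path idea using the fxhash crate: hash the label names once per schema, then combine that precomputed hash with each row's label values. The TSID layout, the combining scheme, and the function names are assumptions for illustration and do not reproduce the metric engine's actual code.

```rust
use std::hash::Hasher;
// Requires the `fxhash` crate.
use fxhash::FxHasher64;

// Hash the label names once; they are fixed for a given schema, so this
// cost is not paid per row.
fn hash_names(label_names: &[&str]) -> u64 {
    let mut hasher = FxHasher64::default();
    for name in label_names {
        hasher.write(name.as_bytes());
    }
    hasher.finish()
}

// Fast path (illustrative): reuse the precomputed name hash and only hash
// the label values of each row that has no null values.
fn tsid_fast_path(names_hash: u64, label_values: &[&str]) -> u64 {
    let mut hasher = FxHasher64::default();
    hasher.write_u64(names_hash);
    for value in label_values {
        hasher.write(value.as_bytes());
    }
    hasher.finish()
}

fn main() {
    let names = ["host", "region"];
    let names_hash = hash_names(&names); // computed once, not per row

    // Rows without null label values take the fast path directly.
    let rows = [["web-01", "us-east-1"], ["web-02", "eu-west-1"]];
    for row in &rows {
        println!("tsid = {:#018x}", tsid_fast_path(names_hash, row));
    }
}
```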

Good First Issue

Issue#7328 HTTP handler error log improvement

This issue aims to improve HTTP error logging in GreptimeDB. Currently, clients that receive non-200 responses often cannot see the specific reason, which makes troubleshooting difficult. The suggestion is to log errors during the server-side error conversion process (for example, in Error's IntoResponse implementation); a sketch of this idea follows the keywords below. It is also noted that some request paths do not go through the unified Error conversion, and that axum/tower-http does not treat certain 4xx responses (such as 400) as failures, so framework-level errors are hard to capture. The related pathways therefore need to be reviewed, and information such as the remote address should be added where necessary to ease debugging.

  • Keywords: HTTP error logging, Observability, axum/tower-http, IntoResponse

  • Difficulty: Medium
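As a starting point for the IntoResponse suggestion, here is a minimal axum sketch that logs an error at conversion time, so the reason behind a non-200 response leaves a trace in the server logs. The ApiError type and its status mapping are hypothetical; GreptimeDB's real Error type, the paths that bypass it, and details such as capturing the remote address are exactly what the issue asks contributors to review.

```rust
// Requires the `axum` and `tracing` crates.
use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};

// A hypothetical server-side error type used only for this sketch.
#[derive(Debug)]
struct ApiError {
    status: StatusCode,
    message: String,
}

impl IntoResponse for ApiError {
    fn into_response(self) -> Response {
        // Log at conversion time so every error response is visible in the
        // server logs, even when axum/tower-http layers do not treat the
        // resulting 4xx status as a failure.
        tracing::error!(status = %self.status, error = %self.message, "HTTP handler error");
        (self.status, self.message).into_response()
    }
}

fn main() {
    // Build the response outside a running server just to show the conversion path.
    let resp = ApiError {
        status: StatusCode::BAD_REQUEST,
        message: "invalid query parameter".to_string(),
    }
    .into_response();
    println!("{}", resp.status());
}
```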

Join our community

Get the latest updates and discuss with other users.