Summary
Together with our global community of contributors, GreptimeDB continues to evolve and flourish as a growing open-source project. We are grateful to each and every one of you.
In the past two weeks, we have made steady progress. Below are some highlights:
- Removed the storage crate from the dependency lists of other crates; the actual storage engine is Mito2.
- Enabled distributed tracing in GreptimeDB.
- A new Decimal128 type has been added to the supported types.
- A row group level page cache has been implemented for the Mito engine to reduce data scan time.
- New modules, region migration and inverted index, are currently under accelerated development.
Contributors
Over the past two weeks, our community has been super active, with a total of 58 PRs merged. 7 PRs from 4 external contributors were merged successfully, and many more are pending review.
Congrats on becoming our most active contributors in the past 2 weeks:
👏 Welcome @bigboss2063, @lyang24, and @taobo to the community as new contributors, and congratulations on merging your first PRs!
A big THANK YOU to all our members and contributors! It is people like you who are making GreptimeDB a great product. Let's build an even greater community together.
Highlights of Recent PRs
#2777 This PR removes the storage crate from other crates' dependency lists; the actual storage engine is Mito2.
#2755 This PR enables distributed tracing in GreptimeDB. It uses OTLP as the exporter protocol, which supports distributed tracing backends such as Jaeger and Tempo.
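As a rough illustration of the general pattern (a minimal sketch, not GreptimeDB's actual setup), here is how a Rust application can export traces over OTLP to a backend such as Jaeger or Tempo using the opentelemetry-otlp and tracing-opentelemetry crates; the endpoint and crate versions (opentelemetry 0.20-era APIs) are assumptions:

```rust
use opentelemetry_otlp::WithExportConfig;
use tracing_subscriber::prelude::*;

fn init_tracing() -> Result<(), Box<dyn std::error::Error>> {
    // Build an OTLP exporter; 4317 is the default OTLP/gRPC port exposed by
    // collectors such as Jaeger and Tempo (assumed local endpoint).
    let tracer = opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint("http://localhost:4317"),
        )
        .install_batch(opentelemetry::runtime::Tokio)?;

    // Bridge the `tracing` ecosystem to OpenTelemetry and keep console logging.
    tracing_subscriber::registry()
        .with(tracing_opentelemetry::layer().with_tracer(tracer))
        .with(tracing_subscriber::fmt::layer())
        .init();

    Ok(())
}
```

With this in place, spans created via the `tracing` macros are batched and shipped to the collector, where they can be inspected end to end across services.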
#2788 A new Decimal128 type has been added to the supported types.
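For intuition, a 128-bit decimal is commonly represented as an unscaled 128-bit integer plus a precision and scale. The sketch below is a simplified illustration of that layout, not GreptimeDB's actual Decimal128 implementation:

```rust
/// Simplified fixed-point decimal backed by a 128-bit integer.
/// Real implementations (e.g. Arrow's Decimal128) also validate precision/scale bounds.
struct Decimal128 {
    value: i128,   // unscaled value, e.g. 12345 represents 123.45 at scale 2
    precision: u8, // total number of significant digits
    scale: i8,     // number of digits after the decimal point
}

impl Decimal128 {
    /// Lossy conversion, for display purposes only.
    fn to_f64(&self) -> f64 {
        self.value as f64 / 10f64.powi(self.scale as i32)
    }
}

fn main() {
    let price = Decimal128 { value: 12345, precision: 5, scale: 2 };
    println!("precision {}, scale {}: {}", price.precision, price.scale, price.to_f64()); // 123.45
}
```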
#2688 This PR implements a row group level page cache for the Mito engine. A new page reader, CachedPageReader, is introduced to return pages of a row group from the cache. The first time we read a row group, all of its pages are loaded and put into the cache. On subsequent reads, we fetch the cached pages and build a CachedPageReader.
Cached pages are stored decompressed, so we can skip the decompression step and cut the total scan time by 20%-30% when a query hits the cache.
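The sketch below captures the idea under heavy simplification; the types, fields, and cache policy are assumptions for illustration (the real reader works with parquet pages and a bounded cache), only the CachedPageReader name comes from the PR:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for a decompressed parquet page.
type Page = Vec<u8>;

/// Cache key identifying one row group of one SST file (hypothetical fields).
#[derive(Clone, PartialEq, Eq, Hash)]
struct PageKey {
    file_id: u64,
    row_group_idx: usize,
}

/// Row group level page cache. A real cache is bounded (e.g. by total page size);
/// an unbounded map keeps this sketch short.
#[derive(Default)]
struct PageCache {
    pages: Mutex<HashMap<PageKey, Arc<Vec<Page>>>>,
}

impl PageCache {
    fn get(&self, key: &PageKey) -> Option<Arc<Vec<Page>>> {
        self.pages.lock().unwrap().get(key).cloned()
    }

    fn put(&self, key: PageKey, pages: Vec<Page>) -> Arc<Vec<Page>> {
        let pages = Arc::new(pages);
        self.pages.lock().unwrap().insert(key, pages.clone());
        pages
    }
}

/// Serves already-decompressed pages of a row group straight from the cache.
struct CachedPageReader {
    pages: std::vec::IntoIter<Page>,
}

impl CachedPageReader {
    fn new(pages: &Arc<Vec<Page>>) -> Self {
        Self { pages: pages.as_ref().clone().into_iter() }
    }

    fn next_page(&mut self) -> Option<Page> {
        self.pages.next()
    }
}

fn main() {
    let cache = PageCache::default();
    let key = PageKey { file_id: 1, row_group_idx: 0 };

    // First read: load (and decompress) all pages of the row group, then cache them.
    let pages = cache
        .get(&key)
        .unwrap_or_else(|| cache.put(key.clone(), vec![vec![1, 2, 3], vec![4, 5]]));

    // Later reads hit the cache and build a CachedPageReader, skipping decompression.
    let mut reader = CachedPageReader::new(&pages);
    while let Some(page) = reader.next_page() {
        println!("page of {} bytes", page.len());
    }
}
```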