Codec for compressed tier segments.
Encodes lists of aggregate buckets into zstd-compressed binary blobs, replacing individual SQLite rows with packed chunks for ~5-10x storage savings.
## Binary Format

    <<"TC", version::8, agg_bitmask::8, bucket_count::16, buckets::binary>>

Each bucket is a `timestamp::int64` followed by aggregate values in bitmask order.
Float aggregates are float64; count is int64. The entire binary is zstd-compressed.
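The header layout above can be sketched with Elixir binary syntax (a minimal illustration; `HeaderSketch` and its functions are hypothetical helpers, not part of this module, and zstd compression is omitted):

```elixir
defmodule HeaderSketch do
  # Hypothetical helper: pack the header described above, before compression.
  def build_header(version, agg_bitmask, bucket_count) do
    <<"TC", version::8, agg_bitmask::8, bucket_count::16>>
  end

  # Pattern-match the header back out of a decompressed blob.
  def parse_header(<<"TC", version::8, agg_bitmask::8, bucket_count::16, buckets::binary>>) do
    {version, agg_bitmask, bucket_count, buckets}
  end
end

header = HeaderSketch.build_header(1, 0b111111, 2)
{1, 63, 2, <<>>} = HeaderSketch.parse_header(header)
```

Because `bucket_count` sits in the fixed-size header, a reader can report the bucket total after decompressing only the first six bytes of the payload.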
## Aggregate Bitmask

- bit 0: `avg`
- bit 1: `min`
- bit 2: `max`
- bit 3: `count`
- bit 4: `sum`
- bit 5: `last`
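Converting between the bitmask byte and a list of aggregate atoms might look like this (a sketch; `BitmaskSketch` is hypothetical and the module's actual internals may differ):

```elixir
defmodule BitmaskSketch do
  import Bitwise

  # Bit positions from the table above, in bit order.
  @aggregates [:avg, :min, :max, :count, :sum, :last]

  # Encode a list of aggregate atoms into the bitmask byte.
  def to_bitmask(aggs) do
    @aggregates
    |> Enum.with_index()
    |> Enum.reduce(0, fn {agg, bit}, mask ->
      if agg in aggs, do: mask ||| (1 <<< bit), else: mask
    end)
  end

  # Decode a bitmask byte back into aggregate atoms, in bit order.
  def from_bitmask(mask) do
    @aggregates
    |> Enum.with_index()
    |> Enum.filter(fn {_agg, bit} -> (mask &&& (1 <<< bit)) != 0 end)
    |> Enum.map(fn {agg, _bit} -> agg end)
  end
end
```

Decoding in bit order is what lets the codec reconstruct the per-bucket value layout without storing field names.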
## Functions
Return the number of buckets in a compressed blob without fully decoding.
Decode a compressed blob back to aggregate metadata and bucket maps.
Returns `{aggregates, buckets}` where `aggregates` is the list of aggregate atoms and `buckets` is a sorted list of maps.
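Given the aggregate order recovered from the bitmask, the per-bucket payload could be walked like this (a sketch over the already-decompressed bucket bytes; `DecodeSketch` is hypothetical):

```elixir
defmodule DecodeSketch do
  # Hypothetical walk over the decompressed bucket payload.
  # :count is stored as int64; all other aggregates as float64.
  def decode_buckets(<<>>, _aggs), do: []

  def decode_buckets(<<ts::signed-64, rest::binary>>, aggs) do
    {values, rest} =
      Enum.map_reduce(aggs, rest, fn
        :count, <<n::signed-64, more::binary>> -> {{:count, n}, more}
        agg, <<f::float-64, more::binary>> -> {{agg, f}, more}
      end)

    [Map.new([{:bucket, ts} | values]) | decode_buckets(rest, aggs)]
  end
end
```

Since buckets were encoded in timestamp order, recursing front-to-back yields the sorted list directly.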
Encode a list of bucket maps into a compressed blob.
### Parameters

- `buckets`: list of maps, each with a `:bucket` (timestamp) key and aggregate keys (`:avg`, `:min`, `:max`, `:count`, `:sum`, `:last`)
- `aggregates`: list of aggregate atoms to encode
### Example

```elixir
buckets = [
  %{bucket: 1706000000, avg: 73.2, min: 50.1, max: 95.3, count: 12, sum: 878.4, last: 71.0},
  %{bucket: 1706003600, avg: 68.1, min: 45.0, max: 89.7, count: 12, sum: 817.2, last: 65.3}
]

blob = TimelessMetrics.TierChunk.encode(buckets, [:avg, :min, :max, :count, :sum, :last])
```
Merge new buckets into an existing compressed blob.
New buckets overwrite existing ones at the same timestamp. If `existing` is `nil`, this is equivalent to `encode/2`.
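The merge semantics (new buckets win on timestamp collision, result stays sorted) can be sketched independently of the binary codec; `MergeSketch` is a hypothetical illustration, not the module's implementation:

```elixir
defmodule MergeSketch do
  # Merge bucket maps keyed by the :bucket timestamp; entries in `new`
  # replace entries in `existing` at the same timestamp.
  def merge_buckets(existing, new) do
    by_ts = fn buckets -> Map.new(buckets, &{&1.bucket, &1}) end

    by_ts.(existing)
    |> Map.merge(by_ts.(new))
    |> Map.values()
    |> Enum.sort_by(& &1.bucket)
  end
end
```

`Map.merge/2` keeps values from its second argument on key collisions, which gives the "new buckets overwrite" behavior described above.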