issue: #46540
Empty timeticks are only used to synchronize the clock between
different components in Milvus, so they can be ignored once timeticks
carry LSN/MVCC semantics. However, some components still rely on empty
timeticks to trigger operations such as flush/tsafe, so instead of
dropping them entirely we only slow them down to roughly one emission
every 5 seconds.
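As a rough illustration of that 5-second slowdown, here is a minimal
Go sketch of a time-based limiter for non-persisted (empty) timeticks.
It is only a sketch: the type and method names are assumptions for
illustration and are not taken from wal_flusher.go.

```go
package main

import (
	"fmt"
	"time"
)

// emptyTimeTickLimiter illustrates the slowdown idea: a non-persisted
// (empty) timetick is dispatched at most once per interval, e.g. every
// 5 seconds.
type emptyTimeTickLimiter struct {
	interval time.Duration
	lastEmit time.Time
}

// ShouldEmit reports whether an empty timetick arriving at `now` should
// still be dispatched: only on the first call or once the interval has
// elapsed since the last emission.
func (l *emptyTimeTickLimiter) ShouldEmit(now time.Time) bool {
	if l.lastEmit.IsZero() || now.Sub(l.lastEmit) >= l.interval {
		l.lastEmit = now
		return true
	}
	return false
}

func main() {
	l := &emptyTimeTickLimiter{interval: 5 * time.Second}
	now := time.Now()
	fmt.Println(l.ShouldEmit(now))                      // true: first emission
	fmt.Println(l.ShouldEmit(now.Add(time.Second)))     // false: suppressed within 5s
	fmt.Println(l.ShouldEmit(now.Add(6 * time.Second))) // true: threshold elapsed
}
```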
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
- Core invariant: with LSN/MVCC semantics consumers only need (a) the
first timetick that advances the latest-required-MVCC to unblock
MVCC-dependent waits and (b) occasional periodic timeticks (~≤5s) for
clock synchronization—therefore frequent non-persisted empty timeticks
can be suppressed without breaking MVCC correctness.
- Logic removed/simplified: per-message dispatch/consumption of frequent
non-persisted empty timeticks is suppressed — an MVCC-aware filter
emptyTimeTickSlowdowner (internal/util/pipeline/consuming_slowdown.go)
short-circuits frequent empty timeticks in the stream pipeline
(internal/util/pipeline/stream_pipeline.go), and the WAL flusher
rate-limits non-persisted timetick dispatch to one emission per ~5s
(internal/streamingnode/server/flusher/flusherimpl/wal_flusher.go); the
delegator exposes GetLatestRequiredMVCCTimeTick to drive the filter
(internal/querynodev2/delegator/delegator.go); a hedged sketch of this
filter logic follows this note.
- Why this does NOT introduce data loss or regressions: the slowdowner
always refreshes latestRequiredMVCCTimeTick via
GetLatestRequiredMVCCTimeTick and (1) never filters timeticks <
latestRequiredMVCCTimeTick (so existing tsafe/flush waits stay
unblocked) and (2) always lets the first timetick ≥
latestRequiredMVCCTimeTick pass to notify pending MVCC waits;
separately, WAL flusher suppression applies only to non-persisted
timeticks and still emits when the 5s threshold elapses, preserving
periodic clock-sync messages used by flush/tsafe.
- Enhancement summary (where it takes effect): adds
GetLatestRequiredMVCCTimeTick on ShardDelegator and
LastestMVCCTimeTickGetter, wires emptyTimeTickSlowdowner into
NewPipelineWithStream (internal/util/pipeline), and adds WAL flusher
rate-limiting + metrics
(internal/streamingnode/server/flusher/flusherimpl/wal_flusher.go,
pkg/metrics) to reduce CPU/dispatch overhead while keeping MVCC
correctness and periodic synchronization.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
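To make the pass/suppress rules in the note above concrete, below is a
minimal, self-contained Go sketch of an MVCC-aware empty-timetick
filter. It only models the described behavior; the interface, type, and
field names are assumptions and do not reproduce the actual
emptyTimeTickSlowdowner in consuming_slowdown.go.

```go
package main

import (
	"fmt"
	"time"
)

// latestMVCCGetter abstracts the delegator's GetLatestRequiredMVCCTimeTick;
// the interface name is invented for this sketch.
type latestMVCCGetter interface {
	GetLatestRequiredMVCCTimeTick() uint64
}

// emptyTimeTickFilter sketches the behavior described in the note:
//  1. never drop a timetick below the latest required MVCC timetick,
//  2. always pass the first timetick that reaches a new MVCC requirement,
//  3. otherwise pass empty timeticks at most once per interval (clock sync).
type emptyTimeTickFilter struct {
	getter   latestMVCCGetter
	interval time.Duration
	lastPass time.Time
	notified uint64 // highest MVCC requirement already satisfied by a passed tick
}

func (f *emptyTimeTickFilter) ShouldPass(timetick uint64, now time.Time) bool {
	required := f.getter.GetLatestRequiredMVCCTimeTick()
	switch {
	case timetick < required:
		// Existing tsafe/flush waits may still depend on ticks below the
		// requirement, so never filter these.
		return true
	case required > f.notified:
		// First tick at or above the newest requirement: let it through to
		// notify pending MVCC waits.
		f.notified = required
		f.lastPass = now
		return true
	case now.Sub(f.lastPass) >= f.interval:
		// Periodic clock-sync emission.
		f.lastPass = now
		return true
	default:
		return false
	}
}

// fixedGetter is a trivial stand-in for a ShardDelegator in this sketch.
type fixedGetter struct{ tick uint64 }

func (g fixedGetter) GetLatestRequiredMVCCTimeTick() uint64 { return g.tick }

func main() {
	f := &emptyTimeTickFilter{getter: fixedGetter{tick: 100}, interval: 5 * time.Second}
	now := time.Now()
	fmt.Println(f.ShouldPass(90, now))                   // true: below the MVCC requirement
	fmt.Println(f.ShouldPass(120, now))                  // true: first tick satisfying it
	fmt.Println(f.ShouldPass(130, now.Add(time.Second))) // false: suppressed within 5s
}
```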
---------
Signed-off-by: chyezh <chyezh@outlook.com>
Related to #39095
The previous PR #39990 updated the pkg module path to use the "/v2"
package name; this PR updates the milvusclient Go SDK dependency
accordingly.
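As a hedged illustration of what the "/v2" module path means for a
downstream go.mod, here is a sketch; the consuming module name and the
version string are placeholders, not values taken from this PR.

```
// go.mod of a hypothetical downstream consumer (illustrative only; the
// version string below is a placeholder, not the one pinned by this PR).
module example.com/milvus-consumer

go 1.21

require (
	// Before the module-path change the dependency used the unversioned
	// path (github.com/milvus-io/milvus/pkg); after the "/v2" change the
	// versioned path is required instead.
	github.com/milvus-io/milvus/pkg/v2 v2.0.0
)
```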
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #36672
Expressions now support filling in elements through templates, which
helps reduce the overhead of parsing those elements.
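As a hedged, self-contained Go sketch of the template idea (the struct
and field names below are hypothetical, not the milvusclient API): the
filter expression keeps a named placeholder while the concrete elements
are supplied separately, so the expression text stays stable and only
needs to be parsed once even when the element list changes.

```go
package main

import "fmt"

// templatedFilter is a hypothetical illustration: the expression keeps a
// named placeholder and the concrete elements travel separately, so the
// parsed plan can be reused across requests with different element lists.
type templatedFilter struct {
	Expr   string         // expression with a placeholder, e.g. "id in {idList}"
	Values map[string]any // placeholder name -> elements to fill in
}

func main() {
	f := templatedFilter{
		Expr:   "id in {idList}",
		Values: map[string]any{"idList": []int64{1, 2, 3, 4}},
	}
	fmt.Printf("expr=%q template values=%v\n", f.Expr, f.Values)
}
```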
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
OSPP 2024 project:
https://summer-ospp.ac.cn/org/prodetail/247410235?list=org&navpage=org
Solutions:
- parser (planparserv2)
  - add CallExpr in planparserv2/Plan.g4
  - update parser_visitor and show_visitor
- grpc protobuf
  - add CallExpr in plan.proto
- execution (`core/src/exec`)
  - add `CallExpr`, `ValueExpr`, and `ColumnExpr` (both logical and
    physical) for function calls and function parameters
- function factory (`core/src/exec/expression/function`)
  - create a global hashmap when starting Milvus (see server.go)
  - the global hashmap stores function signatures and their function
    pointers; a `CallExpr` in the execution engine can look up a
    function pointer by its function signature (a hedged sketch of this
    lookup idea follows after this list)
- custom functions
  - empty(string)
  - starts_with(string, string)
- add cpp/go unittests and E2E tests
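The actual factory is implemented in C++ under
core/src/exec/expression/function. Purely as an illustration of the
signature-to-function-pointer lookup described above, here is a hedged
Go sketch (all names are invented for this example) that registers the
two custom functions, empty(string) and starts_with(string, string):

```go
package main

import (
	"fmt"
	"strings"
)

// funcKey is an illustrative "function signature": the function name plus
// its argument types.
type funcKey struct {
	name string
	args string // comma-joined argument type names
}

// registry mimics the global hashmap built at startup: it maps function
// signatures to function pointers, which a CallExpr can look up at
// execution time. All names here are invented for this sketch.
var registry = map[funcKey]func(args ...any) (any, error){
	{name: "empty", args: "string"}: func(args ...any) (any, error) {
		s, ok := args[0].(string)
		if !ok {
			return nil, fmt.Errorf("empty: expected a string argument")
		}
		return len(s) == 0, nil
	},
	{name: "starts_with", args: "string,string"}: func(args ...any) (any, error) {
		s, ok1 := args[0].(string)
		prefix, ok2 := args[1].(string)
		if !ok1 || !ok2 {
			return nil, fmt.Errorf("starts_with: expected (string, string)")
		}
		return strings.HasPrefix(s, prefix), nil
	},
}

func main() {
	fn := registry[funcKey{name: "starts_with", args: "string,string"}]
	matched, err := fn("milvus", "mil")
	fmt.Println(matched, err) // true <nil>
}
```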
closes: #36559
Signed-off-by: Yinzuo Jiang <jiangyinzuo@foxmail.com>