issue: #42860
pr: #42786
Fix channel-to-node allocation when the QueryNode count is not a multiple of
the channel count. The previous algorithm used simple integer division, which
left the remainder unevenly distributed.
Key improvements:
- Implement a smart remainder-distribution algorithm (see the sketch after this list)
- Refactor large function into focused helper functions
- Support two-phase rebalancing (release then allocate)
- Handle edge cases like insufficient nodes gracefully
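A minimal Go sketch of the remainder-distribution idea (illustrative only, not the actual QueryCoord code; `assignChannels` and its types are hypothetical):

```go
// Hypothetical sketch, not the actual QueryCoord implementation: spread
// channels across nodes so that per-node counts differ by at most one.
package main

import "fmt"

// assignChannels gives each node len(channels)/len(nodes) channels; the first
// len(channels)%len(nodes) nodes absorb one extra channel each (the remainder).
func assignChannels(channels []string, nodes []int64) map[int64][]string {
	plan := make(map[int64][]string, len(nodes))
	if len(nodes) == 0 {
		return plan
	}
	base, extra := len(channels)/len(nodes), len(channels)%len(nodes)
	idx := 0
	for i, node := range nodes {
		count := base
		if i < extra {
			count++ // distribute the remainder instead of piling it onto one node
		}
		plan[node] = channels[idx : idx+count]
		idx += count
	}
	return plan
}

func main() {
	// 5 channels over 3 nodes: counts are 2/2/1, never 1/1/3.
	fmt.Println(assignChannels([]string{"ch0", "ch1", "ch2", "ch3", "ch4"}, []int64{1, 2, 3}))
}
```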
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Related to #43389
Alternative to #43395
This reverts commit e54d92447ccd6d616fe58cadb112cf0c6a44a6e9.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Cherry-pick from master
pr: #43408
Related to #43407
When a `MultiSaveAndRemove`-like op contains the same key in both its saves
and its removal keys, data may be lost if the execution order is save first,
then remove.
This PR makes all KV implementations execute removals first and then save the
new values, so even when the same key appears in both saves and removals, the
new value stays.
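A minimal sketch of the removal-first ordering over an in-memory map (the `memKV` type is a hypothetical stand-in for the actual kv implementations):

```go
// Hypothetical sketch: an in-memory stand-in showing why removals must run
// before saves when the same key appears in both sets.
package main

import "fmt"

type memKV struct {
	data map[string]string
}

// MultiSaveAndRemove applies removals before saves, so a key present in both
// sets ends up with its new value instead of being deleted.
func (kv *memKV) MultiSaveAndRemove(saves map[string]string, removals []string) {
	for _, k := range removals {
		delete(kv.data, k)
	}
	for k, v := range saves {
		kv.data[k] = v
	}
}

func main() {
	kv := &memKV{data: map[string]string{"a": "old"}}
	kv.MultiSaveAndRemove(map[string]string{"a": "new"}, []string{"a"})
	fmt.Println(kv.data["a"]) // "new": the removal ran first, then the save
}
```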
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Previously, diskSegmentMaxSize was used if and only if all of the collection's
vector fields were indexed with DiskANN.
Since sparse vectors cannot be indexed with DiskANN, after sparse vectors were
introduced, collections containing both dense and sparse vectors fell back to
maxSize.
This PR changes the requirement for using diskSegmentMaxSize to: all dense
vector fields are indexed with DiskANN, ignoring sparse vector fields (see the
sketch below).
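A hypothetical Go predicate sketching the new rule; the `Field` type and the `"DISKANN"` string check are illustrative stand-ins for the actual schema and index types:

```go
// Hypothetical sketch of the new rule: use diskSegmentMaxSize only when every
// *dense* vector field has a DiskANN index; sparse fields are skipped.
package main

import "fmt"

type Field struct {
	Name      string
	IsVector  bool
	IsSparse  bool
	IndexType string
}

func useDiskSegmentMaxSize(fields []Field) bool {
	for _, f := range fields {
		if !f.IsVector || f.IsSparse {
			continue // sparse vector fields no longer block the disk size limit
		}
		if f.IndexType != "DISKANN" {
			return false // a dense vector field without DiskANN falls back to maxSize
		}
	}
	return true
}

func main() {
	fields := []Field{
		{Name: "dense", IsVector: true, IndexType: "DISKANN"},
		{Name: "sparse", IsVector: true, IsSparse: true, IndexType: "SPARSE_INVERTED_INDEX"},
	}
	fmt.Println(useDiskSegmentMaxSize(fields)) // true: the sparse field is ignored
}
```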
See also: #43193
pr: #43194
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Cherry-pick from master
pr: #43321
Related to #43262
This patch fixes the following logic bugs:
- When multiple chunks are loaded and the size is not divisible by 8, simply
appending uint8_t bytes to the bitmap causes the null bitmap to become
dislocated
- `null_bitmap_data()` points to the start of the whole row group, which may
not correspond to the current `arrow::Array`
The current solution is:
- Reorganize the null bitmap with the correct size & offset
- Pass `array->offset()` in the tuple to convey the current offset (see the
sketch after this list)
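The fix itself lives in the C++ segcore; purely to illustrate the bit-realignment idea, a minimal Go sketch using Arrow's LSB-first bit order (all names hypothetical):

```go
// Illustration only (the real fix is in the C++ segcore): repack a validity
// bitmap that starts at a non-zero bit offset into a fresh bitmap aligned to
// bit 0, so chunks whose lengths are not multiples of 8 can be concatenated
// without dislocating the null bits.
package main

import "fmt"

func getBit(bitmap []byte, i int) bool { return bitmap[i/8]&(1<<(uint(i)%8)) != 0 }
func setBit(bitmap []byte, i int)      { bitmap[i/8] |= 1 << (uint(i) % 8) }

// realignBitmap copies `length` validity bits starting at bit `offset` into a
// new bitmap whose first valid bit is bit 0 (LSB-first, as Arrow stores it).
func realignBitmap(bitmap []byte, offset, length int) []byte {
	out := make([]byte, (length+7)/8)
	for i := 0; i < length; i++ {
		if getBit(bitmap, offset+i) {
			setBit(out, i)
		}
	}
	return out
}

func main() {
	// 12 bits of validity starting at bit offset 3 within the source buffer.
	src := []byte{0b10111000, 0b00001101}
	fmt.Printf("%08b\n", realignBitmap(src, 3, 12)) // realigned bytes, LSB-first
}
```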
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #42900
pr: #43179
Update: also handle Inf and NaN values, and the division-by-zero case, for
fp32 and fp64
Signed-off-by: Alexandr Guzhva <alexanderguzhva@gmail.com>
Cherry-pick from master
pr: #43126, #43243
Also bump the client version to v2.5.5 in preparation for the release
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #42098, #42404
pr: #42689
Fix a critical issue where concurrent balance-segment and balance-channel
operations cause delegator view inconsistency. When the shard leader switches
between the load and release phases of a segment balance, segments are loaded
on the old delegator but released on the new one, making the new delegator
unserviceable.
The root cause is that balance segment modifies delegator views; if those
modifications land on different delegators because of a leader change, the
delegator state is corrupted and query availability suffers.
Changes include:
- Add shardLeaderID field to SegmentTask to track delegator for load
- Record shard leader ID during segment loading in move operations
- Skip the release if the shard leader changed from the one used for loading
(see the sketch after this list)
- Add comprehensive unit tests for leader change scenarios
This ensures balance-segment operations are atomic on a single delegator,
preventing view corruption and keeping the delegator serviceable.
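A minimal Go sketch of the leader-change guard; `SegmentTask` exists in QueryCoord, but the fields and methods below are simplified stand-ins:

```go
// Hypothetical sketch of the leader-change guard on segment balance tasks.
package main

import "fmt"

type SegmentTask struct {
	SegmentID     int64
	shardLeaderID int64 // delegator node the load phase was issued to
}

// recordLoadLeader remembers which delegator served the load phase of a move.
func (t *SegmentTask) recordLoadLeader(leaderID int64) { t.shardLeaderID = leaderID }

// shouldRelease skips the release phase when the shard leader has changed
// since the load, keeping the whole balance atomic on a single delegator.
func (t *SegmentTask) shouldRelease(currentLeaderID int64) bool {
	return t.shardLeaderID == currentLeaderID
}

func main() {
	task := &SegmentTask{SegmentID: 100}
	task.recordLoadLeader(1)
	fmt.Println(task.shouldRelease(1)) // true: same delegator, safe to release
	fmt.Println(task.shouldRelease(2)) // false: leader changed, skip the release
}
```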
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
issue: #43107
pr: #43108
- Add checkLoadConfigChanges() to apply load config during startup
- Call config check in startQueryCoord() after restart
- Skip auto-updates for collections with user-specified replica numbers
- Add is_user_specified_replica_mode field to preserve user settings
- Add comprehensive unit tests with mockey
This ensures existing collections pick up the latest cluster-level config
after a restart; a sketch of the skip logic follows.
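A hypothetical sketch of the skip logic; the types and the cluster-level replica parameter are illustrative, not the actual QueryCoord structures:

```go
// Hypothetical sketch: apply cluster-level load config on startup while
// preserving user-specified replica numbers.
package main

import "fmt"

type CollectionLoadInfo struct {
	CollectionID               int64
	ReplicaNumber              int32
	IsUserSpecifiedReplicaMode bool // mirrors the is_user_specified_replica_mode field
}

// checkLoadConfigChanges aligns collections with the cluster-level replica
// config after a restart, skipping any collection the user configured directly.
func checkLoadConfigChanges(infos []*CollectionLoadInfo, clusterReplicaNum int32) {
	for _, info := range infos {
		if info.IsUserSpecifiedReplicaMode {
			continue // never override an explicit user setting
		}
		if info.ReplicaNumber != clusterReplicaNum {
			info.ReplicaNumber = clusterReplicaNum // adopt the latest cluster config
		}
	}
}

func main() {
	infos := []*CollectionLoadInfo{
		{CollectionID: 1, ReplicaNumber: 1},
		{CollectionID: 2, ReplicaNumber: 3, IsUserSpecifiedReplicaMode: true},
	}
	checkLoadConfigChanges(infos, 2)
	for _, info := range infos {
		fmt.Println(info.CollectionID, info.ReplicaNumber) // 1->2 updated, 2->3 kept
	}
}
```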
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
When an L0 compaction is executing, do not mark the stats task as failed; keep
it in the init state so it can be retried (a sketch of the decision follows).
issue: https://github.com/milvus-io/milvus/issues/43039
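A minimal sketch of the retry decision, with hypothetical state names standing in for the actual task states:

```go
// Hypothetical sketch of the retry decision for stats tasks.
package main

import "fmt"

type TaskState int

const (
	TaskInit TaskState = iota
	TaskFailed
)

// nextStatsTaskState keeps a stats task retryable while an L0 compaction is
// running on the segment, instead of marking it failed.
func nextStatsTaskState(l0CompactionRunning bool) TaskState {
	if l0CompactionRunning {
		return TaskInit // stay in init so the task is picked up again later
	}
	return TaskFailed
}

func main() {
	fmt.Println(nextStatsTaskState(true) == TaskInit) // true: retry later
}
```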
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
related: #42803
1. Add a new thread pool using folly::CPUThreadPoolExecutor, named
FThreadPools
2. Reading vectors from the chunk cache now uses the separate
CHUNKCACHE_POOL, so it is not influenced by collection loading
3. Note: for safety on the cloud side on 2.5.x, only read-chunk-cache
operations use this newly created thread pool; the other thread-pool call
sites will be migrated in the near future
4. The master branch doesn't need this PR, as the caching layer has unified
the chunk cache behaviour
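The actual change is in the C++ segcore using folly::CPUThreadPoolExecutor; purely to illustrate the pool-separation idea, a minimal Go sketch of a dedicated worker pool for chunk-cache reads (all names hypothetical):

```go
// Illustration only: a dedicated pool means chunk-cache reads never compete
// with collection-load work for the same workers.
package main

import (
	"fmt"
	"sync"
)

// pool is a fixed-size worker pool; tasks submitted to one pool never block
// behind tasks submitted to another.
type pool struct {
	tasks chan func()
	wg    sync.WaitGroup
}

func newPool(workers int) *pool {
	p := &pool{tasks: make(chan func(), 64)}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for task := range p.tasks {
				task()
			}
		}()
	}
	return p
}

func (p *pool) submit(task func()) { p.tasks <- task }
func (p *pool) close()             { close(p.tasks); p.wg.Wait() }

func main() {
	loadPool := newPool(4)       // collection-load work
	chunkCachePool := newPool(2) // dedicated CHUNKCACHE_POOL analogue
	chunkCachePool.submit(func() { fmt.Println("read vector from chunk cache") })
	loadPool.submit(func() { fmt.Println("load segment") })
	loadPool.close()
	chunkCachePool.close()
}
```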
Signed-off-by: MrPresent-Han <chun.han@gmail.com>
Co-authored-by: MrPresent-Han <chun.han@gmail.com>