Previously, diskSegmentMaxSize was used if and only if all of the
collection's vector fields were indexed with a DiskANN index.
Since sparse vectors cannot be indexed with DiskANN, a collection with
both dense and sparse vectors would fall back to maxSize.
This PR relaxes the requirement for using diskSegmentMaxSize: all dense
vector fields must be indexed with DiskANN, while sparse vector fields
are ignored.
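A minimal Go sketch of the new rule, assuming an illustrative schema
type (not the actual Milvus code):

```go
package main

import "fmt"

// vectorField is a simplified stand-in for the real schema field type.
type vectorField struct {
	isSparse  bool
	indexType string
}

// useDiskSegmentMaxSize reports whether diskSegmentMaxSize applies:
// every dense vector field must carry a DiskANN index; sparse vector
// fields are skipped, since they can never be indexed with DiskANN.
func useDiskSegmentMaxSize(fields []vectorField) bool {
	for _, f := range fields {
		if f.isSparse {
			continue
		}
		if f.indexType != "DISKANN" {
			return false
		}
	}
	return true
}

func main() {
	fields := []vectorField{
		{isSparse: false, indexType: "DISKANN"},
		{isSparse: true, indexType: "SPARSE_INVERTED_INDEX"},
	}
	fmt.Println(useDiskSegmentMaxSize(fields)) // true under the new rule
}
```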
See also: #43193
pr: #43194
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
issue: #43107
pr: #43108
- Add checkLoadConfigChanges() to apply load config during startup
- Call config check in startQueryCoord() after restart
- Skip auto-updates for collections with user-specified replica numbers
- Add is_user_specified_replica_mode field to preserve user settings
- Add comprehensive unit tests with mockey
Ensures existing collections pick up the latest cluster-level config
after a restart.
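A hedged Go sketch of the startup pass described above; the collection
fields and the checkLoadConfigChanges shape below are assumptions based
on this summary, not the exact implementation:

```go
package sketch

// collection is a simplified stand-in for querycoord's collection meta.
type collection struct {
	name                 string
	replicaNumber        int32
	userSpecifiedReplica bool // maps to the new is_user_specified_replica_mode field
}

// checkLoadConfigChanges applies the cluster-level replica config to
// collections loaded before the restart, skipping any collection whose
// replica number was explicitly set by the user.
func checkLoadConfigChanges(colls []*collection, clusterReplicaNum int32) {
	for _, c := range colls {
		if c.userSpecifiedReplica {
			continue // preserve user settings
		}
		c.replicaNumber = clusterReplicaNum // apply latest cluster config
	}
}
```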
---------
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
related: #42803
1. Add a new thread pool built on folly::CPUThreadPoolExecutor, named
FThreadPools.
2. Reading vectors from the chunk cache now uses the separate
CHUNKCACHE_POOL, so it is no longer affected by collection loading (a
conceptual sketch follows this list).
3. Note: for safety on the cloud side on 2.5.x, only read-chunk-cache
operations use this newly created thread pool; the other thread-pool
call sites will be migrated in the near future.
4. The master branch doesn't need this PR, as the caching layer has
unified the chunk cache behaviour.
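The change itself is C++ (folly::CPUThreadPoolExecutor), but the
isolation idea can be sketched in Go: give chunk-cache reads their own
fixed-size worker pool so they never queue behind collection-load work.
Everything below is a conceptual analogue, not the Milvus code:

```go
package sketch

// pool is a fixed-size worker pool: each instance owns its goroutines,
// isolated from every other pool in the process.
type pool struct {
	tasks chan func()
}

func newPool(workers int) *pool {
	p := &pool{tasks: make(chan func(), workers)}
	for i := 0; i < workers; i++ {
		go func() {
			for task := range p.tasks {
				task()
			}
		}()
	}
	return p
}

// Submit enqueues a task onto this pool only.
func (p *pool) Submit(task func()) { p.tasks <- task }

// Two independent pools: chunk-cache reads can no longer be starved by
// collection-load work, mirroring the dedicated CHUNKCACHE_POOL here.
var (
	loadPool       = newPool(16)
	chunkCachePool = newPool(8)
)
```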
Signed-off-by: MrPresent-Han <chun.han@gmail.com>
Co-authored-by: MrPresent-Han <chun.han@gmail.com>
1. Optimize the import process: skip subsequent steps and mark the task
as complete if the number of imported rows is 0 (see the sketch after
this list).
2. Improve import integration tests:
a. Add a test to verify that autoIDs are not duplicated
b. Add a test for the corner case where all data is deleted
c. Shorten test execution time
3. Enhance import logging:
a. Print imported segment information upon completion
b. Include file name in failure logs
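A minimal Go sketch of the zero-row short-circuit from item 1; the task
type and state value are illustrative, not the actual import code:

```go
package sketch

type importTask struct {
	importedRows int64
	state        string
}

// finishIfEmpty marks a task complete when nothing was imported, so all
// remaining import steps are skipped for it.
func finishIfEmpty(t *importTask) bool {
	if t.importedRows == 0 {
		t.state = "Completed"
		return true
	}
	return false
}
```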
issue: https://github.com/milvus-io/milvus/issues/42488,
https://github.com/milvus-io/milvus/issues/42518
pr: https://github.com/milvus-io/milvus/pull/42612
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #41874
pr: #41875
- Optimize balance_checker to support balancing multiple collections
simultaneously
- Add new parameters for segment and channel balancing batch sizes
- Add enableBalanceOnMultipleCollections parameter
- Update tests for balance checker
This change improves resource utilization by allowing the system to
balance multiple collections in a single trigger with configurable batch
sizes.
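A hedged Go sketch of a batched multi-collection balance pass; the
parameter names paraphrase the new config knobs
(enableBalanceOnMultipleCollections and the batch sizes) rather than
quoting the exact keys:

```go
package sketch

// Config knobs added by this PR (names paraphrased from the summary).
type balanceConfig struct {
	enableMultipleCollections bool
	segmentBatchSize          int // max segment move tasks per trigger
	channelBatchSize          int // max channel move tasks per trigger
}

type task struct{ collectionID int64 }

// planBalance walks collections until the per-trigger batch budget is
// exhausted, instead of stopping after the first collection.
func planBalance(cfg balanceConfig, collections []int64,
	segmentPlans func(collID int64) []task) []task {
	var out []task
	for _, id := range collections {
		out = append(out, segmentPlans(id)...)
		if len(out) >= cfg.segmentBatchSize {
			return out[:cfg.segmentBatchSize]
		}
		if !cfg.enableMultipleCollections {
			break // legacy behaviour: one collection per trigger
		}
	}
	return out
}
```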
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Cherry-pick from master
pr: #42109
Related to #40756
A large nq naturally increases query time, which produces lots of
slow-query logs when users' NQ values are very large.
This PR makes the slow-search check use the per-nq time (the average
value) to decide whether a request is slow.
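A minimal Go sketch of the per-nq criterion; the threshold plumbing is
illustrative:

```go
package sketch

import "time"

// isSlowSearch normalizes the elapsed time by nq, so a request with a
// huge batch of queries is not flagged slow just because of its size.
func isSlowSearch(elapsed time.Duration, nq int64, threshold time.Duration) bool {
	if nq <= 0 {
		nq = 1
	}
	avg := elapsed / time.Duration(nq) // average latency per query vector
	return avg > threshold
}
```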
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Assign the import task to the worker with the most available slots, even
if availableSlots < requiredSlots. This ensures tasks won’t be blocked
indefinitely.
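A minimal Go sketch of the assignment rule; the worker shape is
illustrative:

```go
package sketch

type worker struct {
	id             int64
	availableSlots int64
}

// pickWorker returns the worker with the most available slots, even if
// none can fully satisfy the task's required slots, so the task is
// never parked forever waiting for a big-enough worker.
func pickWorker(workers []worker) (worker, bool) {
	if len(workers) == 0 {
		return worker{}, false
	}
	best := workers[0]
	for _, w := range workers[1:] {
		if w.availableSlots > best.availableSlots {
			best = w
		}
	}
	return best, true
}
```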
issue: https://github.com/milvus-io/milvus/issues/41981
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #41816
pr: #41817
PR #37983 introduced an issue: if `defaultRootPassword` is not
specified in milvus.yaml, then `"Milvus"` (with literal quotes) is used
as the default password for the root user instead of `Milvus`.
This PR fixes the unexpected root password and adds a comment noting
that large numeric passwords must be wrapped in double quotes.
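A minimal Go reproduction of the pitfall; defaultIfUnset is
hypothetical, not the Milvus config code:

```go
package sketch

// defaultIfUnset returns the fallback when the YAML key is absent. The
// bug: the fallback literal was effectively "\"Milvus\"", so the root
// password contained literal quotes. The fix drops the embedded quotes.
// Separately, numeric-looking passwords in milvus.yaml must be double
// quoted, or YAML parses them as numbers.
func defaultIfUnset(configured string) string {
	if configured != "" {
		return configured
	}
	return "Milvus" // not "\"Milvus\""
}
```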
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
This PR fixes https://github.com/milvus-io/milvus/issues/41384 on 2.5.
Related to #41448.
When using the milvus client and compiling on Windows, compilation
failed with an undefined RSS error.
On Windows, memory usage can be obtained the same way as on Darwin.
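A portable sketch of reading the process RSS with gopsutil, which
implements MemoryInfo on Linux, Darwin, and Windows alike; whether the
actual fix uses gopsutil or a platform build tag is an assumption here:

```go
package main

import (
	"fmt"
	"os"

	"github.com/shirou/gopsutil/v3/process"
)

// usedMemory returns the current process's resident set size. The same
// call works on Darwin and Windows, avoiding any undefined RSS symbol.
func usedMemory() (uint64, error) {
	p, err := process.NewProcess(int32(os.Getpid()))
	if err != nil {
		return 0, err
	}
	info, err := p.MemoryInfo()
	if err != nil {
		return 0, err
	}
	return info.RSS, nil
}

func main() {
	rss, err := usedMemory()
	if err != nil {
		panic(err)
	}
	fmt.Printf("RSS: %d bytes\n", rss)
}
```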
Signed-off-by: Julien Salleyron <julien.salleyron@gmail.com>
When autoID is enabled, the preimport task estimates row distribution
by evenly dividing the total row count (numRows) across all vchannels:
`estimatedCount = numRows / vchannelNum`.
However, the actual import task hashes the real auto-generated IDs to
determine the target vchannel. This mismatch can make the estimated row
distribution inaccurate in corner cases such as:
- Importing 1 row into 2 vchannels:
• Preimport: 1 / 2 = 0 → both v0 and v1 are estimated to have 0 rows
• Import: the real autoID (e.g., 457975852966809057) hashes to v1
→ actual result: v0 = 0, v1 = 1
To resolve this corner case, we now allocate at least one segment for
each vchannel when autoID is enabled, ensuring every vchannel is
prepared to receive data even if no rows are estimated for it.
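A minimal Go sketch of the new allocation rule; names and the capacity
math are illustrative:

```go
package sketch

// segmentsPerVChannel estimates how many segments to pre-allocate for
// each vchannel. With autoID, hashing of the real generated IDs can
// route rows to any vchannel, so every vchannel gets at least one
// segment even when its estimated row count is zero.
func segmentsPerVChannel(numRows, vchannelNum, segmentCapacity int64, autoID bool) []int64 {
	out := make([]int64, vchannelNum)
	estimated := numRows / vchannelNum // per-vchannel estimate
	for i := range out {
		n := (estimated + segmentCapacity - 1) / segmentCapacity
		if autoID && n == 0 {
			n = 1 // guarantee a landing spot for hashed autoIDs
		}
		out[i] = n
	}
	return out
}
```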
issue: https://github.com/milvus-io/milvus/issues/41759
pr: https://github.com/milvus-io/milvus/pull/41771
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Refine drop partition through the new NotifyDropPartition interface in
datacoord.
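A hedged Go sketch of what the hook's shape could look like; the real
signature lives in the datacoord package and may differ:

```go
package sketch

import "context"

// PartitionNotifier is a guess at the shape of the new datacoord hook:
// callers invoke it when a partition is dropped so datacoord can clean
// up the partition's segments without polling for the change.
type PartitionNotifier interface {
	NotifyDropPartition(ctx context.Context, channel string, partitionIDs []int64) error
}
```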
issue: https://github.com/milvus-io/milvus/issues/41542
---------
Signed-off-by: Xianhui.Lin <xianhui.lin@zilliz.com>