Cherry-pick from master
pr: #45455
Related to #45445
Previously, FillFieldData for JSON fields would assert and fail when a
default_value was provided, blocking index creation for JSON fields with
default values (including dynamic fields like $meta).
This change enables JSON default value support by:
- Removing the assertion that blocked default values
- Parsing bytes_data into Json objects when default_value is present
- Properly filling data_ array and setting valid_data_ bitset to true
- Maintaining null behavior when no default_value is provided
Impact:
- Fixes index creation failure for JSON fields with default values
- Resolves upgrade issues from 2.5 to 2.6.5 where dynamic fields with
default values couldn't be indexed
- Index builds that were stuck in InProgress state can now complete
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Cherry-pick from master
pr: #45444
Related to #45338
When using bulk vector search in hybrid search with rerank functions,
the output field values for different queries were all equal to the
values returned by the first query, instead of the correct values
belonging to each document ID. The document IDs were correct, but the
entity field values were wrong.
In rerank functions (RRF, weighted, decay, model), when processing
multiple queries in a batch, the `idLocations` stored only the relative
offset within each result set (`idx`), not accounting for the absolute
position within the entire batch. This caused `FillFieldData` to
retrieve field data from the wrong positions, always using offsets
relative to the first query.
This fix ensures that when processing bulk searches with rerank
functions, each result correctly retrieves its corresponding field data
based on the absolute offset within the entire batch, resolving the
issue where all queries returned the first query's field values.
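For illustration, a minimal Go sketch of the offset bookkeeping described above; `searchResult` and `buildIDLocations` are hypothetical names, not the actual rerank code:

```go
package main

import "fmt"

// searchResult is an illustrative stand-in for one query's result set.
type searchResult struct {
	IDs []int64 // document IDs returned for one query in the batch
}

// buildIDLocations records, for each query, where each of its document IDs
// lives within the whole batch, so field values are later read from the
// correct rows instead of from the first query's rows.
func buildIDLocations(results []searchResult) []map[int64]int {
	locations := make([]map[int64]int, len(results))
	base := 0 // rows contributed by all previous queries in the batch
	for q, res := range results {
		locations[q] = make(map[int64]int, len(res.IDs))
		for idx, id := range res.IDs {
			// The bug: storing only idx (the relative offset) ignored base.
			locations[q][id] = base + idx
		}
		base += len(res.IDs)
	}
	return locations
}

func main() {
	batch := []searchResult{{IDs: []int64{10, 11}}, {IDs: []int64{20, 21}}}
	fmt.Println(buildIDLocations(batch)) // [map[10:0 11:1] map[20:2 21:3]]
}
```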
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Sliced arrow arrays report the parent array's full size via
SizeInBytes(), causing inaccurate memory estimates during compaction.
This resulted in segments closing prematurely in mergeSplit mode:
compactions expected to produce a single ~500MB segment instead produced
four segments of 100+MB each.
Fixed by calculating the actual byte size of sliced arrays, ensuring proper
segment sizing and more accurate memory usage tracking.
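For illustration, a minimal Go sketch (using the Apache Arrow Go library) of why a slice's buffer-based size overstates the data it covers, and how the actual bytes can be computed; `slicedBytes` is a hypothetical helper, not the function changed in this PR:

```go
package main

import (
	"fmt"

	"github.com/apache/arrow/go/v12/arrow"
	"github.com/apache/arrow/go/v12/arrow/array"
	"github.com/apache/arrow/go/v12/arrow/memory"
)

// slicedBytes estimates the bytes actually covered by arr, which may be a
// slice sharing buffers with a much larger parent array.
func slicedBytes(arr arrow.Array) int {
	switch a := arr.(type) {
	case *array.String:
		n := 0
		for i := 0; i < a.Len(); i++ {
			n += len(a.Value(i)) // only the values inside the slice window
		}
		return n
	default:
		if fw, ok := arr.DataType().(arrow.FixedWidthDataType); ok {
			return fw.BitWidth() / 8 * arr.Len()
		}
		return 0 // other types omitted from this sketch
	}
}

func main() {
	b := array.NewStringBuilder(memory.DefaultAllocator)
	defer b.Release()
	for i := 0; i < 1000; i++ {
		b.Append("some-reasonably-long-payload")
	}
	full := b.NewStringArray()
	defer full.Release()

	slice := array.NewSlice(full, 0, 10).(*array.String)
	defer slice.Release()

	// The slice shares the parent's buffers, so a buffer-based size estimate
	// counts all 1000 values; slicedBytes counts only the 10 in the slice.
	fmt.Println(slicedBytes(full), slicedBytes(slice))
}
```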
See also: #45293
pr: #45294
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
When there are no growing segments in the collection, L0 Compaction will
try to choose all L0 segments, which then hit all L1/L2 segments.
However, if a Sealed segment is still being flushed in the DataNode at
the moment L0 Compaction selects the qualifying L1/L2 segments, L0
Compaction ignores that segment because it is not yet in the Flushed
state. This is wrong and causes deletes to be missed on that Sealed segment.
The quick fix here is to fail the L0 compaction task whenever such a
Sealed segment is selected.
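A hypothetical Go sketch of that guard; the segment states and selection loop are simplified and do not mirror the real datacoord code:

```go
package main

import (
	"errors"
	"fmt"
)

type segmentState int

const (
	stateGrowing segmentState = iota
	stateSealed  // sealed but still being flushed on the DataNode
	stateFlushed
)

type segmentInfo struct {
	ID    int64
	Level int // 1 or 2 for target segments
	State segmentState
}

var errSealedTarget = errors.New("l0 compaction: selected segment is sealed but not flushed, failing task")

// pickL0Targets returns the L1/L2 segments an L0 compaction would apply
// deletes to, failing fast if any candidate is still Sealed (not Flushed),
// since its deletes would otherwise be silently skipped.
func pickL0Targets(segments []segmentInfo) ([]segmentInfo, error) {
	var targets []segmentInfo
	for _, s := range segments {
		if s.Level != 1 && s.Level != 2 {
			continue
		}
		if s.State == stateSealed {
			return nil, fmt.Errorf("%w: segment %d", errSealedTarget, s.ID)
		}
		if s.State == stateFlushed {
			targets = append(targets, s)
		}
	}
	return targets, nil
}

func main() {
	_, err := pickL0Targets([]segmentInfo{
		{ID: 1, Level: 1, State: stateFlushed},
		{ID: 2, Level: 1, State: stateSealed},
	})
	fmt.Println(err)
}
```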
See also: #45339
pr: #45340
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
Cherry-pick from master
pr: #45334
Related to #45333
Fix segment loading failure when adding fields with text match enabled.
The issue occurred because text indexes were being loaded before
FinishLoad() was called, meaning raw data was not properly available
when text index creation attempted to access it, resulting in "failed to
create text index, neither raw data nor index are found" errors.
The solution is to move the FinishLoad() call so it executes after raw data
loading but before text index loading. This ensures that:
1. Raw data is properly loaded and available in memory
2. Text indexes can access the raw data they need during creation
3. The segment is in the correct state before any index operations
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Cherry-pick from master
pr: #45330
Related to #45329
Fix compaction failure when handling newly added dynamic fields with
storage v1 binlogs. The issue occurred because the
`GenerateEmptyArrayFromSchema` function did not support JSON data type
default values, causing "Unexpected default value" errors during
compaction.
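For illustration, a hedged Go sketch of the missing case: a JSON default value (raw JSON bytes) repeated for every row of a newly added field. `defaultJSONColumn` is a hypothetical stand-in for the branch `GenerateEmptyArrayFromSchema` lacked, not the actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// defaultJSONColumn fills rows entries with the given default JSON document,
// validating it once up front; the types here are illustrative only.
func defaultJSONColumn(defaultValue []byte, rows int) ([][]byte, error) {
	if !json.Valid(defaultValue) {
		return nil, fmt.Errorf("unexpected default value: not valid JSON")
	}
	col := make([][]byte, rows)
	for i := range col {
		col[i] = defaultValue // every row gets the field's default document
	}
	return col, nil
}

func main() {
	col, err := defaultJSONColumn([]byte(`{"tag":"default"}`), 3)
	fmt.Println(len(col), err)
}
```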
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #45080, #45274, #45285
pr: #45290
- LoadCollection doesn't ignore the ignorable request when the field array
is wrong.
- CreateIndex doesn't ignore the ignorable request when the index is wrong.
- Index meta is not thread-safe.
- Missing parameter checks for DDL.
- The DDL Ack scheduler may get stuck, blocking DDL until the next incoming
DDL.
Signed-off-by: chyezh <chyezh@outlook.com>
issue: #43897, #44123
pr: #45266
also pick pr: #45237, #45264, #45244, #45275
fix: kafka should auto reset the offset to earliest (#45237)
issue: #44172, #45210, #44851, #45244
Kafka will auto-reset the offset to "latest" if the offset is
out of range, so the recovery of the Milvus WAL cannot read any messages
from it. Once the offset is out of range, Kafka should instead read from
"earliest" to pick up the latest uncleared data.
https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset
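For reference, a minimal consumer sketch using confluent-kafka-go showing the setting in question; the broker address and group ID are placeholders:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "milvus-wal-recovery",
		// With "latest", an out-of-range offset silently jumps past all
		// retained messages and WAL recovery reads nothing; "earliest"
		// replays the oldest uncleared data instead.
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		panic(err)
	}
	defer consumer.Close()
	fmt.Println("consumer configured with auto.offset.reset=earliest")
}
```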
enhance: support alter collection/database with WAL-based DDL framework
(#45266)
issue: #43897
- Alter collection/database is now implemented by the WAL-based DDL
framework.
- AlterCollection/AlterDatabase are now supported in the WAL.
- Alter operations can now be synced by the new CDC.
- Refactor some UTs for alter DDL.
fix: milvus role cannot stop at initializing state (#45244)
issue: #45243
fix: support upgrading from 2.6.x -> 2.6.5 (#45264)
issue: #43897
---------
Signed-off-by: chyezh <chyezh@outlook.com>
Related to #45282
Initialize `tsFrom` and `tsTo` fields in the composite binlog record
writer constructor to prevent timestamp range information loss in stats
tasks.
The composite binlog writer now properly initializes the timestamp range
fields, ensuring that:
1. The first timestamp update will correctly set the minimum (`tsFrom`)
2. The first timestamp update will correctly set the maximum (`tsTo`)
3. All subsequent data writes will maintain accurate timestamp range
tracking
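For illustration, a minimal Go sketch of the initialization described above; the writer struct and method names are simplified stand-ins, not the actual composite binlog record writer API:

```go
package main

import (
	"fmt"
	"math"
)

type binlogWriter struct {
	tsFrom uint64 // minimum timestamp seen so far
	tsTo   uint64 // maximum timestamp seen so far
}

// newBinlogWriter seeds the range so the first observed timestamp becomes
// both the minimum and the maximum; leaving both fields at zero would pin
// tsFrom to 0 forever and lose the real lower bound.
func newBinlogWriter() *binlogWriter {
	return &binlogWriter{tsFrom: math.MaxUint64, tsTo: 0}
}

func (w *binlogWriter) observe(ts uint64) {
	if ts < w.tsFrom {
		w.tsFrom = ts
	}
	if ts > w.tsTo {
		w.tsTo = ts
	}
}

func main() {
	w := newBinlogWriter()
	for _, ts := range []uint64{42, 7, 99} {
		w.observe(ts)
	}
	fmt.Println(w.tsFrom, w.tsTo) // 7 99
}
```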
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Cherry-pick from master
pr: #45263
Related to #43028
Initialize the schema version field when creating a new collection
instance in QueryNode. The schema version is extracted from loadMetaInfo
and assigned to the collection, ensuring proper schema version tracking
and consistency across the distributed system.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #43897, #44123
pr: #45224
also pick pr: #45216, #45154, #45033, #45145, #45092, #45058, #45029
enhance: Close channel replicator more gracefully (#45029)
issue: https://github.com/milvus-io/milvus/issues/44123
enhance: Show create time for import job (#45058)
issue: https://github.com/milvus-io/milvus/issues/45056
fix: wal state may be inconsistent after recovering from crash (#45092)
issue: #45088, #45086
- Messages on the control channel should trigger the checkpoint update.
- LastConfirmedMessageID should be recovered from the minimum of the
checkpoint or the LastConfirmedMessageID of uncommitted txns.
- Add more log info for wal debugging.
fix: make ack of broadcaster not cancelable by client (#45145)
issue: #45141
- Make the broadcaster's ack not cancelable by RPC.
- Clone the assignment snapshot of the WAL balancer.
- Add server ID to GetReplicateCheckpoint to avoid failure.
enhance: support collection and index with WAL-based DDL framework
(#45033)
issue: #43897
- Part of the collection/index related DDL is now implemented by the
WAL-based DDL framework.
- Support the following message types in the WAL: CreateCollection,
DropCollection, CreatePartition, DropPartition, CreateIndex, AlterIndex,
DropIndex.
- Part of the collection/index related DDL can now be synced by the new CDC.
- Refactor some UTs for collection/index DDL.
- Add a Tombstone scheduler to manage tombstone GC for collection and
partition meta.
- Move the vchannel allocation into the streaming pchannel manager.
enhance: support load/release collection/partition with WAL-based DDL
framework (#45154)
issue: #43897
- Load/Release collection/partition is now implemented by the WAL-based DDL
framework.
- AlterLoadConfig/DropLoadConfig are now supported in the WAL.
- Load/Release operations can now be synced by the new CDC.
- Refactor some UTs for load/release DDL.
enhance: Don't start cdc by default (#45216)
issue: https://github.com/milvus-io/milvus/issues/44123
fix: unrecoverable when replicating from old (#45224)
issue: #44962
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Signed-off-by: chyezh <chyezh@outlook.com>
Co-authored-by: yihao.dai <yihao.dai@zilliz.com>