issue: #46393
An RO node can be created from two sources: stopping a QueryNode or a
replica node transfer (e.g., suspending a node). Before this fix, two
defects and one constraint combined to cause a deadlock:
Defects:
1. LeaderChecker does not sync segment distribution to RO nodes
2. Scheduler only cancels tasks on stopping nodes, not RO nodes
Constraint:
- Balance channel task blocks waiting for new delegator to become
serviceable (via sync segment) before executing release action
Deadlock scenario:
When target node becomes RO node (but not stopping) during balance
channel execution, the task gets stuck because:
- Cannot sync segment to RO node (defect 1) -> task blocks
- Task is not cancelled since node is not stopping (defect 2)
PR #45949 attempted to fix defect 1 but was not successful.
This PR unifies RO node handling by:
- LeaderChecker: only sync segment distribution to RW nodes
- Scheduler: cancel task when target node becomes RO node
- Simplify checkStale logic with unified node state checking
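The unified check can be sketched as follows. This is an illustrative Python model, not the actual Go scheduler code; the state names and `should_cancel` helper are assumptions:

```python
# Illustrative sketch: a task is cancelled whenever its target node
# leaves the RW state, which covers both "stopping" and "read-only"
# nodes with a single unified check.
from enum import Enum

class NodeState(Enum):
    RW = "rw"
    RO = "ro"
    STOPPING = "stopping"

def should_cancel(target_state: NodeState) -> bool:
    # Before the fix, only STOPPING triggered cancellation, so a task
    # whose target turned RO could block forever waiting for a segment
    # sync that the LeaderChecker never sends to RO nodes.
    return target_state != NodeState.RW
```

With this rule, both halves of the deadlock scenario are resolved by the same condition.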
Signed-off-by: Wei Liu <wei.liu@zilliz.com>
Related to #46358
Refactor segment loading to use a unified diff-based approach for both
initial Load and Reopen operations:
- Extract ApplyLoadDiff from Reopen to share loading logic
- Add GetLoadDiff to compute diff from empty state for initial load
- Change column_groups_to_load from map to vector<pair> to preserve
order
- Add validation for empty index file paths in diff computation
- Add comprehensive unit tests for GetLoadDiff
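The diff-based idea above can be sketched in a few lines of Python (names are illustrative; the real code is C++ in segcore): both initial Load and Reopen reduce to "apply the diff between an old and a new load state", and initial load is simply a diff against the empty state.

```python
# Hypothetical sketch of diff-based loading. Column groups are kept as
# an ordered list of (group_id, files) pairs -- the vector<pair> form --
# so load order is preserved, unlike a map.
def compute_load_diff(old_groups, new_groups):
    """Return (to_load, to_drop) between two load states."""
    old_ids = {gid for gid, _ in old_groups}
    new_ids = {gid for gid, _ in new_groups}
    to_load = [(gid, files) for gid, files in new_groups if gid not in old_ids]
    to_drop = [gid for gid, _ in old_groups if gid not in new_ids]
    return to_load, to_drop

def get_load_diff(new_groups):
    # Initial load: diffing against the empty state loads everything.
    return compute_load_diff([], new_groups)
```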
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Performance**
* Improved segment loading efficiency through incremental updates,
reducing memory overhead and enhancing performance during data updates.
* **Tests**
* Expanded test coverage for load operation scenarios.
<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
related: #45993
This commit extends nullable vector support to the proxy layer and
querynode, and adds comprehensive validation, search reduce, and field
data handling for nullable vectors with sparse storage.
Proxy layer changes:
- Update validate_util.go checkAligned() with getExpectedVectorRows()
helper
to validate nullable vector field alignment using valid data count
- Update checkFloatVectorFieldData/checkSparseFloatVectorFieldData for
nullable vector validation with proper row count expectations
- Add FieldDataIdxComputer in typeutil/schema.go for logical-to-physical
index translation during search reduce operations
- Update search_reduce_util.go reduceSearchResultData to use
idxComputers
for correct field data indexing with nullable vectors
- Update task.go, task_query.go, task_upsert.go for nullable vector
handling
- Update msg_pack.go with nullable vector field data processing
QueryNode layer changes:
- Update segments/result.go for nullable vector result handling
- Update segments/search_reduce.go with nullable vector offset
translation
Storage and index changes:
- Update data_codec.go and utils.go for nullable vector serialization
- Update indexcgowrapper/dataset.go and index.go for nullable vector
indexing
Utility changes:
- Add FieldDataIdxComputer struct with Compute() method for efficient
logical-to-physical index mapping across multiple field data
- Update EstimateEntitySize() and AppendFieldData() with fieldIdxs
parameter
- Update funcutil.go with nullable vector support functions
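The logical-to-physical translation can be sketched as below. This is an illustrative Python model of the idea, not the Go `FieldDataIdxComputer` in typeutil/schema.go; the class and method names here are assumptions. With sparse storage only valid rows are stored physically, so the physical index of a valid logical row is the count of valid rows before it:

```python
import itertools

class FieldDataIdxComputer:
    """Maps a logical row index to its physical index in a sparsely
    stored nullable vector column (None for null rows)."""

    def __init__(self, valid_data):
        self.valid = valid_data
        # prefix[i] = number of valid rows before logical row i
        self.prefix = [0] + list(itertools.accumulate(int(v) for v in valid_data))

    def compute(self, logical_idx):
        if not self.valid[logical_idx]:
            return None  # null row: no physical storage
        return self.prefix[logical_idx]
```

During search reduce, only valid rows are dereferenced, so invalid rows never touch the physical field data.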
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Full support for nullable vector fields (float, binary, float16,
bfloat16, int8, sparse) across ingest, storage, indexing, search and
retrieval; logical↔physical offset mapping preserves row semantics.
* Client: compaction control and compaction-state APIs.
* **Bug Fixes**
* Improved validation for adding vector fields (nullable + dimension
checks) and corrected search/query behavior for nullable vectors.
* **Chores**
* Persisted validity maps with indexes and on-disk formats.
* **Tests**
* Extensive new and updated end-to-end nullable-vector tests.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: marcelo-cjl <marcelo.chen@zilliz.com>
### **User description**
Issue: #46504
test: create e2e test case for highlighter
On branch feature/highlighter
Changes to be committed:
new file: milvus_client/test_milvus_client_highlighter.py
___
### **PR Type**
Tests
___
### **Description**
- Add comprehensive e2e test suite for LexicalHighlighter functionality
- Test highlighter initialization with collection setup and data
insertion
- Validate highlighter with various parameters (tags, fragments,
offsets)
- Test edge cases including Chinese characters, long text, and invalid
inputs
- Verify error handling for invalid fragment sizes, offsets, and
configurations
___
### Diagram Walkthrough
```mermaid
flowchart LR
A["Test Suite Setup"] --> B["Highlighter Init Tests"]
B --> C["Valid Test Cases"]
C --> D["Fragment Parameters"]
C --> E["Search Variations"]
C --> F["Language Support"]
B --> G["Invalid Test Cases"]
G --> H["Parameter Validation"]
G --> I["Error Handling"]
```
<details><summary><h3>File Walkthrough</h3></summary>
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Tests</strong></td><td><table>
<tr>
<td>
<details>
<summary><strong>test_milvus_client_highlighter.py</strong><dd><code>Add
comprehensive LexicalHighlighter e2e test suite</code>
</dd></summary>
<hr>
tests/python_client/milvus_client/test_milvus_client_highlighter.py
<ul><li>Create new test file with 1163 lines of comprehensive
highlighter test <br>cases<br> <li> Implement
<code>TestMilvusClientHighlighterInit</code> class to initialize
<br>collection with pre-defined test data including English, Chinese,
and <br>long text samples<br> <li> Implement
<code>TestMilvusClientHighlighterValid</code> class with 15+ test
methods <br>covering basic usage, multiple tags, fragment parameters,
offsets, <br>numbers, sentences, and language support<br> <li> Implement
<code>TestMilvusClientHighlighterInvalid</code> class with 8+ test
<br>methods validating error handling for invalid parameters and
<br>configurations<br> <li> Test highlighter with BM25 search, text
matching, and various analyzer <br>configurations</ul>
</details>
</td>
<td><a
href="https://github.com/milvus-io/milvus/pull/46505/files#diff-443e3fefb65fbdb088d5920083306ecfe3605745b1e2714198c6566ca67b3736">+1163/-0</a></td>
</tr>
</table></td></tr></tbody></table>
</details>
___
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Tests**
* Added a comprehensive highlighter test suite covering:
- Core highlighting with single and multi-analyzer setups and multi-tag
variations
- Fragment parameter behaviors and edge cases (size, offset, count)
- Text-match and query-based highlighting, including BM25 and vector
interactions
- Sub-word, long-text/tag, case sensitivity, Chinese/multi-language
scenarios
- Error handling for invalid parameters, no-match cases, and other edge
conditions
- Module-scoped fixture preparing multilingual, long-form test data and
teardown
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Eric Hou <eric.hou@zilliz.com>
Co-authored-by: Eric Hou <eric.hou@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/45525
see added README.md for added optimizations
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Added query expression optimization feature with a new `optimizeExpr`
configuration flag to enable automatic simplification of filter
predicates, including range predicate optimization, merging of IN/NOT IN
conditions, and flattening of nested logical operators.
* **Bug Fixes**
* Adjusted delete operation behavior to correctly handle expression
evaluation.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
issue: #46274
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Performance Improvements**
* Field-level text index creation and JSON-key statistics now run
concurrently, reducing overall indexing time and speeding task
completion.
* **Observability Enhancements**
* Per-task and per-field logging expanded with richer context and
per-phase elapsed-time reporting for improved monitoring and
diagnostics.
* **Refactor**
* Node slot handling simplified to compute slot counts on demand instead
of storing them.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
relate: https://github.com/milvus-io/milvus/issues/42589
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
## New Features
- Added `concurrency_per_cpu_core` configuration parameter for the
analyzer component, enabling customizable per-CPU concurrency tuning
(default: 8).
## Tests
- Added test coverage for batch analysis operations.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: aoiasd <zhicheng.yue@zilliz.com>
### **User description**
As of https://github.com/milvus-io/pymilvus/pull/2976 `milvus-lite` is
no longer included as part of the default `pymilvus` installation and
must be explicitly specified
___
### **PR Type**
Documentation
___
### **Description**
- Update installation docs to reflect milvus-lite as optional dependency
- Clarify explicit installation requirement for Milvus Lite
functionality
___
### Diagram Walkthrough
```mermaid
flowchart LR
A["pymilvus default install"] -- "does not include" --> B["Milvus Lite"]
C["pymilvus[milvus-lite]"] -- "includes" --> B
B -- "enables" --> D["Local vector database"]
```
<details><summary><h3>File Walkthrough</h3></summary>
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Documentation</strong></td><td><table>
<tr>
<td>
<details>
<summary><strong>README.md</strong><dd><code>Clarify Milvus Lite as
optional dependency</code>
</dd></summary>
<hr>
README.md
<ul><li>Updated installation documentation to clarify that
<code>milvus-lite</code> is no <br>longer included by default<br> <li>
Changed wording from "includes Milvus Lite" to "can try Milvus Lite by
<br>installing <code>pymilvus[milvus-lite]</code>"<br> <li> Reflects the
breaking change from pymilvus PR #2976 where milvus-lite <br>became an
optional dependency</ul>
</details>
</td>
<td><a
href="https://github.com/milvus-io/milvus/pull/46474/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5">+1/-1</a>
</td>
</tr>
</table></td></tr></tbody></table>
</details>
___
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Documentation**
* Updated Milvus Lite installation guidance to specify the proper
installation method using the optional feature specification.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Nathan Weinberg <nathan2@stwmd.net>
issue: #45486
Introduce row group batching to reduce cache cell granularity and
improve memory & disk efficiency. Previously, each parquet row group
mapped 1:1 to a cache cell. Now, up to `kRowGroupsPerCell` (4) row
groups are merged into one cell. This reduces the number of cache cells
(and associated overhead) by ~4x while maintaining the same data
granularity for loading.
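The mapping arithmetic is simple enough to state directly; the helper names below are illustrative, not the actual C++ symbols:

```python
# With kRowGroupsPerCell = 4, row group rg lives in cell rg // 4, and a
# file with N row groups needs ceil(N / 4) cells -- roughly 4x fewer
# cells (and less per-cell overhead) than the old 1:1 mapping.
K_ROW_GROUPS_PER_CELL = 4

def cell_of_row_group(rg: int) -> int:
    return rg // K_ROW_GROUPS_PER_CELL

def num_cells(num_row_groups: int) -> int:
    # ceiling division: the last cell may hold fewer than 4 row groups
    return (num_row_groups + K_ROW_GROUPS_PER_CELL - 1) // K_ROW_GROUPS_PER_CELL
```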
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Refactor**
* Switched to cell-based grouping that merges multiple row groups for
more efficient multi-file aggregation and reads.
* Chunk loading now combines multiple source batches/tables per cell and
better supports mmap-backed storage.
* **New Features**
* Exposed helpers to query row-group ranges and global row-group offsets
for diagnostics and testing.
* Translators now accept chunk-type and mmap/load hints to control
on-disk vs in-memory behavior.
* **Bug Fixes**
* Improved bounds checks and clearer error messages for out-of-range
cell requests.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Shawn Wang <shawn.wang@zilliz.com>
### **User description**
Related to #44956
Add manifest-based data loading path for optional fields in
`cache_opt_field_memory_v2`. When a manifest file is provided in the
config, the function now retrieves field data directly from the manifest
using `GetFieldDatasFromManifest` instead of reading from segment insert
files. This enables storage v2 compatibility for building indexes with
optional fields.
___
### **PR Type**
Enhancement
___
### **Description**
- Add manifest-based data loading for optional fields in index building
- Support storage v2 compatibility via `GetFieldDatasFromManifest`
function
- Enable PK isolation optional field handling without segment insert
files
___
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/44399
this PR also adds `ByteSize()` methods for scalar indexes. These are
currently not used in Milvus code, but are used in the scalar benchmark
and may be used by cachinglayer in the future.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Refactor**
* Improved and standardized memory-size computation and caching across
index types so reported index footprints are more accurate and
consistent.
* **Chores**
* Ensured byte-size metrics are refreshed immediately after index
build/load operations to keep memory accounting in sync with runtime
state.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
This commit refines L0 compaction to ensure data consistency by properly
setting the delete position boundary for L1 segment selection.
Key Changes:
1. L0 View Trigger Sets latestDeletePos for L1 Selection
2. Filter L0 Segments by Growing Segment Position in policy, not in
views
3. Renamed LevelZeroSegmentsView to LevelZeroCompactionView
4. Renamed fields for semantic clarity:
   - segments -> l0Segments
   - earliestGrowingSegmentPos -> latestDeletePos
5. Update Default Compaction Prioritizer to level
See also: #46434
---------
Signed-off-by: yangxuan <xuan.yang@zilliz.com>
This PR improves the robustness of object storage operations by
retrying explicit throttling errors (e.g., HTTP 429, SlowDown,
ServerBusy).
These errors commonly occur under high concurrency and are typically
recoverable with bounded retries.
issue: https://github.com/milvus-io/milvus/issues/44772
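The retry shape described above can be sketched as follows. This is an illustrative Python model, not the actual Go storage code; the marker list, attempt limit, and function names are assumptions:

```python
import time

# Throttling responses commonly surfaced by object stores; under high
# concurrency these are transient and recoverable with bounded retries.
RETRYABLE_MARKERS = ("429", "SlowDown", "ServerBusy")

def is_retryable(err: Exception) -> bool:
    return any(m in str(err) for m in RETRYABLE_MARKERS)

def read_with_retry(read_fn, max_attempts=3, backoff_s=0.0):
    """Run read_fn, retrying throttling errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return read_fn()
        except Exception as e:
            # Non-retryable errors, or the final attempt, propagate.
            if not is_retryable(e) or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_s * (2 ** attempt))
```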
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Configurable retry support for reads from object storage and improved
mapping of transient/rate-limit errors.
* Added a retryable reader wrapper used by CSV/JSON/Parquet/Numpy import
paths.
* **Configuration**
* New parameter to control storage read retry attempts.
* **Tests**
* Expanded unit tests covering error mapping and retry behaviors across
storage backends.
* Standardized mock readers and test initialization to simplify test
setups.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
issue: #46349
When using brute-force search, the iterator results from multiple chunks
are merged; at that point, we need to pay attention to how the metric
affects result ranking.
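A minimal sketch of the metric-aware merge, in illustrative Python (metric names follow common Milvus usage; the function is an assumption, not the actual implementation): distance metrics like L2 rank smaller-is-better, while similarity metrics like IP rank larger-is-better, and the merge must sort accordingly.

```python
def merge_topk(chunk_results, metric: str, k: int):
    """chunk_results: list of per-chunk [(score, id), ...] lists.
    Returns the global top-k ids, honoring the metric's ordering."""
    larger_is_better = metric in ("IP", "COSINE", "BM25")
    all_hits = [hit for chunk in chunk_results for hit in chunk]
    # Sort descending for similarity metrics, ascending for distances.
    all_hits.sort(key=lambda h: h[0], reverse=larger_is_better)
    return [hid for _, hid in all_hits[:k]]
```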
Signed-off-by: xianliang.li <xianliang.li@zilliz.com>
generated a library that wraps the Go expr parser and embedded it into
libmilvus-core.so
issue: https://github.com/milvus-io/milvus/issues/45702
see `internal/core/src/plan/milvus_plan_parser.h` for the exposed
interface
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Introduced C++ API for plan parsing with schema registration and
expression parsing capabilities.
* Plan parser now available as shared libraries instead of a standalone
binary tool.
* **Refactor**
* Reorganized build system to produce shared library artifacts instead
of executable binaries.
* Build outputs relocated to standardized library and include
directories.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
### **User description**
issue: #46507
we use the assign/unassign API to manage the consumer manually; the
commit operation would generate a new consumer group, which is not what
we want, so we disable auto commit to avoid it. Also see:
https://github.com/confluentinc/confluent-kafka-python/issues/250#issuecomment-331377925
___
### **PR Type**
Bug fix
___
### **Description**
- Disable auto-commit in Kafka consumer configuration
- Prevents unwanted consumer group creation from manual offset
management
- Clarifies offset reset behavior with explanatory comments
___
### Diagram Walkthrough
```mermaid
flowchart LR
A["Kafka Consumer Config"] --> B["Set enable.auto.commit to false"]
B --> C["Prevent auto consumer group creation"]
A --> D["Set auto.offset.reset to earliest"]
D --> E["Handle deleted offsets gracefully"]
```
<details><summary><h3>File Walkthrough</h3></summary>
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Bug
fix</strong></td><td><table>
<tr>
<td>
<details>
<summary><strong>builder.go</strong><dd><code>Disable auto-commit and
add configuration comments</code>
</dd></summary>
<hr>
pkg/streaming/walimpls/impls/kafka/builder.go
<ul><li>Added <code>enable.auto.commit</code> configuration set to
<code>false</code> to prevent <br>automatic consumer group creation<br>
<li> Added explanatory comments for both <code>auto.offset.reset</code>
and <br><code>enable.auto.commit</code> settings<br> <li> Clarifies that
manual assign/unassign API is used for consumer <br>management</ul>
</details>
</td>
<td><a
href="https://github.com/milvus-io/milvus/pull/46508/files#diff-4b5635821fdc8b585d16c02d8a3b59079d8e667b2be43a073265112d72701add">+7/-0</a>
</td>
</tr>
</table></td></tr></tbody></table>
</details>
___
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
## Bug Fixes
* Kafka consumer now reads from the earliest available messages and
auto-commit has been disabled to support manual offset management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: chyezh <chyezh@outlook.com>
### **PR Type**
Enhancement
___
### **Description**
- Only flush and fence segments for schema-changing alter collection
messages
- Skip segment sealing for collection property-only alterations
- Add conditional check using messageutil.IsSchemaChange utility
function
___
### Diagram Walkthrough
```mermaid
flowchart LR
A["Alter Collection Message"] --> B{"Is Schema Change?"}
B -->|Yes| C["Flush and Fence Segments"]
B -->|No| D["Skip Segment Operations"]
C --> E["Set Flushed Segment IDs"]
D --> E
E --> F["Append Operation"]
```
<details><summary><h3>File Walkthrough</h3></summary>
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Enhancement</strong></td><td><table>
<tr>
<td>
<details>
<summary><strong>shard_interceptor.go</strong><dd><code>Conditional
segment sealing based on schema changes</code>
</dd></summary>
<hr>
internal/streamingnode/server/wal/interceptors/shard/shard_interceptor.go
<ul><li>Added import for <code>messageutil</code> package to access
schema change detection <br>utility<br> <li> Modified
<code>handleAlterCollection</code> to conditionally flush and fence
<br>segments only for schema-changing messages<br> <li> Wrapped segment
flushing logic in <code>if
</code><br><code>messageutil.IsSchemaChange(header)</code> check<br>
<li> Skips unnecessary segment sealing when only collection properties
are <br>altered</ul>
</details>
</td>
<td><a
href="https://github.com/milvus-io/milvus/pull/46488/files#diff-c1acf785e5b530e59137b21584cf567ccd9aeeb613fb3684294b439289e80beb">+9/-3</a>
</td>
</tr>
</table></td></tr></tbody></table>
</details>
___
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Optimized collection schema alteration to conditionally perform
segment allocation operations only when schema changes are detected,
reducing unnecessary overhead in unmodified collection scenarios.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Cai Zhang <cai.zhang@zilliz.com>
issue: #46451
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Session versioning added to validate coordinator compatibility during
registration and active takeover.
* **Changes**
* Active–standby flow simplified: standby-to-active activation now
always enabled and initialized unconditionally.
* Registration uses version-aware transactions to ensure version
consistency during takeover.
* Startup/health startup path streamlined.
* **Tests**
* Added version-key integration test; removed test for disabling
active-standby.
* Updated flush test to assert rate-limiter errors occur.
* **Chores**
* Removed centralized connection manager and its test suite.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: chyezh <chyezh@outlook.com>
### **User description**
issue: #46097
- the flush rate was changed to 4 QPS, so the test case fails.
___
### **PR Type**
Tests, Bug fix
___
### **Description**
- Replace sequential flush calls with concurrent requests to trigger
rate limiting
- Add sync.WaitGroup for concurrent goroutine execution
- Check for rate limit errors across multiple concurrent flush
operations
- Remove hardcoded error message expectation for flexibility
___
### Diagram Walkthrough
```mermaid
flowchart LR
A["Sequential Flush Calls"] -->|Replace with| B["Concurrent Flush Requests"]
B -->|Use| C["sync.WaitGroup"]
C -->|Validate| D["Rate Limit Errors"]
```
<details><summary><h3>File Walkthrough</h3></summary>
<table><thead><tr><th></th><th align="left">Relevant
files</th></tr></thead><tbody><tr><td><strong>Tests</strong></td><td><table>
<tr>
<td>
<details>
<summary><strong>insert_test.go</strong><dd><code>Refactor flush rate
test to use concurrent requests</code>
</dd></summary>
<hr>
tests/go_client/testcases/insert_test.go
<ul><li>Added <code>sync</code> package import for concurrent goroutine
synchronization<br> <li> Replaced sequential flush calls with 10
concurrent flush operations <br>using goroutines<br> <li> Implemented
WaitGroup to synchronize all concurrent flush requests<br> <li> Modified
error validation to check for rate limit errors across all
<br>concurrent attempts instead of expecting specific sequential
behavior<br> <li> Relaxed error message matching to only check for "rate
limit exceeded" <br>substring</ul>
</details>
</td>
<td><a
href="https://github.com/milvus-io/milvus/pull/46497/files#diff-89a4ddfa15d096e6a5f647da0e461715e5a692b375b04a3d01939f419b00f529">+19/-4</a>
</td>
</tr>
</table></td></tr></tbody></table>
</details>
___
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
## Release Notes
* **Tests**
* Enhanced testing of concurrent flush operations to improve validation
of system reliability under concurrent load scenarios.
---
**Note:** This release contains internal testing improvements with no
direct user-facing feature changes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: chyezh <chyezh@outlook.com>
AddProxyClients now removes clients not in the new snapshot before
adding new ones. This ensures proper cleanup when ProxyWatcher
re-watches etcd.
issue: https://github.com/milvus-io/milvus/issues/46397
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
Related to #44956
When loading column groups with mmap enabled, the
ManifestGroupTranslator needs the mmap directory path to properly handle
memory-mapped data loading. This change retrieves the root path from
LocalChunkManagerSingleton and passes it to the translator during
construction.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Add ManifestPath field to SegmentInfo in GetRecoveryInfoV2 response,
enabling QueryCoord to detect manifest path changes and trigger segment
reopen for storage v2 incremental updates.
Related to #46394
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: https://github.com/milvus-io/milvus/issues/42053
The split literals in `match` execution should be combined with `and`
semantics rather than `or`.
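In other words (illustrative Python sketch, not the actual executor code): when a match literal is split into multiple tokens, a document matches only if it contains all of the tokens, not any one of them.

```python
def match_document(doc_tokens, query_tokens):
    """AND semantics: every split query token must appear in the doc."""
    doc = set(doc_tokens)
    return all(tok in doc for tok in query_tokens)
```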
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
issue: https://github.com/milvus-io/milvus/issues/45890
ComputePhraseMatchSlop accepts three params:
1. A string: the query text
2. Some strings: the data texts
3. Analyzer params
The slop is calculated for the query text against each data text in the
context of phrase match, where both are tokenized with the tokenizer
built from the analyzer params.
Two arrays are returned:
1. is_match: whether the phrase match can succeed
2. slop: the corresponding slop if the phrase match succeeds, or -1 if
it cannot
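A simplified, illustrative model of the computation (the real implementation tokenizes with the given analyzer params; here token lists are taken directly, and slop is modeled as the minimal number of extra gaps needed for the query tokens to appear in order):

```python
def phrase_match_slop(query_tokens, data_tokens):
    """Minimal slop for an in-order occurrence of the query, or -1."""
    best = -1
    n = len(query_tokens)

    def search(qi, pos, start):
        nonlocal best
        if qi == n:
            # window length minus the exact-phrase length = extra gaps
            slop = (pos - 1 - start) - (n - 1)
            if best == -1 or slop < best:
                best = slop
            return
        for p in range(pos, len(data_tokens)):
            if data_tokens[p] == query_tokens[qi]:
                search(qi + 1, p + 1, p if qi == 0 else start)

    search(0, 0, 0)
    return best

def compute_phrase_match_slop(query_tokens, data_texts_tokens):
    slops = [phrase_match_slop(query_tokens, d) for d in data_texts_tokens]
    return [s >= 0 for s in slops], slops
```

This sketch only covers in-order occurrences; real phrase-match slop semantics (e.g., token transpositions) may differ.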
---------
Signed-off-by: SpadeA <tangchenjie1210@gmail.com>
issue: #46443
Add `Forbidden: true` to all tiered storage related parameters to
prevent runtime configuration changes via etcd. These parameters are
marked as refreshable:"false", but that tag was documentation only; the
actual prevention requires the Forbidden field.
Without this fix, if tiered storage parameters are modified at runtime:
- Go side would read the new values dynamically
- C++ caching layer would still use the old values (set at InitQueryNode
time)
- This mismatch could cause resource tracking issues and anomalies
Signed-off-by: Shawn Wang <shawn.wang@zilliz.com>
Related to #46453
The test was flaky because Submit() returns a Future and executes
asynchronously. The test was setting sig=true immediately after Submit()
returned, but the task's Run() might not have completed yet, causing
mock expectation failures.
Fix by calling future.Await() to wait for task execution to complete
before signaling. Also remove dead commented code.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #44614
Previous PR: #44666
Bump etcd version in pkg/go.mod to 3.5.23 and update test code
accordingly
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Set the auto-appended dynamic field to be nullable with a default value
of empty JSON object `{}`. This allows collections with dynamic schema
to handle rows that don't have any dynamic fields more gracefully,
avoiding potential null reference issues when the dynamic field is not
explicitly set during insert.
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Related to #46358
Add segment reopen mechanism in QueryCoord to handle segment data
updates when the manifest path changes. This enables QueryNode to reload
segment data without full segment reload, supporting storage v2
incremental updates.
Changes:
- Add ActionTypeReopen action type and LoadScope_Reopen in protobuf
- Track ManifestPath in segment distribution metadata
- Add CheckSegmentDataReady utility to verify segment data matches
target
- Extend getSealedSegmentDiff to detect segments needing reopen
- Create segment reopen tasks when manifest path differs from target
- Block target update until segment data is ready
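The reopen-detection rule can be sketched as follows (illustrative Python; the real logic lives in getSealedSegmentDiff, and the dict-based shape here is an assumption): a loaded segment whose distributed manifest path differs from the target's manifest path gets a Reopen task instead of a full release-and-load.

```python
def segments_to_reopen(dist, target):
    """dist/target: segment_id -> manifest_path mappings.
    Returns segments loaded in dist whose manifest path lags the target."""
    return sorted(
        seg_id
        for seg_id, path in dist.items()
        if seg_id in target and target[seg_id] != path
    )
```

Segments missing from the target entirely remain ordinary release candidates; only manifest-path drift triggers reopen.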
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Issue: #46333
test: rewrite the timestamp conversion logic to cover daylight saving
time
Signed-off-by: Eric Hou <eric.hou@zilliz.com>
Co-authored-by: Eric Hou <eric.hou@zilliz.com>
issue: #43897
also for issue: #46166
Add an ack_sync_up flag to the broadcast message header, indicating
whether the broadcast operation needs to be synced up between the
streaming node and the coordinator.
If ack_sync_up is false, the broadcast operation is acked once the
recovery storage sees the message at the current vchannel; the fast-ack
optimization can be applied to speed up the broadcast.
If ack_sync_up is true, the broadcast operation is acked only after the
checkpoint of the current vchannel reaches the message; the fast-ack
optimization cannot be applied, because the ack must be synced up with
the streaming node.
E.g., if the truncate collection operation wants to invoke the ack
callback only after all segments are flushed at the current vchannel,
it should set ack_sync_up to true.
TODO: the current implementation does not guarantee the ack-sync-up
semantic; it only guarantees that the FastAck optimization is not
applied. Full ack-sync-up semantics are deferred to 3.0. Used only by
the truncate API for now.
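The decision the flag drives can be stated in one line (illustrative Python; the header shape and helper name are assumptions, not the actual streaming code):

```python
def can_fast_ack(header: dict) -> bool:
    """Fast ack applies only when the broadcast header does not request
    synced-up acknowledgement; ack_sync_up=True means the ack must wait
    for the vchannel checkpoint to pass the message."""
    return not header.get("ack_sync_up", False)
```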
---------
Signed-off-by: chyezh <chyezh@outlook.com>
Related to #44956
This change propagates the useLoonFFI configuration through the import
pipeline to enable LOON FFI usage during data import operations.
Key changes:
- Add use_loon_ffi field to ImportRequest protobuf message
- Add manifest_path field to ImportSegmentInfo for tracking manifest
- Initialize manifest path when creating segments (both import and
growing)
- Pass useLoonFFI flag through NewSyncTask in import tasks
- Simplify pack_writer_v2 by removing GetManifestInfo method and relying
on pre-initialized manifest path from segment creation
- Update segment meta with manifest path after import completion
This allows the import workflow to use the LOON FFI based packed writer
when the common.useLoonFFI configuration is enabled.
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
issue: #46358
This PR implements segment reopening functionality on query nodes,
enabling the application of data or schema changes to already-loaded
segments without requiring a full reload.
### Core (C++)
**New SegmentLoadInfo class**
(`internal/core/src/segcore/SegmentLoadInfo.h/cpp`):
- Encapsulates segment load configuration with structured access
- Implements `ComputeDiff()` to calculate differences between old and
new load states
- Tracks indexes, binlogs, and column groups that need to be loaded or
dropped
- Provides `ConvertFieldIndexInfoToLoadIndexInfo()` for index loading
**ChunkedSegmentSealedImpl modifications**:
- Added `Reopen(const SegmentLoadInfo&)` method to apply incremental
changes based on computed diff
- Refactored `LoadColumnGroups()` and `LoadColumnGroup()` to support
selective loading via field ID map
- Extracted `LoadBatchIndexes()` and `LoadBatchFieldData()` for reusable
batch loading logic
- Added `LoadManifest()` for manifest-based loading path
- Updated all methods to use `SegmentLoadInfo` wrapper instead of direct
proto access
**SegmentGrowingImpl modifications**:
- Added `Reopen()` stub method for interface compliance
**C API additions** (`segment_c.h/cpp`):
- Added `ReopenSegment()` function exposing reopen to Go layer
### Go Side
**QueryNode handlers** (`internal/querynodev2/`):
- Added `HandleReopen()` in handlers.go
- Added `ReopenSegments()` RPC in services.go
**Segment interface** (`internal/querynodev2/segments/`):
- Extended `Segment` interface with `Reopen()` method
- Implemented `Reopen()` in LocalSegment
- Added `Reopen()` to segment loader
**Segcore wrapper** (`internal/util/segcore/`):
- Added `Reopen()` method in segment.go
- Added `ReopenSegmentRequest` in requests.go
### Proto
- Added new fields to support reopen in `query_coord.proto`
---------
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>