milvus/internal/datanode/compactor/namespace_compactor_test.go
congqixia f94b04e642
feat: [2.6] integrate Loon FFI for manifest-based segment loading and index building (#46076)
Cherry-pick from master
pr: #45061 #45488 #45803 #46017 #44991 #45132 #45723 #45726 #45798 #45897 #45918 #44998

This feature integrates the Storage V2 (Loon) FFI interface as a unified storage layer for segment loading and index building in Milvus. It enables manifest-based data access, replacing the traditional binlog-based approach with a more efficient columnar storage format.

Key changes:

### Segment Self-Managed Loading Architecture
- Move segment loading orchestration from the Go layer to C++ segcore
- Add a NewSegmentWithLoadInfo() API for passing load info during segment creation (see the sketch after this list)
- Implement SetLoadInfo() and Load() methods in SegmentInterface
- Support parallel loading of indexed and non-indexed fields
- Enable both sealed and growing segments to self-manage loading
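
A minimal sketch of the flow these bullets describe. Only the names NewSegmentWithLoadInfo, SetLoadInfo, and Load come from this change; the Go-side types here are illustrative stand-ins, since the real SegmentInterface lives in C++ segcore:

```go
package main

import "fmt"

// LoadInfo and Segment are illustrative stand-ins for the segcore types.
type LoadInfo struct {
	SegmentID    int64
	ManifestPath string   // manifest-based access, when available
	InsertFiles  []string // legacy binlog fallback
}

type Segment interface {
	SetLoadInfo(info LoadInfo)
	Load() error // the segment self-manages loading internally
}

type sealedSegment struct{ info LoadInfo }

func (s *sealedSegment) SetLoadInfo(info LoadInfo) { s.info = info }

func (s *sealedSegment) Load() error {
	// In segcore, indexed and non-indexed fields would load in parallel here.
	fmt.Printf("segment %d loading from %q\n", s.info.SegmentID, s.info.ManifestPath)
	return nil
}

// NewSegmentWithLoadInfo passes load info at creation time, so the caller
// makes a single call instead of orchestrating the load step by step.
func NewSegmentWithLoadInfo(info LoadInfo) Segment {
	seg := &sealedSegment{}
	seg.SetLoadInfo(info)
	return seg
}

func main() {
	seg := NewSegmentWithLoadInfo(LoadInfo{SegmentID: 1, ManifestPath: "manifests/1.json"})
	if err := seg.Load(); err != nil {
		panic(err)
	}
}
```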

### Storage V2 FFI Integration
- Integrate the milvus-storage library's FFI interface for packed columnar data
- Add manifest path support throughout the data path (SegmentInfo, LoadInfo)
- Implement ManifestReader for generating manifests from binlogs (see the sketch after this list)
- Support zero-copy data exchange using the Arrow C Data Interface
- Add ToCStorageConfig() for Go-to-C storage config conversion
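
A hedged sketch of the ManifestReader idea: mapping per-field binlogs into one manifest describing packed columnar data. All types and field layouts here are assumptions for illustration, not the actual milvus-storage format:

```go
package main

import "fmt"

// ColumnGroup and Manifest are illustrative; the real manifest format is
// defined by the milvus-storage library.
type ColumnGroup struct {
	FieldIDs []int64
	Paths    []string // data files the manifest points at
}

type Manifest struct {
	Version int64
	Groups  []ColumnGroup
}

// buildManifest derives a manifest from existing per-field binlog paths,
// so manifest-based readers can serve segments originally written as binlogs.
func buildManifest(binlogs map[int64][]string) Manifest {
	m := Manifest{Version: 1}
	for fieldID, paths := range binlogs {
		m.Groups = append(m.Groups, ColumnGroup{FieldIDs: []int64{fieldID}, Paths: paths})
	}
	return m
}

func main() {
	m := buildManifest(map[int64][]string{
		100: {"insert_log/100/1.binlog"},
		101: {"insert_log/101/1.binlog"},
	})
	fmt.Printf("manifest v%d, %d column groups\n", m.Version, len(m.Groups))
}
```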

### Manifest-Based Index Building
- Extend FileManagerContext to carry loon_ffi_properties
- Implement GetFieldDatasFromManifest() using the Arrow C Stream interface
- Support manifest-based reading in DiskFileManagerImpl and MemFileManagerImpl
- Add a fallback to traditional segment insert files when the manifest is unavailable (see the sketch after this list)
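
The fallback rule in the last bullet amounts to a simple branch; this sketch uses a hypothetical helper name purely to show the decision:

```go
// pickLoadSource is a hypothetical helper: prefer the manifest when the
// load info carries one, otherwise read the traditional insert files.
func pickLoadSource(manifestPath string, insertFiles []string) (paths []string, viaManifest bool) {
	if manifestPath != "" {
		return []string{manifestPath}, true // manifest-based FFI read
	}
	return insertFiles, false // legacy binlog read
}
```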

### Compaction Pipeline Updates
- Include the manifest path in all compaction task builders (clustering, L0, mix)
- Update BulkPackWriterV2 to return the manifest path
- Propagate manifest metadata through the compaction pipeline

### Configuration & Protocol
- Add a common.storageV2.useLoonFFI config option (default: false; see the example below)
- Add manifest_path field to SegmentLoadInfo and related proto messages
- Add manifest field to compaction segment messages
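
Toggling the new path mirrors how the test file below configures storage through paramtable; the key name comes from this change, the rest is standard test setup:

```go
params := paramtable.Get()
params.Init(paramtable.NewBaseTable())
params.Save("common.storageV2.useLoonFFI", "true") // defaults to false
```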

### Bug Fixes
- Fix mmap settings not applied during segment load (key typo fix)
- Populate index info after segment loading to prevent redundant load tasks
- Fix memory corruption by removing premature transaction handle destruction

Related issues: #44956, #45060, #39173

## Individual Cherry-Picked Commits

1. **e1c923b5cc** - fix: apply mmap settings correctly during segment load (#46017)
2. **63b912370b** - enhance: use milvus-storage internal C++ Reader API for Loon FFI (#45897)
3. **bfc192faa5** - enhance: Resolve issues integrating loon FFI (#45918)
4. **fb18564631** - enhance: support manifest-based index building with Loon FFI reader (#45726)
5. **b9ec2392b9** - enhance: integrate StorageV2 FFI interface for manifest-based segment loading (#45798)
6. **66db3c32e6** - enhance: integrate Storage V2 FFI interface for unified storage access (#45723)
7. **ae789273ac** - fix: populate index info after segment loading to prevent redundant load tasks (#45803)
8. **49688b0be2** - enhance: Move segment loading logic from Go layer to segcore for self-managed loading (#45488)
9. **5b2df88bac** - enhance: [StorageV2] Integrate FFI interface for packed reader (#45132)
10. **91ff5706ac** - enhance: [StorageV2] add manifest path support for FFI integration (#44991)
11. **2192bb4a85** - enhance: add NewSegmentWithLoadInfo API to support segment self-managed loading (#45061)
12. **4296b01da0** - enhance: update delta log serialization APIs to integrate storage V2 (#44998)

## Technical Details

### Architecture Changes
- **Before**: the Go layer orchestrated segment loading, making multiple CGO calls
- **After**: segments autonomously manage loading in the C++ layer behind a single entry point (sketched below)
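
A hedged contrast of the two orchestration styles; the interfaces here are illustrative, not real Milvus APIs:

```go
// Before: Go drives the load field by field, one CGO crossing per call.
type legacySegment interface {
	LoadField(fieldID int64) error
}

func loadSegmentV1(seg legacySegment, fieldIDs []int64) error {
	for _, id := range fieldIDs {
		if err := seg.LoadField(id); err != nil { // repeated Go -> C++ hops
			return err
		}
	}
	return nil
}

// After: one entry point; segcore parallelizes and manages resources itself.
type selfManagedSegment interface {
	Load() error
}

func loadSegmentV2(seg selfManagedSegment) error {
	return seg.Load() // a single crossing
}
```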

### Storage Access Pattern
- **Before**: read individual binlog files through the Go storage layer
- **After**: read a manifest file that references packed columnar data via FFI (sketched below)
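
The same shift seen from the read side, again with illustrative stubs rather than real APIs:

```go
// readViaBinlogs models the old pattern: one open/decode per binlog file.
func readViaBinlogs(paths []string, decode func(string) int) (rows int) {
	for _, p := range paths {
		rows += decode(p) // per-file round trips
	}
	return rows
}

// readViaManifest models the new pattern: one manifest lookup, then a
// batched stream over the packed columnar data it references.
func readViaManifest(manifest string, stream func(string) []int) (rows int) {
	for _, batch := range stream(manifest) { // batched streaming read
		rows += batch
	}
	return rows
}
```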

### Benefits
- Reduced cross-language call overhead
- Better resource management at C++ level
- Improved I/O performance through batched streaming reads
- Cleaner separation of concerns between Go and C++ layers
- Foundation for proactive schema evolution handling

---------

Signed-off-by: Ted Xu <ted.xu@zilliz.com>
Signed-off-by: Congqi Xia <congqi.xia@zilliz.com>
Co-authored-by: Ted Xu <ted.xu@zilliz.com>
2025-12-04 17:09:12 +08:00


package compactor

import (
	"context"
	"fmt"
	"math"
	"testing"

	"github.com/apache/arrow/go/v17/arrow/array"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/suite"

	"github.com/milvus-io/milvus-proto/go-api/v2/schemapb"
	"github.com/milvus-io/milvus/internal/allocator"
	"github.com/milvus-io/milvus/internal/compaction"
	"github.com/milvus-io/milvus/internal/flushcommon/metacache"
	"github.com/milvus-io/milvus/internal/flushcommon/metacache/pkoracle"
	"github.com/milvus-io/milvus/internal/flushcommon/syncmgr"
	"github.com/milvus-io/milvus/internal/mocks/flushcommon/mock_util"
	"github.com/milvus-io/milvus/internal/storage"
	"github.com/milvus-io/milvus/internal/storagecommon"
	"github.com/milvus-io/milvus/internal/storagev2/packed"
	"github.com/milvus-io/milvus/internal/util/initcore"
	"github.com/milvus-io/milvus/pkg/v2/common"
	"github.com/milvus-io/milvus/pkg/v2/objectstorage"
	"github.com/milvus-io/milvus/pkg/v2/proto/datapb"
	"github.com/milvus-io/milvus/pkg/v2/proto/indexpb"
	"github.com/milvus-io/milvus/pkg/v2/util/paramtable"
	"github.com/milvus-io/milvus/pkg/v2/util/tsoutil"
	"github.com/milvus-io/milvus/pkg/v2/util/typeutil"
)

// NamespaceCompactorTestSuite exercises NamespaceCompactor against
// StorageV2 segments written to local storage.
type NamespaceCompactorTestSuite struct {
	suite.Suite

	binlogIO       *mock_util.MockBinlogIO
	schema         *schemapb.CollectionSchema
	sortedSegments []*datapb.CompactionSegmentBinlogs
}

// SetupSuite configures local StorageV2, a mocked binlog IO, and a
// four-field schema (row id, timestamp, int64 pk, int64 namespace),
// then writes the sorted input segments once for all tests.
func (s *NamespaceCompactorTestSuite) SetupSuite() {
	paramtable.Get().Init(paramtable.NewBaseTable())
	paramtable.Get().Save("common.storageType", "local")
	paramtable.Get().Save("common.storage.enableV2", "true")
	initcore.InitStorageV2FileSystem(paramtable.Get())

	s.binlogIO = mock_util.NewMockBinlogIO(s.T())
	s.binlogIO.EXPECT().Upload(mock.Anything, mock.Anything).Return(nil)
	s.schema = &schemapb.CollectionSchema{
		Fields: []*schemapb.FieldSchema{
			{
				FieldID:  common.RowIDField,
				Name:     "row_id",
				DataType: schemapb.DataType_Int64,
			},
			{
				FieldID:  common.TimeStampField,
				Name:     "timestamp",
				DataType: schemapb.DataType_Int64,
			},
			{
				FieldID:      100,
				Name:         "pk",
				DataType:     schemapb.DataType_Int64,
				IsPrimaryKey: true,
			},
			{
				FieldID:  101,
				Name:     "namespace",
				DataType: schemapb.DataType_Int64,
			},
		},
	}
	s.setupSortedSegments()
}

// setupSortedSegments writes three sorted 10k-row segments through
// BulkPackWriterV2 and records their binlogs as compaction input.
func (s *NamespaceCompactorTestSuite) setupSortedSegments() {
	num := 3
	rows := 10000
	collectionID := int64(100)
	partitionID := int64(100)
	alloc := allocator.NewLocalAllocator(0, math.MaxInt64)
	for i := 0; i < num; i++ {
		data, err := storage.NewInsertData(s.schema)
		s.Require().NoError(err)
		for j := 0; j < rows; j++ {
			v := map[int64]interface{}{
				common.RowIDField:     int64(j),
				common.TimeStampField: int64(tsoutil.ComposeTSByTime(getMilvusBirthday(), 0)),
				100:                   int64(j),
				101:                   int64(j),
			}
			data.Append(v)
		}
		pack := new(syncmgr.SyncPack).
			WithCollectionID(collectionID).
			WithPartitionID(partitionID).
			WithSegmentID(int64(i)).
			WithChannelName(fmt.Sprintf("by-dev-rootcoord-dml_0_%dv0", collectionID)).
			WithInsertData([]*storage.InsertData{data})
		rootPath := paramtable.Get().LocalStorageCfg.Path.GetValue()
		cm := storage.NewLocalChunkManager(objectstorage.RootPath(rootPath))
		bfs := pkoracle.NewBloomFilterSet()
		seg := metacache.NewSegmentInfo(&datapb.SegmentInfo{}, bfs, nil)
		metacache.UpdateNumOfRows(int64(rows))(seg)
		mc := metacache.NewMockMetaCache(s.T())
		mc.EXPECT().Collection().Return(collectionID).Maybe()
		mc.EXPECT().GetSchema(mock.Anything).Return(s.schema).Maybe()
		mc.EXPECT().GetSegmentByID(int64(i)).Return(seg, true).Maybe()
		mc.EXPECT().GetSegmentsBy(mock.Anything, mock.Anything).Return([]*metacache.SegmentInfo{seg}).Maybe()
		mc.EXPECT().UpdateSegments(mock.Anything, mock.Anything).Run(func(action metacache.SegmentAction, filters ...metacache.SegmentFilter) {
			action(seg)
		}).Return().Maybe()
		fields := typeutil.GetAllFieldSchemas(s.schema)
		columnGroups := storagecommon.SplitColumns(fields, map[int64]storagecommon.ColumnStats{}, storagecommon.NewSelectedDataTypePolicy(), storagecommon.NewRemanentShortPolicy(-1))
		bw := syncmgr.NewBulkPackWriterV2(mc, s.schema, cm, alloc, packed.DefaultWriteBufferSize, 0, &indexpb.StorageConfig{
			StorageType: "local",
			RootPath:    rootPath,
		}, columnGroups)
		inserts, _, _, _, _, _, err := bw.Write(context.Background(), pack)
		s.Require().NoError(err)
		s.sortedSegments = append(s.sortedSegments, &datapb.CompactionSegmentBinlogs{
			SegmentID:      int64(i),
			FieldBinlogs:   storage.SortFieldBinlogs(inserts),
			Deltalogs:      []*datapb.FieldBinlog{},
			StorageVersion: storage.StorageV2,
			IsSorted:       true,
		})
	}
}

// TestCompactSorted compacts the sorted segments and verifies the
// output remains sorted by (namespace, pk).
func (s *NamespaceCompactorTestSuite) TestCompactSorted() {
	plan := &datapb.CompactionPlan{
		SegmentBinlogs: s.sortedSegments,
		Schema:         s.schema,
		PreAllocatedSegmentIDs: &datapb.IDRange{
			Begin: 0,
			End:   math.MaxInt64,
		},
		PreAllocatedLogIDs: &datapb.IDRange{
			Begin: 0,
			End:   math.MaxInt64,
		},
		MaxSize: 1024 * 1024 * 1024,
	}
	params := compaction.GenParams()
	sortedByFieldIDs := []int64{101, 100}
	c := NewNamespaceCompactor(context.Background(), plan, s.binlogIO, params, sortedByFieldIDs)
	result, err := c.Compact()
	s.Require().NoError(err)
	s.Require().Equal(datapb.CompactionTaskState_completed, result.State)
	for _, segment := range result.GetSegments() {
		s.assertSorted(segment, sortedByFieldIDs)
	}
}

// assertSorted reads a compacted segment back and checks that its rows
// are ordered by the given field IDs, compared lexicographically.
func (s *NamespaceCompactorTestSuite) assertSorted(segment *datapb.CompactionSegment, sortedByFieldIDs []int64) {
	reader, err := storage.NewBinlogRecordReader(context.Background(), segment.GetInsertLogs(), s.schema,
		storage.WithVersion(segment.GetStorageVersion()),
		storage.WithStorageConfig(&indexpb.StorageConfig{
			StorageType: "local",
			RootPath:    paramtable.Get().LocalStorageCfg.Path.GetValue(),
		}))
	s.Require().NoError(err)

	// Drain the reader; iteration stops at the first error (EOF included).
	records := make([]storage.Record, 0)
	for {
		record, err := reader.Next()
		if err != nil {
			break
		}
		record.Retain()
		records = append(records, record)
	}

	// Build one comparator per sort field; each returns -1/0/1 for the
	// values at (record ri, row i) versus (record rj, row j).
	cmps := make([]func(ri, i, rj, j int) int, 0)
	for _, fieldID := range sortedByFieldIDs {
		field := typeutil.GetField(s.schema, fieldID)
		switch field.DataType {
		case schemapb.DataType_Int64:
			cmps = append(cmps, func(ri, i, rj, j int) int {
				vi := records[ri].Column(fieldID).(*array.Int64).Value(i)
				vj := records[rj].Column(fieldID).(*array.Int64).Value(j)
				if vi < vj {
					return -1
				}
				if vi > vj {
					return 1
				}
				return 0
			})
		case schemapb.DataType_String:
			cmps = append(cmps, func(ri, i, rj, j int) int {
				vi := records[ri].Column(fieldID).(*array.String).Value(i)
				vj := records[rj].Column(fieldID).(*array.String).Value(j)
				if vi < vj {
					return -1
				}
				if vi > vj {
					return 1
				}
				return 0
			})
		default:
			panic("unsupported data type")
		}
	}

	// Walk all rows in order, requiring each adjacent pair to be
	// non-decreasing under the lexicographic comparison.
	prevri := -1
	previ := -1
	for ri := 0; ri < len(records); ri++ {
		for i := 0; i < records[ri].Len(); i++ {
			if prevri == -1 {
				prevri = ri
				previ = i
				continue
			}
			for _, cmp := range cmps {
				c := cmp(prevri, previ, ri, i)
				s.Require().True(c <= 0, "not sorted")
				if c < 0 {
					break // strictly ordered on this key; later keys are irrelevant
				}
			}
			prevri = ri
			previ = i
		}
	}
}

func TestNamespaceCompactorTestSuite(t *testing.T) {
	suite.Run(t, new(NamespaceCompactorTestSuite))
}