yihao.dai b18ebd9468
enhance: Remove legacy cdc/replication (#46603)
issue: https://github.com/milvus-io/milvus/issues/44123

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
- Core invariant: legacy in-cluster CDC/replication plumbing
(ReplicateMsg types, ReplicateID-based guards and flags) is obsolete —
the system relies on standard msgstream positions, subPos/end-ts
semantics and timetick ordering as the single source of truth for
message ordering and skipping, so replication-specific
channels/types/guards can be removed safely.

- Removed/simplified logic (what and why): removed replication feature
flags and params (ReplicateMsgChannel, TTMsgEnabled,
CollectionReplicateEnable), ReplicateMsg type and its tests, ReplicateID
constants/helpers and MergeProperties hooks, ReplicateConfig and its
propagation (streamPipeline, StreamConfig, dispatcher, target),
replicate-aware dispatcher/pipeline branches, and replicate-mode
pre-checks/timestamp-allocation in proxy tasks — these implemented a
redundant alternate “replicate-mode” pathway that duplicated
position/end-ts and timetick logic.

- Why this does NOT cause data loss or regression (concrete code paths):
no persistence or core write paths were removed — proxy PreExecute flows
(internal/proxy/task_*.go) still perform the same schema/ID/size
validations and then follow the normal non-replicate execution path;
dispatcher and pipeline continue to use position/subPos and
pullback/end-ts in Seek/grouping (pkg/mq/msgdispatcher/dispatcher.go,
internal/util/pipeline/stream_pipeline.go), so skipping and ordering
behavior remains unchanged; timetick emission in rootcoord
(sendMinDdlTsAsTt) is now ungated (no silent suppression), preserving or
increasing timetick delivery rather than removing it.

- PR type and net effect: Enhancement/Refactor — removes deprecated
replication API surface (types, helpers, config, tests) and replication
branches, simplifies public APIs and constructor signatures, and reduces
surface area for future maintenance while keeping DML/DDL persistence,
ordering, and seek semantics intact.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
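
The guard these notes lean on is nothing more than an end-timestamp comparison against the last delivered timetick (the same check performed by target.send in the file below). A minimal sketch with illustrative names, not the dispatcher's actual API:

// Sketch only: skipping is driven purely by timetick/end-ts ordering; no
// replicate-mode flag or ReplicateID guard is consulted anymore.
func shouldSkip(packEndTs, latestDeliveredTimeTick uint64) bool {
	// A pack whose end timestamp does not advance past the latest delivered
	// timetick carries nothing new for this vchannel and can be dropped.
	return packEndTs <= latestDeliveredTimeTick
}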

---------

Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
2025-12-30 14:53:21 +08:00

// Licensed to the LF AI & Data foundation under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package msgdispatcher

import (
	"fmt"
	"sync"
	"time"

	"go.uber.org/zap"

	"github.com/milvus-io/milvus/pkg/v2/log"
	"github.com/milvus-io/milvus/pkg/v2/util/lifetime"
	"github.com/milvus-io/milvus/pkg/v2/util/paramtable"
)

type target struct {
	vchannel string
	ch       chan *MsgPack // buffered output channel of msg packs
	subPos   SubPos        // initial subscription position from StreamConfig
	pos      *Pos          // seek position (checkpoint) from StreamConfig

	filterSameTimeTick bool   // drop packs whose end-ts does not advance latestTimeTick
	latestTimeTick     uint64 // end timestamp of the last pack sent to ch

	isLagged bool // set when send times out waiting on ch

	closeMu   sync.Mutex
	closeOnce sync.Once
	closed    bool

	maxLag time.Duration // timeout for a single send before reporting lag
	timer  *time.Timer   // reused timer backing the send timeout

	cancelCh lifetime.SafeChan // closed by close() to unblock in-flight sends
}

func newTarget(streamConfig *StreamConfig, filterSameTimeTick bool) *target {
	maxTolerantLag := paramtable.Get().MQCfg.MaxTolerantLag.GetAsDuration(time.Second)
	t := &target{
		vchannel:           streamConfig.VChannel,
		ch:                 make(chan *MsgPack, paramtable.Get().MQCfg.TargetBufSize.GetAsInt()),
		subPos:             streamConfig.SubPos,
		pos:                streamConfig.Pos,
		filterSameTimeTick: filterSameTimeTick,
		latestTimeTick:     0,
		cancelCh:           lifetime.NewSafeChan(),
		maxLag:             maxTolerantLag,
		timer:              time.NewTimer(maxTolerantLag),
	}
	t.closed = false
	return t
}
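
// A hedged usage sketch (hypothetical helper, not part of the original file):
// newTarget only consumes the VChannel, Pos, and SubPos fields of StreamConfig,
// so a caller typically wires them through from its own checkpoint state.
func newTargetForVChannel(vchannel string, checkpoint *Pos, subPos SubPos) *target {
	cfg := &StreamConfig{
		VChannel: vchannel,
		Pos:      checkpoint, // nil means no checkpoint; subPos then decides where consumption starts
		SubPos:   subPos,
	}
	// Enable same-timetick filtering so packs that do not advance the timetick are dropped.
	return newTarget(cfg, true)
}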

func (t *target) close() {
	t.cancelCh.Close()
	t.closeMu.Lock()
	defer t.closeMu.Unlock()
	t.closeOnce.Do(func() {
		t.closed = true
		t.timer.Stop()
		close(t.ch)
		log.Info("close target chan", zap.String("vchannel", t.vchannel))
	})
}

func (t *target) send(pack *MsgPack) error {
	t.closeMu.Lock()
	defer t.closeMu.Unlock()
	if t.closed {
		return nil
	}
	if t.filterSameTimeTick {
		if pack.EndPositions[0].GetTimestamp() <= t.latestTimeTick {
			if len(pack.Msgs) > 0 {
				// Only packs that carry nothing but a timetick message should be
				// filtered out, so a non-empty Msgs slice here is unexpected.
				log.Warn("some data lost when time tick filtering",
					zap.String("vchannel", t.vchannel),
					zap.Uint64("latestTimeTick", t.latestTimeTick),
					zap.Uint64("packEndTs", pack.EndPositions[0].GetTimestamp()),
					zap.Int("msgCount", len(pack.Msgs)),
				)
			}
			// Filter out the pack: one with the same timetick has already been sent.
			return nil
		}
	}
	if !t.timer.Stop() {
		select {
		case <-t.timer.C:
		default:
		}
	}
	t.timer.Reset(t.maxLag)
	select {
	case <-t.cancelCh.CloseCh():
		log.Info("target closed", zap.String("vchannel", t.vchannel))
		return nil
	case <-t.timer.C:
		t.isLagged = true
		return fmt.Errorf("send target timeout, vchannel=%s, timeout=%s, beginTs=%d, endTs=%d, latestTimeTick=%d",
			t.vchannel, t.maxLag, pack.BeginTs, pack.EndTs, t.latestTimeTick)
	case t.ch <- pack:
		if len(pack.EndPositions) > 0 {
			t.latestTimeTick = pack.EndPositions[0].GetTimestamp()
		}
		return nil
	}
}
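
// A hedged, test-style sketch of the same-timetick filtering above (illustrative
// values; assumes it runs inside this package with paramtable initialized):
func exampleFilterSameTimeTick() error {
	paramtable.Init() // resolves MQCfg.TargetBufSize / MaxTolerantLag used by newTarget

	t := newTarget(&StreamConfig{VChannel: "vch-demo"}, true)
	defer t.close()

	pack := &MsgPack{
		BeginTs:      90,
		EndTs:        100,
		EndPositions: []*Pos{{Timestamp: 100}},
	}
	if err := t.send(pack); err != nil { // buffered into t.ch; latestTimeTick becomes 100
		return err
	}
	// Same end timestamp and no messages: silently filtered out, not re-delivered.
	return t.send(pack)
}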