mirror of
https://gitee.com/milvus-io/milvus.git
synced 2025-12-07 17:48:29 +08:00
* refactoring(create_table done)
* refactoring
* refactor server delivery (insert done)
* refactoring server module (count_table done)
* server refactor done
* cmake pass
* refactor server module done.
* set grpc response status correctly
* format done.
* fix redefine ErrorMap()
* optimize insert reducing ids data copy
* optimize grpc request with reducing data copy
* clang format
* [skip ci] Refactor server module done. update changlog. prepare for PR
* remove explicit and change int32_t to int64_t
* add web server
* [skip ci] add license in web module
* modify header include & comment oatpp environment config
* add port configure & create table in handler
* modify web url
* simple url complation done & add swagger
* make sure web url
* web functionality done. debuging
* add web unittest
* web test pass
* add web server port
* add web server port in template
* update unittest cmake file
* change web server default port to 19121
* rename method in web module & unittest pass
* add search case in unittest for web module
* rename some variables
* fix bug
* unittest pass
* web prepare
* fix cmd bug(check server status)
* update changlog
* add web port validate & default set
* clang-format pass
* add web port test in unittest
* add CORS & redirect root to swagger ui
* add web status
* web table method func cascade test pass
* add config url in web module
* modify thirdparty cmake to avoid building oatpp test
* clang format
* update changlog
* add constants in web module
* reserve Config.cpp
* fix constants reference bug
* replace web server with async module
* modify component to support async
* format
* developing controller & add test clent into unittest
* add web port into demo/server_config
* modify thirdparty cmake to allow build test
* remove unnecessary comment
* add endpoint info in controller
* finish web test(bug here)
* clang format
* add web test cpp to lint exclusions
* check null field in GetConfig
* add macro RETURN STATUS DTo
* fix cmake conflict
* fix crash when exit server
* remove surplus comments & add http param check
* add uri /docs to direct swagger
* format
* change cmd to system
* add default value & unittest in web module
* add macros to judge if GPU supported
* add macros in unit & add default in index dto & print error message when bind http port fail
* format (fix #788)
* fix cors bug (not completed)
* comment cors
* change web framework to simple api
* comments optimize
* change to simple API
* remove comments in controller.hpp
* remove EP_COMMON_CMAKE_ARGS in oatpp and oatpp-swagger
* add ep cmake args to sqlite
* clang-format
* change a format
* test pass
* change name to
* fix compiler issue(oatpp-swagger depend on oatpp)
* add & in start_server.h
* specify lib location with oatpp and oatpp-swagger
* add comments
* add swagger definition
* [skip ci] change http method options status code
* remove oatpp swagger(fix #970)
* remove comments
* check Start web behavior
* add default to cpu_cache_capacity
* remove swagger component.hpp & /docs url
* remove /docs info
* remove /docs in unittest
* remove space in test rpc
* remove repeate info in CHANGLOG
* change cache_insert_data default value as a constant
* [skip ci] Fix some broken links (#960)
* [skip ci] Fix broken link
* [skip ci] Fix broken link
* [skip ci] Fix broken link
* [skip ci] Fix broken links
* fix issue 373 (#964)
* fix issue 373
* Adjustment format
* Adjustment format
* Adjustment format
* change readme
* #966 update NOTICE.md (#967)
* remove comments
* check Start web behavior
* add default to cpu_cache_capacity
* remove swagger component.hpp & /docs url
* remove /docs info
* remove /docs in unittest
* remove space in test rpc
* remove repeate info in CHANGLOG
* change cache_insert_data default value as a constant
* adjust web port cofig place
* rename web_port variable
* change gpu resources invoke way to cmd()
* set advanced config name add DEFAULT
* change config setting to cmd
* modify ..
* optimize code
* assign TableDto' count default value 0 (fix #995)
* check if table exists when show partitions (fix #1028)
* check table exists when drop partition (fix #1029)
* check if partition name is legal (fix #1022)
* modify status code when partition tag is illegal
* update changlog
* add info to /system url
* add binary index and add bin uri & handler method(not completed)
* optimize http insert and search time(fix #1066) | add binary vectors support(fix #1067)
* fix test partition bug
* fix test bug when check insert records
* add binary vectors test
* add default for offset and page_size
* fix uinttest bug
* [skip ci] remove comments
* optimize web code for PR comments
* add new folder named utils
* check offset and pagesize (fix #1082)
* improve error message if offset or page_size is not legal (fix #1075)
* add log into web module
* update changlog
* check gpu sources setting when assign repeated value (fix #990)
* update changlog
* clang-format pass
* add default handler in http handler
* [skip ci] improve error msg when check gpu resources
* change check offset way
* remove func IsIntStr
* add case
* change int32 to int64 when check number str
* add log in we module(doing)
* update test case
* add log in web controller
* remove surplus dot
* add preload into /system/
* change get_milvus() to get_milvus(args['handler'])
* support load table into memory with http server (fix #1115)
* [skip ci] comment surplus dto in VectorDto

Co-authored-by: jielinxu <52057195+jielinxu@users.noreply.github.com>
Co-authored-by: JackLCL <53512883+JackLCL@users.noreply.github.com>
Co-authored-by: Cai Yudong <yudong.cai@zilliz.com>
183 lines
7.5 KiB
Python
import pdb
import copy
import pytest
import threading
import datetime
import logging
from time import sleep
from multiprocessing import Process

import sklearn.preprocessing

from milvus import IndexType, MetricType
from utils import *


dim = 128
index_file_size = 10
table_id = "test_mix"
add_interval_time = 2
vectors = gen_vectors(100000, dim)
vectors = sklearn.preprocessing.normalize(vectors, axis=1, norm='l2')
vectors = vectors.tolist()
top_k = 1
nprobe = 1
epsilon = 0.0001
index_params = {'index_type': IndexType.IVFLAT, 'nlist': 16384}

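As a quick standalone illustration of the module-level preprocessing above (a minimal sketch, assuming numpy and scikit-learn are installed; the random rows stand in for `gen_vectors(...)`): `sklearn.preprocessing.normalize(..., norm='l2')` rescales every row to unit length, which is what makes inner-product (IP) scores on these vectors behave like cosine similarity.

```python
import numpy as np
import sklearn.preprocessing

# Stand-in for gen_vectors(100000, dim); 5 rows of dimension 8 are enough to show the effect.
rows = np.random.random((5, 8))
normed = sklearn.preprocessing.normalize(rows, axis=1, norm='l2')

# Every row now has L2 norm 1.0.
assert np.allclose(np.linalg.norm(normed, axis=1), 1.0)
```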
class TestMixBase:

    # disable
    def _test_search_during_createIndex(self, args):
        loops = 10000
        table = gen_unique_str()
        query_vecs = [vectors[0], vectors[1]]
        uri = "tcp://%s:%s" % (args["ip"], args["port"])
        id_0 = 0
        id_1 = 0
        milvus_instance = get_milvus(args["handler"])
        milvus_instance.connect(uri=uri)
        milvus_instance.create_table({'table_name': table,
                                      'dimension': dim,
                                      'index_file_size': index_file_size,
                                      'metric_type': MetricType.L2})
        for i in range(10):
            status, ids = milvus_instance.add_vectors(table, vectors)
            # logging.getLogger().info(ids)
            if i == 0:
                id_0 = ids[0]
                id_1 = ids[1]

        def create_index(milvus_instance):
            logging.getLogger().info("In create index")
            status = milvus_instance.create_index(table, index_params)
            logging.getLogger().info(status)
            status, result = milvus_instance.describe_index(table)
            logging.getLogger().info(result)

        def add_vectors(milvus_instance):
            logging.getLogger().info("In add vectors")
            status, ids = milvus_instance.add_vectors(table, vectors)
            logging.getLogger().info(status)

        def search(milvus_instance):
            logging.getLogger().info("In search vectors")
            for i in range(loops):
                status, result = milvus_instance.search_vectors(table, top_k, nprobe, query_vecs)
                logging.getLogger().info(status)
                assert result[0][0].id == id_0
                assert result[1][0].id == id_1

        milvus_instance = get_milvus(args["handler"])
        milvus_instance.connect(uri=uri)
        p_search = Process(target=search, args=(milvus_instance,))
        p_search.start()

        milvus_instance = get_milvus(args["handler"])
        milvus_instance.connect(uri=uri)
        p_create = Process(target=add_vectors, args=(milvus_instance,))
        p_create.start()
        p_create.join()
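The disabled test above drives search and insert from separate OS processes via `multiprocessing.Process`. The same pattern can be sketched without a Milvus server (a minimal, self-contained example; the `worker` function and the `Queue` hand-off are illustrative stand-ins for `search`/`add_vectors`, which in the real test communicate through the server instead):

```python
from multiprocessing import Process, Queue

def worker(q, n):
    # Stand-in for add_vectors/search: do some work, report the result back.
    q.put(sum(range(n)))

if __name__ == '__main__':
    q = Queue()
    p = Process(target=worker, args=(q, 10))
    p.start()
    result = q.get()   # blocks until the child has produced its result
    p.join()           # then reap the child process
    assert result == 45
```

Note that the original test starts `p_search` but only joins `p_create`; in a real test that join (or a timeout) matters, otherwise the search process can outlive the test.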
    @pytest.mark.level(2)
    def test_mix_multi_tables(self, connect):
        '''
        target: test functions with multiple tables of different metric_types and index_types
        method: create 60 tables, 30 with metric L2 and 30 with metric IP, add vectors into them,
                then test describe_index and search
        expected: status ok
        '''
        nq = 10000
        nlist = 16384
        vectors = gen_vectors(nq, dim)
        table_list = []
        idx = []

        # create tables and add vectors
        for i in range(30):
            table_name = gen_unique_str('test_mix_multi_tables')
            table_list.append(table_name)
            param = {'table_name': table_name,
                     'dimension': dim,
                     'index_file_size': index_file_size,
                     'metric_type': MetricType.L2}
            connect.create_table(param)
            status, ids = connect.add_vectors(table_name=table_name, records=vectors)
            idx.append(ids[0])
            idx.append(ids[10])
            idx.append(ids[20])
            assert status.OK()
        for i in range(30):
            table_name = gen_unique_str('test_mix_multi_tables')
            table_list.append(table_name)
            param = {'table_name': table_name,
                     'dimension': dim,
                     'index_file_size': index_file_size,
                     'metric_type': MetricType.IP}
            connect.create_table(param)
            status, ids = connect.add_vectors(table_name=table_name, records=vectors)
            idx.append(ids[0])
            idx.append(ids[10])
            idx.append(ids[20])
            assert status.OK()
        sleep(2)

        # create index
        for i in range(10):
            index_params = {'index_type': IndexType.FLAT, 'nlist': nlist}
            status = connect.create_index(table_list[i], index_params)
            assert status.OK()
            status = connect.create_index(table_list[30 + i], index_params)
            assert status.OK()
            index_params = {'index_type': IndexType.IVFLAT, 'nlist': nlist}
            status = connect.create_index(table_list[10 + i], index_params)
            assert status.OK()
            status = connect.create_index(table_list[40 + i], index_params)
            assert status.OK()
            index_params = {'index_type': IndexType.IVF_SQ8, 'nlist': nlist}
            status = connect.create_index(table_list[20 + i], index_params)
            assert status.OK()
            status = connect.create_index(table_list[50 + i], index_params)
            assert status.OK()

        # describe index
        for i in range(10):
            status, result = connect.describe_index(table_list[i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[i]
            assert result._index_type == IndexType.FLAT
            status, result = connect.describe_index(table_list[10 + i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[10 + i]
            assert result._index_type == IndexType.IVFLAT
            status, result = connect.describe_index(table_list[20 + i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[20 + i]
            assert result._index_type == IndexType.IVF_SQ8
            status, result = connect.describe_index(table_list[30 + i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[30 + i]
            assert result._index_type == IndexType.FLAT
            status, result = connect.describe_index(table_list[40 + i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[40 + i]
            assert result._index_type == IndexType.IVFLAT
            status, result = connect.describe_index(table_list[50 + i])
            logging.getLogger().info(result)
            assert result._nlist == 16384
            assert result._table_name == table_list[50 + i]
            assert result._index_type == IndexType.IVF_SQ8

        # search
        query_vecs = [vectors[0], vectors[10], vectors[20]]
        for i in range(60):
            table = table_list[i]
            status, result = connect.search_vectors(table, top_k, nprobe, query_vecs)
            assert status.OK()
            assert len(result) == len(query_vecs)
            for j in range(len(query_vecs)):
                assert len(result[j]) == top_k
            for j in range(len(query_vecs)):
                assert check_result(result[j], idx[3 * i + j])


def check_result(result, id):
    # True if `id` appears among the first five hits (or among all hits
    # when fewer than five were returned).
    if len(result) >= 5:
        return id in [result[0].id, result[1].id, result[2].id, result[3].id, result[4].id]
    else:
        return id in (i.id for i in result)
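A quick standalone illustration of `check_result` (a minimal sketch; the `Hit` namedtuple is a hypothetical stand-in for the hit objects the Milvus client returns, of which the helper only reads the `.id` attribute):

```python
from collections import namedtuple

# Stand-in for the client's search-hit objects; check_result only touches `.id`.
Hit = namedtuple('Hit', ['id'])

def check_result(result, id):
    # True if `id` is among the first five hits (or among all hits
    # when fewer than five were returned).
    if len(result) >= 5:
        return id in [result[0].id, result[1].id, result[2].id, result[3].id, result[4].id]
    else:
        return id in (i.id for i in result)

hits = [Hit(7), Hit(3), Hit(9)]
assert check_result(hits, 3)        # fewer than five hits: all of them are scanned
assert not check_result(hits, 42)
```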