// Copyright (C) 2019-2020 Zilliz. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed under the License
// is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
// or implied. See the License for the specific language governing permissions and limitations under the License.

#include <gtest/gtest.h>
#include <opentracing/mocktracer/tracer.h>

#include <boost/filesystem.hpp>
#include <thread>

#include "server/Server.h"
#include "server/grpc_impl/GrpcRequestHandler.h"
#include "server/delivery/RequestScheduler.h"
#include "server/delivery/request/BaseRequest.h"
#include "server/delivery/RequestHandler.h"
#include "src/version.h"

#include "grpc/gen-milvus/milvus.grpc.pb.h"
#include "grpc/gen-status/status.pb.h"
#include "scheduler/ResourceFactory.h"
#include "scheduler/SchedInst.h"
#include "server/Config.h"
#include "server/DBWrapper.h"
#include "utils/CommonUtil.h"
#include "server/grpc_impl/GrpcServer.h"

#include <fiu-local.h>
#include <fiu-control.h>

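// The tests in this file drive GrpcRequestHandler through both its success paths and its
// error paths. Error paths are forced with libfiu fault injection: fiu_init(0) initializes
// the runtime, fiu_enable("<point>", 1, NULL, 0) arms a named failure point compiled into
// the request handlers via fiu-local.h, and fiu_disable("<point>") restores normal behavior.
// As a rough illustration (the point name and error status below are hypothetical, not taken
// from this file), a handler can declare such a point with:
//
//     fiu_do_on("SomeRequest.OnExecute.some_failure",
//               return Status(SERVER_UNEXPECTED_ERROR, "injected failure"));
//
// Enabling that point from a test then makes the handler take the failure branch.
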
namespace {

static const char* TABLE_NAME = "test_grpc";
static constexpr int64_t TABLE_DIM = 256;
static constexpr int64_t INDEX_FILE_SIZE = 1024;
static constexpr int64_t VECTOR_COUNT = 1000;
static constexpr int64_t INSERT_LOOP = 10;
constexpr int64_t SECONDS_EACH_HOUR = 3600;

void
CopyRowRecord(::milvus::grpc::RowRecord* target, const std::vector<float>& src) {
    auto vector_data = target->mutable_float_data();
    vector_data->Resize(static_cast<int>(src.size()), 0.0);
    memcpy(vector_data->mutable_data(), src.data(), src.size() * sizeof(float));
}

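// Test fixture. SetUp() builds a minimal scheduler resource graph (disk -> cpu -> gtx1660),
// points metadata at a sqlite backend and storage at /tmp/milvus_test, starts DBWrapper,
// and creates the TABLE_NAME table through the gRPC handler so each test starts from the
// same state. TearDown() stops those services and deletes /tmp/milvus_test.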
class RpcHandlerTest : public testing::Test {
 protected:
    void
    SetUp() override {
        auto res_mgr = milvus::scheduler::ResMgrInst::GetInstance();
        res_mgr->Clear();
        res_mgr->Add(milvus::scheduler::ResourceFactory::Create("disk", "DISK", 0, false));
        res_mgr->Add(milvus::scheduler::ResourceFactory::Create("cpu", "CPU", 0));
        res_mgr->Add(milvus::scheduler::ResourceFactory::Create("gtx1660", "GPU", 0));

        auto default_conn = milvus::scheduler::Connection("IO", 500.0);
        auto PCIE = milvus::scheduler::Connection("IO", 11000.0);
        res_mgr->Connect("disk", "cpu", default_conn);
        res_mgr->Connect("cpu", "gtx1660", PCIE);
        res_mgr->Start();
        milvus::scheduler::SchedInst::GetInstance()->Start();
        milvus::scheduler::JobMgrInst::GetInstance()->Start();

        milvus::engine::DBOptions opt;

        milvus::server::Config::GetInstance().SetDBConfigBackendUrl("sqlite://:@:/");
        milvus::server::Config::GetInstance().SetDBConfigArchiveDiskThreshold("");
        milvus::server::Config::GetInstance().SetDBConfigArchiveDaysThreshold("");
        milvus::server::Config::GetInstance().SetStorageConfigPrimaryPath("/tmp/milvus_test");
        milvus::server::Config::GetInstance().SetStorageConfigSecondaryPath("");
        milvus::server::Config::GetInstance().SetCacheConfigCacheInsertData("");
        milvus::server::Config::GetInstance().SetEngineConfigOmpThreadNum("");

        // serverConfig.SetValue(server::CONFIG_CLUSTER_MODE, "cluster");
        // DBWrapper::GetInstance().GetInstance().StartService();
        // DBWrapper::GetInstance().GetInstance().StopService();
        //
        // serverConfig.SetValue(server::CONFIG_CLUSTER_MODE, "read_only");
        // DBWrapper::GetInstance().GetInstance().StartService();
        // DBWrapper::GetInstance().GetInstance().StopService();

        milvus::server::DBWrapper::GetInstance().StartService();

        // initialize handler, create table
        handler = std::make_shared<milvus::server::grpc::GrpcRequestHandler>(opentracing::Tracer::Global());
        dummy_context = std::make_shared<milvus::server::Context>("dummy_request_id");
        opentracing::mocktracer::MockTracerOptions tracer_options;
        auto mock_tracer =
            std::shared_ptr<opentracing::Tracer>{new opentracing::mocktracer::MockTracer{std::move(tracer_options)}};
        auto mock_span = mock_tracer->StartSpan("mock_span");
        auto trace_context = std::make_shared<milvus::tracing::TraceContext>(mock_span);
        dummy_context->SetTraceContext(trace_context);
        ::grpc::ServerContext context;
        handler->SetContext(&context, dummy_context);
        ::milvus::grpc::TableSchema request;
        ::milvus::grpc::Status status;
        request.set_table_name(TABLE_NAME);
        request.set_dimension(TABLE_DIM);
        request.set_index_file_size(INDEX_FILE_SIZE);
        request.set_metric_type(1);
        handler->SetContext(&context, dummy_context);
        handler->random_id();
        ::grpc::Status grpc_status = handler->CreateTable(&context, &request, &status);
    }

    void
    TearDown() override {
        milvus::server::DBWrapper::GetInstance().StopService();
        milvus::scheduler::JobMgrInst::GetInstance()->Stop();
        milvus::scheduler::ResMgrInst::GetInstance()->Stop();
        milvus::scheduler::SchedInst::GetInstance()->Stop();
        boost::filesystem::remove_all("/tmp/milvus_test");
    }

 protected:
    std::shared_ptr<milvus::server::grpc::GrpcRequestHandler> handler;
    std::shared_ptr<milvus::server::Context> dummy_context;
};

void
BuildVectors(int64_t from, int64_t to, std::vector<std::vector<float>>& vector_record_array) {
    if (to <= from) {
        return;
    }

    vector_record_array.clear();
    for (int64_t k = from; k < to; k++) {
        std::vector<float> record;
        record.resize(TABLE_DIM);
        for (int64_t i = 0; i < TABLE_DIM; i++) {
            record[i] = (float)(k % (i + 1));
        }

        vector_record_array.emplace_back(record);
    }
}

std::string
CurrentTmDate(int64_t offset_day = 0) {
    time_t tt;
    time(&tt);
    tt = tt + 8 * SECONDS_EACH_HOUR;
    tt = tt + 24 * SECONDS_EACH_HOUR * offset_day;
    tm t;
    gmtime_r(&tt, &t);

    std::string str =
        std::to_string(t.tm_year + 1900) + "-" + std::to_string(t.tm_mon + 1) + "-" + std::to_string(t.tm_mday);

    return str;
}

}  // namespace

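// The tests below generate deterministic vectors with BuildVectors() (record[i] = k % (i + 1)
// for the k-th vector) and copy each record into a ::milvus::grpc::RowRecord with
// CopyRowRecord() before handing the resulting request to the handler.
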
TEST_F(RpcHandlerTest, HAS_TABLE_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::TableName request;
    ::milvus::grpc::BoolReply reply;
    ::grpc::Status status = handler->HasTable(&context, &request, &reply);
    request.set_table_name(TABLE_NAME);
    status = handler->HasTable(&context, &request, &reply);
    ASSERT_TRUE(status.error_code() == ::grpc::Status::OK.error_code());
    int error_code = reply.status().error_code();
    ASSERT_EQ(error_code, ::milvus::grpc::ErrorCode::SUCCESS);

    fiu_init(0);
    fiu_enable("HasTableRequest.OnExecute.table_not_exist", 1, NULL, 0);
    handler->HasTable(&context, &request, &reply);
    ASSERT_NE(reply.status().error_code(), ::milvus::grpc::ErrorCode::SUCCESS);
    fiu_disable("HasTableRequest.OnExecute.table_not_exist");

    fiu_enable("HasTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->HasTable(&context, &request, &reply);
    ASSERT_NE(reply.status().error_code(), ::milvus::grpc::ErrorCode::SUCCESS);
    fiu_disable("HasTableRequest.OnExecute.throw_std_exception");
}

TEST_F(RpcHandlerTest, INDEX_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::IndexParam request;
    ::milvus::grpc::Status response;
    ::grpc::Status grpc_status = handler->CreateIndex(&context, &request, &response);
    request.set_table_name("test1");
    handler->CreateIndex(&context, &request, &response);

    request.set_table_name(TABLE_NAME);
    handler->CreateIndex(&context, &request, &response);

    request.mutable_index()->set_index_type(1);
    handler->CreateIndex(&context, &request, &response);

    request.mutable_index()->set_nlist(16384);
    grpc_status = handler->CreateIndex(&context, &request, &response);
    ASSERT_EQ(grpc_status.error_code(), ::grpc::Status::OK.error_code());
    int error_code = response.error_code();
    // ASSERT_EQ(error_code, ::milvus::grpc::ErrorCode::SUCCESS);

    fiu_init(0);
    fiu_enable("CreateIndexRequest.OnExecute.not_has_table", 1, NULL, 0);
    grpc_status = handler->CreateIndex(&context, &request, &response);
    ASSERT_TRUE(grpc_status.ok());
    fiu_disable("CreateIndexRequest.OnExecute.not_has_table");

    fiu_enable("CreateIndexRequest.OnExecute.throw_std.exception", 1, NULL, 0);
    grpc_status = handler->CreateIndex(&context, &request, &response);
    ASSERT_TRUE(grpc_status.ok());
    fiu_disable("CreateIndexRequest.OnExecute.throw_std.exception");

    fiu_enable("CreateIndexRequest.OnExecute.create_index_fail", 1, NULL, 0);
    grpc_status = handler->CreateIndex(&context, &request, &response);
    ASSERT_TRUE(grpc_status.ok());
    fiu_disable("CreateIndexRequest.OnExecute.create_index_fail");

#ifdef MILVUS_GPU_VERSION
    request.mutable_index()->set_index_type(static_cast<int>(milvus::engine::EngineType::FAISS_PQ));
    fiu_enable("CreateIndexRequest.OnExecute.ip_meteric", 1, NULL, 0);
    grpc_status = handler->CreateIndex(&context, &request, &response);
    ASSERT_TRUE(grpc_status.ok());
    fiu_disable("CreateIndexRequest.OnExecute.ip_meteric");
#endif

    ::milvus::grpc::TableName table_name;
    ::milvus::grpc::IndexParam index_param;
    handler->DescribeIndex(&context, &table_name, &index_param);
    table_name.set_table_name("test4");
    handler->DescribeIndex(&context, &table_name, &index_param);
    table_name.set_table_name(TABLE_NAME);
    handler->DescribeIndex(&context, &table_name, &index_param);

    fiu_init(0);
    fiu_enable("DescribeIndexRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->DescribeIndex(&context, &table_name, &index_param);
    fiu_disable("DescribeIndexRequest.OnExecute.throw_std_exception");

    ::milvus::grpc::Status status;
    table_name.Clear();
    handler->DropIndex(&context, &table_name, &status);
    table_name.set_table_name("test5");
    handler->DropIndex(&context, &table_name, &status);

    table_name.set_table_name(TABLE_NAME);

    fiu_init(0);
    fiu_enable("DropIndexRequest.OnExecute.table_not_exist", 1, NULL, 0);
    handler->DropIndex(&context, &table_name, &status);
    fiu_disable("DropIndexRequest.OnExecute.table_not_exist");

    fiu_enable("DropIndexRequest.OnExecute.drop_index_fail", 1, NULL, 0);
    handler->DropIndex(&context, &table_name, &status);
    fiu_disable("DropIndexRequest.OnExecute.drop_index_fail");

    fiu_enable("DropIndexRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->DropIndex(&context, &table_name, &status);
    fiu_disable("DropIndexRequest.OnExecute.throw_std_exception");

    handler->DropIndex(&context, &table_name, &status);
}

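// INSERT_TEST first performs a successful insert and expects VECTOR_COUNT returned IDs, then
// arms the InsertRequest fault points one by one (id_array_error, db_not_found,
// describe_table_fail, illegal_vector_id, invalid_dim, insert_fail, invalid_ids_size, ...)
// and checks that a failed insert does not report a full set of vector IDs. It ends with a
// genuinely malformed record of dimension TABLE_DIM - 1, which must be rejected as
// ILLEGAL_ROWRECORD.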
TEST_F(RpcHandlerTest, INSERT_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::InsertParam request;
    ::milvus::grpc::Status response;

    request.set_table_name(TABLE_NAME);
    std::vector<std::vector<float>> record_array;
    BuildVectors(0, VECTOR_COUNT, record_array);
    ::milvus::grpc::VectorIds vector_ids;
    for (auto& record : record_array) {
        ::milvus::grpc::RowRecord* grpc_record = request.add_row_record_array();
        CopyRowRecord(grpc_record, record);
    }
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_EQ(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_init(0);
    fiu_enable("InsertRequest.OnExecute.id_array_error", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.id_array_error");

    fiu_enable("InsertRequest.OnExecute.db_not_found", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.db_not_found");

    fiu_enable("InsertRequest.OnExecute.describe_table_fail", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.describe_table_fail");

    fiu_enable("InsertRequest.OnExecute.illegal_vector_id", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.illegal_vector_id");

    fiu_enable("InsertRequest.OnExecute.illegal_vector_id2", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.illegal_vector_id2");

    fiu_enable("InsertRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.throw_std_exception");

    fiu_enable("InsertRequest.OnExecute.invalid_dim", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_NE(vector_ids.vector_id_array_size(), VECTOR_COUNT);
    fiu_disable("InsertRequest.OnExecute.invalid_dim");

    fiu_enable("InsertRequest.OnExecute.insert_fail", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    fiu_disable("InsertRequest.OnExecute.insert_fail");

    fiu_enable("InsertRequest.OnExecute.invalid_ids_size", 1, NULL, 0);
    handler->Insert(&context, &request, &vector_ids);
    fiu_disable("InsertRequest.OnExecute.invalid_ids_size");

    // insert vectors with wrong dim
    std::vector<float> record_wrong_dim(TABLE_DIM - 1, 0.5f);
    ::milvus::grpc::RowRecord* grpc_record = request.add_row_record_array();
    CopyRowRecord(grpc_record, record_wrong_dim);
    handler->Insert(&context, &request, &vector_ids);
    ASSERT_EQ(vector_ids.status().error_code(), ::milvus::grpc::ILLEGAL_ROWRECORD);
}

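// SEARCH_TEST walks the SearchParam validation chain in order (null request, missing table
// name, non-existent table, missing topk, missing nprobe, empty query record array) before
// inserting data and issuing a valid Search() plus a SearchInFiles() call.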
TEST_F(RpcHandlerTest, SEARCH_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::SearchParam request;
    ::milvus::grpc::TopKQueryResult response;
    // test null input
    handler->Search(&context, nullptr, &response);

    // test invalid table name
    handler->Search(&context, &request, &response);

    // test table not exist
    request.set_table_name("test3");
    handler->Search(&context, &request, &response);

    // test invalid topk
    request.set_table_name(TABLE_NAME);
    handler->Search(&context, &request, &response);

    // test invalid nprobe
    request.set_topk(10);
    handler->Search(&context, &request, &response);

    // test empty query record array
    request.set_nprobe(32);
    handler->Search(&context, &request, &response);

    std::vector<std::vector<float>> record_array;
    BuildVectors(0, VECTOR_COUNT, record_array);
    ::milvus::grpc::InsertParam insert_param;
    for (auto& record : record_array) {
        ::milvus::grpc::RowRecord* grpc_record = insert_param.add_row_record_array();
        CopyRowRecord(grpc_record, record);
    }
    // insert vectors
    insert_param.set_table_name(TABLE_NAME);
    ::milvus::grpc::VectorIds vector_ids;
    handler->Insert(&context, &insert_param, &vector_ids);

    BuildVectors(0, 10, record_array);
    for (auto& record : record_array) {
        ::milvus::grpc::RowRecord* row_record = request.add_query_record_array();
        CopyRowRecord(row_record, record);
    }
    handler->Search(&context, &request, &response);

    ::milvus::grpc::SearchInFilesParam search_in_files_param;
    std::string* file_id = search_in_files_param.add_file_id_array();
    *file_id = "test_tbl";
    handler->SearchInFiles(&context, &search_in_files_param, &response);
}

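// TABLES_TEST covers the table lifecycle end to end: CreateTable with a progressively
// completed schema, DescribeTable, Insert, ShowTables, ShowTableInfo, CountTable,
// PreloadTable and DropTable, each also exercised through its fiu fault points.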
TEST_F(RpcHandlerTest, TABLES_TEST) {
|
|
::grpc::ServerContext context;
|
|
handler->SetContext(&context, dummy_context);
|
|
handler->RegisterRequestHandler(milvus::server::RequestHandler());
|
|
::milvus::grpc::TableSchema tableschema;
|
|
::milvus::grpc::Status response;
|
|
std::string tablename = "tbl";
|
|
|
|
// create table test
|
|
// test null input
|
|
handler->CreateTable(&context, nullptr, &response);
|
|
// test invalid table name
|
|
handler->CreateTable(&context, &tableschema, &response);
|
|
// test invalid table dimension
|
|
tableschema.set_table_name(tablename);
|
|
handler->CreateTable(&context, &tableschema, &response);
|
|
// test invalid index file size
|
|
tableschema.set_dimension(TABLE_DIM);
|
|
// handler->CreateTable(&context, &tableschema, &response);
|
|
// test invalid index metric type
|
|
tableschema.set_index_file_size(INDEX_FILE_SIZE);
|
|
handler->CreateTable(&context, &tableschema, &response);
|
|
// test table already exist
|
|
tableschema.set_metric_type(1);
|
|
handler->CreateTable(&context, &tableschema, &response);
|
|
|
|
// describe table test
|
|
// test invalid table name
|
|
::milvus::grpc::TableName table_name;
|
|
::milvus::grpc::TableSchema table_schema;
|
|
handler->DescribeTable(&context, &table_name, &table_schema);
|
|
|
|
table_name.set_table_name(TABLE_NAME);
|
|
::grpc::Status status = handler->DescribeTable(&context, &table_name, &table_schema);
|
|
ASSERT_EQ(status.error_code(), ::grpc::Status::OK.error_code());
|
|
|
|
fiu_init(0);
|
|
fiu_enable("DescribeTableRequest.OnExecute.describe_table_fail", 1, NULL, 0);
|
|
handler->DescribeTable(&context, &table_name, &table_schema);
|
|
fiu_disable("DescribeTableRequest.OnExecute.describe_table_fail");
|
|
|
|
fiu_enable("DescribeTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
|
|
handler->DescribeTable(&context, &table_name, &table_schema);
|
|
fiu_disable("DescribeTableRequest.OnExecute.throw_std_exception");
|
|
|
|
::milvus::grpc::InsertParam request;
|
|
std::vector<std::vector<float>> record_array;
|
|
BuildVectors(0, VECTOR_COUNT, record_array);
|
|
::milvus::grpc::VectorIds vector_ids;
|
|
for (int64_t i = 0; i < VECTOR_COUNT; i++) {
|
|
vector_ids.add_vector_id_array(i);
|
|
}
|
|
// Insert vectors
|
|
// test invalid table name
|
|
handler->Insert(&context, &request, &vector_ids);
|
|
request.set_table_name(tablename);
|
|
// test empty row record
|
|
handler->Insert(&context, &request, &vector_ids);
|
|
|
|
for (auto& record : record_array) {
|
|
::milvus::grpc::RowRecord* grpc_record = request.add_row_record_array();
|
|
CopyRowRecord(grpc_record, record);
|
|
}
|
|
// test vector_id size not equal to row record size
|
|
vector_ids.clear_vector_id_array();
|
|
vector_ids.add_vector_id_array(1);
|
|
handler->Insert(&context, &request, &vector_ids);
|
|
|
|
// normally test
|
|
vector_ids.clear_vector_id_array();
|
|
handler->Insert(&context, &request, &vector_ids);
|
|
|
|
request.clear_row_record_array();
|
|
vector_ids.clear_vector_id_array();
|
|
for (uint64_t i = 0; i < 10; ++i) {
|
|
::milvus::grpc::RowRecord* grpc_record = request.add_row_record_array();
|
|
CopyRowRecord(grpc_record, record_array[i]);
|
|
}
|
|
handler->Insert(&context, &request, &vector_ids);
|
|
    // show tables
    ::milvus::grpc::Command cmd;
    ::milvus::grpc::TableNameList table_name_list;
    status = handler->ShowTables(&context, &cmd, &table_name_list);
    ASSERT_EQ(status.error_code(), ::grpc::Status::OK.error_code());

    // show table info
    ::milvus::grpc::TableInfo table_info;
    status = handler->ShowTableInfo(&context, &table_name, &table_info);
    ASSERT_EQ(status.error_code(), ::grpc::Status::OK.error_code());

    fiu_init(0);
    fiu_enable("ShowTablesRequest.OnExecute.show_tables_fail", 1, NULL, 0);
    handler->ShowTables(&context, &cmd, &table_name_list);
    fiu_disable("ShowTablesRequest.OnExecute.show_tables_fail");

    // Count Table
    ::milvus::grpc::TableRowCount count;
    table_name.Clear();
    status = handler->CountTable(&context, &table_name, &count);
    table_name.set_table_name(tablename);
    status = handler->CountTable(&context, &table_name, &count);
    ASSERT_EQ(status.error_code(), ::grpc::Status::OK.error_code());
    // ASSERT_EQ(count.table_row_count(), vector_ids.vector_id_array_size());

    fiu_init(0);
    fiu_enable("CountTableRequest.OnExecute.db_not_found", 1, NULL, 0);
    status = handler->CountTable(&context, &table_name, &count);
    fiu_disable("CountTableRequest.OnExecute.db_not_found");

    fiu_enable("CountTableRequest.OnExecute.status_error", 1, NULL, 0);
    status = handler->CountTable(&context, &table_name, &count);
    fiu_disable("CountTableRequest.OnExecute.status_error");

    fiu_enable("CountTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    status = handler->CountTable(&context, &table_name, &count);
    fiu_disable("CountTableRequest.OnExecute.throw_std_exception");

    // Preload Table
    table_name.Clear();
    status = handler->PreloadTable(&context, &table_name, &response);
    table_name.set_table_name(TABLE_NAME);
    status = handler->PreloadTable(&context, &table_name, &response);
    ASSERT_EQ(status.error_code(), ::grpc::Status::OK.error_code());

    fiu_enable("PreloadTableRequest.OnExecute.preload_table_fail", 1, NULL, 0);
    handler->PreloadTable(&context, &table_name, &response);
    fiu_disable("PreloadTableRequest.OnExecute.preload_table_fail");

    fiu_enable("PreloadTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->PreloadTable(&context, &table_name, &response);
    fiu_disable("PreloadTableRequest.OnExecute.throw_std_exception");

    fiu_init(0);
    fiu_enable("CreateTableRequest.OnExecute.invalid_index_file_size", 1, NULL, 0);
    tableschema.set_table_name(tablename);
    handler->CreateTable(&context, &tableschema, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreateTableRequest.OnExecute.invalid_index_file_size");

    fiu_enable("CreateTableRequest.OnExecute.db_already_exist", 1, NULL, 0);
    tableschema.set_table_name(tablename);
    handler->CreateTable(&context, &tableschema, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreateTableRequest.OnExecute.db_already_exist");

    fiu_enable("CreateTableRequest.OnExecute.create_table_fail", 1, NULL, 0);
    tableschema.set_table_name(tablename);
    handler->CreateTable(&context, &tableschema, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreateTableRequest.OnExecute.create_table_fail");

    fiu_enable("CreateTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    tableschema.set_table_name(tablename);
    handler->CreateTable(&context, &tableschema, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreateTableRequest.OnExecute.throw_std_exception");

    // Drop table
    table_name.set_table_name("");
    // test invalid table name
    ::grpc::Status grpc_status = handler->DropTable(&context, &table_name, &response);
    table_name.set_table_name(tablename);

    fiu_enable("DropTableRequest.OnExecute.db_not_found", 1, NULL, 0);
    handler->DropTable(&context, &table_name, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("DropTableRequest.OnExecute.db_not_found");

    fiu_enable("DropTableRequest.OnExecute.describe_table_fail", 1, NULL, 0);
    handler->DropTable(&context, &table_name, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("DropTableRequest.OnExecute.describe_table_fail");

    fiu_enable("DropTableRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->DropTable(&context, &table_name, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("DropTableRequest.OnExecute.throw_std_exception");

    grpc_status = handler->DropTable(&context, &table_name, &response);
    ASSERT_EQ(grpc_status.error_code(), ::grpc::Status::OK.error_code());
    int error_code = response.error_code();
    ASSERT_EQ(error_code, ::milvus::grpc::ErrorCode::SUCCESS);

    tableschema.set_table_name(table_name.table_name());
    handler->DropTable(&context, &table_name, &response);
    sleep(1);
    handler->CreateTable(&context, &tableschema, &response);
    ASSERT_EQ(response.error_code(), ::grpc::Status::OK.error_code());

    fiu_enable("DropTableRequest.OnExecute.drop_table_fail", 1, NULL, 0);
    handler->DropTable(&context, &table_name, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("DropTableRequest.OnExecute.drop_table_fail");

    handler->DropTable(&context, &table_name, &response);
}

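// Partition lifecycle: create a table, add a partition with tag "0", list partitions
// (the extra entry is presumably the default partition), exercise fault-injected
// error paths, then drop the partition.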
TEST_F(RpcHandlerTest, PARTITION_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::TableSchema table_schema;
    ::milvus::grpc::Status response;
    std::string str_table_name = "tbl_partition";
    table_schema.set_table_name(str_table_name);
    table_schema.set_dimension(TABLE_DIM);
    table_schema.set_index_file_size(INDEX_FILE_SIZE);
    table_schema.set_metric_type(1);
    handler->CreateTable(&context, &table_schema, &response);

    ::milvus::grpc::PartitionParam partition_param;
    partition_param.set_table_name(str_table_name);
    std::string partition_tag = "0";
    partition_param.set_tag(partition_tag);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_EQ(response.error_code(), ::grpc::Status::OK.error_code());

    ::milvus::grpc::TableName table_name;
    table_name.set_table_name(str_table_name);
    ::milvus::grpc::PartitionList partition_list;
    handler->ShowPartitions(&context, &table_name, &partition_list);
    ASSERT_EQ(response.error_code(), ::grpc::Status::OK.error_code());
    ASSERT_EQ(partition_list.partition_tag_array_size(), 2);

    fiu_init(0);
    fiu_enable("ShowPartitionsRequest.OnExecute.invalid_table_name", 1, NULL, 0);
    handler->ShowPartitions(&context, &table_name, &partition_list);
    fiu_disable("ShowPartitionsRequest.OnExecute.invalid_table_name");

    fiu_enable("ShowPartitionsRequest.OnExecute.show_partition_fail", 1, NULL, 0);
    handler->ShowPartitions(&context, &table_name, &partition_list);
    fiu_disable("ShowPartitionsRequest.OnExecute.show_partition_fail");

    fiu_init(0);
    fiu_enable("CreatePartitionRequest.OnExecute.invalid_table_name", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.invalid_table_name");

    fiu_enable("CreatePartitionRequest.OnExecute.invalid_partition_name", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.invalid_partition_name");

    fiu_enable("CreatePartitionRequest.OnExecute.invalid_partition_tags", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.invalid_partition_tags");

    fiu_enable("CreatePartitionRequest.OnExecute.db_already_exist", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.db_already_exist");

    fiu_enable("CreatePartitionRequest.OnExecute.create_partition_fail", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.create_partition_fail");

    fiu_enable("CreatePartitionRequest.OnExecute.throw_std_exception", 1, NULL, 0);
    handler->CreatePartition(&context, &partition_param, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("CreatePartitionRequest.OnExecute.throw_std_exception");

    ::milvus::grpc::PartitionParam partition_parm;
    partition_parm.set_table_name(str_table_name);
    partition_parm.set_tag(partition_tag);

    fiu_enable("DropPartitionRequest.OnExecute.invalid_table_name", 1, NULL, 0);
    handler->DropPartition(&context, &partition_parm, &response);
    ASSERT_NE(response.error_code(), ::grpc::Status::OK.error_code());
    fiu_disable("DropPartitionRequest.OnExecute.invalid_table_name");

    handler->DropPartition(&context, &partition_parm, &response);
    ASSERT_EQ(response.error_code(), ::grpc::Status::OK.error_code());
}

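// Cmd endpoint: each supported admin command string is sent once; only the
// "version" reply is asserted against MILVUS_VERSION.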
TEST_F(RpcHandlerTest, CMD_TEST) {
    ::grpc::ServerContext context;
    handler->SetContext(&context, dummy_context);
    handler->RegisterRequestHandler(milvus::server::RequestHandler());
    ::milvus::grpc::Command command;
    command.set_cmd("version");
    ::milvus::grpc::StringReply reply;
    handler->Cmd(&context, &command, &reply);
    ASSERT_EQ(reply.string_reply(), MILVUS_VERSION);

    command.set_cmd("tasktable");
    handler->Cmd(&context, &command, &reply);
    command.set_cmd("test");
    handler->Cmd(&context, &command, &reply);

    command.set_cmd("status");
    handler->Cmd(&context, &command, &reply);
    command.set_cmd("mode");
    handler->Cmd(&context, &command, &reply);

    command.set_cmd("build_commit_id");
    handler->Cmd(&context, &command, &reply);

    command.set_cmd("set_config");
    handler->Cmd(&context, &command, &reply);
    command.set_cmd("get_config");
    handler->Cmd(&context, &command, &reply);
}

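// Helpers for the RequestScheduler tests below: DummyRequest and AsyncDummyRequest
// are minimal BaseRequest subclasses whose OnExecute() simply returns OK.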
//////////////////////////////////////////////////////////////////////
namespace {

class DummyRequest : public milvus::server::BaseRequest {
 public:
    milvus::Status
    OnExecute() override {
        return milvus::Status::OK();
    }

    static milvus::server::BaseRequestPtr
    Create(std::string& dummy) {
        return std::shared_ptr<milvus::server::BaseRequest>(new DummyRequest(dummy));
    }

 public:
    explicit DummyRequest(std::string& dummy)
        : BaseRequest(std::make_shared<milvus::server::Context>("dummy_request_id"), dummy) {
    }
};

class RpcSchedulerTest : public testing::Test {
 protected:
    void
    SetUp() override {
        std::string dummy = "dql";
        request_ptr = std::make_shared<DummyRequest>(dummy);
    }

    std::shared_ptr<DummyRequest> request_ptr;
};

class AsyncDummyRequest : public milvus::server::BaseRequest {
 public:
    milvus::Status
    OnExecute() override {
        return milvus::Status::OK();
    }

    static milvus::server::BaseRequestPtr
    Create(std::string& dummy) {
        return std::shared_ptr<milvus::server::BaseRequest>(new DummyRequest(dummy));
    }

    void TestSetStatus() {
        SetStatus(milvus::SERVER_INVALID_ARGUMENT, "");
    }

 public:
    explicit AsyncDummyRequest(std::string& dummy)
        : BaseRequest(std::make_shared<milvus::server::Context>("dummy_request_id2"), dummy, true) {
    }
};
}  // namespace

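// Scheduler test: executes dummy requests directly and through the scheduler queue,
// and uses fault injection to hit the push/take failure paths.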
TEST_F(RpcSchedulerTest, BASE_TASK_TEST) {
    auto status = request_ptr->Execute();
    ASSERT_TRUE(status.ok());

    milvus::server::RequestScheduler::GetInstance().Start();
    // milvus::server::RequestScheduler::GetInstance().Stop();
    // milvus::server::RequestScheduler::GetInstance().Start();

    std::string dummy = "dql";
    milvus::server::BaseRequestPtr base_task_ptr = DummyRequest::Create(dummy);
    milvus::server::RequestScheduler::ExecRequest(base_task_ptr);

    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(request_ptr);

    fiu_init(0);
    fiu_enable("RequestScheduler.ExecuteRequest.push_queue_fail", 1, NULL, 0);
    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(request_ptr);
    fiu_disable("RequestScheduler.ExecuteRequest.push_queue_fail");

    // std::string dummy2 = "dql2";
    // milvus::server::BaseRequestPtr base_task_ptr2 = DummyRequest::Create(dummy2);
    // fiu_enable("RequestScheduler.PutToQueue.null_queue", 1, NULL, 0);
    // milvus::server::RequestScheduler::GetInstance().ExecuteRequest(base_task_ptr2);
    // fiu_disable("RequestScheduler.PutToQueue.null_queue");

    std::string dummy3 = "dql3";
    milvus::server::BaseRequestPtr base_task_ptr3 = DummyRequest::Create(dummy3);
    fiu_enable("RequestScheduler.TakeToExecute.throw_std_exception", 1, NULL, 0);
    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(base_task_ptr3);
    fiu_disable("RequestScheduler.TakeToExecute.throw_std_exception");

    std::string dummy4 = "dql4";
    milvus::server::BaseRequestPtr base_task_ptr4 = DummyRequest::Create(dummy4);
    fiu_enable("RequestScheduler.TakeToExecute.execute_fail", 1, NULL, 0);
    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(base_task_ptr4);
    fiu_disable("RequestScheduler.TakeToExecute.execute_fail");

    std::string dummy5 = "dql5";
    milvus::server::BaseRequestPtr base_task_ptr5 = DummyRequest::Create(dummy5);
    fiu_enable("RequestScheduler.PutToQueue.push_null_thread", 1, NULL, 0);
    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(base_task_ptr5);
    fiu_disable("RequestScheduler.PutToQueue.push_null_thread");

    request_ptr = nullptr;
    milvus::server::RequestScheduler::GetInstance().ExecuteRequest(request_ptr);

    milvus::server::BaseRequestPtr null_ptr = nullptr;
    milvus::server::RequestScheduler::ExecRequest(null_ptr);

    std::string async_dummy = "AsyncDummyRequest";
    auto async_ptr = std::make_shared<AsyncDummyRequest>(async_dummy);
    auto base_ptr = std::static_pointer_cast<milvus::server::BaseRequest>(async_ptr);
    milvus::server::RequestScheduler::ExecRequest(base_ptr);
    async_ptr->TestSetStatus();

    milvus::server::RequestScheduler::GetInstance().Stop();
    milvus::server::RequestScheduler::GetInstance().Start();
    milvus::server::RequestScheduler::GetInstance().Stop();
}

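// Server start/stop: fault injection simulates invalid address and port configs,
// then the server is started and stopped normally.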
TEST(RpcTest, RPC_SERVER_TEST) {
    using GrpcServer = milvus::server::grpc::GrpcServer;
    GrpcServer& server = GrpcServer::GetInstance();

    fiu_init(0);
    fiu_enable("check_config_address_fail", 1, NULL, 0);
    server.Start();
    sleep(2);
    fiu_disable("check_config_address_fail");
    server.Stop();

    fiu_enable("check_config_port_fail", 1, NULL, 0);
    server.Start();
    sleep(2);
    fiu_disable("check_config_port_fail");
    server.Stop();

    server.Start();
    sleep(2);
    server.Stop();
}

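// Smoke test for the gRPC interceptor hook handler: the callbacks are invoked with
// null arguments, which is expected to be safe for the default implementations.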
TEST(RpcTest, InterceptorHookHandlerTest) {
    auto handler = std::make_shared<milvus::server::grpc::GrpcInterceptorHookHandler>();
    handler->OnPostRecvInitialMetaData(nullptr, nullptr);
    handler->OnPreSendMessage(nullptr, nullptr);
}