From 8a37609dcc04440b466a616855d78d670505b770 Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 15:44:25 +0800 Subject: [PATCH 01/14] [skip ci] Update README --- README.md | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 08ac3de4a0..5b2fc4454b 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ ![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen) ![Language](https://img.shields.io/badge/language-C%2B%2B-blue) [![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master) -![Release](https://img.shields.io/badge/release-v0.5.2-yellowgreen) +![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen) ![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen) [中文版](README_CN.md) | [日本語版](README_JP.md) @@ -18,7 +18,7 @@ For more detailed introduction of Milvus and its architecture, see [Milvus overv Milvus provides stable [Python](https://github.com/milvus-io/pymilvus), [Java](https://github.com/milvus-io/milvus-sdk-java) and [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIs. -Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.2/). +Keep up-to-date with newest releases and latest updates by reading Milvus [release notes](https://www.milvus.io/docs/en/release/v0.5.3/). ## Get started @@ -52,12 +52,13 @@ We use [GitHub issues](https://github.com/milvus-io/milvus/issues) to track issu To connect with other users and contributors, welcome to join our [Slack channel](https://join.slack.com/t/milvusio/shared_invite/enQtNzY1OTQ0NDI3NjMzLWNmYmM1NmNjOTQ5MGI5NDhhYmRhMGU5M2NhNzhhMDMzY2MzNDdlYjM5ODQ5MmE3ODFlYzU3YjJkNmVlNDQ2ZTk). -## Thanks +## Contributors -We greatly appreciate the help of the following people. 
+Below is a list of Milvus contributors. We greatly appreciate your contributions! - [akihoni](https://github.com/akihoni) provided the CN version of README, and found a broken link in the doc. - [goodhamgupta](https://github.com/goodhamgupta) fixed a filename typo in the bootcamp doc. +- [erdustiggen](https://github.com/erdustiggen) changed from std::cout to LOG for error messages, and fixed a clang format issue as well as some grammatical errors. ## Resources @@ -65,6 +66,8 @@ We greatly appreciate the help of the following people. - [Milvus bootcamp](https://github.com/milvus-io/bootcamp) +- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs/test_report) + - [Milvus Medium](https://medium.com/@milvusio) - [Milvus CSDN](https://zilliz.blog.csdn.net/) From e5ee302ef8058e5f5ee11e448bfa81ce3beeb891 Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 16:03:43 +0800 Subject: [PATCH 02/14] [skip ci] Update README_CN --- README_CN.md | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/README_CN.md b/README_CN.md index d5de0b1cd6..979c476ebf 100644 --- a/README_CN.md +++ b/README_CN.md @@ -4,7 +4,7 @@ ![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen) ![Language](https://img.shields.io/badge/language-C%2B%2B-blue) [![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master) -![Release](https://img.shields.io/badge/release-v0.5.2-orange) +![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen) ![Release_date](https://img.shields.io/badge/release_date-October-yellowgreen) # 欢迎来到 Milvus @@ -17,7 +17,7 @@ Milvus 是一款开源的、针对海量特征向量的相似性搜索引擎。 Milvus 提供稳定的 [Python](https://github.com/milvus-io/pymilvus)、[Java](https://github.com/milvus-io/milvus-sdk-java) 以及 C++ 的 API 接口。 -通过 [版本发布说明](https://milvus.io/docs/zh-CN/release/v0.5.2/) 获取最新版本的功能和更新。 +通过 
[版本发布说明](https://milvus.io/docs/zh-CN/release/v0.5.3/) 获取最新版本的功能和更新。 ## 开始使用 Milvus @@ -57,6 +57,7 @@ Milvus 提供稳定的 [Python](https://github.com/milvus-io/pymilvus)、[Java]( - [akihoni](https://github.com/akihoni) 提供了中文版 README,并发现了 README 中的无效链接。 - [goodhamgupta](https://github.com/goodhamgupta) 发现并修正了在线训练营文档中的文件名拼写错误。 +- [erdustiggen](https://github.com/erdustiggen) 将错误信息里的 std::cout 修改为 LOG,修正了一个 Clang 格式问题和一些语法错误。 ## 相关链接 @@ -64,6 +65,8 @@ Milvus 提供稳定的 [Python](https://github.com/milvus-io/pymilvus)、[Java]( - [Milvus 在线训练营](https://github.com/milvus-io/bootcamp) +- [Milvus 测试报告](https://github.com/milvus-io/milvus/tree/master/docs/test_report_cn) + - [Milvus Medium](https://medium.com/@milvusio) - [Milvus CSDN](https://zilliz.blog.csdn.net/) From 84c1483d969ea3d3d8c5f64f2d575271d4d2d1aa Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 16:10:36 +0800 Subject: [PATCH 03/14] [skip ci] Update README_JP --- README_JP.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/README_JP.md b/README_JP.md index fd80b5d2ca..d55fea4a14 100644 --- a/README_JP.md +++ b/README_JP.md @@ -5,7 +5,7 @@ ![LICENSE](https://img.shields.io/badge/license-Apache--2.0-brightgreen) ![Language](https://img.shields.io/badge/language-C%2B%2B-blue) [![codebeat badge](https://codebeat.co/badges/e030a4f6-b126-4475-a938-4723d54ec3a7?style=plastic)](https://codebeat.co/projects/github-com-jinhai-cn-milvus-master) -![Release](https://img.shields.io/badge/release-v0.5.2-yellowgreen) +![Release](https://img.shields.io/badge/release-v0.5.3-yellowgreen) ![Release_date](https://img.shields.io/badge/release%20date-November-yellowgreen) @@ -15,9 +15,9 @@ Milvusは世界中一番早い特徴ベクトルにむかう類似性検索エンジンです。不均質な計算アーキテクチャーに基づいて効率を最大化出来ます。数十億のベクタの中に目標を検索できるまで数ミリ秒しかかからず、最低限の計算資源だけが必要です。 -Milvusは安定的なPython、Java又は C++ APIsを提供します。 +Milvusは安定的な[Python](https://github.com/milvus-io/pymilvus)、[Java](https://github.com/milvus-io/milvus-sdk-java)又は 
[C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIsを提供します。 -Milvus [リリースノート](https://milvus.io/docs/en/release/v0.5.2/)を読んで最新バージョンや更新情報を手に入れます。 +Milvus [リリースノート](https://milvus.io/docs/en/release/v0.5.3/)を読んで最新バージョンや更新情報を手に入れます。(https://github.com/milvus-io/milvus/tree/master/core/src/sdk) ## はじめに @@ -46,7 +46,7 @@ C++サンプルコードを実行するために、次のコマンドをつか 本プロジェクトへの貢献に心より感謝いたします。 Milvusを貢献したいと思うなら、[貢献規約](CONTRIBUTING.md)を読んでください。 本プロジェクトはMilvusの[行動規範](CODE_OF_CONDUCT.md)に従います。プロジェクトに参加したい場合は、行動規範を従ってください。 -[GitHub issues](https://github.com/milvus-io/milvus/issues/new/choose) を使って問題やバッグなとを報告しでください。 一般てきな問題なら, Milvusコミュニティに参加してください。 +[GitHub issues](https://github.com/milvus-io/milvus/issues) を使って問題やバッグなとを報告しでください。 一般てきな問題なら, Milvusコミュニティに参加してください。 ## Milvusコミュニティを参加する @@ -59,6 +59,8 @@ C++サンプルコードを実行するために、次のコマンドをつか - [Milvus](https://github.com/milvus-io/bootcamp) +- [Milvus テストレポート](https://github.com/milvus-io/milvus/tree/master/docs/test_report) + - [Milvus Medium](https://medium.com/@milvusio) - [Milvus CSDN](https://zilliz.blog.csdn.net/) From a54df0d3ccf06c0b36df9ada37919016cce2a95c Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 16:57:22 +0800 Subject: [PATCH 04/14] [skip ci] minor change --- README_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README_CN.md b/README_CN.md index 979c476ebf..df407f1a5f 100644 --- a/README_CN.md +++ b/README_CN.md @@ -65,7 +65,7 @@ Milvus 提供稳定的 [Python](https://github.com/milvus-io/pymilvus)、[Java]( - [Milvus 在线训练营](https://github.com/milvus-io/bootcamp) -- [Milvus 测试报告](https://github.com/milvus-io/milvus/tree/master/docs/test_report_cn) +- [Milvus 测试报告](https://github.com/milvus-io/milvus/tree/master/docs/test_report) - [Milvus Medium](https://medium.com/@milvusio) From fd9c1a123065b4c5c4a28299ace4dd02aae88b20 Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 17:21:10 +0800 
Subject: [PATCH 05/14] [skip ci] Update test reports link --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 5b2fc4454b..d8d5d80b11 100644 --- a/README.md +++ b/README.md @@ -66,7 +66,7 @@ Below is a list of Milvus contributors. We greatly appreciate your contributions - [Milvus bootcamp](https://github.com/milvus-io/bootcamp) -- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs/test_report) +- [Milvus test reports](https://github.com/milvus-io/milvus/tree/master/docs) - [Milvus Medium](https://medium.com/@milvusio) From fd9beb8d3b173f4522bf0308637a98ae7a00172c Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 17:22:18 +0800 Subject: [PATCH 06/14] [skip ci] Update test report link --- README_CN.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README_CN.md b/README_CN.md index df407f1a5f..df5445d931 100644 --- a/README_CN.md +++ b/README_CN.md @@ -65,7 +65,7 @@ Milvus 提供稳定的 [Python](https://github.com/milvus-io/pymilvus)、[Java]( - [Milvus 在线训练营](https://github.com/milvus-io/bootcamp) -- [Milvus 测试报告](https://github.com/milvus-io/milvus/tree/master/docs/test_report) +- [Milvus 测试报告](https://github.com/milvus-io/milvus/tree/master/docs) - [Milvus Medium](https://medium.com/@milvusio) From e8f97ece6f047dac0059f1decb33d5bfe6f13a92 Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 17:22:53 +0800 Subject: [PATCH 07/14] [skip ci] Update test report link --- README_JP.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README_JP.md b/README_JP.md index d55fea4a14..b5001a476c 100644 --- a/README_JP.md +++ b/README_JP.md @@ -59,7 +59,7 @@ C++サンプルコードを実行するために、次のコマンドをつか - [Milvus](https://github.com/milvus-io/bootcamp) -- [Milvus テストレポート](https://github.com/milvus-io/milvus/tree/master/docs/test_report) +- [Milvus 
テストレポート](https://github.com/milvus-io/milvus/tree/master/docs) - [Milvus Medium](https://medium.com/@milvusio) From 388e7fc315ca4e19b2e4269c419d39f8c970b615 Mon Sep 17 00:00:00 2001 From: jielinxu <52057195+jielinxu@users.noreply.github.com> Date: Wed, 20 Nov 2019 17:23:47 +0800 Subject: [PATCH 08/14] [skip ci] minor delete --- README_JP.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README_JP.md b/README_JP.md index b5001a476c..4a1d67738d 100644 --- a/README_JP.md +++ b/README_JP.md @@ -17,7 +17,7 @@ Milvusは世界中一番早い特徴ベクトルにむかう類似性検索エ Milvusは安定的な[Python](https://github.com/milvus-io/pymilvus)、[Java](https://github.com/milvus-io/milvus-sdk-java)又は [C++](https://github.com/milvus-io/milvus/tree/master/core/src/sdk) APIsを提供します。 -Milvus [リリースノート](https://milvus.io/docs/en/release/v0.5.3/)を読んで最新バージョンや更新情報を手に入れます。(https://github.com/milvus-io/milvus/tree/master/core/src/sdk) +Milvus [リリースノート](https://milvus.io/docs/en/release/v0.5.3/)を読んで最新バージョンや更新情報を手に入れます。 ## はじめに From 1ac30913e73d9eb621eb770a7dd0125fd5c2c6a8 Mon Sep 17 00:00:00 2001 From: "xiaojun.lin" Date: Thu, 21 Nov 2019 15:06:00 +0800 Subject: [PATCH 09/14] move seal to Load --- CHANGELOG.md | 1 + .../knowhere/knowhere/index/vector_index/FaissBaseIndex.cpp | 4 +++- .../knowhere/knowhere/index/vector_index/IndexGPUIVF.cpp | 3 --- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index a8b243546e..af4abe71a5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,6 +18,7 @@ Please mark all change in change log and use the ticket from JIRA. 
- \#412 - Message returned is confused when partition created with null partition name - \#416 - Drop the same partition success repeatally - \#440 - Query API in customization still uses old version +- \#458 - Index data is not compatible between 0.5 and 0.6 ## Feature - \#12 - Pure CPU version for Milvus diff --git a/core/src/index/knowhere/knowhere/index/vector_index/FaissBaseIndex.cpp b/core/src/index/knowhere/knowhere/index/vector_index/FaissBaseIndex.cpp index 783487be3a..8fce37a81e 100644 --- a/core/src/index/knowhere/knowhere/index/vector_index/FaissBaseIndex.cpp +++ b/core/src/index/knowhere/knowhere/index/vector_index/FaissBaseIndex.cpp @@ -33,7 +33,7 @@ FaissBaseIndex::SerializeImpl() { try { faiss::Index* index = index_.get(); - SealImpl(); + // SealImpl(); MemoryIOWriter writer; faiss::write_index(index, &writer); @@ -60,6 +60,8 @@ FaissBaseIndex::LoadImpl(const BinarySet& index_binary) { faiss::Index* index = faiss::read_index(&reader); index_.reset(index); + + SealImpl(); } void diff --git a/core/src/index/knowhere/knowhere/index/vector_index/IndexGPUIVF.cpp b/core/src/index/knowhere/knowhere/index/vector_index/IndexGPUIVF.cpp index 251dfc12ed..d69f87a061 100644 --- a/core/src/index/knowhere/knowhere/index/vector_index/IndexGPUIVF.cpp +++ b/core/src/index/knowhere/knowhere/index/vector_index/IndexGPUIVF.cpp @@ -86,9 +86,6 @@ GPUIVF::SerializeImpl() { faiss::Index* index = index_.get(); faiss::Index* host_index = faiss::gpu::index_gpu_to_cpu(index); - // TODO(linxj): support seal - // SealImpl(); - faiss::write_index(host_index, &writer); delete host_index; } From 098ba111d7437f16e2614cd28b0610cfcdf7b608 Mon Sep 17 00:00:00 2001 From: quicksilver Date: Thu, 21 Nov 2019 15:42:10 +0800 Subject: [PATCH 10/14] format Jenkinsfile --- ci/jenkins/Jenkinsfile | 24 +++--- .../pod/milvus-cpu-version-build-env-pod.yaml | 2 +- .../pod/milvus-gpu-version-build-env-pod.yaml | 2 +- ci/jenkins/step/publishImages.groovy | 78 +++++++++---------- 4 files changed, 52 
insertions(+), 54 deletions(-) diff --git a/ci/jenkins/Jenkinsfile b/ci/jenkins/Jenkinsfile index 8d3953b112..47be9f5bb6 100644 --- a/ci/jenkins/Jenkinsfile +++ b/ci/jenkins/Jenkinsfile @@ -53,7 +53,7 @@ pipeline { stage("Run Build") { agent { kubernetes { - label "${BINRARY_VERSION}-build" + label "${env.BINRARY_VERSION}-build" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml' } @@ -62,7 +62,7 @@ pipeline { stages { stage('Build') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load "${env.WORKSPACE}/ci/jenkins/step/build.groovy" } @@ -71,7 +71,7 @@ pipeline { } stage('Code Coverage') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy" } @@ -80,7 +80,7 @@ pipeline { } stage('Upload Package') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load "${env.WORKSPACE}/ci/jenkins/step/package.groovy" } @@ -93,7 +93,7 @@ pipeline { stage("Publish docker images") { agent { kubernetes { - label "${BINRARY_VERSION}-publish" + label "${env.BINRARY_VERSION}-publish" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/docker-pod.yaml' } @@ -115,7 +115,7 @@ pipeline { stage("Deploy to Development") { agent { kubernetes { - label "${BINRARY_VERSION}-dev-test" + label "${env.BINRARY_VERSION}-dev-test" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/testEnvironment.yaml' } @@ -183,7 +183,7 @@ pipeline { stage("Run Build") { agent { kubernetes { - label "${BINRARY_VERSION}-build" + label "${env.BINRARY_VERSION}-build" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml' } @@ -192,7 +192,7 @@ pipeline { stages { stage('Build') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load 
"${env.WORKSPACE}/ci/jenkins/step/build.groovy" } @@ -201,7 +201,7 @@ pipeline { } stage('Code Coverage') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load "${env.WORKSPACE}/ci/jenkins/step/coverage.groovy" } @@ -210,7 +210,7 @@ pipeline { } stage('Upload Package') { steps { - container('milvus-build-env') { + container("milvus-${env.BINRARY_VERSION}-build-env") { script { load "${env.WORKSPACE}/ci/jenkins/step/package.groovy" } @@ -223,7 +223,7 @@ pipeline { stage("Publish docker images") { agent { kubernetes { - label "${BINRARY_VERSION}-publish" + label "${env.BINRARY_VERSION}-publish" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/docker-pod.yaml' } @@ -245,7 +245,7 @@ pipeline { stage("Deploy to Development") { agent { kubernetes { - label "${BINRARY_VERSION}-dev-test" + label "${env.BINRARY_VERSION}-dev-test" defaultContainer 'jnlp' yamlFile 'ci/jenkins/pod/testEnvironment.yaml' } diff --git a/ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml b/ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml index 561bfe8140..894067d66c 100644 --- a/ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml +++ b/ci/jenkins/pod/milvus-cpu-version-build-env-pod.yaml @@ -7,7 +7,7 @@ metadata: componet: cpu-build-env spec: containers: - - name: milvus-build-env + - name: milvus-cpu-build-env image: registry.zilliz.com/milvus/milvus-cpu-build-env:v0.6.0-ubuntu18.04 env: - name: POD_IP diff --git a/ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml b/ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml index 422dd72ab2..f5ceb9462b 100644 --- a/ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml +++ b/ci/jenkins/pod/milvus-gpu-version-build-env-pod.yaml @@ -7,7 +7,7 @@ metadata: componet: gpu-build-env spec: containers: - - name: milvus-build-env + - name: milvus-gpu-build-env image: registry.zilliz.com/milvus/milvus-gpu-build-env:v0.6.0-ubuntu18.04 env: - name: POD_IP diff --git 
a/ci/jenkins/step/publishImages.groovy b/ci/jenkins/step/publishImages.groovy index 72e9924c62..5449bcedd8 100644 --- a/ci/jenkins/step/publishImages.groovy +++ b/ci/jenkins/step/publishImages.groovy @@ -1,47 +1,45 @@ -container('publish-images') { - timeout(time: 15, unit: 'MINUTES') { - dir ("docker/deploy/${env.BINRARY_VERSION}/${env.OS_NAME}") { - def binaryPackage = "${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz" +timeout(time: 15, unit: 'MINUTES') { + dir ("docker/deploy/${env.BINRARY_VERSION}/${env.OS_NAME}") { + def binaryPackage = "${PROJECT_NAME}-${PACKAGE_VERSION}.tar.gz" - withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) { - def downloadStatus = sh(returnStatus: true, script: "curl -u${JFROG_USERNAME}:${JFROG_PASSWORD} -O ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage}") + withCredentials([usernamePassword(credentialsId: "${params.JFROG_CREDENTIALS_ID}", usernameVariable: 'JFROG_USERNAME', passwordVariable: 'JFROG_PASSWORD')]) { + def downloadStatus = sh(returnStatus: true, script: "curl -u${JFROG_USERNAME}:${JFROG_PASSWORD} -O ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage}") - if (downloadStatus != 0) { - error("\" Download \" ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage} \" failed!") - } + if (downloadStatus != 0) { + error("\" Download \" ${params.JFROG_ARTFACTORY_URL}/milvus/package/${binaryPackage} \" failed!") } - sh "tar zxvf ${binaryPackage}" - def imageName = "${PROJECT_NAME}/engine:${DOCKER_VERSION}" + } + sh "tar zxvf ${binaryPackage}" + def imageName = "${PROJECT_NAME}/engine:${DOCKER_VERSION}" - try { - def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null") - if (isExistSourceImage == 0) { - def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}") - } - - def customImage = 
docker.build("${imageName}") - - def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null") - if (isExistTargeImage == 0) { - def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}") - } - - docker.withRegistry("https://${params.DOKCER_REGISTRY_URL}", "${params.DOCKER_CREDENTIALS_ID}") { - customImage.push() - } - } catch (exc) { - throw exc - } finally { - def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null") - if (isExistSourceImage == 0) { - def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}") - } - - def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null") - if (isExistTargeImage == 0) { - def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}") - } + try { + def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null") + if (isExistSourceImage == 0) { + def removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}") } - } + + def customImage = docker.build("${imageName}") + + def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null") + if (isExistTargeImage == 0) { + def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}") + } + + docker.withRegistry("https://${params.DOKCER_REGISTRY_URL}", "${params.DOCKER_CREDENTIALS_ID}") { + customImage.push() + } + } catch (exc) { + throw exc + } finally { + def isExistSourceImage = sh(returnStatus: true, script: "docker inspect --type=image ${imageName} 2>&1 > /dev/null") + if (isExistSourceImage == 0) { + def 
removeSourceImageStatus = sh(returnStatus: true, script: "docker rmi ${imageName}") + } + + def isExistTargeImage = sh(returnStatus: true, script: "docker inspect --type=image ${params.DOKCER_REGISTRY_URL}/${imageName} 2>&1 > /dev/null") + if (isExistTargeImage == 0) { + def removeTargeImageStatus = sh(returnStatus: true, script: "docker rmi ${params.DOKCER_REGISTRY_URL}/${imageName}") + } + } } } From 3285aa98851beb7f3fbb8ea5d862ba1b30ece3ee Mon Sep 17 00:00:00 2001 From: quicksilver Date: Thu, 21 Nov 2019 16:08:14 +0800 Subject: [PATCH 11/14] format Jenkinsfile --- ci/jenkins/step/cleanupSingleDev.groovy | 8 ++++---- ci/jenkins/step/deploySingle2Dev.groovy | 2 +- ci/jenkins/step/singleDevNightlyTest.groovy | 6 +++--- ci/jenkins/step/singleDevTest.groovy | 6 +++--- 4 files changed, 11 insertions(+), 11 deletions(-) diff --git a/ci/jenkins/step/cleanupSingleDev.groovy b/ci/jenkins/step/cleanupSingleDev.groovy index 30325e0c91..3311592373 100644 --- a/ci/jenkins/step/cleanupSingleDev.groovy +++ b/ci/jenkins/step/cleanupSingleDev.groovy @@ -1,12 +1,12 @@ try { - def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true + def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true if (!helmResult) { - sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}" + sh "helm del --purge ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}" } } catch (exc) { - def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true + def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true if (!helmResult) { - sh "helm del --purge ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}" + 
sh "helm del --purge ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}" } throw exc } diff --git a/ci/jenkins/step/deploySingle2Dev.groovy b/ci/jenkins/step/deploySingle2Dev.groovy index 7b479ff44a..929548d645 100644 --- a/ci/jenkins/step/deploySingle2Dev.groovy +++ b/ci/jenkins/step/deploySingle2Dev.groovy @@ -3,7 +3,7 @@ sh 'helm repo update' dir ('milvus-helm') { checkout([$class: 'GitSCM', branches: [[name: "0.6.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.6.0:refs/remotes/origin/0.6.0"]]]) dir ("milvus") { - sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." + sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." } } diff --git a/ci/jenkins/step/singleDevNightlyTest.groovy b/ci/jenkins/step/singleDevNightlyTest.groovy index cee8a092c1..9944a8bac0 100644 --- a/ci/jenkins/step/singleDevNightlyTest.groovy +++ b/ci/jenkins/step/singleDevNightlyTest.groovy @@ -1,7 +1,7 @@ timeout(time: 90, unit: 'MINUTES') { dir ("tests/milvus_python_test") { sh 'python3 -m pip install -r requirements.txt' - sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" + sh "pytest . 
--alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" } // mysql database backend test load "ci/jenkins/jenkinsfile/cleanupSingleDev.groovy" @@ -13,10 +13,10 @@ timeout(time: 90, unit: 'MINUTES') { } dir ("milvus-helm") { dir ("milvus") { - sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." + sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." } } dir ("tests/milvus_python_test") { - sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" + sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" } } diff --git a/ci/jenkins/step/singleDevTest.groovy b/ci/jenkins/step/singleDevTest.groovy index db0fdc0f3b..70223219a5 100644 --- a/ci/jenkins/step/singleDevTest.groovy +++ b/ci/jenkins/step/singleDevTest.groovy @@ -1,7 +1,7 @@ timeout(time: 60, unit: 'MINUTES') { dir ("tests/milvus_python_test") { sh 'python3 -m pip install -r requirements.txt' - sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" + sh "pytest . 
--alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" } // mysql database backend test @@ -14,10 +14,10 @@ timeout(time: 60, unit: 'MINUTES') { // } // dir ("milvus-helm") { // dir ("milvus") { - // sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." + // sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ." // } // } // dir ("tests/milvus_python_test") { - // sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local" + // sh "pytest . 
--alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
    // }
}

From ae59ad1901a0dd98a33ba4ea72caea4e1bc7f1fe Mon Sep 17 00:00:00 2001
From: quicksilver
Date: Thu, 21 Nov 2019 16:49:01 +0800
Subject: [PATCH 12/14] format Jenkinsfile

---
 ci/jenkins/Jenkinsfile                      | 8 ++++++++
 ci/jenkins/step/cleanupSingleDev.groovy     | 8 ++++----
 ci/jenkins/step/deploySingle2Dev.groovy     | 2 +-
 ci/jenkins/step/singleDevNightlyTest.groovy | 6 +++---
 ci/jenkins/step/singleDevTest.groovy        | 6 +++---
 5 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/ci/jenkins/Jenkinsfile b/ci/jenkins/Jenkinsfile
index 47be9f5bb6..01048bd953 100644
--- a/ci/jenkins/Jenkinsfile
+++ b/ci/jenkins/Jenkinsfile
@@ -113,6 +113,10 @@ pipeline {
         }

         stage("Deploy to Development") {
+            environment {
+                HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
+            }
+
             agent {
                 kubernetes {
                     label "${env.BINRARY_VERSION}-dev-test"
@@ -243,6 +247,10 @@ pipeline {
         }

         stage("Deploy to Development") {
+            environment {
+                HELM_RELEASE_NAME = "${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}".toLowerCase()
+            }
+
             agent {
                 kubernetes {
                     label "${env.BINRARY_VERSION}-dev-test"
diff --git a/ci/jenkins/step/cleanupSingleDev.groovy b/ci/jenkins/step/cleanupSingleDev.groovy
index 3311592373..101105c027 100644
--- a/ci/jenkins/step/cleanupSingleDev.groovy
+++ b/ci/jenkins/step/cleanupSingleDev.groovy
@@ -1,12 +1,12 @@
 try {
-    def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
+    def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
     if (!helmResult) {
-        sh "helm del --purge ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
+        sh "helm del --purge ${env.HELM_RELEASE_NAME}"
     }
 } catch (exc) {
-    def helmResult = sh script: "helm status ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}", returnStatus: true
+    def helmResult = sh script: "helm status ${env.HELM_RELEASE_NAME}", returnStatus: true
     if (!helmResult) {
-        sh "helm del --purge ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}"
+        sh "helm del --purge ${env.HELM_RELEASE_NAME}"
     }
     throw exc
 }
diff --git a/ci/jenkins/step/deploySingle2Dev.groovy b/ci/jenkins/step/deploySingle2Dev.groovy
index 929548d645..cb2ad2b1cb 100644
--- a/ci/jenkins/step/deploySingle2Dev.groovy
+++ b/ci/jenkins/step/deploySingle2Dev.groovy
@@ -3,7 +3,7 @@ sh 'helm repo update'
 dir ('milvus-helm') {
     checkout([$class: 'GitSCM', branches: [[name: "0.6.0"]], userRemoteConfigs: [[url: "https://github.com/milvus-io/milvus-helm.git", name: 'origin', refspec: "+refs/heads/0.6.0:refs/remotes/origin/0.6.0"]]])
     dir ("milvus") {
-        sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
+        sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/sqlite_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
     }
 }
diff --git a/ci/jenkins/step/singleDevNightlyTest.groovy b/ci/jenkins/step/singleDevNightlyTest.groovy
index 9944a8bac0..d14ba1b66c 100644
--- a/ci/jenkins/step/singleDevNightlyTest.groovy
+++ b/ci/jenkins/step/singleDevNightlyTest.groovy
@@ -1,7 +1,7 @@
 timeout(time: 90, unit: 'MINUTES') {
     dir ("tests/milvus_python_test") {
         sh 'python3 -m pip install -r requirements.txt'
-        sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
+        sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
     }
     // mysql database backend test
     load "ci/jenkins/jenkinsfile/cleanupSingleDev.groovy"
@@ -13,10 +13,10 @@ timeout(time: 90, unit: 'MINUTES') {
     }
     dir ("milvus-helm") {
         dir ("milvus") {
-            sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
+            sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
         }
     }
     dir ("tests/milvus_python_test") {
-        sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
+        sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
     }
 }
diff --git a/ci/jenkins/step/singleDevTest.groovy b/ci/jenkins/step/singleDevTest.groovy
index 70223219a5..7b72eaacde 100644
--- a/ci/jenkins/step/singleDevTest.groovy
+++ b/ci/jenkins/step/singleDevTest.groovy
@@ -1,7 +1,7 @@
 timeout(time: 60, unit: 'MINUTES') {
     dir ("tests/milvus_python_test") {
         sh 'python3 -m pip install -r requirements.txt'
-        sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
+        sh "pytest . --alluredir=\"test_out/dev/single/sqlite\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
     }

     // mysql database backend test
@@ -14,10 +14,10 @@ timeout(time: 60, unit: 'MINUTES') {
     // }
     // dir ("milvus-helm") {
     //     dir ("milvus") {
-    //         sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
+    //         sh "helm install --wait --timeout 300 --set engine.image.tag=${DOCKER_VERSION} --set expose.type=clusterIP --name ${env.HELM_RELEASE_NAME} -f ci/db_backend/mysql_${env.BINRARY_VERSION}_values.yaml -f ci/filebeat/values.yaml --namespace milvus ."
     //     }
     // }
     // dir ("tests/milvus_python_test") {
-    //     sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.PIPELINE_NAME}-${env.SEMVER}-${env.BUILD_NUMBER}-single-${env.BINRARY_VERSION}-engine.milvus.svc.cluster.local"
+    //     sh "pytest . --alluredir=\"test_out/dev/single/mysql\" --level=1 --ip ${env.HELM_RELEASE_NAME}-engine.milvus.svc.cluster.local"
     // }
 }

From 32e5bba61aafcc1285acadfca63a28e39b9045a9 Mon Sep 17 00:00:00 2001
From: "xiaojun.lin"
Date: Thu, 21 Nov 2019 16:54:51 +0800
Subject: [PATCH 13/14] fix

---
 core/src/index/knowhere/knowhere/index/vector_index/IndexIVF.cpp | 1 -
 1 file changed, 1 deletion(-)

diff --git a/core/src/index/knowhere/knowhere/index/vector_index/IndexIVF.cpp b/core/src/index/knowhere/knowhere/index/vector_index/IndexIVF.cpp
index 7f30a97ea0..8b734abdc6 100644
--- a/core/src/index/knowhere/knowhere/index/vector_index/IndexIVF.cpp
+++ b/core/src/index/knowhere/knowhere/index/vector_index/IndexIVF.cpp
@@ -97,7 +97,6 @@ IVF::Serialize() {
     }

     std::lock_guard lk(mutex_);
-    Seal();
     return SerializeImpl();
 }

From acf4d0459d37a8760a4c931abeb29b7a4e9e916f Mon Sep 17 00:00:00 2001
From: zhenwu
Date: Thu, 21 Nov 2019 17:23:06 +0800
Subject: [PATCH 14/14] Add partition case

---
 tests/milvus_python_test/pytest.ini           |   2 +-
 tests/milvus_python_test/requirements.txt     |   2 +-
 .../requirements_no_pymilvus.txt              |   1 -
 tests/milvus_python_test/test_add_vectors.py  |  96 +++-
 tests/milvus_python_test/test_connect.py      |  13 +-
 tests/milvus_python_test/test_index.py        | 216 ++++++++-
 tests/milvus_python_test/test_mix.py          |   3 +-
 tests/milvus_python_test/test_partition.py    | 431 ++++++++++++++++++
 .../milvus_python_test/test_search_vectors.py | 238 +++++++++-
 tests/milvus_python_test/test_table.py        |   2 +-
 tests/milvus_python_test/test_table_count.py  |  88 +++-
 11 files changed, 1065 insertions(+), 27 deletions(-)
 create mode 100644 tests/milvus_python_test/test_partition.py

diff --git a/tests/milvus_python_test/pytest.ini b/tests/milvus_python_test/pytest.ini
index 3f95dc29b8..3ae6a790db 100644
--- a/tests/milvus_python_test/pytest.ini
+++ b/tests/milvus_python_test/pytest.ini
@@ -4,6 +4,6 @@ log_format = [%(asctime)s-%(levelname)s-%(name)s]: %(message)s (%(filename)s:%(l
 log_cli = true
 log_level = 20
-timeout = 300
+timeout = 600
 level = 1
\ No newline at end of file
diff --git a/tests/milvus_python_test/requirements.txt b/tests/milvus_python_test/requirements.txt
index c8fc02c096..016c8dedfc 100644
--- a/tests/milvus_python_test/requirements.txt
+++ b/tests/milvus_python_test/requirements.txt
@@ -22,4 +22,4 @@ wcwidth==0.1.7
 wrapt==1.11.1
 zipp==0.5.1
 scikit-learn>=0.19.1
-pymilvus-test>=0.2.0
\ No newline at end of file
+pymilvus-test>=0.2.0
diff --git a/tests/milvus_python_test/requirements_no_pymilvus.txt b/tests/milvus_python_test/requirements_no_pymilvus.txt
index 45884c0c71..c6a933736e 100644
--- a/tests/milvus_python_test/requirements_no_pymilvus.txt
+++ b/tests/milvus_python_test/requirements_no_pymilvus.txt
@@ -17,7 +17,6 @@ allure-pytest==2.7.0
 pytest-print==0.1.2
 pytest-level==0.1.1
 six==1.12.0
-thrift==0.11.0
 typed-ast==1.3.5
 wcwidth==0.1.7
 wrapt==1.11.1
diff --git a/tests/milvus_python_test/test_add_vectors.py b/tests/milvus_python_test/test_add_vectors.py
index f9f7f7d4ca..7245d51ea2 100644
--- a/tests/milvus_python_test/test_add_vectors.py
+++ b/tests/milvus_python_test/test_add_vectors.py
@@ -15,7 +15,7 @@ table_id = "test_add"
 ADD_TIMEOUT = 60
 nprobe = 1
 epsilon = 0.0001
-
+tag = "1970-01-01"

 class TestAddBase:
     """
@@ -186,6 +186,7 @@ class TestAddBase:
         expected: status ok
         '''
         index_param = get_simple_index_params
+        logging.getLogger().info(index_param)
         vector = gen_single_vector(dim)
         status, ids = connect.add_vectors(table, vector)
         status = connect.create_index(table, index_param)
@@ -439,6 +440,80 @@ class TestAddBase:
         assert status.OK()
         assert len(ids) == nq

+    @pytest.mark.timeout(ADD_TIMEOUT)
+    def test_add_vectors_tag(self, connect, table):
+        '''
+        target: test add vectors in table created before
+        method: create table and add vectors in it, with the partition_tag param
+        expected: the table row count equals to nq
+        '''
+        nq = 5
+        partition_name = gen_unique_str()
+        vectors = gen_vectors(nq, dim)
+        status = connect.create_partition(table, partition_name, tag)
+        status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
+        assert status.OK()
+        assert len(ids) == nq
+
+    @pytest.mark.timeout(ADD_TIMEOUT)
+    def test_add_vectors_tag_A(self, connect, table):
+        '''
+        target: test add vectors in table created before
+        method: create partition and add vectors in it
+        expected: the table row count equals to nq
+        '''
+        nq = 5
+        partition_name = gen_unique_str()
+        vectors = gen_vectors(nq, dim)
+        status = connect.create_partition(table, partition_name, tag)
+        status, ids = connect.add_vectors(partition_name, vectors)
+        assert status.OK()
+        assert len(ids) == nq
+
+    @pytest.mark.timeout(ADD_TIMEOUT)
+    def test_add_vectors_tag_not_existed(self, connect, table):
+        '''
+        target: test add vectors in table created before
+        method: create table and add vectors in it, with the not existed partition_tag param
+        expected: status not ok
+        '''
+        nq = 5
+        vectors = gen_vectors(nq, dim)
+        status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
+        assert not status.OK()
+
+    @pytest.mark.timeout(ADD_TIMEOUT)
+    def test_add_vectors_tag_not_existed_A(self, connect, table):
+        '''
+        target: test add vectors in table created before
+        method: create partition, add vectors with the not existed partition_tag param
+        expected: status not ok
+        '''
+        nq = 5
+        vectors = gen_vectors(nq, dim)
+        new_tag = "new_tag"
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status, ids = connect.add_vectors(table, vectors, partition_tag=new_tag)
+        assert not status.OK()
+
+    @pytest.mark.timeout(ADD_TIMEOUT)
+    def test_add_vectors_tag_existed(self, connect, table):
+        '''
+        target: test add vectors in table created before
+        method: create table and add vectors in it repeatedly, with the partition_tag param
+        expected: the table row count equals to nq
+        '''
+        nq = 5
+        partition_name = gen_unique_str()
+        vectors = gen_vectors(nq, dim)
+        status = connect.create_partition(table, partition_name, tag)
+        status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
+        for i in range(5):
+            status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
+        assert status.OK()
+        assert len(ids) == nq
+
     @pytest.mark.level(2)
     def test_add_vectors_without_connect(self, dis_connect, table):
         '''
@@ -1198,7 +1273,8 @@ class TestAddAdvance:
         assert len(ids) == nb
         assert status.OK()

-class TestAddTableNameInvalid(object):
+
+class TestNameInvalid(object):
     """
     Test adding vectors with invalid table names
     """
@@ -1209,13 +1285,27 @@
     def get_table_name(self, request):
         yield request.param

+    @pytest.fixture(
+        scope="function",
+        params=gen_invalid_table_names()
+    )
+    def get_tag_name(self, request):
+        yield request.param
+
     @pytest.mark.level(2)
-    def test_add_vectors_with_invalid_tablename(self, connect, get_table_name):
+    def test_add_vectors_with_invalid_table_name(self, connect, get_table_name):
         table_name = get_table_name
         vectors = gen_vectors(1, dim)
         status, result = connect.add_vectors(table_name, vectors)
         assert not status.OK()

+    @pytest.mark.level(2)
+    def test_add_vectors_with_invalid_tag_name(self, connect, table, get_tag_name):
+        tag_name = get_tag_name
+        vectors = gen_vectors(1, dim)
+        status, result = connect.add_vectors(table, vectors, partition_tag=tag_name)
+        assert not status.OK()
+

 class TestAddTableVectorsInvalid(object):
     single_vector = gen_single_vector(dim)
diff --git a/tests/milvus_python_test/test_connect.py b/tests/milvus_python_test/test_connect.py
index dd7e80c1f9..143ac4d8bf 100644
--- a/tests/milvus_python_test/test_connect.py
+++ b/tests/milvus_python_test/test_connect.py
@@ -149,15 +149,14 @@ class TestConnect:
         milvus.connect(uri=uri_value, timeout=1)
         assert not milvus.connected()

-    # TODO: enable
-    def _test_connect_with_multiprocess(self, args):
+    def test_connect_with_multiprocess(self, args):
         '''
         target: test uri connect with multiprocess
         method: set correct uri, test with multiprocessing connecting
         expected: all connection is connected
         '''
         uri_value = "tcp://%s:%s" % (args["ip"], args["port"])
-        process_num = 4
+        process_num = 10
         processes = []

         def connect(milvus):
@@ -248,7 +247,7 @@ class TestConnect:
         expected: connect raise an exception and connected is false
         '''
         milvus = Milvus()
-        uri_value = "tcp://%s:19540" % args["ip"]
+        uri_value = "tcp://%s:39540" % args["ip"]
         with pytest.raises(Exception) as e:
             milvus.connect(host=args["ip"], port="", uri=uri_value)

@@ -264,6 +263,7 @@
         milvus.connect(host="", port=args["port"], uri=uri_value, timeout=1)
         assert not milvus.connected()

+    # Disable, (issue: https://github.com/milvus-io/milvus/issues/288)
     def test_connect_param_priority_both_hostip_uri(self, args):
         '''
         target: both host_ip_port / uri are both given, and not null, use the uri params
@@ -273,8 +273,9 @@
         milvus = Milvus()
         uri_value = "tcp://%s:%s" % (args["ip"], args["port"])
         with pytest.raises(Exception) as e:
-            milvus.connect(host=args["ip"], port=19540, uri=uri_value, timeout=1)
-        assert not milvus.connected()
+            res = milvus.connect(host=args["ip"], port=39540, uri=uri_value, timeout=1)
+            logging.getLogger().info(res)
+        # assert not milvus.connected()

     def _test_add_vector_and_disconnect_concurrently(self):
         '''
diff --git a/tests/milvus_python_test/test_index.py b/tests/milvus_python_test/test_index.py
index 269e6137da..39aadb9d33 100644
--- a/tests/milvus_python_test/test_index.py
+++ b/tests/milvus_python_test/test_index.py
@@ -20,6 +20,7 @@ vectors = sklearn.preprocessing.normalize(vectors, axis=1, norm='l2')
 vectors = vectors.tolist()
 BUILD_TIMEOUT = 60
 nprobe = 1
+tag = "1970-01-01"

 class TestIndexBase:
@@ -62,6 +63,21 @@ class TestIndexBase:
         status = connect.create_index(table, index_params)
         assert status.OK()

+    @pytest.mark.timeout(BUILD_TIMEOUT)
+    def test_create_index_partition(self, connect, table, get_index_params):
+        '''
+        target: test create index interface
+        method: create table, create partition, and add vectors in it, create index
+        expected: return code equals to 0, and search success
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_index_params
+        logging.getLogger().info(index_params)
+        status = connect.create_partition(table, partition_name, tag)
+        status, ids = connect.add_vectors(table, vectors, partition_tag=tag)
+        status = connect.create_index(table, index_params)
+        assert status.OK()
+
     @pytest.mark.level(2)
     def test_create_index_without_connect(self, dis_connect, table):
         '''
@@ -555,6 +571,21 @@ class TestIndexIP:
         status = connect.create_index(ip_table, index_params)
         assert status.OK()

+    @pytest.mark.timeout(BUILD_TIMEOUT)
+    def test_create_index_partition(self, connect, ip_table, get_index_params):
+        '''
+        target: test create index interface
+        method: create table, create partition, and add vectors in it, create index
+        expected: return code equals to 0, and search success
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_index_params
+        logging.getLogger().info(index_params)
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(partition_name, index_params)
+        assert status.OK()
+
     @pytest.mark.level(2)
     def test_create_index_without_connect(self, dis_connect, ip_table):
         '''
@@ -583,9 +614,9 @@ class TestIndexIP:
         query_vecs = [vectors[0], vectors[1], vectors[2]]
         top_k = 5
         status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vecs)
+        logging.getLogger().info(result)
         assert status.OK()
         assert len(result) == len(query_vecs)
-        # logging.getLogger().info(result)

     # TODO: enable
     @pytest.mark.timeout(BUILD_TIMEOUT)
@@ -743,13 +774,13 @@ class TestIndexIP:
     ******************************************************************
     """

-    def test_describe_index(self, connect, ip_table, get_index_params):
+    def test_describe_index(self, connect, ip_table, get_simple_index_params):
         '''
         target: test describe index interface
         method: create table and add vectors in it, create index, call describe index
         expected: return code 0, and index instructure
         '''
-        index_params = get_index_params
+        index_params = get_simple_index_params
         logging.getLogger().info(index_params)
         status, ids = connect.add_vectors(ip_table, vectors)
         status = connect.create_index(ip_table, index_params)
@@ -759,6 +790,80 @@ class TestIndexIP:
         assert result._table_name == ip_table
         assert result._index_type == index_params["index_type"]

+    def test_describe_index_partition(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test describe index interface
+        method: create table, create partition and add vectors in it, create index, call describe index
+        expected: return code 0, and index structure
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_simple_index_params
+        logging.getLogger().info(index_params)
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(ip_table, index_params)
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == ip_table
+        assert result._index_type == index_params["index_type"]
+        status, result = connect.describe_index(partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == partition_name
+        assert result._index_type == index_params["index_type"]
+
+    def test_describe_index_partition_A(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test describe index interface
+        method: create table, create partition and add vectors in it, create index on partition, call describe index
+        expected: return code 0, and index structure
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_simple_index_params
+        logging.getLogger().info(index_params)
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(partition_name, index_params)
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == ip_table
+        assert result._index_type == IndexType.FLAT
+        status, result = connect.describe_index(partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == partition_name
+        assert result._index_type == index_params["index_type"]
+
+    def test_describe_index_partition_B(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test describe index interface
+        method: create table, create partitions and add vectors in it, create index on partitions, call describe index
+        expected: return code 0, and index structure
+        '''
+        partition_name = gen_unique_str()
+        new_partition_name = gen_unique_str()
+        new_tag = "new_tag"
+        index_params = get_simple_index_params
+        logging.getLogger().info(index_params)
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status = connect.create_partition(ip_table, new_partition_name, new_tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=new_tag)
+        status = connect.create_index(partition_name, index_params)
+        status = connect.create_index(new_partition_name, index_params)
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == ip_table
+        assert result._index_type == IndexType.FLAT
+        status, result = connect.describe_index(new_partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == new_partition_name
+        assert result._index_type == index_params["index_type"]
+
     def test_describe_and_drop_index_multi_tables(self, connect, get_simple_index_params):
         '''
         target: test create, describe and drop index interface with multiple tables of IP
@@ -849,6 +954,111 @@
         assert result._table_name == ip_table
         assert result._index_type == IndexType.FLAT

+    def test_drop_index_partition(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test drop index interface
+        method: create table, create partition and add vectors in it, create index on table, call drop table index
+        expected: return code 0, and default index param
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_simple_index_params
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(ip_table, index_params)
+        assert status.OK()
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        status = connect.drop_index(ip_table)
+        assert status.OK()
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == ip_table
+        assert result._index_type == IndexType.FLAT
+
+    def test_drop_index_partition_A(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test drop index interface
+        method: create table, create partition and add vectors in it, create index on partition, call drop table index
+        expected: return code 0, and default index param
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_simple_index_params
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(partition_name, index_params)
+        assert status.OK()
+        status = connect.drop_index(ip_table)
+        assert status.OK()
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == ip_table
+        assert result._index_type == IndexType.FLAT
+        status, result = connect.describe_index(partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == partition_name
+        assert result._index_type == IndexType.FLAT
+
+    def test_drop_index_partition_B(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test drop index interface
+        method: create table, create partition and add vectors in it, create index on partition, call drop partition index
+        expected: return code 0, and default index param
+        '''
+        partition_name = gen_unique_str()
+        index_params = get_simple_index_params
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status, ids = connect.add_vectors(ip_table, vectors, partition_tag=tag)
+        status = connect.create_index(partition_name, index_params)
+        assert status.OK()
+        status = connect.drop_index(partition_name)
+        assert status.OK()
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == ip_table
+        assert result._index_type == IndexType.FLAT
+        status, result = connect.describe_index(partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == partition_name
+        assert result._index_type == IndexType.FLAT
+
+    def test_drop_index_partition_C(self, connect, ip_table, get_simple_index_params):
+        '''
+        target: test drop index interface
+        method: create table, create partitions and add vectors in it, create index on partitions, call drop partition index
+        expected: return code 0, and default index param
+        '''
+        partition_name = gen_unique_str()
+        new_partition_name = gen_unique_str()
+        new_tag = "new_tag"
+        index_params = get_simple_index_params
+        status = connect.create_partition(ip_table, partition_name, tag)
+        status = connect.create_partition(ip_table, new_partition_name, new_tag)
+        status, ids = connect.add_vectors(ip_table, vectors)
+        status = connect.create_index(ip_table, index_params)
+        assert status.OK()
+        status = connect.drop_index(new_partition_name)
+        assert status.OK()
+        status, result = connect.describe_index(new_partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == 16384
+        assert result._table_name == new_partition_name
+        assert result._index_type == IndexType.FLAT
+        status, result = connect.describe_index(partition_name)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == partition_name
+        assert result._index_type == index_params["index_type"]
+        status, result = connect.describe_index(ip_table)
+        logging.getLogger().info(result)
+        assert result._nlist == index_params["nlist"]
+        assert result._table_name == ip_table
+        assert result._index_type == index_params["index_type"]
+
     def test_drop_index_repeatly(self, connect, ip_table, get_simple_index_params):
         '''
         target: test drop index repeatly
diff --git a/tests/milvus_python_test/test_mix.py b/tests/milvus_python_test/test_mix.py
index f099db5c31..5ef9ba2cde 100644
--- a/tests/milvus_python_test/test_mix.py
+++ b/tests/milvus_python_test/test_mix.py
@@ -25,9 +25,8 @@ index_params = {'index_type': IndexType.IVFLAT, 'nlist': 16384}

 class TestMixBase:

-    # TODO: enable
     def test_search_during_createIndex(self, args):
-        loops = 100000
+        loops = 10000
         table = gen_unique_str()
         query_vecs = [vectors[0], vectors[1]]
         uri = "tcp://%s:%s" % (args["ip"], args["port"])
diff --git a/tests/milvus_python_test/test_partition.py b/tests/milvus_python_test/test_partition.py
new file mode 100644
index 0000000000..cbb0b5bc8e
--- /dev/null
+++ b/tests/milvus_python_test/test_partition.py
@@ -0,0 +1,431 @@
+import time
+import random
+import pdb
+import threading
+import logging
+from multiprocessing import Pool, Process
+import pytest
+from milvus import Milvus, IndexType, MetricType
+from utils import *
+
+
+dim = 128
+index_file_size = 10
+table_id = "test_add"
+ADD_TIMEOUT = 60
+nprobe = 1
+epsilon = 0.0001
+tag = "1970-01-01"
+
+
+class TestCreateBase:
+
+    """
+    ******************************************************************
+      The following cases are used to test `create_partition` function
+    ******************************************************************
+    """
+    def test_create_partition(self, connect, table):
+        '''
+        target: test create partition, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+
+    def test_create_partition_repeat(self, connect, table):
+        '''
+        target: test create partition, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status = connect.create_partition(table, partition_name, tag)
+        assert not status.OK()
+
+    def test_create_partition_recursively(self, connect, table):
+        '''
+        target: test create partition, and create partition in parent partition, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        partition_name = gen_unique_str()
+        new_partition_name = gen_unique_str()
+        new_tag = "new_tag"
+        status = connect.create_partition(table, partition_name, tag)
+        status = connect.create_partition(partition_name, new_partition_name, new_tag)
+        assert not status.OK()
+
+    def test_create_partition_table_not_existed(self, connect):
+        '''
+        target: test create partition, its owner table name not existed in db, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        table_name = gen_unique_str()
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table_name, partition_name, tag)
+        assert not status.OK()
+
+    def test_create_partition_partition_name_existed(self, connect, table):
+        '''
+        target: test create partition, and create the same partition again, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        tag_new = "tag_new"
+        status = connect.create_partition(table, partition_name, tag_new)
+        assert not status.OK()
+
+    def test_create_partition_partition_name_equals_table(self, connect, table):
+        '''
+        target: test create partition, the partition equals to table, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        status = connect.create_partition(table, table, tag)
+        assert not status.OK()
+
+    def test_create_partition_partition_name_None(self, connect, table):
+        '''
+        target: test create partition, partition name set None, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        partition_name = None
+        status = connect.create_partition(table, partition_name, tag)
+        assert not status.OK()
+
+    def test_create_partition_tag_name_None(self, connect, table):
+        '''
+        target: test create partition, tag name set None, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        tag_name = None
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag_name)
+        assert not status.OK()
+
+    def test_create_different_partition_tag_name_existed(self, connect, table):
+        '''
+        target: test create partition, and create the same partition tag again, check status returned
+        method: call function: create_partition with the same tag name
+        expected: status not ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        new_partition_name = gen_unique_str()
+        status = connect.create_partition(table, new_partition_name, tag)
+        assert not status.OK()
+
+    def test_create_partition_add_vectors(self, connect, table):
+        '''
+        target: test create partition, and insert vectors, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        nq = 100
+        vectors = gen_vectors(nq, dim)
+        ids = [i for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids)
+        assert status.OK()
+
+    def test_create_partition_insert_with_tag(self, connect, table):
+        '''
+        target: test create partition, and insert vectors, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        nq = 100
+        vectors = gen_vectors(nq, dim)
+        ids = [i for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
+        assert status.OK()
+
+    def test_create_partition_insert_with_tag_not_existed(self, connect, table):
+        '''
+        target: test create partition, and insert vectors, check status returned
+        method: call function: create_partition
+        expected: status not ok
+        '''
+        tag_new = "tag_new"
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        nq = 100
+        vectors = gen_vectors(nq, dim)
+        ids = [i for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids, partition_tag=tag_new)
+        assert not status.OK()
+
+    def test_create_partition_insert_same_tags(self, connect, table):
+        '''
+        target: test create partition, and insert vectors, check status returned
+        method: call function: create_partition
+        expected: status ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        nq = 100
+        vectors = gen_vectors(nq, dim)
+        ids = [i for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
+        ids = [(i+100) for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
+        assert status.OK()
+        time.sleep(1)
+        status, res = connect.get_table_row_count(partition_name)
+        assert res == nq * 2
+
+    def test_create_partition_insert_same_tags_two_tables(self, connect, table):
+        '''
+        target: test create two partitions, and insert vectors with the same tag to each table, check status returned
+        method: call function: create_partition
+        expected: status ok, table length is correct
+        '''
+        partition_name = gen_unique_str()
+        table_new = gen_unique_str()
+        new_partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        assert status.OK()
+        param = {'table_name': table_new,
+                 'dimension': dim,
+                 'index_file_size': index_file_size,
+                 'metric_type': MetricType.L2}
+        status = connect.create_table(param)
+        status = connect.create_partition(table_new, new_partition_name, tag)
+        assert status.OK()
+        nq = 100
+        vectors = gen_vectors(nq, dim)
+        ids = [i for i in range(nq)]
+        status, ids = connect.insert(table, vectors, ids, partition_tag=tag)
+        ids = [(i+100) for i in range(nq)]
+        status, ids = connect.insert(table_new, vectors, ids, partition_tag=tag)
+        assert status.OK()
+        time.sleep(1)
+        status, res = connect.get_table_row_count(new_partition_name)
+        assert res == nq
+
+
+class TestShowBase:
+
+    """
+    ******************************************************************
+      The following cases are used to test `show_partitions` function
+    ******************************************************************
+    """
+    def test_show_partitions(self, connect, table):
+        '''
+        target: test show partitions, check status and partitions returned
+        method: create partition first, then call function: show_partitions
+        expected: status ok, partition correct
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status, res = connect.show_partitions(table)
+        assert status.OK()
+
+    def test_show_partitions_no_partition(self, connect, table):
+        '''
+        target: test show partitions with table name, check status and partitions returned
+        method: call function: show_partitions
+        expected: status ok, partitions correct
+        '''
+        partition_name = gen_unique_str()
+        status, res = connect.show_partitions(table)
+        assert status.OK()
+
+    def test_show_partitions_no_partition_recursive(self, connect, table):
+        '''
+        target: test show partitions with partition name, check status and partitions returned
+        method: call function: show_partitions
+        expected: status ok, no partitions
+        '''
+        partition_name = gen_unique_str()
+        status, res = connect.show_partitions(partition_name)
+        assert status.OK()
+        assert len(res) == 0
+
+    def test_show_multi_partitions(self, connect, table):
+        '''
+        target: test show partitions, check status and partitions returned
+        method: create partitions first, then call function: show_partitions
+        expected: status ok, partitions correct
+        '''
+        partition_name = gen_unique_str()
+        new_partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status = connect.create_partition(table, new_partition_name, tag)
+        status, res = connect.show_partitions(table)
+        assert status.OK()
+
+
+class TestDropBase:
+
+    """
+    ******************************************************************
+      The following cases are used to test `drop_partition` function
+    ******************************************************************
+    """
+    def test_drop_partition(self, connect, table):
+        '''
+        target: test drop partition, check status and partition if existed
+        method: create partitions first, then call function: drop_partition
+        expected: status ok, no partitions in db
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status = connect.drop_partition(table, tag)
+        assert status.OK()
+        # check if the partition existed
+        status, res = connect.show_partitions(table)
+        assert partition_name not in res
+
+    def test_drop_partition_tag_not_existed(self, connect, table):
+        '''
+        target: test drop partition, but tag not existed
+        method: create partitions first, then call function: drop_partition
+        expected: status not ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        new_tag = "new_tag"
+        status = connect.drop_partition(table, new_tag)
+        assert not status.OK()
+
+    def test_drop_partition_tag_not_existed_A(self, connect, table):
+        '''
+        target: test drop partition, but table not existed
+        method: create partitions first, then call function: drop_partition
+        expected: status not ok
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        new_table = gen_unique_str()
+        status = connect.drop_partition(new_table, tag)
+        assert not status.OK()
+
+    def test_drop_partition_repeatedly(self, connect, table):
+        '''
+        target: test drop partition twice, check status and partition if existed
+        method: create partitions first, then call function: drop_partition
+        expected: status not ok, no partitions in db
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+        status = connect.drop_partition(table, tag)
+        status = connect.drop_partition(table, tag)
+        time.sleep(2)
+        assert not status.OK()
+        status, res = connect.show_partitions(table)
+        assert partition_name not in res
+
+    def test_drop_partition_create(self, connect, table):
+        '''
+        target: test drop partition, and create again, check status
+        method: create partitions first, then call function: drop_partition, create_partition
+        expected: status not ok, partition in db
+        '''
+        partition_name = gen_unique_str()
+        status = connect.create_partition(table, partition_name, tag)
+
status = connect.drop_partition(table, tag) + time.sleep(2) + status = connect.create_partition(table, partition_name, tag) + assert status.OK() + status, res = connect.show_partitions(table) + assert partition_name == res[0].partition_name + + +class TestNameInvalid(object): + @pytest.fixture( + scope="function", + params=gen_invalid_table_names() + ) + def get_partition_name(self, request): + yield request.param + + @pytest.fixture( + scope="function", + params=gen_invalid_table_names() + ) + def get_tag_name(self, request): + yield request.param + + @pytest.fixture( + scope="function", + params=gen_invalid_table_names() + ) + def get_table_name(self, request): + yield request.param + + def test_create_partition_with_invalid_partition_name(self, connect, table, get_partition_name): + ''' + target: test create partition, with invalid partition name, check status returned + method: call function: create_partition + expected: status not ok + ''' + partition_name = get_partition_name + status = connect.create_partition(table, partition_name, tag) + assert not status.OK() + + def test_create_partition_with_invalid_tag_name(self, connect, table): + ''' + target: test create partition, with invalid partition name, check status returned + method: call function: create_partition + expected: status not ok + ''' + tag_name = " " + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag_name) + assert not status.OK() + + def test_drop_partition_with_invalid_table_name(self, connect, table, get_table_name): + ''' + target: test drop partition, with invalid table name, check status returned + method: call function: drop_partition + expected: status not ok + ''' + table_name = get_table_name + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + status = connect.drop_partition(table_name, tag) + assert not status.OK() + + def test_drop_partition_with_invalid_tag_name(self, connect, table, 
get_tag_name): + ''' + target: test drop partition, with invalid tag name, check status returned + method: call function: drop_partition + expected: status not ok + ''' + tag_name = get_tag_name + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + status = connect.drop_partition(table, tag_name) + assert not status.OK() + + def test_show_partitions_with_invalid_table_name(self, connect, table, get_table_name): + ''' + target: test show partitions, with invalid table name, check status returned + method: call function: show_partitions + expected: status not ok + ''' + table_name = get_table_name + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + status, res = connect.show_partitions(table_name) + assert not status.OK() \ No newline at end of file diff --git a/tests/milvus_python_test/test_search_vectors.py b/tests/milvus_python_test/test_search_vectors.py index 10892d6de3..e0b1bc09ea 100644 --- a/tests/milvus_python_test/test_search_vectors.py +++ b/tests/milvus_python_test/test_search_vectors.py @@ -16,8 +16,9 @@ add_interval_time = 2 vectors = gen_vectors(100, dim) # vectors /= numpy.linalg.norm(vectors) # vectors = vectors.tolist() -nrpobe = 1 +nprobe = 1 epsilon = 0.001 +tag = "1970-01-01" class TestSearchBase: @@ -49,6 +50,15 @@ class TestSearchBase: pytest.skip("sq8h not support in open source") return request.param + @pytest.fixture( + scope="function", + params=gen_simple_index_params() + ) + def get_simple_index_params(self, request, args): + if "internal" not in args: + if request.param["index_type"] == IndexType.IVF_SQ8H: + pytest.skip("sq8h not support in open source") + return request.param """ generate top-k params """ @@ -70,7 +80,7 @@ class TestSearchBase: query_vec = [vectors[0]] top_k = get_top_k nprobe = 1 - status, result = connect.search_vectors(table, top_k, nrpobe, query_vec) + status, result = connect.search_vectors(table, top_k, nprobe, 
query_vec) if top_k <= 2048: assert status.OK() assert len(result[0]) == min(len(vectors), top_k) @@ -85,7 +95,6 @@ class TestSearchBase: method: search with the given vectors, check the result expected: search status ok, and the length of the result is top_k ''' - index_params = get_index_params logging.getLogger().info(index_params) vectors, ids = self.init_data(connect, table) @@ -93,7 +102,7 @@ class TestSearchBase: query_vec = [vectors[0]] top_k = 10 nprobe = 1 - status, result = connect.search_vectors(table, top_k, nrpobe, query_vec) + status, result = connect.search_vectors(table, top_k, nprobe, query_vec) logging.getLogger().info(result) if top_k <= 1024: assert status.OK() @@ -103,6 +112,160 @@ class TestSearchBase: else: assert not status.OK() + def test_search_l2_index_params_partition(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: add vectors into table, search with the given vectors, check the result + expected: search status ok, and the length of the result is top_k, search table with partition tag return empty + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + vectors, ids = self.init_data(connect, table) + status = connect.create_index(table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(table, top_k, nprobe, query_vec) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert result[0][0].distance <= epsilon + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result) == 0 + + def 
test_search_l2_index_params_partition_A(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search partition with the given vectors, check the result + expected: search status ok, and the length of the result is 0 + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + vectors, ids = self.init_data(connect, table) + status = connect.create_index(table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result) == 0 + + def test_search_l2_index_params_partition_B(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search with the given vectors, check the result + expected: search status ok, and the length of the result is top_k + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + vectors, ids = self.init_data(connect, partition_name) + status = connect.create_index(table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(table, top_k, nprobe, query_vec) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert result[0][0].distance <= epsilon + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == 
min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert result[0][0].distance <= epsilon + status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result) == 0 + + def test_search_l2_index_params_partition_C(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search with the given vectors and tags (one of the tags not existed in table), check the result + expected: search status ok, and the length of the result is top_k + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + vectors, ids = self.init_data(connect, partition_name) + status = connect.create_index(table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag, "new_tag"]) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert result[0][0].distance <= epsilon + + def test_search_l2_index_params_partition_D(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search with the given vectors and tag (tag name not existed in table), check the result + expected: search status ok, and the length of the result is top_k + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + vectors, ids = self.init_data(connect, partition_name) + status = connect.create_index(table, index_params) + 
query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=["new_tag"]) + logging.getLogger().info(result) + assert status.OK() + assert len(result) == 0 + + def test_search_l2_index_params_partition_E(self, connect, table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search table with the given vectors and tags, check the result + expected: search status ok, and the length of the result is top_k + ''' + new_tag = "new_tag" + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + new_partition_name = gen_unique_str() + status = connect.create_partition(table, partition_name, tag) + status = connect.create_partition(table, new_partition_name, new_tag) + vectors, ids = self.init_data(connect, partition_name) + new_vectors, new_ids = self.init_data(connect, new_partition_name, nb=1000) + status = connect.create_index(table, index_params) + query_vec = [vectors[0], new_vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[tag, new_tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert check_result(result[1], new_ids[0]) + assert result[0][0].distance <= epsilon + assert result[1][0].distance <= epsilon + status, result = connect.search_vectors(table, top_k, nprobe, query_vec, partition_tags=[new_tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[1], new_ids[0]) + assert result[1][0].distance <= epsilon + def test_search_ip_index_params(self, connect, ip_table, get_index_params): ''' target: test basic search fuction, all the search params is corrent, test 
all index params, and build @@ -117,7 +280,7 @@ class TestSearchBase: query_vec = [vectors[0]] top_k = 10 nprobe = 1 - status, result = connect.search_vectors(ip_table, top_k, nrpobe, query_vec) + status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec) logging.getLogger().info(result) if top_k <= 1024: @@ -128,6 +291,59 @@ class TestSearchBase: else: assert not status.OK() + def test_search_ip_index_params_partition(self, connect, ip_table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search with the given vectors, check the result + expected: search status ok, and the length of the result is top_k + ''' + index_params = get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(ip_table, partition_name, tag) + vectors, ids = self.init_data(connect, ip_table) + status = connect.create_index(ip_table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert abs(result[0][0].distance - numpy.inner(numpy.array(query_vec[0]), numpy.array(query_vec[0]))) <= gen_inaccuracy(result[0][0].distance) + status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result) == 0 + + def test_search_ip_index_params_partition_A(self, connect, ip_table, get_simple_index_params): + ''' + target: test basic search fuction, all the search params is corrent, test all index params, and build + method: search with the given vectors and tag, check the result + expected: search status ok, and the length of the result is top_k + ''' + index_params = 
get_simple_index_params + logging.getLogger().info(index_params) + partition_name = gen_unique_str() + status = connect.create_partition(ip_table, partition_name, tag) + vectors, ids = self.init_data(connect, partition_name) + status = connect.create_index(ip_table, index_params) + query_vec = [vectors[0]] + top_k = 10 + nprobe = 1 + status, result = connect.search_vectors(ip_table, top_k, nprobe, query_vec, partition_tags=[tag]) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + assert abs(result[0][0].distance - numpy.inner(numpy.array(query_vec[0]), numpy.array(query_vec[0]))) <= gen_inaccuracy(result[0][0].distance) + status, result = connect.search_vectors(partition_name, top_k, nprobe, query_vec) + logging.getLogger().info(result) + assert status.OK() + assert len(result[0]) == min(len(vectors), top_k) + assert check_result(result[0], ids[0]) + @pytest.mark.level(2) def test_search_vectors_without_connect(self, dis_connect, table): ''' @@ -518,6 +734,14 @@ class TestSearchParamsInvalid(object): status, result = connect.search_vectors(table_name, top_k, nprobe, query_vecs) assert not status.OK() + @pytest.mark.level(1) + def test_search_with_invalid_tag_format(self, connect, table): + top_k = 1 + nprobe = 1 + query_vecs = gen_vectors(1, dim) + with pytest.raises(Exception) as e: + status, result = connect.search_vectors(table_name, top_k, nprobe, query_vecs, partition_tags="tag") + """ Test search table with invalid top-k """ @@ -574,7 +798,7 @@ class TestSearchParamsInvalid(object): yield request.param @pytest.mark.level(1) - def test_search_with_invalid_nrpobe(self, connect, table, get_nprobes): + def test_search_with_invalid_nprobe(self, connect, table, get_nprobes): ''' target: test search fuction, with the wrong top_k method: search with top_k @@ -592,7 +816,7 @@ class TestSearchParamsInvalid(object): status, result = connect.search_vectors(table, top_k, 
nprobe, query_vecs) @pytest.mark.level(2) - def test_search_with_invalid_nrpobe_ip(self, connect, ip_table, get_nprobes): + def test_search_with_invalid_nprobe_ip(self, connect, ip_table, get_nprobes): ''' target: test search fuction, with the wrong top_k method: search with top_k diff --git a/tests/milvus_python_test/test_table.py b/tests/milvus_python_test/test_table.py index 6af38bac15..40b0850859 100644 --- a/tests/milvus_python_test/test_table.py +++ b/tests/milvus_python_test/test_table.py @@ -297,7 +297,7 @@ class TestTable: ''' table_name = gen_unique_str("test_table") status = connect.delete_table(table_name) - assert not status.code==0 + assert not status.OK() def test_delete_table_repeatedly(self, connect): ''' diff --git a/tests/milvus_python_test/test_table_count.py b/tests/milvus_python_test/test_table_count.py index 4e8a780c62..77780c8faa 100644 --- a/tests/milvus_python_test/test_table_count.py +++ b/tests/milvus_python_test/test_table_count.py @@ -13,8 +13,8 @@ from milvus import IndexType, MetricType dim = 128 index_file_size = 10 -add_time_interval = 5 - +add_time_interval = 3 +tag = "1970-01-01" class TestTableCount: """ @@ -58,6 +58,90 @@ class TestTableCount: status, res = connect.get_table_row_count(table) assert res == nb + def test_table_rows_count_partition(self, connect, table, add_vectors_nb): + ''' + target: test table rows_count is correct or not + method: create table, create partition and add vectors in it, + assert the value returned by get_table_row_count method is equal to length of vectors + expected: the count is equal to the length of vectors + ''' + nb = add_vectors_nb + partition_name = gen_unique_str() + vectors = gen_vectors(nb, dim) + status = connect.create_partition(table, partition_name, tag) + assert status.OK() + res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag) + time.sleep(add_time_interval) + status, res = connect.get_table_row_count(table) + assert res == nb + + def 
test_table_rows_count_multi_partitions_A(self, connect, table, add_vectors_nb): + ''' + target: test table rows_count is correct or not + method: create table, create partitions and add vectors in it, + assert the value returned by get_table_row_count method is equal to length of vectors + expected: the count is equal to the length of vectors + ''' + new_tag = "new_tag" + nb = add_vectors_nb + partition_name = gen_unique_str() + new_partition_name = gen_unique_str() + vectors = gen_vectors(nb, dim) + status = connect.create_partition(table, partition_name, tag) + status = connect.create_partition(table, new_partition_name, new_tag) + assert status.OK() + res = connect.add_vectors(table_name=table, records=vectors) + time.sleep(add_time_interval) + status, res = connect.get_table_row_count(table) + assert res == nb + + def test_table_rows_count_multi_partitions_B(self, connect, table, add_vectors_nb): + ''' + target: test table rows_count is correct or not + method: create table, create partitions and add vectors in one of the partitions, + assert the value returned by get_table_row_count method is equal to length of vectors + expected: the count is equal to the length of vectors + ''' + new_tag = "new_tag" + nb = add_vectors_nb + partition_name = gen_unique_str() + new_partition_name = gen_unique_str() + vectors = gen_vectors(nb, dim) + status = connect.create_partition(table, partition_name, tag) + status = connect.create_partition(table, new_partition_name, new_tag) + assert status.OK() + res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag) + time.sleep(add_time_interval) + status, res = connect.get_table_row_count(partition_name) + assert res == nb + status, res = connect.get_table_row_count(new_partition_name) + assert res == 0 + + def test_table_rows_count_multi_partitions_C(self, connect, table, add_vectors_nb): + ''' + target: test table rows_count is correct or not + method: create table, create partitions and add vectors in one 
of the partitions, + assert the value returned by get_table_row_count method is equal to length of vectors + expected: the table count is equal to the length of vectors + ''' + new_tag = "new_tag" + nb = add_vectors_nb + partition_name = gen_unique_str() + new_partition_name = gen_unique_str() + vectors = gen_vectors(nb, dim) + status = connect.create_partition(table, partition_name, tag) + status = connect.create_partition(table, new_partition_name, new_tag) + assert status.OK() + res = connect.add_vectors(table_name=table, records=vectors, partition_tag=tag) + res = connect.add_vectors(table_name=table, records=vectors, partition_tag=new_tag) + time.sleep(add_time_interval) + status, res = connect.get_table_row_count(partition_name) + assert res == nb + status, res = connect.get_table_row_count(new_partition_name) + assert res == nb + status, res = connect.get_table_row_count(table) + assert res == nb * 2 + def test_table_rows_count_after_index_created(self, connect, table, get_simple_index_params): ''' target: test get_table_row_count, after index have been created
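The new cases above all exercise the same contract: vectors inserted with a `partition_tag` are counted under that partition, the table-level row count sums all partitions, and a search restricted to `partition_tags` only scans the matching partitions, so a tag list containing only unknown tags yields an empty result. A minimal in-memory sketch of that contract, using a hypothetical `FakeTable` stand-in rather than the real Milvus client API:

```python
# A standalone sketch of the partition-tag semantics the tests assert.
# `FakeTable` is NOT the Milvus API; it only models the bookkeeping.

class FakeTable:
    def __init__(self):
        self.partitions = {}  # tag -> list of vectors

    def create_partition(self, tag):
        self.partitions.setdefault(tag, [])

    def insert(self, vectors, partition_tag=None):
        # Vectors land in the tagged partition ("_default" when untagged).
        self.partitions.setdefault(partition_tag or "_default", []).extend(vectors)

    def row_count(self, tag=None):
        # The table-level count sums every partition, mirroring
        # get_table_row_count(table) vs. get_table_row_count(partition).
        if tag is None:
            return sum(len(v) for v in self.partitions.values())
        return len(self.partitions.get(tag, []))

    def search(self, query, partition_tags=None):
        # Tag-filtered searches scan only the matching partitions; unknown
        # tags contribute nothing, so an all-unknown tag list is empty.
        tags = partition_tags if partition_tags is not None else list(self.partitions)
        candidates = [v for t in tags for v in self.partitions.get(t, [])]
        return candidates[:1]  # stand-in for a real top-k result


table = FakeTable()
table.create_partition("1970-01-01")
table.insert([[0.1] * 4 for _ in range(100)], partition_tag="1970-01-01")
table.insert([[0.2] * 4 for _ in range(100)], partition_tag="1970-01-01")

assert table.row_count("1970-01-01") == 200      # two inserts of nq -> nq * 2
assert table.row_count() == 200                  # table count sums partitions
assert table.search([0.1] * 4, partition_tags=["new_tag"]) == []  # unknown tag
```

This mirrors the expectations such as `assert res == nq * 2` after two tagged inserts and `assert len(result) == 0` when searching with a tag that does not exist in the table.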