Issue #46015
test: add timestamptz to more bulk writer tests
Signed-off-by: Eric Hou <eric.hou@zilliz.com>
Co-authored-by: Eric Hou <eric.hou@zilliz.com>
Issue: #45756
1. add a bulk insert scenario
2. fix a small issue in the e2e cases
3. add a search group-by test case
4. add timestamptz to gen_all_datatype_collection_schema (see the sketch
after this list)
5. modify the partial update test case to ensure correct results from the
timestamptz field
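A minimal sketch of what item 4 amounts to, assuming a hypothetical
DataType.TIMESTAMPTZ member (the new type this branch introduces); the
field names are illustrative, not the actual
gen_all_datatype_collection_schema output:
```
from pymilvus import CollectionSchema, FieldSchema, DataType

fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True),
    FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=8),
    # The field under test: timezone-aware timestamps.
    # DataType.TIMESTAMPTZ is assumed here, pending this branch.
    FieldSchema(name="ts", dtype=DataType.TIMESTAMPTZ),
]
schema = CollectionSchema(fields=fields,
                          description="all-datatype schema with timestamptz")
```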
Signed-off-by: Eric Hou <eric.hou@zilliz.com>
Co-authored-by: Eric Hou <eric.hou@zilliz.com>
related issue: #40698
1. use vector data types instead of hard-coded datatype names
2. update the search pagination tests
3. remove exact-distance checks from search result verification, because
Knowhere customizes the distances for different metrics and indexes; now
only assert that the distances are sorted correctly (see the sketch after
this list)
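A minimal sketch of the relaxed check in item 3, with illustrative names
(hits, metric_type); it asserts ordering only, never exact distance
values:
```
def assert_distances_sorted(hits, metric_type):
    """Check ordering of search-result distances, not their values."""
    distances = [hit.distance for hit in hits]
    if metric_type == "L2":
        # Smaller is better: distances must be non-decreasing.
        assert distances == sorted(distances)
    else:
        # e.g. IP, COSINE: larger is better, non-increasing.
        assert distances == sorted(distances, reverse=True)
```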
---------
Signed-off-by: yanliang567 <yanliang.qiao@zilliz.com>
issue: #35922
add an enable_tokenizer param to the varchar field: it must be set to true
so that a varchar field can set enable_match or be used as input to the
BM25 function
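A minimal sketch of the new parameter, assuming a pymilvus version with
BM25 Function support (Function, FunctionType, schema.add_function);
everything except enable_tokenizer is illustrative:
```
from pymilvus import (CollectionSchema, FieldSchema, DataType,
                      Function, FunctionType)

pk = FieldSchema(name="id", dtype=DataType.INT64, is_primary=True)
text = FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=1024,
                   enable_tokenizer=True)  # required for match / BM25 input
sparse = FieldSchema(name="sparse", dtype=DataType.SPARSE_FLOAT_VECTOR)

schema = CollectionSchema(fields=[pk, text, sparse])
# BM25 computes sparse embeddings from the tokenized varchar field.
schema.add_function(Function(
    name="bm25",
    function_type=FunctionType.BM25,
    input_field_names=["text"],      # must have enable_tokenizer=True
    output_field_names=["sparse"],
))
```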
---------
Signed-off-by: Buqian Zheng <zhengbuqian@gmail.com>
Replacing the current import API v1 implementation with the v2
implementation.
issue: https://github.com/milvus-io/milvus/issues/28521
---------
Signed-off-by: bigsheeper <yihao.dai@zilliz.com>
test: There are too many test cases for bulkinsert + partition_key. Each
case creates 10 bulkinsert tasks, each importing a file with 100~200 rows.
The default num_partitions for partition_key is 64, so each task generates
64 tiny segments. With 10 cases, 10 tasks per case, and 64 tiny segments
per task, 6400 tiny segments are generated in total. Each of these
segments holds fewer than 1024 rows, so none of them needs an index built,
and all of them take part in compaction. That generates a large number of
compaction tasks which take too much time to process. Eventually, some
cases time out after waiting 5 minutes for their segments to be ready, and
those cases fail.
Specifying a small num_partitions avoids this problem (see the sketch
after the log below).
```
[2023-11-21T03:41:16.187Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_json_file[int_scalar-True-True] PASSED [ 54%]
[2023-11-21T03:41:42.796Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_json_file[int_scalar-False-True] PASSED [ 57%]
[2023-11-21T03:42:04.694Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_json_file[string_scalar-True-True] PASSED [ 60%]
[2023-11-21T03:42:31.205Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_json_file[string_scalar-False-True] PASSED [ 63%]
[2023-11-21T03:43:38.876Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_multi_numpy_files[10-150-13-True] XPASS [ 66%]
[2023-11-21T03:49:00.357Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_multi_numpy_files[10-150-13-False] XFAIL [ 69%]
[2023-11-21T03:53:51.811Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_csv_file[int_scalar-True] FAILED [ 72%]
[2023-11-21T03:58:58.283Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_csv_file[int_scalar-False] FAILED [ 75%]
[2023-11-21T04:02:04.696Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_csv_file[string_scalar-True] PASSED [ 78%]
[2023-11-21T04:02:26.608Z] testcases/test_bulk_insert.py::TestBulkInsert::test_partition_key_on_csv_file[string_scalar-False] PASSED [ 81%]
```
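A minimal sketch of the fix, assuming pymilvus's num_partitions argument
at collection creation; the collection and field names are illustrative:
```
from pymilvus import Collection, CollectionSchema, FieldSchema, DataType

pk = FieldSchema(name="id", dtype=DataType.INT64, is_primary=True)
key = FieldSchema(name="key", dtype=DataType.INT64, is_partition_key=True)
vec = FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=8)
schema = CollectionSchema(fields=[pk, key, vec])

# num_partitions defaults to 64 for partition-key collections; capping it
# at 4 means each bulkinsert task yields at most 4 tiny segments.
collection = Collection("bulk_insert_pk_test", schema=schema,
                        num_partitions=4)
```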
Signed-off-by: yhmo <yihua.mo@zilliz.com>
Skip a bulk insert test case temporarily. It is a known issue that needs
more time to solve; skipping the test case keeps it from blocking other
PRs.
Signed-off-by: zhuwenxing <wenxing.zhu@zilliz.com>