Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
OpenSearch-3.2.0
Describe the issue:
I indexed 1.2M documents.
The neural sparse encoding model took around 55 minutes to index the data, producing about 240 MB of storage,
whereas
the dense vector embedding model took around 16 minutes to index the same data, producing about 270 MB of storage.
Is this expected?
Do sparse encoding models generally take longer to index data?
Is there any configuration to improve indexing time for sparse encoding?
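For context, the sparse embeddings were generated through an ingest pipeline along these lines (a minimal sketch: the pipeline name, `model_id`, and field names are placeholders, not my exact setup):

```json
PUT /_ingest/pipeline/sparse-encoding-pipeline
{
  "description": "Generate sparse embeddings at ingest time",
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "<your-sparse-model-id>",
        "field_map": {
          "text": "text_sparse_embedding"
        }
      }
    }
  ]
}
```

The pipeline is then set as `default_pipeline` in the index settings, so every bulk-indexed document runs through the sparse encoding model.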
Any feedback on this would be appreciated.
Thanks in advance!