Byte-sized vectors with neural search on the fly

Hi all,

Is there a way to quantize the vectors generated by the neural-search plugin on the fly, so that when I index a new document the pipeline runs inference and then automatically quantizes the result before storing it?
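
For concreteness, here is a rough sketch (plain Python, outside OpenSearch) of the quantization step I have in mind. I'm assuming the embeddings are roughly normalized to [-1, 1]:

```python
import numpy as np

def quantize_to_int8(vec, scale=127.0):
    """Map a float32 vector (assumed roughly in [-1, 1]) to int8."""
    arr = np.asarray(vec, dtype=np.float32)
    return np.clip(np.round(arr * scale), -128, 127).astype(np.int8)

embedding = [0.12, -0.87, 0.55]     # pretend output of the inference step
print(quantize_to_int8(embedding))  # [  15 -110   70]
```

I'd like the pipeline to do something equivalent automatically, so the index only ever sees the byte vectors.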



Hi @asfoorial,

Interesting thought. As of today we don't have such an inbuilt capability. I see two possible ways of doing this: 1) a byte-quantized ingestion processor, or 2) a quantizer integrated into the k-NN plugin and exposed through mappings/index settings.
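
In the meantime, one workaround along the lines of option 1) is to append a script processor after the text_embedding processor in the ingest pipeline. A minimal sketch using opensearch-py follows; the field names, model id, and the assumption that the model output is normalized to [-1, 1] are all mine, not something the plugin prescribes:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Painless script that overwrites the float embedding in place with
# integer values clamped to [-128, 127]. It assumes the text_embedding
# processor wrote the vector to "passage_embedding" (a made-up field name)
# and that the model output is roughly normalized to [-1, 1].
quantize_script = """
for (int i = 0; i < ctx.passage_embedding.size(); i++) {
  double q = Math.round(ctx.passage_embedding[i] * 127.0);
  ctx.passage_embedding[i] = (int) Math.max(-128.0, Math.min(127.0, q));
}
"""

client.ingest.put_pipeline(
    id="nlp-byte-pipeline",
    body={
        "description": "Inference, then byte quantization",
        "processors": [
            {
                "text_embedding": {
                    "model_id": "<your-model-id>",  # placeholder
                    "field_map": {"passage_text": "passage_embedding"},
                }
            },
            {"script": {"source": quantize_script}},
        ],
    },
)
```

Note that, as far as I know, for the values to actually be stored as bytes the target index's knn_vector field would also need its data_type set to byte; otherwise the integers are simply stored back as floats.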

Let me create a GitHub issue and look for community feedback. Let me know if you have any other thoughts.

GitHub issue: [FEATURE] Inbuilt Byte quantizer to convert float 32 bits to 8 bit · Issue #1277 · opensearch-project/k-NN


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.