Cannot explain knn search results

Hi, any help would be appreciated with explaining my search results when using k-NN with the cosinesimil space type.

I am using AWS OpenSearch Serverless and have created an index with several fields, one of which is a vector field. I first index my documents: the text body (2,000-5,000 characters) is converted to a vector with the universal-sentence-encoder model (multilingual-large). Before conversion I do some text pre-processing, removing stop words, punctuation marks, etc. The resulting vector has 512 dimensions and is stored in my AOSS index.
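For context, the indexing step looks roughly like this (a simplified sketch, not my exact code; the stop-word list, endpoint, and index name are placeholders):

import re
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers the ops the multilingual USE model needs
from opensearchpy import OpenSearch

# universal-sentence-encoder multilingual-large, 512-dimensional output
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3")

STOP_WORDS = {"the", "a", "an", "and", "or"}  # placeholder; the real list is much longer

def preprocess(text):
    # lowercase, strip punctuation, drop stop words
    text = re.sub(r"[^\w\s]", " ", text.lower())
    return " ".join(tok for tok in text.split() if tok not in STOP_WORDS)

def vectorize(text):
    # the model returns a (1, 512) tensor for a single input string
    return embed([preprocess(text)]).numpy()[0].tolist()

client = OpenSearch(hosts=[{"host": "<aoss-endpoint>", "port": 443}])  # SigV4 auth omitted for brevity

def index_document(title, body_text):
    doc = {
        "title": title,
        "body": body_text,
        "vector": vectorize(body_text),  # 512-dim knn_vector field
    }
    client.index(index="my-index", body=doc)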

In the second phase, I take some user input (20-800 characters) in the application, do the same text processing and vectorization as during indexing, and perform a search.
This is the query I use to search for documents in AOSS:

GET _search
{
  "size": 100,
  "query": {
    "bool": {
      "must": [
        {
          "script_score": {
            "query": {
              "match_all": {}
            },
            "script": {
              "source": "knn_score",
              "lang": "knn",
              "params": {
                "field": "vector",
                "query_value": [-0.030469229444861412, 0.00919314380735159, ... <full vector omitted for brevity>],
                "space_type": "cosinesimil"
              }
            },
            "min_score": 0.0,
            "boost": 0.5
          }
        }
      ]
    }
  }
}
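In code, the search phase is roughly the following (sketch; client and vectorize are the same as in the indexing sketch above):

def search(user_input, k=100):
    query_vector = vectorize(user_input)  # same preprocessing + USE embedding as at index time
    body = {
        "size": k,
        "query": {
            "bool": {
                "must": [
                    {
                        "script_score": {
                            "query": {"match_all": {}},
                            "script": {
                                "source": "knn_score",
                                "lang": "knn",
                                "params": {
                                    "field": "vector",
                                    "query_value": query_vector,
                                    "space_type": "cosinesimil",
                                },
                            },
                            "min_score": 0.0,
                            "boost": 0.5,
                        }
                    }
                ]
            }
        },
    }
    return client.search(index="my-index", body=body)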

I have been reading about the cosinesimil formula in the docs (Exact k-NN with scoring script - OpenSearch Documentation) and how it is used to score the results, so I understand the boost and how the score is derived.
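If I read those docs correctly, the scoring script returns 1 + cosine similarity (a raw score in [0, 2]), and my boost of 0.5 rescales that to [0, 1]. In other words (my understanding, not verified against the plugin source):

# score as I understand it for space_type = cosinesimil in the scoring script
def expected_score(cosine_similarity, boost=0.5):
    return boost * (1.0 + cosine_similarity)   # boost 0.5 -> final score in [0, 1]

def cosine_from_score(final_score, boost=0.5):
    return final_score / boost - 1.0           # invert the formula above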

But what is happening in my specific use case is that the documents returned by OpenSearch all score too similarly and do not appear to be very relevant to the data in the index. The scores are simply too close to each other, and some documents at the top of the results do not seem relevant at all.
For example, when I use a text snippet that is highly relevant to the indexed data, I get back "max_score": 0.68753433 and the 100th result has a score of 0.615429. My expectation here is to be close to 1 with this input.
On the other hand, if I use completely nonsensical text that is syntactically and semantically well formed but totally irrelevant to the data in the index, I get back "max_score": 0.6175947 and the 100th result has a score of 0.5619706. In this case my expectation is that the scores should be much closer to 0.
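If my reading of the formula above is right, working backwards from these scores (cos = 2 * score - 1) gives surprisingly small differences in cosine similarity:

for label, score in [
    ("relevant query, top hit",   0.68753433),
    ("relevant query, 100th hit", 0.615429),
    ("nonsense query, top hit",   0.6175947),
    ("nonsense query, 100th hit", 0.5619706),
]:
    print(f"{label:28s} cosine ~ {2 * score - 1:.3f}")
# relevant top hit -> cos ~ 0.375, nonsense top hit -> cos ~ 0.235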

Does anyone have an idea what my issue could be and how to debug/test this problem? Is my search query syntax wrong in any way, or is it more likely a problem with the vectorization of the text? Any hints would be really appreciated, thanks!

After some more testing, I have come to the conclusion that, in general, the length of the input text affects the score more than its content. I am not sure whether I am missing a step in data preprocessing or using the model inappropriately, but I cannot explain why semantic similarity is so sensitive to the amount of text rather than to the topics in the text itself. Should it really behave this way, or am I doing something fundamentally wrong?
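One way I could test this outside of OpenSearch would be to compare embeddings directly, something like this (sketch, reusing the vectorize helper from the indexing sketch above; the sample texts are placeholders):

import numpy as np

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

full_text = "..."                 # a full document body from the index (placeholder)
short_snippet = full_text[:300]   # same topic, much shorter
unrelated_text = "..."            # well-formed but irrelevant text (placeholder)

v_full, v_short, v_unrelated = map(vectorize, (full_text, short_snippet, unrelated_text))
print("same topic, different length:", cosine(v_full, v_short))
print("different topic, similar length:", cosine(v_short, v_unrelated))

If the length effect shows up here as well, the problem would seem to be in my preprocessing/embedding rather than in the OpenSearch query itself.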
