Neural search text_embedding pipeline error (null_pointer_exception)

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

AWS OpenSearch Service v2.9.0

Describe the issue:

Hi! Thanks so much for reading this. I appreciate any help I receive here.

TL;DR:

I’m trying to get a basic semantic search use case working in AWS OpenSearch Service, using the Neural Search and ML Commons plugins and delegating inference to a text embedding model hosted externally on AWS SageMaker.

I am getting the following error when indexing into a k-NN index whose ingest pipeline contains a text_embedding processor:

{
  "error": {
    "root_cause": [
      {
        "type": "null_pointer_exception",
        "reason": "Cannot invoke \"org.opensearch.ml.common.output.model.ModelTensorOutput.getMlModelOutputs()\" because \"modelTensorOutput\" is null"
      }
    ],
    "type": "null_pointer_exception",
    "reason": "Cannot invoke \"org.opensearch.ml.common.output.model.ModelTensorOutput.getMlModelOutputs()\" because \"modelTensorOutput\" is null"
  },
  "status": 500
}

How can I fix this error when ingesting via a text_embedding processor?

As discussed in detail below, I am able to obtain predictions in Kibana from the model used in the pipeline.

Configuration:

In this section I’ll describe the successful steps/checks I’ve undertaken.

So far I have successfully:

Step [1]: Deployed a model to SageMaker
I used the recommended msmarco-distilbert-base-tas-b model. I can invoke it using awscurl as follows:

awscurl https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/search-semantic-search/invocations --service sagemaker -X POST -H 'X-Amzn-SageMaker-Target-Variant: msmarco-distilbert-base-tas-b' -H 'Content-Type: application/json' --data '{"inputs": ["zara jumper blue", "zara jumper green"]}'

resulting in output

{"vectors":[[0.13654185831546783,-0.12164773792028427,...],[0.13654185831546783,-0.12164773792028427,...]]

Step [2]: Created an ML Commons connector for SageMaker
I configured the necessary IAM permissions and then created a connector:

awscurl --service 'es' -X POST https://<redacted>.us-east-1.es.amazonaws.com/_plugins/_ml/connectors/_create --data '{
  "name": "Sagemaker text embedding endpoint connector",
  "description": "Connector to Sagemaker text embedding endpoint",
  "version": 1,
  "protocol": "aws_sigv4",
  "credential": {
    "roleArn": "arn:aws:iam::<redacted>:role/opensearch-sagemaker-role"
  },
  "parameters": {
    "region": "us-east-1",
    "service_name": "sagemaker",
    "model": "msmarco-distilbert-base-tas-b"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://runtime.sagemaker.${parameters.region}.amazonaws.com/endpoints/search-semantic-search/invocations",
      "headers": {
        "content-type": "application/json",
        "X-Amzn-SageMaker-Target-Variant": "${parameters.model}"
      },
      "post_process_function": "params.vector",
      "request_body": "{ \"inputs\": ${parameters.input} }"
    }
  ]
}'

With output

{"connector_id":"cYd8Io0Biz7a3vMLoeTb"}

Step [3]: Created and deployed a model
Following the guide, I created a model group, registered a remote model in it, and deployed the model from Kibana:

# Register a new model group
POST /_plugins/_ml/model_groups/_register
{
  "name": "remote_model_group",
  "description": "A model group for external models"
}

# Create a new model in the model group
POST /_plugins/_ml/models/_register
{
    "name": "msmarco-distilbert-base-tas-b",
    "function_name": "remote",
    "model_group_id": "oqp8Io0BaCSTIJeV0j3Y",
    "description": "msmarco-distilbert-base-tas-b on SageMaker",
    "connector_id": "cYd8Io0Biz7a3vMLoeTb"
}

# Check on the model task progress
GET /_plugins/_ml/tasks/dId-Io0Biz7a3vMLUeQS

# Deploy the model
POST /_plugins/_ml/models/dYd-Io0Biz7a3vMLUeSX/_deploy

# Check on the deployment
GET /_plugins/_ml/tasks/dod-Io0Biz7a3vMLg-Tp

This successfully deployed the model. I verified the integration with the SageMaker endpoint using the Predict API:

POST /_plugins/_ml/models/dYd-Io0Biz7a3vMLUeSX/_predict
{
  "parameters": {
    "input": ["this is a sentence"]
  }
}

This returned a valid output:

{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "vectors": [
              [
                0.25234439969062805,
                -0.09862993657588959,
                ...
              ]
            ]
          }
        }
      ],
      "status_code": 200
    }
  ]
}
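
As one more sanity check, the model’s deploy state can be read back from the models API; a deployed remote model should report "model_state": "DEPLOYED":

# Confirm the model is reported as deployed
GET /_plugins/_ml/models/dYd-Io0Biz7a3vMLUeSX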

Step [4]: Created a pipeline and k-NN index

At this stage we move on from ML Commons and start using the Neural Search plugin, following the Semantic Search guide (Semantic search - OpenSearch Documentation).

All of the steps below are taken exactly as given in the guide.

# Create pipeline
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "A text embedding pipeline",
  "processors": [
    {
      "text_embedding": {
        "model_id": "dYd-Io0Biz7a3vMLUeSX",
        "field_map": {
          "passage_text": "passage_embedding"
        }
      }
    }
  ]
}

# Create index that uses the pipeline
PUT /my-nlp-index
{
  "settings": {
    "index.knn": true,
    "default_pipeline": "nlp-ingest-pipeline"
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "text"
      },
      "passage_embedding": {
        "type": "knn_vector",
        "dimension": 768,
        "method": {
          "engine": "lucene",
          "space_type": "l2",
          "name": "hnsw",
          "parameters": {}
        }
      },
      "passage_text": {
        "type": "text"
      }
    }
  }
}
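
For completeness, the end goal (once ingestion in Step [5] below works) is the guide’s neural query against this index:

# Query the index with neural search (from the guide, with my model_id)
GET /my-nlp-index/_search
{
  "_source": {
    "excludes": ["passage_embedding"]
  },
  "query": {
    "neural": {
      "passage_embedding": {
        "query_text": "Hello world",
        "model_id": "dYd-Io0Biz7a3vMLUeSX",
        "k": 5
      }
    }
  }
}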

Step [5]: Attempted to ingest some data

At this point I attempt to ingest a document into the index. When I execute the following:

# Ingest some data
PUT /my-nlp-index/_doc/1
{
  "passage_text": "Hello world",
  "id": "s1"
}

I then see the error shared at the start of this post.
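
The failure can also be reproduced without touching the index via the ingest Simulate API, which runs the same text_embedding processor directly:

# Reproduce the processor error in isolation
POST /_ingest/pipeline/nlp-ingest-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "passage_text": "Hello world",
        "id": "s1"
      }
    }
  ]
}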

@sam-briggs-depop could you please take a look at this article: Connector blueprints - OpenSearch Documentation?

Thanks
Dhrubo
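
For anyone else hitting this: the blueprints article linked above suggests the likely culprit is the post_process_function. "params.vector" does not match the endpoint’s actual "vectors" response field, and, more importantly, the function must return an ML Commons model tensor (the sentence_embedding format) rather than a bare array; otherwise modelTensorOutput stays null during ingestion even though the raw Predict API succeeds. Below is a sketch of a blueprint-style function, adapted from the documented custom post-processing examples and assuming the {"vectors": [[...], ...]} response shape from Step [1]; it handles only the first vector, so batched inputs need more work.

// Sketch only - collapse/escape this into a single JSON string for the
// connector's "post_process_function" field.
if (params.vectors == null || params.vectors.length == 0) {
  return null;
}
def embedding = params.vectors[0];
def json = "{" +
           "\"name\":\"sentence_embedding\"," +
           "\"data_type\":\"FLOAT32\"," +
           "\"shape\":[" + embedding.length + "]," +
           "\"data\":" + embedding +
           "}";
return json;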
