Debugging remote connector HTTP calls

I’ve created a blueprint connector so I can use our Azure OpenAI instance with the ML features, but I’m running into an issue when I try executing the _predict endpoint to test the connector.

I’m getting the following error:

Error from remote service: {"error":{"code":"401","message":"Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource."}}

The error message is clear, but I know the API key I’m passing is valid and I can successfully make the same call from the CLI, so I suspect something is wrong with my connector configuration and it’s incorrectly building the endpoint URL.

Here’s what my connector looks like:

POST /_plugins/_ml/connectors/_create

{
	"name": "My test connector",
	"description": "My description.",
	"version": "0.1",
	"protocol": "http",
	"parameters": {
		"resource_name": "my-resource-name",
		"deployment_name": "my-deployment-name",
		"model": "my-model-name",
		"api_version": "2024-10-21",
		"temperature": 0.0
	},
	"credential": {
		"openAI_key": "my-api-key"
	},
	"actions": [
		{
			"action_type": "predict",
			"method": "POST",
			"url": "https://${parameters.resource_name}.openai.azure.com/openai/deployments/${parameters.deployment_name}/chat/completions?api-version=${parameters.api_version}",
			"headers": {
				"content-type": "application/json",
				"api-key": "${credential.openAI_key}"
			},
			"request_body": "{ \"messages\": ${parameters.messages}, \"temperature\": ${parameters.temperature} }"
		}
	]
}

When I check the OpenSearch log file, all I get is a line like:

[2024-11-25T19:22:07,895][ERROR][o.o.m.e.a.r.MLSdkAsyncHttpResponseHandler] [giva-dev] Remote server returned error code: 401

How can I debug the actual HTTP call so I can see what’s going on?

I’ve combed through the documentation but I’m not finding anything.

Thanks!
-Dan

So after updating my opensearch.yml and adding the line logger.level: "DEBUG", it appears that ${credential.openAI_key} is not getting evaluated. The connector sends the "api-key": "${credential.openAI_key}" header as a literal value instead of replacing the ${credential.openAI_key} token with the actual key.
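For reference, the opensearch.yml change was just this one line (it sets the root logger, which is why the output is so verbose). I believe a package-scoped logger would be quieter, though the exact package name below is my assumption, not something I’ve verified:

```yaml
# Root logger -- logs everything at DEBUG (very noisy):
logger.level: "DEBUG"

# Narrower alternative (package name assumed):
# logger.org.opensearch.ml: "DEBUG"
```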

I think the issue is that the credential key gets converted to lowercase when it’s stored, because I just changed the openAI_key token to secret_key and it now works as I would expect.
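To illustrate what I think is going on, here’s a small Python sketch of the template substitution. The lowercase-on-storage behavior is my inference from the symptoms, not taken from the ml-commons source:

```python
import re

def expand(template: str, credentials: dict) -> str:
    """Replace ${credential.NAME} tokens; leave unmatched tokens untouched."""
    # Assumption: credential keys are lowercased when the connector is stored.
    stored = {k.lower(): v for k, v in credentials.items()}

    def repl(match):
        name = match.group(1)  # looked up verbatim, NOT lowercased
        return stored.get(name, match.group(0))

    return re.sub(r"\$\{credential\.([A-Za-z0-9_]+)\}", repl, template)

# Mixed-case name: lookup misses, the literal token is sent as the header value.
print(expand('"api-key": "${credential.openAI_key}"', {"openAI_key": "abc123"}))
# All-lowercase name: lookup succeeds and the key is substituted.
print(expand('"api-key": "${credential.secret_key}"', {"secret_key": "abc123"}))
```

With the mixed-case name the lookup misses and the raw token goes out on the wire, which matches exactly what I saw in the debug output.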

That means the following connector does work for Azure’s OpenAI endpoint:

POST /_plugins/_ml/connectors/_create

{
	"name": "My test connector",
	"description": "My description.",
	"version": "0.1",
	"protocol": "http",
	"parameters": {
		"resource_name": "my-resource-name",
		"deployment_name": "my-deployment-name",
		"model": "my-model-name",
		"api_version": "2024-10-21",
		"temperature": 0.0
	},
	"credential": {
		"secret_key": "my-api-key"
	},
	"actions": [
		{
			"action_type": "predict",
			"method": "POST",
			"url": "https://${parameters.resource_name}.openai.azure.com/openai/deployments/${parameters.deployment_name}/chat/completions?api-version=${parameters.api_version}",
			"headers": {
				"content-type": "application/json",
				"api-key": "${credential.secret_key}"
			},
			"request_body": "{ \"messages\": ${parameters.messages}, \"temperature\": ${parameters.temperature} }"
		}
	]
}

I’m still curious if there’s an easier way to debug the HTTP requests sent by remote connectors. Enabling "DEBUG" logs for everything produces very verbose output, and I’d like to be able to get just the HTTP debugging information on a per-request basis.
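One thing that helps a bit: the logger level can be scoped and changed at runtime through the cluster settings API, so the noise can at least be limited to ml-commons without a restart (the exact logger/package name here is my assumption):

```
PUT /_cluster/settings
{
	"transient": {
		"logger.org.opensearch.ml": "DEBUG"
	}
}
```

That’s still per-package rather than per-request, though.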

Is this possible?

You are providing the API key in the HTTP header incorrectly.

You need:

"headers": {
                "Authorization": "Bearer ${credential.openAI_key}"
            },

@austinlee,

You would be correct that you need the Authorization Bearer header for OpenAI, but I’m using Azure OpenAI, which has different requirements.

Ah, I see. I just checked my Azure OpenAI connector definition and yours looks correct, although in my headers I only set "api-key". It works w/o setting content-type.

Anyway, the way I debug this sort of thing is to write an integ test. Take a look at ml-commons/plugin/src/test/java/org/opensearch/ml/rest/RestMLRAGSearchProcessorIT.java in the opensearch-project/ml-commons repo on GitHub.

I write unit tests as well, but it would be helpful if there were an easy way to see the translated HTTP request for an ML connector. That’s where this problem was: it wasn’t easy to see which headers were being sent in the HTTP request, which is why I didn’t pick up on the case-sensitivity issue in the template replacement of ${credential.openAI_key}.

I was hoping that an explain parameter (or some other parameter) could be passed that would return information about any remote HTTP requests, to make it easier to debug ML connectors.

I see. Just to confirm, your connector is now working, correct?

Yes, it’s working now. Once I enabled debug logging and could see the actual HTTP request, I saw what the issue was. That’s why I was looking for an easier way to see this information (without having to turn on debug logging for the entire instance).