Add Custom CA for internal HTTPS endpoints

@FalcoSuessgott I've tested the solution provided in the shared link and was able to connect to my remote Ollama over HTTPS with a self-signed certificate.

These were my steps.

  1. Copy the cacerts keystore from the running OpenSearch Docker container
docker cp opensearch-node1_3.3.0:/usr/share/opensearch/jdk/lib/security/cacerts .
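
If your image layout differs, it's worth confirming the keystore path first (container name as in my setup):

docker exec opensearch-node1_3.3.0 ls /usr/share/opensearch/jdk/lib/security/cacerts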
  2. Get the certificate from the Ollama instance
openssl s_client -connect ollama.pablo.local:443
  3. Save the certificate into ollama.crt (copy the PEM block between the BEGIN CERTIFICATE and END CERTIFICATE lines of the s_client output)
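
Steps 2 and 3 can also be done in one non-interactive command; a minimal sketch, assuming openssl is installed locally:

openssl s_client -connect ollama.pablo.local:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > ollama.crt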
  4. Add ollama.crt to the cacerts keystore
keytool -import -noprompt -trustcacerts -alias ollama -file ollama.crt -keystore cacerts -storepass changeit
  5. Confirm that the ollama certificate is in the keystore
keytool -keystore cacerts -storepass changeit -list | grep ollama

output:

ollama, Nov 3, 2025, trustedCertEntry,
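
To inspect the full imported entry (validity dates, fingerprints) rather than just the alias line:

keytool -keystore cacerts -storepass changeit -list -v -alias ollama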

  6. Map the cacerts keystore into each OpenSearch Docker container. I also added it to OpenSearch Dashboards.

OpenSearch:

    volumes:
      - opensearch-data1:/usr/share/opensearch/data
      ...
      - ./certs/cacerts:/usr/share/opensearch/jdk/lib/security/cacerts

OpenSearch Dashboards:

    volumes:
      ...
      - ./certs/cacerts:/usr/share/opensearch-dashboards/jdk/lib/security/cacerts
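
One optional hardening tweak of mine, not required for the setup: appending :ro makes the bind mount read-only, so nothing inside the container can modify the keystore:

      - ./certs/cacerts:/usr/share/opensearch/jdk/lib/security/cacerts:ro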
  7. Start docker compose
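
Once the containers are up, you can confirm that the mounted keystore inside a node really contains the new alias, using the JDK bundled with the image:

docker exec opensearch-node1_3.3.0 /usr/share/opensearch/jdk/bin/keytool -list -keystore /usr/share/opensearch/jdk/lib/security/cacerts -storepass changeit | grep ollama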
  8. Create a connector with the Ollama HTTPS address.
POST /_plugins/_ml/connectors/_create
{
  "name": "Llama-3.3-70B-Instruct Connector",
  "description": "Connector for Llama-3.3-70B-Instruct",
  "protocol": "http",
  "version": 1,
  "parameters": {
    "model": "llama3.1:8b",
    "temperature": 0.7,
    "max_tokens": 500,
    "endpoint": "ollama.pablo.local:443"
  },
  "credential": {
    "api_key": "123456789123456789123456789"
  },
  "client_config" : {
    "read_timeout": 60000,
    "connection_timeout": 30000,
    "max_connection": 256,
    "max_retry_times": 3,
    "retry_backoff_policy": "exponential_full_jitter"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://${parameters.endpoint}/v1/chat/completions",
      "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer ${credential.api_key}"
      },
      "request_body": "{ \"model\": \"${parameters.model}\",  \"messages\": ${parameters.messages}, \"temperature\": ${parameters.temperature}, \"max_tokens\": ${parameters.max_tokens}, \"stream\": false }"
    }
  ]
}
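
One step is implicit here: the connector_id returned by the call above has to be registered and deployed as a remote model, which is what produces the model ID used in the next step. A minimal sketch (the model name is illustrative):

POST /_plugins/_ml/models/_register?deploy=true
{
  "name": "ollama-llama3.1-8b",
  "function_name": "remote",
  "connector_id": "<connector_id from the create response>"
}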
  9. Test the model
POST /_plugins/_ml/models/es3XSZoBLdsWuLU2kDy5/_predict
{
 "parameters": {
   "messages": [
     {
       "role": "system",
       "content": "You are a helpful assistant."
     },
     {
       "role": "user",
       "content": "Hello!What is the name of this model??"
     }
   ]
 }
}

output:

{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "id": "chatcmpl-358",
            "object": "chat.completion",
            "created": 1762175496,
            "model": "llama3.1:8b",
            "system_fingerprint": "fp_ollama",
            "choices": [
              {
                "index": 0,
                "message": {
                  "role": "assistant",
                  "content":
...