@FalcoSuessgott I’ve tested the solution provided in the shared link and was able to connect to my remote Ollama instance over HTTPS with a self-signed certificate. These were my steps:
- Copy the `cacerts` keystore from the running OpenSearch Docker container

```shell
docker cp opensearch-node1_3.3.0:/usr/share/opensearch/jdk/lib/security/cacerts .
```
- Get the certificate from the Ollama instance

```shell
openssl s_client -connect ollama.pablo.local:443
```
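The `s_client` session above prints the PEM block to copy by hand; a non-interactive variant (same host and port, piping through `openssl x509` so only the certificate is kept) can write `ollama.crt` directly:

```shell
# Fetch the server certificate and keep only the PEM block;
# </dev/null closes the TLS session immediately
openssl s_client -connect ollama.pablo.local:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > ollama.crt
```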
- Save the certificate into `ollama.crt`
- Add `ollama.crt` to the `cacerts` keystore
```shell
keytool -import -noprompt -trustcacerts -alias ollama -file ollama.crt -keystore cacerts -storepass changeit
```
- Confirm that the `ollama` certificate is in the keystore
```shell
keytool -keystore cacerts -storepass changeit -list | grep ollama
```

output:

```
ollama, Nov 3, 2025, trustedCertEntry,
```
- Mount the `cacerts` keystore into each OpenSearch Docker container. I also mounted it into OpenSearch Dashboards.
OpenSearch:

```yaml
volumes:
  - opensearch-data1:/usr/share/opensearch/data
  ...
  - ./certs/cacerts:/usr/share/opensearch/jdk/lib/security/cacerts
```
OpenSearch Dashboards:

```yaml
volumes:
  ...
  - ./certs/cacerts:/usr/share/opensearch-dashboards/jdk/lib/security/cacerts
```
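For context, the mount lines above sit alongside the existing volume entries in the compose file; a trimmed fragment (service names and image tags are assumptions from my setup, not required values):

```yaml
services:
  opensearch-node1:
    image: opensearchproject/opensearch:3.3.0
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
      # keystore with the imported ollama certificate
      - ./certs/cacerts:/usr/share/opensearch/jdk/lib/security/cacerts
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:3.3.0
    volumes:
      - ./certs/cacerts:/usr/share/opensearch-dashboards/jdk/lib/security/cacerts
```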
- Start Docker Compose (e.g. `docker compose up -d`)
- Create the connector with the Ollama HTTPS address

```
POST /_plugins/_ml/connectors/_create
{
  "name": "Llama-3.3-70B-Instruct Connector",
  "description": "Connector for Llama-3.3-70B-Instruct",
  "protocol": "http",
  "version": 1,
  "parameters": {
    "model": "llama3.1:8b",
    "temperature": 0.7,
    "max_tokens": 500,
    "endpoint": "ollama.pablo.local:443"
  },
  "credential": {
    "api_key": "123456789123456789123456789"
  },
  "client_config": {
    "read_timeout": 60000,
    "connection_timeout": 30000,
    "max_connection": 256,
    "max_retry_times": 3,
    "retry_backoff_policy": "exponential_full_jitter"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "https://${parameters.endpoint}/v1/chat/completions",
      "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer ${credential.api_key}"
      },
      "request_body": "{ \"model\": \"${parameters.model}\", \"messages\": ${parameters.messages}, \"temperature\": ${parameters.temperature}, \"max_tokens\": ${parameters.max_tokens}, \"stream\": false }"
    }
  ]
}
```
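Between creating the connector and calling `_predict`, the `connector_id` returned by the create request has to be wired into a model: in ML Commons that is a `_register` with `function_name: remote`, followed by `_deploy`. A sketch, with the placeholder ids and model name as assumptions:

```
POST /_plugins/_ml/models/_register
{
  "name": "ollama-llama3.1-8b",
  "function_name": "remote",
  "connector_id": "<connector_id from the _create response>"
}

POST /_plugins/_ml/models/<model_id>/_deploy
```

The `model_id` from the register response is the one used in the `_predict` call below.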
- Test the model

```
POST /_plugins/_ml/models/es3XSZoBLdsWuLU2kDy5/_predict
{
  "parameters": {
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello! What is the name of this model?"
      }
    ]
  }
}
```
output:

```
{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "id": "chatcmpl-358",
            "object": "chat.completion",
            "created": 1762175496,
            "model": "llama3.1:8b",
            "system_fingerprint": "fp_ollama",
            "choices": [
              {
                "index": 0,
                "message": {
                  "role": "assistant",
                  "content": ...
```
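To pull just the assistant's reply out of the full `_predict` response, a `jq` one-liner works (field names as in the output above; `response.json` holding the saved response is an assumption):

```shell
# Extract only the assistant message text from the ML Commons _predict response
jq -r '.inference_results[0].output[0].dataAsMap.choices[0].message.content' response.json
```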