@Anthony
network.host: 0.0.0.0
fixed the docker-compose problem, thank you!
But this line is not needed for the k8s deployment, or is it?
I moved this config (without the line above) to k8s, and the nodes came up and were connected.
Chart.yaml
# helm install -n opensearch --create-namespace opensearch-with-dashboard . --values values.yaml
apiVersion: v2
name: opensearch-with-dashboard
### IMPORTANT: Bump this version every time you change anything in this chart (e.g. dependencies, versions, templates, files, etc.)
version: 0.0.0
dependencies:
  - name: opensearch
    ### IMPORTANT: Also run "helm dependency update ." in this directory if you update any dependency version.
    version: 1.2.4
    repository: "https://opensearch-project.github.io/helm-charts/"
  - name: opensearch-dashboards
    ### IMPORTANT: Also run "helm dependency update ." in this directory if you update any dependency version.
    version: 1.0.6
    repository: "https://opensearch-project.github.io/helm-charts/"
values.yaml
opensearch:
  config:
    opensearch.yml: |
      plugins.security.ssl.transport.pemcert_filepath: certs/tls.crt
      plugins.security.ssl.transport.pemkey_filepath: certs/tls.key
      plugins.security.ssl.transport.pemtrustedcas_filepath: certs/tls.crt
      plugins.security.ssl.transport.enforce_hostname_verification: false
      plugins.security.ssl.http.enabled: true
      plugins.security.ssl.http.pemcert_filepath: certs/tls.crt
      plugins.security.ssl.http.pemkey_filepath: certs/tls.key
      plugins.security.ssl.http.pemtrustedcas_filepath: certs/tls.crt
      plugins.security.allow_unsafe_democertificates: false
      plugins.security.authcz.admin_dn:
        - CN=<domain>
      plugins.security.nodes_dn:
        - 'CN=<domain>'
  persistence:
    enabled: true
    # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
    enableInitChown: true
    labels:
      # Add default labels for the volumeClaimTemplate of the StatefulSet
      enabled: false
    # OpenSearch Persistent Volume Storage Class
    # If defined, storageClassName: <storageClass>
    # If set to "-", storageClassName: "", which disables dynamic provisioning
    # If undefined (the default) or set to null, no storageClassName spec is
    # set, choosing the default provisioner. (gp2 on AWS, standard on
    # GKE, AWS & OpenStack)
    #
    storageClass: csi-disk
    accessModes:
      - ReadWriteOnce
    size: 25Gi
    annotations:
      everest.io/disk-volume-type: SSD
  # Enabling this will publicly expose your OpenSearch instance.
  # Only enable this if you have security enabled on your cluster.
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx-external
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    path: /
    hosts:
      - <domain>
    tls:
      - secretName: opensearch-tls
  extraVolumes:
    - name: certificate
      secret:
        secretName: opensearch-tls
  extraVolumeMounts:
    - name: certificate
      mountPath: "/usr/share/opensearch/config/certs"
      readOnly: true
  extraEnvs:
    - name: DISABLE_INSTALL_DEMO_CONFIG
      value: "true"

opensearch-dashboards:
  # Cluster-internal connection
  opensearchHosts: "https://opensearch-cluster-master:9200"
  imageTag: ""
  # config:
  #   # Default OpenSearch Dashboards configuration from the Dashboards docker image
  #   opensearch_dashboards.yml:
  #     opensearch.ssl.verificationMode: none
  #     server.ssl.enabled: false
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx-external
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: <dashboards-domain>
        paths:
          - path: /
            backend:
              serviceName: opensearch-dashboards-dev
              servicePort: 5601
    tls:
      - secretName: opensearch-dashboard-cert
Both ingress rules deliver the right certificate.
If I connect to OpenSearch via Dashboards, everything is OK (which makes sense, since verification is disabled).
If I connect via my browser (Firefox), I can see that the certificate is correct and valid, but I get a 502 Bad Gateway from Nginx.
In the ingress-nginx-external-controller, I get the following errors for the OpenSearch ingress:
2021/11/10 14:41:38 [error] 4261#4261: *32847247 upstream prematurely closed connection while reading response header from upstream, client: 100.125.67.47, server: <domain>, request: "GET / HTTP/2.0", upstream: "http://172.16.0.61:9200/", host: "<domain>"
2021/11/10 14:41:38 [error] 4261#4261: *32847247 upstream prematurely closed connection while reading response header from upstream, client: 100.125.67.47, server: <domain>, request: "GET / HTTP/2.0", upstream: "http://172.16.0.108:9200/", host: "<domain>"
2021/11/10 14:41:38 [error] 4261#4261: *32847247 upstream prematurely closed connection while reading response header from upstream, client: 100.125.67.47, server: <domain>, request: "GET / HTTP/2.0", upstream: "http://172.16.0.139:9200/", host: "<domain>"
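One thing I notice in these errors: the upstream URLs start with http://, i.e. the ingress controller forwards plaintext HTTP to port 9200, while OpenSearch serves HTTPS there. For reference, ingress-nginx can be told that the upstream expects TLS via an annotation; this is just a sketch against my values.yaml, I have not verified it in this setup:

```
opensearch:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx-external
      # Tell ingress-nginx that the backend (OpenSearch on 9200) speaks TLS
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
```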
Here are the OpenSearch pods:
NAME                         READY  RESTARTS  STATUS   CPU  MEM   IP            NODE            AGE
opensearch-cluster-master-1  1/1    0         Running  16   1246  172.16.0.61   10.202.102.43   74m
opensearch-cluster-master-2  1/1    0         Running  28   1245  172.16.0.108  10.202.102.36   75m
opensearch-cluster-master-0  1/1    0         Running  25   1131  172.16.0.139  10.202.102.159  74m
Every OpenSearch pod shows more or less the same error message:
opensearch [2021-11-10T14:51:26,174][ERROR][o.o.s.s.h.n.SecuritySSLNettyHttpServerTransport] [opensearch-cluster-master-2] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310... (shortened)
opensearch io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310... (shortened)
opensearch   at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1261) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1329) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:508) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.59.Final.jar:4.1.59.Final]
opensearch   at java.lang.Thread.run(Thread.java:832) [?:?]
opensearch [2021-11-10T14:51:26,175][WARN ][o.o.h.AbstractHttpServerTransport] [opensearch-cluster-master-2] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/172.16.0.108:9200, remoteAddress=/172.16.0.52:46764}
opensearch io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310... (shortened)
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:478) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.59.Final.jar:4.1.59.Final]
opensearch   at java.lang.Thread.run(Thread.java:832) [?:?]
opensearch Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310... (shortened)
opensearch   at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1261) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1329) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:508) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
opensearch   ... 16 more
The shortened message is in hex format and decodes to an HTTP request:
GET / HTTP/1.1
Host: <domain>
X-Request-ID: 9c50cec41af24277b5e869f99e657733
X-Real-IP: 100.125.67.47
X-Forwarded-For: 100.125.67.47
X-Forwarded-Host: <domain>
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Scheme: https
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
accept-language: de,en-US;q=0.7,en;q=0.3
accept-encoding: gzip, deflate, br
authorization: Basic YWRtaW46YWRtaW4=
upgrade-insecure-requests: 1
sec-fetch-dest: document
sec-fetch-mode: navigate
sec-fetch-site: none
sec-fetch-user: ?1
cache-control: max-age=0
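The decoding above can be reproduced with a few lines of Python; the hex string is the untruncated prefix from the exception message, and the base64 value is the `authorization` header from the decoded request:

```python
import base64

# Hex prefix from the NotSslRecordException message (bytes before the truncation).
payload_hex = "474554202f20485454502f312e31"
print(bytes.fromhex(payload_hex).decode("ascii"))  # GET / HTTP/1.1

# A Basic auth header carries base64("user:password").
auth = "YWRtaW46YWRtaW4="
print(base64.b64decode(auth).decode("ascii"))      # admin:admin
```

So the ingress controller really is sending a plaintext HTTP request (with the demo admin credentials) straight into OpenSearch's TLS listener on port 9200.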
Does this have something to do with timeouts, or is something still misconfigured?
I think I forgot to mention that the Kubernetes cluster is only accessible through a VPN tunnel.
But the <domain> is accessible from everywhere.
Could this also be part of the problem?