Helm - Expose elasticsearch using ingress

Hi,
I’m using the helm chart for opendistro 1.13.2 and I’m having trouble setting up the ingress for exposing elasticsearch.
From my understanding, Elasticsearch communicates with clients such as Filebeat over HTTP/HTTPS. I previously set up SSL for Elasticsearch using my own self-signed certs, and from inside the cluster I’m able to talk to it over SSL using the clusterIP of the es-client service.
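For reference, from inside the cluster a request roughly like this works (the cluster IP placeholder is mine; 9200 is the es-client service port):

curl https://<es-client-cluster-ip>:9200 -u admin:admin --insecure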
However, from outside the cluster, the following curl command against my domain foo.bar.com:

curl https://foo.bar.com/elasticsearch -u admin:admin --insecure

throws a Bad Gateway error.
Below is the configuration of my es-client-ingress.yaml:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: elk-opendistro-es-client-service
  labels:
    app: elk-opendistro-es
    chart: "opendistro-es-1.13.2"
    release: "elk"
    heritage: "Helm"
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
   - host: "foo.bar.com"
     http:
       paths:
          - path: /elasticsearch
            pathType: Exact
            backend:
              service:
                name: elk-opendistro-es-client-service
                port:
                  number: 9200

Please don’t hesitate to ask me additional questions or request configuration files.

Based on a quick glance, that config means Elasticsearch itself would have to receive requests on the URL path /elasticsearch. Is that really what you do? I’d expect ES itself to expect / as the path.

You could use the path rewrite feature of the nginx ingress controller to change this in the Ingress by adding:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
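
If you also need sub-paths such as /elasticsearch/_cat/indices to reach ES, the usual ingress-nginx pattern is a capture group in the path combined with the rewrite target, plus the backend-protocol annotation since your backend is serving HTTPS. A rough sketch only; the regex path and pathType here are my assumptions, not values from your chart:

metadata:
  annotations:
    # forward the second capture group (everything after /elasticsearch/) to the pod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # talk TLS to the backend, since the ES HTTP layer serves HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - path: /elasticsearch(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: elk-opendistro-es-client-service
                port:
                  number: 9200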

Yes, I want to have Elasticsearch on the path “/elasticsearch”, because I’m using the path “/” for Kibana.
I’m also aware of the host conflict when you have two Ingresses pointing to the same host; let’s ignore that, since the Kibana ingress is disabled in this case for testing purposes.

I get that you want it on /elasticsearch on the outside, but I was asking whether the pod is also expecting the call on /elasticsearch (by default it expects it on /, so you’ll have to rewrite it).

The other question is: do you get the Bad Gateway from the gateway (nginx) or from the pod (Elasticsearch)? What do the logs of the application returning the error say?
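
To see which side produces it, you could check both logs, e.g. something along these lines (the deployment and namespace names are guesses based on the chart defaults and a standard ingress-nginx install):

kubectl logs deployment/elk-opendistro-es-client
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller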

OK, I understand now what “rewrite-target: /” does: you can still use the path “/elasticsearch” to talk to your cluster, but the annotation makes sure the Elasticsearch pod receives the request on the “/” path and not on “/elasticsearch”.
As for whose logs contain the error, it’s the elasticsearch-client pod, which shows the following:

[2021-06-25T13:30:36,281][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [elk-opendistro-es-client-684dd66b54-c989p] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a20656c6b322e746973616c6162732e636f6d0d0a557365722d4167656e743a206375726c2f372e35382e300d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205957527461573436595752746157343d0d0a582d466f727761726465642d466f723a2031302e34322e302e310d0a582d466f727761726465642d486f73743a20656c6b322e746973616c6162732e636f6d0d0a582d466f727761726465642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d466f727761726465642d5365727665723a207472616566696b2d366639636264396264342d36387263720d0a582d5265616c2d49703a2031302e34322e302e310d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f20485454502f312e310d0a486f73743a20656c6b322e746973616c6162732e636f6d0d0a557365722d4167656e743a206375726c2f372e35382e300d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205957527461573436595752746157343d0d0a582d466f727761726465642d466f723a2031302e34322e302e310d0a582d466f727761726465642d486f73743a20656c6b322e746973616c6162732e636f6d0d0a582d466f727761726465642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d466f727761726465642d5365727665723a207472616566696b2d366639636264396264342d36387263720d0a582d5265616c2d49703a2031302e34322e302e310d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
        at java.lang.Thread.run(Thread.java:832) [?:?]

This error appears in the logs when I try to curl my domain.
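
As a side note, the hex blob in that NotSslRecordException is just the raw plain-text HTTP request that reached the TLS port. Decoding it (for example with xxd, assuming it is available) shows a plain “GET / HTTP/1.1” whose X-Forwarded-Server header names a Traefik pod rather than nginx, i.e. an unencrypted request is hitting the HTTPS port:

echo "474554202f20485454502f312e310d0a486f73743a20656c6b322e746973616c6162732e636f6d0d0a557365722d4167656e743a206375726c2f372e35382e300d0a4163636570743a202a2f2a0d0a417574686f72697a6174696f6e3a204261736963205957527461573436595752746157343d0d0a582d466f727761726465642d466f723a2031302e34322e302e310d0a582d466f727761726465642d486f73743a20656c6b322e746973616c6162732e636f6d0d0a582d466f727761726465642d506f72743a203434330d0a582d466f727761726465642d50726f746f3a2068747470730d0a582d466f727761726465642d5365727665723a207472616566696b2d366639636264396264342d36387263720d0a582d5265616c2d49703a2031302e34322e302e310d0a4163636570742d456e636f64696e673a20677a69700d0a0d0a" | xxd -r -p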

It turns out the issue was that the Kubernetes distribution I had installed was k3s from Rancher, whose default installation ships with Traefik, another ingress controller, installed in the kube-system namespace. I thought I had nginx running as the ingress controller and was wondering why the nginx annotations had no effect on my Ingress objects.
Knowing that, I deleted Traefik and managed to fix my issue.
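
For anyone hitting the same thing on k3s: instead of deleting Traefik after the fact, the bundled Traefik can also be skipped at install time, roughly like this (newer k3s releases use --disable, older ones --no-deploy; check the docs for your version):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -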

Hi @Walid,
I’m also facing an issue while exposing the OpenSearch service. Could you please explain how you managed to resolve it?