Port 9200 not listening with Docker Deployment

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
opensearchproject/opensearch:latest & opensearchproject/opensearch:2.2.1

Describe the issue:
I am doing a Docker Compose deployment and using the host network instead of the bridge network. The container deploys successfully, but port 9200 isn't listening. Port 9600 is listening, but I can't figure out why 9200 isn't listening on the host network. Any advice is appreciated; I'm on week 2 of using OpenSearch.

Configuration:
version: '3'
services:
  opensearch-node1: # This is also the hostname of the container within the Docker network (i.e. https://opensearch-node1/)
    image: opensearchproject/opensearch:latest # Specifying the latest available image - modify if you want a specific version
    network_mode: host
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node1 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1 # Nodes to look for when discovering the cluster
      #- discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1 # Nodes eligible to serve as cluster manager
      #- cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms3g -Xmx3g" # Set min and max JVM heap sizes to at least 50% of system RAM
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - type: bind # Bind-mount a host directory into the container
        source: /data/dv1 # Host directory/mount point
        target: /usr/share/opensearch/data
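For what it's worth, in host network mode the ports bound by the OpenSearch process can be checked directly on the host; a quick sketch (container name taken from the config above):

    # Listening TCP ports on the host, filtered to the OpenSearch ports
    sudo ss -lntp | grep -E ':9200|:9300|:9600'
    # Look for the HTTP transport bind message in the container logs
    docker logs --tail 200 opensearch-node1 | grep -i publish_address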

Relevant Logs or Screenshots:
From the host:
netstat -lntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::9600 :::* LISTEN
tcp6 0 0 :::9650 :::* LISTEN
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 127.0.0.53:53 0.0.0.0:*
udp 0 0 10.20.2.108:68 0.0.0.0:*
udp 0 0 0.0.0.0:111 0.0.0.0:*
udp 0 0 127.0.0.1:323 0.0.0.0:*
udp6 0 0 :::111 :::*
udp6 0 0 ::1:323 :::*

curl -XGET https://localhost -u 'admin:admin' --insecure
curl: (7) Failed to connect to localhost port 443 after 0 ms: Connection refused
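(Note that this curl defaults to port 443 because no port is given in the URL; to probe the REST API itself, the port would need to be specified explicitly, e.g.:)

    curl -XGET https://localhost:9200 -u 'admin:admin' --insecure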

In docker compose logs I see:
opensearch-node1 | [2023-02-09T23:47:04,574][INFO ][o.o.p.c.PluginSettings ] [opensearch-node1] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1, httpsEnabled: false, cleanup-metrics-db-files: true, batch-metrics-retention-period-minutes: 7, rpc-port: 9650, webservice-port 9600

I see this block repeating over and over. It looks like it’s trying to bind to 9200 and maybe something is failing?

opensearch-node1 | [2023-02-09T23:47:20,076][INFO ][o.o.t.TransportService ] [opensearch-node1] publish_address {10.20.2.108:9300}, bound_addresses {[::]:9300}
opensearch-node1 | [2023-02-09T23:47:20,462][INFO ][o.o.b.BootstrapChecks ] [opensearch-node1] bound or publishing to a non-loopback address, enforcing bootstrap checks
opensearch-node1 | [2023-02-09T23:47:20,472][INFO ][o.o.c.c.Coordinator ] [opensearch-node1] cluster UUID [sgtlrW59RBWGH7wEvyh02g]
opensearch-node1 | [2023-02-09T23:47:30,493][WARN ][o.o.c.c.ClusterFormationFailureHelper] [opensearch-node1] cluster-manager not discovered or elected yet, an election requires a node with id [imuKUywUR2qqscGvN9I1Zg], have discovered [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] which is not a quorum; discovery will continue using from hosts providers and [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] from last-known cluster state; node term 61, last-accepted version 271 in term 61
opensearch-node1 | [2023-02-09T23:47:40,495][WARN ][o.o.c.c.ClusterFormationFailureHelper] [opensearch-node1] cluster-manager not discovered or elected yet, an election requires a node with id [imuKUywUR2qqscGvN9I1Zg], have discovered [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] which is not a quorum; discovery will continue using from hosts providers and [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] from last-known cluster state; node term 61, last-accepted version 271 in term 61
opensearch-node1 | [2023-02-09T23:47:50,497][WARN ][o.o.c.c.ClusterFormationFailureHelper] [opensearch-node1] cluster-manager not discovered or elected yet, an election requires a node with id [imuKUywUR2qqscGvN9I1Zg], have discovered [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] which is not a quorum; discovery will continue using from hosts providers and [{opensearch-node1}{kC_l6DkKTjmxnfjgCedklg}{ycqj27a5QrOcy3mIfzuVxA}{10.20.2.108}{10.20.2.108:9300}{dimr}{shard_indexing_pressure_enabled=true}] from last-known cluster state; node term 61, last-accepted version 271 in term 61
opensearch-node1 | [2023-02-09T23:47:50,498][WARN ][o.o.n.Node ] [opensearch-node1] timed out while waiting for initial discovery state - timeout: 30s
opensearch-node1 | [2023-02-09T23:47:50,528][INFO ][o.o.h.AbstractHttpServerTransport] [opensearch-node1] publish_address {10.20.2.108:9200}, bound_addresses {[::]:9200}
opensearch-node1 | [2023-02-09T23:47:50,529][INFO ][o.o.n.Node ] [opensearch-node1] started
opensearch-node1 | [2023-02-09T23:47:50,530][INFO ][o.o.s.OpenSearchSecurityPlugin] [opensearch-node1] Node started
opensearch-node1 | [2023-02-09T23:47:50,530][INFO ][o.o.s.c.ConfigurationRepository] [opensearch-node1] Will attempt to create index .opendistro_security and default configs if they are absent

hey @elfcounsel

This may be due to the network setting you chose. After reading the Docker documentation on host network settings, I quote:

If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host (the container shares the host’s networking namespace), and the container does not get its own IP-address allocated. For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.

Perhaps that might be it.

If you could use markdown in the text box when posting config files or logs, that would be great. It's kind of hard to read and find the issue otherwise.

EDIT: Not sure if this will work in your Docker container, but I was thinking of something like this:

ports:
        - "9200:80"

I think you are running a single-node cluster, so you have to configure it as such; the node is trying to find another member to form a quorum.
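For example (a minimal sketch of the single-node case), the seed-hosts and initial-cluster-manager settings would be dropped in favor of:

    environment:
      - discovery.type=single-node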

I think we have figured out that to run a cluster with one container per node we will need to use Docker Swarm. For some reason OpenSearch doesn't work with network_mode: host. Ultimately my goal is to run a container per node for testing purposes:
host_1:opensearch_node1
host_2:opensearch_node2
host_3:opensearch_node3
host_4:opensearch_dashboards

We thought we could do this with just individual VMs, but it looks like Swarm or Kubernetes is needed to manage the multi-host networking.
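(If you do go the Swarm route, one way to span hosts is an attachable overlay network created on the manager node; a sketch, with the network name as a placeholder:)

    # Run on the Swarm manager; containers on any Swarm host can then join this network
    docker network create --driver overlay --attachable opensearch-net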

Thanks for your help!


@elfcounsel
Hello, I have a container-per-node deployment 🙂

Docker-compose version.

version: '3'
services:
  opensearch-node1:
    image: opensearch-1.3.8
    container_name: apesmaster-node1-elastic
    environment:
      - cluster.name=apesmaster-cluster
      - node.name=apesmaster-node1
      - discovery.seed_hosts=xxx:9300,xxx:9300
      - cluster.initial_master_nodes=xxx,xxx,xxx
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    logging:
        driver: journald
        options:
          tag: "{{.Name}}"
    volumes:
      - /mnt/elkstore/usr/share/opensearch/data/:/usr/share/opensearch/data
      - /mnt/elkstore/var/log/analyzer/:/usr/share/opensearch/logs/
      - ./myopensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - ./jvm.options:/usr/share/opensearch/config/jvm.options
      - ./roles.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/roles.yml
      - ./audit.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/audit.yml
      - ./config_security.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml
      - ./roles_map.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/roles_mapping.yml
      - ./internal_users.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml
    mem_limit: 8g
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - mytest
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.3.8
    container_name: apesmaster-node1-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    logging:
        driver: journald
        options:
          tag: "{{.Name}}"
    environment:
          OPENSEARCH_HOSTS: '["https://xxx:9200","https://xxx:9200","https://xxx:9200"]'
    mem_limit: 0.5g
    networks:
      - mytest

networks:
  mytest:
    external: true
    name: host

Podman-compose version

version: '3'
services:
  opensearch-node1:
    image: opensearch-1.3.8
    container_name: apesmaster-node1-elastic
    environment:
      - cluster.name=apesmaster-cluster
      - node.name=apesmaster-node1
      - discovery.seed_hosts=xxx:9300,xxx:9300
      - cluster.initial_master_nodes=xxx,xxx,xxx
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    logging:
        driver: journald
        options:
          tag: "{{.Name}}"
    volumes:
      - /mnt/elkstore/usr/share/opensearch/data/:/usr/share/opensearch/data
      - /mnt/elkstore/var/log/analyzer/:/usr/share/opensearch/logs/
      - ./myopensearch.yml:/usr/share/opensearch/config/opensearch.yml
      - ./jvm.options:/usr/share/opensearch/config/jvm.options
      - ./roles.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/roles.yml
      - ./audit.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/audit.yml
      - ./config_security.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml
      - ./roles_map.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/roles_mapping.yml
      - ./internal_users.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml
    mem_limit: 8g
    network_mode: host

  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:1.3.8
    container_name: apesmaster-node1-kibana
    logging:
        driver: journald
        options:
          tag: "{{.Name}}"
    environment:
          OPENSEARCH_HOSTS: '["https://xxx:9200","https://xxx:9200","https://xxx:9200"]'
    mem_limit: 0.5g
    network_mode: host
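
For reference, once the Podman variant is up, the node can be checked from the host like this (assuming the default admin credentials are still in place):

    podman-compose up -d
    curl -k -u 'admin:admin' 'https://localhost:9200/_cluster/health?pretty'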

Sweet!! I will gladly borrow this and give it a try! This is awesome!