One docker-compose.yml, two port collections, two OpenSearch instances, one IP

Versions: 2.17.0/2.17.0/Fedora (latest as of 9/28/2024), n/a

I have these goals, from one docker-compose.yml

  1. 2 OpenSearch “demo” instance/cluster configurations (I’m lazy - using the standard 2-node configuration from the demo docker-compose.yml for each cluster)
  2. One instance runs on the standard ports: API 9200, dashboard 5601
  3. The other runs on, say, API 9210, dashboard 5611
  4. Each dashboard is isolated to its own cluster
  5. The data volumes are also isolated by cluster.

The goal is to differentiate each instance by port - only. Specifically:

  1. by API port number, 9200 or 9210 (or whatever ports I should use based on ranges) - ‘curl -XGET :9200/_search…’ or ‘curl -XGET :9210/_search…’
  2. By dashboard port number, 5601 or 5611.

In other words: which OpenSearch cluster a request goes to is determined strictly by the port it arrives on, not the IP address. And yes, I think I know that’s not how standard networking works without ‘extra work’ - some sort of routing that decides where the request goes by something other than the IP address. The only real difference between these instances is the data they’re populated with.
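
To make that concrete, here is roughly what I have in mind - the same container port published on two different host ports, which I believe plain Docker port publishing should be able to handle (fragments of the compose file below):

    # cluster 1 API node
    ports:
      - 9200:9200   # host 9200 -> cluster 1, container port 9200

    # cluster 2 API node
    ports:
      - 9210:9200   # host 9210 -> cluster 2, same container port 9200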

Is there a way to keep this process within the Docker networking mechanism? (I suspect I need extra configuration, and I’m not certain what that is - hopefully it’s minimal.)

As other similar posts have suggested, it’s easy to get the first instance up and working.

I have all 4 nodes up, 2 nodes for each cluster - ‘docker network inspect opensearch_opensearch-net’ tells me so:

 docker network inspect opensearch_opensearch-net
[
    {
        "Name": "opensearch_opensearch-net",
        "Id": "2bd03c657109ed68b53b0a2d3f4a2b30bd865730cef261f8eeaf3f0c0f6f4cce",
        "Created": "2024-09-29T21:19:40.039063114Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "281a5685a8514c690b4911bf7e7cf28c80696592082a3ad2683404c8fbe02710": {
                "Name": "opensearch-dashboards",
                "EndpointID": "68eed65974fb11f7d791235f71eb4d1ed7bee919ba970e4e7b37ac6671c38a9e",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "3b99243947250fb4029ddc99cd639c0487b2175e17b28d2dfd6921a7b7eb89d5": {
                "Name": "opensearch-node1",
                "EndpointID": "9b7c30fba3e3f6043f1f4f784c9b13aaf6329b9155ba5df61cb477258d589df7",
                "MacAddress": "02:42:ac:14:00:05",
                "IPv4Address": "172.20.0.5/16",
                "IPv6Address": ""
            },
            "51a593e873e5e3149e8be1bd5a020f32fd51a765d7f943aef65ce1fe7d38bd3f": {
                "Name": "opensearch-node2",
                "EndpointID": "6f8ceb3bcb8cd5090e2436a586a7c88c19797cc442a04cd2fe83ba21456847ae",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "5f303b8437d53d67ae06dbe2c2834295e24a79df98a7a98ccb01708ed6803e42": {
                "Name": "opensearch-node1_2",
                "EndpointID": "d98a1e03dfc529d273694b136084ed6024f225eb09e7c33c2015e222e812a689",
                "MacAddress": "02:42:ac:14:00:06",
                "IPv4Address": "172.20.0.6/16",
                "IPv6Address": ""
            },
            "cfea4895d454026d05883ceedd8b64fc93dc9ac5ab2fc2e398f83d53edcb5127": {
                "Name": "opensearch-dashboards_2",
                "EndpointID": "0fdc498c40a1bc1ddf29cf9c95f089d934fc60389836aba89efa9154a7010f42",
                "MacAddress": "02:42:ac:14:00:07",
                "IPv4Address": "172.20.0.7/16",
                "IPv6Address": ""
            },
            "e759e8a232c0d784e6226257b024d8852322eb8b0ebc24209c16666fc187ffba": {
                "Name": "opensearch-node2_2",
                "EndpointID": "eef573c2c95807aced842b4f45e3d0be604ac1eb53540b1bb584f62ef7854ddb",
                "MacAddress": "02:42:ac:14:00:04",
                "IPv4Address": "172.20.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "opensearch-net",
            "com.docker.compose.project": "opensearch",
            "com.docker.compose.version": "2.29.2"
        }
    }
]

I went into ‘opensearch-dashboards_2’ (172.20.0.7) and could ping all the other containers by IP address (including the ones in the other cluster that I don’t care about).

I think the problem is here in these snippets from ‘docker ps -a’:

... 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp  opensearch-dashboards

... 5611/tcp, 0.0.0.0:5611->5601/tcp, [::]:5611->5601/tcp opensearch-dashboards_2

I’m suspecting the 2nd snippet should read:

... 0.0.0.0:5611->5611/tcp, [::]:5611->5611/tcp opensearch-dashboards_2

But I haven’t a clue how to get it to do that.

Am I on the right track? Any clues?

Thanks!

@digable1 Could you share your docker-compose.yml?

(This could be a duplicate, as I think I already replied by responding to the email notification. If I’m being anxious and this is a duplicate, I’ll remove one of them.)

---
services:
  opensearch-node1:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true  # along with the memlock settings below, disables swapping
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m  # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}    # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and higher
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536  # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9300:9300
      - 9600:9600  # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - '5601'
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]'
    networks:
      - opensearch-net

  opensearch-node1_2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node1_2
    environment:
      - cluster.name=opensearch-cluster_2
      - node.name=opensearch-node1_2
      - discovery.seed_hosts=opensearch-node1_2,opensearch-node2_2
      - cluster.initial_cluster_manager_nodes=opensearch-node1_2,opensearch-node2_2
      - bootstrap.memory_lock=true  # along with the memlock settings below, disables swapping
      - network.host=localhost
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m  # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}    # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and higher
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536  # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1_2:/usr/share/opensearch/data_2
    ports:
      - 9210:9200
      - 9310:9300
      - 9610:9600  # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2_2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node2_2
    environment:
      - cluster.name=opensearch-cluster_2
      - node.name=opensearch-node2_2
      - discovery.seed_hosts=opensearch-node1_2,opensearch-node2_2
      - cluster.initial_cluster_manager_nodes=opensearch-node1_2,opensearch-node2_2
      - bootstrap.memory_lock=true
      - network.host=0.0.0.0
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2_2:/usr/share/opensearch/data_2
    networks:
      - opensearch-net
  opensearch-dashboards_2:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards_2
    ports:
      - 5611:5601
    expose:
      - '5611'
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1_2:9210","https://opensearch-node2_2:9210"]'
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:
  opensearch-data1_2:
  opensearch-data2_2:

networks:
  opensearch-net_2:
  opensearch-net:

@digable1 I’ve got your docker-compose.yml working.
First, the two network.host settings are not required:
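
    # the two lines quoted from the compose file above
      - network.host=localhost   # on opensearch-node1_2
      - network.host=0.0.0.0     # on opensearch-node2_2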

When you set network.host to localhost, the OpenSearch service only accepts local calls and rejects any external ones. This prevents communication between the nodes and the formation of the cluster.
I removed both lines from your docker-compose.yml.

Next, Docker doesn’t like underscores (“_”) in the container names, so I’ve changed all of the “_2” suffixes to “-2”.

The opensearch-dashboards-2 container started on the same port as opensearch-dashboards - 5601.
I’ve added the following to the opensearch-dashboards-2 environment variables:

    environment:
      SERVER_PORT: 5611
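
With Dashboards now listening on 5611 inside the container, the port mapping also publishes 5611 on both sides (as in the final file below):

    ports:
      - 5611:5611   # host 5611 -> container 5611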

Last one.

The OPENSEARCH_HOSTS entry pointing at port 9210 is incorrect. OpenSearch Dashboards communicates internally with your OpenSearch cluster over the Docker network and doesn’t use the published host ports, so it should still target container port 9200. The final configuration should look like the below.

    environment:
      SERVER_PORT: 5611
      OPENSEARCH_HOSTS: '["https://opensearch-node1-2:9200","https://opensearch-node2-2:9200"]'

I’m not sure what the purpose of opensearch-net_2 is, since your docker-compose.yml doesn’t use it. You could still assign it to cluster 2 if you want to separate the two clusters onto different networks.
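
If you do want network-level isolation later, a minimal sketch would be to attach only the cluster-2 services to that second network (fragment only; you would repeat the networks entry for opensearch-node2-2 and opensearch-dashboards-2):

    services:
      opensearch-node1-2:
        networks:
          - opensearch-net-2   # cluster 2 services join only this network

    networks:
      opensearch-net:
      opensearch-net-2: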

This is the final docker-compose.yml that worked in my lab.

---
services:
  opensearch-node1:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true  # along with the memlock settings below, disables swapping
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m  # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}    # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and higher
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536  # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9300:9300
      - 9600:9600  # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2
      - bootstrap.memory_lock=true
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - '5601'
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]'
    networks:
      - opensearch-net

  opensearch-node1-2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node1-2
    environment:
      - cluster.name=opensearch-cluster-2
      - node.name=opensearch-node1-2
      - discovery.seed_hosts=opensearch-node1-2,opensearch-node2-2
      - cluster.initial_cluster_manager_nodes=opensearch-node1-2,opensearch-node2-2
      - bootstrap.memory_lock=true  # along with the memlock settings below, disables swapping
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m  # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}    # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and higher
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536  # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1-2:/usr/share/opensearch/data
    ports:
      - 9210:9200
      - 9310:9300
      - 9610:9600  # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-node2-2:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node2-2
    environment:
      - cluster.name=opensearch-cluster-2
      - node.name=opensearch-node2-2
      - discovery.seed_hosts=opensearch-node1-2,opensearch-node2-2
      - cluster.initial_cluster_manager_nodes=opensearch-node1-2,opensearch-node2-2
      - bootstrap.memory_lock=true
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2-2:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards-2:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards-2
    ports:
      - 5611:5611
    expose:
      - '5611'
    environment:
      SERVER_PORT: 5611
      OPENSEARCH_HOSTS: '["https://opensearch-node1-2:9200","https://opensearch-node2-2:9200"]'
    networks:
      - opensearch-net

volumes:
  opensearch-data1:
  opensearch-data2:
  opensearch-data1-2:
  opensearch-data2-2:

networks:
  opensearch-net-2:
  opensearch-net:
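
To check both clusters from the host (assuming the demo security config, with OPENSEARCH_INITIAL_ADMIN_PASSWORD exported in your shell), something like:

    curl -sk -u admin:"$OPENSEARCH_INITIAL_ADMIN_PASSWORD" https://localhost:9200/_cluster/health
    curl -sk -u admin:"$OPENSEARCH_INITIAL_ADMIN_PASSWORD" https://localhost:9210/_cluster/health

and the dashboards should be reachable at http://localhost:5601 and http://localhost:5611.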

Yup, it certainly works! Thanks much!

I’m now doing my own internal post-mortem to figure out how I got here before posting and how I missed what you found. So far:

1. Regarding network.host: I noticed the “docker ps -a” output for node 2 started with the last port of node 1, and I wondered whether that was the problem and whether specifying network.host would restart the ‘discovery’ from the beginning (it didn’t). Not sure why I ended up with localhost (I started with 0.0.0.0).
2. Regarding underscores vs. dashes in container names (snake case vs. kebab case): I did not know this. And apparently it’s not consistent between v1 (docker-compose) and v2 (docker compose). In my duck-duck-going to try to understand this one, the results seem to be all over the map. Doesn’t matter - it was a problem in this case, and now it’s FMI (for my information).
3. I did not know about the SERVER_PORT environment variable (I may have seen it while searching for the answer and it just didn’t register). Of course, in hindsight, now that it’s there it makes perfect sense!
4. I was wondering about the external vs. internal port specification on that line! I tried it both ways - no change. Of course, I had more than one problem, so I wasn’t going to get this answer until the other issues were fixed.
5. Regarding the separate networks, one of them unused: it’s residue. I thought I needed to separate the clusters and started with that configuration. While “going nowhere” with debugging, I realized binding different ports was enough separation for my use case and hoped that consolidating the networks would simplify the problem and help fix it (obviously, nope). I just never cleaned up the now-unused network.

So yeah, I learned a few things. Again, thanks!
