Does security prevent a second node?

Hi,

I am trying to create a two-node cluster. I haven’t set up any custom TLS credentials; I just installed Elasticsearch with the demo certificates.
I just can’t get a second node to appear.
In both config files I’ve set network.host to 0.0.0.0, the seed hosts to a list containing both IPs, and the initial master nodes to the list of hostnames.
But if I check the cluster health, it always shows only one node.

If I try to connect to the IP of the second node using curl on port 9200, I get a message that the credentials could not be verified.

What am I doing wrong?
Thanks

For our two-node cluster, we use the following elasticsearch.yml config:

cluster.name: es-test.domain.tld
node.name: test-elk1
network.host: ["_local_", "_site_"]
http.host: ["_local_", "_site_"]
http.port: 9200
transport.host: ["_local_", "_site_"]
transport.port: 9300
path.data: /opt/elk-data
path.logs: /var/log/elasticsearch

cluster.initial_master_nodes: ["test-elk1", "test-elk2"]
discovery.seed_hosts: ["192.168.1.20:9300", "192.168.1.21:9300"]
gateway.recover_after_data_nodes: 1
node.master: true
node.data: true
...
opendistro_security.ssl.http.enabled: false

Nodes in the cluster:

curl localhost:9200/_cat/nodes
192.168.1.20  54 34 23 1.09 1.96 2.85 dim - test-elk1
192.168.1.21  67 45 29 1.23 1.43 2.47 dim * test-elk2

Documentation

Important discovery and cluster formation settings

I tried that, and when I restart the service on both nodes, nothing happens at all. I get neither a “failed” nor an “ok”…

What is in the logs?

/var/log/elasticsearch/%cluster.name%.log

[2020-08-19T10:29:40,714][INFO ][o.e.n.Node ] [node-1] version[7.8.0], pid[4425], build[oss/rpm/757314695644ea9a1dc2fecd26d1a43856725e65/2020-06-14T19:35:50.234439Z], OS[Linux/4.18.0-147.5.1.el8_1.x86_64/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/14.0.1/14.0.1+7]
[2020-08-19T10:29:40,718][INFO ][o.e.n.Node ] [node-1] JVM home [/usr/share/elasticsearch/jdk]
[2020-08-19T10:29:40,718][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:G1ReservePercent=25, -XX:InitiatingHeapOccupancyPercent=30, -Djava.io.tmpdir=/tmp/elasticsearch-8731804530264085887, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Dclk.tck=100, -Djdk.attach.allowAttachSelf=true, -Djava.security.policy=file:///usr/share/elasticsearch/plugins/opendistro_performance_analyzer/pa_config/es_security.policy, -XX:MaxDirectMemorySize=536870912, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=rpm, -Des.bundled_jdk=true]
[2020-08-19T10:29:41,648][INFO ][c.a.o.e.p.c.PluginSettings] [node-1] Config: metricsLocation: /dev/shm/performanceanalyzer/, metricsDeletionInterval: 1, httpsEnabled: false, cleanup-metrics-db-files: true
[2020-08-19T10:29:41,854][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] ES Config path is /etc/elasticsearch
[2020-08-19T10:29:42,008][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] JVM supports TLSv1.3
[2020-08-19T10:29:42,009][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] Config directory is /etc/elasticsearch/, from there the key- and truststore files are resolved relatively
[2020-08-19T10:29:42,474][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] TLS Transport Client Provider : JDK
[2020-08-19T10:29:42,475][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] TLS Transport Server Provider : JDK
[2020-08-19T10:29:42,475][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] TLS HTTP Provider : null
[2020-08-19T10:29:42,476][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] Enabled TLS protocols for transport layer : [TLSv1.3, TLSv1.2, TLSv1.1]
[2020-08-19T10:29:42,476][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [node-1] Enabled TLS protocols for HTTP layer :
[2020-08-19T10:29:42,768][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] Clustername: mcs
[2020-08-19T10:29:42,773][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/kirk.pem has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,773][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/esnode.pem has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,774][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/root-ca.pem has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,774][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/esnode-key.pem has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,774][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/kirk-key.pem has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,775][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [node-1] File /etc/elasticsearch/.elasticsearch.keystore.initial_md5sum has insecure file permissions (should be 0600)
[2020-08-19T10:29:42,889][INFO ][c.a.o.j.JobSchedulerPlugin] [node-1] Loaded scheduler extension: opendistro_anomaly_detector, index: .opendistro-anomaly-detector-jobs
[2020-08-19T10:29:43,024][INFO ][c.a.o.j.JobSchedulerPlugin] [node-1] Loaded scheduler extension: opendistro-managed-index, index: .opendistro-ism-config
[2020-08-19T10:29:43,080][INFO ][o.e.p.PluginsService ] [node-1] loaded module [aggs-matrix-stats]
[2020-08-19T10:29:43,080][INFO ][o.e.p.PluginsService ] [node-1] loaded module [analysis-common]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [geo]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-common]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-geoip]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-user-agent]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [kibana]
[2020-08-19T10:29:43,081][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-expression]
[2020-08-19T10:29:43,082][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-mustache]
[2020-08-19T10:29:43,082][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-painless]
[2020-08-19T10:29:43,082][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-extras]
[2020-08-19T10:29:43,082][INFO ][o.e.p.PluginsService ] [node-1] loaded module [parent-join]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [percolator]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [rank-eval]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [reindex]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-url]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [systemd]
[2020-08-19T10:29:43,083][INFO ][o.e.p.PluginsService ] [node-1] loaded module [tasks]
[2020-08-19T10:29:43,084][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transport-netty4]
[2020-08-19T10:29:43,084][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro-anomaly-detection]
[2020-08-19T10:29:43,084][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro-job-scheduler]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro-knn]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro_alerting]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro_index_management]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro_performance_analyzer]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro_security]
[2020-08-19T10:29:43,085][INFO ][o.e.p.PluginsService ] [node-1] loaded plugin [opendistro_sql]
[2020-08-19T10:29:43,120][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/ (/dev/mapper/rhel-root)]], net usable_space [8.7gb], net total_space [16.9gb], types [xfs]
[2020-08-19T10:29:43,121][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [1gb], compressed ordinary object pointers [true]
[2020-08-19T10:29:43,203][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [VUpAege4Sgm6fCDWJaqOKA], cluster name [mcs]
[2020-08-19T10:29:45,848][WARN ][c.a.o.s.c.Salt ] [node-1] If you plan to use field masking pls configure compliance salt e1ukloTsQlOgPquJ to be a random string of 16 chars length identical on all nodes
[2020-08-19T10:29:45,862][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing on REST API is enabled.
[2020-08-19T10:29:45,862][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] [GRANTED_PRIVILEGES, AUTHENTICATED] are excluded from REST API auditing.
[2020-08-19T10:29:45,863][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing on Transport API is enabled.
[2020-08-19T10:29:45,863][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] [GRANTED_PRIVILEGES, AUTHENTICATED] are excluded from Transport API auditing.
[2020-08-19T10:29:45,863][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing of request body is enabled.
[2020-08-19T10:29:45,863][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Bulk requests resolution is disabled during request auditing.
[2020-08-19T10:29:45,863][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Index resolution is enabled during request auditing.
[2020-08-19T10:29:45,864][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Sensitive headers auditing is enabled.
[2020-08-19T10:29:45,864][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing requests from kibanaserver users is disabled.
[2020-08-19T10:29:45,892][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing of external configuration is disabled.
[2020-08-19T10:29:45,893][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing of internal configuration is disabled.
[2020-08-19T10:29:45,893][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing only metadata information for read request is disabled.
[2020-08-19T10:29:45,894][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing will watch {} for read requests.
[2020-08-19T10:29:45,894][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing read operation requests from kibanaserver users is disabled.
[2020-08-19T10:29:45,894][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing only metadata information for write request is disabled.
[2020-08-19T10:29:45,894][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing diffs for write requests is disabled.
[2020-08-19T10:29:45,895][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing write operation requests from kibanaserver users is disabled.
[2020-08-19T10:29:45,895][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Auditing will watch for write requests.
[2020-08-19T10:29:45,895][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] .opendistro_security is used as internal security index.
[2020-08-19T10:29:45,895][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Internal index used for posting audit logs is null
[2020-08-19T10:29:46,001][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-1] Message routing enabled: true
[2020-08-19T10:29:46,143][INFO ][c.a.o.s.f.OpenDistroSecurityFilter] [node-1] indices are made immutable.
[2020-08-19T10:29:46,181][INFO ][c.a.o.a.b.ADCircuitBreakerService] [node-1] Registered memory breaker.
[2020-08-19T10:29:46,439][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2020-08-19T10:29:46,761][WARN ][stderr ] [node-1] java.util.ServiceConfigurationError: com.sun.tools.attach.spi.AttachProvider: Provider sun.tools.attach.LinuxAttachProvider not found
[2020-08-19T10:29:46,762][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:591)
[2020-08-19T10:29:46,762][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.nextProviderClass(ServiceLoader.java:1213)
[2020-08-19T10:29:46,762][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1222)
[2020-08-19T10:29:46,762][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator$1.run(ServiceLoader.java:1269)
[2020-08-19T10:29:46,763][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator$1.run(ServiceLoader.java:1268)
[2020-08-19T10:29:46,763][WARN ][stderr ] [node-1] at java.base/java.security.AccessController.doPrivileged(AccessController.java:391)
[2020-08-19T10:29:46,763][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1271)
[2020-08-19T10:29:46,763][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$2.hasNext(ServiceLoader.java:1301)
[2020-08-19T10:29:46,764][WARN ][stderr ] [node-1] at java.base/java.util.ServiceLoader$3.hasNext(ServiceLoader.java:1386)
[2020-08-19T10:29:46,764][WARN ][stderr ] [node-1] at jdk.attach/com.sun.tools.attach.spi.AttachProvider.providers(AttachProvider.java:259)
[2020-08-19T10:29:46,764][WARN ][stderr ] [node-1] at jdk.attach/com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:200)
[2020-08-19T10:29:46,764][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.runAttachDump(ThreadList.java:144)
[2020-08-19T10:29:46,764][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.lambda$runThreadDump$6(ThreadList.java:246)
[2020-08-19T10:29:46,765][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.core.Util.lambda$invokePrivileged$1(Util.java:56)
[2020-08-19T10:29:46,765][WARN ][stderr ] [node-1] at java.base/java.security.AccessController.doPrivileged(AccessController.java:312)
[2020-08-19T10:29:46,765][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.core.Util.invokePrivileged(Util.java:52)
[2020-08-19T10:29:46,765][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.runThreadDump(ThreadList.java:246)
[2020-08-19T10:29:46,766][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.jvm.ThreadList.getNativeTidMap(ThreadList.java:110)
[2020-08-19T10:29:46,766][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.OSMetricsCollector.collectMetrics(OSMetricsCollector.java:57)
[2020-08-19T10:29:46,766][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.lambda$run$0(PerformanceAnalyzerMetricsCollector.java:57)
[2020-08-19T10:29:46,766][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.core.Util.lambda$invokePrivileged$1(Util.java:56)
[2020-08-19T10:29:46,767][WARN ][stderr ] [node-1] at java.base/java.security.AccessController.doPrivileged(AccessController.java:312)
[2020-08-19T10:29:46,767][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.core.Util.invokePrivileged(Util.java:52)
[2020-08-19T10:29:46,767][WARN ][stderr ] [node-1] at com.amazon.opendistro.elasticsearch.performanceanalyzer.collectors.PerformanceAnalyzerMetricsCollector.run(PerformanceAnalyzerMetricsCollector.java:57)
[2020-08-19T10:29:46,767][WARN ][stderr ] [node-1] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
[2020-08-19T10:29:46,768][WARN ][stderr ] [node-1] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
[2020-08-19T10:29:46,768][WARN ][stderr ] [node-1] at java.base/java.lang.Thread.run(Thread.java:832)
[2020-08-19T10:29:46,881][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [node-1] PerformanceAnalyzer Enabled: true
[2020-08-19T10:29:46,958][INFO ][o.e.n.Node ] [node-1] initialized
[2020-08-19T10:29:46,958][INFO ][o.e.n.Node ] [node-1] starting …
[2020-08-19T10:29:47,197][ERROR][o.e.b.Bootstrap ] [node-1] Exception
org.elasticsearch.transport.BindTransportException: Failed to bind to 10.0.2.32:9300
at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:408) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:372) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:135) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:234) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.node.Node.start(Node.java:733) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:317) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:402) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) [elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.8.0.jar:7.8.0]
Caused by: java.net.BindException: Die angeforderte Adresse kann nicht zugewiesen werden
at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:479) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:468) ~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220) ~[?:?]
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134) ~[?:?]
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973) ~[?:?]
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248) ~[?:?]
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356) ~[?:?]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:832) ~[?:?]
[2020-08-19T10:29:47,206][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: BindTransportException[Failed to bind to 10.0.2.32:9300]; nested: BindException[Die angeforderte Adresse kann nicht zugewiesen werden];
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.8.0.jar:7.8.0]
Caused by: org.elasticsearch.transport.BindTransportException: Failed to bind to 10.0.2.32:9300
at org.elasticsearch.transport.TcpTransport.bindToPort(TcpTransport.java:408) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:372) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:135) ~[?:?]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:234) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:59) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.node.Node.start(Node.java:733) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:317) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:402) ~[elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.8.0.jar:7.8.0]
… 6 more
Caused by: java.net.BindException: Die angeforderte Adresse kann nicht zugewiesen werden
at sun.nio.ch.Net.bind0(Native Method) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:479) ~[?:?]
at sun.nio.ch.Net.bind(Net.java:468) ~[?:?]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:220) ~[?:?]
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134) ~[?:?]
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973) ~[?:?]
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248) ~[?:?]
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356) ~[?:?]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]

org.elasticsearch.transport.BindTransportException: Failed to bind to 10.0.2.32:9300

10.0.2.32 is supposed to be the second node. (The nested BindException message is German: “Die angeforderte Adresse kann nicht zugewiesen werden” means “The requested address cannot be assigned”.)

As a little context, in case it is important: both nodes run in their own VirtualBox VM on RHEL 8, installed through RPM.

For this error, Google says it is related to one of the following:

  1. Elasticsearch failed to bind to the specified port - check that no other application is already bound to it.
  2. Elasticsearch failed to bind to the specified IP - check the network.host and http.host parameters. Try to use the VM IP or 0.0.0.0.

Also, check that your VirtualBox network configuration permits multiple VMs with different IPs that can communicate with each other.

They can: I can use ssh and scp between them, and pinging works too. But for some reason not the ES ports.
I had a look at the firewall and iptables. Everything should be open, and this is a fresh install. Nothing apart from ES should be using those ports.

To exclude network communication issue:

  1. Node A - Start HTTP Server
    python -m SimpleHTTPServer 9300

  2. Node B - Connect to the Node A HTTP server
    curl node-a-ip:9300
    nc -zv node-a-ip:9300

And vice versa: Node B - run HTTP server and connect to it from node A.
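The test above can be scripted; here is a minimal sketch, shown against 127.0.0.1 so it is self-contained (substitute the other node’s IP in practice, and stop the elasticsearch service first so port 9300 is free). Note that on Python 3 the module is http.server rather than SimpleHTTPServer:

```shell
# Start a throwaway HTTP server on the transport port (run on node A).
# On Python 2 the equivalent is: python -m SimpleHTTPServer 9300
python3 -m http.server 9300 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1

# Run from node B against node A's IP (127.0.0.1 here for illustration):
# an HTTP 200 proves plain TCP connectivity on port 9300.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9300

# nc takes host and port as separate arguments, not "host:port":
#   nc -zv 127.0.0.1 9300

kill "$SERVER_PID"
```

If this succeeds in both directions but Elasticsearch still cannot bind, the problem is in the bind configuration rather than the network.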

In both directions curl gives me a listing of the home directory, and the other VM acknowledges the connection, but nc says “Ncat: Could not resolve hostname "10.0.2.38:9300": Name or service not known. QUITTING.”

But both only work if I stop the elasticsearch service; otherwise SimpleHTTPServer complains that the port is in use.

Aha, sorry - nc command was incorrect:
nc -zv node-a-ip 9300

Anyway, we tested it - there is no network issue.

What value do you use for network.host and http.host?

Interestingly, if I do it on port 9200, the server itself gives me a “bad request version” error every 2 seconds or so.

I used your suggestion, using the IPs of both nodes: network.host: ["10.0.2.34", "10.0.2.38"] (the second IP changed; I am using a different VM).
Same for http.host. And the message about being unable to bind to the port is always about the remote node.

It should be only the current node’s IP, and to keep it simple just specify one IP even if the node has two IPs:
Node A
network.host: node-a-ip
http.host: node-a-ip

Node B
network.host: node-b-ip
http.host: node-b-ip

My current elasticsearch.yml is:

cluster.name: cluster
node.name: node-1
network.host: 10.0.2.34
http.host: 10.0.2.34
discovery.seed_hosts: ["10.0.2.34", "10.0.2.38"]
cluster.initial_master_nodes: ["10.0.2.34", "10.0.2.38"]

If I use the kibana dev tools and do

GET _cluster/health

I get

{"statusCode":502,"error":"Bad Gateway","message":"connect ECONNREFUSED 127.0.0.1:9200"}

I don’t know what I did, but now Kibana is refusing to work. The service is not running, but lsof shows

node 980 kibana 18u IPv4 29954 0t0 TCP 127.0.0.1:5601 (LISTEN)

Also no change after a reboot.

First we should configure Elasticsearch to work correctly; then Kibana will work - it listens on the port but fails to connect to Elasticsearch.

Please set your settings accordingly - all the little things matter:

cluster.initial_master_nodes: ["node-1", "node-2"]
discovery.seed_hosts: ["10.0.2.34:9300", "10.0.2.38:9300"]

node-1 and node-2 in cluster.initial_master_nodes should match exactly what is set as node.name in the elasticsearch.yml on each host

More information in the documentation: Important discovery and cluster formation settings.

I moved from Kibana to using curl to get the Elasticsearch response. It is still not working.
As soon as I set network.host to something other than 0.0.0.0, or set http.host to anything, I get a connection refused when I try to use curl.

discovery.seed_hosts and cluster.initial_master_nodes are set exactly the way suggested - same names, same cluster name.

Can you please post both configs here.
Also, please check /etc/hosts on both nodes and post the nodes’ hostnames.

Everything that is not commented out:

Node 1:

cluster.name: cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
transport.host: 10.0.2.34
transport.port: 9300
discovery.seed_hosts: ["10.0.2.34:9300", "10.0.2.38:9300"]
cluster.initial_master_nodes: ["node-1", "node-2"]
node.master: true
node.data: true
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: false
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Node 2:

cluster.name: cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
transport.host: 10.0.2.38
transport.port: 9300
discovery.seed_hosts: ["10.0.2.34:9300", "10.0.2.38:9300"]
cluster.initial_master_nodes: ["node-1", "node-2"]
node.master: true
node.data: true
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: false
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Everything else is commented out. This way, I can use curl or Kibana to get the status of Elasticsearch. But if I add, for example, network.host: 10.0.2.34 and/or http.host: 10.0.2.34 to node 1, I get a “connection refused” when I try to use curl.

All the certificates there are the demo certificates that come with the installation.

/etc/hosts on both:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

hostnames are node-1 and node-2

From your post:

Node 2:
node.name: node-1

It is node-1? It should be node-2. Every node’s config should have its own node name configured.
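With that fix applied, the relevant part of node 2’s elasticsearch.yml would look like this (a sketch based on the configs posted above; only node.name changes, everything else stays as posted):

```yaml
cluster.name: cluster
node.name: node-2        # was node-1, which collided with node 1's name
transport.host: 10.0.2.38
transport.port: 9300
discovery.seed_hosts: ["10.0.2.34:9300", "10.0.2.38:9300"]
cluster.initial_master_nodes: ["node-1", "node-2"]
```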