Install Help Ubuntu 18/20


I have an ELK stack and am moving to OpenSearch. I am trying to follow the documentation but am having issues with it. I am using Ubuntu 18 and have also tried Ubuntu 20. I have tried the tarball install, but after running the install script I can't find any directories to modify any files. The documentation says the OpenSearch home directory is /usr/share/opensearch, but that directory doesn't exist.

I have had more success with the docker-compose file: I can get to the dashboards and log in. I have installed the opensearch output plugin on my existing Logstash host, but Logstash doesn't seem to be sending the logs over. I am not seeing anything helpful in the OpenSearch or Logstash logs.

Any assistance would be appreciated; I am pretty lost.

Hi @michael.anderton - I think I might be able to point you in the right direction.

The OpenSearch and OpenSearch Dashboards docker containers have OpenSearch installed in the directory /usr/share/opensearch, that much is correct. When dealing with the tarball, I believe some assembly is required.

There’s some good starting documentation here: Tarball - OpenSearch documentation, which describes it as follows:

The tarball installation provides a self-contained directory with everything you need to run OpenSearch, including an integrated Java Development Kit (JDK). The tarball is a good option for testing and development.

That is to say, if you’d like to make your Ubuntu environment mimic the docker container, you can simply move the contents of that tarball to /usr/share/opensearch. Since the directory is self-contained, you can put it anywhere you like.
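To make the “assembly” concrete, here’s a sketch of the two steps (extract, then relocate). The version number is a placeholder for whatever tarball you downloaded, and to keep the demo runnable without root it stages everything under a temp directory; for a real install you’d extract your download and `sudo mv` the result to /usr/share/opensearch.

```shell
# Work in a scratch directory so no root access is needed for this demo.
WORK=$(mktemp -d)

# Stand-in for the downloaded tarball (the version number is hypothetical --
# substitute the file you actually fetched).
mkdir -p "$WORK/opensearch-2.11.1/bin" "$WORK/opensearch-2.11.1/config"
tar -czf "$WORK/opensearch-2.11.1-linux-x64.tar.gz" -C "$WORK" opensearch-2.11.1
rm -rf "$WORK/opensearch-2.11.1"

# Step 1: extract -- produces a self-contained opensearch-<version>/ directory.
tar -xzf "$WORK/opensearch-2.11.1-linux-x64.tar.gz" -C "$WORK"

# Step 2: relocate -- the directory is self-contained, so it can live anywhere.
# For a real install this would be: sudo mv opensearch-2.11.1 /usr/share/opensearch
mkdir -p "$WORK/usr/share"
mv "$WORK/opensearch-2.11.1" "$WORK/usr/share/opensearch"

ls "$WORK/usr/share/opensearch"
```

After the real move, the node is started from bin/ inside that directory using the bundled JDK.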

Happy to help with any other questions!



Thanks for the response. I think I am going to stick with the docker-compose route as it is the farthest I have gotten.

You may be able to help on an additional point. I have Logstash running on a separate host and have installed the opensearch output plugin. I have commented out the security portion of config/opensearch.yml, and Logstash is reporting no errors, but I am not seeing any data coming in. Any ideas?

Sure @michael.anderton, although there might be a bit of troubleshooting required here depending on what your security configuration is. Some direct log output indicating any errors might help. Is Logstash logging any errors?

If you’re using the self-signed certificates that came with OpenSearch, you may have to configure Logstash to not verify the certificates being returned by the search API on port 9200. This is usually what people bump into.
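In the logstash-output-opensearch plugin, that usually means setting `ssl_certificate_verification => false` on the output block. A rough sketch, with host and credentials as placeholders:

```
output {
  opensearch {
    hosts => ["https://localhost:9200"]
    user => "admin"                          # placeholder credentials
    password => "admin"
    ssl => true
    ssl_certificate_verification => false    # accept the demo self-signed certs
  }
}
```

Note this disables certificate checking entirely, so it’s for testing only; for production you’d point the plugin at a trusted CA instead.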

You can double-check whether data is actually coming in by checking the ‘Discover’ menu item under OpenSearch Dashboards.

Are you checking for data by using the Discover tool or by submitting queries? Before you can query data, you’ll have to create an index pattern for it under Stack Management => Index Patterns.

Let’s see if we can verify that information is actually coming in first.



I had to restore a snapshot and now I can’t even pull the image. I’m reading it’s an MTU issue, but I’ve changed it to 1500 and it keeps failing.


I have finally had time to get back into OpenSearch. I have it running with the docker-compose.yml file, and I added the logstash plugin image. The issue I am facing now is with index policies. When I try to save, I am getting a “failed to fetch” error. I have yet to find documentation or anything in the Google universe that helps. Any tips?

Hi @michael.anderton - I don’t think I’ll be of much help without seeing a specific error. Any chance you could grab a screenshot and/or paste any errors coming up in the log?


I brought the whole thing down and back up again and no more error! Thanks for the reply!


Sorry you had to nuke from orbit, @michael.anderton. I hope you didn’t have too much data in there to lose.

Sometimes it’s the only way to be sure. :slight_smile:



I’ve got everything running again with Docker, including the logstash-oss image, but I cannot seem to figure out how to get data into it. I have two routes I am working: Logstash and a Beats agent.

I have modified /usr/share/logstash/pipeline/logstash.conf to collect data from one host, but I am not seeing the index populate in OpenSearch.

I have installed auditbeat on the Ubuntu host where OpenSearch is running and modified the /etc/auditbeat/auditbeat.yml file a few times to try to reach opensearch-node1, but running auditbeat setup cannot connect. I have tried opensearch-node1, localhost, and the IP of the host.
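For reference, the connection settings auditbeat uses live under `output.elasticsearch` in auditbeat.yml. A hedged sketch of that fragment, assuming the demo self-signed certificates and placeholder admin credentials (since auditbeat runs on the host, `localhost:9200` only works if the container publishes that port):

```yaml
# /etc/auditbeat/auditbeat.yml (fragment) -- values are placeholders
output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: "admin"
  password: "admin"
  ssl:
    verification_mode: none   # demo self-signed certificates; testing only
```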

I am going back over the documentation with no luck.

Any ideas for me to try?

I’m afraid I’m not familiar with the Beats agent, although I do aspire to take on more knowledge about all the various ingestion tools out there. Logstash I’m a bit more intimate with. Mind posting your logstash.conf?

What helped for me with logstash-oss was to start with the most basic configuration that I could: take input from STDIN (the keyboard) and display on STDOUT the JSON messages that would have been sent. Run it, type random stuff in, and see if it spits out a JSON message. From there I usually iterate by adding things to my input{} section and checking that the messages are spit out on the screen when I run it. At that point you know the only thing left to configure is the output{} section to send everything to OpenSearch. Either way, I’d love to see the output{} section of your logstash.conf.
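The minimal smoke-test pipeline described above looks like this:

```
# Minimal smoke test: type a line at the keyboard, get a JSON event back.
input {
  stdin {}
}

output {
  stdout { codec => rubydebug }   # pretty-print each event to the console
}
```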


I need to figure out how to make this config persist through docker up/down, but this is what I am using.

input {
  syslog {
    port => 5515
  }
}

output {
  opensearch {
    index => "syslog-%{+YYYY.MM}"
    hosts => ["http://localhost:9200"]
    ssl_certificate_verification => false
    user => "admin"
    password => "admin"
  }
}
I do have a system pointing logs to port 5515. This is the same input block I used in the ELK stack I am moving away from. With this config, when I attach to the Logstash logs, I am getting a connection-refused error.
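On making the config survive docker up/down: one common approach is to bind-mount the pipeline directory in docker-compose so the container reads the config from the host. A sketch, assuming the compose service is named logstash, the config lives in ./logstash/pipeline next to docker-compose.yml, and the logstash-oss image published by the OpenSearch project is in use:

```yaml
# docker-compose.yml (fragment) -- service and path names are assumptions
services:
  logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:latest
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline   # host copy survives restarts
```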

Full error:

Attempted to resurrect connection to dead OpenSearch instance, but got an error {:url=>"http://admin:xxxxxx@localhost:9200/", :exception=>LogStash::Outputs::OpenSearch::HttpClient::Pool::HostUnreachableError, :message=>"OpenSearch Unreachable: [http://admin:xxxxxx@localhost:9200/][Manticore::SocketException] Connect to localhost:9200 [localhost/] failed: Connection refused (Connection refused)"}

Thanks @michael.anderton

This likely has to do with the http in your hosts section. Mind changing that to https to see if you have better luck?
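A sketch of the corrected output block, with only that one change (credentials still placeholders, and the verification flag left in place for the self-signed certs):

```
output {
  opensearch {
    index => "syslog-%{+YYYY.MM}"
    hosts => ["https://localhost:9200"]   # https, not http
    ssl_certificate_verification => false
    user => "admin"
    password => "admin"
  }
}
```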


Thanks for pointing that out. Simple mistakes sure can be frustrating. That seems to have fixed the connection issue; I no longer see the error in the logs. Did my input block look okay? The logs show Logstash listening on 5044, which is the default, and not on port 5515.


Looked good to me. The best way to check is to change your output section to just use the stdout plugin:

output { stdout {} }

That way you can see if it spits the JSON messages out to the screen.

Additionally, you can use the ‘Discover’ button in your OpenSearch Dashboards instance to make sure that records are being added to your index.

Glad I was able to help!