Is Data Prepper necessary to send logs from Fluent Bit to OpenSearch?

Hey there,

Just starting with OpenSearch and wanting to send some logs from my web server to an OpenSearch server.
I have set up Fluent Bit on the web server and was under the assumption that I could send my logs directly to OpenSearch via Fluent Bit's opensearch output plugin (OpenSearch - Fluent Bit: Official Manual).

But I have also read about Data Prepper being the way to go for OpenSearch to receive data.
What is the best way to proceed? Is Data Prepper necessary? My goal is to keep it as simple as possible, without too many components.

Thanks in advance for any insights.

Hey @onehigh

It depends on what you like for a log shipper. I noticed that with Data Prepper I had to open a port for it to listen on (i.e., a source). You can ship those logs directly to OpenSearch. A pipeline consists of:

One source
One or more sinks
(Optional) One buffer
(Optional) One or more processors

There's a pipeline example in the Data Prepper documentation.
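
To make that concrete, here is a minimal pipelines.yaml sketch, assuming Data Prepper's http source and opensearch sink; the port, hosts, credentials, and index name are placeholders, not from the thread:

log-pipeline:
  source:
    http:
      port: 2021               # placeholder port Data Prepper listens on
  sink:
    - opensearch:
        hosts: ["https://localhost:9200"]
        username: admin        # placeholder credentials
        password: admin
        index: application-logs   # placeholder index name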

I haven't worked with Fluent Bit all that much.

Logstash I know better: you can create multiple inputs per configuration. In the example below, all my Beats send logs to Logstash, where they get processed before hitting OpenSearch. I believe Data Prepper can do the same, but I haven't worked with it that much.

# Beats -> Logstash -> OpenSearch pipeline.
input {
  beats {
    port => 5044
    tags => [ 'beat' ]
  }
}
input {
  udp {
    port => 5144
    tags => ['syslog']
  }
}
input {
  http {
    port => 12345
    tags => ['fluent']
  }
}
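
And for completeness, the matching output stage might look like this — a sketch assuming the logstash-output-opensearch plugin, with placeholder host, credentials, and index name:

output {
  opensearch {
    hosts => ["https://localhost:9200"]
    user => "admin"                      # placeholder credentials
    password => "admin"
    index => "logstash-%{+YYYY.MM.dd}"   # daily index, placeholder name
  }
}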

I know all three (i.e., Logstash, Fluent Bit & Data Prepper) can send logs directly to OpenSearch.
As for sending directly from one of the Beats, I haven't tried that yet.

Thanks @Gsmitt

Yesterday I managed to get it working with only Fluent Bit and OpenSearch. In case it helps anybody, here is my setup:

My docker-compose.yml file:

version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:latest
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.type=single-node
      - plugins.security.ssl.http.enabled=false
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest
    container_name: opensearch-dashboards
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["http://opensearch-node1:9200"]'
    networks:
      - opensearch-net

volumes:
  opensearch-data1:

networks:
  opensearch-net:

My /etc/fluent-bit/fluent-bit.conf:

[INPUT]
    Name cpu
    Tag  cpu.local

    # Read interval (sec). Default: 1
    Interval_Sec 5

[OUTPUT]
    Name  opensearch
    Match *
    Host  127.0.0.1
    Port  9200
    Index single_index
    HTTP_User admin
    HTTP_Passwd admin
    Type cpu_metrics
    Suppress_Type_Name On
Then, to bring everything up:

cd ~/bin/opensearch
docker-compose up
# I created an OpenSearch index with this command (plain HTTP, since SSL is disabled on the HTTP layer):
curl -X PUT "http://localhost:9200/single_index?pretty" -u 'admin'
systemctl start fluent-bit

Gotchas

I was getting "broken connection to localhost:9200" from Fluent Bit, and in the OpenSearch logs I saw
"NotSslRecordException: not an SSL/TLS record", so I added plugins.security.ssl.http.enabled=false.
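
If you hit the same error, a quick sanity check that the HTTP layer is now serving plain HTTP (assuming the default admin credentials from the compose file above):

# Should return the cluster info banner over plain HTTP
curl -u admin:admin http://localhost:9200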

Then I got

Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400}

which was solved by adding

Suppress_Type_Name On

I guess the Type parameter does not need to be there anymore. Anyway, this whole thing was a bit of rough testing… 🙂
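
Since Suppress_Type_Name On makes the Type setting irrelevant, the OUTPUT section can probably be trimmed — a sketch of the same block without it:

[OUTPUT]
    Name  opensearch
    Match *
    Host  127.0.0.1
    Port  9200
    Index single_index
    HTTP_User admin
    HTTP_Passwd admin
    Suppress_Type_Name On   # keeps Fluent Bit from sending the legacy _type field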

Then data was coming in!

Hey @onehigh

Awesome 👍 and BTW thanks for sharing.

Agreed! Thanks @Gsmitt for the assist, and thanks @onehigh for coming back to let the community know about the resolution!
