Hi Team,
Can anyone please help me with this?
My requirement is: use Fluentd to read a log file, send the records to OpenSearch, and create a dashboard in OpenSearch Dashboards.
Steps Followed -
Pulled the OpenSearch and OpenSearch Dashboards images, started the containers, and was able to launch both:
docker pull opensearchproject/opensearch:1.1.0
docker pull opensearchproject/opensearch-dashboards:1.1.0
http://x.x.x.x:5601/ – used to launch OpenSearch Dashboards
https://x.x.x.x:9200/ – used to reach OpenSearch
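For reference, this is roughly how I started the two containers (the network name, container names, and the single-node setting below are just what I used on my test box):

docker network create opensearch-net
docker run -d --name opensearch --net opensearch-net -p 9200:9200 -e "discovery.type=single-node" opensearchproject/opensearch:1.1.0
docker run -d --name opensearch-dashboards --net opensearch-net -p 5601:5601 -e "OPENSEARCH_HOSTS=https://opensearch:9200" opensearchproject/opensearch-dashboards:1.1.0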
Pulled the Fluentd image, started the container, and was able to run fluentd.conf:
docker pull fluent/fluentd:edge-debian
docker run -v /home/clouduser/tmp:/fluentd/etc fluent/fluentd:edge-debian -c /fluentd/etc/fluentd.conf
or
fluentd -c fluentd/etc/dummy.conf (run from inside the Fluentd container)
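To confirm that the config file at least parses, I first check it in dry-run mode from inside the container (fluentd_container is a placeholder for the running container's name):

docker exec -it fluentd_container fluentd --dry-run -c /fluentd/etc/fluentd.conf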
For Reference –
fluentd.conf data -
<source>
  @type tail
  read_from_head true
  tag dummy_log_tag
  path dummy.log
  <parse>
    @type grok
    grok_pattern %{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{SYSLOGPROG}: %{GREEDYDATA:message}
  </parse>
</source>

<match dummy_log_tag>
  @type elasticsearch
  host https://admin:admin@x.x.x.x:9200
  scheme https
  ssl_verify false
  ssl_version TLSv1_2
  logstash_format true
  logstash_prefix fluentd
  enable_ilm true
  index_date_pattern “now/m{yyyy.mm}”
  flush_interval 10s
</match>
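For context, dummy.log contains classic syslog-style lines that the grok pattern above should match. I append test lines with a command like this (the line content and path are only illustrative):

echo "Nov 11 09:00:01 myhost myprog[123]: hello from dummy.log" >> dummy.log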
Dockerfile -
FROM fluent/fluentd:edge-debian
USER root
RUN ["gem", "install", "fluent-plugin-grok-parser", "--no-document"]
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document"]
USER fluent
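I build and run the custom image roughly like this (the my-fluentd tag is just a local placeholder):

docker build -t my-fluentd .
docker run -v /home/clouduser/tmp:/fluentd/etc my-fluentd -c /fluentd/etc/fluentd.conf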
Error in output -
Could not communicate to Elasticsearch, resetting connection and trying again. no address for https (Resolv::ResolvError)
Entire output -
fluent@6833cee30b01:/$ fluentd -c fluentd/etc/fluentd.conf
2021-11-11 09:03:03 +0000 [info]: parsing config file is succeeded path="fluentd/etc/fluentd.conf"
2021-11-11 09:03:03 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.1.2'
2021-11-11 09:03:03 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2021-11-11 09:03:03 +0000 [info]: gem 'fluentd' version '1.14.2'
2021-11-11 09:03:03 +0000 [info]: 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
2021-11-11 09:03:03 +0000 [info]: Expanded the pattern %{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{SYSLOGPROG}: %{GREEDYDATA:message} into (long expanded regular expression omitted)
2021-11-11 09:03:03 +0000 [warn]: ‘pos_file PATH’ parameter is not set to a ‘tail’ source.
2021-11-11 09:03:03 +0000 [warn]: this parameter is highly recommended to save the position to resume tailing.
2021-11-11 09:03:03 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type tail
    read_from_head true
    tag "dummy_log_tag"
    path "dummy.log"
    <parse>
      @type "grok"
      grok_pattern "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{SYSLOGPROG}: %{GREEDYDATA:message}"
      unmatched_lines
    </parse>
  </source>
  <match dummy_log_tag>
    @type elasticsearch
    host "https://admin:admin@10.179.37.132:9200"
    scheme https
    ssl_verify false
    ssl_version TLSv1_2
    logstash_format true
    logstash_prefix "fluentd"
    enable_ilm true
    index_date_pattern "“now/m{yyyy.mm}”"
    flush_interval 10s
    <buffer>
      flush_interval 10s
    </buffer>
  </match>
</ROOT>
2021-11-11 09:03:03 +0000 [info]: starting fluentd-1.14.2 pid=163 ruby="2.6.8"
2021-11-11 09:03:03 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "fluentd/etc/fluentd.conf", "--under-supervisor"]
2021-11-11 09:03:04 +0000 [info]: adding match pattern="dummy_log_tag" type="elasticsearch"
2021-11-11 09:03:05 +0000 [info]: #0 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
2021-11-11 09:03:07 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for https (Resolv::ResolvError)
2021-11-11 09:03:07 +0000 [warn]: #0 Remaining retry: 14. Retry to communicate after 2 second(s).
2021-11-11 09:03:11 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for https (Resolv::ResolvError)
Thanks,
Hema