Hi,
we recently deployed Open Distro (or is it called OpenSearch now?) plus Logstash via Docker. Awesome tool. We had the Elastic Stack deployed a long time ago, but never really got to the point of actually configuring and using it in production. Now we would like to collect the logs from our Ubiquiti switches. Unfortunately, there is currently no Filebeat module that handles these directly. There is one for Cisco, but I assume the log formats differ, so we tried doing this the old-school way:
Syslog client → Central Rsyslog → Logstash → Elasticsearch.
For the switches, we have configured the UniFi controller (the central system that manages all the Ubiquiti switches) to send all logs to a central rsyslog server, and now we are struggling to find the best way to ship the logs from there to Elasticsearch. We basically followed this article to ship the logs from rsyslog to Logstash and from there to Elasticsearch:
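Roughly, the forwarding rule on the central rsyslog server looks like the following (hostname and port are placeholders for our actual Logstash endpoint):

# /etc/rsyslog.d/50-logstash.conf on the central rsyslog server
*.* action(type="omfwd"
           target="logstash.a.space.corp" port="5514" protocol="tcp")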
For testing, I set up a Linux host as an rsyslog client in addition to the UniFi system. I can see both systems sending their logs when looking at /var/log/messages on the central rsyslog server:
Aug 12 06:55:07 katello qdrouterd: 2021-08-12 06:55:07.722479 +0000 ROUTER_CORE (info) [C80731][L176131] Link attached: dir=out source={pulp.agent.bdef005b-da2d-48f7-b6b5-58a4ad88a3cc expire:sess} target={<none> expire:sess}
Aug 12 08:57:34 ubisw01.local 00aabbddeeff,US-48-500W-1.00.00+12698: switch: DOT1X: Radius authenticated in unauthenticated VLAN on interface 0/36.
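(On the Linux test client, the forwarding is just the classic one-liner; the hostname is a placeholder:

*.* @@rsyslog-central.a.space.corp:514

where @@ means TCP and a single @ would mean UDP.)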
I first tried to set up the central rsyslog server with a unique directory under /var/log/ for each rsyslog client, but there seems to be a problem with my rsyslog.conf syntax, so for now all logs just go into /var/log/messages on the central rsyslog server (a sketch of what I think the template should look like is below).
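Based on the rsyslog docs, I believe the per-host setup should look roughly like this; I have not gotten it working yet, so treat it as untested:

# /etc/rsyslog.d/10-per-host.conf
# Write each remote client's messages into its own directory
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/messages.log")

if $fromhost-ip != '127.0.0.1' then {
    action(type="omfile" dynaFile="PerHostFile")
    stop
}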
I can also see that the logs from both the Linux client and the switches arrive at Logstash when tailing the Logstash container:
docker logs -f logstash
{
"procid" => "7494",
"facility" => "daemon",
"severity" => "notice",
"@timestamp" => 2021-08-12T06:50:47.155Z,
"host" => "172.28.7.221",
"programname" => "puppet-agent",
"message" => "Applied catalog in 4.88 seconds",
"sysloghost" => "gedasvl401",
"@version" => "1"
}
{
"procid" => "-",
"facility" => "daemon",
"severity" => "notice",
"@timestamp" => 2021-08-12T09:01:40.000Z,
"host" => "172.28.7.221",
"programname" => "18e829ac1ac1,US-48-500W-5.43.35+12698",
"message" => " switch: TRAPMGR: Link Down: 0/5",
"sysloghost" => "ubi48-08.a.space.corp",
"@version" => "1"
}
But I can only see the logs from the Linux client in Kibana. My guess is that the switch logs do not follow the usual syslog RFC format and that Elasticsearch therefore refuses them, but that is just a guess. I have not yet found a way to actually see why Logstash is not submitting the switch logs to Elasticsearch.
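Two things I am planning to try (both based on the Logstash docs, not yet verified on our setup): grep the Logstash log for rejected bulk requests, since the elasticsearch output logs a warning when a document cannot be indexed, e.g.

docker logs logstash 2>&1 | grep -i "could not index event"

and enable the dead letter queue in logstash.yml, which keeps documents that Elasticsearch rejects with a 400 (e.g. mapping conflicts) so they can be inspected later:

dead_letter_queue.enable: true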
Basically, we are completely open on how to ship the logs. There is not even a requirement to ship them to rsyslog first, but there are just so many ways to do it that we are absolutely lost as to what would be the best and most efficient way. I assume that sending the logs through Logstash before shipping them to Elasticsearch could be useful, since we need to grok them.
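For reference, this is the kind of grok filter I had in mind for the switch messages; the conditional and the field names are just my guesses based on the output above:

filter {
  # Switch events: programname looks like "<mac>,<model>+<build>"
  if [programname] =~ /^[0-9a-f]{12},/ {
    grok {
      # " switch: TRAPMGR: Link Down: 0/5" -> subsystem=TRAPMGR, event=Link Down: 0/5
      match => { "message" => "^\s*switch: %{WORD:subsystem}: %{GREEDYDATA:event}" }
    }
  }
}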
Best Regards,
Oliver