Log ingestion from Stormshield firewall

Hi there,

has anyone found a way to ingest logs from a Stormshield (SNS v4.6) firewall?

I configured syslog on the Stormshield interface with port 9200, but there’s no way to configure a user on that interface, so the logs are sent but never ingested by my Opensearch server.

I asked the Stormshield support team if I could install Filebeat, but they say it’s not possible.

If anyone has info about this, thanks for your help.

Best regards,

Stormshield would need to know how to index documents into Opensearch via port 9200 - if you can ship via syslog, you’ll need something to pick those logs up and then index them into Opensearch.
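To illustrate what “knowing how to index documents” involves: Opensearch on port 9200 speaks a JSON REST API, not syslog, so whatever picks the logs up has to wrap each log line in an indexing request. A minimal Python sketch of a bulk request body (the index name is made up):

```python
import json

# Sketch only: builds the NDJSON body a log shipper has to construct
# before it can POST anything to OpenSearch on port 9200.
# The index name "stormshield-logs" is a made-up example.
def bulk_payload(index, docs):
    """Build an _bulk request body: one action line plus one document line per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

payload = bulk_payload("stormshield-logs", [{"message": "link up", "host": "fw1"}])
# This string would be POSTed to https://<opensearch>:9200/_bulk with
# Content-Type: application/x-ndjson and valid credentials.
```

This is exactly the translation step (syslog line in, JSON indexing request out) that a shipper like Logstash or Fluentd does for you.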


I use Fluentd as a “lightweight” alternative to Logstash for this kind of thing (e.g. a Sophos appliance).

If it’s actual Syslog, you would use @type syslog as your <source>.

The output in the <match> directive would be Opensearch, pointing at the appropriate index.
Although I’d recommend the data stream output for log data.

I can supply some example config if you’re interested.


Hi jasonrojas,
thanks for the hint, I guess I’ll have to look into a solution with something like Logstash, Fluent Bit or Fluentd, as Alex below suggests.
I had hoped to be able to skip this; I was quite happy with my first Winlogbeat log ingestion, but this kind of agent can’t be deployed on the Stormshield.
Best regards,

Hi Alex,
thanks for your message. Yes, it’s the syslog protocol on the Stormshield device; one can choose between legacy, legacy_long and RFC5424.

I’ll be out of office for two weeks and won’t have time for at least a week after getting back, but then I’ll try to become familiar with solutions like Fluentd.

If I get it right, Fluentd is installed on its own server, can ingest a great variety of logs, and then sends them on to Opensearch (in my case)?

Would it be better to pass all logs, like Windows and Linux, through it? Do you know if there’s anything to be gained by using Fluentd?

Sorry, lots of questions. I’ll get back to you once I’ve installed it.

Best regards,

Hi Moma,

this might be a steep learning curve… :smile:

I use Fluentd as a universal log aggregator and pipe data into Opensearch for storage and analysis.
In the classic ELK stack that would’ve been the role of Logstash. (I prefer Fluentd because it appears much more lightweight than Logstash to me.)

Fluentd can also ingest from Elastic Beats, so you wouldn’t have to worry about any version checks on the Beats side of things.

I run Fluentd on the same machine as the Opensearch ingest node - in fact they’re in the same Docker network. When using the Docker image of Fluentd, you have to add some input/output plugins:

# Fluentd Dockerfile
FROM fluent/fluentd:edge
USER root
RUN gem install fluent-plugin-opensearch --no-document \
        && gem install fluent-plugin-beats --no-document \
        && gem install fluent-plugin-rename-key --no-document \
        && gem install fluent-plugin-dict-map --no-document \
        && gem sources --clear-all \
        && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
USER fluent
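A compose file wiring the two containers together could look roughly like the following sketch (service names, image tag and port mappings are just examples, not my exact setup):

```yaml
# docker-compose.yml (sketch; adapt names, versions and ports)
services:
  opensearch:
    image: opensearchproject/opensearch:latest
    networks: [logging]
  fluentd:
    build: .              # the Fluentd Dockerfile above
    ports:
      - "5044:5044"       # Beats input (TCP)
      - "5044:5044/udp"   # syslog input (UDP)
      - "5140:5140/udp"   # raw UDP listener
    networks: [logging]
networks:
  logging: {}
```

With both services on one network, Fluentd can reach Opensearch by the hostname opensearch on port 9200.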

This will install the Opensearch output, the Beats input and some field-modification plugins you will learn to appreciate :slightly_smiling_face:

The default syslog input port is 514, but most applications let you specify an alternate port.
If you didn’t already learn it the hard way: syslog implementations differ in more than just the RFC they use. That’s why I create multiple UDP listeners and do my own syslog parsing:

# fluentd.conf
<system>
  log_level warn
</system>

<source>
  @type beats
  port 5044
  tag winlogbeat
</source>

<source>
  @type syslog
  # in_syslog defaults to UDP, so it can share port 5044 with the TCP-based Beats input
  port 5044
  source_address_key host.ip
  severity_key log.syslog.severity.name
  facility_key log.syslog.facility.name
  message_format auto
  tag system-log
</source>

<source>
  @type udp
  port 5140
  source_address_key host.ip
  <parse>
    @type regexp
    expression ^<(?<log.syslog.priority>[0-9]+)>([0-9TZ:.-]* )?(?<host.name>[a-zA-Z0-9_.-]+) +(?<log.logger>[^:\[]*)?(\[(?<log.pid>[0-9]+)\])?\]?:? *(?<message>.*)
  </parse>
  tag linux-syslog
</source>

<filter *-syslog.**>
  @type record_transformer
  enable_ruby true
  <record>
    log.syslog.facility.code ${record["log.syslog.priority"].to_i / 8}
    log.syslog.severity.code ${record["log.syslog.priority"].to_i % 8}
  </record>
</filter>

<filter *-syslog.**>
  @type dict_map
  key_name log.syslog.severity.code
  destination_key_name log.syslog.severity.name
  dictionary { "0":"Emergency", "1":"Alert", "2":"Critical", "3":"Error", "4":"Warning", "5":"Notice", "6":"Informational", "7":"Debug" }
</filter>

<filter *-syslog.**>
  @type dict_map
  key_name log.syslog.facility.code
  destination_key_name log.syslog.facility.name
  dictionary {"0":"kernel","1":"user","2":"mail","3":"system","4":"security/authorization","5":"syslogd","6":"lpr","7":"news","8":"UUCP","9":"clock","10":"security/authorization","11":"FTP","12":"NTP","13":"audit","14":"alert","15":"clock","16":"local0","17":"local1","18":"local2","19":"local3","20":"local4","21":"local5","22":"local6","23":"local7"}
</filter>

<match {winlogbeat.**,*-syslog.**}>
  @type opensearch_data_stream
  data_stream_name ${tag[0]}
  data_stream_template_name ${tag[0]}
  hosts opensearch:9200
  scheme https
  ssl_verify false
  user fluentd
  password very_secret
  request_timeout 30s
  remove_keys _hash, _id
  validate_client_version true
  log_os_400_reason true
  <buffer tag>
    @type memory # or file
    total_limit_size 1024MB
    chunk_limit_size 16MB
    flush_mode interval
    flush_interval 30s
    flush_thread_count 4
  </buffer>
</match>

<match *.**>
  @type stdout
</match>

You will have to create an empty template in Opensearch for each tag that you define in Fluentd. The storage in Opensearch will be a data_stream using that template.
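For example, the empty template for the system-log tag could be created with a PUT to _index_template/system-log. A Python sketch of the request body (the names mirror the example config; the priority value is arbitrary):

```python
import json

# Sketch of a minimal composable index template that enables a data stream
# for one Fluentd tag. Field values here are illustrative assumptions.
def data_stream_template(tag):
    """Minimal index template body enabling a data stream for one tag."""
    return {
        "index_patterns": [tag + "*"],
        "data_stream": {},   # the presence of this object makes matching indices a data stream
        "priority": 200,     # arbitrary; just higher than any overlapping template
    }

body = json.dumps(data_stream_template("system-log"))
# body would be PUT to https://<opensearch>:9200/_index_template/system-log
```

Repeat this once per tag (winlogbeat, system-log, linux-syslog, …) before the first flush reaches Opensearch.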

If you want to see what the JSON looks like on stdout, just comment out the first match directive entirely (the conf file is processed from top to bottom).

When you’re happy to find the first log lines in Opensearch, you might iterate and change the ingestion pipeline to better match a common set of attributes (e.g. ECS).
While this can be done with ingest pipelines in Opensearch, I prefer normalization in Fluentd. As an example, I included my mapping of the syslog priority to log.syslog.severity.name and log.syslog.facility.name.
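That priority arithmetic (facility = priority / 8, severity = priority % 8) is easy to sanity-check outside Fluentd; the same decoding in a few lines of Python (field names mirror the config above):

```python
import re

# Same idea as the regexp parser plus the record_transformer filter:
# a syslog priority encodes facility * 8 + severity.
PRI_RE = re.compile(r"^<(?P<pri>\d+)>")

SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def decode_priority(line):
    """Extract the <PRI> prefix of a syslog line and split it into facility/severity."""
    m = PRI_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {"facility.code": pri // 8,
            "severity.code": pri % 8,
            "severity.name": SEVERITIES[pri % 8]}

print(decode_priority("<134>Oct 11 22:14:15 fw1 kernel: link up"))
# → {'facility.code': 16, 'severity.code': 6, 'severity.name': 'Informational'}
```

Priority 134 decodes to facility 16 (local0) and severity 6 (Informational), which matches the dict_map tables in the config.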


Hi Alex,
yes, it’s kind of a steep learning curve; I hope this works out. I was really amazed when I saw how you could organize logs, and I’ve only had a glimpse of the Opensearch Dashboards so far. I hope to get some of the security functionality working later.
Thanks a lot for your help, I’ll get back to this after my holiday.