Detector with Linux System Logs Type does not detect its rule

Hi there,

I'd like to ask for help with the following issue:

I'm running a 4-node OpenSearch cluster, version 2.15.0, on an RKE2 cluster.

Flow:
Logs are collected from multiple servers running the Auditbeat 8.13.2 agent, then sent to Logstash (which runs next to the OpenSearch deployment), and finally from Logstash into OpenSearch.

auditbeat conf:

root@jump.1:~# cat /etc/auditbeat/auditbeat.yml | grep -v "#" | grep -v "^$"
auditbeat.modules:
- module: auditd
  audit_rules: |
    -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change
    -a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b64 -S clock_settime -k time-change
    -a always,exit -F arch=b64 -S init_module -S delete_module -k modules
    -a always,exit -F arch=b32 -S init_module -S delete_module -k modules
    -a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k mounts
    -a always,exit -F arch=b64 -S mount -F auid>=500 -F auid!=4294967295 -k export
    -a always,exit -F arch=b64 -S sethostname -S setdomainname -k system-locale
    -a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod
    -a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=4294967295
    -a exit,always -F arch=b32 -S execve
    -a exit,always -F arch=b64 -S execve
    -a exclude,never -F msgtype=PATH
    -w /etc/apparmor -p wa
    -w /etc/apparmor.d -p wa
    -w /etc/anacrontab
    -w /etc/at.allow
    -w /etc/at.deny
    -w /etc/cron.allow
    -w /etc/cron.d/
    -w /etc/cron.daily
    -w /etc/cron.deny
    -w /etc/cron.hourly/
    -w /etc/cron.monthly/
    -w /etc/cron.weekly/
    -w /etc/crontab
    -w /etc/group -p wa -k identity
    -w /etc/gshadow -p wa -k identity
    -w /etc/hosts -p wa -k system-locale
    -w /etc/issue -p wa -k system-locale
    -w /etc/issue.net -p wa -k system-locale
    -w /etc/localtime -p wa -k time-change
    -w /etc/modprobe.conf
    -w /etc/network -p wa -k system-locale
    -w /etc/networks -p wa -k system-locale
    -w /etc/nsswitch.conf
    -w /etc/pam.d/
    -w /etc/passwd -p wa -k identity
    -w /etc/profile
    -w /etc/profile.d/
    -w /etc/rsyslog.conf
    -w /etc/rsyslog.d/conf
    -w /etc/security/opasswd -p wa -k identity
    -w /etc/shadow -p wa -k identity
    -w /etc/shells
    -w /etc/ssh/sshd_config -p warx -k sshd_config
    -w /etc/sudoers -p wa -k scope
    -w /etc/sudoers.d -p wa -k scope
    -w /etc/sysctl.conf
    -w /etc/syslog.conf
    -w /sbin/insmod -p x -k modules
    -w /sbin/modprobe -p x -k modules
    -w /sbin/rmmod -p x -k modules
    -w /var/log/lastlog -p wa -k logins
    -w /var/log/sudo.log -p wa -k actions
    -w /var/spool/at/
- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc
  - /var/log
  - /var/lib/docker
  exclude_files:
  - '(?i)\.sw[nop]$'
  - '~$'
  - '/\.git($|/)'
  scan_at_start: true
  scan_rate_per_sec: 50 MiB
  max_file_size: 100 MiB
  hash_types: [sha1]
  recursive: true
- module: system
  datasets:
    - host
    - login
    - user
  state.period: 12h
  user.detect_password_changes: true
  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*
output.logstash:
  hosts: ["logs.com:9400"]
  ssl.certificate: "/certs/logs.pem"
  ssl.key: "/certs/logs.key"
  ssl.certificate_authorities: ["/certs/ca.pem"]
processors:
  - add_host_metadata:
  - drop_event:
       when:
         contains:
           status: "/var/log/pods"
logging.level: info
logging.selectors: ["*"]
fields:
  server_name: "1"
  location_jump:
    lat: 49.5889
    lon: 11.0079
fields_under_root: true


Logstash conf (logstash.conf):
input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/certs/ca.crt"]
    ssl_certificate => "/certs/logs.crt"
    ssl_key => "/certs/logs.key"
    ssl_client_authentication => "required"
  }
}
filter {
    mutate {
        convert => { "user.id" => "integer" }
    }
}
output {
  opensearch {
    hosts => ["https://opensearch-cluster-master:9200"]
    ssl => 'true'
    cacert => '/certs/ca.crt'
    user => 'fluentbit'
    password => '*/*/*/*/*/*/*'
    index => "%{[host][name]}-%{+YYYY.MM.dd}"
    action => "create"
   }
}
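
One detail worth double-checking in the filter above: in Logstash, the string "user.id" refers to a top-level field literally named user.id, not the nested [user][id] field that Beats/ECS events carry, so the convert may be a no-op. A minimal sketch using bracket field-reference syntax (assuming the nested ECS layout these Auditbeat events use):

filter {
  mutate {
    # bracket syntax addresses the nested ECS field user.id
    convert => { "[user][id]" => "integer" }
  }
}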

I am receiving logs in OpenSearch, and a template, index pattern, and alias are configured for these logs.

In Security Analytics I created a detector with:

Log type:
Linux System Logs

Detection rules:
192

Threat intelligence:
Enabled

So I tried to check, for example, by activating the rule called:

id: 0f79c4d2-4e1f-4683-9c36-b5469a665e06
logsource:
  product: linux
title: Cat Sudoers
description: >-
  Detects the execution of a cat /etc/sudoers to list all users that have sudo
  rights
tags:
  - attack.reconnaissance
  - attack.t1592.004
falsepositives:
  - Legitimate administration activities
level: medium
status: test
references:
  - 'https://github.com/sleventyeleven/linuxprivchecker/'
author: Florian Roth (Nextron Systems)
detection:
  selection:
    Image|endswith:
      - /cat
      - grep
      - /head
      - /tail
      - /more
    CommandLine|contains: ' /etc/sudoers'
  condition: selection

but there are no findings/alerts in the detector.
In Discover I can see in the index that such an action has happened.

So I created a custom rule for it, based on the Discover result, using the fields that are actually present:

id: c-4GxpABBd68fWPzNMHl
logsource:
  product: linux
title: Sudoers File Access Detected via Cat Command
description: Detects when the sudoers file is accessed using the cat command
tags:
  - attack.reconnaissance
  - attack.t1592.004
falsepositives:
  - Legitimate administration activities
level: medium
status: test
references:
  - 'https://github.com/sleventyeleven/linuxprivchecker/'
author: 
detection:
  selection:
    process.title: cat /etc/sudoers
  condition: selection

So my question is:

  • Why does the "official" log type Linux System Logs not match the fields in the index generated by Auditbeat/Logstash?
    Or is there something wrong with the Auditbeat/Logstash configuration?

Thanks a lot.
L

I have not modified the rules, but I am seeing similar behavior in OpenSearch 2.13.

P.S. I have modified Logstash to add an "Image" field to my system log data.

filter {
  json { source => "message" }
  mutate { add_field => { "Image" => "%{message}" } }
}
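
A caveat on this sprintf approach, in case it helps others: if an event has no message field, Logstash stores the literal string %{message} in Image (this is visible in the document posted further down this thread). A guarded sketch:

filter {
  json { source => "message" }
  # only add Image when message actually exists; otherwise the
  # literal text "%{message}" would be stored in the new field
  if [message] {
    mutate { add_field => { "Image" => "%{message}" } }
  }
}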

Hi dragsu,

thanks

I disabled the custom rules (I did not modify any existing ones, just created 3 new custom rules for testing purposes, for chmod/chown/cat /etc/sudoers and userdel), then put these lines into logstash.conf:


input {
  beats {
    port => 5044
    ssl_enabled => true
    ssl_certificate_authorities => ["/certs/ca.crt"]
    ssl_certificate => "/certs/logs.crt"
    ssl_key => "/certs/logs.key"
    ssl_client_authentication => "required"
  }
}

filter {
  json { source => "message" }
  mutate {
    convert => { "user.id" => "integer" }
    add_field => { "Image" => "%{message}" }
  }
}

output {
  opensearch {
    hosts => ["https://opensearch-cluster-master:9200"]
    ssl => 'true'
    cacert => '/certs/ca.crt'
    user => 'fluentbit'
    password => '*/*/*/*/*/*/*'
    index => "%{[host][name]}-%{+YYYY.MM.dd}"
    action => "create"
   }
}

and redeployed the Logstash deployment, but no findings/alerts appeared.

You also wrote that you noticed this situation since 2.13.
So in version 2.12 the default rules were working fine? Should I try rolling back to 2.12 and testing it?

Thanks

Can you copy the output after the filtering step? I have not tested in v2.12.

output {
  opensearch {
  ...
  }
  stdout {
    codec => rubydebug
  }
}

Uff,
it generated a lot of logs; the log is here.

Just for info: when I enabled the custom rules again, I did not receive any alerts/findings after adding

filter {
  json { source => "message" }
  mutate { add_field => { "Image" => "%{message}" } }
}

into the Logstash conf.

thank you

I am also facing a similar kind of issue with Windows logs. Are the rules case sensitive, and does the detector search for an exact match? Any idea on this?

I am asking because when I created another rule matching the case of the stored value, the detector fired.


The log is deleted and I could not access it. Can you verify that, after the mapping update, the Image field value ends with one of

/cat
grep
/head
/tail
/more

and that there is a CommandLine field whose value contains /etc/sudoers?

OK, so I changed Logstash as you suggested, enabled debug, then refreshed the field list in my index pattern called jump*.

Within the GUI I can now see the field Image:

but no CommandLine.

After reloading Logstash, I executed cat /etc/sudoers on one jump host (wvsc).

Sorry, I had set the log retention to only one hour; here is the new one:

Within Discover I can see the sudoers record:

{
  "_index": "jump.wvsc-2024.07.26",
  "_id": "DVQW7pABBd68fWPzV86z",
  "_version": 1,
  "_score": null,
  "_source": {
    "Image": "%{message}",
    "event": {
      "kind": "event",
      "type": [
        "start"
      ],
      "category": [
        "process"
      ],
      "outcome": "success",
      "action": "executed",
      "module": "auditd"
    },
    "user": {
      "name": "root",
      "id": "0",
      "filesystem": {
        "name": "root",
        "id": "0",
        "group": {
          "id": "0",
          "name": "root"
        }
      },
      "group": {
        "id": "0",
        "name": "root"
      },
      "saved": {
        "name": "root",
        "id": "0",
        "group": {
          "id": "0",
          "name": "root"
        }
      },
      "audit": {
        "id": "10013",
        "name": "lad"
      }
    },
    "service": {
      "type": "auditd"
    },
    "tags": [
      "beats_input_raw_event"
    ],
    "auditd": {
      "session": "1329168",
      "message_type": "syscall",
      "data": {
        "syscall": "execve",
        "a0": "56220f60dc60",
        "a1": "56220f612980",
        "tty": "pts2",
        "a2": "56220f4cab50",
        "arch": "x86_64",
        "argc": "2",
        "a3": "8",
        "exit": "0"
      },
      "sequence": 543684720,
      "summary": {
        "how": "/usr/bin/cat",
        "actor": {
          "secondary": "root",
          "primary": "lad"
        },
        "object": {
          "type": "file"
        }
      },
      "result": "success"
    },
    "location_jump": {
      "lat": 49.5889,
      "lon": 11.0079
    },
    "process": {
      "name": "cat",
      "working_directory": "/root",
      "title": "cat /etc/sudoers",
      "executable": "/usr/bin/cat",
      "args": [
        "cat",
        "/etc/sudoers"
      ],
      "pid": 3831970,
      "parent": {
        "pid": 3827915
      }
    },
    "ecs": {
      "version": "8.0.0"
    },
    "@timestamp": "2024-07-26T08:09:13.035Z",
    "host": {
      "containerized": false,
      "mac": [
        "3A-50-6E-BF-68-B0",
        "3E-C9-2D-2E-B6-4C",
        "42-32-47-D8-AE-EC",
        "56-35-5D-7F-E3-F6",
        "82-BD-C2-47-49-72",
        "8E-B4-B0-E0-C3-01",
        "92-F5-2B-5E-D9-E8",
        "A6-B2-CA-78-CD-63",
        "A6-E6-4C-34-35-8D",
        "D4-F5-EF-31-B7-14",
        "D4-F5-EF-31-B7-15",
        "D4-F5-EF-31-B7-16",
        "D4-F5-EF-31-B7-17",
        "D4-F5-EF-38-B7-FC",
        "D4-F5-EF-38-B7-FD",
        "D4-F5-EF-38-B7-FE",
        "D4-F5-EF-38-B7-FF",
        "DA-94-51-5C-BD-FE",
        "EA-E6-71-A6-C7-2F",
        "FA-1E-5C-8D-D8-06",
        "FA-74-03-8B-F8-72"
      ],
      "name": "jump.wvsc",
      "os": {
        "platform": "ubuntu",
        "name": "Ubuntu",
        "kernel": "5.4.0-155-generic",
        "type": "linux",
        "codename": "focal",
        "version": "20.04.6 LTS (Focal Fossa)",
        "family": "debian"
      },
      "id": "55ccafccb08940cdab60035fae2ee88e",
      "hostname": "jump.wvsc",
      "architecture": "x86_64",
      "ip": [
        "fe80::d6f5:efff:fe31:b714",
        "10.16.14.130",
        "fe80::d6f5:efff:fe31:b715",
        "10.16.14.198",
        "fe80::d6f5:efff:fe31:b714",
        "10.82.0.9",
        "fe80::d6f5:efff:fe31:b714",
        "10.16.14.66",
        "fe80::d6f5:efff:fe31:b714",
        "10.16.14.6",
        "fe80::d6f5:efff:fe31:b714",
        "10.42.0.0",
        "10.42.0.1",
        "192.168.111.211",
        "10.16.0.226",
        "10.0.1.27"
      ]
    },
    "@version": "1",
    "server_name": "WvSC",
    "agent": {
      "id": "fc84c8cf-fd7b-457a-b784-956e8a7fc3a4",
      "name": "jump.wvsc",
      "version": "8.13.2",
      "ephemeral_id": "f4bdf083-cd7e-4dbb-9213-d3c7e0c33123",
      "type": "auditbeat"
    }
  },
  "fields": {
    "@timestamp": [
      "2024-07-26T08:09:13.035Z"
    ]
  },
  "highlight": {
    "server_name.keyword": [
      "@opensearch-dashboards-highlighted-field@WvSC@/opensearch-dashboards-highlighted-field@"
    ],
    "process.title": [
      "cat /etc/@opensearch-dashboards-highlighted-field@sudoers@/opensearch-dashboards-highlighted-field@"
    ]
  },
  "sort": [
    1721981353035
  ]
}

but it is not visible within the detector.
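
The document above shows both the problem and a possible fix: Image holds the literal text %{message} (these events have no message field), while the values the "Cat Sudoers" rule keys on live under process.executable and process.title. A hedged Logstash sketch that copies them into the Sigma-style field names (untested; field names taken from the posted document):

filter {
  # materialize the field names the Sigma rule expects
  if [process][executable] {
    mutate { copy => { "[process][executable]" => "Image" } }
  }
  if [process][title] {
    mutate { copy => { "[process][title]" => "CommandLine" } }
  }
}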

Just for clarification: we are trying to reformat/edit the logs from Auditbeat within Logstash so that the detector in OpenSearch will be able to detect findings?

Another question regarding fields:

During detector creation there are several fields that can be mapped manually.
In my case I can see:


and, as you can see in the documentation of the Linux log type, there are also:

{
      "raw_field":"Image",
      "ecs":"process.exe"
    },

and

 {
      "raw_field":"CommandLine",
      "ecs":"process.command_line"
    },

so maybe I should map the fields from my source log to the fields the detector expects, since the values in my log live under different field names than the detector needs?

Thanks.

Yes

For me, log messages are under the message field, so I added a new Image field with the value of the message field. Since your detection content is nested under the process key, can you try the below?

mutate { add_field => { "Image" => "%{process}" } }
You can also try to map log fields to the content of your data to improve the accuracy of detectors, but it is optional.
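
One caveat on the %{process} sprintf above: it serializes the whole process object into a single string, which the rule's Image|endswith patterns (/cat, /head, …) are unlikely to match. Copying the specific nested value is probably closer to what the rule expects, e.g.:

filter {
  # copy a single nested value instead of serializing the whole object
  mutate { copy => { "[process][executable]" => "Image" } }
}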

Hi Dragsu,

Well, I tried to edit the mapping within the detector (which is marked as optional), and then the detector started generating findings/alerts:

Of course, I sometimes have trouble getting an alert from a specific rule, but the detector works…

Nice! Yeah, it will detect certain things but also miss things due to field mismatches between the actual data and the Sigma rules.

I'm also new to this and unsure of the best way to tackle it. Potentially transform the logs to match what the Sigma rules are looking for?

Hi,

Well, I took one rule where the detection part is defined,
then simulated it on one monitored server:
I ran a command which should activate the detector,
then checked in Discover whether the event and logs were sent into OpenSearch (e.g. chmod/sudoers),
then copied the whole document with its fields,
pasted it, together with the unmapped fields (those that can be mapped manually within the detector), into ChatGPT,
and let it produce the most accurate mapping based on that information.
Then I did the manual mapping within the detector.

The index pattern must also be refreshed, a few times. I don't know exactly how the flow between the fields in index templates, index patterns, and the detector actually works, but it seems to be working.

As a next step I have to choose which of the Linux System Log rules are important for me, simulate them one by one, and configure alerts so I get relevant info in the end.

I am also trying, within Alerting, to create a monitor connected to the detector (Security Analytics). It seems to be a bit like a ticketing tool, and there are more action options when an alert is generated, but I do not know exactly how it works, especially how to create a correct query for a specific monitor type. I think a per-document monitor is the best type for connecting a detector and a monitor.


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.