Issue with audit logs rolling index and disabled categories

Versions: OpenSearch v2.14, OS Ubuntu 22.04.4 LTS

Describe the issue:

#1

I changed settings in opensearch.yml on Friday to have a weekly rolling index instead of a daily rolling index. It created the index 2024.23, and it worked for about two days (Friday, Saturday). The index 2024.24 wasn't created, and no more audit log events are being stored.

Before: there was no "plugins.security.audit.config.index" setting in opensearch.yml, and daily rolling indexes were created that I could query in OpenSearch Dashboards.

After I added:
plugins.security.audit.config.index: "'security-auditlog-'YYYY.ww"

Then I restarted OpenSearch using:
systemctl restart opensearch
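
Just to make sure the quoting is clear (the single quotes mark the literal index name prefix, and the letters outside them are the date pattern), here is the line as I have it, with a daily-style pattern commented out for comparison:

plugins.security.audit.config.index: "'security-auditlog-'YYYY.ww"
# plugins.security.audit.config.index: "'security-auditlog-'YYYY.MM.dd"   # daily-style pattern, for comparison only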

Immediately, the index "security-auditlog-2024.23" was created and worked properly in OpenSearch Dashboards: events were added to it and I could query it.

But "security-auditlog-2024.24" was never created, so audit logging isn't working anymore.

Did I do something wrong? My understanding is that there is no need to run securityadmin.sh when I modify opensearch.yml, since that file is in the main opensearch config directory rather than in the opensearch-security directory?
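
If it helps narrow this down, here is the kind of check I can run to list which audit indexes actually exist (the host and admin credentials below are placeholders for whatever your cluster uses):

curl -sk -u admin:<password> "https://localhost:9200/_cat/indices/security-auditlog-*?v"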

#2

Using the OpenSearch Dashboards GUI, I removed AUTHENTICATED and GRANTED_PRIVILEGES from the disabled categories, for both REST and TRANSPORT.

Immediately, events were stored in the current audit log index, but curiously, when I checked again in the GUI, AUTHENTICATED and GRANTED_PRIVILEGES were still listed in the disabled categories.

I applied the settings again, and this time they were gone from the GUI. All that time, events were being stored properly in the current audit log index.

I checked again this morning, and the disabled categories are back in the GUI configuration.

What did I do wrong? My understanding is that when using the GUI you do not need to restart OpenSearch or run the securityadmin.sh script? Should I instead modify the audit.yml file directly, run the securityadmin.sh script, and then restart OpenSearch?
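
In case securityadmin.sh is the right route, I assume the call would look roughly like this for an Ubuntu package install (all paths and the demo certificate names are assumptions, adjust to your admin certificates; the idea is to push only the audit configuration with -f and -t audit, which should not require a restart):

/usr/share/opensearch/plugins/opensearch-security/tools/securityadmin.sh \
  -f /etc/opensearch/opensearch-security/audit.yml -t audit \
  -icl -nhnv \
  -cacert /etc/opensearch/root-ca.pem \
  -cert /etc/opensearch/kirk.pem \
  -key /etc/opensearch/kirk-key.pem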

I'm not sure whether it's only a visual glitch this time, because due to issue #1 there are no audit logs anymore…

Here’s the current configuration in the GUI:
REST layer: Enabled
REST disabled categories: AUTHENTICATED, GRANTED_PRIVILEGES
Transport layer: Enabled
Transport disabled categories: AUTHENTICATED, GRANTED_PRIVILEGES

Here's the current configuration in audit.yml:
audit:
  # Enable/disable REST API auditing
  enable_rest: true

  # Categories to exclude from REST API auditing
  disabled_rest_categories:
    - AUTHENTICATED
    - GRANTED_PRIVILEGES

  # Enable/disable Transport API auditing
  enable_transport: true

  # Categories to exclude from Transport API auditing
  disabled_transport_categories:
    - AUTHENTICATED
    - GRANTED_PRIVILEGES
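
To see whether the GUI changes actually reached the cluster, I believe the stored audit configuration can also be read straight from the security REST API and compared with what the GUI shows (host and credentials are placeholders):

curl -sk -u admin:<password> "https://localhost:9200/_plugins/_security/api/audit?pretty"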

Configuration:
We have a single-node cluster.

Can you help me out please?

Thank you very much!

Finally, it seems to be fixed. It appears to have been caused partly by a storage issue due to a change in the NFS mount point used for shard replication, and partly by a restart that was run as the root account, which took ownership of some log files; that complicated the troubleshooting because the logs were incomplete. We will see next week whether it rolls over correctly to the next weekly audit log index.
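
For anyone who runs into something similar, these are roughly the checks that helped once we suspected ownership and storage problems (the paths and the opensearch user assume the default Ubuntu package layout, so adjust to your install):

df -h /var/lib/opensearch                      # make sure the data path is not out of space
ls -l /var/log/opensearch /var/lib/opensearch  # look for files taken over by root
chown -R opensearch:opensearch /var/log/opensearch   # restore ownership if root grabbed the log files
systemctl restart opensearch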
