Hi everyone,
I’m currently working with AWS OpenSearch (version 2.19) and have enabled audit logging using Terraform. Below is a summary of what I’ve implemented and the issue I’m facing:
What I’ve Done:
I’ve used the following Terraform configuration to enable audit logs:
```hcl
log_publishing_options {
  log_type                 = "AUDIT_LOGS"
  cloudwatch_log_group_arn = aws_cloudwatch_log_group.audit_log_group.arn
  enabled                  = true
}
```
This setup successfully updates the domain configuration, and I can see that **audit logs are marked as enabled** in the AWS console. The CloudWatch log group is created, and IAM permissions are in place.
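For reference, this is roughly the log group and CloudWatch resource policy I have alongside it (names and retention are illustrative, not my exact values):

```hcl
resource "aws_cloudwatch_log_group" "audit_log_group" {
  name              = "/aws/opensearch/my-domain/audit-logs" # placeholder name
  retention_in_days = 30
}

# Resource policy allowing the OpenSearch service principal to write to the log group
resource "aws_cloudwatch_log_resource_policy" "audit" {
  policy_name = "opensearch-audit-logs"
  policy_document = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "es.amazonaws.com" }
      Action    = ["logs:CreateLogStream", "logs:PutLogEvents"]
      Resource  = "${aws_cloudwatch_log_group.audit_log_group.arn}:*"
    }]
  })
}
```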
---
❌ The Problem:
Despite audit logging being “enabled” via Terraform, **no audit logs are being generated in CloudWatch Logs**.
After some investigation, it appears that audit logging may still need to be **explicitly enabled or configured via the OpenSearch Dashboards UI** (e.g., choosing what events to log, setting compliance options, etc.).
I also tried using the OpenSearch provider resource `opensearch_audit_config` to configure detailed audit behavior (like excluding certain categories), but AWS OpenSearch returns:
```
Error: elastic: Error 403 (Forbidden)
```
My understanding is that AWS OpenSearch does **not expose** the `/_plugins/_security/api/audit` endpoint, which is why this resource fails.
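For context, this is a simplified sketch of what I attempted with that resource; I've trimmed it down and the attribute names should be checked against the opensearch provider docs, so treat it as an illustration rather than my exact config:

```hcl
# Sketch only: exact attribute names per the opensearch provider docs
resource "opensearch_audit_config" "this" {
  enabled = true

  audit {
    enable_rest              = true
    enable_transport         = true
    # The goal was to exclude noisy categories like these:
    disabled_rest_categories = ["AUTHENTICATED", "GRANTED_PRIVILEGES"]
  }
}
```

Applying this is what triggers the 403 above.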
---
❓ My Question:
* Is it expected that audit logging must still be **manually activated/configured via OpenSearch Dashboards** even if it is enabled via Terraform?
* Is there **any way to automate full audit logging setup** (including fine-grained options) in AWS OpenSearch via Terraform or API?
* Or is this a limitation specific to AWS-managed OpenSearch Service?
---
📎 Notes:
* Fine-grained access control is enabled.
* The `roles` and `rolesmapping` security APIs work via `curl` and a Terraform `null_resource` (a sketch of that pattern is below).
* Only the `/_plugins/_security/api/audit` endpoint is blocked (403 error).
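To illustrate, here is roughly the `null_resource` + `curl` pattern I use; the endpoint and credential variables are placeholders, not my real values:

```hcl
# Illustrative only: variable names and domain endpoint are placeholders
resource "null_resource" "security_api_check" {
  provisioner "local-exec" {
    command = <<-EOT
      # Works fine: roles API via the master user
      curl -s -u "${var.master_user}:${var.master_password}" \
        "https://${var.domain_endpoint}/_plugins/_security/api/roles"

      # Same credentials against the audit endpoint return 403 Forbidden
      curl -s -u "${var.master_user}:${var.master_password}" \
        "https://${var.domain_endpoint}/_plugins/_security/api/audit"
    EOT
  }
}
```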
---
Any guidance, best practices, or workarounds would be highly appreciated.
Thanks in advance!