Parent circuit breaker intermittent tripping on Saved Object query (and various other operations)

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
1.3.19 on Ubuntu 20.04

Describe the issue:
We are seeing intermittent errors; the logs show this:

[circuit_breaking_exception]: [parent] Data too large...

There isn’t much data in the saved objects; if I export them, the JSON file is less than 1.5 MB.
The .kibana_1 index (which presumably stores this) is green, 361 kB in size, has 256 documents, 1 primary, and no replicas (as expected).

GET .kibana_1/_settings shows it has 1 shard only.
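
The document count and on-disk size can also be cross-checked via the index stats API, e.g.:

GET .kibana_1/_stats/docs,store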

For example, GET _nodes/stats also intermittently returns this error.

We are probably missing some essential configuration.
Any hints would be appreciated. Thanks!

Configuration:

We use -Xms614m -Xmx614m
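
These flags live in jvm.options (the path depends on the install, e.g. /etc/opensearch/jvm.options for a package install). For illustration only, a larger heap would look like the sketch below; the 2g value is not a recommendation, just an example assuming the host has RAM to spare:

    # jvm.options (path depends on the install, e.g. /etc/opensearch/jvm.options)
    # Illustrative values only - keep -Xms and -Xmx equal, and at most ~50% of system RAM
    -Xms2g
    -Xmx2g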

Breakers config:

      "breakers" : {
        "request" : {
          "limit_size_in_bytes" : 375835852,
          "limit_size" : "358.4mb",
          "estimated_size_in_bytes" : 0,
          "estimated_size" : "0b",
          "overhead" : 1.0,
          "tripped" : 0
        },
        "fielddata" : {
          "limit_size_in_bytes" : 250557235,
          "limit_size" : "238.9mb",
          "estimated_size_in_bytes" : 250564,
          "estimated_size" : "244.6kb",
          "overhead" : 1.03,
          "tripped" : 0
        },
        "in_flight_requests" : {
          "limit_size_in_bytes" : 626393088,
          "limit_size" : "597.3mb",
          "estimated_size_in_bytes" : 0,
          "estimated_size" : "0b",
          "overhead" : 2.0,
          "tripped" : 0
        },
        "accounting" : {
          "limit_size_in_bytes" : 626393088,
          "limit_size" : "597.3mb",
          "estimated_size_in_bytes" : 17219800,
          "estimated_size" : "16.4mb",
          "overhead" : 1.0,
          "tripped" : 0
        },
        "parent" : {
          "limit_size_in_bytes" : 595073433,
          "limit_size" : "567.5mb",
          "estimated_size_in_bytes" : 482235072,
          "estimated_size" : "459.8mb",
          "overhead" : 1.0,
          "tripped" : 8126
        }
      },
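
The breaker figures above come from the node stats API; the breaker section can also be requested on its own, which makes it easier to watch the tripped counters over time:

GET _nodes/stats/breaker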

Memory stats:

        "mem" : {
          "total_in_bytes" : 5157343232,
          "free_in_bytes" : 250802176,
          "used_in_bytes" : 4906541056,
          "free_percent" : 5,
          "used_percent" : 95
        },

JVM stats:

jvm" : {
        "timestamp" : 1741084526136,
        "uptime_in_millis" : 65528088,
        "mem" : {
          "heap_used_in_bytes" : 482099080,
          "heap_used_percent" : 76,
          "heap_committed_in_bytes" : 626393088,
          "heap_max_in_bytes" : 626393088,
          "non_heap_used_in_bytes" : 249991864,
          "non_heap_committed_in_bytes" : 269979648,
          "pools" : {
            "young" : {
              "used_in_bytes" : 12773032,
              "max_in_bytes" : 139591680,
              "peak_used_in_bytes" : 139591680,
              "peak_max_in_bytes" : 139591680,
              "last_gc_stats" : {
                "used_in_bytes" : 2069936,
                "max_in_bytes" : 139591680,
                "usage_percent" : 1
              }
            },
            "survivor" : {
              "used_in_bytes" : 0,
              "max_in_bytes" : 17432576,
              "peak_used_in_bytes" : 17432576,
              "peak_max_in_bytes" : 17432576,
              "last_gc_stats" : {
                "used_in_bytes" : 0,
                "max_in_bytes" : 17432576,
                "usage_percent" : 0
              }
            },
            "old" : {
              "used_in_bytes" : 469356000,
              "max_in_bytes" : 469368832,
              "peak_used_in_bytes" : 469368608,
              "peak_max_in_bytes" : 469368832,
              "last_gc_stats" : {
                "used_in_bytes" : 469356000,
                "max_in_bytes" : 469368832,
                "usage_percent" : 99
              }
            }
          }
        },
        "threads" : {
          "count" : 89,
          "peak_count" : 92
        },
        "gc" : {
          "collectors" : {
            "young" : {
              "collection_count" : 2077,
              "collection_time_in_millis" : 61834
            },
            "old" : {
              "collection_count" : 8191,
              "collection_time_in_millis" : 1353485
            }
          }
        },
        "buffer_pools" : {
          "mapped" : {
            "count" : 4977,
            "used_in_bytes" : 1330023566,
            "total_capacity_in_bytes" : 1330023566
          },
          "direct" : {
            "count" : 63,
            "used_in_bytes" : 2877411,
            "total_capacity_in_bytes" : 2877410
          }
        },
        "classes" : {
          "current_loaded_count" : 24253,
          "total_loaded_count" : 25214,
          "total_unloaded_count" : 961
        }
      },
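
The old generation pool is essentially full (99% used after the last GC), and the old-collection count (8191) is very close to the parent breaker's tripped count (8126), which suggests general heap pressure rather than any single oversized request. Heap usage per node can be watched with the _cat API, e.g.:

GET _cat/nodes?v&h=name,heap.current,heap.percent,heap.max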

Indices stats:

      "indices" : {
        "docs" : {
          "count" : 10855050,
          "deleted" : 26
        },
        "store" : {
          "size_in_bytes" : 2496015204,
          "reserved_in_bytes" : 0
        },

Relevant Logs or Screenshots:

Mar 04 05:47:45 es-master opensearch-dashboards[57673]: {"type":"log","@timestamp":"2025-03-04T10:47:45Z","tags":["error","opensearch","data"],"pid":57673,"message":"[circuit_breaking_exception]: [parent] Data too large, data for [<http_request>] would be [605557056/577.5mb], which is larger than the limit of [595073433/567.5mb], real usage: [605557056/577.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=258080/252kb, in_flight_requests=0/0b, accounting=17157456/16.3mb]"}

I have just noticed that the parent circuit breaker limit is lower than some of the child breakers' limits. I don’t think that is right, e.g.

        "in_flight_requests" : {
          "limit_size" : "597.3mb",
...        },
        "accounting" : {
          "limit_size" : "597.3mb",
...        },
        "parent" : {
          "limit_size" : "567.5mb",
...
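
If I understand the defaults correctly, this part is actually expected: with the real-memory circuit breaker enabled, the parent limit defaults to 95% of the heap, while the in_flight_requests and accounting breakers default to 100%, so the parent limit will always be the smaller number (567.5mb is exactly 95% of the 597.3mb heap). The settings involved, shown here at their presumed defaults purely for reference:

    # opensearch.yml - presumed defaults, listed only to show where 95% vs 100% comes from
    indices.breaker.total.use_real_memory: true
    indices.breaker.total.limit: 95%
    network.breaker.inflight_requests.limit: 100%
    indices.breaker.accounting.limit: 100%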

Hi @wb4,

Have you checked here for performance improvement ideas/solutions:

Best,
mj