Too Many Buckets Exception

I have a table visualization with 15 columns in total: 13 bucket columns and 2 metric columns.

The table should display around 50k–100k (1 lakh) records/rows.
Pagination is used, and the table shows 20 rows per page.

For each bucket column I need to set a different bucket size in order to get all the data; the bucket size for these columns may range from 1 to more than 10,000. While setting the bucket size for each column, I get a 'Too many buckets' exception once a certain limit is crossed (even bucket size = 100 causes a visualization error), so I am not able to show all the records in the table.
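One likely explanation (my reading of how table visualizations build their query, not something confirmed in this thread): each bucket column becomes a nested terms sub-aggregation, so the total bucket count is the *product* of the per-column sizes across all 13 columns, not the sum. Even tiny per-column sizes blow past the default limit of 65,535:

```shell
# With 13 nested bucket columns, total buckets = product of per-column sizes.
# Even size 3 per column already exceeds the 65,535 default:
echo $(( 3**13 ))
```

This would explain why a bucket size of 100 already fails: 100^13 buckets is astronomically over any configurable limit.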

How do we tackle this scenario? Are any configuration changes needed to show such a high number of records?

This limit is configurable via the search.max_buckets cluster setting. But be careful when raising that limit: the more buckets an aggregation produces, the more heap it uses.
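A sketch of raising it through the cluster settings API (host, port, and the 100000 value are placeholders for your cluster):

```shell
# Raise the bucket limit cluster-wide; "persistent" survives restarts.
# Assumes a cluster reachable at localhost:9200 — adjust host/auth as needed.
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{ "persistent": { "search.max_buckets": 100000 } }'
```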

If you need to increase the limit, make sure you properly monitor your cluster. I’m biased because I work for Sematext, but I’d recommend our OpenSearch Monitoring.

I tried increasing search.max_buckets (up to 100,000) and the heap size as well (up to 20 GB), but it's not helping. System RAM is 64 GB.
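For reference, heap is normally set in config/jvm.options. A common guideline (an assumption about your setup, not something stated in this thread) is to set -Xms and -Xmx equal, keep them at or below roughly half the system RAM, and stay under ~31 GB so compressed object pointers remain enabled:

```
# config/jvm.options — heap sizing sketch
-Xms20g
-Xmx20g
```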

The default is 65K, so maybe increasing to 100K isn’t enough…
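You can verify which value is actually in effect (default vs. what you set) with something like this (endpoint placeholder, adjust for your cluster):

```shell
# filter_path trims the response to just the search.max_buckets entries
curl -s "localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.search.max_buckets"
```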
