Is generating/downloading reports from the Dashboards UI just broken in Docker?

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):

v2.11.1, v2.11.1, Linux (official Docker images), Linux/Firefox

Describe the issue:

see opensearch-project/reporting#629 and opensearch-project/reporting#590

“Generating your report” spins for a bit, after which I get “Download Error; There was an error generating this report.”

From what I can see in those bug reports, I’m not the only one experiencing this, but there’s been no movement or comments on them for months.

Is there a workaround or a configuration thing that others know about that I’m missing?

Relevant Logs or Screenshots:

Hi @tlacuache
Are there any warnings or errors in OpenSearch or OpenSearch Dashboards logs? Do you see the same error for the admin user too?

Have you tried to increase CPU and RAM resources for Kibana?

By “Kibana” I assume you mean OpenSearch Dashboards? I’m not doing anything specific in my Docker setup, but I’ll see if I can hunt down some parameters to tweak to give it more resources.
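For reference, in case it helps anyone following along: with Docker Compose, resource limits for the Dashboards container can be set along these lines (the service name and the values are just illustrative, not a recommendation):

```yaml
# docker-compose.yml (sketch; service name and limits are illustrative)
services:
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.1
    mem_limit: 2g   # cap container memory
    cpus: 2.0       # cap CPU
```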

When I click to generate the report, I see:

Generating report
Preparing your file for download.
Please keep this dialog open while report is being generated.

This goes on for about a minute, at which point an error appears on the screen that says:

Download error
There was an error generating this report.

I don’t see anything in the logs for either opensearch or opensearch dashboards, error-wise or otherwise.

As far as the admin user goes, I’m not using the security plugin; at this level it’s essentially wide open.

I’ve been playing with it more, going into the create report UI. That gives me a slightly different error, which might shed some light on things.

When I try to create a report in the “create report definition” tool I get this error in my logs:

log [17:23:41.842] [error][plugins][reportsDashboards] Failed input validation for create report definition ValidationError: [report_params.core_params.base_url]: invalid relative url: /app#/view/0ad3d7c2-3441-485e-9dfe-dbb22e84e576

and this then surfaced in the access log of my NGINX reverse proxy in front of Dashboards:

- tlacuache [11/Dec/2023:17:27:32 +0000] “POST /dashboards/api/reporting/reportDefinition HTTP/1.1” 400 160 “https://dashboards.internal.lan/dashboards/app/reports-dashboards” “Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0”

0ad3d7c2-3441-485e-9dfe-dbb22e84e576 is the ID of the dashboard I have selected, which loads fine in the UI. I’m running Dashboards behind an NGINX reverse proxy, and everything else works correctly through it.

My opensearch-dashboards.yml has these possibly relevant lines in it:

server.basePath: "/dashboards"
server.rewriteBasePath: true

But like I say, everything else is working correctly. Maybe reporting has an issue with basePath/rewriteBasePath?
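For context, my proxy setup looks roughly like this (a simplified sketch, not my exact config; hostnames and ports are illustrative). With `server.rewriteBasePath: true`, Dashboards expects the base path to be present in incoming requests, so the path is preserved in `proxy_pass`:

```nginx
# Sketch of a typical proxy block for Dashboards under a base path
# (upstream name and port are illustrative)
location /dashboards/ {
    proxy_pass http://opensearch-dashboards:5601/dashboards/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```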


@tlacuache Did you have a chance to increase resources for the OpenSearch Dashboards container?
I’ve noticed that you used the commercial sample data. I had the same error in the OpenSearch Dashboards UI when creating a PDF report over all of the data with that sample set.

However, when I narrowed the time range down to 10 hours, the report was generated.

I only got this error with the commercial sample data.

Have you tried using a shorter time range in your tests?

I’m not actually using commercial sample data, I just copy/pasted that screenshot from one of the two bug issues reported on GitHub (linked in my first comment) from other people who are also seeing this problem.

My Dashboards container has plenty of resources available to it. I’ll see whether using a shorter time frame makes a difference. If I can still reproduce the problem after that, I’ll package up an entirely self-contained reproduction environment and share it here.

Still broken even with a very, very small time range. I’ll try to put together a repro environment I can ZIP up and share here.

It’s been a while, but I’ve got some time dedicated this week to dig into this and figure out what the issue is.

I’ve been spending some time on this this morning, and I’m now convinced it’s a timeout issue.

The forums don’t allow sharing video, but I uploaded a little screen grab to YouTube.

The video shows one dashboard that works as the basis for on-demand report generation, then another dashboard that fails. Both dashboards load quickly, and both cover a short time frame (the last 15 minutes). I’m not sure what the difference is, other than that the failing dashboard perhaps has more visualizations. Is there some way I can change timeout values for the reporting plugin to see if the problem lies there? Or a way to turn on internal debug logging, or something like that?
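One knob I may try in the meantime: `opensearch.requestTimeout` in opensearch-dashboards.yml raises the timeout for requests that Dashboards makes to the OpenSearch backend (the default is 30 seconds, if I recall correctly). I don’t know whether the reporting plugin actually honors this setting, so this is just a guess:

```yaml
# opensearch-dashboards.yml (sketch; value is arbitrary)
# Raises the timeout, in milliseconds, for Dashboards -> OpenSearch requests.
# Unclear whether the reporting plugin respects this.
opensearch.requestTimeout: 120000
```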

I had the idea of turning on my browser’s debug console and saw that I get a client-side error when the report fails, which I do not get when it succeeds:

#2 2ms Starting document clone with size 1604.800048828125x2334 scrolled to 0,0 reportsDashboards.plugin.js:23:269649
#2 8928ms Document cloned, using foreign object rendering reportsDashboards.plugin.js:23:269649
#2 8928ms EXPERIMENTAL ForeignObject renderer initialized (1604.800048828125x2334 at 0,0) with scale 1.25 reportsDashboards.plugin.js:23:269649
error { target: img, isTrusted: true, srcElement: img, currentTarget: img, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, composed: false, … }
    generateInContextReport https://malcolm/dashboards/6867/bundles/plugin/reportsDashboards/reportsDashboards.plugin.js:24
    generateInContextReport https://malcolm/dashboards/6867/bundles/plugin/reportsDashboards/reportsDashboards.plugin.js:24
    <anonymous> https://malcolm/dashboards/6867/bundles/plugin/reportsDashboards/reportsDashboards.plugin.js:24
    dispatch https://malcolm/dashboards/6867/bundles/osd-ui-shared-deps/osd-ui-shared-deps.js:411
    handle https://malcolm/dashboards/6867/bundles/osd-ui-shared-deps/osd-ui-shared-deps.js:411

Googling is making me wonder whether some header/buffer size limits in my NGINX reverse proxy may be the cause.
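For anyone curious, these are the kinds of NGINX directives I was looking at (values are illustrative, not tuned recommendations):

```nginx
# Sketch of the buffer/size/timeout limits I experimented with
location /dashboards/ {
    client_max_body_size 100m;   # allow larger POST bodies
    proxy_buffer_size 128k;      # larger buffer for big response headers
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_read_timeout 300s;     # give slow responses more time
}
```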

Nope, I don’t think it’s an NGINX thing. I’ve discovered that it works just fine on Google Chrome (on both Linux and macOS, which are the operating systems I have access to), but fails with Firefox (also on both platforms).