Thank you everyone who attended!
Chat log:
07:49:41 From Andriy Redko to Everyone:
Apache Flink Opensearch Connector
07:50:12 From Andriy Redko to Everyone:
1.0.0-rc1
07:55:57 From Kris Freedain to Waiting Room Participants:
Good morning all - we’ll get going in a few minutes
08:01:35 From Patti Juric to Everyone:
Good morning!
08:01:36 From Joshua Bright to Everyone:
Yeeess!
08:07:59 From Kris Freedain to Everyone:
2.4.1 & 1.3.7 available!
OpenSearch and OpenSearch Dashboards 1.3.7 introduces Windows x64 distributions for OpenSearch and OpenSearch Dashboards for the 1.3 line. Available in ZIP format, the distributions allow users to deploy OpenSearch 1.3.7 directly in their Windows environment. 1.3.7 also includes bug fixes, infrastructure, and maintenance updates. Please see the Release Notes for full details. Download 1.3.7.
OpenSearch and OpenSearch Dashboards 2.4.1 includes bug fixes, infrastructure, documentation, and maint…
08:08:23 From Kris Freedain to Everyone:
Survey feedback appreciated:
https://amazonmr.au1.qualtrics.com/jfe/form/SV_1BxJNrtCo4LbweW?Source=Community
08:09:30 From Kris Freedain to Everyone:
Prometheus exporter plugins 2.4.1 & 1.3.7 released!
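For anyone who wants to sanity-check the exporter after upgrading, here is a minimal sketch that pulls the metrics endpoint, assuming the plugin's documented `/_prometheus/metrics` path on a local, unsecured cluster; host, port, and auth will differ per setup:
```java
// Minimal check that the Prometheus exporter plugin is serving metrics.
// Assumes http://localhost:9200 with no auth; adjust for your cluster.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricsCheck {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_prometheus/metrics"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Metrics come back in the Prometheus text exposition format,
        // one metric per line; print the first few as a smoke test.
        System.out.println("HTTP " + response.statusCode());
        response.body().lines().limit(10).forEach(System.out::println);
    }
}
```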
08:12:27 From Nate B. to Everyone:
Community Members++
08:13:44 From GSmith to Everyone:
How does this exporter compare with the older Elasticsearch_exporter (GitHub - prometheus-community/elasticsearch_exporter: Elasticsearch stats exporter for Prometheus) project?
08:14:47 From Amitai Stern to Everyone:
Or this one that is a fork of the above:
08:16:22 From Kris Freedain to Everyone:
Apache Flink OpenSearch Connector 1.0.0-rc1
https://cwiki.apache.org/confluence/display/FLINK/FLIP-243%3A+Dedicated+Opensearch+connectors
08:16:25 From Nate B. to Everyone:
I’m not even sure what Flink is.
08:17:26 From Ryan Paras to Everyone:
“Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.”
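For readers new to the connector, here is a minimal sketch of sinking a stream into OpenSearch, adapted from the documented `flink-connector-opensearch` API; since this is 1.0.0-rc1, verify names against the release candidate. The host, index name, and `String` payload are illustrative:
```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.connector.opensearch.sink.OpensearchSinkBuilder;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.http.HttpHost;
import org.opensearch.action.index.IndexRequest;
import org.opensearch.client.Requests;

public class FlinkOpensearchSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("alpha", "beta", "gamma");

        input.sinkTo(
                new OpensearchSinkBuilder<String>()
                        .setBulkFlushMaxActions(1) // flush every record; demo only
                        .setHosts(new HttpHost("127.0.0.1", 9200, "http"))
                        .setEmitter(
                                (element, context, indexer) ->
                                        indexer.add(createIndexRequest(element)))
                        .build());

        env.execute("opensearch-sink-demo");
    }

    // Build one index request per stream element.
    private static IndexRequest createIndexRequest(String element) {
        Map<String, Object> json = new HashMap<>();
        json.put("data", element);
        return Requests.indexRequest().index("my-index").id(element).source(json);
    }
}
```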
08:19:23 From Kris Freedain to Everyone:
[Feature Proposal] - Augmenting Visualizations with Alerts
opened 10:06PM - 19 Apr 22 UTC
enhancement
# What / Why
## What are we building, and why?
>In one or two sentences, describe the customer or community need, and what impact it has if we don’t address it.
Users who have narrowed down on something in their Dashboards are frustrated that they have to leave the context they are looking at to create an alert. The user should be able to manage alerts based on what they are looking at in context.
## What matters most to users & customers?
>Describe the most important customer benefits and needs. Highlight any research, proposals, requests, issues, forum posts, anecdotes that signal this is the right thing to build.
Users will improve accuracy and reduce cognitive load when they create alerts based on what they are looking at instead of moving to a separate plugin page. Users will also benefit from seeing an overlay of alerts fired when reviewing a visualization tied to an alert.
## What matters most to OpenSearch?
>Describe the value that this feature will bring to the OpenSearch community, partners, or the project.
Making it easier for users to set up alerts helps them feel more confident in the systems they are responsible for, which would result in higher adoption.
## What does the user experience look like?
>Describe the product feature requirements/specification. You may include low-fidelity sketches or wireframes, APIs stubs, or other examples of how a customer would use the feature. Using a bulleted list or simple diagrams to outline features is okay.
* User narrows down what they are looking at (e.g. troubleshooting a problem, setting up new service) in a visualization within Dashboards.
* Technical folk will use the Alerting plugin interface.
* Non technical will take advantage of the dashboards they are accustomed to or have built.
* The user selects the secondary menu from the visualization drop down after creating a visualization on a Dashboard.
* The user is then shown a consolidated experience of creating a new monitor/alert where the form is pre-filled based on the context. If the user decides that they want the normal monitor/alert flow, they will be sent over to that workflow with their visualization information filled in (a sketch of the kind of request this could generate appears after this list).
* Error handling will be handled within the consolidated creation view.
* Alert creation will be confirmed with a toast message on the dashboard.
* Create alert flow
* As a user, I want to be able to create an alert from a Dashboard visualization, so I can quickly create an alert based on what I’m looking at in the moment
* As a user, I want my “create alert” form to be a simple experience, so I can quickly set up an alert and not be overwhelmed by configuration options.
* As a user, I want my create alert form to be pre-filled based on what I’m looking at, so I don’t have to fight to get the right information in when setting it up.
* As a user, I want to know if there are errors when setting up a new alert, so I can fix them on the form.
* As an advanced user, I want the ability to move to the normal alert creation flow and retain all my pre-filled information, so I can set up the alert just the way I want it.
* View alerts tied to visualizations
* If an alert is tied to a visualization, it is tied to the Saved Object
* Once an alert has been created, I want to be able to see it on the visualization, so I can easily view what is happening for a context I am interested in.
* If I am looking at a visualization, I want to edit an alert’s attributes, so that I save time by not going into the plugin.
* If the visualization is refreshed with a new timescale, I want to see alerts associated with the updated timescale.
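To make the pre-filled form idea concrete, here is a sketch of the kind of request the consolidated creation view could submit to the documented Alerting API (`POST _plugins/_alerting/monitors`). The index pattern, query, and trigger below are stand-ins for values a visualization would supply:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateMonitorSketch {
    public static void main(String[] args) throws Exception {
        // Query-level monitor body in the documented Alerting format;
        // indices, query, and trigger are illustrative placeholders.
        String body = """
            {
              "type": "monitor",
              "name": "viz-prefilled-monitor",
              "monitor_type": "query_level_monitor",
              "enabled": true,
              "schedule": { "period": { "interval": 1, "unit": "MINUTES" } },
              "inputs": [{
                "search": {
                  "indices": ["logs-*"],
                  "query": { "size": 0, "query": { "match_all": {} } }
                }
              }],
              "triggers": [{
                "name": "any-hits",
                "severity": "1",
                "condition": { "script": {
                  "source": "ctx.results[0].hits.total.value > 0",
                  "lang": "painless" } },
                "actions": []
              }]
            }""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_plugins/_alerting/monitors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```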
Performance / Benchmarking
* Need to answer the following questions...
* How much additional time will it take to fetch alert finding data and what will that do to page load times?
* How will adding 100, 1000, 10000 new monitors to a cluster affect cluster performance?
* How will we log performance data for later consumption?
Administration
* Default State (on)
* Turn on/off
* As an administrator who is concerned about users creating alerts in my system, I want to turn off the feature for all users, so I can rest easy that monitors are not using critical indexing resources
* If alerts in visualizations are turned off, no users will see the feature
* Performance
* As an administrator, I want a mechanism which will tell me monitors are taking up too much compute, so I can react before things get bad on my cluster.
* As an administrator, I want to know what monitors are using inefficient queries, so I can optimize performance.
* Access Controls ([Predefined Roles](https://opensearch.org/docs/latest/security-plugin/access-control/users-roles/#predefined-roles))
* As an administrator who wants to lock down their system, I want to know what predefined roles are required for displaying alerts on visualizations on dashboards
* Either for Dashboards
* opensearch_dashboards_read_only - A special role that prevents users from making changes to visualizations, dashboards, and other OpenSearch Dashboards objects.
* opensearch_dashboards_user - Grants permissions to use OpenSearch Dashboards: cluster-wide searches, index monitoring, and write to various OpenSearch Dashboards indices.
* Either for Alerting
* alerting_ack_alerts - Grants permissions to view and acknowledge alerts, but not modify destinations or monitors.
* alerting_read_access - Grants permissions to view monitors, but not create, modify, or delete monitors.
* alerting_full_access - Grants full permissions to all alerting actions.
* As an administrator who wants to lock down their system, I want to designate users/groups who can view alerts on visualizations in dashboards
* opensearch_dashboards_read_only + alerting_ack_alerts or alerting_read_access = can view monitors created by others
* opensearch_dashboards_read_only + alerting_full_access = can view monitors created by others
* As an administrator who wants to lock down their system, I want to designate users/groups who can create alerts on visualizations in dashboards
* opensearch_dashboards_user + alerting_full_access = can create dashboards and create monitors
* Role Conflicts
* If a user who does not have alerting_full_access or alerting_read_access is given a link to a dashboard containing visualizations with alerting enabled, they will not be able to see the alerts, because they lack alerting permissions.
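As an illustration of the view-only combination above, here is a sketch that maps a hypothetical user `viewer` to both roles through the documented Security REST API (`PUT _plugins/_security/api/rolesmapping/<role>`); the admin credentials and host are placeholders:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RoleMappingSketch {
    public static void main(String[] args) throws Exception {
        String auth = "Basic " + Base64.getEncoder()
                .encodeToString("admin:admin".getBytes()); // placeholder creds
        HttpClient client = HttpClient.newHttpClient();
        // Map the hypothetical user "viewer" to both roles in the
        // read-only combination. Note: PUT replaces any existing mapping
        // for the role, so merge with current mappings in real use.
        for (String role : new String[] {
                "opensearch_dashboards_read_only", "alerting_read_access" }) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(
                            "http://localhost:9200/_plugins/_security/api/rolesmapping/" + role))
                    .header("Content-Type", "application/json")
                    .header("Authorization", auth)
                    .PUT(HttpRequest.BodyPublishers.ofString(
                            "{\"users\": [\"viewer\"]}"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(role + ": " + response.statusCode());
        }
    }
}
```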
## Not included in the initial release
* Setting up many alerts
* As someone who has to set up many alerts which are essentially the same thing, I want an easy way to duplicate an alert’s details to make it easy to create new alerts
* Cross cluster support
* Discover support
* [Unsupported aggregations/features](https://opensearch.org/docs/latest/monitoring-plugins/ad/index/#add-features-to-your-detector)
08:24:34 From Julien EL KHIATI to Everyone:
Will the security access of the alert be the same as that of the dashboard where it was created? (tenant)
08:24:55 From Kris Freedain to Everyone:
[RFC] Alert Chaining
opened 11:18PM - 12 Dec 22 UTC
closed 02:57PM - 24 Oct 23 UTC
enhancement
### Purpose
This issue talks about chaining the Monitors in the Alerting plugin - how to re-use the findings and alerts generated by one monitor in another monitor. It highlights the need for a new kind of monitor to handle more elaborate use cases and complex queries which are not catered to by the current implementation of Alerting, and proposes a solution to achieve that.
### Background
The Alerting plugin is used to monitor user data and create alerts that trigger when conditions occur in one or more indexes. Monitors are used to create triggers. Bucket level monitors create triggers whose conditions alert on data aggregated into buckets. Query level monitors support triggers whose conditions generate alerts on query criteria. Document level monitors create triggers that alert on individual documents matching query conditions. Each monitor performs a very specific and limited function.
* Users of the Alerting plugin have complex queries which are not catered to by a single monitor and need to use a combination of triggers provided by different types of monitors.
* Users need a single view of the data source during a monitor execution run, fanned out across multiple monitors, to attain finer consistency of results. (In the current implementation of the Alerting plugin we cannot achieve this, as all monitors run as individual jobs and we can’t guarantee they are looking at the same data set.)
* If monitors are chained, the findings generated by one monitor can be used as the source data for a second monitor that can do further granular analysis and generate more meaningful alerts.
* Users need the ability to define pipelines of rules and queries that will run as a single execution, internally running a sequence of monitors and generating a chain of alerts.
To achieve the above vision we would need a more abstract way to monitor data - *composite monitors*. *Composite* monitors allow users to execute delegate monitors in a sequential fashion, hence providing more control over the data seen by the monitors, which will enable the creation of more granular findings and chained alerts.
### Challenges Solved By Composite Monitors
### Chained Findings
Let’s consider the following example, where a composite monitor allows a user to chain findings to extract more value from monitor analyses and create more meaningful alerts:
* User creates Monitor 1, a document level monitor on a set of log indices with the following queries: 1. Request payload size > 100 KB. 2. Response status != 200. 3. Response contains a specific header. On every execution, the monitor queries the data and generates findings recording which documents matched which queries (a sketch of such a monitor follows the screenshot below).
* Although each of the above 3 queries is individually a useful data metric, the user deems that any documents matching all 3 queries require further analysis - say, they want to aggregate these logs by client IP and check how many such IPs there are and how many such requests are being fired from those IPs. This cannot be catered to by Monitor 1 itself and requires further processing.
* User creates Monitor 2, a bucket level monitor which analyzes the findings of the above document level monitor and creates triggers and alert notifications for said conditions.
* User creates a composite monitor which accepts a sequence of monitors to execute. The user defines the sequence to execute Monitor 1 followed by Monitor 2, where Monitor 2 accepts a config to use Monitor 1’s findings as its data set.
* This would let the user define a condition to execute a monitor based on another monitor’s findings.
<img width="1218" alt="Screen Shot 2022-12-09 at 11 29 52 AM" src="https://user-images.githubusercontent.com/6163412/207169737-7111fb6e-e194-4c03-bf73-214b10aa6cbe.png">
### Chained Alerts
Let’s consider the following use case:
* User creates Monitor 1, to monitor the CPU utilization of their service’s worker nodes. The monitor has trigger conditions to create alerts on a high CPU threshold. But high CPU may be due to background processes, high traffic, cluster instabilities, etc. Simply notifying on high CPU is not of much value to the user.
* User creates Monitor 2, to monitor the incoming request count in the same time window. The monitor has trigger conditions to create alerts on a high request count threshold. But if the cluster is able to handle the heavy traffic smoothly, there is no value in creating notifications.
* User creates a composite monitor with a sequence to run Monitor 1, Monitor 2, and a Monitor 3. Monitor 3 accepts a config to check whether alerts were created by both Monitor 1 and Monitor 2 in the current execution and, if so, create an alert and send out a notification that the service is degrading due to high traffic. This alert would help the customer scale their service and serves as a more meaningful and critical alert.
<img width="808" alt="Screen Shot 2022-12-07 at 11 32 08 AM" src="https://user-images.githubusercontent.com/6163412/207179201-8cd6409c-8245-444c-8769-8f0971fc40bf.png">
### Running multiple monitors in one execution
User simply wants to run multiple monitors as part of a composite monitor at once, to ensure all the monitors deterministically query data in the same time range, and to control the sequence of execution of the monitors. This also ensures that an entire workflow or pipeline created by users is handled by a single composite monitor (a purely illustrative sketch of such a composite follows the screenshot below).
<img width="787" alt="sequence of monitors" src="https://user-images.githubusercontent.com/6163412/207179922-781987e3-ef16-4847-b3a2-9fc305788d95.png">
08:26:32 From Nate B. to Everyone:
Oh man you had me at waffles.
08:28:20 From Ashish to Everyone:
@Julien, the association of the alert to the monitor is done at the tenant level. However, control of the alerting monitor that is created from the dashboard follows the Alerting plugin’s current security model with cluster permissions.
08:36:31 From Ryan Paras to Everyone:
Josh - you know we would be interested in chatting. Reach out
08:36:38 From Kris Freedain to Everyone:
Another from Joshua:
[META] - Augmenting Visualizations with Anomaly Detection
opened 09:18PM - 25 Apr 22 UTC
closed 10:01PM - 13 Jul 23 UTC
enhancement
Priority-High
roadmap
v2.9.0
**Is your feature request related to a problem?**
- As an unsure user who is troubleshooting a problem, I want to augment my visualization with anomaly detection, so I can quickly identify if something is problematic or not.
Resources
* [Original RFC](https://github.com/opensearch-project/anomaly-detection/issues/476)
* [Design Proposal](https://github.com/opensearch-project/OpenSearch-Dashboards/issues/1798)
Fundamental framing
* As a user who is looking at a supported time series visualization on a dashboard, I want to see anomalies on that visualization, so when I pull up my dashboard in the future I will be able to promptly identify what is in bounds and what is out of bounds.
* As a user who is looking at an anomaly view on a given visualization, I want to be able to get a summary of what the anomaly is, so I can decide if I want to dive deeper
* As a user who wants to adjust the detector, I want to choose the detector
* As a user who has set up a detector on a visualization within a dashboard, I want to be informed when detections are triggered, so I can feel confident in the system/service I’m supporting.
User Stories
* Creation
* As a user who is looking at a supported time series visualization on a dashboard, I want to see anomalies on that visualization, so when I pull up my dashboard in the future I will be able to promptly identify what is in bounds and what is out of bounds.
* Create an anomaly detector quickly based on the visualization selected on the dashboard (a sketch of such a request appears after these user stories)
* Be allowed to create advanced detectors if I’m interested
* Provide guidance on what fields mean when creating a detector to encourage learning about anomaly detection
* Encourage the user to create an alert if alerting is setup
* Error State - If the visualization contains aggregations/features which aren’t supported by the detector, inform the user as such
* As a user who has set up a detector, I want to know when I can see results of the new detector, so I can figure out if what I’m looking at is truly an anomaly
* Inform the user how long it will take to get results
* Allow the user to set a date range to review the metric(s) selected over time (set expectations on how long it will take)
* Viewing
* As a user who is looking at an anomaly view on a given visualization, I want to be able to get a summary of what the anomaly is, so I can decide if I want to dive deeper
* Editing
* As a user who wants to adjust the detector, I want to choose the detector
* As a user who has edited a visualization on a dashboard with an anomaly detector associated with it, I want the detector to automatically update, so I can reduce the steps in my workflow
* Notifications
* As a user who has set up a detector on a visualization within a dashboard, I want to be informed when detections are triggered, so I can feel confident in the system/service I’m supporting.
* Notifications are local to the visualization
* Notifications can also be for the entire dashboard being viewed
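As with alerting above, here is a sketch of the kind of request a create-detector-from-visualization flow could submit to the documented Anomaly Detection API (`POST _plugins/_anomaly_detection/detectors`); the time field, metric field, and index pattern stand in for what the visualization supplies:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateDetectorSketch {
    public static void main(String[] args) throws Exception {
        // Detector body in the documented Anomaly Detection format;
        // timestamp, cpu, and logs-* are illustrative placeholders.
        String body = """
            {
              "name": "viz-prefilled-detector",
              "description": "created from a dashboard visualization",
              "time_field": "timestamp",
              "indices": ["logs-*"],
              "feature_attributes": [{
                "feature_name": "avg_cpu",
                "feature_enabled": true,
                "aggregation_query": { "avg_cpu": { "avg": { "field": "cpu" } } }
              }],
              "detection_interval": { "period": { "interval": 10, "unit": "Minutes" } },
              "window_delay": { "period": { "interval": 1, "unit": "Minutes" } }
            }""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(
                        "http://localhost:9200/_plugins/_anomaly_detection/detectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```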
Performance / Benchmarking
* Need to answer the following questions...
* How much additional time will it take to fetch alert finding data and what will that do to page load times?
* How will adding 100, 1000, 10000 new monitors to a cluster affect cluster performance?
* How will we log performance data for later consumption?
Administration
* Default State (on)
* Turn on/off
* As an administrator who is concerned about users creating anomaly detectors in my system, I want to turn off the feature for all users, so I can rest easy that detectors are not using critical indexing resources
* If anomaly detection in visualizations is turned off, no users will see the feature
* Performance
* As an administrator, I want a mechanism which will tell me monitors are taking up too much compute, so I can react before things get bad on my cluster.
* As an administrator, I want to know what monitors are using inefficient queries, so I can optimize performance.
* Access Controls ([Predefined Roles](https://opensearch.org/docs/latest/security-plugin/access-control/users-roles/#predefined-roles))
* As an administrator who wants to lock down their system, I want to know what predefined roles are required for displaying anomaly detection on visualizations on dashboards
* Either for Dashboards
* opensearch_dashboards_read_only - A special role that prevents users from making changes to visualizations, dashboards, and other OpenSearch Dashboards objects.
* opensearch_dashboards_user - Grants permissions to use OpenSearch Dashboards: cluster-wide searches, index monitoring, and write to various OpenSearch Dashboards indices.
* Either for Anomaly Detection
* anomaly_read_access - Grants permissions to view detectors, but not create, modify, or delete detectors.
* anomaly_full_access - Grants full permissions to all anomaly detection actions.
* As an administrator who wants to lock down their system, I want to designate users/groups who can view anomaly detectors on visualizations in dashboards
* opensearch_dashboards_read_only + anomaly_read_access = can view detectors created by others
* opensearch_dashboards_read_only + anomaly_full_access = can view detectors created by others
* As an administrator who wants to lock down their system, I want to designate users/groups who can create anomaly detectors on visualizations in dashboards
* opensearch_dashboards_user + anomaly_full_access = can create dashboards and create detectors
* Role Conflicts
* If a user who does not have anomaly_full_access or anomaly_read_access is given a link to a dashboard containing visualizations with anomaly detectors enabled, they will not be able to see the detectors, because they lack anomaly detection permissions.
Not included in the initial release
* Cloning detectors/alerts
* As someone who has to set up many alerts which are essentially the same thing, I want an easy way to duplicate an alert’s details to make it easy to create new alerts
* Cross cluster support
* Anomaly detection view on Discover
* [Unsupported aggregations/features](https://opensearch.org/docs/latest/monitoring-plugins/ad/index/#add-features-to-your-detector)
* [High cardinality scenarios](https://opensearch.org/docs/latest/monitoring-plugins/ad/index/#optional-set-category-fields-for-high-cardinality)
* Provide feedback if something is detected which is not expected ([semi supervised anomaly detection](https://github.com/opensearch-project/anomaly-detection/issues/562))
08:37:52 From Chris Chua to Everyone:
May I know the name of the YouTube channel this will be in?
08:38:14 From Kris Freedain to Everyone:
OpenSearch Project is a community-driven, Apache 2.0-licensed open source search and analytics suite that makes it easy to ingest, search, visualize, and analyze data. Developers build with OpenSearch for use cases such as application search, log...
08:43:36 From William Beckler to Everyone:
That is a really useful perspective, Amitai!
08:51:11 From William Beckler to Everyone:
I think you need “Triage” GitHub powers to add/remove tags
08:52:09 From Kris Freedain to Everyone:
Yes
08:52:24 From Nate B. to Everyone:
# OpenSearch and OpenSearch Dashboards 2.4.1 Release Notes
## Release Details
OpenSearch and OpenSearch Dashboards 2.4.1 includes the following bug fixes, infrastructure, documentation, and maintenance updates.
OpenSearch [Release Notes](https://github.com/opensearch-project/OpenSearch/blob/main/release-notes/opensearch.release-notes-2.4.1.md).
OpenSearch Dashboards [Release Notes](https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/release-notes/opensearch-dashboards.release-notes-2.4.1.md).
## BUG FIXES
### OpenSearch ML Commons
* Wait for upload task to complete in security tests ([#551](https://github.com/opensearch-project/ml-commons/pull/551))
* Fix running task when reload loaded model on single node cluster ([#561](https://github.com/opensearch-project/ml-commons/pull/561))
* Change model state to UPLOADED when all chunks uploaded ([#573](https://github.com/opensearch-project/ml-commons/pull/573))
* Set model state as unloaded when call unload model API ([#580](https://github.com/opensearch-project/ml-commons/pull/580))
### OpenSearch Neural Search
This file has been truncated.
08:52:33 From Kris Freedain to Everyone:
# OpenSearch and OpenSearch Dashboards 1.3.7 Release Notes
## Release Highlights
This release introduces Windows x64 distributions for OpenSearch and OpenSearch Dashboards for the 1.3 line. Available in ZIP format, the distributions allow users to deploy OpenSearch 1.3.7 directly in their Windows environment.
## Release Details
OpenSearch and OpenSearch Dashboards 1.3.7 includes the following bug fixes, infrastructure and maintenance updates.
OpenSearch [Release Notes](https://github.com/opensearch-project/OpenSearch/blob/main/release-notes/opensearch.release-notes-1.3.7.md).
OpenSearch Dashboards [Release Notes](https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/release-notes/opensearch-dashboards.release-notes-1.3.7.md).
## ENHANCEMENTS
### OpenSearch k-NN
* Add windows support ([#583](https://github.com/opensearch-project/k-NN/pull/583))
## BUG FIXES
This file has been truncated.
08:53:59 From Daryll Swager to Everyone:
maybe we can have an update from engineering in the next meeting on this topic
08:54:36 From Ryan Paras to Everyone:
have to drop - happy holidays everyone
08:56:07 From Amitai Stern to Everyone:
Happy holidays!
08:56:24 From Daryll Swager to Everyone:
Wishing the whole community the very best as we approach the New Year
08:56:49 From Daryll Swager to Everyone:
Thanks to all for the amazing 2022 at OpenSearch.
08:56:57 From Andriy Redko to Everyone:
Thank you everyone and Happy Holidays!
08:57:49 From Lukáš Vlček (@Aiven.io ) to Everyone:
Happy New Year and enjoy Holidays!
08:58:15 From Daryll Swager to Everyone:
we’re going bigger at OpenSearchCon next year
08:58:49 From Patti Juric to Everyone:
Geez
08:58:52 From Lukáš Vlček (@Aiven.io ) to Everyone:
LOL
08:58:58 From Daryll Swager to Everyone:
groaner.