ISM Policy rollover

Hello. Can you help me understand how rollover works? I have an index receiving data from Logstash, and I expect a new index to be created every 40 GB.

index_policy

{
    "policy_id": "Police_40gb",
    "description": "Police for index",
    "last_updated_time": 1627022700260,
    "schema_version": 1,
    "error_notification": null,
    "default_state": "dfindex",
    "states": [
        {
            "name": "dfindex",
            "actions": [
                {
                    "rollover": {
                        "min_size": "40gb"
                    }
                }
            ],
            "transitions": [
                {
                    "state_name": "clsindex",
                    "conditions": {
                        "min_index_age": "180d"
                    }
                }
            ]
        },
        {
            "name": "clsindex",
            "actions": [
                {
                    "close": {}
                }
            ],
            "transitions": [
                {
                    "state_name": "dltindex",
                    "conditions": {
                        "min_index_age": "180d"
                    }
                }
            ]
        },
        {
            "name": "dltindex",
            "actions": [
                {
                    "delete": {}
                }
            ],
            "transitions": []
        }
    ],
    "ism_template": null
}
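Once a policy like this is attached, you can check which state each managed index is in and whether the rollover conditions have been evaluated. In Open Distro that is the ISM explain API (on newer OpenSearch the path is `_plugins/_ism/explain`); a Dev Tools sketch:

```
GET _opendistro/_ism/explain/logstash-*
```

The response shows, per index, the policy ID, the current state, and any failed action (such as a missing rollover_alias).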

Expected Result

close  logstash-000001           LVqvx53iSRidCsVgCHwrGg            
close  logstash-000002           2cct2K4OSRmlswvQ9owwnw  
open   logstash-000003          R9qKv2SiRpae1DuT-rByMg 5 0    3827995      0     40gb     40gb
open   logstash-000004          Pk494fWeThKoXLuCIlbR_Q 5 0    3763195      0     30gb     30gb
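As the listing shows, each rollover creates the next index by incrementing the numeric suffix of the previous one. A minimal Python sketch of that naming convention (an illustration only, not OpenSearch's internal code):

```python
import re

def next_rollover_index(name: str) -> str:
    """Return the index name a rollover would create next,
    e.g. 'logstash-000001' -> 'logstash-000002'."""
    m = re.fullmatch(r"(.*)-(\d+)", name)
    if not m:
        raise ValueError(f"{name!r} does not end in a numeric suffix")
    prefix, suffix = m.group(1), m.group(2)
    # Increment the counter and pad it back to the original width.
    return f"{prefix}-{str(int(suffix) + 1).zfill(len(suffix))}"

print(next_rollover_index("logstash-000003"))  # logstash-000004
```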

logstash settings

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logstash-000001"
    user => "login"
    password => "password"
  }
}

The documentation says the index should be named, for example, logstash-000001.

OK. I set the policy's index_patterns to logstash-*, after which the policy started being applied from logstash-000002 onward. The data is still being sent to logstash-000001. Most likely the issue is in Logstash: the output needs to specify the index alias, not the index name.

An error occurred: "Missing rollover_alias index setting." Apparently the template was not applied to logstash-000002. Setting rollover_alias manually did the trick.
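For reference, setting the alias manually on an index that missed the template can look something like this (a Dev Tools sketch; `logstash` is assumed to be the rollover alias):

```
PUT logstash-000002/_settings
{
  "opendistro.index_state_management.rollover_alias": "logstash"
}
```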

PUT _template/ism_rollover
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "opendistro.index_state_management.rollover_alias": "logstash"
  }
}

This command bound rollover_alias to the index_patterns.

PUT _index_template/ism_rollover

replaced by

PUT _template/ism_rollover

It looks like in Logstash you need to specify the index alias, not the index name.
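Under that assumption, the Logstash output points at the rollover alias rather than a numbered index (a sketch based on the config above; `logstash` is the alias for logstash-000001):

```
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "logstash"   # the rollover alias, not logstash-000001
    user => "login"
    password => "password"
  }
}
```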

Steps I took to apply ISM with Logstash:

1) Logstash.conf
output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "winlogbeat-000001"
    cacert => "C:/winlog/root-ca.pem"
    user => "logstash"
    password => "password"
    ssl => true
    ssl_certificate_verification => false
    }
  }

2) Run Logstash.
3) When the index appears in Kibana, stop Logstash.
4) Create the index pattern winlogbeat-*.
5) Create the alias:

POST /_aliases
{
  "actions" : [
    { "add" : { "index" : "winlogbeat-000001", "alias" : "winlogbeat" } }
  ]
}
6) Bind rollover_alias via a template:
PUT _template/winlogbeat
{
  "index_patterns": ["winlogbeat-*"],
  "settings": {
    "opendistro.index_state_management.rollover_alias": "winlogbeat"
  }
}

7) Create the policy:

{
    "policy": {
        "policy_id": "winlogbeat",
        "description": "winlogbeat",
        "last_updated_time": 1627287699697,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "rollover",
        "states": [
            {
                "name": "rollover",
                "actions": [
                    {
                        "rollover": {
                            "min_doc_count": 10
                        }
                    }
                ],
                "transitions": []
            }
        ],
        "ism_template": {
            "index_patterns": [
                "winlogbeat-*"
            ],
            "priority": 0,
            "last_updated_time": 1627132417234
        }
    }
}
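If you prefer not to click through Kibana, the same policy can be created from Dev Tools with the ISM policy API (a sketch; on newer OpenSearch the path is `_plugins/_ism/policies/...`):

```
PUT _opendistro/_ism/policies/winlogbeat
{
  "policy": {
    "description": "winlogbeat",
    "default_state": "rollover",
    "states": [
      {
        "name": "rollover",
        "actions": [ { "rollover": { "min_doc_count": 10 } } ],
        "transitions": []
      }
    ],
    "ism_template": { "index_patterns": ["winlogbeat-*"], "priority": 0 }
  }
}
```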
8) Run Logstash again, but change the index to the alias in logstash.conf:
output {
  elasticsearch {
    index => "winlogbeat"   # the alias; the other settings stay as in step 1
  }
}
9) Create managed indices via Kibana.

10) Done.

If there is a simpler way to do this, please post it here.

I’ll share my experience with Fluentd configuration and ISM here.

1) fluentd.conf
  <store>
    @type elasticsearch
    hosts https://opensearch-node1:9200,https://opensearch-node2:9200,https://opensearch-node3:9200
    ssl_verify false
    user "admin"
    password "passpasspass"
    ###

    ### Needed options to configure ISM
    include_timestamp true
    index_name fluentd-logs-alias               # In this case (ISM), this should be the ALIAS name

    ### Rollover index configuration
    rollover_index true                         # Specify this as true when an index with rollover capability needs to be created.
    application_name  fluentd
    index_date_pattern ""                       # Specify this to override the index date pattern for creating a rollover index.
    deflector_alias    fluentd-logs-alias       # Specify the deflector alias which would be assigned to the rollover index created.
                                                # This is useful in case of using the Elasticsearch rollover API
                                                # If rollover_index is set, then this parameter will be in effect otherwise ignored.
    template_name      fluentd-template         # The name of the template to define.
  </store>

2) Run docker-compose to start an OpenSearch cluster (in my case) and OpenSearch Dashboards:
version: '3'
services:
  opensearch-node1:
    image: opensearchproject/opensearch:${OSVERSION}
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - discovery.seed_hosts=opensearch-node2,opensearch-node3
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2,opensearch-node3
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - opensearch-net
    # healthcheck:
    #   test: curl -k https://admin:admin@localhost:9200/_cluster/health | grep -vq '"status":"red"'
    #   interval: 30s
    #   timeout: 10s
    #   retries: 5

  opensearch-node2:
    image: opensearchproject/opensearch:${OSVERSION}
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node2
      - discovery.seed_hosts=opensearch-node1,opensearch-node3
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2,opensearch-node3
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data2:/usr/share/opensearch/data
    networks:
      - opensearch-net

  opensearch-node3:
    image: opensearchproject/opensearch:${OSVERSION}
    container_name: opensearch-node3
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node3
      - discovery.seed_hosts=opensearch-node1,opensearch-node2
      - cluster.initial_master_nodes=opensearch-node1,opensearch-node2,opensearch-node3
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data3:/usr/share/opensearch/data
    networks:
      - opensearch-net
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:${ODVERSION}
    container_name: opensearch-dashboards
    #depends_on: {"opensearch-node1": {"condition": "service_healthy"}}
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200","https://opensearch-node3:9200"]'
    networks:
      - opensearch-net


volumes:
  opensearch-data1:
  opensearch-data2:
  opensearch-data3:

networks:
  opensearch-net:
3) Use Dev Tools to create the policy and template, and to bootstrap a new writable index:
PUT _plugins/_ism/policies/fluentd_policy
{
  "policy": {
    "description": "fluentd hot warm workflow",
    "default_state": "hot",
    "schema_version": 1,
    "states": [
      {
        "name": "hot",
        "actions": [
          {
            "replica_count": {
                "number_of_replicas": 5
            },
            "rollover": {
                "min_index_age": "1h",
                "min_doc_count": 500
            }
          }
        ],
        "transitions": [
          {
            "state_name": "warm"
          }
        ]
      },
      {
        "name": "warm",
        "actions": [
          {
            "replica_count": {
                "number_of_replicas": 2
            }
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": ["fluentd-*"]
    }
  }
}

PUT _template/fluentd_template
{
  "index_patterns": [
    "fluentd-*"
    ],
  "settings": {
    "plugins.index_state_management.rollover_alias": "fluentd-logs-alias"
  }
}


PUT fluentd-logs-000001
{
  "aliases": {
    "fluentd-logs-alias": {
      "is_write_index": true
    }
  }
}
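To verify the bootstrap, you can confirm that fluentd-logs-000001 sits behind the alias as the write index (Dev Tools):

```
GET _alias/fluentd-logs-alias
```

The response should list fluentd-logs-000001 with `"is_write_index": true`.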

4) Create the index pattern.
5) Start fluentd.
6) Enjoy a big cup of coffee :grin:

Hope this helps someone.


Hi @aamarques ,
Is there any difference between the elasticsearch and opensearch plugins for Fluentd?
I’ve done some research but didn’t find the rollover_index configuration in the opensearch plugin docs.

Hi @saeed.kazemi
I think they diverged after Elasticsearch 7.10, but ISM depends on both the OpenSearch and Fluentd configuration.

“OpenSearch is a community-driven, open-source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2." https://opensearch.org/

Rollover index configuration

rollover_index true                         # Specify this as true when an index with rollover capability needs to be created.
application_name  fluentd
index_date_pattern ""                       # Specify this to override the index date pattern for creating a rollover index.
deflector_alias    fluentd-logs-alias       # Specify the deflector alias which would be assigned to the rollover index created.
                                            # This is useful in case of using the Elasticsearch rollover API
                                            # If rollover_index is set, then this parameter will be in effect otherwise ignored.

rollover_index is configured in fluentd.conf, just as rollover_alias belongs in filebeat.yml, and so on.
deflector_alias is deprecated in Elasticsearch but was necessary in this config.
Try to follow the setup above.


Hi again :)) @aamarques

I want to remove the index alias after rollover occurs. Is there any way to set up a policy or template that does this magic?
Thank you for your time and consideration.

Hi @saeed.kazemi
I can’t imagine any way to do it “auto-magically”, since this feature was designed to create rollover indices that can be searched by alias :slight_smile:

But you can do it via dev-tools or curl.
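For example, detaching the alias from a rolled-over index via Dev Tools (curl works the same way against the `_aliases` endpoint):

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "winlogbeat-000001", "alias": "winlogbeat" } }
  ]
}
```

Just be careful never to remove the alias from the current write index, or ingestion through the alias will stop.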

Cheers


Thank you for your reply.
Is there any problem with this approach (keeping an alias on rollover indices or closed indices)?
We search on the alias name as well.
@aamarques

If you @saeed.kazemi remove the rollover index alias, I don’t see how you would search using it :slight_smile: