Index Policy Still initializing

After applying a policy to multiple indices, they stay in the state "Still initializing, please wait a moment". I don't know what the problem is or how to debug it.
Thanks

Do you have any messages in the Elasticsearch node logs?

Hello Tony,
you mean in /var/log/elasticsearch.log?
There are no errors in that log.

Hi @clouca,

Are you using ODFE or on the Amazon Elasticsearch Service?
Also which version are you using?

Hello @dbbaughe, I am using ODFE: Open Distro 1.8.0 with Elasticsearch 7.7.0. I have used Index Management in a previous deployment with the same versions without any problem.
thanks

And nothing in the logs that would point to an error happening? Something like failing to create the lock index, or anything else? I'm not aware of any existing bugs in that version that would make a job not run.

Is there any way to track down the problem? Maybe it is the scheduler or something else?

Btw, the only difference from my previous installation is that multi-tenancy was disabled in Kibana. Is that related?

@clouca This is happening for every managed index? None get past Initializing?
How many managed indices are you running? Is the cluster in green health?

Since you're using ODFE, you can change the job interval to every 1 minute in the cluster settings for quicker testing (see the example settings call at the end of this post). Once that's done, try applying a test policy to an index so it creates a managed index job, and confirm the following:

Get the document ID of the internal managed index job; you can _search the .opendistro-ism-config index and find it.
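For example (a rough sketch, assuming the managed index job documents are stored under a managed_index field in that config index):

GET .opendistro-ism-config/_search
{
  "query": {
    "exists": { "field": "managed_index" }
  }
}

The _id of the hit that matches your test index should be the job document ID.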

Then confirm you see a log line on one of the nodes saying the job has been scheduled, which looks like:
"Scheduling job id <job id> for index <index>."

Then wait 1 minute for it to run. If it does actually run, you should see a newly created document in the lock index (.opendistro-job-scheduler-lock), which should contain the index name and job id.
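For example, you can search the lock index directly and look for a document referencing that job id:

GET .opendistro-job-scheduler-lock/_search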

If you can find that, it confirms the job is at least running; if it is still always stuck, then it's failing on something in between. Let me know and we can go from there.
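As an example of the interval change mentioned above (assuming the ODFE cluster setting name opendistro.index_state_management.job_interval, which takes a value in minutes):

PUT _cluster/settings
{
  "transient": {
    "opendistro.index_state_management.job_interval": 1
  }
}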

Hello @dbbaughe, thanks for the detailed answer. The cluster health is red. I found an index with red health; I assume that is why the cluster health is red.
I manage 10 indices, and the red index is not one of them. Could this be the problem, before I debug further?
Thanks.

@clouca That would explain it. Every execution of a managed index job checks whether the current cluster health is red, and if it is, it skips that execution and logs a debug statement (which you'd need to set the logger to DEBUG to see).
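For reference, you can confirm the cluster status and list any red indices with the standard APIs:

GET _cluster/health

GET _cat/indices?health=red&v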

Hi,
I have the same issue.
I see a job scheduled per managed index in the log, but the status stays in the Initializing state.

I removed all policies from the indices and reapplied only one… no more success.
Thanks for your help.

Hi @dbbaughe, how do I set the logger to DEBUG?

Hi @Oli,

I believe you can do this:

PUT /_cluster/settings
{"transient":{"logger._root":"DEBUG"}} 

Are you also running your own ODFE cluster or on the managed Amazon Elasticsearch Service?

Hi, we are using ODFE 1.9.0 on 3 nodes running ES 7.8.0.
Our cluster is in green health.

So I enabled debug, but that's very verbose. What do I need to pay attention to?

When I applied a policy to an index, I saw this line:
[2020-12-10T09:41:10,144][INFO ][c.a.o.j.s.JobScheduler ] [node-1] Scheduling job id kcUSMaCxS7axT0OaTKvU8A for index .opendistro-ism-config .
…and the index is listed on the Managed Indices page, still with the status Initializing (I have been waiting for 15 minutes)!
How do I change the job interval time?

Thanks.

Hi @clouca,

did you solve your problem with the Initializing status?

Hello @Oli and @dbbaughe, I finally found the issue. It is true that if an index with red health exists, it prevents the managed indices from initializing. It doesn't matter whether you apply the policy to the "red" index or not.
Thanks for the support!

p.s. sorry for my late response


Hello, I have the same issue. In my case, all the indices are green and there are no errors in the logs. Can someone help me fix this?

Hello,

I have the exact same issue.

On one of my environments, everything works fine. If I run the following API call:
GET _opendistro/_ism/explain/bdd-requete-penalisante-io-2021-06

I get the following result:

{
  "bdd-requete-penalisante-io-2021-06" : {
    "index.opendistro.index_state_management.policy_id" : "default",
    "index" : "bdd-requete-penalisante-io-2021-06",
    "index_uuid" : "7o5nzhOUQSqK6p5vW9hRBA",
    "policy_id" : "default",
    "policy_seq_no" : 8438,
    "policy_primary_term" : 3,
    "state" : {
      "name" : "hot",
      "start_time" : 1623162547305
    },
    "action" : {
      "name" : "transition",
      "start_time" : 1623162847295,
      "index" : -1,
      "failed" : false,
      "consumed_retries" : 0,
      "last_retry_time" : 0
    },
    "retry_info" : {
      "failed" : false,
      "consumed_retries" : 0
    },
    "info" : {
      "message" : "Attempting to transition"
    }
  }
}

However, on the bad environment, I get the following response:

{
  "bdd-requete-penalisante-io-2021-06" : {
    "index.opendistro.index_state_management.policy_id" : "default",
  }
}

It is as if the job is not running. Of course, I enabled ISM, but nothing seems to work…

Best regards,

Hi @vincent2mots,

What version are you running? Is it the same for both environments?