How can discovery.zen.minimum_master_nodes be unset in 2.0?

In a fresh 2.0.1 cluster, I ended up setting discovery.zen.minimum_master_nodes as part of a routine.

curl --insecure -u admin:redacted -X PUT "https://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent" : {
        "discovery.zen.minimum_master_nodes": 2
    }
}
'

The cluster initially accepted this.
However, any subsequent attempt to modify _cluster/settings is now met with this error:

{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"unknown setting [archived.discovery.zen.minimum_master_nodes] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"}],"type":"illegal_argument_exception","reason":"unknown setting [archived.discovery.zen.minimum_master_nodes] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},"status":400}

…and my change is not actually applied.

OK, no biggie, right? We’ll just remove discovery.zen.minimum_master_nodes by setting it to null, yeah?

curl --insecure -u admin:redacted -X PUT "https://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent" : {
        "discovery.zen.minimum_master_nodes": null
    }
}
'

…well, that doesn’t work. I get the same error message as before.
(I tried the same with archived.discovery.zen.minimum_master_nodes, too.)

So, I’m stuck in quite the pickle. I’ve put an “unknown” setting into _cluster/settings, and now cannot remove it, because any attempt to PUT anything to _cluster/settings throws an error about the old zen setting.

Any ideas how I might proceed? This seems like quite the catch-22.

Hey @mhoydis ,

Zen1 discovery has been deprecated for a while and was removed in 2.0; please check the respective change [1]. Thank you.

[1] https://github.com/opensearch-project/OpenSearch/pull/1216

Thanks, I understand that discovery.zen.minimum_master_nodes has been removed from OpenSearch.
However, my question is about how to repair my cluster which has accidentally received the removed setting.
The fact that I accidentally included a setting from a removed feature is irrelevant to the situation. The problem is that there doesn’t seem to be a way to remove these specific blacklisted settings from the cluster state once one happens to make its way in there.
As it is now, this seems to be a very easy way to hose a cluster. I can’t do anything with the _cluster/settings API at this point. The only path I can see forward is to manually delete the cluster state from disk… obviously less than ideal.
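For completeness, the stored value can still be confirmed with a read. A GET on _cluster/settings does not go through the update-path validation, so (as far as I can tell) it should succeed even in this state; flat_settings makes the archived key easy to spot. This assumes the same local cluster and admin credentials as the earlier commands:

```shell
# Read back the persisted cluster settings without attempting an update.
# The archived setting should appear under "persistent" as
# "archived.discovery.zen.minimum_master_nodes".
curl --insecure -u admin:redacted -X GET \
  "https://localhost:9200/_cluster/settings?flat_settings=true&pretty"
```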

Oh, got it, thank you @mhoydis. So basically the issue is that, because of this removed setting, updates to the cluster settings are not going through, correct? I think it is worth opening an issue here [1], since the migration process should not render clusters unusable because of that. Thank you.

[1] https://github.com/opensearch-project/OpenSearch/issues

Correct. Because a value for this setting has been set, no further changes to any other setting are allowed, including the change that would remove the offending value. An (apparently?) uncorrectable situation.
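For anyone landing here later: Elasticsearch, from which this settings-validation logic was forked, documents clearing archived settings with a wildcard rather than the full key. This thread does not confirm whether OpenSearch 2.0.1 accepts it in this state, but it may be worth trying before resorting to wiping the cluster state from disk:

```shell
# Attempt to clear all archived settings at once with a wildcard.
# Untested against this exact failure mode; the cluster may still
# reject the request with the same validation error.
curl --insecure -u admin:redacted -X PUT \
  "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{
    "persistent" : {
        "archived.*" : null
    }
}
'
```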