[Feedback Wanted] Should previous releases be available if bug fix patch is released?

If a bug is fixed in a patch release (e.g. 1.0.1) should the previous un-patched version (e.g. 1.0.0) still be available for download?

I see pros and cons of keeping unpatched software available:

Pros

  • Continuing to make the original, unpatched release available means no surprises. The artifact is a constant.
  • There is transparency on what the unpatched version delivered.

Cons

  • Knowingly letting people download buggy software is stomach-churning.
  • Having more versions available could lead to confusion.

My gut says the project shouldn’t offer unpatched software, but I’m open to hearing other points of view.

1 Like

In my opinion, yes: if I want to go back, I should be able to. For Docker images it is an absolute must; I always specify an exact version, so my cluster would not start anymore if the older tags were removed.
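To make that failure mode concrete, here is a minimal, hypothetical Python sketch (the image name, tags, and “registry” contents are made up; a real deployment would resolve images against an actual container registry):

```python
# Hypothetical illustration of why removing published tags breaks pinned deployments.
# Image names, tags, and the "registry" below are made up for illustration only.

AVAILABLE_TAGS = {"example/opensearch": {"1.0.1"}}   # 1.0.0 was removed after the patch
PINNED_IMAGES = ["example/opensearch:1.0.0"]         # manifests pin an exact version

def resolve(image_ref):
    """Fail if the pinned tag no longer exists in the registry."""
    name, tag = image_ref.rsplit(":", 1)
    if tag not in AVAILABLE_TAGS.get(name, set()):
        # This is the failure described above: the node cannot pull its image
        # and therefore cannot start, even though its manifest never changed.
        raise RuntimeError(f"{name}:{tag} no longer exists in the registry")
    return image_ref

for ref in PINNED_IMAGES:
    try:
        resolve(ref)
    except RuntimeError as err:
        print(f"cluster node would fail to start: {err}")
```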

They could be provided in a specific, well-highlighted section of the website for unpatched versions.

OK, that makes some sense. Devil’s advocate: if your specified version had a critical flaw, do you have a mechanism to know when you need to upgrade?

Interesting idea - so you would separate patched and unpatched versions on the website?

I would also support all previous versions being available for download. There may be solutions that leverage OpenSearch which haven’t yet been updated or tested against the latest versions. Additionally, what qualifies as a “critical flaw” could be very use-case specific.

2 Likes

Arguably every piece of software is knowingly buggy.

BTW, there are some use cases where knowingly buggy software can be useful. For example, security research.

You’re not wrong! I always make that assumption about software.

I guess I could rephrase that to: “Is it responsible to keep buggy software available?” If you know it’s buggy and there is a fixed version, what should be done to ensure a user isn’t set up for failure?

The security research use case is similar to the transparency bullet point, but it’s a good, specific thought.

1 Like

“what should be done to ensure a user isn’t set up for failure?”

Clearly highlight that the version is unpatched. It could also be useful to provide some details about the bug. It might be a minor issue that users can live with, with no urgency to upgrade.

1 Like

Both you and @robcowart have highlighted something important: does the severity of a bug or flaw matter? Is there a point where the project should pull a release vs. keep it available?

They should be available because build pipelines may depend on them, and you may want to reproduce historic builds and versions (despite any bugs). And if the artifacts are published to Maven Central (and they should be!), it’s settled, because you cannot (easily) delete a release from Maven Central (for the same reason outlined above).

See also Removing an artifact from Maven Central - Stack Overflow and java - Can a Maven dependency be deleted from public use? - Stack Overflow

2 Likes

Makes sense.

On Maven: I was party to a long discussion about the immutability of Maven yesterday - the team dealing with such things is well aware.

Totally agree on this. Ecosystems where packages can too easily be removed (NPM, maybe Rubygems) are incredibly annoying in this regard. Once a package has been released, it must be available (but the idea of clearly marking it as “insecure” or similar in the download section is completely fine).

My two cents:

I like what NuGet (Microsoft’s .NET package manager) does in this regard: they have a way to “unpublish” a package, making it not appear in searches etc. But if you try to download the exact version using the right URL, you’ll still be able to get it. This, to me, is a really good compromise between “avoid breaking things for people” and “avoid people unknowingly downloading insecure, knowingly flawed software”.
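A minimal sketch of that compromise, assuming a hypothetical in-memory registry (the package name and versions are invented; this only mirrors the behaviour described, not NuGet’s actual implementation):

```python
# Sketch of the "hide from search, keep downloadable" compromise described above.
# Everything here is hypothetical; it is not NuGet's real API.

class Registry:
    def __init__(self):
        self.packages = {}  # (name, version) -> {"artifact": ..., "unlisted": bool}

    def publish(self, name, version, artifact):
        self.packages[(name, version)] = {"artifact": artifact, "unlisted": False}

    def unlist(self, name, version):
        # Hide the version from search results, but keep the artifact itself.
        self.packages[(name, version)]["unlisted"] = True

    def search(self, name):
        # Browsing only surfaces listed versions, steering new users to the fix.
        return [v for (n, v), meta in self.packages.items()
                if n == name and not meta["unlisted"]]

    def download(self, name, version):
        # An exact-version request still succeeds even if unlisted,
        # so existing builds that pin this version keep working.
        return self.packages[(name, version)]["artifact"]


registry = Registry()
registry.publish("example-package", "1.0.0", b"unpatched artifact")
registry.publish("example-package", "1.0.1", b"patched artifact")
registry.unlist("example-package", "1.0.0")  # e.g. after the fix ships in 1.0.1

print(registry.search("example-package"))             # ['1.0.1'] only
print(registry.download("example-package", "1.0.0"))  # still retrievable by exact version
```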

Will unpatched and patched OpenSearch versions speak to each other just fine? In the past (with Elasticsearch and Kibana) that wasn’t the case: it would break the cluster, or new nodes could not join. You needed the whole cluster on the same patched version to get the full benefits.

So if the unpatched version is not available, teams may not be able to scale or perform node replacements, and that could hamper their operations.

Right. That’s the idea. OpenSearch can co-exist in the same cluster with ODFE even.

An important difference between OpenSearch and Elasticsearch is that OpenSearch is pretty committed to semver - so patch versions should never break you.
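As a rough illustration of that guarantee (this isn’t OpenSearch’s actual node-join logic, just the semver idea that a patch-only difference, e.g. 1.0.0 vs. 1.0.1, should be safe to mix):

```python
# Illustrative only: under semver, versions that differ only in the patch
# component contain bug fixes, not breaking changes, so mixing them should be safe.

def parse(version):
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def patch_only_difference(a, b):
    # Same major.minor line: only the patch number differs.
    return parse(a)[:2] == parse(b)[:2]

assert patch_only_difference("1.0.0", "1.0.1")       # unpatched vs. patched: fine to mix
assert not patch_only_difference("1.0.1", "2.0.0")   # major bump: no such guarantee
```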

1 Like

Previously, we could not update Kibana to the latest patch unless all Elasticsearch nodes were on that version or higher.
The OpenSearch Dashboards plugin would have to support a mix-and-match of OpenSearch node versions; otherwise this is bound to fail.

Good call-out regarding OpenSearch Dashboards; that’s a bit of a different beast, at least when upgrading (see the upgrade blog post How To: Upgrade from Open Distro to OpenSearch · OpenSearch - mixing versions works for OpenSearch, but you’ll have to upgrade Dashboards).

Let me see if I can find someone with insight on how that will work going forward with patches.

OK - I did find out that, the way Dashboards is coupled to OpenSearch, it would need a consistent version. I can see the use case, but I think the OpenSearch Dashboards team would see it as an enhancement (feel free to open a request on the repo).

As mentioned by others, all versions must stay available once they have been published. It’s the same as with git: you don’t ever force-push to a published branch.

For security issues I’d expect a CVE (see also the Become a Mitre CNA thread).
In enterprise-y environments, various scanning tools (Snyk, GitHub Dependabot, Twistlock, etc.) are used that start issuing warnings/errors when they find artifacts with known issues (e.g. they’ll complain about the OpenSearch Docker image / .tar.gz / … if there’s a CVE for it).
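Conceptually, those scanners do something like the sketch below (the advisory data, artifact name, and CVE identifier are placeholders; real tools query curated vulnerability databases rather than a hard-coded table):

```python
# Conceptual sketch of a dependency/image scanner: compare pinned artifact
# versions against known advisories. All data below is made up for illustration.

KNOWN_ADVISORIES = {
    # artifact -> list of (affected version, advisory id)
    "opensearch": [("1.0.0", "CVE-XXXX-YYYY")],  # placeholder identifier
}

PINNED = {"opensearch": "1.0.0"}  # hypothetical versions pinned by a deployment

def scan(pinned):
    warnings = []
    for artifact, version in pinned.items():
        for affected_version, advisory in KNOWN_ADVISORIES.get(artifact, []):
            if version == affected_version:
                warnings.append(
                    f"{artifact} {version} is affected by {advisory}; "
                    f"a patched release is available"
                )
    return warnings

for warning in scan(PINNED):
    print(warning)
```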

1 Like