Cluster design for 5 TB of log data per month

Versions (relevant - OpenSearch/Dashboard/Server OS/Browser):
2.16

Describe the issue:
Hello folks,

I am setting up an OpenSearch cluster for storing and displaying log data. Daily ingest will range from 100 GB to 160 GB (3-5 TB/month). The retention period will be one month, after which the data will be deleted.

I have spent some time reading the documentation and learning about best practices. Below is the design I have in mind, along with some questions specific to my case that I could not find answers to.
Draft Design


I will be using a combination of data streams and ISM policies to get daily rolling indices that are deleted at 30 days of age. I am still not sure how I will split my log data across indices, and I wonder if there is anything I should keep in mind there.
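For concreteness, here is a rough sketch of the ISM policy and data-stream index template I am planning on. The `logs-*` naming, the shard/replica counts, and the endpoint/credentials are placeholders on my part, and I have included both the data-stream name and the `.ds-*` backing-index pattern in `ism_template` because I am not sure which one ISM matches against:

```python
import requests

OPENSEARCH = "https://localhost:9200"  # placeholder endpoint
AUTH = ("admin", "admin")              # placeholder credentials

# ISM policy: roll the write index over daily, delete backing indices at 30 days.
ism_policy = {
    "policy": {
        "description": "Daily rollover, delete after 30 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [{"rollover": {"min_index_age": "1d"}}],
                "transitions": [
                    {"state_name": "delete", "conditions": {"min_index_age": "30d"}}
                ],
            },
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Attach the policy automatically; patterns cover both the data stream
        # name and its .ds-* backing indices (assumption on my side).
        "ism_template": [{"index_patterns": ["logs-*", ".ds-logs-*"], "priority": 100}],
    }
}

# Index template that turns every "logs-*" write target into a data stream.
index_template = {
    "index_patterns": ["logs-*"],
    "data_stream": {},
    "priority": 100,
    "template": {"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
}

requests.put(f"{OPENSEARCH}/_plugins/_ism/policies/logs-rollover-delete",
             json=ism_policy, auth=AUTH, verify=False)
requests.put(f"{OPENSEARCH}/_index_template/logs-template",
             json=index_template, auth=AUTH, verify=False)
```

If this is not how people usually wire ISM to data streams, corrections are very welcome.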
Questions/Concerns:

  1. Should I consider adding dedicated coordinating nodes?
  2. Should I consider separating data nodes from dedicated master nodes?
  3. Should I favor fewer nodes with higher specs (3 nodes, each with 16 GB of memory and 4 CPUs) or more nodes with lower specs (6 nodes, each with 8 GB of memory and 2 CPUs)?
  4. I did the math for setting up a hot-warm architecture and concluded that it is not worth it. My calculation was based on the difference in cost between the two storage classes available to my K8s cluster (SSD and HDD). The rough sizing numbers I am working from are in the sketch below.
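
To put rough numbers behind questions 3 and 4, this is the back-of-the-envelope sizing I am working from (the replica count and the ~10% indexing overhead are my own assumptions):

```python
# Back-of-the-envelope storage sizing from the ingest figures above.
daily_ingest_gb = (100, 160)   # raw log volume per day
retention_days = 30
replicas = 1                   # assumption: one replica per primary shard
overhead = 1.1                 # assumption: ~10% for index structures/metadata

for daily in daily_ingest_gb:
    primary_gb = daily * retention_days * overhead
    total_gb = primary_gb * (1 + replicas)
    print(f"{daily} GB/day -> ~{primary_gb:,.0f} GB primary, "
          f"~{total_gb:,.0f} GB with {replicas} replica(s)")
```

With one replica the upper bound works out to roughly 10 TB on disk across the cluster.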

Any feedback would be truly appreciated!

Configuration:

Relevant Logs or Screenshots: