What is the best way to create a PersistentVolumeClaim with ReadWriteMany, so the volume can be attached to multiple pods?
Based on the support table in https://kubernetes.io/docs/concepts/storage/persistent-volumes, GCEPersistentDisk does not support ReadWriteMany natively.
What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or GlusterFS? Are there recommendations on what I should be using that is production-ready?
I was able to get an NFS pod deployment configured following the steps here - https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266 - however it seems a bit hacky and adds another layer of complexity. It also seems to allow only one replica (which makes sense, as the disk can't be mounted multiple times), so if/when that pod goes down, my persistent storage goes down with it.
It's possible now with Cloud Filestore.
First create a Filestore instance (the zone, tier, share name, and network below are examples; adjust them for your project):

gcloud filestore instances create nfs-server \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default"
Then create a persistent volume in GKE.
[IP_ADDRESS] is shown in the Filestore instance details.
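A minimal PersistentVolume manifest for the Filestore share might look like this (the volume name and share path `vol1` are assumptions matching the instance created above; keep [IP_ADDRESS] as your instance's IP):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver
spec:
  capacity:
    storage: 1T
  accessModes:
  - ReadWriteMany        # Filestore is NFS, so multi-pod read-write is supported
  nfs:
    path: /vol1          # the file share name chosen at instance creation
    server: [IP_ADDRESS] # from the Filestore instance details
```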
You can now request a persistent volume claim.
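A matching claim could be sketched as follows (the claim name `fileserver-claim` is an assumption; `volumeName` pins it to the pre-provisioned PV, and the empty `storageClassName` prevents dynamic provisioning from intercepting the claim):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fileserver-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: fileserver
  resources:
    requests:
      storage: 1T
```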
Finally, mount the volume in your pod.
```yaml
  containers:
  - name: mycontainer
    volumeMounts:
    - mountPath: /workdir
      name: mypvc
```
The solution is detailed here: https://cloud.google.com/filestore/docs/accessing-fileshares
I agree that it's disappointing, but it's a consequence of using Persistent Disk, which does not permit read-write attachment to multiple instances.
I've had success with NFS, albeit with the limitations you describe.
You could -- as you state -- use Gluster or similar too.
A more expensive albeit managed Google Cloud alternative is Cloud Filestore:
Your question suggests that you need NFS-like semantics but, if you don't, you may consider using Google Cloud Storage instead.