Storage Management

The storage management page allows administrators to create, edit, and delete storage mounts inside each Orion deployment. Storage mounts are used to persist data across Orion deployments and can be used to store user data, project data, and other files. The underlying engine is the set of Kubernetes CSI drivers you have configured on your cluster. Genesis aims not only to make managing this easier, but also to add features that are difficult to achieve with the Kubernetes API alone.

workstation-creation

Persistent Volume (PV) Creation

Because Juno uses Kubernetes as its backend, you can use any kind of storage that Kubernetes supports directly or via a CSI driver. Here are a few examples of common Persistent Volume mount techniques:

  • NFS
  • iSCSI
  • HostPath
  • Many more...

Juno only mounts PVs and does not actually manage them directly unless you use the built-in "Quick Mount" setup. In our example, we will apply the following PV spec to our cluster using a simple HostPath.

my-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Ti
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  hostPath:
    path: /tmp
kubectl apply -f my-pv.yaml
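
Once applied, you can confirm the PV is registered and still unbound by checking that its STATUS column shows Available:

kubectl get pv my-pv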

Ad-Hoc Volumes

While we can add volumes ad-hoc like this, we recommend pushing your PVs to a GitOps repository in production.

Mount Creation

Click the "Create Mount" button to create a new storage mount.

storage-creation

Mount Form

You will be directed to the storage form where you can fill out the details for the new storage mount.

storage-form

Name Field

Juno will create a Kubernetes Persistent Volume Claim (PVC) for you, so you will need to provide a name for the PVC. The form validates the name you enter and ensures it is a valid Kubernetes name.
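
As a rough guide, Kubernetes names must consist of lowercase alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character. For example:

project-data      valid
Project_Data      invalid (uppercase and underscore are not allowed)
-project-data     invalid (cannot start with a dash)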

Project Field

Mounts are created per-project. This isolates mounts across namespaces and attaches the project's code to the name you provided above, which keeps the PVC names unique.

Provisioning

Provisioning Type

Provisioning type specifies how Juno should create the PVC. The options are:

  • Mount Existing Volume - Create a mount bound to an existing Volume, e.g. the HostPath PV we created above
  • Dynamically Provision - Use a StorageClass to dynamically provision storage on the fly. This is very common in cloud environments, e.g. `gp3`

In most on-prem environments, you will want to use the "Mount Existing Volume" option and specify the Persistent Volume you created to be mounted.

storage-provisioners
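
To make the distinction concrete, the two options roughly correspond to the following kinds of PVC specs. This is a simplified sketch; the exact claim Juno generates (names, labels, access modes) may differ.

# Mount Existing Volume: the claim targets a specific PV by name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-mount                # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  volumeName: my-pv             # binds directly to the PV created above
  resources:
    requests:
      storage: 10Ti
---
# Dynamically Provision: the claim names only a StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-mount                # hypothetical PVC name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 100Gi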

Mount Existing Volumes

Volume Selection

Juno will detect all unbound Persistent Volumes in your cluster and display them in a dropdown. You can select the volume you want to mount.

storage-mount-existing
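
To see the same list from the command line, unbound volumes show a STATUS of Available in the output of:

kubectl get pv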

Dynamically Provisioned Volumes

Storage Class Selection

Dynamic provisioning uses a StorageClass to provision a volume for the PVC. Genesis will display all available StorageClasses in your cluster, and you can select the one you want to use. If you do not have any StorageClasses, you will need to create one first. For common cloud providers, this is usually included in the Kubernetes distribution; for EKS on AWS, this can be `gp3`, for example. To learn more about a StorageClass available in your cluster, refer to the provider's CSI driver documentation.

storage-class
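
You can list the StorageClasses available in your cluster, including which one is marked as the default, with:

kubectl get storageclass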

Size Field

The size field specifies the size of the PVC to be created. It is required when dynamically provisioning a volume through a StorageClass, and it is only honored if the storage provider uses it.

storage-size

Mounting Options

Exclusive

By default, mounts are created as "exclusive", meaning they are expected to be consumed by a single workload in the project.

Data Protection

When an exclusive, dynamically provisioned volume is deleted, its data is deleted as well in most cases; Juno inherits whatever reclaim behavior the CSI driver sets upstream. If you need to customize this behavior, we recommend manually creating the Persistent Volume and then using the "Mount Existing Volume" option to mount it. This allows you to set the persistentVolumeReclaimPolicy to Retain and ensure that the data is not deleted when the PVC is deleted.
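
If a dynamically provisioned volume already exists and you want to keep its data, you can also switch the reclaim policy on the underlying PV directly (replace <pv-name> with the name of the bound PV):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'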

Shared

Shared mounts are treated as a shared storage location that all workstations will mount. For example, if you have software that is shared across all workstations, you can create a shared mount and then mount it to all workstations in the project. A common use case is mounting a shared NFS share across all workstations. You can also create a HostPath pass-through volume that lets you install software on the host machine and then pass it through to the workstations. This allows you to reuse existing software provisioning systems like Ansible, Puppet, or Chef to install and prepare the host, then pass that on to the containers, which drastically speeds up provisioning times and provides a "hybrid" workstation solution.

Terra will present this volume as an option during software installs via Terra Plugins. This is very useful when creating software shares or even project data.

Data Protection

Shared volumes are not deleted when the PVC is deleted. This is because they are expected to be shared across multiple workstations and projects. If you need to delete the data, you will need to do so manually. This is a common use case for shared NFS shares or HostPath pass-through volumes.

Shared Mount

Specifies whether the mount should be treated as shared or exclusive.

storage-size

Container Path

Mounts the volume into the container at the specified path (this is passed through to Terra as well). In the example below, we are mounting the volume at /my-project.

container-path

Volume Subpath

Volume Subpath specifies the location on the Volume where the mount should start. In the example below, we are mounting the my-project-share/ location on the Volume to /my-project in the container. This allows you to mount a specific directory on the Volume, rather than the entire Volume.

volume-subpath
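
In Kubernetes terms, this corresponds roughly to a container volumeMounts entry that sets both mountPath and subPath. The volume name below is hypothetical and only meant to illustrate the mapping:

volumeMounts:
  - name: my-project-volume     # hypothetical volume name
    mountPath: /my-project      # Container Path
    subPath: my-project-share   # Volume Subpath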

Reference Volume

In Kubernetes, Persistent Volumes and Persistent Volume Claims are a 1:1 match. A Persistent Volume Claim can be mounted multiple times, but this gets confusing when you want to specify multiple mounts. For example, if you have a single NFS server but want to mount it to multiple locations in the container, you end up doing a number of subpath mounts, and it can get very messy very quickly. It also limits mount points: because Persistent Volume Claims are namespaced, you would need a separate Persistent Volume Claim for each project, along with a matching Persistent Volume for each claim.

Genesis allows you to "reference" an existing Persistent Volume. When you do this, Genesis creates a duplicated Volume that matches the original Persistent Volume and tags it with a UUID as well as the project name. This allows you to mount the same Persistent Volume to multiple locations in the container without having to create multiple Persistent Volume Claims. It also allows you to mount the same reference volume in multiple projects across namespaces.

reference-volume
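
Purely as an illustrative sketch, referencing an existing NFS volume like the test-nfs example below might produce a duplicate PV shaped roughly like this; the generated name, UUID format, and label keys are internal to Genesis and are shown here only as placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-nfs-ref-3f2a9c          # hypothetical generated name with a UUID suffix
  labels:
    project: my-project              # hypothetical project tag
spec:
  capacity:
    storage: 10Ti
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /
    server: nfs-server.example.com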

Real World Examples

Shared NFS Volume

In this example, we will mount an existing NFS server that has the address nfs-server.example.com and the path /. We have installed a number of shared apps on that NFS server at the path /shared-apps. We want to mount these apps to each workstation in the project at the /apps folder inside the container.

The container mount path will resolve to the following.

nfs-server.example.com/shared-apps/ -> /apps/

Existing Volume   Subpath        Container Path
test-nfs          /shared-apps   /apps

You can think of this in more familiar terms.

Server                   Network Share   Container Path
nfs-server.example.com   /shared-apps    /apps

  1. Create a Persistent Volume using NFS

    test-nfs-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: test-nfs
    spec:
      capacity:
        storage: 10Ti
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: nfs
      nfs:
        path: /
        server: nfs-server.example.com
    

    kubectl apply -f test-nfs-pv.yaml

  2. Create a new mount using the below example values.

    reference-volume

  3. The mount will be displayed in the storage table with REF in the name, and it will already be bound. It will show that the path is shared, as well as where the container path is mounted.

    storage-nfs-table

Host Pass-Through Volume

In this example, we have installed a number of tools on all servers that will be running our workstations. These tools are installed on the host at /opt/luna-tools. They are geared toward AI and data science workloads, and we want to mount them to each workstation in the project at the /opt/luna-tools folder inside the container. This helps keep our workstation containers small and lightweight, while still allowing us to use the tools on the host machine.

This can also be used to pass through existing software or even VFX/GFX pipelines. Many companies already have existing software provisioning systems in place, such as Ansible, Puppet, or Chef. By using a HostPath pass-through volume, you can reuse these systems to install and prepare the host, then pass that on to the containers. This drastically speeds up provisioning times and provides a "hybrid" workstation solution. This also provides a clear path for migration without having to fully ditch existing software provisioning systems.

The container mount path will resolve to the following.

(host)/opt/luna-tools -> /opt/luna-tools

Existing Volume   Subpath           Container Path
luna              /opt/luna-tools   /opt/luna-tools

You can think of this in more familiar terms.

Server   Bind Mount        Container Path
host     /opt/luna-tools   /opt/luna-tools

  1. Create a Persistent Volume using HostPath

    luna-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: luna
    spec:
      capacity:
        storage: 10Ti
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: standard
      hostPath:
        path: /opt/luna-tools
    

    kubectl apply -f luna-pv.yaml

  2. Create a new mount using the below example values.

    reference-volume

  3. The mount will be displayed in the storage table with REF in the name, and it will already be bound. It will show where the container path is mounted.

    storage-luna-table

Exclusive Database Volume

In this example, we want to create a database volume that is set to be exclusive, meaning it will be consumed by a single container somewhere downstream. This is mainly used with Terra Plugins. In our case, we will create a volume that will be used as a Prometheus block device.

We are going to use the GP2 provisioner from AWS EKS, but you can use any provisioner that can dynamically provision block devices.

GP2

The GP2 storage class is part of the EBS CSI driver that ships with EKS by default. Depending on your deployment environment, you will see different classes here. Please refer to your CSI driver's docs to see what is available.

  1. Create a new mount using the below example values.

    storage-prometheus-example

  2. The mount will be displayed with an orange wire icon, meaning it is a 1:1 relationship. It will remain in a "Pending" state because it is an exclusive mount and will not be bound until a container requests it.

    storage-prometheus-table

Qumulo Cloud Fabric (Global Storage)

In this example, we have deployed a Qumulo Cloud Data Fabric cluster across multiple locations, both in the cloud and on-prem. In all locations, we have deployed Juno and intend to have a single project that spans all locations with unified storage and high-performance workstations that scale. This gives us full location failover and disaster recovery, as well as access to talent across the globe.

ATA Media

Juno Innovations' sister company, ATA Media, uses this exact setup to do global VFX for high-end commercial clients while accessing talent across the globe. This allows them to run a single project that spans multiple locations with a unified storage solution that is fast and reliable.

Because Qumulo can be mounted via NFS, we can use the same NFS mount across all locations.

The container mount path will resolve to the following.

qumulo-nfs.example.com/shared-apps/ -> /apps/

Existing Volume   Subpath        Container Path
qumulo-nfs        /shared-apps   /apps

You can think of this in more familiar terms.

Server                   Network Share   Container Path
qumulo-nfs.example.com   /shared-apps    /apps

  1. Create a Persistent Volume using NFS

    qumulo-nfs-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: qumulo-nfs
    spec:
      capacity:
        storage: 10Ti
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: nfs
      nfs:
        path: /
        server: qumulo-nfs.example.com
    

    kubectl apply -f qumulo-nfs-pv.yaml

  2. Create a new mount using the below example values.

    reference-volume

  3. The mount will be displayed in the storage table with REF in the name, and it will already be bound. It will show that the path is shared, as well as where the container path is mounted.

    storage-nfs-table