
Airgapped installation

This page will guide you through the process of deploying Orion in an airgapped environment with no internet access.

After following the instructions below, you will have a functioning deployment of Orion, enabling you to launch workstations on your on-prem infrastructure.

If you need to handle a large number of nodes or would like to automate the process, we maintain a set of Ansible Playbooks you can use to perform the deployments. They support both online and airgapped deployments.

The playbook repository contains instructions on how to get running with them. If you prefer to perform the installation without Ansible, continue with the guide below.

Requirements

Before you get started, you must have the following available:

  • A Kubernetes cluster. While you can choose any way of deploying Kubernetes you'd like, we recommend trying our setup guide or leveraging our Ansible playbooks
  • An OCI container image registry available and accessible from your cluster (a quick connectivity check is shown after this list)
  • Your nodes need access to your Git host, such as GitLab, GitHub Enterprise, Gitea, etc.
  • The appropriate charts and images forked into your image registry - see Requirements - what you need to mirror/fork
  • The ArgoCD helm chart downloaded to the machine you will be performing the install from.
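
Before starting, you may want to sanity-check connectivity from a cluster node. The hostnames below are the placeholders used throughout this guide; substitute your own:

kubectl get nodes
# The OCI distribution API answers on /v2/ - a 200 or 401 response means the registry is reachable.
curl -k https://your-internal-image-registry.example.com/v2/
git ls-remote https://your-git-host.example.com/juno/Genesis-Deployment.git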

Requirements - what you need to mirror/fork

Images

Before deploying, make sure you have all the images listed in our Image Guide. These must be available from your internal image registry.
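
One way to get them there is to copy each image with a tool such as skopeo: save it to an archive on an internet-connected machine, carry the archive across, and push it into your internal registry. A minimal sketch for a single image (the source reference is an assumption; use the exact references from the Image Guide):

# On an internet-connected machine: save the image to a portable archive.
skopeo copy docker://docker.io/junoinnovations/genesis:unstable oci-archive:genesis-unstable.tar
# Inside the airgapped environment: push the archive into your internal registry.
skopeo copy oci-archive:genesis-unstable.tar docker://your-internal-image-registry.example.com/junoinnovations/genesis:unstable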

Airgapped ArgoCD installation

To get ArgoCD into your environment:

1) On an internet-enabled machine, download the chart

helm repo add argo https://argoproj.github.io/argo-helm
helm pull argo/argo-cd --version 8.1.2

2) Copy it over to your airgapped environment and install it. Override the image values so they point at your internal image registry; refer to the values.yaml contained within the tarball to inspect the options you can set.

If your registry requires authentication, you will need to create an imagePullSecret in the argocd namespace. This secret should contain the credentials for your internal image registry. For details on how to create imagePullSecrets, refer to the Kubernetes documentation.
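
A minimal sketch of creating such a secret (the secret name internal-registry-pull and the credentials are placeholders):

kubectl create namespace argocd
kubectl -n argocd create secret docker-registry internal-registry-pull \
  --docker-server=your-internal-image-registry.example.com \
  --docker-username=<username> \
  --docker-password=<password>

You can then reference it through global.imagePullSecrets during the install below.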

tar -xvf argo-cd-8.1.2.tgz
cd argo-cd
# use global.imagePullSecrets if you need to authenticate to your internal registry.
helm install argo-cd . \
  --namespace argocd \
  --create-namespace \
  --set global.image.repository=your-internal-image-registry.example.com/argocd \
  --set redis.image.repository=your-internal-image-registry.example.com/your/redis
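
Once the install completes, confirm that the ArgoCD pods start and pull their images from your internal registry:

kubectl -n argocd get pods
# Pods stuck in ImagePullBackOff usually point at a wrong repository override or a missing imagePullSecret.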

Helm charts

To get Orion running in an airgapped environment, you must fork all Helm charts it depends on.

You can fork them either as Git repositories or using OCI storage, as described in the Helm documentation.

We recommend Git repositories when:

  • You want the flexibility to quickly customize
  • You want to move fast
  • You are new to airgapped K8s deployments

OCI storage is a good fit when you want a more stringent and auditable release process for the charts and would like to build on your established workflows. Many state-of-the-art approaches, such as image signing, can be reused for Helm if your requirements dictate so.

Both approaches will work to deploy Orion.
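
As an illustration of the OCI route, a chart can be mirrored into your internal registry with the Helm CLI; the chartrepo project below is a placeholder matching the example values at the bottom of this page:

helm registry login your-internal-image-registry.example.com
helm push argo-cd-8.1.2.tgz oci://your-internal-image-registry.example.com/chartrepo
# The chart can then be referenced as oci://your-internal-image-registry.example.com/chartrepo/argo-cd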

Below you can find the charts that are necessary to get running:

You will also need Juno's Bootstrap repo checked out on a host with kubectl access, the helm CLI installed, and values filled in as per its README.md.

You will need to make sure the images you ingested earlier can be pulled from your internal registry. There are two options for that:

  • Rewrite image paths such as docker.io/ to automatically use your registry. This is specific to your container runtime; you can find examples for k3s and containerd here, and a minimal k3s sketch is shown below.
  • Explicitly specify each image in the Juno-Bootstrap values file. An example is included at the bottom of this page.
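
For the rewrite option on k3s, a minimal sketch of a mirror configuration might look as follows (the file location assumes a default k3s setup, and the credentials are placeholders; restart k3s after editing):

# /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://your-internal-image-registry.example.com"
configs:
  "your-internal-image-registry.example.com":
    auth:
      username: pull-user      # placeholder credentials
      password: pull-password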

Once you have configured one of the above, go ahead and complete the install following the Juno-Bootstrap README.md.

For advanced users, managing Juno-Bootstrap with Argo on your own is possible by pointing your Application at the chart/ directory and passing in your values.
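
A sketch of what such an Application could look like, assuming the bootstrap repo is mirrored to your internal Git host and your filled-in values live inside the chart/ directory (all names and paths are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: juno-bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://your-git-host.example.com/juno/Juno-Bootstrap.git
    targetRevision: main
    path: chart
    helm:
      valueFiles:
        - values.yaml          # your filled-in airgapped values
  destination:
    server: https://kubernetes.default.svc
    namespace: default         # adjust to where Juno-Bootstrap should install its resources
  syncPolicy:
    automated: {}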

Example airgapped values

Below you can find example Juno-Bootstrap values for an airgapped deployment, when opting to explicitly specify each image rather than using a registry rewrite.

genesis:
  ### Genesis Helm repo  (REQUIRED)
  repoURL: "https://your-git-host.example.com/juno/Genesis-Deployment.git"
  ### Genesis Helm repo branch to deploy  (REQUIRED)
  version: "v1.1"
  config:
    image: genesis:unstable
    ### Container registry containing the Juno images  (REQUIRED)
    registry: your-internal-image-registry.example.com/junoinnovations
    ### Image Pull Secret (set only if your registry requires authentication)
    image_pull_secret: your-internal-registry-imagepull-secret
    ### hostname of the server: my-genesis.example.com  (REQUIRED)
    host: my-genesis.example.com
    # The rest of the Genesis configuration is intentionally omitted.
# Ingress Overrides
# https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml
ingress:
 repoURL: "https://your-internal-image-registry.example.com/chartrepo/ingress-nginx-proxy"
 version: 4.12.1
 # Map override values to the following config key
 config:
   # Overrides for pulling from your local registry.
   image_pull_secret: 
    - name: your-internal-registry-imagepull-secret
   global:
     image:
       registry: your-internal-image-registry.example.com/ingress-nginx-proxy
# GPU Operator Overrides
# https://github.com/NVIDIA/gpu-operator/blob/main/deployments/gpu-operator/values.yaml
gpu:
 repoURL: "https://your-internal-image-registry.example.com/chartrepo/nvidia-proxy"
 version: v24.9.0
 # Map override values to the following config key
 config:
   # Overrides for pulling from your local registry.
   node-feature-discovery:
     image_pull_secret:
      - name: your-internal-registry-imagepull-secret
     image:
       repository: your-internal-image-registry.example.com/ingress-nginx-proxy/nfd/node-feature-discovery
   validator:
     image_pull_secret:
      - name: your-internal-registry-imagepull-secret
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
   operator:
     image_pull_secret:
      - name: your-internal-registry-imagepull-secret
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
     initContainer:
       repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
   driver:
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
   manager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   toolkit:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/k8s"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   devicePlugin:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   dcgm:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   dcgmExporter:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/k8s"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   gfd:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   migManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   nodeStatusExporter:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   gds:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   gdrcopy:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   vgpuManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
     driverManager:
       image_pull_secret: 
        - name: your-internal-registry-imagepull-secret
       repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
   vgpuDeviceManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   vfioManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
     driverManager:
      repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
      image_pull_secret: 
        - name: your-internal-registry-imagepull-secret
   kataManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   sandboxDevicePlugin:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret
   ccManager:
     repository: "your-internal-image-registry.example.com/nvidia-proxy/nvidia/cloud-native"
     image_pull_secret: 
      - name: your-internal-registry-imagepull-secret