
System Requirements

This guide will walk you through the necessary steps to prepare your Kubernetes cluster for a production-ready Orion environment. Proper cluster preparation ensures optimal performance and reliability.

Prerequisites

Hardware Requirements

The following table outlines the minimum hardware requirements for a standard Orion installation:

| Server Role | Count | CPU | RAM | Purpose |
| --- | --- | --- | --- | --- |
| Support | 2 | 4 cores | 16 GB | Runs core services required by Orion |
| Workstation | 1 | 4 cores | 16 GB | Handles workstation tasks |
| Headless | 1 | 4 cores | 16 GB | Processes headless workloads |

Note: These are minimum requirements. For production environments or larger workloads, we recommend scaling up resources accordingly.

Kubernetes Requirements

| Component | Requirement | Notes |
| --- | --- | --- |
| Kubernetes version | 1.27+ | Earlier versions may not be fully supported |
| Recommended distribution | K3s | Other distributions should work but may require additional configuration |

Note: While we use K3s internally, Orion is designed to be compatible with most Kubernetes distributions.
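
If you're starting from scratch, the upstream quick-start script is one common way to install K3s. A minimal sketch; pinning the channel is our suggestion, not an Orion requirement:

# Install K3s on a server node via the upstream quick-start script.
# Pinning the channel keeps you on a known release line.
curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -

# Confirm the node is up and the Kubernetes version is 1.27+
sudo k3s kubectl get nodes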

Required Services

Orion relies on the following services; the table indicates which are strictly required and which we additionally recommend:

| Service | Required | Recommended (cloud environments) | Purpose |
| --- | --- | --- | --- |
| Ingress Controller | Yes | Yes | Entry point for all cluster traffic; we recommend the Helm installation method over raw manifests |
| ArgoCD | Yes | Yes | Handles GitOps deployments |
| External DNS | No | Yes | Automates DNS record management |
| Cluster Autoscaler | No | Yes | Automatically adjusts cluster size |
| Cluster Manager | No | Yes | Provides a UI for cluster management |
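
As an illustration, ArgoCD can be installed from its upstream Helm chart. A minimal sketch; the release and namespace names below are conventions, not Orion requirements:

# Add the upstream Argo Helm repository and install the argo-cd chart
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# "argocd" release/namespace names are conventional placeholders
helm install argocd argo/argo-cd --namespace argocd --create-namespace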

Ingress Controller Options

We currently support Ingress-Nginx. Our long-term roadmap includes vendor-agnostic handling of ingress, leveraging the new Kubernetes Gateway API specification.
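
A minimal sketch of the Helm installation method recommended above, using the upstream Ingress-Nginx chart; the release and namespace names are conventional placeholders:

# Add the upstream chart repository and install Ingress-Nginx via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Release and namespace names below are conventional, not mandated by Orion
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace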

Server Role Configuration

Juno products deploy components to servers based on assigned roles. These roles are implemented through Kubernetes labels and taints.

Core Roles

| Role | Label | Purpose |
| --- | --- | --- |
| Support | juno-innovations.com/service: true | Runs infrastructure services (databases, message queues, etc.) |
| Workstation | juno-innovations.com/workstation: true | Runs interactive workstation environments |
| Headless | juno-innovations.com/headless: true | Processes background and rendering workloads |

Servers can have multiple roles if needed. For example, in smaller deployments, a server might handle both workstation and headless workloads.
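
To illustrate how these labels and taints steer scheduling, here is a minimal, hypothetical Deployment pinned to headless nodes. The names and image are placeholders; the toleration only matters if you apply the NoSchedule taint described under Node Configuration below:

# Hypothetical Deployment pinned to headless nodes (names/image are
# placeholders). The toleration is only required if the matching
# NoSchedule taint has been applied to those nodes.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-headless-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-headless-worker
  template:
    metadata:
      labels:
        app: example-headless-worker
    spec:
      nodeSelector:
        juno-innovations.com/headless: "true"
      tolerations:
        - key: juno-innovations.com/headless
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: worker
          image: busybox            # placeholder image
          command: ["sleep", "infinity"]
EOF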

On-Premises Prerequisites - LoadBalancer and DNS records

On-Premises requirements - LoadBalancer

When deploying the Ingress, which acts as the entry point for all traffic, Kubernetes needs to assign it an IP address. To do that, it asks a "LoadBalancer".

How this is provided is environment-specific - for example, all cloud providers implement their own load balancers. Kubernetes simply asks this component for an address to use; it relies on the load balancer to handle all the work in the background.

While there are many options out there, we recommend two:

  • K3s' built-in load balancer, ServiceLB. This is what our pre-packaged deployment uses as the default. It is a great choice for:
    • small-to-medium deployments
    • cases where you'd rather avoid any extra complexity
    • cases where you can perform manual failover

When using our pre-provided installation methods, this will be part of your setup by default. All you need to do on your end is to create appropriate DNS records.

To see how you would handle upgrading your underlying OS with zero downtime, refer to our Failover & Node Maintenance Guide.

  • For bigger deployments, we recommend Cilium instead. It brings additional complexity; however, we believe larger environments will benefit from its automated failover and observability capabilities. It provides:
    • automatic L2 failover: if the node holding the traffic-handling IP goes down, a healthy node automatically takes the IP over.
    • enhanced visibility and a rich set of network-level metrics.
    • a rich feature set - Cilium is in many ways the gold standard and is often used as a reference by the rest of the K8s ecosystem.

We only recommend Cilium for large, custom K8s deployments - we don't currently pre-package it in our standard installation methods.

You can configure Cilium in two modes:

  • BGP load-balancing - the most complex to manage, as well as the most performant. It's unnecessary for most customers. We recommend evaluating & testing the L2 mode practically first and upgrading only if necessary.
  • L2 Announcements - despite the official beta status, this is a very mature and capable configuration, providing a single, failover-capable IP that acts as an entry point to your cluster. If automated failover is not required, you can stick with the pre-packaged ServiceLB and manual DNS-based failover.

For configuration details on both, refer to upstream Cilium documentation.
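
As a rough illustration of the L2 Announcements mode, the sketch below creates an IP pool and an announcement policy. It assumes Cilium is installed with L2 announcements enabled; all names, the CIDR, and the interface are placeholders, and exact field names can vary between Cilium versions, so treat the upstream documentation as authoritative:

# Assumes Cilium was installed with L2 announcements enabled
# (e.g. --set l2announcements.enabled=true). Names, CIDR, and interface
# are placeholders; field names can differ between Cilium versions.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: ingress-pool
spec:
  blocks:
    - cidr: 192.168.1.240/28      # an unused range on your network
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: ingress-l2-policy
spec:
  loadBalancerIPs: true
  interfaces:
    - eth0                        # the interface facing your clients
EOF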

For a deeper dive into the underlying concepts, you can find relevant upstream documentation here:

  • Kubernetes networking overview
  • Load balancers

On-Premises requirements - DNS records

When you deploy Juno, you will need to set up DNS records pointing at your Ingress Controller. The Ingress Controller acts as a reverse proxy and is the entry point for all traffic to your cluster.

You can create these records before installation. The DNS records you'll need will point at:

  • the IP of any of your control plane nodes, when using the default K3s load balancer, ServiceLB.
    • Failover is manual in this case. It relies on you switching the DNS record to another node's IP. Each node is capable of serving traffic.
    • You can also use round-robin DNS to provide simple load balancing. If a node fails, remove its record to avoid its selection.
  • the IP you designated for L2 (or BGP) failover/load balancing, when using Cilium or bringing your own load balancer of choice.
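
Once the records are in place, you can sanity-check them. The service name and namespace below are the ingress-nginx Helm chart defaults, and the hostname is a placeholder:

# Find the external IP assigned to the Ingress Controller
kubectl get svc --namespace ingress-nginx ingress-nginx-controller

# Verify the DNS record resolves to that IP (hostname is a placeholder)
dig +short orion.example.com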

Node Configuration

Cloud Provider Configuration

For cloud environments (AWS, GCP, Azure), refer to your provider's documentation for implementing node groups with the appropriate labels and taints.
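
As one hedged example, on Amazon EKS a managed node group can declare labels and taints up front via an eksctl config file. The cluster name, region, and instance type below are placeholders:

# Hypothetical EKS managed node group carrying the workstation role.
# Cluster name, region, and instance type are placeholders; m5.xlarge
# meets the 4-core/16 GB minimum from the hardware table above.
cat > workstation-nodegroup.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: orion-cluster
  region: us-east-1
managedNodeGroups:
  - name: workstation-nodes
    instanceType: m5.xlarge
    desiredCapacity: 1
    labels:
      juno-innovations.com/workstation: "true"
    taints:
      - key: juno-innovations.com/workstation
        value: "true"
        effect: NoSchedule
EOF
eksctl create nodegroup --config-file=workstation-nodegroup.yaml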

On-Premises Configuration

Follow these steps to label and taint your on-premises Kubernetes nodes:

1. Labeling Nodes

Apply the appropriate labels to designate node roles:

# Label support nodes
kubectl label nodes <node-name> juno-innovations.com/service=true

# Label workstation nodes
kubectl label nodes <node-name> juno-innovations.com/workstation=true

# Label headless nodes
kubectl label nodes <node-name> juno-innovations.com/headless=true

2. Tainting Nodes

Tainting is only needed for cloud deployments, or if you want to isolate workloads between on-premises nodes.

Apply taints to reserve nodes for specific workloads:

# Taint workstation nodes
kubectl taint nodes <node-name> juno-innovations.com/workstation=true:NoSchedule

# Taint headless nodes
kubectl taint nodes <node-name> juno-innovations.com/headless=true:NoSchedule

Note: Support nodes typically do not need taints as they should be able to run general workloads.

Verifying Your Configuration

To verify that your nodes are correctly labeled and tainted:

# List all nodes with their labels
kubectl get nodes --show-labels | grep juno

# Check for taints on a specific node
kubectl describe node <node-name> | grep Taints
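
To see the taints across all nodes at once, a custom-columns view can help:

# List every node alongside its taints in a single view
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints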

Next Steps

Once your cluster is properly configured, you can proceed with setting up your hosts.

Troubleshooting

If you encounter issues during cluster preparation:

  • Ensure all nodes meet the minimum hardware requirements
  • Verify that Kubernetes version is 1.27 or newer
  • Check that all required services are properly installed
  • Confirm node labels and taints are correctly applied

For further assistance, contact Juno Support.