System Requirements
This guide will walk you through the necessary steps to prepare your Kubernetes cluster for a production-ready Orion environment. Proper cluster preparation ensures optimal performance and reliability.
Prerequisites
Hardware Requirements
The following table outlines the minimum hardware requirements for a standard Orion installation:
Server Role | Count | CPU | RAM | Purpose |
---|---|---|---|---|
Service | 1 | 4 cores | 16 GB | Runs core services required by Orion |
Workstation | 1 | 4 cores | 16 GB | Handles workstation tasks |
Headless | 1 | 4 cores | 16 GB | Processes headless workloads |
Minimum Requirements
These are minimum requirements. For production environments or larger workloads, we recommend scaling up resources accordingly. We highly recommend having at least 2 nodes for the Service role to ensure high availability.
Multi-Role Nodes
For small test deployments or proof-of-concept setups, you can run all roles on a single server. However, this is not recommended for production use.
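You can read each node's capacity straight from the Kubernetes API to check it against these minimums, for example with kubectl's custom-columns output (memory is reported in Ki):

```bash
# Show each node's CPU core count and total memory capacity
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'
```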
Kubernetes Requirements
Component | Requirement | Notes |
---|---|---|
Kubernetes Version | 1.27+ | Earlier versions may not be fully supported |
Other Distros
While we use K3s internally, Orion is designed to be compatible with most Kubernetes distributions.
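To confirm your cluster meets the version requirement, you can query the control plane directly from any machine with cluster access:

```bash
# The "Server Version" field must report v1.27 or newer
kubectl version
```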
Services
The following services are needed for Orion to function properly:
Service | Required | Recommended (cloud environments) | Purpose |
---|---|---|---|
Ingress Controller | Yes | Yes | Routes external traffic into the cluster; we recommend the Helm installation method over raw manifests |
ArgoCD | Yes | Yes | Handles GitOps deployments |
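As an illustration of the Helm-based approach, here is how the upstream ingress-nginx and Argo CD charts are typically installed. The choice of ingress-nginx is an assumption for this sketch; substitute the ingress controller your environment uses:

```bash
# Install an ingress controller (ingress-nginx assumed here) via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

# Install ArgoCD from the upstream argo-helm chart
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --namespace argocd --create-namespace
```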
Server Role Configuration
Juno products deploy components to servers based on assigned roles. These roles are implemented through Kubernetes labels and taints.
Core Roles
Role | Label | Purpose |
---|---|---|
Support | juno-innovations.com/service: true | Runs infrastructure services (Genesis, Kuiper, Terra, Titan, etc.) |
Workstation | juno-innovations.com/workstation: true | Runs interactive workstation environments |
Headless | juno-innovations.com/headless: true | Runs generic workloads |
Multi-Role Nodes
Servers can have multiple roles if needed. For example, in smaller deployments, a server might handle both workstation and headless workloads.
Node Configuration
Cloud Provider Configuration
For cloud environments (AWS, GCP, Azure), refer to your provider's documentation for implementing node groups with the appropriate labels and taints.
On-Premises Configuration
Follow these steps to label and taint your on-premises Kubernetes nodes:
Labeling Nodes
k3s Users
If you used our OneClick installer, your nodes may already be labeled correctly. You can verify this by opening the Genesis panel and navigating to the Network section.
You can also use our integrated node provisioning feature to add and label nodes automatically; follow the guide in the Expand your Orion cluster section.
Apply the appropriate labels to designate node roles:
```bash
# Label support (service) nodes
kubectl label nodes <node-name> juno-innovations.com/service=true

# Label workstation nodes
kubectl label nodes <node-name> juno-innovations.com/workstation=true

# Label headless nodes
kubectl label nodes <node-name> juno-innovations.com/headless=true
```
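Roles are enforced through taints as well as labels. The exact taint keys and effects for your deployment may differ, so treat the following as a hypothetical sketch that mirrors the label keys above:

```bash
# Hypothetical: taint a workstation node so that only workloads which
# tolerate the role taint are scheduled onto it (key/effect are assumptions)
kubectl taint nodes <node-name> juno-innovations.com/workstation=true:NoSchedule
```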
Verifying Your Configuration
To verify that your nodes are correctly labeled:
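```bash
# List all nodes with their labels
kubectl get nodes --show-labels

# Or check a single role, e.g. all workstation nodes
kubectl get nodes -l juno-innovations.com/workstation=true

# Inspect the taints on a specific node
kubectl describe node <node-name> | grep Taints
```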
On-Premises Recommendations - LoadBalancer and DNS Records
LoadBalancer
When you deploy the Ingress, which acts as the entry point for all traffic, Kubernetes needs to assign it an IP address. To do that, it asks a "LoadBalancer" implementation. How this is provided is environment-specific: cloud providers, for example, each implement their own load balancers. Kubernetes only asks this component for an address; the load balancer handles all the work of carrying traffic in the background.
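You can see this hand-off in practice: until a load balancer implementation assigns an address, a Service of type LoadBalancer reports a pending external IP:

```bash
# List Services across all namespaces; LoadBalancer entries show
# <pending> under EXTERNAL-IP until a load balancer assigns an address
kubectl get svc -A
```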
While there are many options out there, we recommend two:

- K3s' built-in Load Balancer, ServiceLB. This is what our pre-packaged deployment uses as the default. It is a great choice for:
  - small-to-medium deployments
  - cases where you'd rather avoid any extra complexity
  - cases where you can perform manual failover

  When using our pre-provided installation methods, ServiceLB is part of your setup by default. All you need to do on your end is create the appropriate DNS records. To see how you would handle upgrading your underlying OS with zero downtime, refer to our Failover & Node Maintenance Guide.
- For bigger deployments, we recommend Cilium instead. It brings additional complexity, but we believe larger environments will benefit from its automated failover and observability capabilities. It provides:
  - Automatic L2 failover: if the node holding the traffic-handling IP goes down, the IP is automatically taken over by a healthy node.
  - Enhanced visibility and a rich set of network-level metrics.
  - A rich feature set: Cilium is in many ways the gold standard and is often used as a reference for the rest of the K8s ecosystem.

  We only recommend Cilium for large, custom K8s deployments - we don't currently pre-package it in our standard installation methods.
You can configure Cilium in two modes:

- BGP load balancing - the most complex to manage, as well as the most performant.

  Advice

  It's unnecessary for most customers. We recommend evaluating and testing the L2 mode first, and upgrading only if necessary.

- L2 Announcements - despite the official beta status, this is a very mature and capable configuration, providing a single, failover-capable IP that acts as the entry point to your cluster.

  Advice

  If automated failover is not required, you can stick with the pre-packaged ServiceLB and manual DNS-based failover.

For configuration details on both, refer to the upstream Cilium documentation.
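For orientation, a minimal L2 Announcements setup consists of roughly the two resources below. This is a sketch based on the upstream Cilium documentation: CRD field names can vary between Cilium versions, and the pool CIDR and resource names here are assumptions.

```bash
# Hypothetical sketch: define an IP pool Cilium can assign LoadBalancer
# addresses from, plus a policy announcing those IPs over L2
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: ingress-pool
spec:
  blocks:
    - cidr: 10.0.10.0/30   # assumed range; use one routable on your LAN
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-ingress
spec:
  loadBalancerIPs: true
EOF
```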
For a deeper dive into the underlying concepts, refer to the relevant upstream documentation.
DNS records
When you deploy Juno, you will need to set up DNS records pointing at your Ingress Controller. The Ingress Controller acts as a reverse proxy and is the entry point for all traffic to your cluster. You can create these records before the installation. Your DNS records should point to:
- the IP of any of your control plane nodes, when using the default k3s Load Balancer, ServiceLB.
  - Failover is manual in this case: it relies on you switching the DNS record to another node's IP. Each node is capable of serving traffic.
  - You can also use round-robin DNS to provide simple load balancing. If a node fails, you would remove its record to avoid its selection.
- the IP you designated for L2 (or BGP) failover/load balancing, when using Cilium or bringing your own load balancer of choice.
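As a concrete sketch, round-robin records for two control plane nodes might look like this in a zone file (the hostname and the 192.0.2.x addresses are placeholders):

```
orion.example.com.  300  IN  A  192.0.2.10   ; control plane node 1
orion.example.com.  300  IN  A  192.0.2.11   ; control plane node 2 - remove this record to fail over away from the node
```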
Troubleshooting
If you encounter issues during cluster preparation:
- Ensure all nodes meet the minimum hardware requirements
- Verify that your Kubernetes version is 1.27 or newer
- Check that all required services are properly installed
- Confirm node labels and taints are correctly applied
For further assistance, contact Juno Support.