# Resource Proxy
This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.
This feature requires vCluster Platform. Both the client and target tenant clusters must be managed as `VirtualClusterInstance` resources within the platform.
The Resource Proxy feature enables vCluster to proxy custom resource (CRD) requests to other tenant clusters. When enabled, the client tenant cluster transparently stores resources in and delegates management to a target tenant cluster. This enables cross-cluster communication patterns, centralized resource management, and multi-tenant architectures.
## How it works
When you configure a client tenant cluster to proxy custom resources, vCluster intercepts API requests for those resources and forwards them to the target tenant cluster through vCluster Platform.
The proxy performs several key functions:
- Request interception: The client's Kubernetes API server intercepts requests for configured custom resources and routes them to the proxy.
- Authentication: The proxy authenticates to the target using the client's vCluster Platform identity (`loft:vcluster:p-<project>:<name>`).
- Owner labeling: On create and update operations, the proxy adds owner labels to track which client created each resource.
- Visibility filtering: When listing resources, the proxy filters results based on the configured access mode (`owned` or `all`).
- Namespace synchronization: If a namespace doesn't exist on the target, the proxy creates it automatically.
### Multi-client isolation
When multiple client tenant clusters proxy to the same target, each client only sees resources it created by default. The proxy achieves this through owner labels and label selector injection.
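As an illustration of the mechanism, a resource created through the proxy might look like this on the target, with an owner label injected by the proxy. The label key shown here is a placeholder, not the actual key vCluster uses:

```yaml
# Sketch only: "proxy.example/owner" is a placeholder label key;
# the real key and value format are internal to vCluster Platform.
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: test-resource
  namespace: default
  labels:
    proxy.example/owner: "p-default.client" # identifies the creating client
spec:
  name: "Test Resource"
```

Conceptually, when a client lists these resources, the proxy injects a label selector matching that client's own owner label, so each client's view is scoped to the resources it created.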
## Key capabilities
- Transparent access: Users interact with custom resources as if they were local to their tenant cluster.
- Centralized storage: A dedicated target tenant cluster stores all resources.
- Multi-tenant isolation: Each client tenant cluster only sees its own resources by default.
- RBAC enforcement: The target tenant cluster enforces its own RBAC policies on proxied requests.
## Platform RBAC requirements
For the resource proxy to function, the client tenant cluster must be authorized to access the target tenant cluster through vCluster Platform. This requires RBAC configuration on the platform's management cluster.
### Platform RBAC configuration
Create a Role and RoleBinding in the project namespace (`p-<project-name>`) on the platform's management cluster:
```yaml
# Platform RBAC for resource proxy
# This grants the client vCluster permission to proxy requests to the target
# Apply to the platform's management cluster in the project namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-proxy-target-access
  namespace: p-default
rules:
  - apiGroups: ["management.loft.sh"]
    resources: ["virtualclusterinstances"]
    resourceNames: ["target"] # Target vCluster name
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: client-vcluster-proxy-access
  namespace: p-default
subjects:
  - kind: User
    name: "loft:vcluster:p-default:client" # Client vCluster identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io
```
Apply this configuration to the platform's management cluster (not the tenant clusters):
```bash
kubectl apply -f platform-proxy-rbac.yaml --context <platform-context>
```
## Configuration

To enable resource proxying, configure the `experimental.proxy.customResources` section in your `vcluster.yaml`:
```yaml
# Basic Resource Proxy configuration
# Proxies MyResource resources from example.com/v1 to a target virtual cluster
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
```
### Configuration options

| Field | Type | Description |
|---|---|---|
| `enabled` | boolean | Enable or disable the proxy for this resource. |
| `targetVirtualCluster` | object | Reference to the target VirtualClusterInstance to proxy requests to. |
| `targetVirtualCluster.name` | string | Name of the target tenant cluster. Required when `enabled` is `true`. |
| `targetVirtualCluster.project` | string | Project of the target tenant cluster. Defaults to the same project as the client vCluster. |
| `accessResources` | string | Resource visibility mode: `owned` (default) or `all`. See Access modes. |
### Resource key format

The resource key follows the format `resource.apiGroup/version`:

- `myresources.example.com/v1` - proxies MyResource resources from the `example.com` API group, version `v1`
- `otherresources.test.io/v2` - proxies OtherResource resources from the `test.io` API group, version `v2`
- `additionalresources.acme.org/v1alpha1` - proxies AdditionalResource resources from the `acme.org` API group, version `v1alpha1`
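As a concrete mapping, a CRD with `plural: myresources`, `group: example.com`, and version `v1` produces the first key above:

```yaml
# Key format: <plural>.<group>/<version>
experimental:
  proxy:
    customResources:
      myresources.example.com/v1: # plural "myresources" + group "example.com" + version "v1"
        enabled: true
        targetVirtualCluster:
          name: "target"
```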
## Example: Basic proxy setup
This example demonstrates a simple two-cluster setup where a client tenant cluster proxies MyResource resources to a target tenant cluster.
**Create the target tenant cluster.**

Create a tenant cluster to serve as the target. The target doesn't need any proxy configuration; it only stores the resources and enforces RBAC:

```bash
vcluster create target
```

**Install the CustomResourceDefinition in the target tenant cluster.**

The CustomResourceDefinition must exist in the target tenant cluster:
`myresource-crd.yaml`:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com
spec:
  group: example.com
  names:
    kind: MyResource
    listKind: MyResourceList
    plural: myresources
    singular: myresource
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
                priority:
                  type: string
```

Apply the CustomResourceDefinition to the target tenant cluster:

```bash
vcluster connect target -- kubectl apply -f myresource-crd.yaml
```

**Configure RBAC in the target tenant cluster.**
Create RBAC rules to allow the client tenant cluster to access resources. The client tenant cluster authenticates using its vCluster Platform identity in the format `loft:vcluster:p-<project>:<name>`.

`target-rbac.yaml`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vcluster-proxy-client
rules:
  - apiGroups: ["example.com"]
    resources: ["myresources"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["example.com"]
    resources: ["myresources/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vcluster-proxy-client
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vcluster-proxy-client
subjects:
  - kind: User
    # vCluster identity format: loft:vcluster:p-<project>:<name>
    name: "loft:vcluster:p-default:client"
    apiGroup: rbac.authorization.k8s.io
```

Apply RBAC to the target tenant cluster:

```bash
vcluster connect target -- kubectl apply -f target-rbac.yaml
```

**Configure platform RBAC.**
Grant the client tenant cluster permission to access the target through vCluster Platform. Apply this to the platform's management cluster:

`platform-rbac.yaml`:

```yaml
# Platform RBAC for resource proxy
# This grants the client vCluster permission to proxy requests to the target
# Apply to the platform's management cluster in the project namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-proxy-target-access
  namespace: p-default
rules:
  - apiGroups: ["management.loft.sh"]
    resources: ["virtualclusterinstances"]
    resourceNames: ["target"] # Target vCluster name
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: client-vcluster-proxy-access
  namespace: p-default
subjects:
  - kind: User
    name: "loft:vcluster:p-default:client" # Client vCluster identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io
```

Apply to the platform's management cluster:

```bash
kubectl apply -f platform-rbac.yaml --context <platform-context>
```

**Create the client tenant cluster with proxy configuration.**
Configure the client tenant cluster to proxy MyResource resources to the target:

`client-vcluster.yaml`:

```yaml
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
```

Deploy the client tenant cluster:

```bash
vcluster create client -f client-vcluster.yaml
```

**Test the proxy.**
Create a MyResource in the client tenant cluster:

```bash
vcluster connect client -- kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: test-resource
  namespace: default
spec:
  name: "Test Resource"
  priority: "high"
EOF
```

Verify the resource exists in both tenant clusters:

```bash
# Check in client via proxy
vcluster connect client -- kubectl get myresources

# Check in target where resources are stored
vcluster connect target -- kubectl get myresources
```
## Example: Multi-target proxy
A single tenant cluster can proxy different resources to different target tenant clusters based on API group and version.
```yaml
# Multi-target Resource Proxy configuration
# Proxies different resources to different target virtual clusters
experimental:
  proxy:
    customResources:
      # Proxy MyResource and SecondaryResource to target-a (example.com/v1)
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target-a"
      secondaryresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target-a"
      # Proxy OtherResource and AdditionalResource to target-b (test.io/v2)
      otherresources.test.io/v2:
        enabled: true
        targetVirtualCluster:
          name: "target-b"
      additionalresources.test.io/v2:
        enabled: true
        targetVirtualCluster:
          name: "target-b"
```
In this configuration:
- `target-a` stores MyResource and SecondaryResource resources from `example.com/v1`
- `target-b` stores OtherResource and AdditionalResource resources from `test.io/v2`
## Example: Cross-project proxy
By default, the target tenant cluster is assumed to be in the same project as the client. You can proxy to a tenant cluster in a different project by specifying the project field. This works across different control plane clusters connected to the same vCluster Platform.
```yaml
# Cross-project Resource Proxy configuration
# Proxies resources to a target virtual cluster in a different project
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
          project: "other-project" # Target is in a different project
```
This is useful for scenarios where:
- A shared resource storage cluster exists in a centralized project
- Teams in different projects need to access common resources
- Tenant clusters across different control plane clusters need to share CRDs
- CI/CD environments need access to centralized resource management
## Access modes

The `accessResources` field controls which resources a client tenant cluster can see in the target:
```yaml
# Access modes configuration examples

# Example 1: "owned" mode (default)
# The virtual cluster only sees resources it created
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
        # accessResources defaults to "owned" - only see resources we created
---
# Example 2: "all" mode
# The virtual cluster can see all resources regardless of owner
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
        accessResources: all # Can see all resources, not just owned ones
```
### `owned` mode (default)
Each client tenant cluster only sees resources it created. This enables tenant isolation where multiple client tenant clusters can share a target without seeing each other's resources.
### `all` mode
The client tenant cluster can see all resources in the target, regardless of who created them. This is useful for read-only observers, centralized dashboards, or admin access.
The `accessResources` mode controls visibility (which resources a tenant cluster can see), while RBAC in the target tenant cluster controls permissions (which operations the tenant cluster can perform). For example, a tenant cluster with `accessResources: all` but read-only RBAC can see all resources but can't modify any.
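As a sketch of this split, a read-only observer could pair `accessResources: all` on the client with a read-only ClusterRole in the target. The `vcluster-proxy-observer` name below is illustrative:

```yaml
# Client vcluster.yaml: visibility covers everything in the target
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
        accessResources: all
```

```yaml
# Target RBAC: permissions restricted to read-only verbs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vcluster-proxy-observer
rules:
  - apiGroups: ["example.com"]
    resources: ["myresources"]
    verbs: ["get", "list", "watch"]
```

Bind the ClusterRole to the observer's platform identity with a ClusterRoleBinding, as in the other RBAC examples.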
## Example: Multi-client isolation
This example demonstrates how multiple client tenant clusters can share a target while maintaining isolation.
**Configure client tenant clusters.**

Both clients proxy to the same target:
`team-a-vcluster.yaml`:

```yaml
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "orchestrator"
        # Uses default accessResources: owned
```

`team-b-vcluster.yaml`:

```yaml
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "orchestrator"
        # Uses default accessResources: owned
```

**Configure target RBAC for multiple clients.**
Configure RBAC in the target for both clients:
`multi-client-target-rbac.yaml`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vcluster-proxy-client
rules:
  - apiGroups: ["example.com"]
    resources: ["myresources"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["example.com"]
    resources: ["myresources/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list", "watch", "create"]
---
# Bind for team-a
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vcluster-proxy-team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vcluster-proxy-client
subjects:
  - kind: User
    name: "loft:vcluster:p-default:team-a"
    apiGroup: rbac.authorization.k8s.io
---
# Bind for team-b
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vcluster-proxy-team-b
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vcluster-proxy-client
subjects:
  - kind: User
    name: "loft:vcluster:p-default:team-b"
    apiGroup: rbac.authorization.k8s.io
```

Apply to the target tenant cluster:

```bash
vcluster connect orchestrator -- kubectl apply -f multi-client-target-rbac.yaml
```

**Configure platform RBAC for multiple clients.**
Grant both client tenant clusters permission to access the target through vCluster Platform:
`multi-client-platform-rbac.yaml`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-proxy-target-access
  namespace: p-default
rules:
  - apiGroups: ["management.loft.sh"]
    resources: ["virtualclusterinstances"]
    resourceNames: ["orchestrator"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-vcluster-proxy-access
  namespace: p-default
subjects:
  - kind: User
    name: "loft:vcluster:p-default:team-a"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-b-vcluster-proxy-access
  namespace: p-default
subjects:
  - kind: User
    name: "loft:vcluster:p-default:team-b"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io
```

Apply to the platform's management cluster:

```bash
kubectl apply -f multi-client-platform-rbac.yaml --context <platform-context>
```

**Test isolation.**
```bash
# Team A creates a resource
vcluster connect team-a -- kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: team-a-resource
spec:
  name: "Team A Data Resource"
EOF

# Team B creates a resource
vcluster connect team-b -- kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: team-b-resource
spec:
  name: "Team B ML Resource"
EOF

# Team A only sees their resource
vcluster connect team-a -- kubectl get myresources
# NAME              AGE
# team-a-resource   1m

# Team B only sees their resource
vcluster connect team-b -- kubectl get myresources
# NAME              AGE
# team-b-resource   1m

# Target orchestrator sees both
vcluster connect orchestrator -- kubectl get myresources
# NAME              AGE
# team-a-resource   2m
# team-b-resource   1m
```
## Limitations
- All custom resources within the same API group and version must use the same target tenant cluster.
- When configuring RBAC for status updates, include permissions for the `status` subresource.
## Troubleshoot

### Resources not appearing
If resources don't appear after creation:
1. Check RBAC configuration: Ensure the client tenant cluster's identity has correct permissions in the target.

2. Verify the CustomResourceDefinition exists in the target:

   ```bash
   vcluster connect <target> -- kubectl get crd <resource>.<group>
   ```

3. Check tenant cluster logs for errors:

   ```bash
   kubectl logs -n vcluster-<name> -l app=vcluster --tail=100
   ```
### Permission denied errors
Permission errors can occur at two levels: the platform level and the target tenant cluster level.
1. Check platform RBAC: Ensure the client has `use` permission on the target VirtualClusterInstance:

   ```bash
   kubectl auth can-i use virtualclusterinstances/target \
     --as="loft:vcluster:p-default:client" \
     -n p-default \
     --context <platform-context>
   ```

2. Verify the tenant cluster identity format: The identity follows `loft:vcluster:p-<project>:<name>`.

3. Test target RBAC directly:

   ```bash
   vcluster connect <target> -- kubectl auth can-i create myresources.example.com \
     --as="loft:vcluster:p-default:client"
   ```
### Target unavailable errors
Ensure the target tenant cluster is running and healthy. vCluster automatically attempts to reconnect when the target becomes available.
## Config reference
### `experimental.proxy`

| Field | Type | Default | Description |
|---|---|---|---|
| `customResources` | map[string]CustomResourceProxy | `{}` | Map of resource keys to proxy configuration. |
### `CustomResourceProxy`

| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `false` | Enable the proxy for this resource. |
| `targetVirtualCluster` | VirtualClusterRef | - | Reference to the target tenant cluster. Required when `enabled` is `true`. |
| `accessResources` | string | `"owned"` | Resource visibility mode: `owned` or `all`. |
### `VirtualClusterRef`

| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | - | Name of the target tenant cluster. Required. |
| `project` | string | Same as source | Project of the target tenant cluster. |