As organizations mature in their Kubernetes journey, they often encounter more complex requirements that go beyond basic cluster operations. This article explores three advanced Kubernetes topics: multi-cluster management, Custom Resource Definitions (CRDs), and building effective CI/CD pipelines with Kubernetes.
Multi-Cluster Management
Managing multiple Kubernetes clusters has become increasingly common for organizations seeking fault tolerance, geographic distribution, regulatory compliance, or environment separation. Multi-cluster strategies present unique challenges and opportunities.
Multi-Cluster Architecture Patterns
1. Federation Pattern
Using tools such as Kubernetes Cluster Federation (KubeFed) to manage multiple clusters as a single logical entity (note that the KubeFed project has since been archived):
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  placement:
    clusters:
      - name: cluster1
      - name: cluster2
      - name: cluster3
  template:
    metadata:
      labels:
        app: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: my-app:latest
              ports:
                - containerPort: 8080
2. Hub-Spoke Pattern
A central cluster manages configuration and policies for multiple edge clusters:
# Using GitOps with ArgoCD for the hub-spoke model
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster1-app
  namespace: argocd
spec:
  project: default
  destination:
    server: https://cluster1.example.com
    namespace: production
  source:
    repoURL: https://github.com/myorg/gitops-repo
    targetRevision: HEAD
    path: apps/my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
3. Multi-Cluster Service Mesh
Istio can extend a single mesh across cluster boundaries. For example, a ServiceEntry can direct traffic to endpoints reachable through remote cluster gateways (15443 is Istio's conventional multi-cluster gateway port):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
    - external-service.example.com
  location: MESH_EXTERNAL
  ports:
    - number: 80
      name: http
      protocol: HTTP
  resolution: DNS
  endpoints:
    - address: cluster1.example.com
      ports:
        http: 15443
    - address: cluster2.example.com
      ports:
        http: 15443
Multi-Cluster Management Tools
Kubernetes Cluster API
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: production-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    services:
      cidrBlocks: ["10.96.0.0/12"]
  controlPlaneEndpoint:
    host: production-cluster.example.com
    port: 6443
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: production-cluster
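The infrastructureRef above delegates provider-specific details to a separate object. A minimal AWSCluster for the AWS provider might look like the following sketch (the region and SSH key values are illustrative assumptions):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: production-cluster  # must match the Cluster's infrastructureRef
  namespace: default
spec:
  region: us-east-1   # illustrative AWS region
  sshKeyName: default # illustrative EC2 key pair for node access
```

The Cluster API controllers reconcile the two objects together: the generic Cluster owns lifecycle and networking intent, while the AWSCluster carries everything AWS-specific.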
Rancher Fleet for Multi-Cluster GitOps
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-app
  namespace: fleet-default
spec:
  repo: https://github.com/myorg/gitops-repo
  branch: main
  targets:
    - name: production
      clusterSelector:
        matchLabels:
          env: production
    - name: development
      clusterSelector:
        matchLabels:
          env: development
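Per-environment differences are typically expressed in a fleet.yaml file inside the repository that the GitRepo scans. A sketch, assuming the label values match the targets above and a Helm-based bundle (the replica override is illustrative):

```yaml
# fleet.yaml, placed in the directory Fleet bundles from
namespace: my-app
targetCustomizations:
  - name: production
    clusterSelector:
      matchLabels:
        env: production
    helm:
      values:
        replicas: 5  # illustrative production-only override
```

This keeps the GitRepo object itself environment-agnostic; all customization lives next to the application manifests in Git.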
Multi-Cluster Networking
Submariner for Cross-Cluster Networking
apiVersion: submariner.io/v1
kind: Cluster
metadata:
  name: cluster1
  namespace: submariner-operator
spec:
  cluster_id: cluster1
  color_codes:
    - blue
  global_cidr:
    - 242.0.0.0/8
  service_cidr:
    - 10.96.0.0/12
  cluster_cidr:
    - 192.168.0.0/16
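Once clusters are joined to the broker, individual services are shared across the cluster set through the Multi-Cluster Services (MCS) API, which Submariner implements. Exporting a service is a single-object operation:

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-service     # must match the name of the Service being exported
  namespace: my-namespace
```

Consumers in other clusters can then resolve the exported service via the clusterset domain, e.g. my-service.my-namespace.svc.clusterset.local.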
Custom Resource Definitions (CRDs)
CRDs extend the Kubernetes API to create custom resources that behave like native Kubernetes objects. They enable domain-specific abstractions and operators.
Creating CRDs
Basic CRD Definition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webapps.example.com
spec:
  group: example.com
  names:
    kind: WebApp
    listKind: WebAppList
    plural: webapps
    singular: webapp
    shortNames:
      - wa
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 10
                image:
                  type: string
                port:
                  type: integer
            status:
              type: object
              properties:
                availableReplicas:
                  type: integer
                conditions:
                  type: array
                  items:
                    type: object
                    properties:
                      type:
                        type: string
                      status:
                        type: string
                      lastTransitionTime:
                        type: string
                      reason:
                        type: string
                      message:
                        type: string
      subresources:
        status: {}
      additionalPrinterColumns:
        - name: Replicas
          type: integer
          jsonPath: .spec.replicas
        - name: Image
          type: string
          jsonPath: .spec.image
        - name: Port
          type: integer
          jsonPath: .spec.port
        - name: Available
          type: integer
          jsonPath: .status.availableReplicas
Advanced CRD Features
Validation with Webhooks
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: webapp-validator
webhooks:
  - name: webapp-validator.example.com
    rules:
      - apiGroups: ["example.com"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["webapps"]
    clientConfig:
      service:
        namespace: default
        name: webapp-validator
        path: /validate
      caBundle: ${CA_BUNDLE}
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 5
Conversion Webhooks
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webapps.example.com
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          namespace: default
          name: webapp-converter
          path: /convert
      conversionReviewVersions: ["v1"]
  # ... rest of the CRD definition
Finalizers
apiVersion: example.com/v1
kind: WebApp
metadata:
  name: my-webapp
  finalizers:
    - webapp.example.com/finalizer
spec:
  replicas: 3
  image: nginx:latest
  port: 80
Controller Implementation Patterns
Using controller-runtime
import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	examplecomv1 "example.com/webapp-operator/api/v1" // your project's generated API package
)

const webappFinalizer = "webapp.example.com/finalizer"

func (r *WebAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := ctrl.LoggerFrom(ctx)
	log.Info("reconciling WebApp", "name", req.NamespacedName)

	// Fetch the WebApp instance
	webapp := &examplecomv1.WebApp{}
	if err := r.Get(ctx, req.NamespacedName, webapp); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Add finalizer if not present
	if !controllerutil.ContainsFinalizer(webapp, webappFinalizer) {
		controllerutil.AddFinalizer(webapp, webappFinalizer)
		if err := r.Update(ctx, webapp); err != nil {
			return ctrl.Result{}, err
		}
	}

	// Check if the object is being deleted
	if !webapp.GetDeletionTimestamp().IsZero() {
		return r.finalizeWebApp(ctx, webapp)
	}

	// Reconcile the owned resources
	if err := r.reconcileDeployment(ctx, webapp); err != nil {
		return ctrl.Result{}, err
	}
	if err := r.reconcileService(ctx, webapp); err != nil {
		return ctrl.Result{}, err
	}

	// Update status
	if err := r.updateStatus(ctx, webapp); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
CI/CD Pipelines with Kubernetes
Building effective CI/CD pipelines for Kubernetes requires specialized approaches that leverage Kubernetes-native tools and patterns.
GitOps Approach with ArgoCD
Application Definition
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  source:
    repoURL: https://github.com/myorg/gitops-repo
    targetRevision: HEAD
    path: apps/my-app
    helm:
      valueFiles:
        - values-production.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas
App-of-Apps Pattern
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  source:
    repoURL: https://github.com/myorg/gitops-repo
    targetRevision: HEAD
    path: root
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
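With this pattern, the root path contains one Application manifest per child application, so adding an app to the platform is just a commit. A possible repository layout (directory and file names are illustrative):

```
gitops-repo/
├── root/
│   ├── app-frontend.yaml   # Application pointing at apps/frontend
│   └── app-backend.yaml    # Application pointing at apps/backend
└── apps/
    ├── frontend/
    └── backend/
```

ArgoCD syncs root-app, which creates the child Applications, which in turn sync their own paths.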
Tekton Pipelines
Pipeline Definition
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  workspaces:
    - name: source-code
    - name: docker-config
  params:
    - name: imageUrl
      type: string
    - name: imageTag
      type: string
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: source-code
      params:
        - name: url
          value: https://github.com/myorg/my-app
        - name: revision
          value: main
    - name: run-tests
      taskRef:
        name: npm-test
      runAfter: ["fetch-source"]
      workspaces:
        - name: source
          workspace: source-code
    - name: build-image
      taskRef:
        name: buildah
      runAfter: ["run-tests"]
      workspaces:
        - name: source
          workspace: source-code
        - name: dockerconfig
          workspace: docker-config
      params:
        - name: IMAGE
          value: "$(params.imageUrl):$(params.imageTag)"
    - name: deploy-to-dev
      taskRef:
        name: kustomize-deploy
      runAfter: ["build-image"]
      workspaces:
        - name: manifest-dir
          workspace: source-code
      params:
        - name: environment
          value: development
PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-pipeline-run-1
  annotations:
    triggers.tekton.dev/trigger: push-trigger
spec:
  pipelineRef:
    name: ci-pipeline
  workspaces:
    - name: source-code
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    - name: docker-config
      secret:
        secretName: docker-config
  params:
    - name: imageUrl
      value: registry.example.com/myapp
    - name: imageTag
      value: latest
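In practice, PipelineRuns like this are created automatically by Tekton Triggers on repository events rather than applied by hand. A sketch of an EventListener wiring up the push-trigger referenced above (the binding and template names are assumptions; the TriggerBinding would extract the repo URL and revision from the webhook payload, and the TriggerTemplate would stamp out the PipelineRun):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: push-listener
spec:
  serviceAccountName: tekton-triggers-sa  # assumed SA with Triggers RBAC
  triggers:
    - name: push-trigger
      bindings:
        - ref: push-binding             # assumed TriggerBinding
      template:
        ref: ci-pipeline-template       # assumed TriggerTemplate
```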
Jenkins on Kubernetes
Jenkinsfile with Kubernetes Pod Templates
pipeline {
    agent {
        kubernetes {
            label 'my-app-build'
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: node
      image: node:16
      command: ['cat']
      tty: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
    - name: docker
      image: docker:20
      command: ['cat']
      tty: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                container('node') {
                    sh 'npm install'
                    sh 'npm run build'
                }
            }
        }
        stage('Test') {
            steps {
                container('node') {
                    sh 'npm test'
                }
            }
        }
        stage('Build Image') {
            steps {
                container('docker') {
                    // Double quotes so Groovy interpolates env.BUILD_ID
                    sh "docker build -t myapp:${env.BUILD_ID} ."
                }
            }
        }
    }
}
Security Scanning in CI/CD
Trivy Security Scanning
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: trivy-scan
spec:
  params:
    - name: image
      type: string
  steps:
    - name: trivy-scan
      image: aquasec/trivy:latest
      script: |
        #!/bin/sh
        trivy image --exit-code 1 --severity HIGH,CRITICAL $(params.image)
        trivy image --format template --template "@/contrib/html.tpl" \
          -o /workspace/scan-report.html $(params.image)
      volumeMounts:
        - name: trivy-cache
          mountPath: /root/.cache
  volumes:
    - name: trivy-cache
      emptyDir: {}
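To actually gate deployments on the scan, the Task can be inserted between image build and deployment in the ci-pipeline shown earlier. A sketch of the additional task entry (deploy-to-dev's runAfter would then reference scan-image instead of build-image):

```yaml
# Extra entry under spec.tasks in the ci-pipeline Pipeline
- name: scan-image
  taskRef:
    name: trivy-scan
  runAfter: ["build-image"]
  params:
    - name: image
      value: "$(params.imageUrl):$(params.imageTag)"
```

Because the first trivy invocation uses --exit-code 1 for HIGH and CRITICAL findings, a vulnerable image fails the task and blocks the downstream deploy step.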
Progressive Delivery with Flagger
Canary Release Configuration
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 9898
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
      - name: request-duration
        threshold: 500
        interval: 1m
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          type: cmd
          cmd: "hey -z 1m -q 10 -c 2 http://my-app-canary:9898/"
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          type: cmd
          cmd: "curl -s http://my-app-primary:9898/health | grep ok"
Best Practices
Multi-Cluster Best Practices
- Use consistent naming and labeling across clusters
- Implement centralized logging and monitoring
- Establish clear cluster boundaries and responsibilities
- Automate cluster provisioning and management
- Regularly test failover and disaster recovery procedures
CRD Best Practices
- Follow Kubernetes API conventions
- Use OpenAPI schema validation
- Implement proper status handling
- Use finalizers for proper cleanup
- Version your CRDs appropriately
CI/CD Best Practices
- Implement GitOps for declarative environment management
- Use ephemeral environments for testing
- Scan images for vulnerabilities in the pipeline
- Implement progressive delivery strategies
- Monitor deployment metrics and rollback automatically when needed
These advanced Kubernetes topics represent the evolution of cloud-native practices as organizations scale their Kubernetes usage. By mastering multi-cluster management, leveraging CRDs for custom automation, and implementing robust CI/CD pipelines, teams can achieve higher levels of efficiency, reliability, and innovation.