Introduction to Microservices on GKE
Deploying microservices on Google Kubernetes Engine (GKE) combines the flexibility of containerized applications with the power of managed Kubernetes. In this guide, we'll walk through the process of deploying a microservice on GKE, covering the essential components: Deployment YAML, Services, and Ingress controllers.
Whether you're transitioning from a monolith or building a new cloud-native application, understanding these core concepts will help you leverage GKE's full potential for your microservices architecture.
Creating a Deployment YAML
The Deployment resource is the foundation of your microservice on Kubernetes. It defines how your application containers should be deployed and managed.
Basic Deployment Structure:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
  labels:
    app: my-microservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: my-microservice
        image: gcr.io/my-project/my-microservice:v1.0.0
        ports:
        - containerPort: 8080
        env:
        - name: ENVIRONMENT
          value: "production"
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
Key Components Explained:
- Replicas: Defines how many identical pod instances should run
- Selector: Determines which pods the deployment manages
- Template: Specifies the pod configuration including container image, ports, and environment variables
- Resources: Sets CPU and memory requests/limits for proper scheduling and stability (for example, a request of 250m reserves a quarter of a CPU core)
Exposing Your Service
While Deployments manage your application pods, Services give those pods a stable network endpoint; pod IPs change whenever pods are rescheduled, so clients should never address pods directly.
Service YAML Example:
apiVersion: v1
kind: Service
metadata:
  name: my-microservice-service
spec:
  selector:
    app: my-microservice
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
Service Types:
- ClusterIP: Exposes the service internally within the cluster (the default; see the port-forward example after this list)
- NodePort: Makes the service accessible on each node's IP at a static port
- LoadBalancer: Creates an external load balancer in cloud providers
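Because a ClusterIP service is unreachable from outside the cluster, kubectl port-forward is a quick way to test it from your workstation. A minimal sketch, assuming the service defined above and an app that answers on its root path:

# Forward local port 8080 to port 80 of the service
kubectl port-forward svc/my-microservice-service 8080:80

# In a second terminal, send a test request
curl http://localhost:8080/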
Configuring Ingress for External Access
Ingress manages external access to your services, typically HTTP/HTTPS, providing load balancing, SSL termination, and name-based virtual hosting.
GKE Ingress Controller
GKE includes a built-in ingress controller that provisions a Google Cloud Load Balancer when you create an Ingress resource.
Ingress YAML Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    networking.gke.io/managed-certificates: "my-ssl-certificate"
spec:
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-microservice-service
            port:
              number: 80
Essential Ingress Annotations for GKE:
- Static IP: kubernetes.io/ingress.global-static-ip-name associates a reserved global static IP with the load balancer
- SSL Certificates: networking.gke.io/managed-certificates enables Google-managed SSL certificates (defined by a ManagedCertificate resource; see below)
- Backend Config: cloud.google.com/backend-config customizes backend service settings such as health checks and timeouts
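The certificate named in the annotation must exist as a ManagedCertificate resource in the same namespace as the Ingress. A minimal sketch, reusing the hypothetical domain from the Ingress above:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-ssl-certificate
spec:
  domains:
  - api.myapp.com

Provisioning only completes after DNS for the domain points at the load balancer's IP, and it can take a while; check progress with kubectl describe managedcertificate my-ssl-certificate.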
Putting It All Together: Deployment Workflow
- Containerize Your Application: Create a Docker image and push it to Google Container Registry (or its successor, Artifact Registry)
- Define Your Deployment: Create a YAML file specifying your application deployment
- Create a Service: Define how to access your pods internally
- Set Up Ingress: Configure external access with load balancing and SSL
- Apply Configuration: Use kubectl to deploy to your GKE cluster
Sample Deployment Commands:
# Build and push container image
docker build -t gcr.io/my-project/my-microservice:v1.0.0 .
docker push gcr.io/my-project/my-microservice:v1.0.0
# Apply configuration to GKE cluster
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
# Verify deployment
kubectl get deployments
kubectl get services
kubectl get ingress
Advanced Deployment Strategies
GKE supports sophisticated deployment patterns for zero-downtime updates:
Rolling Updates (Default):
Kubernetes gradually replaces old pods with new ones, ensuring continuous availability.
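The pace of a rolling update is tunable through the Deployment's strategy block; the values below are illustrative, not required:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow one extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count

An update can then be triggered and observed with:

# Roll out a new image (the v1.1.0 tag is hypothetical) and watch progress
kubectl set image deployment/my-microservice my-microservice=gcr.io/my-project/my-microservice:v1.1.0
kubectl rollout status deployment/my-microservice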
Blue-Green Deployments:
Maintain two identical environments and switch traffic between them.
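Kubernetes has no built-in blue-green primitive, but the pattern can be sketched with two Deployments that differ only in a version label on their pods; the Service selector then decides which set receives traffic. Assuming version: blue and version: green labels:

# Service selector currently pointing at the "blue" pods
spec:
  selector:
    app: my-microservice
    version: blue

Once the green environment checks out, flip all traffic at once:

kubectl patch service my-microservice-service \
  -p '{"spec":{"selector":{"app":"my-microservice","version":"green"}}}'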
Canary Releases:
Gradually roll out changes to a small subset of users before full deployment.
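A basic canary can be approximated with two Deployments behind one Service: both carry the app: my-microservice label, so traffic splits roughly in proportion to replica counts. A sketch, where my-microservice-canary is a hypothetical second Deployment running the new version:

# ~10% of requests reach the canary (9 stable replicas vs. 1 canary replica)
kubectl scale deployment my-microservice --replicas=9
kubectl scale deployment my-microservice-canary --replicas=1

For precise, percentage-based traffic splits you would reach for a service mesh such as Istio, mentioned in the conclusion.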
Monitoring and Troubleshooting
After deployment, monitor your microservice using:
- GKE Workloads Dashboard: View deployment status and resource usage
- Cloud Logging: Access container logs and cluster events
- Cloud Monitoring: Set up alerts and dashboards for performance metrics
- kubectl commands: Use kubectl logs, kubectl describe, and kubectl get events for debugging (examples below)
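A typical debugging pass with those commands, using the resource names from the examples above:

# Tail recent logs (kubectl picks one pod of the deployment)
kubectl logs deployment/my-microservice --tail=50

# Inspect replica status, update history, and recent conditions
kubectl describe deployment my-microservice

# List recent cluster events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp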
Best Practices for GKE Microservices
- Use readiness and liveness probes to ensure application health (see the sketch after this list)
- Implement proper resource requests and limits for stable performance
- Leverage ConfigMaps and Secrets for configuration management
- Set up Horizontal Pod Autoscaling based on CPU or custom metrics
- Use namespaces to organize environments (development, staging, production)
- Implement network policies to control traffic between microservices
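As an illustration of the first practice, probes slot into the container spec of the Deployment shown earlier; the /healthz path is an assumption about the application, not a Kubernetes default:

containers:
- name: my-microservice
  image: gcr.io/my-project/my-microservice:v1.0.0
  readinessProbe:          # pod receives traffic only once this succeeds
    httpGet:
      path: /healthz       # assumed application health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:           # pod is restarted if this keeps failing
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20

Horizontal Pod Autoscaling can then be enabled with kubectl autoscale deployment my-microservice --cpu-percent=70 --min=3 --max=10.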
Conclusion
Deploying microservices on GKE involves several key components working together: Deployments manage your application pods, Services provide internal networking, and Ingress controllers handle external access. By understanding these elements and following Kubernetes best practices, you can build scalable, resilient microservices architectures on Google Kubernetes Engine.
As you become more comfortable with these fundamentals, explore advanced GKE features like Cloud Run for Anthos, Istio-based service mesh, and automated pipeline deployments to further enhance your microservices strategy.