Part-92: πŸš€ Create a Node Pool and Deploy Kubernetes Workloads with NodeSelector in Google Kubernetes Engine (GKE)


When working with Google Kubernetes Engine (GKE), you often want to control which nodes your workloads run on.
This is where Node Pools and Node Selectors come in.

In this guide, we’ll:

βœ… Create a new GKE Node Pool
βœ… Deploy a Kubernetes Deployment that uses a NodeSelector
βœ… Verify Pod placement and access the application




πŸ”Ή Step 1: Introduction

We’ll be doing three things:

  1. Create a Node Pool in GKE
  2. Deploy a Kubernetes Deployment with a nodeSelector
  3. Verify and clean up resources



πŸ”Ή Step 2: Create a GKE Node Pool

First, let’s check the existing node pools in our cluster:

# List Node Pools
gcloud container node-pools list \
  --cluster "standard-public-cluster-1" \
  --location "us-central1"

Now, create a Linux Node Pool with spot VMs:

# Create Linux Node Pool 
gcloud container node-pools create "linuxapps-nodepool" \
  --cluster "standard-public-cluster-1" \
  --machine-type "e2-small" \
  --disk-size "20" \
  --num-nodes "1" \
  --location "us-central1" \
  --spot 

Verify creation:

# List Node Pools again
gcloud container node-pools list \
  --cluster "standard-public-cluster-1" \
  --location "us-central1"
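A quick way to confirm that the new nodes actually joined the cluster (assuming kubectl is already authenticated against this cluster) is to filter nodes by the pool label that GKE applies automatically:

```shell
# List only the nodes that belong to the new pool
kubectl get nodes -l cloud.google.com/gke-nodepool=linuxapps-nodepool -o wide
```

Note that because `us-central1` is a region, `--num-nodes "1"` typically means one node per zone, so you may see three nodes listed here rather than one.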



πŸ”Ή Step 3: Review Kubernetes Deployment Pod Specification with NodeSelector

Now, let’s create a Deployment that forces Pods to run only on our new node pool.

πŸ“Œ Create a file: 01-kubernetes-deployment.yaml

apiVersion: apps/v1
kind: Deployment  
metadata: 
  name: mylinuxapp-deployment
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: mylinuxapp
  template:
    metadata: 
      name: mylinuxapp-pod
      labels:
        app: mylinuxapp 
    spec:
      # πŸ‘‡ NodeSelector ensures Pods only run in our node pool
      nodeSelector:
        cloud.google.com/gke-nodepool: linuxapps-nodepool  
      containers: 
        - name: mylinuxapp-container
          image: ghcr.io/stacksimplify/kubenginx:1.0.0
          ports: 
            - containerPort: 80 
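The cloud.google.com/gke-nodepool label referenced in the nodeSelector isn't something we created — GKE stamps it on every node automatically. As a quick sketch, you can print each node's pool value (the backslash escaping is needed because the label key itself contains dots):

```shell
# Show which pool each node belongs to
kubectl get nodes \
  -o custom-columns='NODE:.metadata.name,POOL:.metadata.labels.cloud\.google\.com/gke-nodepool'
```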

πŸ“Œ Create a LoadBalancer service: 02-kubernetes-loadbalancer-service.yaml

apiVersion: v1
kind: Service 
metadata:
  name: mylinuxapp-lb-service
spec:
  type: LoadBalancer 
  selector:
    app: mylinuxapp
  ports: 
    - name: http
      port: 80   # Service Port
      targetPort: 80 # Container Port    
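With both files saved under a kube-manifests/ directory (the path used in the next step), a client-side dry run is a cheap sanity check that catches YAML and schema mistakes before anything reaches the cluster:

```shell
# Validate manifests locally without creating resources
kubectl apply -f kube-manifests/ --dry-run=client
```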




πŸ”Ή Step 4: Deploy and Verify

Apply the manifests:

# Deploy Kubernetes Resources
kubectl apply -f kube-manifests/01-kubernetes-deployment.yaml
kubectl apply -f kube-manifests/02-kubernetes-loadbalancer-service.yaml

Check if Pods are running in the correct node pool:

# Verify Pods and their nodes
kubectl get pods -o wide

πŸ‘‰ Observation: Pods should be scheduled on nodes that belong to linuxapps-nodepool.
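If the selector had a typo (say, a pool name that matches no node label), the Pods would sit in Pending rather than being scheduled elsewhere. Two commands that help debug placement, assuming the `app: mylinuxapp` label from the Deployment above:

```shell
# Compact pod-to-node mapping
kubectl get pods -l app=mylinuxapp \
  -o custom-columns='POD:.metadata.name,NODE:.spec.nodeName'

# For a stuck Pod, the Events section will show a FailedScheduling reason
kubectl describe pods -l app=mylinuxapp | grep -A 5 'Events:'
```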

Now, get the service external IP and test the app:

# Access Application
kubectl get svc
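The EXTERNAL-IP column may read `<pending>` for a minute or two while GCP provisions the load balancer. Once it's assigned, one way to grab the IP and test from the terminal (a sketch using the service name above):

```shell
# Extract the load balancer IP and curl the app
EXTERNAL_IP=$(kubectl get svc mylinuxapp-lb-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}"
```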

Open in browser:

http://&lt;EXTERNAL-IP&gt;




πŸ”Ή Step 5: Clean-Up

When done, clean up resources.

# Delete Kubernetes Resources
kubectl delete -f kube-manifests/

If you don’t need the node pool anymore:

# Delete Node Pool (⚠️ keep if needed for next demo like DaemonSets)
gcloud container node-pools delete "linuxapps-nodepool" \
  --cluster "standard-public-cluster-1" \
  --location "us-central1"
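gcloud prompts for confirmation before deleting a node pool, and the operation can take several minutes. For scripts or CI, the same command can run non-interactively with `--quiet`:

```shell
# Non-interactive node pool deletion
gcloud container node-pools delete "linuxapps-nodepool" \
  --cluster "standard-public-cluster-1" \
  --location "us-central1" \
  --quiet
```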




βœ… Recap

  • Node Pools let you group nodes with different configurations inside a GKE cluster.
  • Node Selectors ensure Pods are scheduled only on the nodes you want.
  • Together, they give you fine-grained control over workload placement in GKE.

🌟 Thanks for reading! If this post added value, a like ❀️, follow, or share would encourage me to keep creating more content.


β€” Latchu | Senior DevOps & Cloud Engineer

☁️ AWS | GCP | ☸️ Kubernetes | πŸ” Security | ⚑ Automation
πŸ“Œ Sharing hands-on guides, best practices & real-world cloud solutions


