How to Set Up AWS EFS Static Provisioning Across Multiple Kubernetes Namespaces

Bitnami PostgreSQL is a widely used container image that runs as a non-root user by default. Persistent storage, however, and especially storage shared between environments such as dev and test, can be a problem. In this post, I'll walk you through how I used AWS EFS static provisioning to share storage between two namespaces running Bitnami PostgreSQL on Kubernetes.



Why Static Provisioning?

While dynamic provisioning is convenient, static provisioning gives you full control: you define a PersistentVolume (PV) by hand that points at an AWS EFS file system or access point, which is ideal when one volume has to serve multiple environments (e.g., dev and test). Concretely, static provisioning gives you:

  • Full control over PersistentVolume (PV) setup.
  • A way to reuse the same EFS volume across different namespaces.
  • Simpler debugging for permission or access issues.
  • No need to define a StorageClass for EFS.



What We’re Building

A PostgreSQL setup running in two separate namespaces: dev and test

  • Both environments mount the same EFS volume
  • PostgreSQL data is shared

(Diagram: PostgreSQL pods in the dev and test namespaces mounting the same EFS volume)



Prerequisites

Before you begin:

  • A running Kubernetes cluster (K3s, EKS, etc.)
  • An AWS EFS file system already created



Project Structure

Your repo should look like this:
deployment-files/
├── deployment-dev/
│   └── pv-dev.yml, pvc-dev.yml, postgres.yml
└── deployment-test/
    └── pv-test.yml, pvc-test.yml, postgres.yml



Step 1: Create EFS Access Point

To prevent permission issues when mounting EFS across namespaces, create an access point from the AWS Console (or the AWS CLI, as shown below) with:

  • User ID: 1001
  • Group ID: 1001
  • Permissions: 0775
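
If you prefer the CLI, the same access point can be created with aws efs create-access-point. The file system ID and the root directory path /postgres are placeholders here, not values from the original post:

# Create an access point that enforces UID/GID 1001 and 0775 permissions
aws efs create-access-point \
  --file-system-id fs-<file-system-id> \
  --posix-user Uid=1001,Gid=1001 \
  --root-directory 'Path=/postgres,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=0775}'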

(Screenshot: the access point settings in the AWS Console)

Install EFS CSI Driver in your cluster:

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.7"

You can also use Helm for EFS CSI Driver installation.
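
For reference, a typical Helm install of the official chart looks like this (chart values left at their defaults):

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system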



Step 2: Define the PV and PVC

Set your PV's volumeHandle to include both the EFS file system ID and the access point ID (the double colon leaves the optional subpath empty):

volumeHandle: fs-<file-system-id>::fsap-<access-point-id>

Leave storageClassName empty.

pv-dev.yml:

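The original manifest was shown as an image, so here is a minimal sketch of what pv-dev.yml can look like. The name, capacity, and IDs are placeholders, not values from the original post:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-dev
spec:
  capacity:
    storage: 5Gi               # EFS ignores the size, but the field is required
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany            # EFS supports concurrent access from multiple pods
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""         # empty on purpose: static provisioning
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-<file-system-id>::fsap-<access-point-id>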
For the PVC, also leave storageClassName empty.

pvc-dev.yml:

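Again as a sketch, with the names as assumptions; setting volumeName pins the claim directly to the static PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc-dev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # must match the PV's empty storageClassName
  volumeName: postgres-pv-dev  # bind directly to the static PV above
  resources:
    requests:
      storage: 5Gi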

Create the PV and PVC the same way for the test namespace. The test PV also points to the same access point in EFS, which is what lets both namespaces share the data.



Step 3: Configure PostgreSQL Deployment

Make sure the deployment uses fsGroup: 1001 in its securityContext to match the EFS Access Point permissions:

securityContext:
  fsGroup: 1001
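
For context, here is a sketch of how that fits into the Deployment's pod spec. The container name, image tag, mount path, and claim name are assumptions based on Bitnami defaults and the manifests above:

spec:
  template:
    spec:
      securityContext:
        fsGroup: 1001                   # matches the access point's GID
      containers:
        - name: postgres
          image: bitnami/postgresql:16  # assumed tag
          securityContext:
            runAsUser: 1001             # Bitnami images run as UID 1001 by default
          volumeMounts:
            - name: data
              mountPath: /bitnami/postgresql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc-dev # the PVC defined above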



Step 4: Deploy Namespaces

Deploy to dev:

kubectl create namespace dev
kubectl apply -f deployment-files/deployment-dev/ -n dev

Verify that the PV and PVC are bound and that the postgres pod is running:
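
For example (resource names will match whatever your manifests define):

kubectl get pv          # PersistentVolumes are cluster-scoped
kubectl get pvc -n dev  # STATUS should show Bound
kubectl get pods -n dev # the postgres pod should be Running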

Deploy to test:

kubectl create namespace test
kubectl apply -f deployment-files/deployment-test/ -n test

Run the same checks for the test namespace: the PVC should be Bound and the postgres pod Running.
(Screenshot: PVC bound and postgres pod running in the test namespace)



Outcome

You now have a shared EFS volume accessed by PostgreSQL pods running in different namespaces.
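
To confirm that both namespaces really see the same storage, you can write a file from one pod and read it from the other. The pod name placeholders and the Bitnami mount path /bitnami/postgresql are assumptions based on the setup above:

# Write a marker file from the dev pod (replace <dev-pod> with the actual pod name)
kubectl exec -n dev <dev-pod> -- touch /bitnami/postgresql/shared-check
# The same file should be visible from the test pod
kubectl exec -n test <test-pod> -- ls -l /bitnami/postgresql/shared-check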


