SQL Server AlwaysOn Availability Groups on Google Kubernetes Engine
by Sandeep Arora on Nov 9, 2021
This tutorial shows you how to configure SQL Server AlwaysOn availability groups (AG) using DH2i DxEnterprise and Google Kubernetes Engine (GKE). This offering helps you achieve high availability for your SQL Server databases running on containers in Kubernetes.
GCP resources that will be created in this demonstration
- VPC Network named network-1
- privatenet subnet
- publicnet subnet
- Private Service Connection
- Firewall rule to allow port 1433 from bastion host to GKE nodes
- Firewall rule to allow RDP (Remote Desktop Protocol) from your laptop/PC to the bastion host
- Cloud Filestore instance providing shared NFS storage to containers/pods.
- Bastion host with Windows Server 2019 and SQL Server 2017 Express Edition, which comes preinstalled with SSMS.
- 3-node internal Google Kubernetes Engine cluster with 3 Compute Engine nodes
- An optional NAT gateway (not represented in the above diagram) to allow the cluster to download images and updates from the internet. You will also need internet access to activate the DxEnterprise license.
Prerequisites
- A valid DxEnterprise license with availability group management features and tunnels enabled. A fully featured developer edition is available for free for non-production use at https://dh2i.com/dxenterprise-dxodyssey-developer-edition.
- A Google Cloud Project
- Knowledge of how to connect to SQL Server and create database objects
- Basic knowledge of Google Cloud and Kubernetes (preferred)
Code Repository
All the sample code files are hosted in a Git repository. Open Cloud Shell and clone the repository:
git clone https://github.com/sa-proj/proj-containers.git
Building the Cloud infrastructure
We’ll use Cloud Shell and gcloud commands to provision the required infrastructure as described in the infrastructure design. Navigate to the gke-sql-server-always-on-failover-type-external-dh2i folder to start:
cd proj-containers/gke-sql-server-always-on-failover-type-external-dh2i
You can run the gcloud-infrastructure.sh script (make sure to replace the Project ID and <<YOUR PUBLIC IP HERE>> with the relevant details before you continue) to create all resources in one shot as described in the infrastructure design section, or you can follow the instructions in the next section, modify the resource names and properties, and go step by step.
Open Editor to modify the contents of gcloud-infrastructure.sh:
Once you have updated the script with the correct Project ID and your public IP details as shown below, you can execute the script. Don’t forget to specify /32 at the end of your Public IP:
export GCP_PROJECT_ID='my-sample-project-1'
export MY_IP='101.101.101.101/32'
Execute the script and wait while resources are being created. It can take up to 10-15 mins for all resources to be created successfully:
Note – You will need to update the image name and GKE version if the VM image or GKE version referenced in the script has been deprecated. See the step-by-step section for more details:
chmod u+x gcloud-infrastructure.sh
./gcloud-infrastructure.sh
Creating Cloud resources step by step
To start, export the variables that will be used to provision the infrastructure with gcloud commands. These include the network name, subnet names, IP ranges, etc.
Before executing the statements below, make sure you have supplied the correct GCP Project ID. Also make sure you replace <<YOUR PUBLIC IP HERE>> with your laptop’s public IP, because this IP will be used when creating the firewall rule that allows RDP (Remote Desktop Protocol) to the bastion host from your laptop/PC. You can check here to find your public IP address:
export GCP_PROJECT_ID='xxxxxxxxxxxxxxxxx'
export REGION='us-central1'
export ZONE_A='us-central1-a'
export ZONE_B='us-central1-b'
export NETWORK_NAME='network-1'
export PUBLIC_SUBNET='publicnet'
export PUBLIC_SUBNET_RANGE='10.0.1.0/24'
export PRIVATE_SUBNET='privatenet'
export PRIVATE_SUBNET_RANGE='10.0.0.0/24'
export MY_IP='<<YOUR PUBLIC IP HERE>>'
export BASTION_HOST_TAG='bastion-host'
export BASTION_HOST_NAME='bastion-host-1'
export CLOUDSHELL_PUBLIC_IP="$(curl https://ipinfo.io/ip)/32"
export CLUSTERNAME='gke-cluster-1'
export PRIVATE_SERVICE_CONNECT_RANGE_NAME="$NETWORK_NAME-psc-range"
export PRIVATE_SERVICE_CONNECT_ADDRESS="10.0.2.0"
Enable the APIs and services to be used for this demo:
gcloud services enable file.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable servicenetworking.googleapis.com
Create a custom VPC network named network-1:
gcloud compute networks create $NETWORK_NAME \
  --project=$GCP_PROJECT_ID \
  --subnet-mode=custom
Create a private and a public subnet and enable private Google access:
gcloud compute networks subnets create $PRIVATE_SUBNET \
  --project=$GCP_PROJECT_ID \
  --range=$PRIVATE_SUBNET_RANGE \
  --network=$NETWORK_NAME \
  --region=$REGION \
  --enable-private-ip-google-access

gcloud compute networks subnets create $PUBLIC_SUBNET \
  --project=$GCP_PROJECT_ID \
  --range=$PUBLIC_SUBNET_RANGE \
  --network=$NETWORK_NAME \
  --region=$REGION \
  --enable-private-ip-google-access
Create firewall rules to allow RDP (Remote Desktop) access from your laptop to the bastion host and SQL Server access to the pods running SQL Server 2019 hosted in Google Kubernetes Engine:
gcloud compute --project=$GCP_PROJECT_ID firewall-rules create allow-rdp-access \
  --direction=INGRESS \
  --priority=1000 \
  --network=$NETWORK_NAME \
  --action=ALLOW \
  --rules=tcp:3389 \
  --source-ranges=$MY_IP \
  --target-tags=$BASTION_HOST_TAG

gcloud compute --project=$GCP_PROJECT_ID firewall-rules create allow-mssql-access \
  --direction=INGRESS \
  --priority=1000 \
  --network=$NETWORK_NAME \
  --action=ALLOW \
  --rules=tcp:1433 \
  --source-ranges=$PUBLIC_SUBNET_RANGE \
  --target-tags=$BASTION_HOST_TAG
Create a private service connection for your VPC network named network-1:
gcloud compute addresses create $PRIVATE_SERVICE_CONNECT_RANGE_NAME \
  --global \
  --purpose=VPC_PEERING \
  --addresses=$PRIVATE_SERVICE_CONNECT_ADDRESS \
  --prefix-length=24 \
  --description="PRIVATE SERVICE CONNECTION" \
  --network=$NETWORK_NAME

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=$PRIVATE_SERVICE_CONNECT_RANGE_NAME \
  --network=$NETWORK_NAME \
  --project=$GCP_PROJECT_ID
Create a Filestore instance using the private service connection, with connect mode set to PRIVATE_SERVICE_ACCESS. Any backup files or files that need to be shared across servers can be dumped into this NFS share:
gcloud beta filestore instances create nfs-server-1 \
  --zone=$ZONE_B \
  --tier=BASIC_HDD \
  --file-share=name="common",capacity=1TB \
  --network=name=$NETWORK_NAME,reserved-ip-range=$PRIVATE_SERVICE_CONNECT_RANGE_NAME,connect-mode=PRIVATE_SERVICE_ACCESS
Create a bastion host. I’m using a Windows Server 2019 image with SQL Server Express Edition installed because SQL Server Management Studio (SSMS) is already pre-installed and pre-configured. You can use any Windows Server image and install SSMS on it separately.
Note – Make sure to update the image to the latest version before executing. You can find the latest image name under Compute Engine > Images by filtering on sql-2017-express.
gcloud beta compute --project=$GCP_PROJECT_ID instances create $BASTION_HOST_NAME \
  --zone=$ZONE_A \
  --machine-type=n2d-standard-2 \
  --subnet=$PUBLIC_SUBNET \
  --network-tier=PREMIUM \
  --maintenance-policy=MIGRATE \
  --tags=$BASTION_HOST_TAG \
  --image=sql-2017-express-windows-2019-dc \
  --image-project=windows-sql-cloud \
  --boot-disk-size=50GB \
  --boot-disk-type=pd-balanced \
  --boot-disk-device-name=$BASTION_HOST_NAME
Create a NAT router so the private Kubernetes cluster has internet access without being exposed to the internet. This will be used to download updates and images and to activate the external HADR provider.
gcloud compute routers create nat-router \
  --network $NETWORK_NAME \
  --region $REGION

gcloud compute routers nats create nat-config \
  --router-region $REGION \
  --router nat-router \
  --nat-all-subnet-ip-ranges \
  --auto-allocate-nat-external-ips
Create a private Kubernetes cluster. You will need to update the cluster-version to an available version in case the one mentioned in the script is outdated. You can check current GKE versions here.
gcloud beta container --project $GCP_PROJECT_ID clusters create $CLUSTERNAME \
  --zone $ZONE_B \
  --no-enable-basic-auth \
  --cluster-version "1.20.9-gke.701" \
  --release-channel "regular" \
  --machine-type "e2-medium" \
  --image-type "COS_CONTAINERD" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --metadata disable-legacy-endpoints=true \
  --max-pods-per-node "110" \
  --num-nodes "3" \
  --enable-private-nodes \
  --master-ipv4-cidr "10.0.3.0/28" \
  --enable-master-global-access \
  --enable-ip-alias \
  --network "projects/$GCP_PROJECT_ID/global/networks/$NETWORK_NAME" \
  --subnetwork "projects/$GCP_PROJECT_ID/regions/$REGION/subnetworks/$PRIVATE_SUBNET" \
  --no-enable-intra-node-visibility \
  --default-max-pods-per-node "110" \
  --enable-master-authorized-networks \
  --master-authorized-networks $CLOUDSHELL_PUBLIC_IP \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --enable-autoupgrade \
  --enable-autorepair \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade 0 \
  --enable-shielded-nodes \
  --node-locations $ZONE_B
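If you prefer to check versions from Cloud Shell instead of the console, the following sketch lists what GKE currently offers in the target zone; look at the validMasterVersions and channels sections of the output and update --cluster-version accordingly:

# List the GKE versions currently available in the target zone.
gcloud container get-server-config --zone $ZONE_B --project $GCP_PROJECT_ID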
Obtain credentials to set the context for your Kubernetes cluster and authenticate to the cluster from Cloud Shell.
gcloud container clusters get-credentials $CLUSTERNAME \
  --zone $ZONE_B --project $GCP_PROJECT_ID
Optional: Prepare the SQL Server & DH2i custom container image
This step is only required if you wish to use your own Docker image. An image is already pushed to Docker Hub and available at this repository: sandeeparora/sql2k19-hadr-dh2i.
Use the below Dockerfile to build the image:
FROM mcr.microsoft.com/mssql/server:2019-latest
USER root

#Install dotnet
RUN apt-get update \
    && apt-get install -y apt-utils \
    && ACCEPT_EULA=Y apt-get upgrade -y \
    && apt-get install -y wget \
    && wget --no-dns-cache https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y dotnet-runtime-3.1 zip \
    && dpkg --purge packages-microsoft-prod \
    && apt-get purge -y wget \
    && apt-get clean \
    && rm packages-microsoft-prod.deb \
    && rm -rf /var/lib/apt/lists/*

#Download and unpack DxE, setup permissions
ADD https://repos.dh2i.com/container/ ./dxe.tgz
RUN tar zxvf dxe.tgz && rm dxe.tgz \
    && chown -R mssql /var/opt/mssql \
    && chmod -R 777 /opt/dh2i /etc/dh2i

#Finish setup
EXPOSE 7979 7985
ENV DX_HAS_MSSQLSERVER=1
USER mssql
ENTRYPOINT ["/opt/dh2i/sbin/dxstart.sh"]
On Cloud Shell, run the following commands to build the image and push it to the Google Container Registry. Make sure to set your Project ID before executing the commands below:
export GCP_PROJECT_ID='xxxxxxxxxxxxxx'
cd ~/proj-containers/sql2k19-hadr-dh2i-image/
docker build -t gcr.io/$GCP_PROJECT_ID/sql2k19-hadr-dh2i .
docker push gcr.io/$GCP_PROJECT_ID/sql2k19-hadr-dh2i
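As an optional sanity check, you can confirm the image landed in Container Registry before referencing it from the StatefulSet manifests. A minimal sketch using standard gcloud commands:

# List repositories in the project registry and the tags pushed for this image.
gcloud container images list --repository=gcr.io/$GCP_PROJECT_ID
gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/sql2k19-hadr-dh2i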
Switch back to the original work directory:
cd ~/proj-containers/gke-sql-server-always-on-failover-type-external-dh2i
Create Google Kubernetes Engine (GKE) resources for SQL Server 2019
You’ll need to create a password for the SA account when running the SQL Server container. It should be strong; otherwise, the deployment will fail.
kubectl create secret generic sql-server-secret \
  --from-literal=MSSQL_SA_PASSWORD="P@ssw0rd"
The nodes need to be labelled as primary and secondaries for pod placement on the cluster. Add labels to the nodes; these labels will later be used by each StatefulSet through the nodeSelector field in the PodSpec.
Check names for all nodes participating in the cluster:
kubectl get nodes
Output:
Copy the node names returned by the above command and use them in place of node-1, node-2 and node-3 in the code snippet below. Execute the commands to assign primary and secondary roles to each node. The node with role ags-primary will host the primary AlwaysOn replica, while the nodes with role ags-secondary-N will host the secondary replicas:
kubectl label node node-1 role=ags-primary
kubectl label node node-2 role=ags-secondary-1
kubectl label node node-3 role=ags-secondary-2
In my case, the commands I ran in Cloud Shell are below:
kubectl label node gke-gke-cluster-1-default-pool-bf1f5db8-6tdx role=ags-primary
kubectl label node gke-gke-cluster-1-default-pool-bf1f5db8-9b19 role=ags-secondary-1
kubectl label node gke-gke-cluster-1-default-pool-bf1f5db8-xslh role=ags-secondary-2
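As an optional check, you can confirm the role label landed on each node before creating the StatefulSets (the -L flag adds the label value as an output column):

# Show the role label assigned to each node.
kubectl get nodes -L role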
NFS Share for SQL backups for all replicas
Create a persistent volume and persistent volume claim for Filestore. This will be mounted to all pods running SQL Server 2019 and used for backups across all SQL instances.
Modify the common.yaml file in the Cloud Shell Editor with the IP address of Filestore. To check the IP address allocated to Filestore, navigate to Navigation Menu > STORAGE > Filestore:
Copy the IP Address from the console output and update the common.yaml file:
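If you prefer to stay in Cloud Shell, the Filestore IP can also be retrieved with gcloud. This is a minimal sketch; it assumes the nfs-server-1 instance created earlier with a single network attachment, and depending on your gcloud release the command may need the beta component:

# Print the IP address assigned to the Filestore instance.
gcloud filestore instances describe nfs-server-1 \
  --zone=$ZONE_B \
  --format="value(networks[0].ipAddresses[0])"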
Once the file is updated, run the kubectl apply command to create PV and PVC to allocate storage from NFS. You can modify the allocation if you like; the current YAML config only has 20 GB allocated:
kubectl apply -f common.yaml
Below is the YAML config file for your reference:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-common-volume
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /common
    server: 10.0.2.2 #Change the IP to Filestore IP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-common-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: mssql-common-volume
  resources:
    requests:
      storage: 20Gi
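Before moving on, it’s worth confirming that the persistent volume and claim are bound:

# Both objects should show STATUS Bound.
kubectl get pv,pvc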
Now we will create StatefulSets for all SQL replicas.
StatefulSet: SQL-1
We’ll create 4 volume claims for:
- DXE volume (External HA)
- Data volume (MDF files)
- Log volume (LDF files)
- Base volume
Create the StatefulSet for SQL-1 and a service to expose SQL-1 using an internal load balancer.
Note: If you’re using your own container image from container registry, make sure to update the container image in the config file from sandeeparora/sql2k19-hadr-dh2i to gcr.io/<PROJECT_ID>/sql2k19-hadr-dh2i:
kubectl apply -f sql-1.yaml
Below is the YAML config file for your reference:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql1-dxe-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql1-base-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql1-data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql1-log-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sql-1
spec:
  serviceName: "sql-1"
  replicas: 1
  selector:
    matchLabels:
      app: sql-1
  template:
    metadata:
      labels:
        app: sql-1
        hadr: turn-on
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      terminationGracePeriodSeconds: 10
      nodeSelector:
        role: ags-primary
      containers:
        - name: mssql
          image: sandeeparora/sql2k19-hadr-dh2i
          ports:
            - containerPort: 1433
            - containerPort: 7979
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sql-server-secret
                  key: MSSQL_SA_PASSWORD
            - name: MSSQL_AGENT_ENABLED
              value: "true"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: MSSQL_DATA_DIR
              value: /var/opt/mssql/data
            - name: MSSQL_LOG_DIR
              value: /var/opt/mssql/log
            - name: MSSQL_BACKUP_DIR
              value: /var/opt/mssql/backup
          volumeMounts:
            - name: mssql1-base-volume
              mountPath: /var/opt/mssql
            - name: mssql1-log-volume
              mountPath: /var/opt/mssql/log
            - name: mssql1-data-volume
              mountPath: /var/opt/mssql/data
            - name: mssql-common-volume
              mountPath: /var/opt/mssql/backup
            - name: mssql1-dxe-volume
              mountPath: /etc/dh2i
      volumes:
        - name: mssql1-base-volume
          persistentVolumeClaim:
            claimName: mssql1-base-volume
        - name: mssql1-data-volume
          persistentVolumeClaim:
            claimName: mssql1-data-volume
        - name: mssql1-log-volume
          persistentVolumeClaim:
            claimName: mssql1-log-volume
        - name: mssql-common-volume
          persistentVolumeClaim:
            claimName: mssql-common-volume-claim
        - name: mssql1-dxe-volume
          persistentVolumeClaim:
            claimName: mssql1-dxe-volume
---
apiVersion: v1
kind: Service
metadata:
  name: sql-1-external-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    app: sql-1
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
    - name: dxe
      protocol: TCP
      port: 7979
      targetPort: 7979
  type: LoadBalancer
StatefulSet: SQL-2
Create 4 volume claims for:
- DXE volume (External HA)
- Data volume (MDF files)
- Log volume (LDF files)
- Base volume
Create the StatefulSet for SQL-2 and a service to expose SQL-2 using an internal load balancer.
Note: If you’re using your own container image from the container registry, make sure to update the container image in the config file from sandeeparora/sql2k19-hadr-dh2i to gcr.io/<PROJECT_ID>/sql2k19-hadr-dh2i:
kubectl apply -f sql-2.yaml
Below is the YAML config file for your reference:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql2-dxe-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql2-base-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql2-data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql2-log-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sql-2
spec:
  serviceName: "sql-2"
  replicas: 1
  selector:
    matchLabels:
      app: sql-2
  template:
    metadata:
      labels:
        app: sql-2
        hadr: turn-on
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      terminationGracePeriodSeconds: 10
      nodeSelector:
        role: ags-secondary-1
      containers:
        - name: mssql
          image: sandeeparora/sql2k19-hadr-dh2i
          ports:
            - containerPort: 1433
            - containerPort: 7979
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sql-server-secret
                  key: MSSQL_SA_PASSWORD
            - name: MSSQL_AGENT_ENABLED
              value: "true"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: MSSQL_DATA_DIR
              value: /var/opt/mssql/data
            - name: MSSQL_LOG_DIR
              value: /var/opt/mssql/log
            - name: MSSQL_BACKUP_DIR
              value: /var/opt/mssql/backup
          volumeMounts:
            - name: mssql2-base-volume
              mountPath: /var/opt/mssql
            - name: mssql2-log-volume
              mountPath: /var/opt/mssql/log
            - name: mssql2-data-volume
              mountPath: /var/opt/mssql/data
            - name: mssql-common-volume
              mountPath: /var/opt/mssql/backup
            - name: mssql2-dxe-volume
              mountPath: /etc/dh2i
      volumes:
        - name: mssql2-base-volume
          persistentVolumeClaim:
            claimName: mssql2-base-volume
        - name: mssql2-data-volume
          persistentVolumeClaim:
            claimName: mssql2-data-volume
        - name: mssql2-log-volume
          persistentVolumeClaim:
            claimName: mssql2-log-volume
        - name: mssql-common-volume
          persistentVolumeClaim:
            claimName: mssql-common-volume-claim
        - name: mssql2-dxe-volume
          persistentVolumeClaim:
            claimName: mssql2-dxe-volume
---
apiVersion: v1
kind: Service
metadata:
  name: sql-2-external-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    app: sql-2
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
    - name: dxe
      protocol: TCP
      port: 7979
      targetPort: 7979
  type: LoadBalancer
StatefulSet: SQL-3
Create 4 volume claims for:
- DXE volume (External HA)
- Data volume (MDF files)
- Log volume (LDF files)
- Base volume
Create the StatefulSet for SQL-3 and a service to expose SQL-3 using an internal load balancer.
Note: If you’re using your own container image from container registry, make sure to update the container image in the config file from sandeeparora/sql2k19-hadr-dh2i to gcr.io/<PROJECT_ID>/sql2k19-hadr-dh2i:
kubectl apply -f sql-3.yaml
Below is the YAML config file for your reference:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql3-dxe-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql3-base-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql3-data-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql3-log-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sql-3
spec:
  serviceName: "sql-3"
  replicas: 1
  selector:
    matchLabels:
      app: sql-3
  template:
    metadata:
      labels:
        app: sql-3
        hadr: turn-on
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      terminationGracePeriodSeconds: 10
      nodeSelector:
        role: ags-secondary-2
      containers:
        - name: mssql
          image: sandeeparora/sql2k19-hadr-dh2i
          ports:
            - containerPort: 1433
            - containerPort: 7979
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sql-server-secret
                  key: MSSQL_SA_PASSWORD
            - name: MSSQL_AGENT_ENABLED
              value: "true"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: MSSQL_DATA_DIR
              value: /var/opt/mssql/data
            - name: MSSQL_LOG_DIR
              value: /var/opt/mssql/log
            - name: MSSQL_BACKUP_DIR
              value: /var/opt/mssql/backup
          volumeMounts:
            - name: mssql3-base-volume
              mountPath: /var/opt/mssql
            - name: mssql3-log-volume
              mountPath: /var/opt/mssql/log
            - name: mssql3-data-volume
              mountPath: /var/opt/mssql/data
            - name: mssql-common-volume
              mountPath: /var/opt/mssql/backup
            - name: mssql3-dxe-volume
              mountPath: /etc/dh2i
      volumes:
        - name: mssql3-base-volume
          persistentVolumeClaim:
            claimName: mssql3-base-volume
        - name: mssql3-data-volume
          persistentVolumeClaim:
            claimName: mssql3-data-volume
        - name: mssql3-log-volume
          persistentVolumeClaim:
            claimName: mssql3-log-volume
        - name: mssql-common-volume
          persistentVolumeClaim:
            claimName: mssql-common-volume-claim
        - name: mssql3-dxe-volume
          persistentVolumeClaim:
            claimName: mssql3-dxe-volume
---
apiVersion: v1
kind: Service
metadata:
  name: sql-3-external-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    app: sql-3
  ports:
    - name: mssql
      protocol: TCP
      port: 1433
      targetPort: 1433
    - name: dxe
      protocol: TCP
      port: 7979
      targetPort: 7979
  type: LoadBalancer
Connect to SQL Server pods on GKE
1. Log into the bastion host over RDP (Remote Desktop) using its external IP. You can generate the password from the Set Windows password button in the console:
2. Get the IP addresses of all services exposing SQL Server. Execute the below command in Cloud Shell:
kubectl get all -o wide
3. Use SSMS to connect with the IP address as the server name. Use SQL authentication with ‘sa’ as the username and ‘P@ssw0rd’ as the password (specified while creating the secret). Verify that all instances are available, that their SQL Server Agent is running, and that HADR is enabled by expanding the AlwaysOn Availability Groups folder.
4. Create a sample database on the sql-1 instance using SSMS (a command-line alternative is sketched after this list).
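If you prefer to create the sample database from Cloud Shell instead of SSMS, the following sketch runs sqlcmd inside the sql-1-0 pod. It assumes the sqlcmd tools shipped in the mssql/server base image are present at /opt/mssql-tools/bin and uses the sa password created earlier; a full recovery model and an initial full backup are set up because a database generally needs both before it can join an availability group:

# Create the sample database, switch it to FULL recovery and take a first backup.
kubectl exec sql-1-0 -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'P@ssw0rd' \
  -Q "CREATE DATABASE sample; ALTER DATABASE sample SET RECOVERY FULL; BACKUP DATABASE sample TO DISK = '/var/opt/mssql/backup/sample.bak'"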
Configure the DxEnterprise cluster on the deployed SQL Server containers
Follow the official documentation or the steps below to create an availability group.
You’ll need an activation code to proceed. I’m using the developer edition for demo purposes. A fully featured developer edition is available for free for non-production use and can be downloaded here. Once you have subscribed, you can log in and request a developer license, and it will be emailed to your registered ID.
Below is the email I received almost immediately after successfully requesting the license:
Once you have the license you can proceed.
On Primary Pod sql-1-0
1. Activate the DxEnterprise license using the command:
kubectl exec sql-1-0 -- dxcli activate-server XXXX-XXXX-XXXX-XXXX
2. Add a Vhost to the cluster:
kubectl exec sql-1-0 -- dxcli cluster-add-vhost vhost1 *127.0.0.1 sql-1-0
3. Encrypt the SQL Server sysadmin password. The encrypted password will be used to create the availability group in the next step:
kubectl exec sql-1-0 -- dxcli encrypt-text P@ssw0rd
Output of above command: OAfe63v3+APrmqyDdnSnhQ==
4. Add an availability group to the Vhost. The SQL Server sysadmin password must be encrypted. Use the password generated in the last step. ags1 is the availability group name in the below command:
kubectl exec sql-1-0 -- dxcli add-ags vhost1 ags1 "sql-1-0|mssqlserver|sa|OAfe63v3+APrmqyDdnSnhQ==|5022|synchronous_commit|40001"
5. Set a One-Time PassKey (OTPK). The output from this command will be used to join the other nodes to the DxEnterprise cluster:
kubectl exec sql-1-0 -- dxcli set-otpk
Output of above command:
OTPK: ****************************************
Good Until: 2021-09-18T10:47:13
On Secondary 1 Pod sql-2-0
1. Activate the DxEnterprise license using the command:
kubectl exec sql-2-0 -- dxcli activate-server XXXX-XXXX-XXXX-XXXX
2. Join the second node to the DxEnterprise cluster. Use the default NAT proxy of match.dh2i.com. Use the OTPK generated in the previous steps instead of the asterisks:
kubectl exec sql-2-0 -- dxcli join-cluster-ex match.dh2i.com **************************************** true
3. Add the second node to the existing availability group. The SQL Server sysadmin password must be encrypted. Use the password that you encrypted in previous steps:
kubectl exec sql-2-0 -- dxcli add-ags-node vhost1 ags1 "sql-2-0|mssqlserver|sa|OAfe63v3+APrmqyDdnSnhQ==|5022|synchronous_commit|40002"
On Secondary 2 Pod sql-3-0
1. Activate the DxEnterprise license using the command:
kubectl exec sql-3-0 -- dxcli activate-server XXXX-XXXX-XXXX-XXXX
2. Join the third node to the DxEnterprise cluster. Use the default NAT proxy of match.dh2i.com. Use the OTPK generated in the previous steps instead of the asterisks:
kubectl exec sql-3-0 -- dxcli join-cluster-ex match.dh2i.com **************************************** true
3. Add the third node to the existing availability group. The SQL Server sysadmin password must be encrypted. Use the password that you encrypted in previous steps:
kubectl exec sql-3-0 -- dxcli add-ags-node vhost1 ags1 "sql-3-0|mssqlserver|sa|OAfe63v3+APrmqyDdnSnhQ==|5022|synchronous_commit|40003"
Back on Primary Pod sql-1-0
1. Add databases to the availability group. The database used here, already created on sql-1-0, is named sample:
kubectl exec sql-1-0 -- dxcli add-ags-databases vhost1 ags1 sample
2. Add a listener to the availability group:
kubectl exec sql-1-0 -- dxcli add-ags-listener vhost1 ags1 44444
Configure AlwaysOn availability group listener
Create a service of type internal load balancer to expose the listener IP and port on the network.
kubectl apply -f listener.yaml
Below is the YAML config file for your reference:
apiVersion: v1
kind: Service
metadata:
  name: agslistener-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  selector:
    hadr: turn-on
  ports:
    - protocol: TCP
      port: 44444
      targetPort: 44444
  type: LoadBalancer
Test the AG Listener
- Run kubectl get all and grab the IP address of the listener service (a sketch for extracting just that IP follows this list): kubectl get all
- RDP (Remote Desktop) back into the bastion host and connect to the listenerIP,44444 using ‘sa’ and ‘P@ssw0rd’.
- You will be connected to the primary/active node. The cluster type you will see will be EXTERNAL.
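Two optional checks from Cloud Shell, assuming the service and pod names used earlier in this post: the first prints just the listener service IP, and the second queries the HADR DMVs on sql-1-0 (again via the sqlcmd tools bundled in the image) to confirm which role the local replica holds and that synchronization is healthy:

# Print only the internal load balancer IP assigned to the listener service.
kubectl get service agslistener-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Show the local replica's role and synchronization health for the AG.
kubectl exec sql-1-0 -- /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'P@ssw0rd' \
  -Q "SELECT ag.name, rs.role_desc, rs.synchronization_health_desc FROM sys.dm_hadr_availability_replica_states rs JOIN sys.availability_groups ag ON ag.group_id = rs.group_id WHERE rs.is_local = 1"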
Congratulations! You have successfully set up AlwaysOn Availability Groups on SQL Server 2019 on Google Kubernetes Engine.
If you have any questions, drop them in the comments. And don’t forget to sign up for more updates here.