Friday, December 15, 2023

Pull Docker images from Oracle Container Registry - Weblogic k8s Operator

 HOWTO



Prepare domain - create Oracle container registry secret

  1. Accept the license agreement for WebLogic Server images.

    a. In a browser, go to the Oracle Container Registry (OCR) and log in using the Oracle Single Sign-On (SSO) authentication service. If you do not already have SSO credentials, click Sign In at the top right of the page to create them.

    b. Search for weblogic, then select weblogic in the Search Results.

    c. From the drop-down menu, select your language and click Continue.

    d. Then read and accept the license agreement.

  2. Create a docker-registry secret to enable pulling the example WebLogic Server image from the registry.

    $ kubectl create secret docker-registry weblogic-repo-credentials \
         --docker-server=container-registry.oracle.com \
         --docker-username=YOUR_REGISTRY_USERNAME \
         --docker-password=YOUR_REGISTRY_PASSWORD \
         --docker-email=YOUR_REGISTRY_EMAIL \
         -n sample-domain1-ns
    

    Replace YOUR_REGISTRY_USERNAME, YOUR_REGISTRY_PASSWORD, and YOUR_REGISTRY_EMAIL with the values you use to access the registry. For YOUR_REGISTRY_PASSWORD, use an Oracle Container Registry auth token, not your account password.
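
    Before creating the secret, you can confirm the credentials work with a plain docker login, and afterwards check that the secret landed in the right namespace (a quick verification sketch using only the names from the command above):

    $ docker login container-registry.oracle.com
    $ kubectl get secret weblogic-repo-credentials -n sample-domain1-ns
    $ kubectl describe secret weblogic-repo-credentials -n sample-domain1-ns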



Product Home                                         | JDK and Oracle Linux version | Dated Tag              | Non Dated Tag (latest)
Oracle WebLogic Server 14.1.1.0 Generic Installation | 11.0.21 + Oracle Linux 7u9   | 14.1.1.0-11-230718     | 14.1.1.0-11
Oracle WebLogic Server 14.1.1.0 Generic Installation | 11.0.21 + Oracle Linux 8u4   | 14.1.1.0-11-ol8-230718 | 14.1.1.0-11-ol8

Pull Docker Weblogic 14.1.1 image from OCR
$ docker pull container-registry.oracle.com/middleware/weblogic:14.1.1.0
14.1.1.0: Pulling from middleware/weblogic
cd17e56c322c: Pull complete 
159378624825: Pull complete 
c04549775f16: Pull complete 
3843b8b6117a: Pull complete 
8c356b9f7aaa: Pull complete 
800aaf7a8639: Pull complete 
Digest: sha256:1b5c18b921fdc8367c07b5feb38189da06e7cb6d519ca125c7c50794e0ec2b29
Status: Downloaded newer image for container-registry.oracle.com/middleware/weblogic:14.1.1.0
container-registry.oracle.com/middleware/weblogic:14.1.1.0

Downloaded image
$ docker images
REPOSITORY                                                TAG                   IMAGE ID       CREATED         SIZE
container-registry.oracle.com/middleware/weblogic         14.1.1.0              17f98eb6bba3   3 years ago     1.33GB
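
Optionally inspect the pulled image before using it; docker image inspect prints the image ID and environment without starting a container (nothing WebLogic-specific is assumed here):

$ docker image inspect container-registry.oracle.com/middleware/weblogic:14.1.1.0 \
    --format '{{.Id}} {{.Config.Env}}'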

Thursday, December 14, 2023

Alternatively create a Persistent Volume domain - setup Weblogic Kubernetes (k8s) Operator in GCP

 HOWTO


GitHub


This is an alternative to Model in Image, which is the preferred solution.

Domain on PV 

  • Set the domain resource domain.spec.domainHomeSourceType attribute to PersistentVolume.
  • Supply a WebLogic installation in an image and supply a WebLogic configuration as a domain home in a persistent volume.
  • Optionally supply the domain resource domain.spec.configuration.initialDomainOnPV section to provide information for the Operator to create the initial domain home.
  • Supply WebLogic applications in the persistent volume.
  • Update the WebLogic configuration using WLST, or the WebLogic Server Administration Console.
  • Optionally use configuration overrides supplied in a Kubernetes ConfigMap. Use this only if WLST, or the WebLogic Server Administration Console does not fit your deployment strategy.
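
For comparison with the Model in Image resource shown in the next post, a minimal Domain on PV resource might look roughly like the sketch below. This is a hedged sketch, not the sample from the operator repo: the domainHome path and the PVC name are assumptions (the PVC name matches the one commented out in the Model in Image YAML later in this post).

apiVersion: "weblogic.oracle/v9"
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
spec:
  # Tell the operator the domain home lives on a persistent volume
  domainHomeSourceType: PersistentVolume
  domainHome: /shared/domains/sample-domain1   # path inside the mounted PV (assumed)
  image: "container-registry.oracle.com/middleware/weblogic:14.1.1.0"
  webLogicCredentialsSecret:
    name: sample-domain1-weblogic-credentials
  # Optionally let the operator create the initial domain home:
  #configuration:
  #  initialDomainOnPV:
  #    ...
  serverPod:
    volumes:
    - name: weblogic-domain-storage-volume
      persistentVolumeClaim:
        claimName: sample-domain1-weblogic-sample-pvc   # PVC name is an assumption
    volumeMounts:
    - mountPath: /shared
      name: weblogic-domain-storage-volume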

Create domain Model in image - setup Weblogic Kubernetes (k8s) Operator in GCP

 HOWTO


See also

GitHub

Model in Image:

  • Set the domain resource domain.spec.domainHomeSourceType attribute to FromModel.
  • Supply a WebLogic installation in an image and supply a WebLogic configuration in one of three ways:
    • As WDT model YAML file supplied in separate auxiliary images.
    • As WebLogic Deployment Tool (WDT) model YAML file layered on the WebLogic installation image. NOTE: Model in Image without auxiliary images (the WDT model and installation files are included in the same image with the WebLogic Server installation) is deprecated in WebLogic Kubernetes Operator version 4.0.7. Oracle recommends that you use Model in Image with auxiliary images. See Auxiliary images.
    • As WDT model YAML file in a Kubernetes ConfigMap.
  • Supply WebLogic applications in one of two ways:
    • In auxiliary images.
    • Layered on the installation image. NOTE: Model in Image without auxiliary images (the WDT model and installation files are included in the same image with the WebLogic Server installation) is deprecated in WebLogic Kubernetes Operator version 4.0.7. Oracle recommends that you use Model in Image with Auxiliary images. See Auxiliary images.
  • Mutate the WebLogic configuration by supplying a new image and rolling, or model updates supplied in a Kubernetes ConfigMap.
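
For reference, the WDT model supplied in the auxiliary image, layered image, or ConfigMap is itself a small YAML file. Below is a hedged sketch of what such a model can look like, following the format used by the operator's Model in Image samples; the dynamic-cluster sizing and listen ports are illustrative assumptions:

domainInfo:
  # The operator resolves this macro to the domain's webLogicCredentialsSecret
  AdminUserName: '@@SECRET:__weblogic-credentials__:username@@'
  AdminPassword: '@@SECRET:__weblogic-credentials__:password@@'

topology:
  Name: sample-domain1
  AdminServerName: admin-server
  Cluster:
    cluster-1:
      DynamicServers:
        ServerTemplate: cluster-1-template
        ServerNamePrefix: managed-server
        DynamicClusterSize: 2
        MaxDynamicClusterSize: 2
        CalculatedListenPorts: false
  Server:
    admin-server:
      ListenPort: 7001
  ServerTemplate:
    cluster-1-template:
      Cluster: cluster-1
      ListenPort: 8001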


Create and label a namespace that can host one or more domains.

$ kubectl create namespace sample-domain1-ns
namespace/sample-domain1-ns created
dave@dave:~$ kubectl label ns sample-domain1-ns weblogic-operator=enabled
namespace/sample-domain1-ns labeled
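
Assuming the operator was installed with the label-selector strategy used in the quick start, the weblogic-operator=enabled label is what its namespace selector matches on, so it is worth confirming it is present:

$ kubectl get ns sample-domain1-ns --show-labels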


Configure Traefik to manage ingresses created in this namespace
 helm upgrade traefik-operator traefik/traefik \
    --namespace traefik \
    --reuse-values \
    --set "kubernetes.namespaces={traefik,sample-domain1-ns}"
W1214 11:03:23.618551   10551 warnings.go:70] autopilot-default-resources-mutator:Autopilot updated Deployment traefik/traefik-operator: defaulted unspecified resources for containers [traefik-operator] (see http://g.co/gke/autopilot-defaults)
Release "traefik-operator" has been upgraded. Happy Helming!
NAME: traefik-operator
LAST DEPLOYED: Thu Dec 14 11:03:20 2023
NAMESPACE: traefik
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Traefik Proxy v2.10.6 has been deployed successfully on traefik namespace !
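
To confirm the upgrade took effect, you can check the release values and the Traefik pod (same release and namespace names as above):

$ helm get values traefik-operator -n traefik
$ kubectl get pods -n traefik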

Domain type - Model in image



Create the domain using a domain resource. Select a user name and password for the WebLogic domain administrator credentials and use them to create a Kubernetes Secret for the domain.
$ kubectl create secret generic sample-domain1-weblogic-credentials \
  --from-literal=username=SOME_USER --from-literal=password=SOME_PASSWORD \
  -n sample-domain1-ns
secret/sample-domain1-weblogic-credentials created


Create a domain runtime encryption secret.
$ kubectl -n sample-domain1-ns create secret generic \
  sample-domain1-runtime-encryption-secret \
   --from-literal=password=SOME_PASSWORD
secret/sample-domain1-runtime-encryption-secret created

Create the sample-domain1 domain resource and an associated sample-domain1-cluster-1 cluster resource using a single YAML resource file that defines both resources. The domain resource and cluster resource do not replace the traditional WebLogic configuration files; instead, they cooperate with those files to describe the Kubernetes artifacts of the corresponding domain.


If you want to view or need to modify it, you can download the sample domain resource to a file called /tmp/quickstart/domain-resource.yaml or similar. Then apply the file using kubectl apply -f /tmp/quickstart/domain-resource.yaml.
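
For example, assuming you use the same quick-start URL that is applied directly later in this post, the download-and-apply flow looks like this:

$ mkdir -p /tmp/quickstart
$ curl -fL -o /tmp/quickstart/domain-resource.yaml \
    https://raw.githubusercontent.com/oracle/weblogic-kubernetes-operator/release/4.1/kubernetes/samples/quick-start/domain-resource.yaml
$ kubectl apply -f /tmp/quickstart/domain-resource.yaml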

# Copyright (c) 2022, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.

apiVersion: "weblogic.oracle/v9"
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
  labels:
    weblogic.domainUID: sample-domain1

spec:
  configuration:

    model:
      # Optional auxiliary image(s) containing WDT model, archives, and install.
      # Files are copied from `sourceModelHome` in the aux image to the `/aux/models` directory
      # in running WebLogic Server pods, and files are copied from `sourceWDTInstallHome`
      # to the `/aux/weblogic-deploy` directory. Set `sourceModelHome` and/or `sourceWDTInstallHome`
      # to "None" if you want to skip such copies.
      #   `image`                - Image location
      #   `imagePullPolicy`      - Pull policy, default `IfNotPresent`
      #   `sourceModelHome`      - Model file directory in image, default `/auxiliary/models`.
      #   `sourceWDTInstallHome` - WDT install directory in image, default `/auxiliary/weblogic-deploy`.
      auxiliaryImages:
      - image: "phx.ocir.io/weblogick8s/quick-start-aux-image:v1"
        #imagePullPolicy: IfNotPresent
        #sourceWDTInstallHome: /auxiliary/weblogic-deploy
        #sourceModelHome: /auxiliary/models

      # Optional configmap for additional models and variable files
      #configMap: sample-domain1-wdt-config-map

      # All 'FromModel' domains require a runtimeEncryptionSecret with a 'password' field
      runtimeEncryptionSecret: sample-domain1-runtime-encryption-secret

  # Set to 'FromModel' to indicate 'Model in Image'.
  domainHomeSourceType: FromModel

  # The WebLogic Domain Home, this must be a location within
  # the image for 'Model in Image' domains.
  domainHome: /u01/domains/sample-domain1

  # The WebLogic Server image that the Operator uses to start the domain
  # **NOTE**:
  # This example uses General Availability (GA) images. GA images are suitable for demonstration and
  # development purposes only where the environments are not available from the public Internet;
  # they are not acceptable for production use. In production, you should always use CPU (patched)
  # images from OCR or create your images using the WebLogic Image Tool.
  # Please refer to the `OCR` and `WebLogic Images` pages in the WebLogic Kubernetes Operator
  # documentation for details.
  image: "container-registry.oracle.com/middleware/weblogic:12.2.1.4"

  # Defaults to "Always" if image tag (version) is ':latest'
  imagePullPolicy: "IfNotPresent"

  # Identify which Secret contains the credentials for pulling an image
  imagePullSecrets:
  - name: weblogic-repo-credentials

  # Identify which Secret contains the WebLogic Admin credentials,
  # the secret must contain 'username' and 'password' fields.
  webLogicCredentialsSecret:
    name: sample-domain1-weblogic-credentials

  # Whether to include the WebLogic Server stdout in the pod's stdout, default is true
  includeServerOutInPodLog: true

  # Whether to enable overriding your log file location, see also 'logHome'
  #logHomeEnabled: false

  # The location for domain log, server logs, server out, introspector out, and Node Manager log files
  # see also 'logHomeEnabled', 'volumes', and 'volumeMounts'.
  #logHome: /shared/logs/sample-domain1

  # Set which WebLogic Servers the Operator will start
  # - "Never" will not start any server in the domain
  # - "AdminOnly" will start up only the administration server (no managed servers will be started)
  # - "IfNeeded" will start all non-clustered servers, including the administration server, and clustered servers up to their replica count.
  serverStartPolicy: IfNeeded

  # Settings for all server pods in the domain including the introspector job pod
  serverPod:
    # Optional new or overridden environment variables for the domain's pods
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m "
    resources:
      requests:
        cpu: "250m"
        memory: "768Mi"

    # Optional volumes and mounts for the domain's pods. See also 'logHome'.
    #volumes:
    #- name: weblogic-domain-storage-volume
    #  persistentVolumeClaim:
    #    claimName: sample-domain1-weblogic-sample-pvc
    #volumeMounts:
    #- mountPath: /shared
    #  name: weblogic-domain-storage-volume

  # The desired behavior for starting the domain's administration server.
  # adminServer:
    # Set up a Kubernetes node port for the administration server default channel
    #adminService:
    #  channels:
    #  - channelName: default
    #    nodePort: 30701

  # The number of managed servers to start for unlisted clusters
  replicas: 1

  # The desired behavior for starting a specific cluster's member servers
  clusters:
  - name: sample-domain1-cluster-1

  # Change the restartVersion to force the introspector job to rerun
  # and apply any new model configuration, to also force a subsequent
  # roll of your domain's WebLogic Server pods.
  restartVersion: '1'

  # Changes to this field cause the operator to repeat its introspection of the
  #  WebLogic domain configuration.
  introspectVersion: '1'

    # Secrets that are referenced by model yaml macros
    # (the model yaml in the optional configMap or in the image)
    #secrets:
    #- sample-domain1-datasource-secret

---

apiVersion: "weblogic.oracle/v1"
kind: Cluster
metadata:
  name: sample-domain1-cluster-1
  # Update this with the namespace your domain will run in:
  namespace: sample-domain1-ns
  labels:
    # Update this with the `domainUID` of your domain:
    weblogic.domainUID: sample-domain1

spec:
  replicas: 2
  clusterName: cluster-1

GKE cluster logs



Create Weblogic domain type Model in image
$ kubectl apply -f https://raw.githubusercontent.com/oracle/weblogic-kubernetes-operator/release/4.1/kubernetes/samples/quick-start/domain-resource.yaml
domain.weblogic.oracle/sample-domain1 created
cluster.weblogic.oracle/sample-domain1-cluster-1 created

1st attempt to create domain
$ kubectl describe domain sample-domain1 -n sample-domain1-ns
Name:         sample-domain1
Namespace:    sample-domain1-ns
Labels:       weblogic.domainUID=sample-domain1
Annotations:  <none>
API Version:  weblogic.oracle/v9
Kind:         Domain
Metadata:
  Creation Timestamp:  2023-12-15T09:16:03Z
  Generation:          1
  Resource Version:    993809
  UID:                 1f7d5657-cac5-442a-9b4c-8554459a4dc2
Spec:
  Clusters:
    Name:  sample-domain1-cluster-1
  Configuration:
    Model:
      Auxiliary Images:
        Image:                       phx.ocir.io/weblogick8s/quick-start-aux-image:v1
      Domain Type:                   WLS
      Runtime Encryption Secret:     sample-domain1-runtime-encryption-secret
    Override Distribution Strategy:  Dynamic
  Domain Home:                       /u01/domains/sample-domain1
  Domain Home Source Type:           FromModel
  Failure Retry Interval Seconds:    120
  Failure Retry Limit Minutes:       1440
  Http Access Log In Log Home:       true
  Image:                             container-registry.oracle.com/middleware/weblogic:12.2.1.4
  Image Pull Policy:                 IfNotPresent
  Image Pull Secrets:
    Name:                             weblogic-repo-credentials
  Include Server Out In Pod Log:      true
  Introspect Version:                 1
  Max Cluster Concurrent Shutdown:    1
  Max Cluster Concurrent Startup:     0
  Max Cluster Unavailable:            1
  Replace Variables In Java Options:  false
  Replicas:                           1
  Restart Version:                    1
  Server Pod:
    Env:
      Name:   JAVA_OPTIONS
      Value:  -Dweblogic.StdoutDebugEnabled=false
      Name:   USER_MEM_ARGS
      Value:  -Djava.security.egd=file:/dev/./urandom -Xms256m -Xmx512m 
    Resources:
      Requests:
        Cpu:            250m
        Memory:         768Mi
  Server Start Policy:  IfNeeded
  Web Logic Credentials Secret:
    Name:  sample-domain1-weblogic-credentials
Status:
  Clusters:
  Conditions:
    Last Transition Time:  2023-12-15T09:16:15.700869Z
    Message:               Failure on pod 'sample-domain1-introspector-wrxfg' in namespace 'sample-domain1-ns': 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
    Reason:                ServerPod
    Severity:              Severe
    Status:                True
    Type:                  Failed
    Last Transition Time:  2023-12-15T09:16:15.773251Z
    Status:                False
    Type:                  Completed
  Initial Failure Time:    2023-12-15T09:16:15.700869Z
  Last Failure Time:       2023-12-15T09:16:15.700869Z
  Message:                 Failure on pod 'sample-domain1-introspector-wrxfg' in namespace 'sample-domain1-ns': 0/1 nodes are available: 1 Insufficient cpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
  Observed Generation:     1
  Reason:                  ServerPod
  Servers:
  Start Time:  2023-12-15T09:16:10.298141Z
Events:
  Type     Reason   Age   From               Message
  ----     ------   ----  ----               -------
  Normal   Created  89s   weblogic.operator  Domain sample-domain1 was created.
  Warning  Failed   82s   weblogic.operator  Domain sample-domain1 failed due to 'Server pod error': Failure on pod 'sample-domain1-introspector-wrxfg' in namespace '
Get domain status
$ kubectl get domain sample-domain1 -n sample-domain1-ns -o json | jq .status
{
  "clusters": [],
  "conditions": [
    {
      "lastTransitionTime": "2023-12-15T09:18:14.230192Z",
      "message": "Job sample-domain1-introspector failed due to reason: DeadlineExceeded. ActiveDeadlineSeconds of the job is configured with 120 seconds. The job was started 120 seconds ago. Ensure all domain dependencies have been deployed (any secrets, config-maps, PVs, and PVCs that the domain resource references). Use kubectl describe for the job and its pod for more job failure information. The job may be retried by the operator with longer `ActiveDeadlineSeconds` value in each subsequent retry. Use `domain.spec.configuration.introspectorJobActiveDeadlineSeconds` to increase the job timeout interval if the job still fails after the retries are exhausted. The time limit for retries can be configured in `domain.spec.failureRetryLimitMinutes`.",
      "reason": "Introspection",
      "severity": "Severe",
      "status": "True",
      "type": "Failed"
    },
    {
      "lastTransitionTime": "2023-12-15T09:16:15.773251Z",
      "status": "False",
      "type": "Completed"
    }
  ],
  "failedIntrospectionUid": "8a861c80-cb72-4314-802c-f7c8ee53cffb",
  "initialFailureTime": "2023-12-15T09:18:14.230192Z",
  "lastFailureTime": "2023-12-15T09:18:14.230192Z",
  "message": "Job sample-domain1-introspector failed due to reason: DeadlineExceeded. ActiveDeadlineSeconds of the job is configured with 120 seconds. The job was started 120 seconds ago. Ensure all domain dependencies have been deployed (any secrets, config-maps, PVs, and PVCs that the domain resource references). Use kubectl describe for the job and its pod for more job failure information. The job may be retried by the operator with longer `ActiveDeadlineSeconds` value in each subsequent retry. Use `domain.spec.configuration.introspectorJobActiveDeadlineSeconds` to increase the job timeout interval if the job still fails after the retries are exhausted. The time limit for retries can be configured in `domain.spec.failureRetryLimitMinutes`.. Will retry next at 2023-12-15T09:20:14.230192611Z and approximately every 120 seconds afterward until 2023-12-16T09:18:14.230192611Z if the failure is not resolved.",
  "observedGeneration": 1,
  "reason": "Introspection",
  "servers": [],
  "startTime": "2023-12-15T09:16:10.298141Z"
}
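
Both failures point at the environment rather than at the domain resource: the introspector pod could not be scheduled (1 node, Insufficient cpu - the cluster above is a single e2-small node while each server pod requests 250m CPU), and the retried introspector job then hit its 120-second ActiveDeadlineSeconds. A hedged remediation sketch, using only the fields named in the status message - inspect the introspector job and, if needed, raise the job deadline (300 seconds here is an arbitrary example value):

$ kubectl describe job sample-domain1-introspector -n sample-domain1-ns
$ kubectl patch domain sample-domain1 -n sample-domain1-ns --type=merge \
    -p '{"spec":{"configuration":{"introspectorJobActiveDeadlineSeconds":300}}}'

Resizing the node pool (or lowering spec.serverPod.resources.requests) addresses the scheduling part.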


Workloads




Logs explorer on GKE


Get pods

$ kubectl get pods -n sample-domain1-ns
NAME                                READY   STATUS    RESTARTS   AGE
sample-domain1-introspector-t9nbl   1/1     Running   0          6m22s
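
From here you can watch the admin and managed server pods come up, and tail the introspector output if progress stalls:

$ kubectl get pods -n sample-domain1-ns -w
$ kubectl logs job/sample-domain1-introspector -n sample-domain1-ns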

Install kubectl on Fedora and start k8s cluster in GCP

 HOWTO




Install kubectl using RPM


dave@dave:~$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
EOF
[sudo] password for dave: 
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
dave@dave:~$ sudo yum install -y kubectl
Fedora 39 - x86_64 - Updates                                                                                                                                                                                                                                                                  118 kB/s |  16 kB     00:00    
Fedora 39 - x86_64 - Updates                                                                                                                                                                                                                                                                  1.8 MB/s | 2.7 MB     00:01    
Kubernetes                                                                                                                                                                                                                                                                                     12 kB/s | 5.7 kB     00:00    
Dependencies resolved.
==============================================================================================================================================================================================================================================================================================================================
 Package                                                                   Architecture                                                             Version                                                                                Repository                                                                    Size
==============================================================================================================================================================================================================================================================================================================================
Installing:
 kubectl                                                                   x86_64                                                                   1.29.0-150500.1.1                                                                      kubernetes                                                                    10 M

Transaction Summary
==============================================================================================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 10 M
Installed size: 47 M
Downloading Packages:
kubectl-1.29.0-150500.1.1.x86_64.rpm                                                                                                                                                                                                                                                           12 MB/s |  10 MB     00:00    
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                                                                                                          12 MB/s |  10 MB     00:00     
Kubernetes                                                                                                                                                                                                                                                                                    7.9 kB/s | 1.7 kB     00:00    
Importing GPG key 0x9A296436:
 Userid     : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
 Fingerprint: DE15 B144 86CD 377B 9E87 6E1A 2346 54DA 9A29 6436
 From       : https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                                                                                                      1/1 
  Installing       : kubectl-1.29.0-150500.1.1.x86_64                                                                                                                                                                                                                                                                     1/1 
  Verifying        : kubectl-1.29.0-150500.1.1.x86_64                                                                                                                                                                                                                                                                     1/1 

Installed:
  kubectl-1.29.0-150500.1.1.x86_64                                                                                                                                                                                                                                                                                            

Complete!
dave@dave:~$ 
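
Verify the client:

$ kubectl version --client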


Check k8s cluster via gcloud

$ gcloud container clusters list
NAME          LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION    NUM_NODES  STATUS
dave-cluster  europe-west10  1.27.3-gke.100  1.1.1.1  e2-small      1.27.3-gke.100  1          RUNNING


Connect to GKE from localhost via gcloud and kubectl

 sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin
[sudo] password for dave: 
Last metadata expiration check: 0:29:49 ago on Thu 14 Dec 2023 10:05:11 AM CET.
Dependencies resolved.
==================================================================================================================================================================================================================
 Package                                                                 Architecture                           Version                                    Repository                                        Size
==================================================================================================================================================================================================================
Installing:
 google-cloud-sdk-gke-gcloud-auth-plugin                                 x86_64                                 457.0.0-1                                  google-cloud-sdk                                 3.6 M

Transaction Summary
==================================================================================================================================================================================================================
Install  1 Package

Total download size: 3.6 M
Installed size: 12 M
Is this ok [y/N]: y
Downloading Packages:
4b31246c89a3b0d6ec3d0120181b2e556f2a4f483a412c1f89bca3d475061438-google-cloud-sdk-gke-gcloud-auth-plugin-457.0.0-1.x86_64.rpm                                                     9.9 MB/s | 3.6 MB     00:00    
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                             9.8 MB/s | 3.6 MB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                          1/1 
  Installing       : google-cloud-sdk-gke-gcloud-auth-plugin-457.0.0-1.x86_64                                                                                                                                 1/1 
  Running scriptlet: google-cloud-sdk-gke-gcloud-auth-plugin-457.0.0-1.x86_64                                                                                                                                 1/1 
  Verifying        : google-cloud-sdk-gke-gcloud-auth-plugin-457.0.0-1.x86_64                                                                                                                                 1/1 

Installed:
  google-cloud-sdk-gke-gcloud-auth-plugin-457.0.0-1.x86_64                                                                                                                                                        

Complete!

Check the version
dave@dave:~$ gke-gcloud-auth-plugin --version
Kubernetes v1.28.2-alpha+dafa3f6c671d0a0d879f3b203588c782ecb87a18

Update the kubectl configuration to use the plugin:
$ gcloud container clusters get-credentials dave-cluster \
    --zone=europe-west10
Fetching cluster endpoint and auth data.
kubeconfig entry generated for dave-cluster.
dave@dave:~$ kubectl get namespaces
NAME                       STATUS   AGE
default                    Active   13m
gke-gmp-system             Active   12m
gke-managed-filestorecsi   Active   13m
gmp-public                 Active   12m
kube-node-lease            Active   13m
kube-public                Active   13m
kube-system                Active   13m

Interact with k8s cluster via kubectl
dave@dave:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.32.43.197
  name: gke_dave-terraform_europe-west10_dave-cluster
contexts:
- context:
    cluster: gke_dave-terraform_europe-west10_dave-cluster
    user: gke_dave-terraform_europe-west10_dave-cluster
  name: gke_dave-terraform_europe-west10_dave-cluster
current-context: gke_dave-terraform_europe-west10_dave-cluster
kind: Config
preferences: {}
users:
- name: gke_dave-terraform_europe-west10_dave-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true

Current context
dave@dave:~$ kubectl config current-context
gke_dave-terraform_europe-west10_dave-cluster




Google HOWTO for GKE k8s cluster creation

Create GKE cluster in GCP

Enable Google Kubernetes Engine in a Cloud project

For access to Google Kubernetes Engine, select or create a Cloud project to work in, then enable the required APIs:

  1. Create a new Cloud project to ensure that you have the permissions you need, or select an existing project in which you have the relevant permissions.

  2. Enable the necessary APIs:
    • Kubernetes Engine API
    • Artifact Registry API
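
If you prefer the CLI, the same two APIs can be enabled with gcloud in the selected project:

$ gcloud services enable container.googleapis.com artifactregistry.googleapis.com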


Create a GKE cluster

  1. Click the Navigation menu  icon, then Kubernetes Engine.

  2. Click Clusters.

  3. Click Create.

  4. Choose Standard mode and click Configure.

  5. In the Name field, enter the name hello-cluster.

  6. Under Location type, select Zonal and then select a Compute Engine zone from the Zone drop-down list, such as us-west1-a.

  7. Click Create. This creates a GKE cluster. After the cluster is ready, a green checkmark appears next to the cluster name.
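
The same cluster can be created from the CLI; a sketch using the zone chosen above, with other settings left at their defaults:

$ gcloud container clusters create hello-cluster --zone us-west1-a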

Deploy the sample app to GKE

  1. Click Workloads.

  2. Click Deploy.

  3. In the Edit container section, select Existing container image.

  4. In the Image path field, click Select.

  5. In the Select container image pane, select the hello-app image you pushed to Artifact Registry.

  6. Click Select.

  7. Click Done.

  8. Click Continue.

  9. In the Configuration section, under Labels, enter app for Key
    and hello-app for Value.

  10. Under the Configuration section, click the View YAML button under Configuration YAML. This opens a YAML configuration file representing the two Kubernetes API resources about to be deployed into your cluster.

  11. Click Close.

  12. Click Deploy.

    When the Deployment Pods are ready, the Deployment details page opens.

  13. Under Managed pods, note the three running Pods for the hello-app Deployment.
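
The console steps above are equivalent to creating a Deployment with kubectl; a sketch, assuming the hello-app image path shown in the clean-up section at the end of this post:

$ kubectl create deployment hello-app \
    --image=${REGION}-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1
$ kubectl scale deployment hello-app --replicas=3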

Expose a sample app to the internet

  1. Click Workloads.

  2. Click on hello-app in the Name column.

  3. From the Deployment details page, click the Actions > Expose button.

  4. In the Expose dialog, under the Port Mapping section, set the port mapping for the Service.

  5. Click Expose to create a Kubernetes Service for hello-app.

    When the Load Balancer is ready, the Service details page opens.

  6. Scroll down to the Exposing services section and copy the external IP from the Endpoints column for the newly exposed Load Balancer.

    • Note that Endpoints are used as External IP addresses for the next sections.
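
The Expose action corresponds to creating a LoadBalancer Service; a hedged sketch (the service name matches the clean-up section below, and the 80 to 8080 port mapping assumes the standard hello-app container port, which is not stated in this post):

$ kubectl expose deployment hello-app \
    --name=hello-app-service \
    --type=LoadBalancer \
    --port=80 --target-port=8080
$ kubectl get service hello-app-service --watch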


🎉 Success

You have successfully deployed a containerized web application!



Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, delete your project if you're not going to use it again. If you intend to use it again, you can delete the individual resources.

Delete the project

To delete the Cloud project:

```bash
gcloud projects delete ${PROJECT_ID}
```

Delete individual resources

  1. Delete the Service: This deallocates the Cloud Load Balancer created for your Service:

    kubectl delete service hello-app-service
  2. Delete the cluster: This deletes the resources that make up the cluster. Replace COMPUTE_ZONE in the command below with your own zone:

    gcloud container clusters delete hello-cluster --zone COMPUTE_ZONE
  3. Delete your container images: This deletes the Docker images you pushed to Artifact Registry. Replace ${REGION} and ${PROJECT_ID} with your own values:

    gcloud artifacts docker images delete \
        ${REGION}-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v1 \
        --delete-tags --quiet

    gcloud artifacts docker images delete \
        ${REGION}-docker.pkg.dev/${PROJECT_ID}/hello-repo/hello-app:v2 \
        --delete-tags --quiet

Do more with Google Kubernetes Engine