Sunday, August 3, 2025

OCP k8s CRC local monitoring

 HOWTO



Add memory

Cluster monitoring needs more RAM than the CRC default, so bump the instance memory to 14 GiB (14336 MiB) before enabling it.


$ crc stop
INFO Stopping kubelet and all containers...       
INFO Stopping the instance, this may take a few minutes... 
Stopped the instance
dave@fedora:~/Downloads$ crc config set memory 14336
Changes to configuration property 'memory' are only applied when the CRC instance is started.
If you already have a running CRC instance, then for this configuration change to take effect, stop the CRC instance with 'crc stop' and restart it with 'crc start'.
dave@fedora:~/Downloads$  crc config set enable-cluster-monitoring true
Successfully configured enable-cluster-monitoring to true
dave@fedora:~/Downloads$ crc start
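
Before restarting, it is worth confirming the settings took; crc config can read a single property back or dump the whole configuration:

$ crc config get memory
$ crc config view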

Monitoring

CRC disables several cluster operators through clusterversion overrides; the listing below shows cluster-monitoring-operator among them. Setting enable-cluster-monitoring to true tells CRC to start the cluster with the monitoring operator enabled.


dave@fedora:~/Downloads$ oc config use-context crc-admin
Switched to context "crc-admin".
dave@fedora:~/Downloads$ oc whoami
kubeadmin
dave@fedora:~/Downloads$ oc get clusterversion version -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}' | nl -v -2
    -2    cluster-monitoring-operator
    -1    monitoring
     0    cloud-credential-operator
     1    cloud-credential
     2    cluster-autoscaler-operator
     3    cluster-autoscaler
     4    cluster-cloud-controller-manager-operator
     5    cloud-controller-manager
dave@fedora:~/Downloads$ crc config set enable-cluster-monitoring true
Successfully configured enable-cluster-monitoring to true
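
After the restart, the monitoring stack should come up in the openshift-monitoring namespace; a quick check (output omitted):

$ oc get pods -n openshift-monitoring
$ oc get route -n openshift-monitoring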




Pod YAML

kind: Pod
apiVersion: v1
metadata:
  name: demo
  namespace: demo
  uid: 6be00704-f3ca-4d4b-9af6-f6bee0494ba1
  resourceVersion: '37632'
  creationTimestamp: '2025-08-03T09:24:59Z'
  labels:
    run: demo
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.66/23"],"mac_address":"0a:58:0a:d9:00:42","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.66/23","gateway_ip":"10.217.0.1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.217.0.66"
          ],
          "mac": "0a:58:0a:d9:00:42",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: anyuid
  managedFields:
    - manager: crc
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.ovn.org/pod-networks': {}
      subresource: status
    - manager: kubectl-run
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:run': {}
        'f:spec':
          'f:containers':
            'k:{"name":"demo"}':
              .: {}
              'f:command': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
          'f:dnsPolicy': {}
          'f:enableServiceLinks': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:securityContext': {}
          'f:terminationGracePeriodSeconds': {}
    - manager: multus-daemon
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.v1.cni.cncf.io/network-status': {}
      subresource: status
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:55:05Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            'k:{"type":"ContainersReady"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Initialized"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"PodReadyToStartContainers"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
          'f:containerStatuses': {}
          'f:hostIP': {}
          'f:hostIPs': {}
          'f:phase': {}
          'f:podIP': {}
          'f:podIPs':
            .: {}
            'k:{"ip":"10.217.0.66"}':
              .: {}
              'f:ip': {}
          'f:startTime': {}
      subresource: status
spec:
  restartPolicy: Always
  serviceAccountName: default
  imagePullSecrets:
    - name: default-dockercfg-p7cfm
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  nodeName: crc
  securityContext:
    seLinuxOptions:
      level: 's0:c26,c0'
  containers:
    - resources: {}
      terminationMessagePath: /dev/termination-log
      name: demo
      command:
        - sleep
        - 600s
      securityContext:
        capabilities:
          drop:
            - MKNOD
      imagePullPolicy: Always
      volumeMounts:
        - name: kube-api-access-z7xrc
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
  serviceAccount: default
  volumes:
    - name: kube-api-access-z7xrc
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
          - configMap:
              name: openshift-service-ca.crt
              items:
                - key: service-ca.crt
                  path: service-ca.crt
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  containerStatuses:
    - restartCount: 3
      started: true
      ready: true
      name: demo
      state:
        running:
          startedAt: '2025-08-03T09:55:04Z'
      volumeMounts:
        - name: kube-api-access-z7xrc
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
          recursiveReadOnly: Disabled
      imageID: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
      image: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
      lastState:
        terminated:
          exitCode: 0
          reason: Completed
          startedAt: '2025-08-03T09:45:03Z'
          finishedAt: '2025-08-03T09:55:03Z'
          containerID: 'cri-o://fe191488478ade4fb4fbd6f045f4896cdd3128a02cd23c9163dea85039de6efc'
      containerID: 'cri-o://f9351ee373330f42397d46be58820ae2d34616afcddf9be923e30bbd906b5238'
  qosClass: BestEffort
  hostIPs:
    - ip: 192.168.126.11
  podIPs:
    - ip: 10.217.0.66
  podIP: 10.217.0.66
  hostIP: 192.168.126.11
  startTime: '2025-08-03T09:24:59Z'
  conditions:
    - type: PodReadyToStartContainers
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:25:03Z'
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:24:59Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:55:05Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:55:05Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:24:59Z'
  phase: Running
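
With the monitoring stack running, the metrics API also serves live resource usage for this pod; for example:

$ oc adm top pod demo -n demo
$ oc adm top node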



Deploying a sample application into OCP CRC local with odo

HOWTO

See also


 Prerequisites


Installing odo



dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64 -o odo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 90.1M  100 90.1M    0     0  4255k      0  0:00:21  0:00:21 --:--:-- 4785k
dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256)  odo" | shasum -a 256 --check
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    64  100    64    0     0    181      0 --:--:-- --:--:-- --:--:--   181
bash: shasum: command not found...
Install package 'perl-Digest-SHA' to provide command 'shasum'? [N/y] y


 * Waiting in queue... 
 * Loading list of packages.... 
The following packages have to be installed:
 perl-Digest-SHA-1:6.04-513.fc42.x86_64    Perl extension for SHA-1/224/256/384/512
Proceed with changes? [N/y] y


 * Waiting in queue... 
 * Waiting for authentication... 
 * Waiting in queue... 
 * Downloading packages... 
 * Requesting data... 
 * Testing changes... 
 * Installing packages... 
shasum: standard input: no properly formatted SHA checksum lines found

With shasum now available, re-running the download and check succeeds:

dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256)  odo" | shasum -a 256 --check
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    64  100    64    0     0    385      0 --:--:-- --:--:-- --:--:--   385
odo: OK
dave@fedora:~/Downloads$ sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
[sudo] password for dave: 

dave@fedora:~/Downloads$  odo login -u developer -p developer
Connecting to the OpenShift cluster

Login successful.

You don't have any projects. You can try to create a new project, by running

    odo create project <projectname>

dave@fedora:~/Downloads$ odo create project sample-app
 ✓  Creating the project "sample-app" [80ms]
 ✓  Project "sample-app" is ready for use
 ✓  New project created and now using project: sample-app
dave@fedora:~/Downloads$ mkdir sample-app
dave@fedora:~/Downloads$ cd sample-app
dave@fedora:~/Downloads/sample-app$ git clone https://github.com/openshift/nodejs-ex
Cloning into 'nodejs-ex'...
remote: Enumerating objects: 836, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 836 (delta 0), reused 0 (delta 0), pack-reused 835 (from 2)
Receiving objects: 100% (836/836), 773.00 KiB | 908.00 KiB/s, done.
Resolving deltas: 100% (321/321), done.
dave@fedora:~/Downloads/sample-app$ cd nodejs-ex
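
From here the usual odo v3 inner loop is to generate a devfile for the cloned sources and start a dev session; a minimal sketch (the component name nodejs-app is arbitrary):

dave@fedora:~/Downloads/sample-app/nodejs-ex$ odo init --name nodejs-app --devfile nodejs
dave@fedora:~/Downloads/sample-app/nodejs-ex$ odo dev

odo dev builds the component in the cluster and keeps it port-forwarded locally until interrupted.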




Install local OpenShift - CRC

 HOWTO



Install prerequisites


dave@fedora:~$ sudo dnf install NetworkManager libvirt
[sudo] password for dave: 
Updating and loading repositories:
Repositories loaded.
Package "NetworkManager-1:1.52.1-1.fc42.x86_64" is already installed.

Package                                                Arch          Version                                                Repository                         Size
Installing:
 libvirt                                               x86_64        11.0.0-3.fc42                                          updates                         0.0   B
Installing dependencies:
 libvirt-client-qemu                                   x86_64        11.0.0-3.fc42                                          updates                        64.0 KiB
 libvirt-daemon-config-nwfilter                        x86_64        11.0.0-3.fc42                                          updates                        20.2 KiB
 libvirt-daemon-driver-ch                              x86_64        11.0.0-3.fc42                                          updates                       838.1 KiB
 libvirt-daemon-driver-libxl                           x86_64        11.0.0-3.fc42                                          updates                         1.0 MiB
 libvirt-daemon-driver-lxc                             x86_64        11.0.0-3.fc42                                          updates                         1.1 MiB
 libvirt-daemon-driver-vbox                            x86_64        11.0.0-3.fc42                                          updates                       949.7 KiB
 python3-libvirt                                       x86_64        11.0.0-1.fc42                                          fedora                          2.0 MiB

Transaction Summary:
 Installing:         8 packages

Total size of inbound packages is 1 MiB. Need to download 1 MiB.
After this operation, 6 MiB extra will be used (install 6 MiB, remove 0 B).
Is this ok [y/N]: y
[1/8] libvirt-0:11.0.0-3.fc42.x86_64                                                                                       100% |  52.9 KiB/s |  10.8 KiB |  00m00s
[2/8] libvirt-client-qemu-0:11.0.0-3.fc42.x86_64                                                                           100% | 135.2 KiB/s |  31.2 KiB |  00m00s
[3/8] libvirt-daemon-config-nwfilter-0:11.0.0-3.fc42.x86_64                                                                100% |  98.9 KiB/s |  23.2 KiB |  00m00s
[4/8] libvirt-daemon-driver-ch-0:11.0.0-3.fc42.x86_64                                                                      100% | 772.5 KiB/s | 227.1 KiB |  00m00s
[5/8] libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                   100% |   1.1 MiB/s | 299.4 KiB |  00m00s
[6/8] libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                     100% | 969.8 KiB/s | 313.2 KiB |  00m00s
[7/8] python3-libvirt-0:11.0.0-1.fc42.x86_64                                                                               100% |   1.4 MiB/s | 363.5 KiB |  00m00s
[8/8] libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                    100% |   1.0 MiB/s | 267.1 KiB |  00m00s
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
[8/8] Total                                                                                                                100% |   1.4 MiB/s |   1.5 MiB |  00m01s
Running transaction
[ 1/10] Verify package files                                                                                               100% |   1.3 KiB/s |   8.0   B |  00m00s
[ 2/10] Prepare transaction                                                                                                100% |  41.0   B/s |   8.0   B |  00m00s
[ 3/10] Installing python3-libvirt-0:11.0.0-1.fc42.x86_64                                                                  100% | 137.1 MiB/s |   2.1 MiB |  00m00s
[ 4/10] Installing libvirt-client-qemu-0:11.0.0-3.fc42.x86_64                                                              100% |   2.5 MiB/s |  64.7 KiB |  00m00s
[ 5/10] Installing libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                       100% |  33.2 MiB/s | 952.0 KiB |  00m00s
[ 6/10] Installing libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                        100% |  33.1 MiB/s |   1.1 MiB |  00m00s
[ 7/10] Installing libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                      100% |  33.5 MiB/s |   1.0 MiB |  00m00s
[ 8/10] Installing libvirt-daemon-driver-ch-0:11.0.0-3.fc42.x86_64                                                         100% |  32.8 MiB/s | 840.2 KiB |  00m00s
[ 9/10] Installing libvirt-daemon-config-nwfilter-0:11.0.0-3.fc42.x86_64                                                   100% | 115.1 KiB/s |  14.2 KiB |  00m00s
[10/10] Installing libvirt-0:11.0.0-3.fc42.x86_64                                                                          100% | 147.0   B/s | 124.0   B |  00m01s
>>> Running %posttrans scriptlet: libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                                
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                               
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd.socket' → '/usr/lib/systemd/system/virtvboxd.socket'.                                      
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd-ro.socket' → '/usr/lib/systemd/system/virtvboxd-ro.socket'.                                
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd-admin.socket' → '/usr/lib/systemd/system/virtvboxd-admin.socket'.                          
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtvboxd.service' → '/usr/lib/systemd/system/virtvboxd.service'.                                 
>>>                                                                                                                                                                
>>> Running %posttrans scriptlet: libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                                 
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                                
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd.socket' → '/usr/lib/systemd/system/virtlxcd.socket'.                                        
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd-ro.socket' → '/usr/lib/systemd/system/virtlxcd-ro.socket'.                                  
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd-admin.socket' → '/usr/lib/systemd/system/virtlxcd-admin.socket'.                            
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtlxcd.service' → '/usr/lib/systemd/system/virtlxcd.service'.                                   
>>>                                                                                                                                                                
>>> Running %posttrans scriptlet: libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                               
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                              
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend.socket' → '/usr/lib/systemd/system/virtxend.socket'.                                        
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend-ro.socket' → '/usr/lib/systemd/system/virtxend-ro.socket'.                                  
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend-admin.socket' → '/usr/lib/systemd/system/virtxend-admin.socket'.                            
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtxend.service' → '/usr/lib/systemd/system/virtxend.service'.                                   
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlockd-admin.socket' → '/usr/lib/systemd/system/virtlockd-admin.socket'.                          
>>>                                                                                                                                                                
Complete!

Download installation

Download the crc-linux-amd64.tar.xz archive from https://console.redhat.com/openshift/create/local (the same page provides the pull secret needed later).






Install CRC



 
dave@fedora:~$ cd ~/Downloads/
dave@fedora:~/Downloads$ ls -l crc-linux-amd64.tar.xz 
-rw-r--r--. 1 dave dave 37031432 Aug  3 09:32 crc-linux-amd64.tar.xz
dave@fedora:~/Downloads$ tar xvf crc-linux-amd64.tar.xz
crc-linux-2.53.0-amd64/
crc-linux-2.53.0-amd64/LICENSE
crc-linux-2.53.0-amd64/crc
dave@fedora:~/Downloads$ mkdir -p ~/bin
dave@fedora:~/Downloads$  cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
dave@fedora:~/Downloads$ export PATH=$PATH:$HOME/bin
dave@fedora:~/Downloads$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
dave@fedora:~/Downloads$ find ~/bin/
/home/dave/bin/
/home/dave/bin/crc

Creating CRC
$ crc delete # Remove previous cluster (if present)
$ crc config set preset openshift # Configure to use openshift preset
$ crc setup # Initialize environment for cluster
$ crc start # Start the cluster

Setup CRC


dave@fedora:~/Downloads$ crc config set preset openshift 
To confirm your system is ready, and you have the needed system bundle, please run 'crc setup' before 'crc start'.
dave@fedora:~/Downloads$ crc setup 
CRC is constantly improving and we would like to know more about usage (more details at https://developers.redhat.com/article/tool-data-collection)
Your preference can be changed manually if desired using 'crc config set consent-telemetry <yes/no>'
Would you like to contribute anonymous usage statistics? [y/N]: y
Thanks for helping us! You can disable telemetry with the command 'crc config set consent-telemetry no'.
INFO Using bundle path /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle 
INFO Checking if running as non-root              
INFO Checking if running inside WSL2              
INFO Checking if crc-admin-helper executable is cached 
INFO Caching crc-admin-helper executable          
INFO Using root access: Changing ownership of /home/dave/.crc/bin/crc-admin-helper-linux-amd64 
[sudo] password for dave: 
INFO Using root access: Setting suid for /home/dave/.crc/bin/crc-admin-helper-linux-amd64 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Creating symlink for crc executable          
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /home/dave/.crc/bin/oc 
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Adding user to libvirt group                 
INFO Using root access: Adding user to the libvirt group 
INFO Checking if active user/process is currently part of the libvirt group 
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Installing crc-driver-libvirt                
INFO Checking crc daemon systemd service          
INFO Setting up crc daemon systemd service        
INFO Checking crc daemon systemd socket units     
INFO Setting up crc daemon systemd socket units   
INFO Checking if vsock is correctly configured    
INFO Setting up vsock support                     
INFO Using root access: Setting CAP_NET_BIND_SERVICE capability for /home/dave/bin/crc executable 
INFO Using root access: Creating udev rule for /dev/vsock 
INFO Using root access: Changing permissions for /etc/udev/rules.d/99-crc-vsock.rules to 644  
INFO Using root access: Reloading udev rules database 
INFO Using root access: Loading vhost_vsock kernel module 
INFO Using root access: Creating file /etc/modules-load.d/vhost_vsock.conf 
INFO Using root access: Changing permissions for /etc/modules-load.d/vhost_vsock.conf to 644  
INFO Checking if CRC bundle is extracted in '$HOME/.crc' 
INFO Checking if /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle exists 
INFO Getting bundle for the CRC executable        
INFO Downloading bundle: /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle... 

Start CRC


dave@fedora:~/Downloads$ crc start
INFO Using bundle path /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle 
INFO Checking if running as non-root              
INFO Checking if running inside WSL2              
INFO Checking if crc-admin-helper executable is cached 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /home/dave/.crc/bin/oc 
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if active user/process is currently part of the libvirt group 
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking crc daemon systemd socket units     
INFO Checking if vsock is correctly configured    
INFO Loading bundle: crc_libvirt_4.19.3_amd64...  
CRC requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local.
? Please enter the pull secret *********
X Sorry, your reply was invalid: invalid pull secret: invalid character 'c' looking for beginning of value
? Please enter the pull secret ******************************************************************************************************************
INFO Creating CRC VM for OpenShift 4.19.3...      
INFO Generating new SSH key pair...               
INFO Generating new password for the kubeadmin user 
INFO Starting CRC VM for openshift 4.19.3...      
INFO CRC instance is running with IP 127.0.0.1    
INFO CRC VM is running                            
INFO Updating authorized keys...                  
INFO Configuring shared directories               
INFO Check internal and public DNS query...       
INFO Check DNS query from host...                 
INFO Verifying validity of the kubelet certificates... 
INFO Starting kubelet service                     
INFO Waiting for kube-apiserver availability... [takes around 2min] 
INFO Adding user's pull secret to the cluster...  
INFO Updating SSH key to machine config resource... 
INFO Waiting until the user's pull secret is written to the instance disk... 
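
The first start takes a while; in another terminal, crc status shows how far along the VM and cluster are:

$ crc status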



Started OCP cluster 


Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: SOME-PASSWORD

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

oc projects
dave@fedora:~/Downloads$ oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    hostpath-provisioner
    kube-node-lease
    kube-public
    kube-system
    openshift
    openshift-apiserver
    openshift-apiserver-operator
    openshift-authentication
    openshift-authentication-operator
    openshift-cloud-network-config-controller
    openshift-cloud-platform-infra
    openshift-cluster-machine-approver
    openshift-cluster-samples-operator
    openshift-cluster-storage-operator
    openshift-cluster-version
    openshift-config
    openshift-config-managed
    openshift-config-operator
    openshift-console
    openshift-console-operator
    openshift-console-user-settings
    openshift-controller-manager
    openshift-controller-manager-operator
    openshift-dns
    openshift-dns-operator
    openshift-etcd
    openshift-etcd-operator
    openshift-host-network
    openshift-image-registry
    openshift-infra
    openshift-ingress
    openshift-ingress-canary
    openshift-ingress-operator
    openshift-kni-infra
    openshift-kube-apiserver
    openshift-kube-apiserver-operator
    openshift-kube-controller-manager
    openshift-kube-controller-manager-operator
    openshift-kube-scheduler
    openshift-kube-scheduler-operator
    openshift-kube-storage-version-migrator
    openshift-kube-storage-version-migrator-operator
    openshift-machine-api
    openshift-machine-config-operator
    openshift-marketplace
    openshift-monitoring
    openshift-multus
    openshift-network-console
    openshift-network-diagnostics
    openshift-network-node-identity
    openshift-network-operator
    openshift-node
    openshift-nutanix-infra
    openshift-oauth-apiserver
    openshift-openstack-infra
    openshift-operator-lifecycle-manager
    openshift-operators
    openshift-ovirt-infra
    openshift-ovn-kubernetes
    openshift-route-controller-manager
    openshift-service-ca
    openshift-service-ca-operator
    openshift-user-workload-monitoring
    openshift-vsphere-infra

Using project "default" on server "https://api.crc.testing:6443".

Login to OCP console



Login using CLI


dave@fedora:~/Downloads$ oc login -u developer https://api.crc.testing:6443
Logged into "https://api.crc.testing:6443" as "developer" using existing credentials.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

dave@fedora:~/Downloads$ oc whoami
developer

Become admin via CLI

dave@fedora:~/Downloads$ oc whoami
developer
dave@fedora:~/Downloads$ oc config use-context crc-admin
Switched to context "crc-admin".
dave@fedora:~/Downloads$ oc whoami
kubeadmin
dave@fedora:~/Downloads$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.19.3    True        False         False      12m     
config-operator                            4.19.3    True        False         False      23d     
console                                    4.19.3    True        False         False      15m     
control-plane-machine-set                  4.19.3    True        False         False      23d     
dns                                        4.19.3    True        False         False      16m     
etcd                                       4.19.3    True        False         False      23d     
image-registry                             4.19.3    True        False         False      15m     
ingress                                    4.19.3    True        False         False      23d     
kube-apiserver                             4.19.3    True        False         False      23d     
kube-controller-manager                    4.19.3    True        False         False      23d     
kube-scheduler                             4.19.3    True        False         False      23d     
kube-storage-version-migrator              4.19.3    True        False         False      16m     
machine-api                                4.19.3    True        False         False      23d     
machine-approver                           4.19.3    True        False         False      23d     
machine-config                             4.19.3    True        False         False      23d     
marketplace                                4.19.3    True        False         False      23d     
network                                    4.19.3    True        False         False      23d     
openshift-apiserver                        4.19.3    True        False         False      16m     
openshift-controller-manager               4.19.3    True        False         False      5m57s   
openshift-samples                          4.19.3    True        False         False      23d     
operator-lifecycle-manager                 4.19.3    True        False         False      23d     
operator-lifecycle-manager-catalog         4.19.3    True        False         False      23d     
operator-lifecycle-manager-packageserver   4.19.3    True        False         False      16m     
service-ca                                 4.19.3    True        False         False      23d     
dave@fedora:~/Downloads$ 
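
If the kubeadmin password ever gets lost, crc can print both logins again, and crc console opens the web UI directly:

$ crc console --credentials
$ crc console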


Create demo project via CLI

dave@fedora:~/Downloads$ oc whoami
kubeadmin
dave@fedora:~/Downloads$  oc registry login --insecure=true
info: Using registry public hostname default-route-openshift-image-registry.apps-crc.testing
Saved credentials for default-route-openshift-image-registry.apps-crc.testing into /run/user/1000/containers/auth.json
dave@fedora:~/Downloads$  oc new-project demo
Now using project "demo" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname

dave@fedora:~/Downloads$ oc image mirror registry.access.redhat.com/ubi8/ubi:latest=default-route-openshift-image-registry.apps-crc.testing/demo/ubi8:latest --insecure=true --filter-by-os=linux/amd64
default-route-openshift-image-registry.apps-crc.testing/
  demo/ubi8
    blobs:
      registry.access.redhat.com/ubi8/ubi sha256:2ff9823b0bdc42e2d925ed7de8114a50e10c0efc5ba3b71f9828cdd8b4463294 5.018KiB
      registry.access.redhat.com/ubi8/ubi sha256:6a22c1a537480d7699f4f391f3810de860e3dbe23b3fc4128ed78dda4189dda4 74.23MiB
    manifests:
      sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c -> latest
  stats: shared=0 unique=2 size=74.24MiB ratio=1.00

phase 0:
  default-route-openshift-image-registry.apps-crc.testing demo/ubi8 blobs=2 mounts=0 manifests=1 shared=0

info: Planning completed in 1.09s
uploading: default-route-openshift-image-registry.apps-crc.testing/demo/ubi8 sha256:6a22c1a537480d7699f4f391f3810de860e3dbe23b3fc4128ed78dda4189dda4 74.23MiB
sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c default-route-openshift-image-registry.apps-crc.testing/demo/ubi8:latest
info: Mirroring completed in 20.67s (3.765MB/s)
dave@fedora:~/Downloads$ oc get is
NAME   IMAGE REPOSITORY                                                    TAGS     UPDATED
ubi8   default-route-openshift-image-registry.apps-crc.testing/demo/ubi8   latest   8 seconds ago
dave@fedora:~/Downloads$ oc set image-lookup ubi8
imagestream.image.openshift.io/ubi8 image lookup updated
dave@fedora:~/Downloads$ oc run demo --image=ubi8 --command -- sleep 600s
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "demo" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "demo" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "demo" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "demo" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/demo created
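
The pod is created anyway because pod security only warns here, but the warning spells out exactly what a restricted-compliant spec needs. A hedged sketch of the same pod with those fields set (demo-restricted is an arbitrary name; the explicit UID is only needed because the anyuid SCC will not pick one for us):

$ oc apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-restricted
  namespace: demo
spec:
  containers:
    - name: demo
      image: ubi8
      command: ["sleep", "600s"]
      securityContext:
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1001        # hypothetical non-root UID; ubi8 runs as root by default
        seccompProfile:
          type: RuntimeDefault
        capabilities:
          drop: ["ALL"]
EOF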

Check demo app via OCP console


Project 


Topology

Pod 


Events






Sunday, December 8, 2024

Argo CD - k8s GitOps on minikube

HOWTO 

Git


Argo CD architecture






 Start minikube

dave@fedora:~$ minikube start --driver=docker
😄  minikube v1.32.0 on Fedora 40
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.34.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.34.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
🔎  Verifying ingress addon...
💡  Some dashboard features require the metrics-server addon. To enable all features please run:

    minikube addons enable metrics-server    


🌟  Enabled addons: ingress-dns, storage-provisioner, dashboard, ingress, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
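
A quick sanity check that the cluster is actually up before installing anything:

$ minikube status
$ kubectl get nodes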

 

 

Install Argo CD 


dave@fedora:~$ kubectl create namespace argocd
namespace/argocd created
dave@fedora:~$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
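
The controllers take a minute or two to pull images; kubectl can block until everything reports Ready instead of polling by hand:

$ kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s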

 Get pods in argocd namespace

$ kubectl get pod -n argocd
NAME                                                READY   STATUS              RESTARTS   AGE
argocd-application-controller-0                     0/1     ContainerCreating   0          20s
argocd-applicationset-controller-5c787df94f-npjn9   0/1     ContainerCreating   0          22s
argocd-dex-server-6bb9b5fc75-x4vt8                  0/1     Init:0/1            0          22s
argocd-notifications-controller-7ccbd7fb6-dm8bq     0/1     ContainerCreating   0          22s
argocd-redis-c5c567495-cgtrl                        0/1     Init:0/1            0          22s
argocd-repo-server-799b498d8b-pc2l8                 0/1     Init:0/1            0          22s
argocd-server-f6d4d8775-9hlcj                       0/1     ContainerCreating   0          21s

Get services in argocd namespace
$ kubectl get svc -n argocd
NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.101.144.51   <none>        7000/TCP,8080/TCP            105s
argocd-dex-server                         ClusterIP   10.111.165.14   <none>        5556/TCP,5557/TCP,5558/TCP   105s
argocd-metrics                            ClusterIP   10.100.56.238   <none>        8082/TCP                     105s
argocd-notifications-controller-metrics   ClusterIP   10.111.151.85   <none>        9001/TCP                     105s
argocd-redis                              ClusterIP   10.97.35.44     <none>        6379/TCP                     104s
argocd-repo-server                        ClusterIP   10.109.34.145   <none>        8081/TCP,8084/TCP            104s
argocd-server                             ClusterIP   10.99.111.141   <none>        80/TCP,443/TCP               104s
argocd-server-metrics                     ClusterIP   10.106.186.75   <none>        8083/TCP                     104s

Port forward to access Argo CD UI
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Get admin password
$ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
apiVersion: v1
data:
  password: ab123==
kind: Secret
metadata:
  creationTimestamp: "2024-12-15T07:58:19Z"
  name: argocd-initial-admin-secret
  namespace: argocd
  resourceVersion: "190652"
  uid: a017dc3e-3232-40f6-a2e6-374872963888
type: Opaque

$ echo ab123== | base64 --decode
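
The same thing in one line, pulling just the password field out with jsonpath (the trailing echo only adds a newline):

$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode; echo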

Open UI at https://127.0.0.1:8080/applications





Apply application.yaml on k8s cluster via kubectl

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default

  source:
    repoURL: https://github.com/dveselka/devops-k8s.git
    targetRevision: HEAD
    path: argocd
    
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp

  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    automated:
      selfHeal: true
      prune: true

Apply

$ cp * /git/devops-k8s/argocd
dave@dave:/git/argocd-app-config/dev$ ls
application.yaml  deployment.yaml  service.yaml
dave@dave:/git/argocd-app-config/dev$ kubectl apply -f application.yaml 
application.argoproj.io/guestbook created
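
The Application CR exposes sync and health state as printer columns, so plain kubectl is enough to watch the rollout:

$ kubectl get application guestbook -n argocd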

Argo CD UI - unsynced


Argo CD UI with synced application 





Argo CD descriptors






Get pods in myapp namespace
$  kubectl get pods -n myapp
NAME                     READY   STATUS    RESTARTS   AGE
myapp-55c645d9b5-62llh   1/1     Running   0          6m29s
myapp-55c645d9b5-vmtmb   1/1     Running   0          6m29s

Get services in myapp namespace
$  kubectl get svc  -n myapp
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
myapp-service   ClusterIP   10.104.101.233   <none>        8080/TCP   6m35s
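
To hit the app without an Ingress, a port-forward from a second terminal works (9090 is an arbitrary local port):

$ kubectl port-forward svc/myapp-service -n myapp 9090:8080
$ curl http://localhost:9090/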

Minikube dashboard



Deployments 







Thursday, October 10, 2024

Create GCP VM via Terraform

 HOWTO



See also


GitHub


Initialize Terraform
dave@dave:/git/devops-terraform/gcp$ terraform init
Initializing the backend...
Initializing modules...
- mynet-eu-vm in instance
- mynet-us-vm in instance
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.6.0...
- Installed hashicorp/google v6.6.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.


Provider
provider "google" {}

Variables
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
  default = "e2-micro"
}
variable "instance_network" {}

Network 
$ cat mynetwork.tf
# Create the mynetwork network
resource "google_compute_network" "mynetwork" {
  name = "mynetwork"
  # RESOURCE properties go here
  auto_create_subnetworks = "true"
}
# Add a firewall rule to allow HTTP, SSH, RDP and ICMP traffic on mynetwork
resource "google_compute_firewall" "mynetwork-allow-http-ssh-rdp-icmp" {
  name = "mynetwork-allow-http-ssh-rdp-icmp"
  # RESOURCE properties go here
  network = google_compute_network.mynetwork.self_link
  allow {
    protocol = "tcp"
    ports    = ["22", "80", "3389"]
  }
  allow {
    protocol = "icmp"
  }
  source_ranges = ["0.0.0.0/0"]
}
# Create the mynet-us-vm instance
module "mynet-us-vm" {
  source           = "./instance"
  instance_name    = "mynet-us-vm"
  instance_zone    = "us-central1-a"
  instance_network = google_compute_network.mynetwork.self_link
}
# Create the mynet-eu-vm instance
module "mynet-eu-vm" {
  source           = "./instance"
  instance_name    = "mynet-eu-vm"
  instance_zone    = "europe-west1-d"
  instance_network = google_compute_network.mynetwork.self_link
}

Instance
resource "google_compute_instance" "vm_instance" {
  name = "${var.instance_name}"
  # RESOURCE properties go here
  zone         = "${var.instance_zone}"
  machine_type = "${var.instance_type}"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
      }
  }
    network_interface {
    network = "${var.instance_network}"
    access_config {
      # Allocate a one-to-one NAT IP to the instance
    }
  }
}

Project structure
dave@dave:/git/devops-terraform/gcp$ find
.
./provider.tf
./.terraform
./.terraform/modules
./.terraform/modules/modules.json
./.terraform/providers
./.terraform/providers/registry.terraform.io
./.terraform/providers/registry.terraform.io/hashicorp
./.terraform/providers/registry.terraform.io/hashicorp/google
./.terraform/providers/registry.terraform.io/hashicorp/google/6.6.0
./.terraform/providers/registry.terraform.io/hashicorp/google/6.6.0/linux_amd64
./.terraform/providers/registry.terraform.io/hashicorp/google/6.6.0/linux_amd64/LICENSE.txt
./.terraform/providers/registry.terraform.io/hashicorp/google/6.6.0/linux_amd64/terraform-provider-google_v6.6.0_x5
./instance
./instance/main.tf
./instance/variables.tf
./.terraform.lock.hcl
./mynetwork.tf


1st attempt terraform plan 
$ terraform plan

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Attempted to load application default credentials since neither `credentials` nor `access_token` was set in the provider block.  No credentials loaded. To use your gcloud credentials, run 'gcloud auth application-default login'
│ 
│   with provider["registry.terraform.io/hashicorp/google"],
│   on provider.tf line 1, in provider "google":
│    1: provider "google" {}
│ 
│ google: could not find default credentials. See https://cloud.google.com/docs/authentication/external/set-up-adc for more information


Add GCP credentials
$ gcloud auth application-default login
Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?....

Credentials saved to file: [/home/dave/.config/gcloud/application_default_credentials.json]

These credentials will be used by any library that requests Application Default Credentials (ADC).

Quota project "dave-terraform" was added to ADC which can be used by Google client libraries for billing and quota. Note that some services may still bill the project owning the resource.

2nd attempt to run terraform plan
$ terraform plan

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Failed to retrieve project, pid: , err: project: required field is not set
│ 
│   with google_compute_network.mynetwork,
│   on mynetwork.tf line 2, in resource "google_compute_network" "mynetwork":
│    2: resource "google_compute_network" "mynetwork" {

Add project id into provider.tf
provider "google" {
    project = "dave-terraform"
}
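
Hard-coding the project works, but the google provider can also pick it up from the environment, which keeps provider.tf generic:

$ export GOOGLE_PROJECT=dave-terraform
$ terraform plan
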
3rd attempt terraform plan
$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_firewall.mynetwork-allow-http-ssh-rdp-icmp will be created
  + resource "google_compute_firewall" "mynetwork-allow-http-ssh-rdp-icmp" {
      + creation_timestamp = (known after apply)
      + destination_ranges = (known after apply)
      + direction          = (known after apply)
      + enable_logging     = (known after apply)
      + id                 = (known after apply)
      + name               = "mynetwork-allow-http-ssh-rdp-icmp"
      + network            = (known after apply)
      + priority           = 1000
      + project            = "dave-terraform"
      + self_link          = (known after apply)
      + source_ranges      = [
          + "0.0.0.0/0",
        ]

      + allow {
          + ports    = [
              + "22",
              + "80",
              + "3389",
            ]
          + protocol = "tcp"
        }
      + allow {
          + ports    = []
          + protocol = "icmp"
        }
    }

  # google_compute_network.mynetwork will be created
  + resource "google_compute_network" "mynetwork" {
      + auto_create_subnetworks                   = true
      + delete_default_routes_on_create           = false
      + gateway_ipv4                              = (known after apply)
      + id                                        = (known after apply)
      + internal_ipv6_range                       = (known after apply)
      + mtu                                       = (known after apply)
      + name                                      = "mynetwork"
      + network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL"
      + numeric_id                                = (known after apply)
      + project                                   = "dave-terraform"
      + routing_mode                              = (known after apply)
      + self_link                                 = (known after apply)
    }

  # module.mynet-eu-vm.google_compute_instance.vm_instance will be created
  + resource "google_compute_instance" "vm_instance" {
      + can_ip_forward       = false
      + cpu_platform         = (known after apply)
      + current_status       = (known after apply)
      + deletion_protection  = false
      + effective_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + id                   = (known after apply)
      + instance_id          = (known after apply)
      + label_fingerprint    = (known after apply)
      + machine_type         = "e2-micro"
      + metadata_fingerprint = (known after apply)
      + min_cpu_platform     = (known after apply)
      + name                 = "mynet-eu-vm"
      + project              = "dave-terraform"
      + self_link            = (known after apply)
      + tags_fingerprint     = (known after apply)
      + terraform_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + zone                 = "europe-west1-d"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image                  = "debian-cloud/debian-11"
              + labels                 = (known after apply)
              + provisioned_iops       = (known after apply)
              + provisioned_throughput = (known after apply)
              + resource_policies      = (known after apply)
              + size                   = (known after apply)
              + type                   = (known after apply)
            }
        }

      + confidential_instance_config (known after apply)

      + guest_accelerator (known after apply)

      + network_interface {
          + internal_ipv6_prefix_length = (known after apply)
          + ipv6_access_type            = (known after apply)
          + ipv6_address                = (known after apply)
          + name                        = (known after apply)
          + network                     = (known after apply)
          + network_ip                  = (known after apply)
          + stack_type                  = (known after apply)
          + subnetwork                  = (known after apply)
          + subnetwork_project          = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = (known after apply)
            }
        }

      + reservation_affinity (known after apply)

      + scheduling (known after apply)
    }

  # module.mynet-us-vm.google_compute_instance.vm_instance will be created
  + resource "google_compute_instance" "vm_instance" {
      + can_ip_forward       = false
      + cpu_platform         = (known after apply)
      + current_status       = (known after apply)
      + deletion_protection  = false
      + effective_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + id                   = (known after apply)
      + instance_id          = (known after apply)
      + label_fingerprint    = (known after apply)
      + machine_type         = "e2-micro"
      + metadata_fingerprint = (known after apply)
      + min_cpu_platform     = (known after apply)
      + name                 = "mynet-us-vm"
      + project              = "dave-terraform"
      + self_link            = (known after apply)
      + tags_fingerprint     = (known after apply)
      + terraform_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + zone                 = "us-central1-a"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image                  = "debian-cloud/debian-11"
              + labels                 = (known after apply)
              + provisioned_iops       = (known after apply)
              + provisioned_throughput = (known after apply)
              + resource_policies      = (known after apply)
              + size                   = (known after apply)
              + type                   = (known after apply)
            }
        }

      + confidential_instance_config (known after apply)

      + guest_accelerator (known after apply)

      + network_interface {
          + internal_ipv6_prefix_length = (known after apply)
          + ipv6_access_type            = (known after apply)
          + ipv6_address                = (known after apply)
          + name                        = (known after apply)
          + network                     = (known after apply)
          + network_ip                  = (known after apply)
          + stack_type                  = (known after apply)
          + subnetwork                  = (known after apply)
          + subnetwork_project          = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = (known after apply)
            }
        }

      + reservation_affinity (known after apply)

      + scheduling (known after apply)
    }

Plan: 4 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
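
To pin the apply to exactly the planned actions, the plan can be saved with -out and applied from that file; a minimal sketch (tfplan is an arbitrary file name):

$ terraform plan -out=tfplan
$ terraform apply tfplan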

Terraform apply


$ terraform apply


Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_compute_network.mynetwork: Creating...
google_compute_network.mynetwork: Still creating... [10s elapsed]
google_compute_network.mynetwork: Still creating... [20s elapsed]
google_compute_network.mynetwork: Still creating... [30s elapsed]
google_compute_network.mynetwork: Still creating... [40s elapsed]
google_compute_network.mynetwork: Still creating... [50s elapsed]
google_compute_network.mynetwork: Creation complete after 53s [id=projects/dave-terraform/global/networks/mynetwork]
google_compute_firewall.mynetwork-allow-http-ssh-rdp-icmp: Creating...
module.mynet-eu-vm.google_compute_instance.vm_instance: Creating...
module.mynet-us-vm.google_compute_instance.vm_instance: Creating...
google_compute_firewall.mynetwork-allow-http-ssh-rdp-icmp: Still creating... [10s elapsed]
module.mynet-eu-vm.google_compute_instance.vm_instance: Still creating... [10s elapsed]
module.mynet-us-vm.google_compute_instance.vm_instance: Still creating... [10s elapsed]
google_compute_firewall.mynetwork-allow-http-ssh-rdp-icmp: Creation complete after 12s [id=projects/dave-terraform/global/firewalls/mynetwork-allow-http-ssh-rdp-icmp]
module.mynet-us-vm.google_compute_instance.vm_instance: Creation complete after 18s [id=projects/dave-terraform/zones/us-central1-a/instances/mynet-us-vm]
module.mynet-eu-vm.google_compute_instance.vm_instance: Still creating... [20s elapsed]
module.mynet-eu-vm.google_compute_instance.vm_instance: Creation complete after 26s [id=projects/dave-terraform/zones/europe-west1-d/instances/mynet-eu-vm]


Check created resources 

VPC network




Firewall 




VMs 
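
The created resources can also be checked from the command line; a sketch using gcloud, assuming dave-terraform is the active, authenticated project:

$ gcloud compute networks describe mynetwork --project dave-terraform
$ gcloud compute firewall-rules describe mynetwork-allow-http-ssh-rdp-icmp --project dave-terraform
$ gcloud compute instances list --project dave-terraform --filter="name ~ ^mynet-"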

  

Destroy created resources

$ terraform destroy

Terraform will perform the following actions:

  # google_compute_firewall.mynetwork-allow-http-ssh-rdp-icmp will be destroyed
  - resource "google_compute_firewall" "mynetwork-allow-http-ssh-rdp-icmp" {
      - creation_timestamp      = "2024-10-10T09:46:54.415-07:00" -> null
      - destination_ranges      = [] -> null
      - direction               = "INGRESS" -> null
      - disabled                = false -> null
      - id                      = "projects/dave-terraform/global/firewalls/mynetwork-allow-http-ssh-rdp-icmp" -> null
      - name                    = "mynetwork-allow-http-ssh-rdp-icmp" -> null
      - network                 = "https://www.googleapis.com/compute/v1/projects/dave-terraform/global/networks/mynetwork" -> null
      - priority                = 1000 -> null
      - project                 = "dave-terraform" -> null
      - self_link               = "https://www.googleapis.com/compute/v1/projects/dave-terraform/global/firewalls/mynetwork-allow-http-ssh-rdp-icmp" -> null
      - source_ranges           = [
          - "0.0.0.0/0",
        ] -> null
      - source_service_accounts = [] -> null
      - source_tags             = [] -> null
      - target_service_accounts = [] -> null
      - target_tags             = [] -> null
        # (1 unchanged attribute hidden)

      - allow {
          - ports    = [
              - "22",
              - "80",
              - "3389",
            ] -> null
          - protocol = "tcp" -> null
        }
      - allow {
          - ports    = [] -> null
          - protocol = "icmp" -> null
        }
    }

  # google_compute_network.mynetwork will be destroyed
  - resource "google_compute_network" "mynetwork" {
      - auto_create_subnetworks                   = true -> null
      - delete_default_routes_on_create           = false -> null
      - enable_ula_internal_ipv6                  = false -> null
      - id                                        = "projects/dave-terraform/global/networks/mynetwork" -> null
      - mtu                                       = 0 -> null
      - name                                      = "mynetwork" -> null
      - network_firewall_policy_enforcement_order = "AFTER_CLASSIC_FIREWALL" -> null
      - numeric_id                                = "8609352241044109862" -> null
      - project                                   = "dave-terraform" -> null
      - routing_mode                              = "REGIONAL" -> null
      - self_link                                 = "https://www.googleapis.com/compute/v1/projects/dave-terraform/global/networks/mynetwork" -> null
        # (3 unchanged attributes hidden)
    }

Plan: 0 to add, 0 to change, 2 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: 
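
For non-interactive runs the confirmation prompt can be skipped, and the state checked afterwards; a minimal sketch:

$ terraform destroy -auto-approve
$ terraform state list    # prints nothing once all managed resources are gone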


Sunday, September 15, 2024

Convert Java properties file to sed commands using ant

 HOWTO

GitHub

Install antcontrib 

Get via Maven  https://github.com/dveselka/java-tutorial/blob/master/tools/pom.xml

To install ant-contrib:

  1. Copy ant-contrib-0.3.jar to the lib directory of your Ant installation. If you want to use one of the tasks in your own project, add the lines
    <taskdef resource="net/sf/antcontrib/antcontrib.properties"/>
    
    to your build file.
  2. Keep ant-contrib-0.3.jar in a separate location. You now have to tell Ant explicitly where to find it (say in /usr/share/java/lib):
    <taskdef resource="net/sf/antcontrib/antcontrib.properties">
      <classpath>
        <pathelement location="/usr/share/java/lib/ant-contrib-0.3.jar"/>
      </classpath>
    </taskdef>
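
For step 1, copying the jar that the pom below fetches into Ant's lib directory might look like this (the ANT_HOME location is an assumption):

$ cp target/lib/ant-contrib-1.0b3.jar "$ANT_HOME/lib/"

This post uses the second approach instead, pointing the taskdef classpath at target/lib.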

 

Get antcontrib.jar via Maven
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>dave</groupId>
    <artifactId>tools</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>dave</groupId>
        <artifactId>parent</artifactId>
        <version>1.0.0-SNAPSHOT</version>
    </parent>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/ant-contrib/ant-contrib -->
        <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>1.0b3</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/lib</outputDirectory>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

$ mvn clean install


[INFO] --- maven-dependency-plugin:2.8:copy-dependencies (default) @ tools ---
[INFO] Copying ant-contrib-1.0b3.jar to /git/java-tutorial/tools/target/lib/ant-contrib-1.0b3.jar
[INFO] Copying ant-1.5.jar to /git/java-tutorial/tools/target/lib/ant-1.5.jar

Properties file https://github.com/dveselka/java-tutorial/blob/master/tools/dave.properties

a=1
jdbc.url=jdbc:oracle:thin:dave@dave:1521/DAVE_DB
url=https://dave.com/dave
date=Sun Sep 15 08:43:48 AM CEST 2024

 

Build.xml to produce sed file from properties file https://github.com/dveselka/java-tutorial/blob/master/tools/build.xml

Notice that @{...} expands the value of the <for> loop parameter, while ${...} expands an Ant property.
<project name="properties-to-sed">

    <!-- Load the ant-contrib tasks (propertyselector, for) from the jar copied by the pom above -->
    <taskdef resource="net/sf/antcontrib/antlib.xml">
        <classpath>
            <pathelement location="target/lib/ant-contrib-1.0b3.jar"/>
        </classpath>
    </taskdef>

    <target name="convert">

        <!-- Load the properties under a "dave." prefix, then collect the bare key names into ${dave} -->
        <property file="dave.properties" prefix="dave."/>
        <propertyselector property="dave" match="dave\.(.*)" select="\1"/>

        <!-- For each key, append one sed substitution "s#key#value#g" to the output file -->
        <for param="key" list="${dave}">
            <sequential>
                <echo>@{key}=${dave.@{key}}</echo>
                <echo message="s#@{key}#${dave.@{key}}#g${line.separator}" file="replace-dave-props.sed" append="true"/>
            </sequential>
        </for>
    </target>
</project>
 

Run ant 

 
$ ant convert
Buildfile: /git/java-tutorial/tools/build.xml

convert:
     [echo] url=https://dave.com/dave
     [echo] a=1
     [echo] date=Sun Sep 15 08:43:48 AM CEST 2024
     [echo] jdbc.url=jdbc:oracle:thin:dave@dave:1521/DAVE_DB

BUILD SUCCESSFUL
Total time: 0 seconds

Sed file produced from properties file

$ cat replace-dave-props.sed
s#url#https://dave.com/dave#g
s#a#1#g
s#date#Sun Sep 15 08:43:48 AM CEST 2024#g
s#jdbc.url#jdbc:oracle:thin:dave@dave:1521/DAVE_DB#g
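
The generated script can then be applied with sed -f. Note that the rules are plain substring substitutions applied in order, so a short key like a also rewrites text produced by earlier rules; a sketch with a hypothetical template.txt:

$ echo "connect to url" > template.txt
$ sed -f replace-dave-props.sed template.txt

Here the url rule fires first, and the s#a#1#g rule then also hits the substituted value - something to watch with very short keys.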

 

Asking Gemini for hints - useful, it suggested the ant-contrib for loop, but the answer below needs fixing: the ant-contrib install/taskdef is missing, and the for cycle as written does not work (the property task exposes no keys() call, and Ant does not expand nested ${} references).

 

Approach:

  1. Create a build.xml file:

    • Write the Ant build script in a build.xml file.
    • Define a target to perform the conversion.
  2. Load properties file:

    • Use the property task to load the properties file.
  3. Generate sed commands:

    • Use the for task to iterate over the properties entries.
    • For each entry, create a sed command using the echo task.
  4. Write sed commands to file:

    • Use the echo task to write the generated sed commands to a file.

build.xml example:

<project name="properties-to-sed">
    <target name="convert">
        <property file="replacements.properties"/>

        <for param="key" elements="${replacements.keys()}">
            <sequential>
                <echo message="s/${replacements.${key}}/${replacements.${key}}/g" file="sed_script.sed" append="true"/>
            </sequential>
        </for>
    </target>
</project>

Explanation:

  • The replacements.properties file should contain key-value pairs where the key is the search pattern and the value is the replacement string. For example:
search1=replacement1
search2=replacement2
  • The build.xml file defines a convert target.
  • The property task loads the properties file.
  • The for task iterates over the keys in the properties file.
  • For each key, an echo task creates a sed command using the key and value as the search pattern and replacement string.
  • The echo task writes the sed command to the sed_script.sed file, appending it to the end of the file.

Additional considerations:

  • You can customize the sed command format based on your specific requirements.
  • If you need to escape special characters in the search patterns or replacement strings, use appropriate escape sequences.
  • For more complex scenarios or performance optimization, consider using a dedicated scripting language like Python or Perl.


Asking ChatGPT for comparison - the answer below has similar gaps: the build file omits the ant-contrib taskdef that its own explanation mentions, and the echoed sed command never uses the property values.


Daniel Veselka
convert properties file to sed commands using antcontrib for cycle
ChatGPT

To convert a properties file to sed commands using ant-contrib's for loop functionality, you'll need to follow a similar approach as previously mentioned, with some adjustments to implement the loop correctly. Below, I’ll provide a complete and detailed example, including how to set up your environment to use ant-contrib for iterating through properties.

Example Properties File

Assuming you have a properties file named config.properties:

# config.properties
app.name=MyApp
app.version=1.0.0
app.author=John Doe

Ant Build File
<project name="PropertiesToSed" default="convert" basedir=".">
    
    <property file="config.properties"/>
    
    <target name="convert">
        <echo message="Converting properties to sed commands..." />

        <!-- Loop through the properties defined in config.properties -->
        <property name="properties.list" value="app.name,app.version,app.author"/>

        <foreach list="${properties.list}" param="property">
            <sequential>
                <echo message="sed -i 's/${property}=.*/${property}=${property}:${property}/g' target_file.txt" />
            </sequential>
        </foreach>
        
    </target>
    
</project>

Explanation of the Ant Build File

  1. Task Definition for Ant-Contrib: The line <taskdef resource="antcontrib.properties" /> defines the ant-contrib functions so you can use them in your Ant script. Make sure you have the ant-contrib.jar in your Ant classpath.

  2. Loading Properties: The properties file is loaded into the project with <property file="config.properties"/>.

  3. Defining Properties List: A property properties.list is defined, listing all the keys in the properties file that you want to generate sed commands for.

  4. Using <for> Loop: The <for> loop is used to iterate over the properties list. Each property will be accessible via the variable ${property} in the loop.

  5. Generating sed Commands:

    • The echo command inside the loop outputs the sed command that will perform the desired replacement in target_file.txt.
    • If you want to replace the value of each property in the file, you can use ${property} to get the key and ${property.value} to retrieve the corresponding value.

Step to Execute the Ant Script

  1. Ensure you have ant-contrib: Download the ant-contrib.jar and place it in the lib directory of your Ant installation or add it to your classpath.

  2. Run the Ant command:

ant convert

Final Notes

  • The generated sed command will replace the line in target_file.txt that matches the pattern property=, updating it with the value from config.properties.
  • Adjust the target_file.txt with your actual target file; this is the file where the properties should be replaced.
  • Remember that you can customize the sed command based on how you want to implement the replacements based on your specific requirements.