
Sunday, August 3, 2025

OCP k8s CRC local monitoring

 HOWTO



Add memory

Cluster monitoring is disabled by default in CRC because of its memory footprint, so stop the instance, raise the VM memory, and enable monitoring:


$ crc stop
INFO Stopping kubelet and all containers...       
INFO Stopping the instance, this may take a few minutes... 
Stopped the instance
dave@fedora:~/Downloads$ crc config set memory 14336
Changes to configuration property 'memory' are only applied when the CRC instance is started.
If you already have a running CRC instance, then for this configuration change to take effect, stop the CRC instance with 'crc stop' and restart it with 'crc start'.
dave@fedora:~/Downloads$  crc config set enable-cluster-monitoring true
Successfully configured enable-cluster-monitoring to true
dave@fedora:~/Downloads$ crc start
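Before restarting, the values that will take effect can be double-checked with `crc config get` (a sketch; requires the crc binary on PATH and a live install, so not shown in the transcript above):

```shell
# Confirm the configuration that will apply on the next 'crc start'
crc config get memory
crc config get enable-cluster-monitoring
```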

Monitoring

Once the cluster is back up, switch to the admin context and list the clusterversion operator overrides (CRC uses these overrides to switch operators off):


dave@fedora:~/Downloads$ oc config use-context crc-admin
Switched to context "crc-admin".
dave@fedora:~/Downloads$ oc whoami
kubeadmin
dave@fedora:~/Downloads$ oc get clusterversion version -ojsonpath='{range .spec.overrides[*]}{.name}{"\n"}{end}' | nl -v -2
    -2    cluster-monitoring-operator
    -1    monitoring
     0    cloud-credential-operator
     1    cloud-credential
     2    cluster-autoscaler-operator
     3    cluster-autoscaler
     4    cluster-cloud-controller-manager-operator
     5    cloud-controller-manager
dave@fedora:~/Downloads$ crc config set enable-cluster-monitoring true
Successfully configured enable-cluster-monitoring to true
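The `nl -v -2` in the listing above simply starts line numbering at -2, which places the two monitoring-related overrides ahead of index 0. A quick local demonstration:

```shell
# nl -v -2 numbers stdin starting at -2, as used above to index the
# clusterversion overrides list
printf 'cluster-monitoring-operator\nmonitoring\ncloud-credential-operator\n' | nl -v -2
```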




Pod YAML
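The manifest below is a demo pod exactly as stored by the API server, managedFields and OVN annotations included. It was presumably captured with something like the following (an assumption; the exact command was not recorded):

```shell
# Dump the full server-side manifest of the demo pod
# (assumes the pod name and namespace from the YAML below)
oc get pod demo -n demo -o yaml
```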

kind: Pod
apiVersion: v1
metadata:
  name: demo
  namespace: demo
  uid: 6be00704-f3ca-4d4b-9af6-f6bee0494ba1
  resourceVersion: '37632'
  creationTimestamp: '2025-08-03T09:24:59Z'
  labels:
    run: demo
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.66/23"],"mac_address":"0a:58:0a:d9:00:42","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.66/23","gateway_ip":"10.217.0.1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.217.0.66"
          ],
          "mac": "0a:58:0a:d9:00:42",
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: anyuid
  managedFields:
    - manager: crc
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.ovn.org/pod-networks': {}
      subresource: status
    - manager: kubectl-run
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:run': {}
        'f:spec':
          'f:containers':
            'k:{"name":"demo"}':
              .: {}
              'f:command': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
          'f:dnsPolicy': {}
          'f:enableServiceLinks': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:securityContext': {}
          'f:terminationGracePeriodSeconds': {}
    - manager: multus-daemon
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:24:59Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:k8s.v1.cni.cncf.io/network-status': {}
      subresource: status
    - manager: kubelet
      operation: Update
      apiVersion: v1
      time: '2025-08-03T09:55:05Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            'k:{"type":"ContainersReady"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Initialized"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"PodReadyToStartContainers"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
            'k:{"type":"Ready"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:status': {}
              'f:type': {}
          'f:containerStatuses': {}
          'f:hostIP': {}
          'f:hostIPs': {}
          'f:phase': {}
          'f:podIP': {}
          'f:podIPs':
            .: {}
            'k:{"ip":"10.217.0.66"}':
              .: {}
              'f:ip': {}
          'f:startTime': {}
      subresource: status
spec:
  restartPolicy: Always
  serviceAccountName: default
  imagePullSecrets:
    - name: default-dockercfg-p7cfm
  priority: 0
  schedulerName: default-scheduler
  enableServiceLinks: true
  terminationGracePeriodSeconds: 30
  preemptionPolicy: PreemptLowerPriority
  nodeName: crc
  securityContext:
    seLinuxOptions:
      level: 's0:c26,c0'
  containers:
    - resources: {}
      terminationMessagePath: /dev/termination-log
      name: demo
      command:
        - sleep
        - 600s
      securityContext:
        capabilities:
          drop:
            - MKNOD
      imagePullPolicy: Always
      volumeMounts:
        - name: kube-api-access-z7xrc
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
  serviceAccount: default
  volumes:
    - name: kube-api-access-z7xrc
      projected:
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - path: namespace
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.namespace
          - configMap:
              name: openshift-service-ca.crt
              items:
                - key: service-ca.crt
                  path: service-ca.crt
        defaultMode: 420
  dnsPolicy: ClusterFirst
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
status:
  containerStatuses:
    - restartCount: 3
      started: true
      ready: true
      name: demo
      state:
        running:
          startedAt: '2025-08-03T09:55:04Z'
      volumeMounts:
        - name: kube-api-access-z7xrc
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
          recursiveReadOnly: Disabled
      imageID: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
      image: 'image-registry.openshift-image-registry.svc:5000/demo/ubi8@sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c'
      lastState:
        terminated:
          exitCode: 0
          reason: Completed
          startedAt: '2025-08-03T09:45:03Z'
          finishedAt: '2025-08-03T09:55:03Z'
          containerID: 'cri-o://fe191488478ade4fb4fbd6f045f4896cdd3128a02cd23c9163dea85039de6efc'
      containerID: 'cri-o://f9351ee373330f42397d46be58820ae2d34616afcddf9be923e30bbd906b5238'
  qosClass: BestEffort
  hostIPs:
    - ip: 192.168.126.11
  podIPs:
    - ip: 10.217.0.66
  podIP: 10.217.0.66
  hostIP: 192.168.126.11
  startTime: '2025-08-03T09:24:59Z'
  conditions:
    - type: PodReadyToStartContainers
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:25:03Z'
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:24:59Z'
    - type: Ready
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:55:05Z'
    - type: ContainersReady
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:55:05Z'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2025-08-03T09:24:59Z'
  phase: Running
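Worth noting in the status above: `restartCount: 3`. The container's command is `sleep 600s`, so it exits cleanly every ten minutes (see `lastState.terminated` with reason `Completed`, started 09:45:03 and finished 09:55:03) and `restartPolicy: Always` relaunches it. With a live cluster the counter can be watched via `oc get pod demo -n demo -o jsonpath='{.status.containerStatuses[0].restartCount}'`; the same field can also be pulled from a saved manifest, a sketch with a stand-in file:

```shell
# Extract restartCount from a saved pod manifest (hypothetical file path;
# the JSON here is a minimal stand-in for a real 'oc get pod -o json' dump)
printf '{"status":{"containerStatuses":[{"restartCount":3}]}}' > /tmp/demo-pod.json
python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["containerStatuses"][0]["restartCount"])' < /tmp/demo-pod.json
# prints: 3
```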



Deploying a sample application into OCP CRC local with odo

HOWTO

See also


 Prerequisites


Installing odo



dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64 -o odo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 90.1M  100 90.1M    0     0  4255k      0  0:00:21  0:00:21 --:--:-- 4785k
dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256)  odo" | shasum -a 256 --check
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    64  100    64    0     0    181      0 --:--:-- --:--:-- --:--:--   181
bash: shasum: command not found...
Install package 'perl-Digest-SHA' to provide command 'shasum'? [N/y] y


 * Waiting in queue... 
 * Loading list of packages.... 
The following packages have to be installed:
 perl-Digest-SHA-1:6.04-513.fc42.x86_64    Perl extension for SHA-1/224/256/384/512
Proceed with changes? [N/y] y


 * Waiting in queue... 
 * Waiting for authentication... 
 * Waiting in queue... 
 * Downloading packages... 
 * Requesting data... 
 * Testing changes... 
 * Installing packages... 
shasum: standard input: no properly formatted SHA checksum lines found

dave@fedora:~/Downloads$ curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.16.1/odo-linux-amd64.sha256 -o odo.sha256
echo "$(<odo.sha256)  odo" | shasum -a 256 --check
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    64  100    64    0     0    385      0 --:--:-- --:--:-- --:--:--   385
odo: OK
dave@fedora:~/Downloads$ sudo install -o root -g root -m 0755 odo /usr/local/bin/odo
[sudo] password for dave: 
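About the first check's "no properly formatted SHA checksum lines found": the mirror's `.sha256` file contains only the bare hash, so the filename must be appended before checking, and on the first attempt the piped line was likely lost while the command-not-found handler was installing `shasum`. The same verification works with `sha256sum` from coreutils, with no extra package; a self-contained sketch using a stand-in file:

```shell
# Reproduce the checksum verification locally (stand-in file; sha256sum
# from coreutils avoids the perl-Digest-SHA dependency)
cd "$(mktemp -d)"
printf 'stand-in for the odo binary\n' > odo
sha256sum odo | awk '{print $1}' > odo.sha256   # mirror-style file: bare hash only
echo "$(cat odo.sha256)  odo" | sha256sum --check
# prints: odo: OK
```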

dave@fedora:~/Downloads$  odo login -u developer -p developer
Connecting to the OpenShift cluster

Login successful.

You don't have any projects. You can try to create a new project, by running

    odo create project <projectname>

dave@fedora:~/Downloads$ odo create project sample-app
 ✓  Creating the project "sample-app" [80ms]
 ✓  Project "sample-app" is ready for use
 ✓  New project created and now using project: sample-app
dave@fedora:~/Downloads$ mkdir sample-app
dave@fedora:~/Downloads$ cd sample-app
dave@fedora:~/Downloads/sample-app$ git clone https://github.com/openshift/nodejs-ex
Cloning into 'nodejs-ex'...
remote: Enumerating objects: 836, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 836 (delta 0), reused 0 (delta 0), pack-reused 835 (from 2)
Receiving objects: 100% (836/836), 773.00 KiB | 908.00 KiB/s, done.
Resolving deltas: 100% (321/321), done.
dave@fedora:~/Downloads/sample-app$ cd nodejs-ex




Install local OpenShift - CRC

 HOWTO



Install prerequisites
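The transcript below starts at the sudo prompt; the command that produced it was most likely the documented CRC prerequisite install (an assumption, since the invocation itself wasn't captured):

```shell
# CRC on Fedora needs NetworkManager and libvirt; the dnf transcript below
# matches this (NetworkManager already present, libvirt newly installed)
sudo dnf install NetworkManager libvirt
```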


[sudo] password for dave: 
Updating and loading repositories:
Repositories loaded.
Package "NetworkManager-1:1.52.1-1.fc42.x86_64" is already installed.

Package                                                Arch          Version                                                Repository                         Size
Installing:
 libvirt                                               x86_64        11.0.0-3.fc42                                          updates                         0.0   B
Installing dependencies:
 libvirt-client-qemu                                   x86_64        11.0.0-3.fc42                                          updates                        64.0 KiB
 libvirt-daemon-config-nwfilter                        x86_64        11.0.0-3.fc42                                          updates                        20.2 KiB
 libvirt-daemon-driver-ch                              x86_64        11.0.0-3.fc42                                          updates                       838.1 KiB
 libvirt-daemon-driver-libxl                           x86_64        11.0.0-3.fc42                                          updates                         1.0 MiB
 libvirt-daemon-driver-lxc                             x86_64        11.0.0-3.fc42                                          updates                         1.1 MiB
 libvirt-daemon-driver-vbox                            x86_64        11.0.0-3.fc42                                          updates                       949.7 KiB
 python3-libvirt                                       x86_64        11.0.0-1.fc42                                          fedora                          2.0 MiB

Transaction Summary:
 Installing:         8 packages

Total size of inbound packages is 1 MiB. Need to download 1 MiB.
After this operation, 6 MiB extra will be used (install 6 MiB, remove 0 B).
Is this ok [y/N]: y
[1/8] libvirt-0:11.0.0-3.fc42.x86_64                                                                                       100% |  52.9 KiB/s |  10.8 KiB |  00m00s
[2/8] libvirt-client-qemu-0:11.0.0-3.fc42.x86_64                                                                           100% | 135.2 KiB/s |  31.2 KiB |  00m00s
[3/8] libvirt-daemon-config-nwfilter-0:11.0.0-3.fc42.x86_64                                                                100% |  98.9 KiB/s |  23.2 KiB |  00m00s
[4/8] libvirt-daemon-driver-ch-0:11.0.0-3.fc42.x86_64                                                                      100% | 772.5 KiB/s | 227.1 KiB |  00m00s
[5/8] libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                   100% |   1.1 MiB/s | 299.4 KiB |  00m00s
[6/8] libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                     100% | 969.8 KiB/s | 313.2 KiB |  00m00s
[7/8] python3-libvirt-0:11.0.0-1.fc42.x86_64                                                                               100% |   1.4 MiB/s | 363.5 KiB |  00m00s
[8/8] libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                    100% |   1.0 MiB/s | 267.1 KiB |  00m00s
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
[8/8] Total                                                                                                                100% |   1.4 MiB/s |   1.5 MiB |  00m01s
Running transaction
[ 1/10] Verify package files                                                                                               100% |   1.3 KiB/s |   8.0   B |  00m00s
[ 2/10] Prepare transaction                                                                                                100% |  41.0   B/s |   8.0   B |  00m00s
[ 3/10] Installing python3-libvirt-0:11.0.0-1.fc42.x86_64                                                                  100% | 137.1 MiB/s |   2.1 MiB |  00m00s
[ 4/10] Installing libvirt-client-qemu-0:11.0.0-3.fc42.x86_64                                                              100% |   2.5 MiB/s |  64.7 KiB |  00m00s
[ 5/10] Installing libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                       100% |  33.2 MiB/s | 952.0 KiB |  00m00s
[ 6/10] Installing libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                        100% |  33.1 MiB/s |   1.1 MiB |  00m00s
[ 7/10] Installing libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                      100% |  33.5 MiB/s |   1.0 MiB |  00m00s
[ 8/10] Installing libvirt-daemon-driver-ch-0:11.0.0-3.fc42.x86_64                                                         100% |  32.8 MiB/s | 840.2 KiB |  00m00s
[ 9/10] Installing libvirt-daemon-config-nwfilter-0:11.0.0-3.fc42.x86_64                                                   100% | 115.1 KiB/s |  14.2 KiB |  00m00s
[10/10] Installing libvirt-0:11.0.0-3.fc42.x86_64                                                                          100% | 147.0   B/s | 124.0   B |  00m01s
>>> Running %posttrans scriptlet: libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                                
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-vbox-0:11.0.0-3.fc42.x86_64                                                                               
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd.socket' → '/usr/lib/systemd/system/virtvboxd.socket'.                                      
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd-ro.socket' → '/usr/lib/systemd/system/virtvboxd-ro.socket'.                                
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtvboxd-admin.socket' → '/usr/lib/systemd/system/virtvboxd-admin.socket'.                          
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtvboxd.service' → '/usr/lib/systemd/system/virtvboxd.service'.                                 
>>>                                                                                                                                                                
>>> Running %posttrans scriptlet: libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                                 
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-lxc-0:11.0.0-3.fc42.x86_64                                                                                
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd.socket' → '/usr/lib/systemd/system/virtlxcd.socket'.                                        
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd-ro.socket' → '/usr/lib/systemd/system/virtlxcd-ro.socket'.                                  
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlxcd-admin.socket' → '/usr/lib/systemd/system/virtlxcd-admin.socket'.                            
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtlxcd.service' → '/usr/lib/systemd/system/virtlxcd.service'.                                   
>>>                                                                                                                                                                
>>> Running %posttrans scriptlet: libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                               
>>> Finished %posttrans scriptlet: libvirt-daemon-driver-libxl-0:11.0.0-3.fc42.x86_64                                                                              
>>> Scriptlet output:                                                                                                                                              
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend.socket' → '/usr/lib/systemd/system/virtxend.socket'.                                        
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend-ro.socket' → '/usr/lib/systemd/system/virtxend-ro.socket'.                                  
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtxend-admin.socket' → '/usr/lib/systemd/system/virtxend-admin.socket'.                            
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/virtxend.service' → '/usr/lib/systemd/system/virtxend.service'.                                   
>>> Created symlink '/etc/systemd/system/sockets.target.wants/virtlockd-admin.socket' → '/usr/lib/systemd/system/virtlockd-admin.socket'.                          
>>>                                                                                                                                                                
Complete!

Download the installation archive






Install CRC



 
dave@fedora:~$ cd ~/Downloads/
dave@fedora:~/Downloads$ ls -l crc-linux-amd64.tar.xz 
-rw-r--r--. 1 dave dave 37031432 Aug  3 09:32 crc-linux-amd64.tar.xz
dave@fedora:~/Downloads$ tar xvf crc-linux-amd64.tar.xz
crc-linux-2.53.0-amd64/
crc-linux-2.53.0-amd64/LICENSE
crc-linux-2.53.0-amd64/crc
dave@fedora:~/Downloads$ mkdir -p ~/bin
dave@fedora:~/Downloads$  cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
dave@fedora:~/Downloads$ export PATH=$PATH:$HOME/bin
dave@fedora:~/Downloads$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
dave@fedora:~/Downloads$ find ~/bin/
/home/dave/bin/
/home/dave/bin/crc

Creating CRC
$ crc delete # Remove previous cluster (if present)
$ crc config set preset openshift # Configure to use openshift preset
$ crc setup # Initialize environment for cluster
$ crc start # Start the cluster

Setup CRC


dave@fedora:~/Downloads$ crc config set preset openshift 
To confirm your system is ready, and you have the needed system bundle, please run 'crc setup' before 'crc start'.
dave@fedora:~/Downloads$ crc setup 
CRC is constantly improving and we would like to know more about usage (more details at https://developers.redhat.com/article/tool-data-collection)
Your preference can be changed manually if desired using 'crc config set consent-telemetry <yes/no>'
Would you like to contribute anonymous usage statistics? [y/N]: y
Thanks for helping us! You can disable telemetry with the command 'crc config set consent-telemetry no'.
INFO Using bundle path /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle 
INFO Checking if running as non-root              
INFO Checking if running inside WSL2              
INFO Checking if crc-admin-helper executable is cached 
INFO Caching crc-admin-helper executable          
INFO Using root access: Changing ownership of /home/dave/.crc/bin/crc-admin-helper-linux-amd64 
[sudo] password for dave: 
INFO Using root access: Setting suid for /home/dave/.crc/bin/crc-admin-helper-linux-amd64 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Creating symlink for crc executable          
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /home/dave/.crc/bin/oc 
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Adding user to libvirt group                 
INFO Using root access: Adding user to the libvirt group 
INFO Checking if active user/process is currently part of the libvirt group 
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Installing crc-driver-libvirt                
INFO Checking crc daemon systemd service          
INFO Setting up crc daemon systemd service        
INFO Checking crc daemon systemd socket units     
INFO Setting up crc daemon systemd socket units   
INFO Checking if vsock is correctly configured    
INFO Setting up vsock support                     
INFO Using root access: Setting CAP_NET_BIND_SERVICE capability for /home/dave/bin/crc executable 
INFO Using root access: Creating udev rule for /dev/vsock 
INFO Using root access: Changing permissions for /etc/udev/rules.d/99-crc-vsock.rules to 644  
INFO Using root access: Reloading udev rules database 
INFO Using root access: Loading vhost_vsock kernel module 
INFO Using root access: Creating file /etc/modules-load.d/vhost_vsock.conf 
INFO Using root access: Changing permissions for /etc/modules-load.d/vhost_vsock.conf to 644  
INFO Checking if CRC bundle is extracted in '$HOME/.crc' 
INFO Checking if /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle exists 
INFO Getting bundle for the CRC executable        
INFO Downloading bundle: /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle... 

Start CRC


dave@fedora:~/Downloads$ crc start
INFO Using bundle path /home/dave/.crc/cache/crc_libvirt_4.19.3_amd64.crcbundle 
INFO Checking if running as non-root              
INFO Checking if running inside WSL2              
INFO Checking if crc-admin-helper executable is cached 
INFO Checking if running on a supported CPU architecture 
INFO Checking if crc executable symlink exists    
INFO Checking minimum RAM requirements            
INFO Check if Podman binary exists in: /home/dave/.crc/bin/oc 
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if active user/process is currently part of the libvirt group 
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking crc daemon systemd socket units     
INFO Checking if vsock is correctly configured    
INFO Loading bundle: crc_libvirt_4.19.3_amd64...  
CRC requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local.
? Please enter the pull secret *********
X Sorry, your reply was invalid: invalid pull secret: invalid character 'c' looking for beginning of value
? Please enter the pull secret ******************************************************************************************************************
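The "invalid character 'c' looking for beginning of value" message is Go's JSON parser rejecting the first paste: the pull secret must be the raw JSON downloaded from console.redhat.com, not a wrapped or partial copy. It can be sanity-checked before pasting, a sketch (the file name is an example):

```shell
# Validate that the saved pull secret parses as JSON before handing it to crc
python3 -m json.tool < pull-secret.json > /dev/null && echo "pull secret parses as JSON"
```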
INFO Creating CRC VM for OpenShift 4.19.3...      
INFO Generating new SSH key pair...               
INFO Generating new password for the kubeadmin user 
INFO Starting CRC VM for openshift 4.19.3...      
INFO CRC instance is running with IP 127.0.0.1    
INFO CRC VM is running                            
INFO Updating authorized keys...                  
INFO Configuring shared directories               
INFO Check internal and public DNS query...       
INFO Check DNS query from host...                 
INFO Verifying validity of the kubelet certificates... 
INFO Starting kubelet service                     
INFO Waiting for kube-apiserver availability... [takes around 2min] 
INFO Adding user's pull secret to the cluster...  
INFO Updating SSH key to machine config resource... 
INFO Waiting until the user's pull secret is written to the instance disk... 



Started OCP cluster 


Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: SOME-PASSWORD

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

oc projects
dave@fedora:~/Downloads$ oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    hostpath-provisioner
    kube-node-lease
    kube-public
    kube-system
    openshift
    openshift-apiserver
    openshift-apiserver-operator
    openshift-authentication
    openshift-authentication-operator
    openshift-cloud-network-config-controller
    openshift-cloud-platform-infra
    openshift-cluster-machine-approver
    openshift-cluster-samples-operator
    openshift-cluster-storage-operator
    openshift-cluster-version
    openshift-config
    openshift-config-managed
    openshift-config-operator
    openshift-console
    openshift-console-operator
    openshift-console-user-settings
    openshift-controller-manager
    openshift-controller-manager-operator
    openshift-dns
    openshift-dns-operator
    openshift-etcd
    openshift-etcd-operator
    openshift-host-network
    openshift-image-registry
    openshift-infra
    openshift-ingress
    openshift-ingress-canary
    openshift-ingress-operator
    openshift-kni-infra
    openshift-kube-apiserver
    openshift-kube-apiserver-operator
    openshift-kube-controller-manager
    openshift-kube-controller-manager-operator
    openshift-kube-scheduler
    openshift-kube-scheduler-operator
    openshift-kube-storage-version-migrator
    openshift-kube-storage-version-migrator-operator
    openshift-machine-api
    openshift-machine-config-operator
    openshift-marketplace
    openshift-monitoring
    openshift-multus
    openshift-network-console
    openshift-network-diagnostics
    openshift-network-node-identity
    openshift-network-operator
    openshift-node
    openshift-nutanix-infra
    openshift-oauth-apiserver
    openshift-openstack-infra
    openshift-operator-lifecycle-manager
    openshift-operators
    openshift-ovirt-infra
    openshift-ovn-kubernetes
    openshift-route-controller-manager
    openshift-service-ca
    openshift-service-ca-operator
    openshift-user-workload-monitoring
    openshift-vsphere-infra

Using project "default" on server "https://api.crc.testing:6443".

Login to OCP console



Login using CLI


dave@fedora:~/Downloads$ oc login -u developer https://api.crc.testing:6443
Logged into "https://api.crc.testing:6443" as "developer" using existing credentials.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

dave@fedora:~/Downloads$ oc whoami
developer

Become admin via CLI

dave@fedora:~/Downloads$ oc whoami
developer
dave@fedora:~/Downloads$ oc config use-context crc-admin
Switched to context "crc-admin".
dave@fedora:~/Downloads$ oc whoami
kubeadmin
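`crc start` writes both an admin and a developer context into the default kubeconfig, so flipping between the two identities is just a context switch; a sketch (the `crc-developer` context name is taken from CRC's documentation and may differ on other versions):

```shell
# List the kubeconfig contexts CRC created and hop between them
oc config get-contexts
oc config use-context crc-admin      # become kubeadmin
oc config use-context crc-developer  # back to developer
```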
dave@fedora:~/Downloads$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.19.3    True        False         False      12m     
config-operator                            4.19.3    True        False         False      23d     
console                                    4.19.3    True        False         False      15m     
control-plane-machine-set                  4.19.3    True        False         False      23d     
dns                                        4.19.3    True        False         False      16m     
etcd                                       4.19.3    True        False         False      23d     
image-registry                             4.19.3    True        False         False      15m     
ingress                                    4.19.3    True        False         False      23d     
kube-apiserver                             4.19.3    True        False         False      23d     
kube-controller-manager                    4.19.3    True        False         False      23d     
kube-scheduler                             4.19.3    True        False         False      23d     
kube-storage-version-migrator              4.19.3    True        False         False      16m     
machine-api                                4.19.3    True        False         False      23d     
machine-approver                           4.19.3    True        False         False      23d     
machine-config                             4.19.3    True        False         False      23d     
marketplace                                4.19.3    True        False         False      23d     
network                                    4.19.3    True        False         False      23d     
openshift-apiserver                        4.19.3    True        False         False      16m     
openshift-controller-manager               4.19.3    True        False         False      5m57s   
openshift-samples                          4.19.3    True        False         False      23d     
operator-lifecycle-manager                 4.19.3    True        False         False      23d     
operator-lifecycle-manager-catalog         4.19.3    True        False         False      23d     
operator-lifecycle-manager-packageserver   4.19.3    True        False         False      16m     
service-ca                                 4.19.3    True        False         False      23d     
dave@fedora:~/Downloads$ 
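When the listing above is long, it helps to filter for trouble. A small sketch (assumes the same admin context; the column positions match the `oc get co` output shown above, so it prints nothing on a healthy cluster):

```shell
# Print only cluster operators that are not Available (column 3) or are
# Degraded (column 5); an empty result means all operators are healthy.
oc get co --no-headers | awk '$3 != "True" || $5 == "True"'
```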


Create demo project via CLI

dave@fedora:~/Downloads$ oc whoami
kubeadmin
dave@fedora:~/Downloads$  oc registry login --insecure=true
info: Using registry public hostname default-route-openshift-image-registry.apps-crc.testing
Saved credentials for default-route-openshift-image-registry.apps-crc.testing into /run/user/1000/containers/auth.json
dave@fedora:~/Downloads$  oc new-project demo
Now using project "demo" on server "https://api.crc.testing:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname

dave@fedora:~/Downloads$ oc image mirror registry.access.redhat.com/ubi8/ubi:latest=default-route-openshift-image-registry.apps-crc.testing/demo/ubi8:latest --insecure=true --filter-by-os=linux/amd64
default-route-openshift-image-registry.apps-crc.testing/
  demo/ubi8
    blobs:
      registry.access.redhat.com/ubi8/ubi sha256:2ff9823b0bdc42e2d925ed7de8114a50e10c0efc5ba3b71f9828cdd8b4463294 5.018KiB
      registry.access.redhat.com/ubi8/ubi sha256:6a22c1a537480d7699f4f391f3810de860e3dbe23b3fc4128ed78dda4189dda4 74.23MiB
    manifests:
      sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c -> latest
  stats: shared=0 unique=2 size=74.24MiB ratio=1.00

phase 0:
  default-route-openshift-image-registry.apps-crc.testing demo/ubi8 blobs=2 mounts=0 manifests=1 shared=0

info: Planning completed in 1.09s
uploading: default-route-openshift-image-registry.apps-crc.testing/demo/ubi8 sha256:6a22c1a537480d7699f4f391f3810de860e3dbe23b3fc4128ed78dda4189dda4 74.23MiB
sha256:0686ee6a1b9f7a4eb706b3562e50bbf55b929a573f6055a1128052b4b2266a2c default-route-openshift-image-registry.apps-crc.testing/demo/ubi8:latest
info: Mirroring completed in 20.67s (3.765MB/s)
dave@fedora:~/Downloads$ oc get is
NAME   IMAGE REPOSITORY                                                    TAGS     UPDATED
ubi8   default-route-openshift-image-registry.apps-crc.testing/demo/ubi8   latest   8 seconds ago
dave@fedora:~/Downloads$ oc set image-lookup ubi8
imagestream.image.openshift.io/ubi8 image lookup updated
dave@fedora:~/Downloads$ oc run demo --image=ubi8 --command -- sleep 600s
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "demo" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "demo" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "demo" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "demo" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/demo created
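The PodSecurity warning lists exactly which fields are missing from the pod spec. A sketch of a pod that satisfies the "restricted" profile, using the same image and namespace as above (the pod name `demo-restricted` is illustrative):

```shell
# Apply a pod spec with the securityContext fields the warning asks for;
# this should create the pod without any PodSecurity warning.
oc apply -f - <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  name: demo-restricted
  namespace: demo
  labels:
    run: demo
spec:
  containers:
  - name: demo
    image: ubi8
    command: ["sleep", "600"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
EOF
```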

Check demo app via OCP console


Project 


Topology

Pod 


Events






Sunday, December 8, 2024

Argo CD - k8s GitOps on minikube

HOWTO 

Git


Argo CD architecture






 Start minikube

dave@fedora:~$ minikube start --driver=docker
😄  minikube v1.32.0 on Fedora 40
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🎉  minikube 1.34.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.34.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
    ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
    ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
🔎  Verifying ingress addon...
💡  Some dashboard features require the metrics-server addon. To enable all features please run:

    minikube addons enable metrics-server    


🌟  Enabled addons: ingress-dns, storage-provisioner, dashboard, ingress, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

 

 

Install Argo CD 


dave@fedora:~$ kubectl create namespace argocd
namespace/argocd created
dave@fedora:~$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created

 Get pods in argocd namespace

$ kubectl get pod -n argocd
NAME                                                READY   STATUS              RESTARTS   AGE
argocd-application-controller-0                     0/1     ContainerCreating   0          20s
argocd-applicationset-controller-5c787df94f-npjn9   0/1     ContainerCreating   0          22s
argocd-dex-server-6bb9b5fc75-x4vt8                  0/1     Init:0/1            0          22s
argocd-notifications-controller-7ccbd7fb6-dm8bq     0/1     ContainerCreating   0          22s
argocd-redis-c5c567495-cgtrl                        0/1     Init:0/1            0          22s
argocd-repo-server-799b498d8b-pc2l8                 0/1     Init:0/1            0          22s
argocd-server-f6d4d8775-9hlcj                       0/1     ContainerCreating   0          21s
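Instead of re-running `kubectl get pod` until everything leaves ContainerCreating, the wait can be scripted:

```shell
# Block until every Argo CD pod reports Ready, or fail after 5 minutes.
kubectl wait --for=condition=Ready pod --all -n argocd --timeout=300s
```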

Get services in argocd namespace
$ kubectl get svc -n argocd
NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
argocd-applicationset-controller          ClusterIP   10.101.144.51   <none>        7000/TCP,8080/TCP            105s
argocd-dex-server                         ClusterIP   10.111.165.14   <none>        5556/TCP,5557/TCP,5558/TCP   105s
argocd-metrics                            ClusterIP   10.100.56.238   <none>        8082/TCP                     105s
argocd-notifications-controller-metrics   ClusterIP   10.111.151.85   <none>        9001/TCP                     105s
argocd-redis                              ClusterIP   10.97.35.44     <none>        6379/TCP                     104s
argocd-repo-server                        ClusterIP   10.109.34.145   <none>        8081/TCP,8084/TCP            104s
argocd-server                             ClusterIP   10.99.111.141   <none>        80/TCP,443/TCP               104s
argocd-server-metrics                     ClusterIP   10.106.186.75   <none>        8083/TCP                     104s

Port forward to access Argo CD UI
$ kubectl port-forward svc/argocd-server -n argocd 8080:443
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Get admin password
$ kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
apiVersion: v1
data:
  password: ab123==
kind: Secret
metadata:
  creationTimestamp: "2024-12-15T07:58:19Z"
  name: argocd-initial-admin-secret
  namespace: argocd
  resourceVersion: "190652"
  uid: a017dc3e-3232-40f6-a2e6-374872963888
type: Opaque

$ echo ab123== | base64 --decode
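The value under `.data.password` is base64-encoded, so it must be decoded before use (the `ab123==` above is a redacted placeholder). The decode step works on any base64 string, for example:

```shell
# Decode a base64 string; illustrative value, not a real password.
echo -n 'cGFzc3dvcmQ=' | base64 --decode
# prints: password
```

In practice retrieval and decode can be combined in one pipeline: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode`.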

Open UI at https://127.0.0.1:8080/applications





Apply application.yaml on k8s cluster via kubectl

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default

  source:
    repoURL: https://github.com/dveselka/devops-k8s.git
    targetRevision: HEAD
    path: argocd
    
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp

  syncPolicy:
    syncOptions:
    - CreateNamespace=true

    automated:
      selfHeal: true
      prune: true

Apply

$ cp * /git/devops-k8s/argocd
dave@dave:/git/argocd-app-config/dev$ ls
application.yaml  deployment.yaml  service.yaml
dave@dave:/git/argocd-app-config/dev$ kubectl apply -f application.yaml 
application.argoproj.io/guestbook created
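Before opening the UI, the sync state can be read straight from the Application resource (the status fields are populated once the application controller has reconciled):

```shell
# Show the sync and health state Argo CD reports for the guestbook app,
# e.g. "Synced  Healthy" once reconciliation completes.
kubectl -n argocd get application guestbook \
  -o jsonpath='{.status.sync.status}{"\t"}{.status.health.status}{"\n"}'
```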

Argo CD UI - unsynced


Argo CD UI with synced application 





Argo CD descriptors






Get pods in myapp namespace
$  kubectl get pods -n myapp
NAME                     READY   STATUS    RESTARTS   AGE
myapp-55c645d9b5-62llh   1/1     Running   0          6m29s
myapp-55c645d9b5-vmtmb   1/1     Running   0          6m29s

Get services in myapp namespace
$  kubectl get svc  -n myapp
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
myapp-service   ClusterIP   10.104.101.233   <none>        8080/TCP   6m35s
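The ClusterIP service is not reachable from the host directly; a quick smoke test is to port-forward it and probe locally (service name and port are taken from the listing above):

```shell
# Forward the service to localhost in the background, probe it, clean up.
kubectl -n myapp port-forward svc/myapp-service 8080:8080 &
PF_PID=$!
sleep 2
curl -s http://127.0.0.1:8080/
kill "$PF_PID"
```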

Minikube dashboard



Deployments 







Thursday, April 4, 2024

Weblogic k8s operator - cleanup

 HOWTO



Run delete


dave@dave:/git/weblogic-kubernetes-operator/kubernetes/samples/scripts$ ./delete-domain/delete-weblogic-domain-resources.sh -d sample-domain1
@@ Deleting kubernetes resources with label weblogic.domainUID 'sample-domain1'.
@@ 9 resources remaining after 0 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Setting serverStartPolicy to Never on each domain (this should cause operator(s) to initiate a controlled shutdown of the domain's pods.)
domain.weblogic.oracle/sample-domain1 patched
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 4 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 8 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 11 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 15 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 18 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 22 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 26 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 29 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 33 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 36 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 40 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 44 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 49 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 53 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 57 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Waiting for operator to shutdown pods (will wait for no more than half of max wait seconds before directly deleting them).
@@ 9 resources remaining after 62 seconds, including 3 WebLogic Server pods. Max wait is 120 seconds.
@@ Warning! 3 WebLogic Server pods remaining but wait time exceeds half of max wait seconds. About to directly delete all remaining resources, including the leftover pods.
pod "sample-domain1-admin-server" deleted
pod "sample-domain1-managed-server1" deleted
pod "sample-domain1-managed-server2" deleted
service "sample-domain1-admin-server" deleted
service "sample-domain1-cluster-cluster-1" deleted
service "sample-domain1-managed-server1" deleted
service "sample-domain1-managed-server2" deleted
configmap "sample-domain1-weblogic-domain-introspect-cm" deleted
domain.weblogic.oracle "sample-domain1" deleted
@@ 0 resources remaining after 77 seconds, including 0 WebLogic Server pods. Max wait is 120 seconds.
@@ Success.
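The script selects resources by the `weblogic.domainUID` label, so the same label can be used to double-check that nothing survived the cleanup:

```shell
# List anything still carrying the domain's label across all namespaces;
# "No resources found" confirms the cleanup was complete.
kubectl get all,configmap -l weblogic.domainUID=sample-domain1 --all-namespaces
```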


Saturday, March 16, 2024

Weblogic k8s - WIT - create domain model image with WDT

 HOWTO

Oracle HOWTO

 

GitHub 

Steps to do 

  • Install WIT
  • Download WebLogic and JDK installers
  • Create Docker image with installers
  • Prepare WDT model
  • Create auxiliary image via WIT
  • Create domain via kubectl

Download WebLogic and JDK installers


dave@dave:/git/weblogic/wit$ ls -lt
total 788764
-rw-r--r--. 1 dave dave 169934047 Apr  3 20:30 jdk-11.0.20_linux-x64_bin.tar.gz
-rw-r--r--. 1 dave dave 637756056 Apr  3 20:29 fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip

Download WIT
 curl -m 120 -fL https://github.com/oracle/weblogic-image-tool/releases/latest/download/imagetool.zip -o ./imagetool.zip

All downloaded files
$ ls -1l
total 793232
-rw-r--r--. 1 dave dave 637756056 Apr  3 20:29 fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip
-rw-r--r--. 1 dave dave   2126347 Apr  3 20:32 imagetool.zip
-rw-r--r--. 1 dave dave 169934047 Apr  3 20:30 jdk-11.0.20_linux-x64_bin.tar.gz
-rw-r--r--. 1 dave dave   2443633 Apr  3 21:02 weblogic-deploy.zip
Add path to imagetool into PATH
PATH=/app/weblogic/wit/imagetool/bin:$PATH

export PATH

Add installers into cache
$ imagetool.sh cache addInstaller --type jdk --version 11u020 --path jdk-11.0.20_linux-x64_bin.tar.gz
[INFO   ] Successfully added to cache. jdk_11u020=/git/weblogic/wit/jdk-11.0.20_linux-x64_bin.tar.gz

$ imagetool.sh cache addInstaller --type wls --version 14.1.1.0.0  --path fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip 
[INFO   ] Successfully added to cache. wls_14.1.1.0.0=/git/weblogic/wit/fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip
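The cache entries can be verified before building; each entry maps a key (type_version) to the installer path:

```shell
# List everything the image tool currently has cached, e.g.
# jdk_11u020=/git/weblogic/wit/jdk-11.0.20_linux-x64_bin.tar.gz
imagetool.sh cache listItems
```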


Build image
dave@dave:/git/weblogic/wit$ imagetool.sh create --tag wls:14.1.1.0.0 --jdkVersion 11u020 --version 14.1.1.0.0
[INFO   ] WebLogic Image Tool version 1.12.2
[INFO   ] Image Tool build ID: 9cfd0bb5-bd06-4d2b-a6f7-8e72189e4399
[INFO   ] Temporary directory used for image build context: /home/dave/wlsimgbuilder_temp14932286092203898011
[INFO   ] Copying /git/weblogic/wit/jdk-11.0.20_linux-x64_bin.tar.gz to build context folder.
[INFO   ] Using middleware installers (wls) version 14.1.1.0.0
[INFO   ] Copying /git/weblogic/wit/fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip to build context folder.
[INFO   ] Starting build: docker build --no-cache --force-rm --tag wls:14.1.1.0.0 /home/dave/wlsimgbuilder_temp14932286092203898011
Sending build context to Docker daemon  807.7MB

Step 1/27 : FROM ghcr.io/oracle/oraclelinux:8-slim as os_update
8-slim: Pulling from oracle/oraclelinux
692a254aa188: Pulling fs layer
692a254aa188: Download complete
692a254aa188: Pull complete
Digest: sha256:36d44bb00961a439af36c1bac759ced5d10c418b9b46076be866f5ba6fc923f6
Status: Downloaded newer image for ghcr.io/oracle/oraclelinux:8-slim
 ---> b1ef7cdd2820
Step 2/27 : LABEL com.oracle.weblogic.imagetool.buildid="9cfd0bb5-bd06-4d2b-a6f7-8e72189e4399"
 ---> Running in e30b9601c067
Removing intermediate container e30b9601c067
 ---> 005b760e097c
Step 3/27 : USER root
 ---> Running in e89138fc74ce
Removing intermediate container e89138fc74ce
 ---> e33a1419adbe
Step 4/27 : RUN microdnf update     && microdnf install gzip tar unzip libaio libnsl jq findutils diffutils shadow-utils     && microdnf clean all
 ---> Running in 1dab17b55647
Downloading metadata...
Downloading metadata...
Package                                                      Repository           Size
Installing:                                                                           
 glibc-gconv-extra-2.28-236.0.1.el8_9.12.x86_64              ol8_baseos_latest  1.6 MB
Upgrading:                                                                            
 glibc-2.28-236.0.1.el8_9.12.x86_64                          ol8_baseos_latest  2.3 MB
  replacing glibc-2.28-236.0.1.el8.7.x86_64                                           
 glibc-common-2.28-236.0.1.el8_9.12.x86_64                   ol8_baseos_latest  1.1 MB
  replacing glibc-common-2.28-236.0.1.el8.7.x86_64                                    
 glibc-minimal-langpack-2.28-236.0.1.el8_9.12.x86_64         ol8_baseos_latest 71.1 kB
  replacing glibc-minimal-langpack-2.28-236.0.1.el8.7.x86_64                          
 systemd-libs-239-78.0.4.el8.x86_64                          ol8_baseos_latest  1.2 MB
   replacing systemd-libs-239-78.0.3.el8.x86_64                                       
Transaction Summary:
 Installing:        1 packages
 Reinstalling:      0 packages
 Upgrading:         4 packages
 Obsoleting:        0 packages
 Removing:          0 packages
 Downgrading:       0 packages
Downloading packages...
Running transaction test...
Updating: glibc-common;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: glibc-minimal-langpack;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: glibc;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Installing: glibc-gconv-extra;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: systemd-libs;239-78.0.4.el8;x86_64;ol8_baseos_latest
Cleanup: systemd-libs;239-78.0.3.el8;x86_64;installed
Cleanup: glibc;2.28-236.0.1.el8.7;x86_64;installed
Cleanup: glibc-minimal-langpack;2.28-236.0.1.el8.7;x86_64;installed
Cleanup: glibc-common;2.28-236.0.1.el8.7;x86_64;installed
Complete.
Package                              Repository            Size
Installing:                                                    
 diffutils-3.6-6.el8.x86_64          ol8_baseos_latest 369.3 kB
 findutils-1:4.6.0-21.el8.x86_64     ol8_baseos_latest 539.8 kB
 gzip-1.9-13.el8_5.x86_64            ol8_baseos_latest 170.7 kB
 jq-1.6-7.0.3.el8.x86_64             ol8_appstream     206.5 kB
 libaio-0.3.112-1.el8.x86_64         ol8_baseos_latest  33.4 kB
 libnsl-2.28-236.0.1.el8_9.12.x86_64 ol8_baseos_latest 112.3 kB
 oniguruma-6.8.2-2.1.el8_9.x86_64    ol8_appstream     191.5 kB
 unzip-6.0-46.0.1.el8.x86_64         ol8_baseos_latest 201.0 kB
Transaction Summary:
 Installing:        8 packages
 Reinstalling:      0 packages
 Upgrading:         0 packages
 Obsoleting:        0 packages
 Removing:          0 packages
 Downgrading:       0 packages
Downloading packages...
Running transaction test...
Installing: oniguruma;6.8.2-2.1.el8_9;x86_64;ol8_appstream
Installing: jq;1.6-7.0.3.el8;x86_64;ol8_appstream
Installing: unzip;6.0-46.0.1.el8;x86_64;ol8_baseos_latest
Installing: libnsl;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Installing: libaio;0.3.112-1.el8;x86_64;ol8_baseos_latest
Installing: gzip;1.9-13.el8_5;x86_64;ol8_baseos_latest
Installing: findutils;1:4.6.0-21.el8;x86_64;ol8_baseos_latest
Installing: diffutils;3.6-6.el8;x86_64;ol8_baseos_latest
Complete.
Complete.
Removing intermediate container 1dab17b55647
 ---> 920dd42ffc33
Step 5/27 : RUN if [ -z "$(getent group oracle)" ]; then groupadd oracle || exit 1 ; fi  && if [ -z "$(getent group oracle)" ]; then groupadd oracle || exit 1 ; fi  && if [ -z "$(getent passwd oracle)" ]; then useradd -g oracle oracle || exit 1; fi  && mkdir -p /u01  && chown oracle:oracle /u01  && chmod 775 /u01
 ---> Running in 00921092ad33
Removing intermediate container 00921092ad33
 ---> ac64255288f4
Step 6/27 : FROM os_update as jdk_build
 ---> ac64255288f4
Step 7/27 : LABEL com.oracle.weblogic.imagetool.buildid="9cfd0bb5-bd06-4d2b-a6f7-8e72189e4399"
 ---> Running in 6881679cbe63
Removing intermediate container 6881679cbe63
 ---> 6a2113f4ab37
Step 8/27 : ENV JAVA_HOME=/u01/jdk
 ---> Running in 983a97e761ee
Removing intermediate container 983a97e761ee
 ---> 2fe8ba9c53fd
Step 9/27 : COPY --chown=oracle:oracle ["jdk-11.0.20_linux-x64_bin.tar.gz", "/tmp/imagetool/"]
 ---> 9bd3a33a7490
Step 10/27 : USER oracle
 ---> Running in f0d399001938
Removing intermediate container f0d399001938
 ---> 427af35b8902
Step 11/27 : RUN tar xzf "/tmp/imagetool/jdk-11.0.20_linux-x64_bin.tar.gz" -C /u01 && $(test -d /u01/jdk* && mv /u01/jdk* /u01/jdk || mv /u01/graal* /u01/jdk) && rm -rf /tmp/imagetool && rm -f /u01/jdk/javafx-src.zip /u01/jdk/src.zip
 ---> Running in ec25e8b7ade0
Removing intermediate container ec25e8b7ade0
 ---> 5587b1cce14c
Step 12/27 : FROM os_update as wls_build
 ---> ac64255288f4
Step 13/27 : LABEL com.oracle.weblogic.imagetool.buildid="9cfd0bb5-bd06-4d2b-a6f7-8e72189e4399"
 ---> Running in faf1e036737c
Removing intermediate container faf1e036737c
 ---> 3cc009aa1512
Step 14/27 : ENV JAVA_HOME=/u01/jdk ORACLE_HOME=/u01/oracle OPATCH_NO_FUSER=true
 ---> Running in aa65d9085ddb
Removing intermediate container aa65d9085ddb
 ---> 3a28e724d56e
Step 15/27 : RUN mkdir -p /u01/oracle && mkdir -p /u01/oracle/oraInventory && chown oracle:oracle /u01/oracle/oraInventory && chown oracle:oracle /u01/oracle
 ---> Running in 8258da648b5b
Removing intermediate container 8258da648b5b
 ---> 661f9e2fa7d4
Step 16/27 : COPY --from=jdk_build --chown=oracle:oracle /u01/jdk /u01/jdk/
 ---> a6797f5cde62
Step 17/27 : COPY --chown=oracle:oracle fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip wls.rsp /tmp/imagetool/
 ---> 4f2c87f270f1
Step 18/27 : COPY --chown=oracle:oracle oraInst.loc /u01/oracle/
 ---> ffd17ce51f02
Step 19/27 : USER oracle
 ---> Running in ba9e3dbe3ff3
Removing intermediate container ba9e3dbe3ff3
 ---> 0730641497aa
Step 20/27 : RUN echo "INSTALLING MIDDLEWARE"     && echo "INSTALLING wls"     && unzip -q /tmp/imagetool/fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip "*.[jb][ai][rn]" -d /tmp/imagetool && /u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.1.0.0_wls_lite_generic.jar -silent ORACLE_HOME=/u01/oracle     -responseFile /tmp/imagetool/wls.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation   && test $? -eq 0 && chmod -R g+r /u01/oracle || (grep -vh "NOTIFICATION" /tmp/OraInstall*/install*.log && exit 1)
 ---> Running in e4466a9d0b0a
INSTALLING MIDDLEWARE
INSTALLING wls
Launcher log file is /tmp/OraInstall2024-04-03_06-46-13PM/launcher2024-04-03_06-46-13PM.log.
Extracting the installer . . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2399.716 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 8191 MB    Passed
Checking temp space: must be greater than 300 MB.   Actual 76052 MB    Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2024-04-03_06-46-13PM
Log: /tmp/OraInstall2024-04-03_06-46-13PM/install2024-04-03_06-46-13PM.log
Setting ORACLE_HOME...
Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Skipping Software Updates
Validations are disabled for this session.
Verifying data
Copying Files
Percent Complete : 10
Percent Complete : 20
Percent Complete : 30
Percent Complete : 40
Percent Complete : 50
Percent Complete : 60
Percent Complete : 70
Percent Complete : 80
Percent Complete : 90
Percent Complete : 100

The installation of Oracle Fusion Middleware 14.1.1 WebLogic Server and Coherence 14.1.1.0.0 completed successfully.
Logs successfully copied to /u01/oracle/oraInventory/logs.
Removing intermediate container e4466a9d0b0a
 ---> df300afc2fdb
Step 21/27 : FROM os_update as final_build
 ---> ac64255288f4
Step 22/27 : ENV ORACLE_HOME=/u01/oracle     LD_LIBRARY_PATH=/u01/oracle/oracle_common/adr:$LD_LIBRARY_PATH     JAVA_HOME=/u01/jdk     PATH=${PATH}:/u01/jdk/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle
 ---> Running in 9b5cf310a026
Removing intermediate container 9b5cf310a026
 ---> 47585734f7b6
Step 23/27 : LABEL com.oracle.weblogic.imagetool.buildid="9cfd0bb5-bd06-4d2b-a6f7-8e72189e4399"
 ---> Running in b2e9ce86d3df
Removing intermediate container b2e9ce86d3df
 ---> 65384eb84c30
Step 24/27 : COPY --from=jdk_build --chown=oracle:oracle /u01/jdk /u01/jdk/
 ---> 918964a34393
Step 25/27 : COPY --from=wls_build --chown=oracle:oracle /u01/oracle /u01/oracle/
 ---> de7b4872c9ce
Step 26/27 : USER oracle
 ---> Running in e9d9f9530a40
Removing intermediate container e9d9f9530a40
 ---> 8a747fa87429
Step 27/27 : WORKDIR /u01/oracle
 ---> Running in 731cee3f8443
Removing intermediate container 731cee3f8443
 ---> adf15fbcf8b9
Successfully built adf15fbcf8b9
Successfully tagged wls:14.1.1.0.0
[INFO   ] Build successful. Build time=145s. Image tag=wls:14.1.1.0.0
dave@dave:/git/weblogic/wit$ docker images
REPOSITORY                                                TAG                   IMAGE ID       CREATED         SIZE
wls                                                       14.1.1.0.0            adf15fbcf8b9   8 seconds ago   1.26GB

Inspect image
$ imagetool.sh inspect --image wls:14.1.1.0.0
[INFO   ] Inspecting wls:14.1.1.0.0, this may take a few minutes if the image is not available locally.
{
  "os" : {
    "id" : "ol",
    "name" : "Oracle Linux Server",
    "version" : "8.9",
    "releasePackage" : "oraclelinux-release-8.9-1.0.8.el8.x86_64"
  },
  "javaHome" : "/u01/jdk",
  "javaVersion" : "11.0.20",
  "oracleHome" : "/u01/oracle",
  "oracleHomeGroup" : "oracle",
  "oracleHomeUser" : "oracle",
  "oracleInstalledProducts" : "WLS,COH,TOPLINK",
  "packageManager" : "MICRODNF",
  "wlsVersion" : "14.1.1.0.0"
}
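A quick runtime sanity check of the built image (sketch; assumes the `wls:14.1.1.0.0` image built above is present locally and that `/u01/jdk/bin` is on PATH, as set in Step 22 of the build):

```shell
# Run the image's JDK to confirm Java 11.0.20 is installed and on PATH.
docker run --rm wls:14.1.1.0.0 java -version
```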

Create an image with a WLS domain using WDT 

Note: this approach is deprecated; use auxiliary images instead: https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/


Add WDT into cache
$ imagetool.sh cache addInstaller --type wdt --version 0.22 --path weblogic-deploy.zip 
[INFO   ] Successfully added to cache. wdt_0.22=/git/weblogic/wit/weblogic-deploy.zip


Create domain image 



  • WDT archive created via WDT discover
$ unzip -l /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.zip
Archive:  /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
    22119  04-01-2024 22:51   wlsdeploy/applications/basicWebapp.war
---------                     -------
    22119                     1 file
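For reference, a model/archive pair like this is produced by running WDT's discoverDomain against an existing domain. A hedged sketch — the source domain path is hypothetical, and the command is assembled and echoed here rather than executed (drop the `echo` to run it for real):

```shell
ORACLE_HOME=/u01/oracle
DOMAIN_HOME=/u01/domains/ABDataSourceEARDomain   # hypothetical source domain

# Assemble the discoverDomain invocation that yields the .yaml model
# and .zip archive consumed by "imagetool.sh create" below.
cmd="weblogic-deploy/bin/discoverDomain.sh \
  -oracle_home $ORACLE_HOME \
  -domain_home $DOMAIN_HOME \
  -domain_type WLS \
  -model_file DiscoveredABDataSourceEARDomain.yaml \
  -archive_file DiscoveredABDataSourceEARDomain.zip"
echo "$cmd"
```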

Run imagetool
$ imagetool.sh create --fromImage ghcr.io/oracle/oraclelinux:8-slim --tag wls:14.1.1.1.1  --version 14.1.1.0.0   --wdtDomainHome /u01/domains/simple_domain --jdkVersion 11u020  --wdtVersion 0.22  --wdtModel /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.yaml  --wdtArchive /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.zip
[INFO   ] WebLogic Image Tool version 1.12.2
[INFO   ] Image Tool build ID: 71107d65-7898-4f31-a595-7e86581f655f
[INFO   ] User specified fromImage ghcr.io/oracle/oraclelinux:8-slim
[INFO   ] Temporary directory used for image build context: /home/dave/wlsimgbuilder_temp2063144469653345589
[INFO   ] Inspecting ghcr.io/oracle/oraclelinux:8-slim, this may take a few minutes if the image is not available locally.
[INFO   ] Copying /git/weblogic/wit/jdk-11.0.20_linux-x64_bin.tar.gz to build context folder.
[INFO   ] Using middleware installers (wls) version 14.1.1.0.0
[INFO   ] Copying /git/weblogic/wit/fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip to build context folder.
[INFO   ] Copying /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.yaml to build context folder.
[INFO   ] Copying /git/weblogic/wdt/DiscoveredABDataSourceEARDomain.zip to build context folder.
[INFO   ] Copying /git/weblogic/wit/weblogic-deploy.zip to build context folder.
[INFO   ] Starting build: docker build --no-cache --force-rm --tag wls:14.1.1.1.1 /home/dave/wlsimgbuilder_temp2063144469653345589
Sending build context to Docker daemon  810.2MB

Step 1/40 : FROM ghcr.io/oracle/oraclelinux:8-slim as os_update
 ---> b1ef7cdd2820
Step 2/40 : LABEL com.oracle.weblogic.imagetool.buildid="71107d65-7898-4f31-a595-7e86581f655f"
 ---> Running in 64913c2394ad
Removing intermediate container 64913c2394ad
 ---> e48c0c520d08
Step 3/40 : USER root
 ---> Running in 7b1f23d53b00
Removing intermediate container 7b1f23d53b00
 ---> d41ffb175ad4
Step 4/40 : RUN microdnf update     && microdnf install gzip tar unzip libaio libnsl jq findutils diffutils shadow-utils     && microdnf clean all
 ---> Running in e9997be5c471
Downloading metadata...
Downloading metadata...
Package                                                      Repository           Size
Installing:                                                                           
 glibc-gconv-extra-2.28-236.0.1.el8_9.12.x86_64              ol8_baseos_latest  1.6 MB
Upgrading:                                                                            
 glibc-2.28-236.0.1.el8_9.12.x86_64                          ol8_baseos_latest  2.3 MB
  replacing glibc-2.28-236.0.1.el8.7.x86_64                                           
 glibc-common-2.28-236.0.1.el8_9.12.x86_64                   ol8_baseos_latest  1.1 MB
  replacing glibc-common-2.28-236.0.1.el8.7.x86_64                                    
 glibc-minimal-langpack-2.28-236.0.1.el8_9.12.x86_64         ol8_baseos_latest 71.1 kB
  replacing glibc-minimal-langpack-2.28-236.0.1.el8.7.x86_64                          
 systemd-libs-239-78.0.4.el8.x86_64                          ol8_baseos_latest  1.2 MB
   replacing systemd-libs-239-78.0.3.el8.x86_64                                       
Transaction Summary:
 Installing:        1 packages
 Reinstalling:      0 packages
 Upgrading:         4 packages
 Obsoleting:        0 packages
 Removing:          0 packages
 Downgrading:       0 packages
Downloading packages...
Running transaction test...
Updating: glibc-common;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: glibc-minimal-langpack;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: glibc;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Installing: glibc-gconv-extra;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Updating: systemd-libs;239-78.0.4.el8;x86_64;ol8_baseos_latest
Cleanup: systemd-libs;239-78.0.3.el8;x86_64;installed
Cleanup: glibc;2.28-236.0.1.el8.7;x86_64;installed
Cleanup: glibc-minimal-langpack;2.28-236.0.1.el8.7;x86_64;installed
Cleanup: glibc-common;2.28-236.0.1.el8.7;x86_64;installed
Complete.
Package                              Repository            Size
Installing:                                                    
 diffutils-3.6-6.el8.x86_64          ol8_baseos_latest 369.3 kB
 findutils-1:4.6.0-21.el8.x86_64     ol8_baseos_latest 539.8 kB
 gzip-1.9-13.el8_5.x86_64            ol8_baseos_latest 170.7 kB
 jq-1.6-7.0.3.el8.x86_64             ol8_appstream     206.5 kB
 libaio-0.3.112-1.el8.x86_64         ol8_baseos_latest  33.4 kB
 libnsl-2.28-236.0.1.el8_9.12.x86_64 ol8_baseos_latest 112.3 kB
 oniguruma-6.8.2-2.1.el8_9.x86_64    ol8_appstream     191.5 kB
 unzip-6.0-46.0.1.el8.x86_64         ol8_baseos_latest 201.0 kB
Transaction Summary:
 Installing:        8 packages
 Reinstalling:      0 packages
 Upgrading:         0 packages
 Obsoleting:        0 packages
 Removing:          0 packages
 Downgrading:       0 packages
Downloading packages...
Running transaction test...
Installing: oniguruma;6.8.2-2.1.el8_9;x86_64;ol8_appstream
Installing: jq;1.6-7.0.3.el8;x86_64;ol8_appstream
Installing: unzip;6.0-46.0.1.el8;x86_64;ol8_baseos_latest
Installing: libnsl;2.28-236.0.1.el8_9.12;x86_64;ol8_baseos_latest
Installing: libaio;0.3.112-1.el8;x86_64;ol8_baseos_latest
Installing: gzip;1.9-13.el8_5;x86_64;ol8_baseos_latest
Installing: findutils;1:4.6.0-21.el8;x86_64;ol8_baseos_latest
Installing: diffutils;3.6-6.el8;x86_64;ol8_baseos_latest
Complete.
Complete.
Removing intermediate container e9997be5c471
 ---> 3e2aa0097ed3
Step 5/40 : RUN if [ -z "$(getent group oracle)" ]; then groupadd oracle || exit 1 ; fi  && if [ -z "$(getent group oracle)" ]; then groupadd oracle || exit 1 ; fi  && if [ -z "$(getent passwd oracle)" ]; then useradd -g oracle oracle || exit 1; fi  && mkdir -p /u01  && chown oracle:oracle /u01  && chmod 775 /u01
 ---> Running in 76b2939d72b0
Removing intermediate container 76b2939d72b0
 ---> 4db70353fbf5
Step 6/40 : FROM os_update as jdk_build
 ---> 4db70353fbf5
Step 7/40 : LABEL com.oracle.weblogic.imagetool.buildid="71107d65-7898-4f31-a595-7e86581f655f"
 ---> Running in 6fb4cecfc6be
Removing intermediate container 6fb4cecfc6be
 ---> 5d9ceb4928c0
Step 8/40 : ENV JAVA_HOME=/u01/jdk
 ---> Running in b6056e93a572
Removing intermediate container b6056e93a572
 ---> b0308e763043
Step 9/40 : COPY --chown=oracle:oracle ["jdk-11.0.20_linux-x64_bin.tar.gz", "/tmp/imagetool/"]
 ---> f07b5d9a5782
Step 10/40 : USER oracle
 ---> Running in b75a9206a29a
Removing intermediate container b75a9206a29a
 ---> a7f45025c54d
Step 11/40 : RUN tar xzf "/tmp/imagetool/jdk-11.0.20_linux-x64_bin.tar.gz" -C /u01 && $(test -d /u01/jdk* && mv /u01/jdk* /u01/jdk || mv /u01/graal* /u01/jdk) && rm -rf /tmp/imagetool && rm -f /u01/jdk/javafx-src.zip /u01/jdk/src.zip
 ---> Running in 7fe1203420be
Removing intermediate container 7fe1203420be
 ---> c3ed51229c1a
Step 12/40 : FROM os_update as wls_build
 ---> 4db70353fbf5
Step 13/40 : LABEL com.oracle.weblogic.imagetool.buildid="71107d65-7898-4f31-a595-7e86581f655f"
 ---> Running in 8bb593091330
Removing intermediate container 8bb593091330
 ---> 9b8cfbb51190
Step 14/40 : ENV JAVA_HOME=/u01/jdk ORACLE_HOME=/u01/oracle OPATCH_NO_FUSER=true
 ---> Running in bacc2eaf949a
Removing intermediate container bacc2eaf949a
 ---> b8a3dc900906
Step 15/40 : RUN mkdir -p /u01/oracle && mkdir -p /u01/oracle/oraInventory && chown oracle:oracle /u01/oracle/oraInventory && chown oracle:oracle /u01/oracle
 ---> Running in 98741d40596d
Removing intermediate container 98741d40596d
 ---> db58e3011e92
Step 16/40 : COPY --from=jdk_build --chown=oracle:oracle /u01/jdk /u01/jdk/
 ---> 3a1ab28183d3
Step 17/40 : COPY --chown=oracle:oracle fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip wls.rsp /tmp/imagetool/
 ---> 9b4415afdc2f
Step 18/40 : COPY --chown=oracle:oracle oraInst.loc /u01/oracle/
 ---> e94f75e4b27b
Step 19/40 : USER oracle
 ---> Running in 1b42950d779f
Removing intermediate container 1b42950d779f
 ---> 7c42385daa83
Step 20/40 : RUN echo "INSTALLING MIDDLEWARE"     && echo "INSTALLING wls"     && unzip -q /tmp/imagetool/fmw_14.1.1.0.0_wls_lite_Disk1_1of1.zip "*.[jb][ai][rn]" -d /tmp/imagetool && /u01/jdk/bin/java -Xmx1024m -jar /tmp/imagetool/fmw_14.1.1.0.0_wls_lite_generic.jar -silent ORACLE_HOME=/u01/oracle     -responseFile /tmp/imagetool/wls.rsp -invPtrLoc /u01/oracle/oraInst.loc -ignoreSysPrereqs -force -novalidation   && test $? -eq 0 && chmod -R g+r /u01/oracle || (grep -vh "NOTIFICATION" /tmp/OraInstall*/install*.log && exit 1)
 ---> Running in e307551d5bb2
INSTALLING MIDDLEWARE
INSTALLING wls
Launcher log file is /tmp/OraInstall2024-04-03_07-31-23PM/launcher2024-04-03_07-31-23PM.log.
Extracting the installer . . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2583.240 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 8191 MB    Passed
Checking temp space: must be greater than 300 MB.   Actual 74508 MB    Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2024-04-03_07-31-23PM
Log: /tmp/OraInstall2024-04-03_07-31-23PM/install2024-04-03_07-31-23PM.log
Setting ORACLE_HOME...
Copyright (c) 1996, 2020, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Skipping Software Updates
Validations are disabled for this session.
Verifying data
Copying Files
Percent Complete : 10
Percent Complete : 20
Percent Complete : 30
Percent Complete : 40
Percent Complete : 50
Percent Complete : 60
Percent Complete : 70
Percent Complete : 80
Percent Complete : 90
Percent Complete : 100

The installation of Oracle Fusion Middleware 14.1.1 WebLogic Server and Coherence 14.1.1.0.0 completed successfully.
Logs successfully copied to /u01/oracle/oraInventory/logs.
Removing intermediate container e307551d5bb2
 ---> 1d878efb5a30
Step 21/40 : FROM wls_build as wdt_build
 ---> 1d878efb5a30
Step 22/40 : LABEL com.oracle.weblogic.imagetool.buildid="71107d65-7898-4f31-a595-7e86581f655f"
 ---> Running in 59171485ab53
Removing intermediate container 59171485ab53
 ---> d57a4b317c6e
Step 23/40 : ENV WLSDEPLOY_PROPERTIES=" -Djava.security.egd=file:/dev/./urandom" DOMAIN_HOME=/u01/domains/simple_domain
 ---> Running in 391bdeba5aa4
Removing intermediate container 391bdeba5aa4
 ---> d21d2c4ecf86
Step 24/40 : COPY --chown=oracle:oracle weblogic-deploy.zip /tmp/imagetool/
 ---> 1b54c9d8ad7b
Step 25/40 : USER root
 ---> Running in b294bb02241d
Removing intermediate container b294bb02241d
 ---> 829720897376
Step 26/40 : RUN mkdir -p /u01/wdt && chown oracle:oracle /u01/wdt
 ---> Running in 85d9a1633746
Removing intermediate container 85d9a1633746
 ---> baafc419cd36
Step 27/40 : USER oracle
 ---> Running in f1fb65009687
Removing intermediate container f1fb65009687
 ---> 452d9efbe756
Step 28/40 : RUN cd /u01/wdt && mkdir -p /u01/wdt/models && mkdir -p $(dirname /u01/domains/simple_domain)
 ---> Running in fbbb9d42fb3d
Removing intermediate container fbbb9d42fb3d
 ---> 53454300205a
Step 29/40 : COPY --chown=oracle:oracle ["DiscoveredABDataSourceEARDomain.yaml", "/u01/wdt/models/"]
 ---> a14aa616a21e
Step 30/40 : COPY --chown=oracle:oracle ["DiscoveredABDataSourceEARDomain.zip", "/u01/wdt/models/"]
 ---> abff8757ba8b
Step 31/40 : RUN test -d /u01/wdt/weblogic-deploy && rm -rf /u01/wdt/weblogic-deploy || echo Initial WDT install   && unzip -q /tmp/imagetool/weblogic-deploy.zip -d /u01/wdt
 ---> Running in a9467d64f8ce
Initial WDT install
Removing intermediate container a9467d64f8ce
 ---> 3f8bd1b2c87a
Step 32/40 : RUN cd /u01/wdt/weblogic-deploy/bin     &&  ./createDomain.sh     -oracle_home /u01/oracle     -domain_home /u01/domains/simple_domain     -domain_type WLS      -model_file /u01/wdt/models/DiscoveredABDataSourceEARDomain.yaml -archive_file /u01/wdt/models/DiscoveredABDataSourceEARDomain.zip
 ---> Running in 324177a4bdf7
JDK version is 11.0.20+9-LTS-256
JAVA_HOME = /u01/jdk
WLST_EXT_CLASSPATH = /u01/wdt/weblogic-deploy/lib/weblogic-deploy-core.jar
CLASSPATH = /u01/wdt/weblogic-deploy/lib/weblogic-deploy-core.jar
WLST_PROPERTIES = -Dcom.oracle.cie.script.throwException=true -Djava.util.logging.config.class=oracle.weblogic.deploy.logging.WLSDeployLoggingConfig  -Djava.security.egd=file:/dev/./urandom
/u01/oracle/oracle_common/common/bin/wlst.sh /u01/wdt/weblogic-deploy/lib/python/create.py -oracle_home /u01/oracle -domain_home /u01/domains/simple_domain -domain_type WLS -model_file /u01/wdt/models/DiscoveredABDataSourceEARDomain.yaml -archive_file /u01/wdt/models/DiscoveredABDataSourceEARDomain.zip

Initializing WebLogic Scripting Tool (WLST) ...

Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

####<Apr 3, 2024 7:32:46 PM> <INFO> <WebLogicDeployToolingVersion> <logVersionInfo> <WLSDPLY-01750> <The WebLogic Deploy Tooling createDomain version is 3.5.4:.43d3afb:Mar 14, 2024 11:55 UTC>
####<Apr 3, 2024 7:32:46 PM> <INFO> <WLSDeployLoggingConfig> <logLoggingDirectory> <WLSDPLY-01755> <The createDomain program will write its log to directory /u01/wdt/weblogic-deploy/logs>
####<Apr 3, 2024 7:32:46 PM> <INFO> <DomainTypedef> <__init__> <WLSDPLY-12328> <Domain type WLS type definition file /u01/wdt/weblogic-deploy/lib/typedefs/WLS.json version WLS_14 does not contain a postCreateRcuSchemasScript section>
####<Apr 3, 2024 7:32:46 PM> <INFO> <DomainTypedef> <__init__> <WLSDPLY-12321> <Domain type WLS type definition file /u01/wdt/weblogic-deploy/lib/typedefs/WLS.json version WLS_14 does not contain a postCreateDomainScript section>
####<Apr 3, 2024 7:32:46 PM> <INFO> <DomainTypedef> <__init__> <WLSDPLY-12328> <Domain type WLS type definition file /u01/wdt/weblogic-deploy/lib/typedefs/WLS.json version WLS_14 does not contain a postCreateRcuSchemasScript section>
####<Apr 3, 2024 7:32:46 PM> <INFO> <DomainTypedef> <__init__> <WLSDPLY-12321> <Domain type WLS type definition file /u01/wdt/weblogic-deploy/lib/typedefs/WLS.json version WLS_14 does not contain a postCreateDomainScript section>
####<Apr 3, 2024 7:32:46 PM> <INFO> <ModelContext> <__copy_from_args> <WLSDPLY-01050> <WebLogic version for aliases is 14.1.1.0.0>
####<Apr 3, 2024 7:32:47 PM> <INFO> <filter_helper> <apply_filters> <WLSDPLY-20017> <No filter configuration file /u01/wdt/weblogic-deploy/lib/model_filters.json>
####<Apr 3, 2024 7:32:47 PM> <INFO> <filter_helper> <apply_filters> <WLSDPLY-20016> <No filters of type create found in filter configuration file /u01/wdt/weblogic-deploy/lib/model_filters.json>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_file> <WLSDPLY-05002> <Performing validation in TOOL mode for WebLogic Server version 14.1.1.0.0 and WLST OFFLINE mode>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_file> <WLSDPLY-05003> <Performing model validation on the /u01/wdt/models/DiscoveredABDataSourceEARDomain.yaml model file>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_file> <WLSDPLY-05005> <Performing archive validation on the /u01/wdt/models/DiscoveredABDataSourceEARDomain.zip archive file>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_section> <WLSDPLY-05008> <Validating the domainInfo section of the model file>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_section> <WLSDPLY-05008> <Validating the topology section of the model file>
####<Apr 3, 2024 7:32:47 PM> <INFO> <Validator> <__validate_model_section> <WLSDPLY-05008> <Validating the resources section of the model file>
####<Apr 3, 2024 7:32:48 PM> <INFO> <Validator> <__validate_model_section> <WLSDPLY-05008> <Validating the appDeployments section of the model file>
####<Apr 3, 2024 7:32:48 PM> <INFO> <DomainCreator> <__create_domain> <WLSDPLY-12203> <Creating domain of type WLS>
####<Apr 3, 2024 7:32:48 PM> <INFO> <DomainCreator> <__create_base_domain_with_select_template> <WLSDPLY-12210> <Selecting base template named Basic WebLogic Server Domain>
####<Apr 3, 2024 7:32:49 PM> <INFO> <DomainCreator> <__extend_domain_with_select_template> <WLSDPLY-12212> <Loading selected templates>
####<Apr 3, 2024 7:32:51 PM> <INFO> <TopologyHelper> <create_placeholder_named_elements> <WLSDPLY-19403> <Creating placeholder for JDBCSystemResource JDBC-Data-Source-Oracle>
####<Apr 3, 2024 7:32:51 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating Machine with the name machineA>
####<Apr 3, 2024 7:32:51 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating Machine with the name machineB>
####<Apr 3, 2024 7:32:51 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating Cluster with the name ClusterA>
####<Apr 3, 2024 7:32:51 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating Cluster with the name ClusterB>
####<Apr 3, 2024 7:32:51 PM> <INFO> <TopologyHelper> <create_placeholder_named_elements> <WLSDPLY-19403> <Creating placeholder for Server ManagedServerA1>
####<Apr 3, 2024 7:32:51 PM> <INFO> <TopologyHelper> <create_placeholder_named_elements> <WLSDPLY-19403> <Creating placeholder for Server ManagedServerA2>
####<Apr 3, 2024 7:32:51 PM> <INFO> <TopologyHelper> <create_placeholder_named_elements> <WLSDPLY-19403> <Creating placeholder for Server ManagedServerB1>
####<Apr 3, 2024 7:32:52 PM> <INFO> <TopologyHelper> <create_placeholder_named_elements> <WLSDPLY-19403> <Creating placeholder for Server ManagedServerB2>
####<Apr 3, 2024 7:32:52 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name AdminServer>
####<Apr 3, 2024 7:32:52 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerA1>
####<Apr 3, 2024 7:32:52 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerA2>
####<Apr 3, 2024 7:32:52 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerB1>
####<Apr 3, 2024 7:32:52 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerB2>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating MigratableTarget with the name ManagedServerA1 (migratable)>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating MigratableTarget with the name ManagedServerA2 (migratable)>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating MigratableTarget with the name ManagedServerB1 (migratable)>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12100> <Creating MigratableTarget with the name ManagedServerB2 (migratable)>
####<Apr 3, 2024 7:32:53 PM> <INFO> <TopologyHelper> <clear_jdbc_placeholder_targeting> <WLSDPLY-19404> <Clearing targets for JDBCSystemResource placeholder JDBC-Data-Source-Oracle>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Machine with the name machineA>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Machine with the name machineB>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Cluster with the name ClusterA>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Cluster with the name ClusterB>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name AdminServer>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerA1>
####<Apr 3, 2024 7:32:53 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerA2>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerB1>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating Server with the name ManagedServerB2>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating MigratableTarget with the name ManagedServerA1 (migratable)>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating MigratableTarget with the name ManagedServerA2 (migratable)>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating MigratableTarget with the name ManagedServerB1 (migratable)>
####<Apr 3, 2024 7:32:54 PM> <INFO> <Creator> <_create_named_mbeans> <WLSDPLY-12101> <Updating MigratableTarget with the name ManagedServerB2 (migratable)>
####<Apr 3, 2024 7:32:54 PM> <INFO> <DomainCreator> <__extend_domain_with_select_template> <WLSDPLY-12205> <Writing base domain base_domain to directory /u01/domains/simple_domain>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DomainCreator> <__extend_domain_with_select_template> <WLSDPLY-12206> <Closing templates for domain base_domain>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DefaultAuthenticatorHelper> <create_default_init_file> <WLSDPLY-01900> <Updating default authenticator initialization file /u01/domains/simple_domain/security/DefaultAuthenticatorInit.ldift>
####<Apr 3, 2024 7:32:57 PM> <INFO> <LibraryHelper> <install_domain_libraries> <WLSDPLY-12213> <The model did not specify any domain libraries to install>
####<Apr 3, 2024 7:32:57 PM> <INFO> <LibraryHelper> <extract_classpath_libraries> <WLSDPLY-12218> <The archive file /u01/wdt/models/DiscoveredABDataSourceEARDomain.zip contains no classpath libraries to install>
####<Apr 3, 2024 7:32:57 PM> <INFO> <LibraryHelper> <install_domain_scripts> <WLSDPLY-12241> <The model did not specify any domain scripts to install>
####<Apr 3, 2024 7:32:57 PM> <INFO> <Creator> <_create_mbean> <WLSDPLY-20013> <Updating SecurityConfiguration>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DatasourceDeployer> <_add_named_elements> <WLSDPLY-09608> <Updating JDBCSystemResource JDBC-Data-Source-Oracle>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DatasourceDeployer> <_add_model_elements> <WLSDPLY-09604> <Updating JdbcResource for JDBCSystemResource JDBC-Data-Source-Oracle>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DatasourceDeployer> <_add_model_elements> <WLSDPLY-09601> <Adding JDBCConnectionPoolParams to JdbcResource>
####<Apr 3, 2024 7:32:57 PM> <INFO> <DatasourceDeployer> <_add_model_elements> <WLSDPLY-09601> <Adding JDBCDataSourceParams to JdbcResource>
####<Apr 3, 2024 7:32:58 PM> <INFO> <DatasourceDeployer> <_add_model_elements> <WLSDPLY-09601> <Adding JDBCDriverParams to JdbcResource>
####<Apr 3, 2024 7:32:58 PM> <INFO> <DatasourceDeployer> <_add_named_elements> <WLSDPLY-09606> <Adding Properties user to JDBCDriverParams>
####<Apr 3, 2024 7:32:58 PM> <INFO> <ApplicationDeployer> <__add_applications> <WLSDPLY-09301> <Adding Application basicWebapp to Domain simple_domain>

Issue Log for createDomain version 3.5.4 running WebLogic version 14.1.1.0.0 offline mode:

Total:   SEVERE :    0  WARNING :    0

createDomain.sh completed successfully (exit code = 0)
Removing intermediate container 324177a4bdf7
 ---> d8743ead9fd4
Step 33/40 : FROM os_update as final_build
 ---> 4db70353fbf5
Step 34/40 : ENV ORACLE_HOME=/u01/oracle     LD_LIBRARY_PATH=/u01/oracle/oracle_common/adr:$LD_LIBRARY_PATH     JAVA_HOME=/u01/jdk     DOMAIN_HOME=/u01/domains/simple_domain     WDT_HOME=/u01/wdt     PATH=${PATH}:/u01/jdk/bin:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle:/u01/domains/simple_domain/bin
 ---> Running in 7fc9a81345d3
Removing intermediate container 7fc9a81345d3
 ---> 006dcc8d44c3
Step 35/40 : LABEL com.oracle.weblogic.imagetool.buildid="71107d65-7898-4f31-a595-7e86581f655f"
 ---> Running in 1b531655368b
Removing intermediate container 1b531655368b
 ---> af98be131cb2
Step 36/40 : COPY --from=jdk_build --chown=oracle:oracle /u01/jdk /u01/jdk/
 ---> fb648de5cd73
Step 37/40 : COPY --from=wls_build --chown=oracle:oracle /u01/oracle /u01/oracle/
 ---> 94dd2578579c
Step 38/40 : COPY --from=wdt_build --chown=oracle:oracle /u01/domains/simple_domain /u01/domains/simple_domain/
 ---> 78c2e0f481c8
Step 39/40 : USER oracle
 ---> Running in b9fcda9acd00
Removing intermediate container b9fcda9acd00
 ---> 306796a85c89
Step 40/40 : WORKDIR /u01/domains/simple_domain
 ---> Running in b25cd1ddcef0
Removing intermediate container b25cd1ddcef0
 ---> 07d91172a6ab
Successfully built 07d91172a6ab
Successfully tagged wls:14.1.1.1.1
[INFO   ] Build successful. Build time=199s. Image tag=wls:14.1.1.1.1

List files in created Docker image

$   docker run -it --rm wls:14.1.1.1.1 find /u01 -type d -maxdepth 2
/u01
/u01/jdk
/u01/jdk/bin
/u01/jdk/conf
/u01/jdk/include
/u01/jdk/jmods
/u01/jdk/legal
/u01/jdk/lib
/u01/jdk/man
/u01/oracle
/u01/oracle/OPatch
/u01/oracle/coherence
/u01/oracle/inventory
/u01/oracle/oraInventory
/u01/oracle/oracle_common
/u01/oracle/oui
/u01/oracle/wlserver
/u01/domains
/u01/domains/simple_domain
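Because the domain home is baked in and WORKDIR points at it (with its bin directory on PATH), the image can be started directly. A minimal sketch — the container name and admin port 7001 are assumptions (7001 is the WebLogic default and may differ in your model); the commands are stored in variables and echoed so the sketch is inert, remove the `echo`s to run them:

```shell
# Start the AdminServer from the baked-in domain, then tail its log.
run_cmd="docker run -d --name wls-admin -p 7001:7001 wls:14.1.1.1.1 startWebLogic.sh"
log_cmd="docker logs -f wls-admin"
echo "$run_cmd"
echo "$log_cmd"
```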


Image tool options

 Usage: imagetool create [OPTIONS]
Build WebLogic docker image
      --additionalBuildCommands=<additionalBuildCommandsPath>
                             path to a file with additional build commands.
      --additionalBuildFiles=<additionalBuildFiles>[,<additionalBuildFiles>...]
                             comma separated list of files that should be
                               copied to the build context folder.
  -b, --builder=<buildEngine>
                             Executable to process the Dockerfile. Use the full
                               path of the executable if not on your path.
                               Defaults to 'docker', or, when set, to the value
                               in environment variable WLSIMG_BUILDER.
      --build-arg=<String=String>
                             Additional argument passed directly to the build
                               engine.
      --buildNetwork=<networking mode>
                             Set the networking mode for the RUN instructions
                               during build.
      --chown=<owner:group>  owner and groupid to be used for files copied into
                               the image. Default: oracle:oracle
      --dryRun               Skip image build execution and print Dockerfile to
                               stdout.
      --fromImage=<image name>
                             Docker image to use as base image.  Default:
                               ghcr.io/oracle/oraclelinux:8-slim
      --httpProxyUrl=<HTTP proxy URL>
                             proxy for http protocol. Ex: http://myproxy:80 or
                               http://user:passwd@myproxy:8080
      --httpsProxyUrl=<HTTPS proxy URL>
                             proxy for https protocol. Ex: http://myproxy:80 or
                               http://user:passwd@myproxy:8080
      --installerResponseFile=<installerResponseFiles>[,<installerResponseFiles>...]
                             path to a response file. Override the default
                               responses for the Oracle installer
      --inventoryPointerFile=<inventoryPointerFile>
                             path to a user provided inventory pointer file as
                               input
      --inventoryPointerInstallLoc=<inventoryPointerInstallLoc>
                             path to where the inventory pointer file
                               (oraInst.loc) should be stored in the image
      --jdkVersion=<jdkVersion>
                             Version of server jdk to install. Default: 8u202
      --latestPSU            Whether to apply patches from latest PSU.
      --opatchBugNumber=<opatchBugNumber>
                             the patch number for OPatch (patching OPatch)
      --packageManager=<package manager>
                             Override the detected package manager for
                               installing OS packages.
      --password[=<support password>]
                             Enter password for Oracle Support userId on STDIN
      --passwordEnv=<environment variable>
                             environment variable containing the support
                               password
      --passwordFile=<password file>
                             path to file containing just the password
      --patches=patchId[,patchId...]
                             Comma separated patch Ids. Ex: 12345678,87654321
      --pull                 Always attempt to pull a newer version of base
                               images during the build.
      --recommendedPatches   Whether to apply recommended patches from latest
                               PSU.
      --skipcleanup          Do not delete the build context folder,
                               intermediate images, and failed build containers.
      --skipOpatchUpdate     Do not update OPatch version, even if a newer
                               version is available.
      --strictPatchOrdering  Use OPatch to apply patches one at a time.
*     --tag=<image tag>      Tag for the final build image. Ex:
                               container-registry.oracle.com/middleware/weblogic:12.2.1.4
      --target=<kubernetesTarget>
                             Apply settings appropriate to the target
                               environment.  Default: Default.  Supported
                               values: Default, OpenShift.
      --type=<installerType> Installer type. Default: WLS. Supported values:
                               WLS, WLSSLIM, WLSDEV, FMW, OSB, SOA, SOA_OSB,
                               SOA_OSB_B2B, MFT, IDM, IDM_WLS, OAM, OIG, OUD,
                               OUD_WLS, OID, WCC, WCP, WCS, OHS, ODI
      --user=<support email> Oracle Support email id
      --version=<installerVersion>
                             Installer version. Default: 12.2.1.3.0

WDT Options
      --resourceTemplates=<resourceTemplates>[,<resourceTemplates>...]
                             Resolve variables in the resource template(s) with
                               information from the image tool build.
      --wdtArchive=<wdtArchivePath>
                             A WDT archive zip file, if needed (or
                               comma-separated list of files).
      --wdtDomainHome=<wdtDomainHome>
                             pass to the -domain_home for wdt
      --wdtDomainType=<wdtDomainType>
                             WDT Domain Type (-domain_type). Default: WLS.
                               Supported values: WLS, JRF, or RestrictedJRF
      --wdtEncryptionKey[=<passphrase>]
                             Enter the passphrase to decrypt the WDT model
      --wdtEncryptionKeyEnv=<environment variable name>
                             environment variable containing the passphrase to
                               decrypt the WDT model
      --wdtEncryptionKeyFile=<passphrase file>
                             path to file containing the passphrase to decrypt
                               the WDT model
      --wdtHome=<WDT home directory>
                             The target folder in the image for the WDT install
                               and models. Default: /u01/wdt.
      --wdtJavaOptions=<wdtJavaOptions>
                             Java command line options for WDT
      --wdtModel=<wdtModelPath>
                             A WDT model file (or a comma-separated list of
                               files).
      --wdtModelHome=<wdtModelHome>
                             The target location in the image to copy WDT
                               model, variable, and archive files.  Default:
                               WDT_HOME/models
      --wdtModelOnly         Install WDT and copy the models to the image, but
                               do not create the domain. Default: false.
      --wdtRunRCU            instruct WDT to run RCU when creating the Domain
      --wdtStrictValidation  Use strict validation for the WDT validation
                               method. Only applies when using model only.
                               Default: false.
      --wdtVariables=<wdtVariablesPath>
                             A WDT variables file, if needed (or
                               comma-separated list of files).
      --wdtVersion=<wdtVersion>
                             WDT version to use.  Default: latest.
  --                         All parameters and options provided after the --
                               will be passed to the container image build
                               command.
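Of these options, --dryRun is especially handy: it prints the generated multi-stage Dockerfile to stdout without building anything, so you can review (or version) exactly what the tool will do. A sketch with an illustrative tag, echoed here rather than executed:

```shell
# Preview the Dockerfile imagetool would build — no image is created.
preview="imagetool.sh create --dryRun --tag wls:preview --version 14.1.1.0.0 --jdkVersion 11u020"
echo "$preview"
```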