Non-Helm JAS environments use k3s to host the JAS containers that perform scanning. When a new scan starts, Xray spawns a container in the k3s cluster; once the scan finishes, the results and logs are sent back to Xray. To troubleshoot scans while they are running, or when logs are not being returned, access the k3s server and use kubectl to investigate.
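Every kubectl command below has to point at the k3s kubeconfig. As a small convenience (a sketch, assuming the default k3s kubeconfig path already used in the examples below), the path can be exported once per shell session instead of repeating --kubeconfig on every call:

[root@test-jas ~]# export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
[root@test-jas ~]# kubectl get pods --all-namespaces

The rest of this article keeps the explicit --kubeconfig flag so each command works as-is in a fresh session.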
To see the default pods running in k3s:
[root@test-jas ~]# kubectl get pods --kubeconfig=/etc/rancher/k3s/k3s.yaml --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE
kube-system   coredns-7b5bbc6644-pxnmz                  1/1     Running   8 (12m ago)   77d
kube-system   local-path-provisioner-687d6d7765-wb8qs   1/1     Running   9 (12m ago)   77d
kube-system   metrics-server-667586758d-2vcsg           1/1     Running   9 (12m ago)   77d
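Before looking at scan pods, it can also be worth confirming that the k3s node itself is healthy (a quick check using the same kubeconfig):

[root@test-jas ~]# kubectl get nodes --kubeconfig=/etc/rancher/k3s/k3s.yaml

The node should report a Ready status; anything else points at a k3s or host problem rather than a JAS scan problem.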
While scans are running, short-lived applicability and exposures scanner pods appear in the default namespace:
[root@test-jas ~]# kubectl get pods --kubeconfig=/etc/rancher/k3s/k3s.yaml --all-namespaces
NAMESPACE     NAME                                                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-7b5bbc6644-pxnmz                                           1/1     Running   8 (16m ago)   77d
kube-system   local-path-provisioner-687d6d7765-wb8qs                            1/1     Running   9 (16m ago)   77d
kube-system   metrics-server-667586758d-2vcsg                                    1/1     Running   9 (16m ago)   77d
default       applicabilityscannersjob-a071365c-014c-4a99-b54b-605f939982q8rg    1/1     Running   0             10s
default       exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca-dj9pq    1/1     Running   0             9s
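Because these scanner pods only exist for the duration of the scan, it can be easier to watch them appear and complete in real time rather than polling; kubectl's watch flag streams changes as they happen:

[root@test-jas ~]# kubectl get pods --kubeconfig=/etc/rancher/k3s/k3s.yaml --all-namespaces --watch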
To see logs while the container/scan is running:
[root@test-jas ~]# kubectl logs exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca-dj9pq --kubeconfig=/etc/rancher/k3s/k3s.yaml
{"timestamp": "2024-02-15T21:13:42.601127Z", "level": "INFO", "name": "expscan", "message": "got encrypted tokens, decrypting...", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602711Z", "level": "INFO", "name": "expscan", "message": "tokens decrypted", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602872Z", "level": "INFO", "name": "expscan", "message": "ARTIFACT_URL", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "artifact_url": "http://testjas.vm:8082/artifactory/maven-local2/log4j/log4j/1.2.17/log4j-1.2.17.jar", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602957Z", "level": "INFO", "name": "expscan", "message": "EXPOSURES_RUNS_TBL_ID", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "exp_runs_tbl_id": "2", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.603039Z", "level": "INFO", "name": "expscan", "message": "EXPOSURES_RESULTS_URL", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "exp_results_url": "http://testjas.vm:8082/xray/api/v1/internal/exposures_results", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.603355Z", "level": "INFO", "name": "expscan", "message": "vdoo_artifact_type=2", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"} Example with error on the last line:
[root@test-jas ~]# kubectl logs exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca-dj9pq --kubeconfig=/etc/rancher/k3s/k3s.yaml
{"timestamp": "2024-02-15T21:13:42.601127Z", "level": "INFO", "name": "expscan", "message": "got encrypted tokens, decrypting...", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602711Z", "level": "INFO", "name": "expscan", "message": "tokens decrypted", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602872Z", "level": "INFO", "name": "expscan", "message": "ARTIFACT_URL", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "artifact_url": "http://testjas.vm:8082/artifactory/maven-local2/log4j/log4j/1.2.17/log4j-1.2.17.jar", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.602957Z", "level": "INFO", "name": "expscan", "message": "EXPOSURES_RUNS_TBL_ID", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "exp_runs_tbl_id": "2", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.603039Z", "level": "INFO", "name": "expscan", "message": "EXPOSURES_RESULTS_URL", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "exp_results_url": "http://testjas.vm:8082/xray/api/v1/internal/exposures_results", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:13:42.603355Z", "level": "INFO", "name": "expscan", "message": "vdoo_artifact_type=2", "traceid": "569c5a5dff15a192", "app_trace_id": "569c5a5dff15a192", "app_loglevel": "INFO"}
{"timestamp": "2024-02-15T21:14:28.692616Z", "level": "ERROR", "name": "expscan", "message": "Exposures scanner got exception Exception('Failed to resolve testjas.vm')",... After scan is done, the container will disappear:
[root@test-jas ~]# kubectl get pods --kubeconfig=/etc/rancher/k3s/k3s.yaml --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE
kube-system   coredns-7b5bbc6644-pxnmz                  1/1     Running   8 (18m ago)   77d
kube-system   local-path-provisioner-687d6d7765-wb8qs   1/1     Running   9 (18m ago)   77d
kube-system   metrics-server-667586758d-2vcsg           1/1     Running   9 (18m ago)   77d
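Once a scan pod is removed, its logs can no longer be retrieved with kubectl logs; only what was sent back to Xray remains. For failures like the "Failed to resolve testjas.vm" error above, DNS resolution can be tested from inside the cluster with a temporary pod (a sketch; the busybox image and the testjas.vm hostname are just example values for this environment):

[root@test-jas ~]# kubectl run dns-test --kubeconfig=/etc/rancher/k3s/k3s.yaml --image=busybox:1.36 --restart=Never --rm -it -- nslookup testjas.vm

If the lookup fails inside the cluster but works on the host, the coredns pod shown above is the usual place to look next.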
k3s is installed as a service:
[root@test-jas ~]# cat /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server --node-ip=10.21.41.11 --tls-san 10.21.41.11 --disable servicelb --disable traefik
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
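If this unit file is modified (for example to change the --node-ip or --tls-san values), reload systemd and restart k3s for the change to take effect:

[root@test-jas ~]# systemctl daemon-reload
[root@test-jas ~]# systemctl restart k3s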
Use systemctl or journalctl to check the k3s service status and look for k3s-specific errors:
[root@test-jas ~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2024-02-15 20:57:34 UTC; 29min ago
     Docs: https://k3s.io
 Main PID: 1050 (k3s-server)
    Tasks: 97
   Memory: 884.9M
   CGroup: /system.slice/k3s.service
           ├─1050 /usr/local/bin/k3s server
           ├─1689 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
           ├─2614 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1e332a6d0e3fd0cc938d3520ba08105e6b0f7635bef39e5dbac9ea39f7822910 -address /run/k3s/containerd/containerd.sock
           ├─2617 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id aca44e4bc88cb2e37fdb6995f13d33f449ca176d0fceed3ca05bf2080915cd23 -address /run/k3s/containerd/containerd.sock
           └─2814 /var/lib/rancher/k3s/data/4cdfcad9f220e885cbc32cf86c6cb0d26b496e3949efb0aa33fb37692e11d521/bin/containerd-shim-runc-v2 -namespace k8s.io -id 27d4f0a04da057b65dfb1e131cead1b710a91c12088d7d9c74fb962972a1975b -address /run/k3s/containerd/containerd.sock

Feb 15 21:14:31 test-jas k3s[1050]: I0215 21:14:31.982550 1050 job_controller.go:502] enqueueing job default/exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca
Feb 15 21:15:01 test-jas k3s[1050]: I0215 21:15:01.800972 1050 job_controller.go:502] enqueueing job default/applicabilityscannersjob-a071365c-014c-4a99-b54b-605f93998811
Feb 15 21:15:01 test-jas k3s[1050]: I0215 21:15:01.830105 1050 job_controller.go:502] enqueueing job default/exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca
Feb 15 21:15:01 test-jas k3s[1050]: I0215 21:15:01.867262 1050 job_controller.go:502] enqueueing job default/applicabilityscannersjob-a071365c-014c-4a99-b54b-605f93998811
Feb 15 21:15:01 test-jas k3s[1050]: I0215 21:15:01.874422 1050 job_controller.go:502] enqueueing job default/exposuresscannersjob-80e6362a-275b-452d-9971-4593b0a04cca
Feb 15 21:15:03 test-jas k3s[1050]: I0215 21:15:03.495154 1050 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=df0b363e-8965-4903-a998-20e6893057d1 path="/var/lib/kubelet/pods/df0b363e-8965-4903-a998-20e6893057d1/volumes"
Feb 15 21:15:03 test-jas k3s[1050]: I0215 21:15:03.495727 1050 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f34645a7-4426-46e4-a2f9-164dba3ca80a path="/var/lib/kubelet/pods/f34645a7-4426-46e4-a2f9-164dba3ca80a/volumes"
Feb 15 21:15:47 test-jas k3s[1050]: I0215 21:15:47.347885 1050 scope.go:110] "RemoveContainer" containerID="e9df6e33e1159c7c6be144f875875d424c63989a0a210acf6ee983fd4b67fd63"
Feb 15 21:15:47 test-jas k3s[1050]: I0215 21:15:47.357066 1050 scope.go:110] "RemoveContainer" containerID="fad7612a07b5ea13f2ef0a65e23d0662f282526148dd5b06c17791b2cdee60b2"
Feb 15 21:23:53 test-jas k3s[1050]: time="2024-02-15T21:23:53Z" level=warning msg="Proxy error: write failed: write tcp 127.0.0.1:6443->127.0.0.1:59470: write: broken pipe"
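The same log can be followed live with journalctl, or narrowed to a recent window when hunting for errors (the time filter below is just an example; adjust as needed):

[root@test-jas ~]# journalctl -u k3s -f
[root@test-jas ~]# journalctl -u k3s --since "1 hour ago" --no-pager | grep -iE "error|warn"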