Taming port-forwarding to k8s pods with SOCKS
I am no big fan of "devops" and kubernetes. Don't listen to the fan-bois: layers upon layers of abstractions do not make things simpler. One of my annoyances, when I build applications that need to run in a kubernetes cluster in order to be tested together, is simply reaching them. Most applications of some complexity have health-check or status APIs (typically an HTTP/REST endpoint). However, reaching those endpoints from my workstation is not as simple as it could have been. If all I need is to connect to one service, I can use
kubectl port-forward .... However, more often than not, I need to rapidly iterate over several such endpoints for several services to understand why something doesn't work.
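For a single endpoint, that workflow looks roughly like this (the service name and ports here are made up for illustration):

```shell
# Forward local port 8080 to port 80 of one service (hypothetical name)
kubectl port-forward service/my-service 8080:80 &

# The endpoint is now reachable via localhost
curl http://localhost:8080/healthz
```

One forward per service, one local port per forward; this is exactly what gets tedious with many services.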
The cause of the problem is that kubernetes uses an internal network for its pods and services, and its own DNS server, neither of which is reachable from outside the kubernetes cluster. Even though my workstation shares the same physical network as my local kubernetes cluster (I have a bare-metal kubernetes cluster in the basement), I can only reach the nodes' IP addresses, not the IP addresses of the pods or services.
The normal way to deal with this is to deploy a "jump pod" in the cluster, for example a pod just running busybox or some other minimal Linux image. You can then
kubectl exec -it ... into that pod and run your commands. However, if you need to run scripts or anything beyond curl, this quickly becomes annoying.
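In its simplest form, the jump-pod workflow looks something like this (the pod name is arbitrary):

```shell
# Start a disposable busybox pod and drop into a shell inside the cluster
kubectl run jump-pod --rm -it --image=busybox --restart=Never -- sh

# From inside that shell, cluster-internal addresses are reachable, e.g.:
#   wget -qO- http://some-service.default.svc.cluster.local/healthz
```

Anything more elaborate than one-liners means copying scripts and tokens into the pod, which is where the friction starts.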
Today I ran into this annoyance again. I wanted to run curl with some JWT tokens generated on the fly, and iterate over a series of pod IP addresses to verify an assumption. Using the jump pod was simply too slow, and it took my focus away from the problem at hand. So I decided to deploy shinysocks, a SOCKS proxy server I wrote a few years ago.
SOCKS is a protocol that lets you route TCP connections via a SOCKS proxy server.
When I run a SOCKS proxy server inside the kubernetes cluster, it does DNS lookups for the host names I provide using kubernetes' internal DNS server. And since it is inside the cluster, it can reach every IP address inside the cluster. If I know a host name or an IP address, I can use for example curl to reach that endpoint: curl connects to the SOCKS proxy, and the proxy forwards the connection to the internal resource.
To install my docker image in the kubernetes cluster with a NodePort service, making the SOCKS proxy available on my local network, I used another of my pet projects, k8deployer. With that, I could deploy it in my cluster in just a few seconds.
This is the yaml file for the deployment:

name: socks-proxy
kind: Deployment
args:
  image: jgaafromnorth/shinysocks
  port: "1080"
  service.type: NodePort
This is the command to install it in the kubernetes cluster:
k8deployer -d shinysocks.yaml
Now, since I did not specify a host port, I have to check which port kubernetes assigned to it:
kubectl get services | grep socks
socks-proxy-svc   NodePort   10.101.26.30   <none>   1080:31591/TCP   3m
So, the NodePort is 31591.
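If you do this often, you can also extract the port in a script instead of eyeballing the output. A small sketch, parsing the PORT(S) column of the sample line above (with a live cluster, the commented jsonpath query asks kubectl directly):

```shell
# Sample line from `kubectl get services` (copied from above)
line='socks-proxy-svc   NodePort   10.101.26.30   <none>   1080:31591/TCP   3m'

# The PORT(S) column reads "port:nodePort/protocol"; grab the nodePort
node_port=$(echo "$line" | awk '{split($5, p, "[:/]"); print p[2]}')
echo "$node_port"   # 31591

# Against a live cluster you could instead run:
#   kubectl get service socks-proxy-svc -o jsonpath='{.spec.ports[0].nodePort}'
```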
Now I can connect directly to host names or private IP addresses available inside the cluster from any SOCKS-enabled application. For example curl:
curl --socks5-hostname k8s-node-1:31591 http://10.244.2.196/healthz
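And since nothing needs to be set up per target, it is easy to loop. A sketch of the kind of iteration I was doing, reusing the node name and NodePort from above (the IP range is illustrative and the JWT generation is elided):

```shell
# Expand a small range of pod IPs: 10.244.2.196 ... 10.244.2.198
ips=$(seq -f '10.244.2.%g' 196 198)

# Probe /healthz on each pod through the SOCKS proxy
for ip in $ips; do
    echo "== $ip =="
    curl --silent --max-time 2 --socks5-hostname k8s-node-1:31591 \
        "http://$ip/healthz" || echo "(unreachable)"
done
```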
And that's it.
If you prefer to write more extensive and verbose yaml files for your deployments, you can of course compose a yaml file usable with kubectl, containing a Deployment and a Service. Me, I like it simple :)
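For reference, a minimal sketch of what that hand-written variant could look like, assuming the same image and port (labels and names are illustrative, not taken from my setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: socks-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: socks-proxy
  template:
    metadata:
      labels:
        app: socks-proxy
    spec:
      containers:
      - name: shinysocks
        image: jgaafromnorth/shinysocks
        ports:
        - containerPort: 1080
---
apiVersion: v1
kind: Service
metadata:
  name: socks-proxy-svc
spec:
  type: NodePort
  selector:
    app: socks-proxy
  ports:
  - port: 1080
    targetPort: 1080
```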