How to scrape an undeclared Port on a Pod with prometheus-operator

October 08, 2025

Or: Why, and how, Prometheus selects Kubernetes targets to scrape.

While it's not good practice, sometimes it's necessary to have Prometheus connect to a metrics port that is not declared in the Pod's manifest as an explicit Port. For example, the Pod may be managed by an operator that lacks an extensible PodTemplate.

portNumber is a selector not an instruction

Intuitively, you might expect to use the PodMonitor's .spec.podMetricsEndpoints[].portNumber setting to force the port explicitly. But it won't do what you expect: you'll instead find that all possible targets for the PodMonitor are shown as dropped in Prometheus's target discovery status UI.

This is because portNumber doesn't tell Prometheus "if you have matched a pod by namespace, pod label selectors, and container, then connect to this port".

It says "when considering targets to scrape, only match targets that declare this port number in their Pod spec".

What actually happens is that the Prometheus operator expands your portNumber: 9999 setting into a relabel_config stanza for the scrape config it generates for your PodMonitor, something like

relabel_configs:
  - source_labels: ['__meta_kubernetes_pod_container_port_number']
    regex: '^9999$'
    action: 'keep'

Here the keep action drops all targets unless they match the rule. It isn't dropping metrics; relabel_configs apply at target discovery time, deciding what to scrape.

Understanding how PodMonitor selects targets

The Prometheus kubernetes_sd_config for Pod says:

The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created for manually adding a port via relabeling.

(Emphasis mine).

What Prometheus does is generate a huge list of candidate targets, then apply rules from each scrape config to filter and discard them until only the targets that config should scrape remain.

If you have 5 pods, each with 2 containers, each with 2 ports, Prometheus will generate 5*2*2 = 20 targets, one per (pod, container, port) combination. Then it applies the scrape config's relabel_configs rules in turn to filter the list.
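This enumerate-then-filter process can be sketched in Python. This is a toy model of pod-role discovery and the keep action, not Prometheus's actual code; the pod, container, and port names are illustrative:

```python
import re

# Toy model (not Prometheus source) of pod-role service discovery and the
# 'keep' relabel action. 5 pods, each with 2 containers of 2 ports each.
pods = [
    {"name": f"pod-{i}",
     "containers": [
         {"name": "a", "ports": [80, 443]},
         {"name": "b", "ports": [9999, 1234]},
     ]}
    for i in range(5)
]

# One candidate target per (pod, container, port) combination.
targets = [
    {"__meta_kubernetes_pod_name": pod["name"],
     "__meta_kubernetes_pod_container_name": container["name"],
     "__meta_kubernetes_pod_container_port_number": str(port)}
    for pod in pods
    for container in pod["containers"]
    for port in container["ports"]
]
print(len(targets))  # 5 * 2 * 2 = 20

def keep(targets, source_label, regex):
    """Drop every target whose source label doesn't fully match the regex
    (Prometheus anchors relabel regexes at both ends)."""
    pattern = re.compile(regex)
    return [t for t in targets if pattern.fullmatch(t.get(source_label, ""))]

kept = keep(targets, "__meta_kubernetes_pod_container_port_number", "9999")
print(len(kept))  # one target per pod: 5
```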

For example, a pod spec like:

apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    kubernetes.io/name: example-pod
spec:
  containers:
    - name: a
      ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
      # ...
    - name: b
      ports:
        - containerPort: 9999
        - containerPort: 1234
      # ...
  # ...

... will actually be expanded into 4 potential targets by kubernetes service discovery.

If you examine the Prometheus status "targets" view or use promtool check service-discovery (1) you'll see these represented as discovered labels, matching the labels documented in kubernetes service discovery, like

[
  {
    "discoveredLabels": {
      "__meta_kubernetes_pod_name": "example",
      "__meta_kubernetes_pod_label_kubernetes_io_name": "example-pod",
      "__meta_kubernetes_pod_container_name": "a",
      "__meta_kubernetes_pod_container_port_name": "http",
      "__meta_kubernetes_pod_container_port_number": "80",
      ...
    }
  },
  {
    "discoveredLabels": {
      "__meta_kubernetes_pod_name": "example",
      "__meta_kubernetes_pod_label_kubernetes_io_name": "example-pod",
      "__meta_kubernetes_pod_container_name": "a",
      "__meta_kubernetes_pod_container_port_name": "https",
      "__meta_kubernetes_pod_container_port_number": "443",
      ...
    }
  },
  {
    "discoveredLabels": {
      "__meta_kubernetes_pod_name": "example",
      "__meta_kubernetes_pod_label_kubernetes_io_name": "example-pod",
      "__meta_kubernetes_pod_container_name": "b",
      "__meta_kubernetes_pod_container_port_number": "9999",
      ...
    }
  },
  {
    "discoveredLabels": {
      "__meta_kubernetes_pod_name": "example",
      "__meta_kubernetes_pod_label_kubernetes_io_name": "example-pod",
      "__meta_kubernetes_pod_container_name": "b",
      "__meta_kubernetes_pod_container_port_number": "1234",
      ...
    }
  },
  ...
]

(There are many more labels, omitted to focus on what's important here).

A PodMonitor like

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example
spec:
  selector:
    matchExpressions:
      - key: kubernetes.io/name
        operator: In
        values:
          - example-pod
  container: b
  podMetricsEndpoints:
    - portNumber: 9999
      path: /metrics
  # ...

will be expanded by prometheus-operator to a scrape_config with a set of label rewrite rules (acting like target selectors) functionally equivalent to:

- job_name: "example"
  metrics_path: "/metrics"
  relabel_configs:
    - source_labels: ['__meta_kubernetes_pod_label_kubernetes_io_name']
      regex: '^example-pod$'
      action: 'keep'
    - source_labels: ['__meta_kubernetes_pod_container_name']
      regex: '^b$'
      action: 'keep'
    - source_labels: ['__meta_kubernetes_pod_container_port_number']
      regex: '^9999$'
      action: 'keep'

When applied to the discovered targets above, this drops the first two because their __meta_kubernetes_pod_container_name does not match.

The third matches, because its __meta_kubernetes_pod_container_port_number is 9999. The fourth is dropped because its port number does not match. So only the desired target remains.

Service discovery sets the target's __address__ from the pod IP and the declared container port (the values behind __meta_kubernetes_pod_ip and __meta_kubernetes_pod_container_port_number), and path: /metrics, which becomes the scrape config's metrics_path, tells Prometheus where to make the HTTP request.

So what if there's no Port declared?

Let's say we now want to scrape an undeclared port 9090 on the above Pod, where we know a service is listening. Bad practice, but sometimes reality gets in the way.

Given the above, it's clear that using portNumber: 9090 won't work. No target will be generated by kubernetes_sd with a __meta_kubernetes_pod_container_port_number of 9090, so the PodMonitor won't match any targets.

If at all possible, fix your Pod spec.

If you can't, use __address__ rewriting in the podMetricsEndpoints entry's relabelings to re-define the target.

In the above PodMonitor, you might revise it to:

    podMetricsEndpoints:
      - path: /metrics
        # maps to relabel_configs in the generated scrape_config:
        relabelings:
          - action: replace
            targetLabel: __address__
            sourceLabels: [__meta_kubernetes_pod_ip]
            replacement: "${1}:9090"

This transforms the matched target, overriding the port to connect to.
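The replace action's semantics can be sketched in Python. This is a simplified model, not Prometheus's implementation; the ${1} reference works because the default relabel regex is (.*), which captures the whole source value as group 1:

```python
import re

def replace(labels, source_labels, target_label, replacement,
            regex="(.*)", separator=";"):
    """Simplified model of the Prometheus relabel 'replace' action."""
    value = separator.join(labels.get(l, "") for l in source_labels)
    m = re.fullmatch(regex, value)  # relabel regexes are fully anchored
    if m is None:
        return labels  # no match: the target label is left untouched
    # Translate Prometheus-style $1 / ${1} references to Python's \g<1>.
    expansion = re.sub(r"\$\{?(\d+)\}?", r"\\g<\1>", replacement)
    new = dict(labels)
    new[target_label] = m.expand(expansion)
    return new

# A matched target: service discovery set __address__ to the declared port.
target = {"__meta_kubernetes_pod_ip": "10.42.0.7",
          "__address__": "10.42.0.7:9999"}
rewritten = replace(target, ["__meta_kubernetes_pod_ip"],
                    "__address__", "${1}:9090")
print(rewritten["__address__"])  # 10.42.0.7:9090
```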

This will work... too well.

If the container declares Ports, but not the one you need to scrape

If you were to test this, you'd get two targets for the one pod, with the same host and port. And you'd get duplicate metrics.

The cause is that 

For each declared port of a container, a single target is generated

as noted earlier.

Service discovery does not emit another target with an empty or unset __meta_kubernetes_pod_container_port_number to mean "the container as a whole" if the container declares any ports. So you can't match a target with an unset __meta_kubernetes_pod_container_port_number.

Instead, you have to lie. In your podMetricsEndpoints entry, add a port or portNumber for a different declared Port you know will be present. This constrains target matching to only one target. Then override the actual destination with __address__ rewriting.

The end result is something like:

    podMetricsEndpoints:
      - path: /metrics
        # this is a lie; we override it in __address__ rewriting later. It's only here
        # so we don't match multiple ports and scrape duplicates.
        portNumber: 9999
        # maps to relabel_configs in the generated scrape_config:
        relabelings:
          # really scrape port 9090 instead
          - action: replace
            targetLabel: __address__
            sourceLabels: [__meta_kubernetes_pod_ip]
            replacement: "${1}:9090"

The fake port or portNumber is not required if the Pod declares no ports at all; it's only required if the Pod declares ports, but not the one you need.

I expect that a similar issue arises if the container you want to connect to declares no ports, but other containers in the Pod do. You probably have to select a different container+port combination, then use a relabeling to overwrite the container label so the resulting series carry the correct container= label.
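Untested, but a sketch of that workaround might look like the following. The port name http and the container label rewrite are assumptions; I believe PodMonitor relabelings are appended after the operator-generated rules, so the later rewrite should win:

```yaml
    podMetricsEndpoints:
      - path: /metrics
        # match via a port that *is* declared, on a different container
        port: http
        relabelings:
          # really connect to port 9090
          - action: replace
            targetLabel: __address__
            sourceLabels: [__meta_kubernetes_pod_ip]
            replacement: "${1}:9090"
          # fix up the container label so series aren't attributed
          # to the container whose port we matched on
          - action: replace
            targetLabel: container
            replacement: "b"
```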

Now please don't use this

To be clear: please don't do this. Fix your Pod specs instead.

But if you have to perpetrate some ugly hacks as an interim step, hopefully this helps you out of a jam.

Better alternatives

If possible, you should instead:

  • Fix whatever creates your Pod. This isn't always easy with the number of layers of indirection in a typical Kubernetes stack (2), but it's the best option. Remember, you can submit improvements to upstream components.
  • Use a mutating webhook to inject the Port into the deployed workload independently of the operator managing your workload. This is more work, and can cause reconciler loops with some operators that manage pods directly, but is your best option if you can't fix the operator.

If you do adopt this approach, please leave clear comments in your manifests and a kubernetes.io/description in the PodMonitor so the next person to see it has a hope of understanding what is happening and why.


End Notes

(1) Assuming you're using k8s and the prom operator you can see this with something like:

kubectl exec -i -c prometheus -n $YOUR_PROM_NS pod/$YOUR_PROM_POD -- \
  promtool check service-discovery /etc/prometheus/config_out/prometheus.env.yaml \
  podMonitor/$THE_PODMONITOR_NS/$THE_PODMONITOR_NAME/0

 

(2) I firmly believe that the Kubernetes operator pattern is a bit broken, and that almost all operators should embed a PodTemplate in their CRs to provide configuration of cross-cutting concerns. This should be part of the operator template, with built-in support for merging the PodTemplate. I should probably follow my own advice and submit an improvement for it upstream.
