Federate to Expanse
Adding this annotation before returning the job:
job.spec.template.metadata.annotations = {"multicluster.admiralty.io/elect": ""}
works and targets the Admiralty proxy scheduler, but the proxy pod can't get past the taints on the nodes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 58s admiralty-proxy 0/234 nodes are available: 1 Insufficient smarter-devices/fuse, 1 node(s) had taint {nautilus.io/guru-research: true}, that the pod didn't tolerate, 1 node(s) had taint {nautilus.io/large-gpu: true}, that the pod didn't tolerate, 1 node(s) had taint {nautilus.io/nsi: }, that the pod didn't tolerate, 1 node(s) had taint {nautilus.io/suncave-head: true}, that the pod didn't tolerate, 1 node(s) had taint {nautilus.io/testing-ipv6: true}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate, 10 node(s) had taint {nautilus.io/stashcache: true}, that the pod didn't tolerate, 12 node(s) had taint {nautilus.io/suncave: true}, that the pod didn't tolerate, 124 node(s) didn't match Pod's node affinity/selector, 18 node(s) had taint {nautilus.io/nrp-testing: true}, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/unreachable: }, that the pod didn't ...
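One thing that might help is adding tolerations for some of the taints listed above, next to the elect annotation. A minimal sketch, assuming the kubernetes Python client and that job is a V1Job built elsewhere; the taint keys are copied from the events, the NoSchedule effect is an assumption, and which taints are actually appropriate to tolerate is a separate question:

from kubernetes import client

def add_admiralty_annotation_and_tolerations(job: client.V1Job) -> client.V1Job:
    # Elect the pod for multicluster scheduling via Admiralty.
    pod_meta = job.spec.template.metadata
    if pod_meta.annotations is None:
        pod_meta.annotations = {}
    pod_meta.annotations["multicluster.admiralty.io/elect"] = ""

    # Tolerate a couple of the Nautilus taints from the FailedScheduling
    # events above. Sketch only: the NoSchedule effect is assumed, and the
    # list of keys is illustrative, not a recommendation.
    pod_spec = job.spec.template.spec
    if pod_spec.tolerations is None:
        pod_spec.tolerations = []
    for key in ("nautilus.io/stashcache", "nautilus.io/suncave"):
        pod_spec.tolerations.append(
            client.V1Toleration(key=key, operator="Exists", effect="NoSchedule")
        )
    return job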
I tried taking the bad nodes out, with no luck. Need to ask Dimma now.
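For reference, one way to keep the job off specific nodes is a required node-affinity NotIn term on the pod template. A rough sketch, again assuming the kubernetes Python client; the hostnames are hypothetical placeholders, and this overwrites any affinity already set on the spec:

from kubernetes import client

# Hypothetical hostnames to avoid; substitute the actual bad nodes.
BAD_NODES = ["node-a.example.edu", "node-b.example.edu"]

def exclude_nodes(job: client.V1Job, bad_nodes=BAD_NODES) -> client.V1Job:
    # Require that the pod land on a node whose hostname is NOT in bad_nodes.
    job.spec.template.spec.affinity = client.V1Affinity(
        node_affinity=client.V1NodeAffinity(
            required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                node_selector_terms=[
                    client.V1NodeSelectorTerm(
                        match_expressions=[
                            client.V1NodeSelectorRequirement(
                                key="kubernetes.io/hostname",
                                operator="NotIn",
                                values=bad_nodes,
                            )
                        ]
                    )
                ]
            )
        )
    )
    return job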