How to develop HyperShift components in-cluster
Sometimes when developing HyperShift components it's useful to iterate on new binary builds inside the cluster itself, especially when working on functionality that depends on the Kubernetes or cloud environment for one reason or another.
Because such in-cluster build/image/publish/redeploy development workflows can be very tedious and slow, the HyperShift project includes a few tools and techniques to help make the feedback loop as fast as possible.
This guide makes use of the ko tool to rapidly build lightweight images which are then published directly into an OCP cluster's internal registry. This approach has the following properties which can speed up development:
- No local container runtime required to build images, and image builds are extremely fast.
- Resulting images are almost as small as the Go binary being published.
- Images are published directly into OCP's internal image registry, so images are immediately available on or near the machines that will be pulling them.
Prerequisites
- An OCP 4.9+ cluster
- The oc CLI tool
- The ko CLI tool (see below for one way to install it)
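If you don't already have ko, one common way to install it is with the Go toolchain (see the ko project documentation for other install options):

go install github.com/google/ko@latest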
For this workflow, the OCP cluster must be configured to expose its internal image registry externally so the ko tool can publish to it.
First, expose the cluster's image registry:
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
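If you want to double-check the externally reachable hostname that images will be pushed to, the route created by the setting above can be read back. This assumes the operator's default route name, default-route, in the openshift-image-registry namespace:

oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}'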
Next, generate an authentication token for the registry and install it into the local Docker config file so that ko can push images into the registry. Be sure to replace <password> with your actual kubeadmin password.
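For example, assuming you log in as kubeadmin and keep registry credentials in the default Docker config location, this can look roughly like:

oc login -u kubeadmin -p <password>
oc registry login --to="$HOME/.docker/config.json"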
Finally, configure OCP to allow any authenticated user to pull images from the internal registry. This will enable HyperShift component pods to pull the custom images you publish.
oc create clusterrolebinding authenticated-registry-viewer --clusterrole registry-viewer --group system:authenticated
Build and publish a component
To build and publish a given component into the OCP cluster from local source, use the publish-ocp.sh script. This tool uses ko to build and publish the image, and will output to stdout a single line containing the internal pullspec suitable for use by any HyperShift component deployment.
For example, to build and publish the hypershift-operator, run:
hack/publish-ocp.sh ./hypershift-operator
The output shows ko building and pushing the image: near the end it prints the public route-based pullspec, and the final line is the internal registry pullspec.
The publish-ocp.sh script prints only the internal repo pullspec to stdout to make it easy to incorporate the script into pipelines.
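For example, the pullspec can be substituted directly into other commands. One hypothetical use, assuming the container inside the operator deployment is also named operator, is to point the existing deployment at a freshly published image:

oc set image --namespace hypershift deployments/operator operator="$(hack/publish-ocp.sh ./hypershift-operator)"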
Note
Notice that the public pullspec of the image is default-route-openshift-image-registry.apps.dmace-7894.devcluster.openshift.com/hypershift/hypershift-operator-cd22....
Pods in the cluster cannot pull the image using the public repo name because the host's certificate is likely self-signed, which would require additional configuration in the cluster to enable pods to pull it.
Pods must instead reference the internal repo pullspec printed on the final line of stdout: image-registry.openshift-image-registry.svc:5000/hypershift/hypershift-operator-cd22....
Launch a custom hypershift-operator image interactively
To iterate on the hypershift-operator binary in-cluster interactively, first scale down the operator's deployment:
oc scale --replicas 0 --namespace hypershift deployments/operator
Alternatively, run the HyperShift CLI install command with the --development flag, which sets up the deployment with zero replicas:
go run . install \
--oidc-storage-provider-s3-bucket-name=$BUCKET_NAME \
--oidc-storage-provider-s3-region=$BUCKET_REGION \
--oidc-storage-provider-s3-credentials=$AWS_CREDS \
--development
Now, you can build and publish the hypershift-operator image and run it interactively in a single shot using publish-ocp.sh together with the oc debug command:
oc debug --namespace hypershift deployments/operator --image $(hack/publish-ocp.sh ./hypershift-operator) -- \
/ko-app/hypershift-operator run \
--oidc-storage-provider-s3-region $BUCKET_REGION \
--oidc-storage-provider-s3-bucket-name $BUCKET_NAME \
--oidc-storage-provider-s3-credentials /etc/oidc-storage-provider-s3-creds/credentials \
--namespace hypershift \
--pod-name operator-debug
Note
Make sure to replace $BUCKET_NAME and $BUCKET_REGION with the same values used to install HyperShift.
Your latest code should be deployed and logs should soon begin streaming. Just press ctrl-c to terminate and delete the pod.
Note
See Use custom operator images to use your own registry.
Configure a HostedCluster for iterative control plane development
To iterate on control plane components which are deployed and managed in a HostedCluster control plane namespace (e.g. the control-plane-operator or ignition-server), it's possible to configure the HostedCluster resource to scale down individual control plane components and facilitate various development workflows.
The hypershift.openshift.io/debug-deployments annotation on a HostedCluster is used to configure individual control plane components as targets for development and debugging. The value of the annotation is a comma-delimited list of control plane deployment names. Any control plane component in the list will always be scaled to 0, enabling developers to replace the components with their own processes (inside or outside the cluster) while preserving the Deployment resources to use as templates for the replacement process environments.
For example, to scale the control-plane-operator and ignition-server deployments to 0:
oc annotate -n clusters HostedCluster test-cluster hypershift.openshift.io/debug-deployments=control-plane-operator,ignition-server
Note
Update the name of the HostedCluster to match your cluster.
This will result in a HostedCluster like so:
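A rough sketch of the relevant portion of the annotated resource (other fields omitted; the names match the command above):

kind: HostedCluster
metadata:
  name: test-cluster
  namespace: clusters
  annotations:
    hypershift.openshift.io/debug-deployments: control-plane-operator,ignition-server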
To scale a given component's original deployment back up, simply remove the component's deployment name from the list.
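For example, to restore the control-plane-operator deployment while keeping the ignition-server scaled down, overwrite the annotation with the shorter list; removing the annotation entirely restores every component:

oc annotate -n clusters HostedCluster test-cluster hypershift.openshift.io/debug-deployments=ignition-server --overwrite
oc annotate -n clusters HostedCluster test-cluster hypershift.openshift.io/debug-deployments-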
The hypershift.openshift.io/pod-security-admission-label-override annotation may also need to be set in order to run debug pods locally.
oc annotate -n clusters HostedCluster test-cluster hypershift.openshift.io/pod-security-admission-label-override=baseline
Launch a custom control-plane-operator image interactively
To iterate on the control-plane-operator binary in-cluster interactively, first configure the HostedCluster to scale down the control-plane-operator deployment.
Now, you can build and publish the control-plane-operator image and run it interactively in a single shot using publish-ocp.sh together with the oc debug command. Be sure to replace $NAMESPACE with the namespace of the control plane that was deployed for the HostedCluster.
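By HyperShift's default naming convention the control plane namespace is the HostedCluster's namespace and name joined with a dash, but it's worth verifying in your cluster:

NAMESPACE=clusters-test-cluster
oc get namespace "$NAMESPACE"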
oc debug --namespace $NAMESPACE deployments/control-plane-operator --image $(hack/publish-ocp.sh ./control-plane-operator) -- /ko-app/control-plane-operator run
Your latest code should be deployed and logs should soon begin streaming. Just press ctrl-c to terminate and delete the pod.
Note
The default arguments to control-plane-operator run should be sufficient to get started.
Launch a custom ignition-server interactively
To iterate on the ignition server in-cluster interactively, first configure the HostedCluster to scale down the ignition-server deployment.
Now, you can build and publish the control-plane-operator image and run the ignition-server command interactively in a single shot using publish-ocp.sh together with the oc debug command. Be sure to replace $NAMESPACE with the namespace of the control plane that was deployed for the HostedCluster.
oc debug --namespace $NAMESPACE deployments/ignition-server --image $(hack/publish-ocp.sh ./control-plane-operator) -- /ko-app/control-plane-operator ignition-server
Your latest code should be deployed and logs should soon begin streaming. Just press ctrl-c to terminate and delete the pod.
Note
The default arguments to ignition-server should be sufficient to get started.