The Actuator interfaces are gone from the latest source code

Previously, in Cluster API 0.1, there were two Actuator interfaces (one for Cluster and the other for Machine), which were exactly what providers had to implement. But both interfaces are gone from the latest source code, so what interfaces should providers implement now?

Hi, there are no interfaces for providers to implement any more. Instead, providers implement full, regular controllers that watch provider-specific custom resources. We have an open PR to describe the differences between v1alpha1 and v1alpha2 - please see https://github.com/kubernetes-sigs/cluster-api/pull/1211. Also take a look at https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20190610-machine-states-preboot-bootstrapping.md for more details on some of the design changes coming in v1alpha2.
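
To make the new model concrete, here is a minimal sketch of such a controller, assuming controller-runtime with its current idioms (the v1alpha2-era API differed slightly). `FooMachine`, `FooMachineReconciler`, and the `example.com/...` import path are hypothetical stand-ins for a provider's own type:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical provider API package; a real provider defines its own types.
	infrav1 "example.com/cluster-api-provider-foo/api/v1alpha2"
)

// FooMachineReconciler reconciles the provider-specific FooMachine resource.
// There is no Actuator interface to satisfy: this is an ordinary controller.
type FooMachineReconciler struct {
	client.Client
}

func (r *FooMachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var fooMachine infrav1.FooMachine
	if err := r.Get(ctx, req.NamespacedName, &fooMachine); err != nil {
		// The object may already be gone; nothing to do in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Provider-specific logic goes here: create or update the underlying
	// infrastructure, then record the result in fooMachine's status.
	return ctrl.Result{}, nil
}

func (r *FooMachineReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.FooMachine{}).
		Complete(r)
}
```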

Please let me know if you have any further questions.
Andy

Hi, @ncdc,
Thanks for the feedback, which makes sense.

It seems that what a Cluster API provider should do is reuse the existing Cluster API controllers and then implement the following two additional controllers (see the wiring sketch after this list):

1. BootstrapConfig Controller
        It should watch both BootstrapConfig and Machine, but only write BootstrapConfig.
2. InfrastructureConfig Controller
        It should watch both BootstrapConfig and Machine, but only write InfrastructureConfig.
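
As a hedged illustration of that wiring (not from the thread), the sketch below uses controller-runtime's builder with current idioms: the `For` type is the only object the controller writes, while a secondary `Watches` on Machine maps Machine events back to the referenced config. `BootstrapConfig` and the bootstrap-provider import path are hypothetical; `Machine.Spec.Bootstrap.ConfigRef` is the reference field from the linked v1alpha2 proposal:

```go
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
	// Hypothetical bootstrap-provider API package.
	bootstrapv1 "example.com/cluster-api-bootstrap-provider-foo/api/v1alpha2"
)

type BootstrapConfigReconciler struct {
	client.Client
}

func (r *BootstrapConfigReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Generate bootstrap data and write it back to the BootstrapConfig here;
	// this controller never writes Machines.
	return ctrl.Result{}, nil
}

func (r *BootstrapConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		// Primary type: the only object this controller writes.
		For(&bootstrapv1.BootstrapConfig{}).
		// Also watch Machines so that changes to a Machine re-trigger
		// reconciliation of the BootstrapConfig it references.
		Watches(
			&clusterv1.Machine{},
			handler.EnqueueRequestsFromMapFunc(r.machineToBootstrapConfig),
		).
		Complete(r)
}

// machineToBootstrapConfig maps a Machine event to the BootstrapConfig named by
// Machine.Spec.Bootstrap.ConfigRef, assumed here to be in the same namespace.
func (r *BootstrapConfigReconciler) machineToBootstrapConfig(ctx context.Context, o client.Object) []ctrl.Request {
	m, ok := o.(*clusterv1.Machine)
	if !ok || m.Spec.Bootstrap.ConfigRef == nil {
		return nil
	}
	return []ctrl.Request{{
		NamespacedName: types.NamespacedName{
			Namespace: m.Namespace,
			Name:      m.Spec.Bootstrap.ConfigRef.Name,
		},
	}}
}
```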

And once v1alpha2 is ready, we should be able to use the `clusterctl` binary from the cluster-api repo directly, instead of one built by each provider implementation. Could you please confirm whether we have the same understanding?

> It seems that what a Cluster API provider should do is reuse the existing Cluster API controllers and then implement the following two additional controllers:
>
> 1. BootstrapConfig Controller
>         It should watch both BootstrapConfig and Machine, but only write BootstrapConfig.

For v1alpha2, you should only need to implement the Infrastructure provider components and should be able to leverage the bootstrapping from the Kubeadm bootstrapping provider to provide bootstrapping support. If you have a need for bootstrapping with something other than Kubeadm, then I would suggest that live in a separate provider and not bind the bootstrapping tightly with the infrastructure provisioning.

> 2. InfrastructureConfig Controller
>         It should watch both BootstrapConfig and Machine, but only write InfrastructureConfig.

This controller should only need to watch Machines and the type for your infrastructure provider (AWSMachine for the AWS provider, as an example). This controller should wait until Machine.Spec.BootstrapReady is set to true by the Cluster API Machine controller; then you can retrieve the bootstrapping data from Machine.Spec.Bootstrap.Data, which is populated by the Bootstrap controller.
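
A hedged sketch of that flow, reusing the hypothetical `FooMachine` names from the earlier sketch and current controller-runtime idioms; the exact readiness and data fields were still settling during v1alpha2 development, so this simply gates on `Machine.Spec.Bootstrap.Data` as described here:

```go
package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
	// Hypothetical provider API package, as in the earlier sketch.
	infrav1 "example.com/cluster-api-provider-foo/api/v1alpha2"
)

type FooMachineReconciler struct {
	client.Client
}

func (r *FooMachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var fooMachine infrav1.FooMachine
	if err := r.Get(ctx, req.NamespacedName, &fooMachine); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Find the owning Cluster API Machine. For brevity this sketch assumes it
	// shares the FooMachine's name and namespace; a real controller would walk
	// fooMachine's OwnerReferences instead.
	var machine clusterv1.Machine
	if err := r.Get(ctx, req.NamespacedName, &machine); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// The thread describes waiting for a BootstrapReady flag set by the Cluster
	// API Machine controller; this sketch simply gates on the data pointer.
	if machine.Spec.Bootstrap.Data == nil {
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}

	// Hand the bootstrap payload (e.g. cloud-init user data) to the provider's
	// instance-creation API here.
	userData := *machine.Spec.Bootstrap.Data
	_ = userData

	return ctrl.Result{}, nil
}
```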

> And once v1alpha2 is ready, we should be able to use the `clusterctl` binary from the cluster-api repo directly, instead of one built by each provider implementation. Could you please confirm whether we have the same understanding?

The state of clusterctl for v1alpha2 is still under heavy debate. For example: https://github.com/kubernetes-sigs/cluster-api/issues/1198 and https://github.com/kubernetes-sigs/cluster-api/issues/1187. There is also a discussion of alternative tooling to clusterctl in the form of a proposed project for clusteradm and a clusteradm operator: https://github.com/kubernetes-sigs/cluster-api/issues/1085

As a side note, I’ve been working with David Watson to get things configured for per-branch documentation with our Netlify config, which will allow us to maintain multiple versions of the documentation. This will allow us to start updating the documentation for v1alpha2 even though it is not fully complete yet, and I’m hoping that we can start to better address some of the v1alpha1/v1alpha2 confusion there.

Hi, @detiber,
Thanks for the feedback, which is really helpful.

I am not sure whether it is appropriate to adopt v1alpha2 at the current stage. If we decide to adopt v1alpha2, which release (e.g. v0.1.8?) do you suggest we use?

@ahrtr I would not suggest adopting v1alpha2 at this point unless you are attempting to target that version specifically for some reason. Things are still in flux, as work is ongoing on the kubeadm bootstrap provider.

All of the existing releases are for the v1alpha1 types, and we are not targeting a v1alpha2 oriented release until the end of August as part of the v1alpha2 milestone: https://github.com/kubernetes-sigs/cluster-api/milestone/6

@ahrtr what provider are you working on?

@detiber, do you suggest using the Actuator interfaces to implement the provider at the current stage?

@ncdc, I am not sure whether I am allowed to tell you exactly which provider I am working on, because it isn’t an open source project so far, so it could be subject to business security compliance.

I totally understand - no worries.

If you create a provider based on v1alpha1, you have to take care of bootstrapping (kubeadm or otherwise) and infrastructure.

If you can wait a bit for v1alpha2, you won’t have to worry about bootstrapping, assuming you want to use kubeadm and your infrastructure supports cloud-init.

I would say it depends on your timelines. If you need to get something working ASAP, go with v1alpha1. If you can wait another 4-ish weeks (maybe a bit more, depending on progress & testing), you can start working on an infrastructure provider now, although the kubeadm bootstrap provider is still being developed.

Got it, thanks for all the help.