Using native resources with custom controllers

Cluster information:

Kubernetes version: 1.14
Cloud being used: (put bare-metal if not on a public cloud) bare-metal
Installation method:
Host OS:
CNI and version:
CRI and version:

Hi, I am interested in implementing custom controllers that can use native Kubernetes resources such as Pods. For example, building on the sample-controller CRD, I would like to add things like a Pod spec so that I can implement my own logic for controlling Pods. I saw that I could write Go and use the client-go APIs to manage Pods. However, it was not clear whether the Kubernetes controller infrastructure would plumb the additional fields through to the syncHandler code. For example:

apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  deploymentName: example-foo
  replicas: 1
  podTemplateSpec:
    name: podname
    labels:
      foo: bar
    containers:
Can I expect the attributes of the PodSpec specified in the YAML to be available to the controller? I am hoping to use those structures with the client-go APIs to create Pods.
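
To make this concrete, here is a rough sketch of how I imagine extending the Foo types so that the whole template round-trips to the controller. This is my own assumption, not code from sample-controller: the PodTemplateSpec field is my addition, and I am assuming the usual k8s.io/code-generator deepcopy/clientset generation still applies. Note that reusing the native corev1.PodTemplateSpec would actually nest name and labels under a metadata: block in the YAML, rather than directly under podTemplateSpec as in my example above.

    // pkg/apis/samplecontroller/v1alpha1/types.go -- sketch only; the
    // PodTemplateSpec field is my own addition to the sample-controller Foo type.
    // (Status omitted for brevity.)
    package v1alpha1

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // +genclient
    // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

    // Foo is a specification for a Foo resource.
    type Foo struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec FooSpec `json:"spec"`
    }

    // FooSpec is the spec for a Foo resource.
    type FooSpec struct {
        DeploymentName string `json:"deploymentName"`
        Replicas       *int32 `json:"replicas"`

        // My addition: reuse the native core/v1 template type so the full
        // pod template from the CR YAML is decoded and handed to the
        // controller unchanged.
        PodTemplateSpec corev1.PodTemplateSpec `json:"podTemplateSpec,omitempty"`
    }

My understanding is that the generated clientset/informer decodes the whole CR into this struct, so everything under podTemplateSpec in the YAML should arrive in syncHandler as foo.Spec.PodTemplateSpec, but I would like confirmation that this is how it works.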
I have seen many examples where only a subset of resource attributes goes into the CRD and the controller logic then stamps out Pods based on Go structs embedded in the controller code. I would instead prefer to drive the created resources from the CR YAML itself, along the lines of the existing ReplicaSet. Is this something anyone has done successfully? Thanks for any input!
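
For reference, this is roughly the controller-side logic I would like to end up with, assuming the extended Foo type sketched above. The helper names (newPod, createPodForFoo) are placeholders of mine, and the Create call uses the 1.14-era client-go signature without a context argument.

    package controller

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"

        samplev1alpha1 "k8s.io/sample-controller/pkg/apis/samplecontroller/v1alpha1"
    )

    // newPod builds a Pod directly from the template carried in the Foo CR,
    // rather than from a struct hard-coded in the controller.
    func newPod(foo *samplev1alpha1.Foo) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      foo.Spec.PodTemplateSpec.Name,
                Namespace: foo.Namespace,
                Labels:    foo.Spec.PodTemplateSpec.Labels,
                OwnerReferences: []metav1.OwnerReference{
                    *metav1.NewControllerRef(foo, samplev1alpha1.SchemeGroupVersion.WithKind("Foo")),
                },
            },
            // Everything written under podTemplateSpec.spec in the CR YAML
            // lands here untouched.
            Spec: foo.Spec.PodTemplateSpec.Spec,
        }
    }

    // createPodForFoo would be called from syncHandler once the lister says
    // the Pod does not exist yet.
    func createPodForFoo(kubeclientset kubernetes.Interface, foo *samplev1alpha1.Foo) (*corev1.Pod, error) {
        return kubeclientset.CoreV1().Pods(foo.Namespace).Create(newPod(foo))
    }

The point is that the pod spec would come entirely from the CR, with nothing about the Pod compiled into the controller itself.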