Since you don’t have experience managing Kubernetes (I assume), a better idea may be to use a managed Kubernetes service from AWS (you mentioned EC2). EKS provides a managed control plane, and if you don’t want to manage any EC2 instances at all, you can consider running your clients’ applications on Fargate.
If you are already familiar with IAM policies, it will be easier for you to properly isolate each client’s applications from the others. AWS also offers its own monitoring services (CloudWatch) and even managed Prometheus, Grafana and ELK/OpenSearch, the kind of stack you would otherwise have to deploy and operate yourself to monitor a self-managed cluster and its applications.
You can fully isolate applications in a self-managed cluster, but it requires some knowledge of Network Policies and RBAC. Namespaces on their own do not always isolate applications from each other (it depends on the configuration). It is old (2016), but in Kubernetes Namespaces: use cases and insights, one of the reasons given not to rely on Namespaces is:
In some circumstances Kubernetes Namespaces will not provide the isolation that you need. This may be due to geographical, billing or security factors. For all the benefits of the logical partitioning of namespaces, there is currently no ability to enforce the partitioning. Any user or resource in a Kubernetes cluster may access any other resource in the cluster regardless of namespace. So, if you need to protect or isolate resources, the ultimate namespace is a separate Kubernetes cluster against which you may apply your regular security|ACL controls.
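To illustrate the RBAC side, here is a minimal sketch (the namespace client-a and the group client-a-devs are hypothetical names) that confines a client’s users to their own namespace:

```yaml
# Hypothetical example: a Role and RoleBinding that restrict the
# group "client-a-devs" to common resources inside the "client-a"
# namespace only; they get no access to other namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: client-a-admin
  namespace: client-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: client-a-admin-binding
  namespace: client-a
subjects:
  - kind: Group
    name: client-a-devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: client-a-admin
  apiGroup: rbac.authorization.k8s.io
```

Note that RBAC only restricts what users can do through the API; it does nothing about pod-to-pod network traffic, which is where Network Policies come in.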
In 2017, with Kubernetes 1.8, Network Policies provided “the ability to enforce the partitioning” that was lacking when the article above was written, but you have to configure them yourself, starting with default (deny) policies, because:
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace.
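As a sketch, this is what a default deny-all policy looks like for a hypothetical client-a namespace (both ingress and egress); on top of it you would then add policies allowing only the traffic each application actually needs:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: client-a   # hypothetical namespace name
spec:
  podSelector: {}       # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Keep in mind that Network Policies are only enforced if your cluster’s network plugin (CNI) supports them; with a plugin that doesn’t, the policy objects exist but have no effect.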
If your goal is to move away from managing “so many servers”, note that with a self-managed cluster you will simply trade managing “many servers” for managing “many EC2 instances” (your cluster’s nodes), plus Kubernetes itself on top.
My advice would be to focus on your core business (coding) and use a managed Kubernetes service from your favorite hyperscaler (AWS/Azure/GCP).