Timeout when using port-forward or logs

Cluster information:

Kubernetes version: v1.20.7
Cloud being used: AWS
Installation method: Kops
Host OS: Ubuntu
CNI and version: Calico (v3.18.3)
CRI and version:

I’m having a problem where both port-forward and logs time out. When I run the command with -v8 to inspect the requests, the API server responds with status 500.

The log (from kubectl port-forward -v8):

I0619 13:19:59.954085   34560 round_trippers.go:432] POST https://api.cluster.bob.com/api/v1/namespaces/test-middleware/pods/mongodb-665685b64f-qslnz/portforward
I0619 13:19:59.954105   34560 round_trippers.go:438] Request Headers:
I0619 13:19:59.954110   34560 round_trippers.go:442]     X-Stream-Protocol-Version: portforward.k8s.io
I0619 13:19:59.954115   34560 round_trippers.go:442]     User-Agent: kubectl/v1.21.1 (darwin/amd64) kubernetes/5e58841
I0619 13:20:30.621342   34560 round_trippers.go:457] Response Status: 500 Internal Server Error in 30666 milliseconds
I0619 13:20:30.621372   34560 round_trippers.go:460] Response Headers:
I0619 13:20:30.621384   34560 round_trippers.go:463]     Cache-Control: no-cache, private
I0619 13:20:30.621393   34560 round_trippers.go:463]     Content-Type: application/json
I0619 13:20:30.621401   34560 round_trippers.go:463]     Date: Sat, 19 Jun 2021 05:20:30 GMT
I0619 13:20:30.621409   34560 round_trippers.go:463]     Content-Length: 156
F0619 13:20:30.622291   34560 helpers.go:115] error: error upgrading connection: error dialing backend: dial tcp 172.20.40.79:10250: i/o timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000398800, 0x91, 0xf6)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3d73480, 0xc000000003, 0x0, 0x0, 0xc0003ca8c0, 0x32ce40d, 0xa, 0x73, 0x100e100)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3d73480, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0006c40b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000921c0, 0x62, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x2d31640, 0xc00067b220, 0x2baff98)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/portforward.NewCmdPortForward.func1(0xc000901340, 0xc00037e730, 0x2, 0x5)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/portforward/portforward.go:115 +0x1a5
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000901340, 0xc00037e690, 0x5, 0x5, 0xc000901340, 0xc00037e690)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0003e9340, 0xc00004e1e0, 0xc00003a070, 0x7)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
main.main()
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d

I checked that all the nodes are in the Ready state, and none of the kube-proxy instances show any errors. Since Kops set up the security groups for me automatically, the kubelet port (10250) should be accessible.
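Since the fatal line is `dial tcp 172.20.40.79:10250: i/o timeout` (the API server failing to reach the kubelet), one way to double-check is to test TCP reachability of that node's kubelet port from a control-plane node. A minimal sketch, assuming bash with `/dev/tcp` support; the IP comes from the error message above:

```shell
#!/usr/bin/env bash
# Returns success if a TCP connection to host:port can be opened within 3s.
check_port() {
  local host=$1 port=$2
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null
}

# 172.20.40.79 is the node IP from the i/o timeout error; 10250 is the
# kubelet API port the apiserver dials for port-forward and logs.
if check_port 172.20.40.79 10250; then
  echo "kubelet port reachable"
else
  echo "kubelet port NOT reachable"
fi
```

If this reports the port as unreachable from the control-plane node, the problem is network-level (security group, routing, or CNI) rather than anything in kubectl itself.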