Cluster information:
Kubernetes version: minikube 1.17.0
Cloud being used: minikube
Installation method: minikube
Host OS: Ubuntu 20.04
Hi, when I start the cluster with:
sudo minikube start --vm-driver=none
I can access Grafana at http://grafana.local with no issue. But when I start it with a plain:
minikube start
it won't work; the browser reports:
grafana.local refused to connect.
The deployment, service, and ingress all deploy fine.
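For what it's worth, this is roughly how I verify that everything is up (a sketch, assuming kubectl is pointed at the minikube context):

kubectl config current-context                         # should print "minikube"
kubectl get deploy,svc,ingress -n monitoring           # all objects listed below
kubectl get pods -n monitoring -l app=grafana-local    # pod is Running and Ready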
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  name: grafana-core-local
  namespace: monitoring
  labels:
    app: grafana-local
    component: core
spec:
  selector:
    matchLabels:
      app: grafana-local
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana-local
        component: core
    spec:
      dnsConfig:
        options:
          - name: ndots
            value: "0"
      initContainers:
        - name: init-chown-data
          image: grafana/grafana:7.0.0
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          command: ["chown", "-R", "472:472", "/var/lib/grafana"]
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
      containers:
        - image: grafana/grafana:7.0.0
          name: grafana-core-local
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 472
          resources:
            # keep requests = limits to keep this container in the Guaranteed QoS class
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
          envFrom:
            - secretRef:
                name: grafana-env
          env:
            # The following env variables set up basic auth with the default admin user and admin password.
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_SECURITY_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-username
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-password
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
            - name: GF_SERVER_ROOT_URL
              value: "http://grafana.local"
            # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
            #   value: Admin
            # does not really work, because of template variables in exported dashboards:
            # - name: GF_DASHBOARDS_JSON_ENABLED
            #   value: "true"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
            # initialDelaySeconds: 30
            # timeoutSeconds: 1
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: grafana-persistent-storage
          persistentVolumeClaim:
            claimName: grafana-storage
        - name: grafana-datasources
          configMap:
            name: grafana-datasources
I use Traefik as the ingress controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: grafana.local
      http:
        paths:
          - backend:
              serviceName: grafana-local
              servicePort: 3000
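Side note: I know extensions/v1beta1 is deprecated; on a newer cluster I assume the same ingress would be written against networking.k8s.io/v1, roughly like this (a sketch, untested):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: grafana.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana-local
                port:
                  number: 3000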
I have the same issue with pgAdmin and all my other services, so it is not a Grafana-specific issue.
service:
➜ scripts kgs -n monitoring
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
grafana-local   NodePort   10.99.115.28   <none>        3000:31349/TCP   15m
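To rule out the ingress layer, I assume the NodePort can be curled directly on the cluster IP (a sketch; 31349 is the NodePort from the output above):

curl -I http://$(minikube ip):31349/login    # should return the Grafana login page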
ingress:
➜ scripts kgi -n monitoring
NAME              CLASS    HOSTS           ADDRESS   PORTS   AGE
grafana-ingress   <none>   grafana.local             80      9m47s
/etc/hosts:
127.0.0.1 localhost grafana.local pgadmin.local
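My guess is that with the default docker driver the cluster no longer listens on 127.0.0.1, so the hostnames would have to point at the IP reported by minikube ip instead; a sketch (192.168.49.2 is only an example value):

# /etc/hosts entry for the docker driver; replace with the actual `minikube ip` output
192.168.49.2 grafana.local pgadmin.local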
When I check my open ports, I get:
➜ scripts sudo netstat -nlpute
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 127.0.0.1:44567         0.0.0.0:*               LISTEN      0          156533     20458/kubelet
tcp        0      0 127.0.0.1:10391         0.0.0.0:*               LISTEN      1000       90457      7824/Enpass
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      0          53062      1431/cupsd
tcp        0      0 0.0.0.0:8888            0.0.0.0:*               LISTEN      0          46904      1607/nginx: master
tcp        0      0 127.0.0.1:41145         0.0.0.0:*               LISTEN      1000       230738     38207/kubectl
tcp        0      0 127.0.0.1:6942          0.0.0.0:*               LISTEN      1000       111395     13529/java
tcp        0      0 127.0.0.1:32772         0.0.0.0:*               LISTEN      0          186947     28059/docker-proxy
tcp        0      0 127.0.0.1:32773         0.0.0.0:*               LISTEN      0          186954     28080/docker-proxy
tcp        0      0 127.0.0.1:32774         0.0.0.0:*               LISTEN      0          191578     28095/docker-proxy
tcp        0      0 127.0.0.1:32775         0.0.0.0:*               LISTEN      0          189012     28108/docker-proxy
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      0          161924     20458/kubelet
tcp        0      0 127.0.0.1:45705         0.0.0.0:*               LISTEN      1000       231993     39178/kubectl
tcp        0      0 127.0.0.1:63342         0.0.0.0:*               LISTEN      1000       116872     13529/java
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      0          18965      1/init
tcp        0      0 127.0.0.1:5939          0.0.0.0:*               LISTEN      0          61290      2541/teamviewerd
tcp6       0      0 ::1:631                 :::*                    LISTEN      0          53061      1431/cupsd
tcp6       0      0 :::45247                :::*                    LISTEN      1000       227588     37926/kontena-lens
tcp6       0      0 :::10250                :::*                    LISTEN      0          157429     20458/kubelet
tcp6       0      0 :::111                  :::*                    LISTEN      0          18968      1/init
udp        0      0 224.0.0.251:5353        0.0.0.0:*                           1000       200813     7378/chrome
udp        0      0 224.0.0.251:5353        0.0.0.0:*                           1000       200811     7378/chrome
udp        0      0 224.0.0.251:5353        0.0.0.0:*                           1000       200809     7378/chrome
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           115        41969      1425/avahi-daemon:
udp        0      0 0.0.0.0:40381           0.0.0.0:*                           115        41971      1425/avahi-daemon:
udp        0      0 0.0.0.0:111             0.0.0.0:*                           0          17675      1/init
udp        0      0 0.0.0.0:631             0.0.0.0:*                           0          53066      1532/cups-browsed
udp        0      0 0.0.0.0:8976            0.0.0.0:*                           1000       115069     13529/java
udp        0      0 0.0.0.0:34405           0.0.0.0:*                           1000       115068     13529/java
udp6       0      0 :::5353                 :::*                                115        41970      1425/avahi-daemon:
udp6       0      0 :::111                  :::*                                0          18971      1/init
udp6       0      0 :::52249                :::*                                115        41972      1425/avahi-daemon:
What am I doing wrong?