
Since the K8s cluster is version 1.18, we use the release-0.5 branch.
git clone -b release-0.5 --single-branch https://github.com.cnpmjs.org/coreos/kube-prometheus.git
kubectl apply -f kube-prometheus/manifests/setup/
kubectl apply -f kube-prometheus/manifests/
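To watch the pods come up, a standard check is:
kubectl get pods -n monitoring -w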
Once all the pods are running, edit the Grafana service:
vim kube-prometheus/manifests/grafana-service.yaml

Expose the service as a NodePort.
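A minimal sketch of the change, assuming the default release-0.5 manifest: set spec.type to NodePort (the nodePort value 30030 below is a hypothetical choice; omit it to let Kubernetes pick one):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30030
  selector:
    app: grafana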
The default username and password are both admin.
To configure alerting, edit the /root/kube-prometheus/manifests/alertmanager-secret.yaml file:
apiVersion: v1
data: {}
kind: Secret
metadata:
  name: alertmanager-main
  namespace: monitoring
stringData:
  alertmanager.yaml: |-
    global:
      resolve_timeout: 1m                    # resolve timeout
      smtp_smarthost: 'smtp.qq.com:465'      # SMTP server of the mailbox
      smtp_from: '******@qq.com'             # sender address
      smtp_auth_username: '******@qq.com'    # mailbox account
      smtp_auth_password: '******'           # authorization password
      smtp_require_tls: false                # disable TLS (enabled by default)
    receivers:
    - name: Default
      email_configs:                         # email receiver configuration
      - to: "******@qq.com"                  # address that receives the alerts
    route:
      group_interval: 1m                     # how long to wait before sending new alerts for a group
      group_wait: 10s                        # how long to wait before sending the first notification for a group
      receiver: Default
      repeat_interval: 1m                    # how often a firing alert is re-sent
type: Opaque
The mail settings above can be adjusted to your own needs.
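After editing, re-apply the Secret so Alertmanager picks up the new configuration:
kubectl apply -f /root/kube-prometheus/manifests/alertmanager-secret.yaml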
Prometheus data persistence (local storage)
First create a StorageClass:
vim prometheus-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then create the PV:
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prom-local-pv-0
  labels:
    app: prometheus
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/prometheus-db
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - moa-k8s-prometheus-01
Next, edit the prometheus-prometheus.yaml file and add a storage section under spec:
  storage:
    volumeClaimTemplate:
      spec:
        selector:
          matchLabels:
            app: prometheus
        storageClassName: local-storage
        resources:
          requests:
            storage: 50Gi
Then run kubectl apply -f prometheus-prometheus.yaml.
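Note that the StorageClass and PV manifests created above must also be applied before the Prometheus pods can bind their volumes (filenames as used earlier):
kubectl apply -f prometheus-sc.yaml
kubectl apply -f pv.yaml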
Use kubectl get pv to check the PV status:
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                            STORAGECLASS    REASON   AGE
prom-local-pv-0   50Gi       RWO            Retain           Bound    monitoring/prometheus-k8s-db-prometheus-k8s-0   local-storage            7m37s
Once the status shows Bound, you can check the data directory on the node for data.
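For example, on the node that hosts the PV (moa-k8s-prometheus-01 above) the TSDB files should start appearing under the local path:
ls /data/prometheus-db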
Change the prometheus-operator data retention period
According to the official documentation, prometheus-operator keeps data for 1d by default. In that case it does not matter how you persist the data: only one day is retained, so you will never see more than one day of history.

vim prometheus-prometheus.yaml
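A minimal sketch of the change, assuming you want to keep, say, 30 days of data (30d is an example value): add a retention field under spec of the Prometheus resource, next to the storage section added earlier:
  retention: 30d    # how long Prometheus keeps TSDB data, instead of the 1d default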

Then re-apply and check the service status.
Monitoring other services
1. Monitoring Redis
Deploy redis_exporter:
wget https://github.com/oliver006/redis_exporter/releases/download/v1.12.1/redis_exporter-v1.12.1.linux-amd64.tar.gz
tar -zxvf redis_exporter-v1.12.1.linux-amd64.tar.gz
./redis_exporter -redis.addr=10.0.13.104:6379 -redis.password='2RIQdfmfgdUrPJ83qbNDYzbu2m' -web.listen-address=10.0.13.104:9121 &
Then create an Endpoints, Service, and ServiceMonitor in kube-prometheus (e.g. redis-monitor.yaml):
apiVersion: v1
kind: Endpoints
metadata:
  name: redis-metrics
  namespace: monitoring
  labels:
    k8s-app: redis-metrics
subsets:
- addresses:
  - ip: 10.0.13.104
  ports:
  - name: redis-exporter
    port: 9121
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis-metrics
  namespace: monitoring
  labels:
    k8s-app: redis-metrics
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: redis-exporter
    port: 9121
    protocol: TCP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: redis-metrics
  namespace: monitoring
  labels:
    app: redis-metrics
    k8s-app: redis-metrics
    prometheus: kube-prometheus
    release: kube-prometheus
spec:
  endpoints:
  - port: redis-exporter
    interval: 15s
  selector:
    matchLabels:
      k8s-app: redis-metrics
  namespaceSelector:
    matchNames:
    - monitoring
kubectl create -f redis-monitor.yaml
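You can confirm the exporter itself is serving metrics while waiting for the new target to appear in Prometheus (address as used above):
curl -s http://10.0.13.104:9121/metrics | grep redis_up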
kubectl delete -f prometheus-prometheus.yaml
Grafana dashboard template: 11692
2. Monitoring MySQL
https://github.com/prometheus/mysqld_exporter/releases
mysql_exporter download page (use a 0.12.x release; the commands below use 0.12.1)
Reference: https://www.cnblogs.com/jasonminghao/p/12715018.html
Deploying mysql_exporter
1. Download and extract mysql_exporter
$ tar xf /opt/src/mysqld_exporter-0.12.1.linux-amd64.tar.gz
// Copy the mysqld_exporter binary to /usr/local/bin
$ cp /opt/src/mysqld_exporter-0.12.1.linux-amd64/mysqld_exporter /usr/local/bin/
2. Create and grant a MySQL user for the exporter to use
> CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'abc12345' WITH MAX_USER_CONNECTIONS 5;
// Allows checking replication status, threads, and all databases.
> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
The max connection limit on this user prevents the monitoring queries from overloading the server.
3. Edit the MySQL config file and add the exporter user and password created above
$ vim /etc/my.cnf
[client]
user=exporter
password=abc12345
4. Start the exporter, pointing it at the MySQL config file so it can read the exporter user and password
$ mysqld_exporter --config.my-cnf=/etc/my.cnf
Common flags:
// Collect InnoDB compression metrics from information_schema
--collect.info_schema.innodb_cmp
// InnoDB storage engine status
--collect.engine_innodb_status
// Specify the config file
--config.my-cnf="/etc/my.cnf"
5. Add a systemd service
$ vim /usr/lib/systemd/system/mysql_exporter.service
[Unit]
Description=mysqld_exporter
Wants=network-online.target
After=network-online.target
[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/local/bin/mysqld_exporter \
--config.my-cnf=/etc/my.cnf
[Install]
WantedBy=multi-user.target
6. Start and enable the systemd service
$ systemctl daemon-reload
$ systemctl start mysql_exporter.service
$ systemctl enable mysql_exporter.service
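Optionally confirm the unit is active before testing the metrics endpoint:
$ systemctl status mysql_exporter.service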
If mysql_exporter is healthy, curl 10.185.***:9104/metrics | grep mysql should return a large number of metrics.
vim /root/kube-prometheus/manifests/prometheus-additional.yaml
- job_name: 'mysql-exporter'
static_configs:
- targets:
- 10.185.***:9104
# - 10.185.***:9104
- job_name: 'mongo_export'
static_configs:
- targets:
      - 10.185.***:9103
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml -n monitoring
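You can verify that the Secret was created:
kubectl get secret additional-scrape-configs -n monitoring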
vim /root/kube-prometheus/manifests/prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  image: quay.io/prometheus/prometheus:v2.15.2
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.15.2
After editing, re-apply: kubectl apply -f /root/kube-prometheus/manifests/prometheus-prometheus.yaml
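Once the pods are back up, the mysql-exporter and mongo_export jobs should appear under Status -> Targets in the Prometheus UI; for example, check the pods with:
kubectl get pods -n monitoring | grep prometheus-k8s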
Import Grafana dashboard template 11796.

Other services can be monitored in the same way.
