热门文章：
1. wlop 4K 壁纸 4k8k 动态 壁纸（1,517 阅读）
2. Nacos持久化MySQL问题-解决方案（963 阅读）
3. Docker搭建Typecho博客（765 阅读）
4. 滑动时间窗口算法（750 阅读）
5. Nginx反向代理微服务配置（717 阅读）
标签搜索：java、javase、docker、java8、springboot、thread、spring、分布式、mysql、锁、linux、redis、源码、typecho、centos、git、map、RabbitMQ、lambda、stream
少年，累计撰写 189 篇文章，累计收到 24 条评论。
首页
栏目：生活 / 解决方案 / JAVA基础 / JVM / 多线程 / 开源框架 / 数据库 / 前端 / 分布式 / 框架整合 / 中间件 / 容器部署 / 设计模式 / 数据结构与算法 / 安全 / 开发工具 / 百度网盘 / 天翼网盘 / 阿里网盘
页面：关于 / 友链
搜索到 30 篇相关结果
2022-05-09
Docker安装Jenkins自动部署SpringBoot项目
以之前文章《使用Docker安装好Jenkins》为前提，先搭建好 Jenkins，不明白请看 https://www.yanxizhu.com/index.php/archives/138/。

环境说明：Jenkins 为 docker 部署，Docker + Jenkins + Gitee + JDK11 + Maven3.8.5。以后每次改动代码，push 提交到 gitee 码云后会自动部署，不用手动点击部署。

一、全局工具配置

【首页】-【系统管理】-【全局工具配置】。我之前启动 jenkins 容器的映射参数如下，请根据自己的映射路径自行修改：

```bash
docker run -p 10240:8080 -p 10241:50000 --name jenkins \
 -u root \
 -v /mydata/jenkins_home:/var/jenkins_home \
 -v /mydata/maven/apache-maven-3.8.5:/maven/apache-maven-3.8.5 \
 -v /mydata/jdk/jdk-11.0.10/:/jdk/jdk-11.0.10 \
 -v /mydata/maven/repo:/mydata/maven/repo \
 -v /usr/bin/docker:/usr/bin/docker \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -d jenkins/jenkins:lts
```

上面的挂载很重要，注意。全局工具按挂载路径配置：

- jdk 配置：jdk11 路径 /jdk/jdk-11.0.10
- maven 配置：maven3.8.5 路径 /maven/apache-maven-3.8.5
- git 配置：Default 路径 /usr/bin/git
- docker 配置：docker 路径 /usr/bin

注意点：1、jenkins 容器里面自带 git，可通过命令查看路径；2、注意自己 jdk、maven、docker 的安装路径。查看 jenkins 自带 git 路径命令：which git

二、插件安装

【首页】-【系统管理】-【插件管理】，安装插件 Publish Over SSH 和 Gitee Plugin。如果插件安装慢，可以修改源，请参考 https://www.yanxizhu.com/index.php/archives/138/。注意：如果在【全局工具配置】没有对应的选项，就是缺少相应插件。

三、系统配置

1、SSH remote hosts 配置：新增 ssh 登录凭证，此步骤的主要作用是 jenkins 打包镜像后，能够远程登录并执行脚本文件。Hostname 填需要登录的服务器 ip，Port 填 22（ssh 登录端口），Credentials 为登录账号和密码（此处点击【添加】按钮增加一个）。如果是本机可以不用配置。

2、Gitee 配置：链接名 gitee，Gitee 域名 URL 为 https://gitee.com，凭证添加 Gitee API V5 的私人令牌（获取地址 https://gitee.com/profile/personal_access_tokens），通过上面链接创建一个令牌，然后添加到这里。

四、准备项目

1、本地新建一个 SpringBoot 项目，新建 HelloController 控制层：

```java
package com.yanxizhu.jenkins.demo.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @description: Jenkins自动部署测试
 * @author: <a href="mailto:batis@foxmail.com">清风</a>
 * @date: 2022/5/8 17:30
 * @version: 1.0
 */
@RestController
public class HelloController {
    @GetMapping("/hello")
    public String hello(){
        return "Hello World!";
    }
}
```

本地启动项目，确保通过 127.0.0.1:8080/hello 能够访问。

2、编写 Dockerfile：

```dockerfile
# 指定基于哪个基础镜像
FROM openjdk:11
# 作者信息
MAINTAINER batis
# 挂载点声明
VOLUME /tmp
# 将本地的一个文件或目录，拷贝到容器的文件或目录里
ADD /target/jenkins-demo-0.0.1-SNAPSHOT.jar springboot.jar
# shell脚本
RUN bash -c 'touch /springboot.jar'
# 将容器的8000端口暴露，给外部访问
EXPOSE 8000
# 当容器运行起来时执行运行jar的指令
ENTRYPOINT ["java", "-jar", "springboot.jar"]
```

注意：根据自己的项目修改 jdk 版本、打包后的名称、端口信息。

五、代码上传

登录码云新建仓库，名字随意，将代码关联并提交到码云。新建仓库、代码 push 自行 google。

六、WebHooks 管理配置

打开仓库 -> 管理 -> 右侧的 WebHooks。URL 填入服务器公网 IP 地址，WebHook 密码填入下文“构建触发器”中生成的 Gitee WebHook 密码。

七、部署SpringBoot项目

1、新建部署任务：任务名字随意，构建一个自由风格的软件项目。

2、General：描述随意填写；丢弃旧的构建策略，保持构建的天数 1，保持构建的最大个数 3，根据自己需要自行修改。

3、源码管理：选择 git，Repository URL 为自己 gitee 码云仓库地址；Credentials 点击“添加”凭证，选择通过用户名密码添加，id、备注可以为空。

4、构建触发器：其它默认。找到 Gitee WebHook 密码，点击“生成”按钮生成，然后将该密码填入上面“六、WebHooks 管理配置”中。轮询 SCM 策略：* * * * *。注意 * 中间有空格；当您输入 "* * * * *" 时，意思为“每分钟”，也许您希望用 "H * * * *" 每小时轮询一次。

5、构建：选择执行 shell 脚本：

```bash
#!/bin/bash -lex
docker rm -f app_docker
sleep 1
docker rmi -f app_docker:1.0
sleep 1
mvn clean install -Dmaven.test.skip=true
sleep 1
docker build -t app_docker:1.0 -f ./src/main/Dockerfile .
sleep 1
docker run -d -p 8000:8000 --name app_docker app_docker:1.0
```

注意自己的端口和容器名称。

6、访问测试：通过自己的 xxx.xxx.xx.xx:8000/hello 即可访问到自己写的 Hello World 了。以后每次改动代码，push 提交到 gitee 码云后会自动部署，不用手动点击部署。

7、问题记录及解决方案，比如：
1、查不到 mvn、docker、jdk 命令，可能是 jenkins 容器中环境配置问题，可以参考《Jenkins容器docker部署springboot项目-问题记录》；
2、如果开启了防火墙，注意开放相应端口或关闭防火墙；
3、部署遇到问题，查看部署日志，以及 google、baidu。

相关参考：Docker开启Remote API访问、docker启动Jenkins报错、Docker安装Jenkins、Nginx配置Jenkins二级域名，以及443 SSL证书访问、Jenkins容器docker部署springboot项目-问题记录
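补充一个可选的小改进：可以在上面构建 shell 的末尾追加一段简单的健康检查，让部署失败时构建直接标红。下面只是一个示意脚本，假设应用映射在宿主机 8000 端口、接口路径为 /hello（与前文示例一致），并非 Jenkins 的固定写法：

```bash
#!/bin/bash
# 部署后的健康检查示意：等待容器启动，再探测 /hello 接口
# 假设：容器名 app_docker，映射宿主机 8000 端口，路径 /hello
sleep 10
if curl -sf http://127.0.0.1:8000/hello > /dev/null; then
  echo "deploy ok"
else
  echo "deploy failed, last container logs:"
  docker logs --tail 50 app_docker   # 打印容器日志便于排查
  exit 1                             # 返回非 0，让 Jenkins 将本次构建标记为失败
fi
```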
2022年05月09日 · 400 阅读 · 0 评论 · 6 点赞
2022-05-09
Jenkins容器docker部署springboot项目-问题记录
一、docker 容器内不能使用 vim

解决方案：以 root 进入容器内

docker exec -it -u root jenkins /bin/bash

更新软件包

apt-get update

升级过程可能非常慢，因为是从海外站点拉取，所以我们可以配置一个国内的镜像源，加速更新。备份原文件：

mv /etc/apt/sources.list /etc/apt/sources.list.bak

查看容器中 Debian 版本：

cat /etc/issue

修改 sources.list 文件，根据自己的版本修改成对应内容，修改内容参考阿里镜像 https://developer.aliyun.com/mirror/debian。我的容器 Debian 为 11.x 版本，修改内容为：

```bash
cat >/etc/apt/sources.list <<EOF
deb http://mirrors.aliyun.com/debian/ bullseye main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye main non-free contrib
deb http://mirrors.aliyun.com/debian-security/ bullseye-security main
deb-src http://mirrors.aliyun.com/debian-security/ bullseye-security main
deb http://mirrors.aliyun.com/debian/ bullseye-updates main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye-updates main non-free contrib
deb http://mirrors.aliyun.com/debian/ bullseye-backports main non-free contrib
deb-src http://mirrors.aliyun.com/debian/ bullseye-backports main non-free contrib
EOF
```

重新执行 apt-get update，然后安装 vim 和 rpm：

apt-get install -y vim
apt-get install -y rpm

二、docker 容器内 vim 不能粘贴内容

vim 右键进入 visual 模式后无法粘贴。解决方案：

vim /usr/share/vim/vim80/defaults.vim

修改第 70 行，在 mouse=a 的 = 前面加个 -，修改后如下：

```vim
if has('mouse')
  set mouse-=a
endif
```

三、docker 容器内环境配置

修改环境变量配置：

vi /etc/profile

新增 jdk、maven 环境变量配置：

```bash
# java环境变量
export JAVA_HOME=/jdk/jdk-11.0.10
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib
# maven环境变量
export M2_HOME=/maven/apache-maven-3.8.5
export PATH=$PATH:$JAVA_HOME/bin:$M2_HOME/bin
```

重新加载环境变量：

source /etc/profile

检验是否配置成功：

java -version
mvn -v
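配置完成后，也可以不进入交互式终端，直接在宿主机上验证容器内的工具链。下面是示意命令，假设容器名为 jenkins、jdk 与 maven 的挂载路径与前文一致（docker exec 默认不会加载 /etc/profile，所以这里用绝对路径调用）：

```bash
# 在宿主机逐项验证 jenkins 容器内的 jdk、maven、git 是否可用
docker exec jenkins /jdk/jdk-11.0.10/bin/java -version
docker exec jenkins /maven/apache-maven-3.8.5/bin/mvn -v
docker exec jenkins which git
```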
2022年05月09日 · 210 阅读 · 1 评论 · 4 点赞
2022-05-08
Docker安装Jenkins
jdk 安装：下载 jdk，解压到个人安装目录 /mydata/jdk/jdk-11.0.10。

maven 安装：下载 maven，解压到个人安装目录 /mydata/maven/apache-maven-3.8.5，修改 maven 配置文件 setting.xml，设置本地仓库目录：

<localRepository>/mydata/maven/repo</localRepository>

添加阿里云镜像，在 mirrors 节点下增加以下内容：

```xml
<mirrors>
  <mirror>
    <id>alimaven</id>
    <mirrorOf>central</mirrorOf>
    <name>aliyun maven</name>
    <url>http://maven.aliyun.com/nexus/content/repositories/central/</url>
  </mirror>
</mirrors>
```

开启 Docker Remote API：关闭防火墙，或者开放防火墙的端口

```bash
#关闭防火墙
systemctl stop firewalld.service
# 禁止firewall开机启动
systemctl disable firewalld.service
# 或者允许固定端口
firewall-cmd --zone=public --add-port=2375/tcp --permanent
firewall-cmd --reload
```

Docker 环境下安装 Jenkins：拉取最新的 Jenkins 的 docker 镜像

docker pull jenkins/jenkins:lts

启动 Jenkins 容器：

```bash
docker run -p 10240:8080 -p 10241:50000 --name jenkins \
 -u root \
 -v /mydata/jenkins_home:/var/jenkins_home \
 -v /mydata/maven/apache-maven-3.8.5:/maven/apache-maven-3.8.5 \
 -v /mydata/jdk/jdk-11.0.10/:/jdk/jdk-11.0.10 \
 -v /mydata/maven/repo:/mydata/maven/repo \
 -v /usr/bin/docker:/usr/bin/docker \
 -v /var/run/docker.sock:/var/run/docker.sock \
 -d jenkins/jenkins:lts
```

注意：检查自己的目录和端口是否相同，不同请自行修改。挂载目录说明：

- /mydata/jenkins_home 为 jenkins 安装配置文件地址；
- /mydata/maven/apache-maven-3.8.5:/maven/apache-maven-3.8.5，需提前下载好 maven 并解压到宿主机 /mydata/maven/apache-maven-3.8.5 目录；
- /mydata/jdk/jdk-11.0.10/ 为宿主机本地 jdk 目录，需提前下载解压到该目录；
- /mydata/maven/repo 为后面需要用到的 maven 仓库地址；
- -p 10240:8080 -p 10241:50000 为端口映射，根据自己端口需求更改；
- --name jenkins 为容器名称。

遇到问题：iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 10241 -j DNAT --to-destination 172.17.0.5:50000 ! -i docker0: iptables: No chain/target/match by that name.

解决方案：重启 docker

systemctl restart docker

查看 jenkins 初始密码（第一次访问 jenkins 需要用到这个管理员密码）：

docker logs jenkins

配置 jenkins：首次访问 jenkins，用自己的 ip 加映射的端口，我这里配置的是 127.0.0.1:10240。等待启动完成，会提示输入管理员密码，也就是上面日志里看到的密码。首次进入 jenkins 需要下载推荐插件，点击左边第一项【安装推荐的插件】，等待过程有点长，请耐心等待。插件下载完成后进入下一步：创建一个管理员账号 admin / admin，输入实例配置 url：http://127.0.0.1:10240。

注意：如果插件安装失败，提示“无法连接到Jenkins”，关闭 jenkins 修改安装源。进入 jenkins 的工作目录，修改 hudson.model.UpdateCenter.xml，更改为国内的清华大学镜像地址 https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json，然后再重启 jenkins，稍等一会即可安装。
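除了 docker logs，初始管理员密码也可以直接从密码文件读取。示意命令如下，假设容器名为 jenkins、jenkins_home 按前文挂载在宿主机 /mydata/jenkins_home：

```bash
# 方式一：进入容器读取初始管理员密码（Jenkins 默认存放位置）
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
# 方式二：jenkins_home 已挂载到宿主机，也可以直接在宿主机读取
cat /mydata/jenkins_home/secrets/initialAdminPassword
```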
2022年05月08日 · 391 阅读 · 1 评论 · 3 点赞
2022-05-08
Nginx配置Jenkins二级域名,以及443 SSL证书访问
nginx 配置 jenkins 二级域名，以及 443 SSL 访问。新增配置文件：

```nginx
server {
    listen 443 ssl;
    #listen 80;
    server_name jenkins.yanxizhu.com;
    #error_page 404/404.html;
    ssl_certificate /etc/nginx/conf.d/jenkins.yanxizhu.com_bundle.crt;
    ssl_certificate_key /etc/nginx/conf.d/jenkins.yanxizhu.com.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    error_page 497 https://$host$request_uri;

    #Location配置
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_pass http://xx.xxx.xx.xx:10240;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    access_log /var/log/nginx/jenkins.yanxizhu.com.log;
}

server {
    listen 80;
    server_name jenkins.yanxizhu.com;
    rewrite ^(.*) https://jenkins.yanxizhu.com$1 permanent;
}
```

注意：端口、ip 地址、域名，以及域名解析配置、SSL 证书文件名，请根据自己的情况修改。
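修改配置后建议先做语法检查再平滑重载，并验证 80 端口是否正确跳转到 https。下面是示意命令，域名沿用前文的 jenkins.yanxizhu.com；若 nginx 是容器方式运行，可在命令前加 docker exec 容器名：

```bash
# 语法检查通过后再平滑重载 nginx
nginx -t && nginx -s reload
# 验证 80 端口是否返回 301 跳转到 https
curl -I http://jenkins.yanxizhu.com
# 验证 https 访问是否正常（-k 可在证书链告警时临时跳过校验）
curl -kI https://jenkins.yanxizhu.com
```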
2022年05月08日 · 394 阅读 · 1 评论 · 2 点赞
2022-05-08
iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 10241 -j DNAT --to-destination 172.17.0.5:50000 ! -i docker0: iptables: No chain/target/match by that name.
docker 启动 Jenkins 报错：

iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 10241 -j DNAT --to-destination 172.17.0.5:50000 ! -i docker0: iptables: No chain/target/match by that name.

解决办法：重启 docker

systemctl restart docker
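重启 docker 会重建 nat 表中的 DOCKER 链。排查时也可以先确认该链是否存在，再重启并拉起容器。示意命令如下，容器名以前文的 jenkins 为例（属假设，按自己的容器名替换）：

```bash
# 查看 nat 表中 docker 维护的 DOCKER 链及其规则
iptables -t nat -L DOCKER -n --line-numbers
# 若链缺失或规则不完整，重启 docker 让其重建，再启动容器
systemctl restart docker
docker start jenkins
```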
2022年05月08日 · 210 阅读 · 1 评论 · 2 点赞
2022-05-08
Docker开启Remote API访问
方法一

1、修改 /usr/lib/systemd/system/docker.service 配置，在 [Service] 部分的 ExecStart 后面追加 -H tcp://0.0.0.0:2375，修改后如下：

```ini
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
```

2、重新加载配置文件：

systemctl daemon-reload
systemctl restart docker

方法二

修改配置：

sudo vim /etc/default/docker

加入下面配置：

DOCKER_OPTS="-H tcp://0.0.0.0:2375"

重新加载配置文件：

sudo systemctl restart docker

方法三

修改配置文件 daemon.json：

vim /etc/docker/daemon.json

加入下面配置：

```json
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
```

tcp socket 表示允许任何远程客户端通过 2375 端口连接 Docker Daemon；unix socket 供本地客户端连接 Docker Daemon。

重新加载配置文件：

systemctl daemon-reload
systemctl restart docker

检查是否开启：

ps -ef|grep docker

即可看到端口是否开启。
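配置完成后可以验证 Remote API 是否真的可用。示意命令如下，2375 为前文配置的端口，验证时把 127.0.0.1 换成自己服务器地址即可；该接口默认没有认证，注意只对可信网络开放：

```bash
# 通过 HTTP 接口查看 docker 版本信息，能返回 JSON 即说明 Remote API 已开启
curl http://127.0.0.1:2375/version
# 也可以用 docker 客户端指定远程地址访问
docker -H tcp://127.0.0.1:2375 ps
```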
2022年05月08日 · 211 阅读 · 1 评论 · 2 点赞
2022-03-24
K8S集群部署
K8S集群部署一、VirtualBox虚拟机搭建通过Oracle VM VirtualBox,Vagrant搭建三台虚拟机。准备工作1、准备软件CentOS虚拟系统:CentOS-7-x86_64-Vagrant-2004_01.VirtualBox.box虚拟软件(VirtualBox):VirtualBox-6.1.34-150636-Win.exeVagrant安装虚拟机软件:vagrant_2.2.19_i686.msi。2、软件安装先安装VirtualBox、然后安装Vagrant。3、修改配置虚拟机、配置存放位置如下:虚拟系统安装1、创建配置文件新建Vagrantfile文件,批量创建3台虚拟机,内容如下:该文件是放在个人用户名文件夹下的。Vagrant.configure("2") do |config| (1..3).each do |i| config.vm.define "k8s-node#{i}" do |node| # 设置虚拟机的Box node.vm.box = "centos/7" config.vm.box_url = "https://mirrors.ustc.edu.cn/centos-cloud/centos/7/vagrant/x86_64/images/CentOS-7.box" # 设置虚拟机的主机名 node.vm.hostname="k8s-node#{i}" # 设置虚拟机的IP node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0" # 设置主机与虚拟机的共享目录 # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share" # VirtaulBox相关配置 node.vm.provider "virtualbox" do |v| # 设置虚拟机的名称 v.name = "k8s-node#{i}" # 设置虚拟机的内存大小 v.memory = 4096 # 设置虚拟机的CPU个数 v.cpus = 4 end end end end 注意一个细节,如果不修改配置文件和存放路径,可能一直卡在文件复制过程中,检查是否复制了全部个人目录下数据。如果是请修改Vagrantfile文件:C:\Users\自己用户名\.vagrant.d\boxes\centos-VAGRANTSLASH-7\0\virtualbox\VagrantfileVagrant.configure("2") do |config| config.vm.base_mac = "5254004d77d3" config.vm.synced_folder "./.vagrant", "/vagrant", type: "rsync" end.vagrant:为自己用户名下的.vagrant文件夹2、创建虚拟系统进入配置好的Vagrantfile目录,cmd命令快速批量生成3台虚拟机:vagrant upSSH配置1、cmd进入虚拟机vagrant ssh k8s-node12、切换root账号,默认密码vagrantsu root3、修改ssh配置,开启 root 的密码访问权限vi /etc/ssh/sshd_config修改配置文件:PasswordAuthentication为yesPasswordAuthentication yes4、重启sshdservice sshd restart5、三台虚拟机相同配置。二、网络配置1、全局添加网卡2、网卡配置为每台虚拟配置网卡一NET网络,并重新生成mac地址。说明:网卡1是实际用的地址,网卡2是用于本地ssh链接到虚拟机的网络。注意:三台虚拟机进行同样操作,记得重新生成mac地址。3、linux 环境配置通过上面配置后,启动3太虚拟机,通过SSH软件连接到3台虚拟机,都进行下面操作:注意:三个节点都执行关闭防火墙systemctl stop firewalld systemctl disable firewalld关闭 selinuxsed -i 's/enforcing/disabled/' /etc/selinux/config setenforce 0关闭 swap临时关闭swapoff -a 永久sed -ri 's/.*swap.*/#&/' /etc/fstab验证,swap 必须为 0;free -g 添加主机名与 IP 对应关系vi /etc/hosts添加自己主机net网络ip与虚拟机名字映射:(注意,重点,ip不要搞错了,是eth0的ip)10.0.2.15 k8s-node1 10.0.2.6 k8s-node2 10.0.2.7 k8s-node3将桥接的 IPv4 流量传递到 iptables 的链:cat > /etc/sysctl.d/k8s.conf << EOF net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF重启sysctl --syste疑难问题,遇见提示是只读的文件系统,运行如下命令mount -o remount rw三、安装K8S环境所有节点安装 Docker、kubeadm、kubelet、kubectldocker安装1、卸载系统之前的 dockersudo yum remove docker \ docker-client \ docker-client-latest \ docker-common \ docker-latest \ docker-latest-logrotate \ docker-logrotate \ docker-engine2、安装 Docker-CEsudo yum install -y yum-utils \ device-mapper-persistent-data \ lvm23、设置 docker repo 的 yum 位置sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo4、安装 docker,以及 docker-clisudo yum install -y docker-ce docker-ce-cli containerd.iosudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo5、更新并安装Docker-CEsudo yum makecache fastsudo yum install -y docker-ce docker-ce-cli containerd.io6、配置docker加速sudo mkdir -p /etc/docker sudo tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"] } EOF sudo systemctl daemon-reload sudo systemctl restart docker注意看看daemon.json最后文件是否创建成功,对不对。7、启动 docker & 设置 docker 开机自启systemctl enable docker8、添加阿里云 yum 源cat > /etc/yum.repos.d/kubernetes.repo << EOF [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=0 repo_gpgcheck=0 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg 
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF安装 kubeadm,kubelet 、 kubectlyum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.31、设置开机启动systemctl enable kubelet systemctl start kubelet2、通过命令查看现在是起不来得,还没配置好。systemctl status kubelet四、部署 k8s-mastermaster 节点初始化1、初始化注意修改为自己master主机地址,我的master虚拟机IP:10.0.2.15kubeadm init \ --apiserver-advertise-address=10.0.2.15 \ --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \ --kubernetes-version v1.17.3 \ --service-cidr=10.96.0.0/16 \ --pod-network-cidr=10.244.0.0/162、/root/新建文件夹k8s,然后cd k8s目录,新建master_images.sh文件:注意版本。#!/bin/bash images=( kube-apiserver:v1.17.3 kube-proxy:v1.17.3 kube-controller-manager:v1.17.3 kube-scheduler:v1.17.3 coredns:1.6.5 etcd:3.4.3-0 pause:3.1 ) for imageName in ${images[@]} ; do docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName # docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName done3、修改脚本权限:chmod 700 master_images.sh4、执行脚本:./master_images.sh执行结果:[bootstrap-token] Using token: zrevjr.nwh8xqynzt2yopdb [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 10.0.2.15:6443 --token zrevjr.nwh8xqynzt2yopdb \ --discovery-token-ca-cert-hash sha256:500b649b719b910b065659bd3dfac38aa184b9450d37fd2de0e8c0e69840de88 5、master根据提示执行命令:mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config注意:记录下自己上面的打印信息,后面会用到生成的信息。kubeadm join 10.0.2.15:6443 --token zrevjr.nwh8xqynzt2yopdb \ --discovery-token-ca-cert-hash sha256:500b649b719b910b065659bd3dfac38aa184b9450d37fd2de0e8c0e69840de88 安装网络插件1、安装kube-flannel.yml文件地址https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.ymlkubectl apply -f kube-flannel.yml由于是海外站点可能访问不到,kube-flannel.yml配置文件内容如下:--- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: psp.flannel.unprivileged annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec: privileged: false volumes: - configMap - secret - emptyDir - hostPath allowedHostPaths: - pathPrefix: "/etc/cni/net.d" - pathPrefix: "/etc/kube-flannel" - pathPrefix: "/run/flannel" readOnlyRootFilesystem: false # Users and groups runAsUser: rule: RunAsAny supplementalGroups: rule: RunAsAny fsGroup: rule: RunAsAny # Privilege Escalation allowPrivilegeEscalation: false defaultAllowPrivilegeEscalation: false # Capabilities allowedCapabilities: ['NET_ADMIN'] defaultAddCapabilities: [] requiredDropCapabilities: [] # Host namespaces hostPID: false hostIPC: false hostNetwork: true hostPorts: - min: 0 max: 65535 # SELinux seLinux: # SELinux is unused in CaaSP rule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel rules: - apiGroups: ['extensions'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: ['psp.flannel.unprivileged'] - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - "" resources: - nodes verbs: - list - watch - apiGroups: - "" resources: - nodes/status verbs: - patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: flannel roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: flannel subjects: - kind: ServiceAccount name: flannel namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: flannel namespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata: name: kube-flannel-cfg namespace: kube-system labels: tier: node app: flannel data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.3.1", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } net-conf.json: | { "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } } --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-amd64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - 
linux - key: beta.kubernetes.io/arch operator: In values: - amd64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-amd64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-amd64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm64 namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm64 hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-arm64 command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-arm64 command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-arm namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - arm hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-arm command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-arm 
command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-ppc64le namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - ppc64le hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-ppc64le command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-ppc64le command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-flannel-ds-s390x namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: In values: - linux - key: beta.kubernetes.io/arch operator: In values: - s390x hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-s390x command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-s390x command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: 
/run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg2、查看所有名称空间的 pods注意注意:多等一会,如果一直Pending状态,缺少网络插件,需要重新部署flannel网络插件。处理方案:执行脚本kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml重新查看:如果都是Running表示安装成功。kubectl get pods --all-namespaces查看节点信息:kubectl get nodes加入节点1、在 Node2、node3 节点执行,上面初始化master时生成的,加入:kubeadm join 10.0.2.15:6443 --token zrevjr.nwh8xqynzt2yopdb \ --discovery-token-ca-cert-hash sha256:500b649b719b910b065659bd3dfac38aa184b9450d37fd2de0e8c0e69840de88 2、查看节点信息:多等一会等全部Ready时表示成功。kubectl get nodes可监控 pod进度:watch kubectl get pod -n kube-system -o wide再次查看节点信息:kubectl get nodes此时都是Ready状态,整个集群就搭建成功了。一个manster+2个node节点。五、K8S测试1、master自动选择哪个节点,部署一个 tomcat。kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8说明:tomcat6:部署应用名称image:镜像获取到 tomcat 信息查看资源:kubectl get all查看更详细信息:kubectl get all -o wide可以看到tomcat部署在了node3节点。在node3执行docker命令可以看到tomcat已经执行:docker imagesdocker ps查看默认命名空间信息:kubectl get pods查看全部命名空间信息:kubectl get pods --all-namespaces2、容灾恢复node3节点模拟宕机,停掉tomcat应用:docker stop 9f2fad305252会自动再部署一个tomcat容器。node3节点直接关机测试模拟宕机。查看节点信息node3已经是noreadykubectl get nodes查看详细信息:kubectl get pods -o wide此时node2节点已经在拉去创建tomcat镜像创建tomcat了node2节点查看docker信息,已经有tomcat了:docker imagesdocker ps这就是所谓的容灾恢复。3、暴露 nginx 访问kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort说明:Pod 的 80 映射容器的 8080;service 会代理 Pod 的 80端口。查看服务信息:svc(service的简写)kubectl get svc -o wide然后就可以通过http://192.168.56.101:32476/访问了查看信息:kubectl get all4、动态扩容测试kubectl scale --replicas=3 deployment tomcat6扩容了多份,所有无论访问哪个 node 的指定端口,都可以访问到 tomcat6。上面扩容了3个tomcat6.查看扩容后情况:kubectl get pods -o wide查看服务端口信息:kubectl get svc -o wide此时通过任何节点32476端口都可以访问tomcat了。缩容同样可以实现:kubectl scale --replicas=1 deployment tomcat65、删除部署 查看资源信息kubectl get all删除整个部署信息 kubectl delete deployment.apps/tomcat6此时再查看kubectl get allkubectl get pods已经没有tomcat部署信息了流程:创建 deployment 会管理 replicas,replicas 控制 pod 数量,有 pod 故障会自动拉起 新的 pod。六、kubesphere最小化安装安装helmHelm 是Kubernetes 的包管理器。包管理器类似于我们在Ubuntu 中使用的apt、Centos中使用的yum 或者Python 中的pip 一样,能快速查找、下载和安装软件包。Helm 由客户端组件helm 和服务端组件Tiller 组成, 能够将一组K8S 资源打包统一管理, 是查找、共享和使用为Kubernetes 构建的软件的最佳方式。有3种安装方案,推荐第三种方案。1、helm安装方案一:直接下载安装curl -L https://git.io/get_helm.sh | bash方案二:使用通过给定的get_helm.sh脚本安装。chmod 700 get_helm.sh然后执行安装./get_helm.sh可能有文件格式兼容性问题,用vi 打开该sh 文件,输入::set ff 回车,显示fileformat=dos,重新设置下文件格式: :set ff=unix 保存退出: :wq方案三:上面2种方案都需要可以访问外网。因此有了这种离线安装方案。离线下载安装,推荐使用这种国内。注意:指定版本才行,其他版本官网不支持。1、下载离线安装包https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz2、解压tar -zxvf helm-v2.16.3-linux-amd64.tar.gz3、安装cp linux-amd64/helm /usr/local/bin cp linux-amd64/tiller /usr/local/bin4、修改权限chmod 777 /usr/local/bin/helm chmod 777 /usr/local/bin/tiller5、验证:helm version2、授权文件配置创建权限(只需要master 执行),创建授权文件helm-rbac.yaml,内容如下:apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system应用配置文件:kubectl apply -f helm-rbac.yaml3、安装Tiller(master 执行)初始化:helm init --service-account tiller --upgrade \ -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 \ --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts查看:kubectl get pods --all-namespaces3、去掉 master 节点的 Taint等所有组件Running后,确认 master 节点是否有 Taint,如下表示master 节点有 Taint。kubectl describe node k8s-node1 | grep 
Taint去掉污点,否则污点会影响OpenEBS安装:kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-安装 OpenEBS创建 OpenEBS 的 namespace,OpenEBS 相关资源将创建在这个 namespace 下:kubectl create ns openebs安装 OpenEBS如果直接安装可能会报错:"Error: failed to download "stable/openebs" (hint: running helm repo update may help"解决方法:换镜像helm repo remove stable helm repo add stable http://mirror.azure.cn/kubernetes/charts执行安装helm install --namespace openebs --name openebs stable/openebs --version 1.5.03、安装 OpenEBS 后将自动创建 4 个 StorageClass,查看创建的 StorageClass:kubectl get sc4、将 openebs-hostpath设置为 默认的 StorageClass:kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'最小化安装kubesphere下载最小化安装配置文件kubesphere-mini.yaml内容如下:--- apiVersion: v1 kind: Namespace metadata: name: kubesphere-system --- apiVersion: v1 data: ks-config.yaml: | --- persistence: storageClass: "" etcd: monitoring: False endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 port: 2379 tlsEnable: True common: mysqlVolumeSize: 20Gi minioVolumeSize: 20Gi etcdVolumeSize: 20Gi openldapVolumeSize: 2Gi redisVolumSize: 2Gi metrics_server: enabled: False console: enableMultiLogin: False # enable/disable multi login port: 30880 monitoring: prometheusReplicas: 1 prometheusMemoryRequest: 400Mi prometheusVolumeSize: 20Gi grafana: enabled: False logging: enabled: False elasticsearchMasterReplicas: 1 elasticsearchDataReplicas: 1 logsidecarReplicas: 2 elasticsearchMasterVolumeSize: 4Gi elasticsearchDataVolumeSize: 20Gi logMaxAge: 7 elkPrefix: logstash containersLogMountedPath: "" kibana: enabled: False openpitrix: enabled: False devops: enabled: False jenkinsMemoryLim: 2Gi jenkinsMemoryReq: 1500Mi jenkinsVolumeSize: 8Gi jenkinsJavaOpts_Xms: 512m jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g sonarqube: enabled: False postgresqlVolumeSize: 8Gi servicemesh: enabled: False notification: enabled: False alerting: enabled: False kind: ConfigMap metadata: name: ks-installer namespace: kubesphere-system --- apiVersion: v1 kind: ServiceAccount metadata: name: ks-installer namespace: kubesphere-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: creationTimestamp: null name: ks-installer rules: - apiGroups: - "" resources: - '*' verbs: - '*' - apiGroups: - apps resources: - '*' verbs: - '*' - apiGroups: - extensions resources: - '*' verbs: - '*' - apiGroups: - batch resources: - '*' verbs: - '*' - apiGroups: - rbac.authorization.k8s.io resources: - '*' verbs: - '*' - apiGroups: - apiregistration.k8s.io resources: - '*' verbs: - '*' - apiGroups: - apiextensions.k8s.io resources: - '*' verbs: - '*' - apiGroups: - tenant.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - certificates.k8s.io resources: - '*' verbs: - '*' - apiGroups: - devops.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - monitoring.coreos.com resources: - '*' verbs: - '*' - apiGroups: - logging.kubesphere.io resources: - '*' verbs: - '*' - apiGroups: - jaegertracing.io resources: - '*' verbs: - '*' - apiGroups: - storage.k8s.io resources: - '*' verbs: - '*' - apiGroups: - admissionregistration.k8s.io resources: - '*' verbs: - '*' --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ks-installer subjects: - kind: ServiceAccount name: ks-installer namespace: kubesphere-system roleRef: kind: ClusterRole name: ks-installer apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: ks-installer namespace: kubesphere-system labels: app: 
ks-install spec: replicas: 1 selector: matchLabels: app: ks-install template: metadata: labels: app: ks-install spec: serviceAccountName: ks-installer containers: - name: installer image: kubesphere/ks-installer:v2.1.1 imagePullPolicy: "Always"1、执行安装kubectl apply -f kubesphere-mini.yaml2、等所有pod启动好后,执行日志查看。查看所有pod状态kubectl get pods --all-namespaces查看所有节点:kubectl get nodes3、查看日志:kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f最后控制台显示:4、最后,由于在文档开头手动去掉了 master 节点的 Taint,我们可以在安装完 OpenEBS 和 KubeSphere 后,可以将 master 节点 Taint 加上,避免业务相关的工作负载调度到 master 节点抢占 master 资源: kubectl describe node k8s-node1 | grep Taint添加Taint:kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule5、访问测试注意:日志如果显示的是内网地址,也可以直接通过Net网络IP地址地址访问。http://192.168.56.100:30880/dashboard默认账号:admin 密码:P@88w0rd七、定制化安装master节点执行下面命令,修改需要开启的功能为True保存后会自动安装新开启的组件。kubectl edit cm -n kubesphere-system ks-installer同样可以通过命令监听安装情况:kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
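补充一点：上面初始化输出里的 kubeadm join 令牌默认 24 小时过期，如果后续还要加新节点，可以在 master 上重新生成，并顺手确认集群状态。示意命令如下：

```bash
# 在 master 节点重新生成完整的 join 命令（token 默认 24 小时过期）
kubeadm token create --print-join-command
# 新节点加入后，在 master 上确认节点与系统组件状态
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
```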
2022年03月24日 · 497 阅读 · 0 评论 · 17 点赞
2022-03-14
Nginx反向代理微服务配置
Nginx反向代理微服务配置一、本地域名服务访问过程二、正向代理和反向代理三、Nginx+Windows搭建域名访问环境1、修改C:\Windows\System32\drivers\etc下hosts文件,映射本地域名和192.168.56.10ip关系。每次通过记事本修改很麻烦,可以通过SwitchHosts软件修改。192.168.56.10 family.com2、现在就可以通过family.com域名访问了。想要访问部署在服务器上的其它端口应用也可以通过域名访问,比如:搭建的kibana是5601端口,通过域名http://family.com:5601/即可访问了(kibana时部署在56.10服务器上的)四、nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; }Nginx配置文件说明:1、全局块:配置影响nginx全局的指令。如:用户组、nginx进程pid存放路径,日志存放路径,配置文件引入,允许生成wordker process数等。2、events块:配置影响nginx服务器或用户的网络链接。如:每个进程的最大连接数,选取那种事件驱动模型处理链接请求,是否允许同时接收多个网络链接,开启多个网络链接序列化等。3、http块:可以嵌套多个server,配置代理,缓存,日志定义等绝大多数功能和第三方模块的配置。如文件引入,mime-type定义,是否使用sendfile传输文件,链接超时事件,单链接请求数等。 http块里面又包含: 3.1、http全局块:如upstream,错误页面,连接超时等。 3.2、server块: 配置虚拟注解的相关参数,一个http中可以有多个wever。 server块里面又包含: 3.2.1、location:配置请求的路由,以及各种页面的处理情况。 注意: server块里面可以包含多个location。4、include /etc/nginx/conf.d/*.conf;注意:会包含所有include /etc/nginx/conf.d/这个目录下的所有配置文件。比如/etc/nginx/conf.d/默认有个default.conf配置文件,里面就只包含了server块。server { listen 80; -----------nginx监听的端口 server_name localhost; ------------nginx监听的端口 #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; ----访问日志 location / { ---------------- /表示上面监听域名下所有的请求,都可以在root目录下找,包括首页index root /usr/share/nginx/html index index.html index.htm; } #error_page 404 /404.html; ----------404错误页面路径配置 # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; ---------500页面路径配置 location = /50x.html { root /usr/share/nginx/html; }五、反向代理配置配置实例:因为上面修改了host文件映射了域名到192.168.56.10服务器,所有通过服务上nginx访问本地127.0.0.1的springboot项目就可以通过配置反向代理域名访问。将通过nginx访问family.com的所有请求,通过nginx反向代理到本地的springboot项目地址本地springboot项目ip192.168.56.1:。server { listen 80; server_name family.com; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { proxy_pass http://192.168.56.1:40000; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }梳理流程:1、修改本地host文件,目的是模拟类似访问外网的一个域名,如果有服务器域名绑定了ip,就可以不用这步骤了。2、修改host映射family.com ------->192.168.56.10 (服务地址)。3、访问family.com就相当于访问192.168.56.10 (服务地址)。4、将所有family.com(这个域名默认端口为80)的所有请求,通过nginx反向代理到局域网内的其它服务器,比如这里就是本地搭建的springboot项目端口40000。5、通过反向代理配置listen 80; ------监控哪个端口 server_name family.com; ------监听哪个域名 location / { ---- /:表示所有请求 proxy_pass http://192.168.56.1:40000; --将family.com域名80端,反向代理到http://192.168.56.1:40000 
}6、现在family.com:80端口的所有请求就会反向代理到http://192.168.56.1:40000了。反向代理后访问地址:七、nginx代理网关如果我们服务多了,每次都要修改域名,修改nginx配置反向代理到多个服务,这样比较麻烦。那我们就可以通过配置nginx代理到微服务的网关,让网关来负载均衡到各服务。nginx负载均衡参考地址nginx反向代理局域网内网关服务(192.168.56.1:88)。nginx配置如下: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; upstream family { server 192.168.56.1:88; } include /etc/nginx/conf.d/*.conf; }注意修改的地方:upstream family { ----网关取的名字 server 192.168.56.1:88; -----网关服务地址 server srv2.example.com; ----可以配置多个网关地址,我这只有一个网关配一个即可 server srv3.example.com; server srv4.example.com; }修改yanxizhu.nginx配置:server { listen 80; server_name family.com; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { proxy_pass http://family; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }注意修改点:location / { proxy_pass http://family; ---这里就反向代理到family这个网关了,也就是上面取的网关名字 }接着就开始配置getway网关服务请求:注意:nginx代理给网关的时候会丢失host信息。解决办法;yanxizhu.conf添加代理host的header。proxy_set_header Host $proxy_host;family.conf整体如下:server { listen 80; server_name family.com; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { proxy_set_header Host $proxy_host; proxy_pass http://family; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }getway网关服务负载均衡配置: - id: family_host_route uri: lb://family-booking predicates: - Host=**.family.com比如访问http://family.com/时,nginx就会将请求反向代理给网关服务(192.168.56.1:88),而网关服务通过配置的负载均衡,又会将**.family.com域名的请求,负载均衡到family-booking微服务的80端口。以上就是nginx的反向代理配置。注意将getway网关的配置,放在最后,不然全部服务都被代理到family_host_route了。八、静态资源配置将前端静态资源上传到nginx静态资源目录,后端主页面路径加上static路径。server { listen 80; server_name family.com; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location /static/ { root /var/www/html; } location / { proxy_set_header Host $proxy_host; proxy_pass http://family; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 
502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} }说明:location /static/ { root /var/www/html; }后端主页访问静态资源都添加上/static/静态资源目录,当请求static资源路径时,会被nginx处理请求nginx下配置的静态资源。
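在没有改本地 hosts 的机器上，也可以用 curl 指定 Host 头来验证整条反向代理链路。示意命令如下，IP 与域名沿用前文示例，/static/demo.css 只是一个假设存在的静态文件，用于演示静态资源路径：

```bash
# 直接请求 nginx 所在服务器，并指定 Host 头为 family.com，验证反向代理是否生效
curl -I -H "Host: family.com" http://192.168.56.10/
# 验证 /static/ 路径是否由 nginx 直接返回（假设文件已放到 /var/www/html/static/ 下）
curl -I -H "Host: family.com" http://192.168.56.10/static/demo.css
```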
2022年03月14日 · 717 阅读 · 1 评论 · 11 点赞
2022-03-12
Docker安装RabbitMQ
运行 RabbitMQ 容器，第一次运行没有 RabbitMQ 镜像，会自动下载：

docker run -d --name rabbitmq -p 5671:5671 -p 5672:5672 -p 4369:4369 -p 25672:25672 -p 15671:15671 -p 15672:15672 rabbitmq:management

端口说明：
- 4369, 25672：Erlang 发现 & 集群端口
- 5672, 5671：AMQP 端口
- 15672：web 管理后台端口
- 61613, 61614：STOMP 协议端口
- 1883, 8883：MQTT 协议端口

详见 https://www.rabbitmq.com/networking.html

设置随 docker 启动：

docker update --restart=always rabbitmq

访问 RabbitMQ：通过 ip 地址加 15672 端口即可访问，初始账号密码 guest。
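容器起来后，可以用管理插件自带的 HTTP 接口快速确认服务是否就绪。示意命令如下（在宿主机上执行，地址按自己环境替换；若返回未授权，可能是 guest 账号的 loopback 限制，需另建用户）：

```bash
# 通过管理接口查看集群概览，返回 JSON 即表示 RabbitMQ 与管理插件已就绪
curl -u guest:guest http://127.0.0.1:15672/api/overview
# 也可以先确认容器与日志状态
docker ps --filter name=rabbitmq
docker logs --tail 20 rabbitmq
```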
2022年03月12日 · 219 阅读 · 0 评论 · 5 点赞