Found 4 posts matching the search.
2023-08-13
Kafka Internal/External Network Isolation: Configuration, Installation, and Usage
Environment and versions

Environment: CentOS 7. Versions: JDK 1.8, Zookeeper 3.4.14, Kafka 2.12-1.0.2.

JDK installation

Install JDK 1.8:

```
rpm -ivh jdk-8u261-linux-x64.rpm
```

Configure environment variables (vim /etc/profile, append at the end):

```
export JAVA_HOME=/usr/java/jdk1.8.0_261-amd64
export PATH=$PATH:$JAVA_HOME/bin
```

Verify the JDK:

```
java -version
```

Zookeeper installation

Install:

```
tar -zxf zookeeper-3.4.14.tar.gz -C /opt
```

Configure environment variables (vim /etc/profile):

```
# ZOOKEEPER_PREFIX points to the Zookeeper extraction directory
export ZOOKEEPER_PREFIX=/opt/zookeeper-3.4.14
# Add Zookeeper's bin directory to PATH
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
# ZOO_LOG_DIR tells Zookeeper where to store its logs
export ZOO_LOG_DIR=/var/zookeeper/log
```

Apply the configuration:

```
source /etc/profile
```

Modify the Zookeeper configuration:

```
cd conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
```

Change where Zookeeper stores its data, from

```
dataDir=/tmp/zookeeper
```

to

```
dataDir=/var/zookeeper/data
```

Start Zookeeper and check its status (the bin directory is on PATH now):

```
zkServer.sh start
zkServer.sh status
```

Kafka installation

Install Kafka:

```
tar -zxf kafka_2.12-1.0.2.tgz -C /opt
```

Modify the environment variables (vi /etc/profile, append at the end), then apply:

```
export KAFKA_HOME=/opt/kafka_2.12-1.0.2
export PATH=$PATH:$KAFKA_HOME/bin
```

```
source /etc/profile
```

Modify the Kafka configuration

Edit the server.properties configuration file:

```
vi server.properties
```

Change the Zookeeper connection address (line 123) from

```
zookeeper.connect=localhost:2181
```

to

```
zookeeper.connect=localhost:2181/mykafka
```

This makes Kafka create a mykafka node under the Zookeeper root.

Change the message persistence directory (line 60) from

```
log.dirs=/tmp/kafka-logs
```

to

```
log.dirs=/var/kafka/kafka-logs
```

and create that directory:

```
mkdir -p /var/kafka/kafka-logs
```

Scripts

The scripts live in /opt/kafka_2.12-1.0.2/bin:

kafka-topics.sh: topic management
kafka-server-start.sh: start Kafka
kafka-server-stop.sh: stop Kafka
kafka-console-consumer.sh: console consumer
kafka-console-producer.sh: console producer

Start Kafka (note: run this from inside the bin directory):

```
kafka-server-start.sh ../config/server.properties
```

Client test

Log into Zookeeper with the Zookeeper client by running zkCli.sh in a duplicate of the server startup window. (Note: you must duplicate the server startup window to run this; otherwise the script is not available.)

```
zkCli.sh
```

Look at the Zookeeper root node, then at mykafka:

```
ls /
ls /mykafka
```

The children of /mykafka are:

cluster: cluster information
controller: the controller
controller_epoch: controller epoch data
brokers: the brokers
admin: administration
isr_change_notification: in-sync replica change notifications
consumers: consumers
log_dir_event_notification: log dir event notifications
latest_producer_id_block: the latest producer id block
config: configuration

Exit the Zookeeper client with quit, then stop Kafka:

```
kafka-server-stop.sh
```
Topics

Start Kafka in the background and check its process:

```
kafka-server-start.sh -daemon /opt/kafka_2.12-1.0.2/config/server.properties
ps aux | grep kafka
```

For usage help on the topic script, run it with no arguments; it prints the parameters below:

```
[root@default-dev bin]# kafka-topics.sh
Create, delete, describe, or change a topic.
Option                          Description
------                          -----------
--alter                         Alter the number of partitions, replica
                                assignment, and/or configuration for the
                                topic.
--config <String: name=value>   A topic configuration override for the topic
                                being created or altered. Valid
                                configurations: cleanup.policy,
                                compression.type, delete.retention.ms,
                                file.delete.delay.ms, flush.messages,
                                flush.ms,
                                follower.replication.throttled.replicas,
                                index.interval.bytes,
                                leader.replication.throttled.replicas,
                                max.message.bytes, message.format.version,
                                message.timestamp.difference.max.ms,
                                message.timestamp.type,
                                min.cleanable.dirty.ratio,
                                min.compaction.lag.ms, min.insync.replicas,
                                preallocate, retention.bytes, retention.ms,
                                segment.bytes, segment.index.bytes,
                                segment.jitter.ms, segment.ms,
                                unclean.leader.election.enable.
                                See the Kafka documentation for full details
                                on the topic configs.
--create                        Create a new topic.
--delete                        Delete a topic.
--delete-config <String: name>  A topic configuration override to be removed
                                for an existing topic (see the list of
                                configurations under the --config option).
--describe                      List details for the given topics.
--disable-rack-aware            Disable rack aware replica assignment.
--force                         Suppress console prompts.
--help                          Print usage information.
--if-exists                     When altering or deleting topics, only act
                                if the topic exists.
--if-not-exists                 When creating topics, only act if the topic
                                does not already exist.
--list                          List all available topics.
--partitions <Integer>          The number of partitions for the topic being
                                created or altered (WARNING: if partitions
                                are increased for a topic that has a key,
                                the partition logic or ordering of the
                                messages will be affected).
--replica-assignment <String>   A list of manual partition-to-broker
                                assignments for the topic being created or
                                altered, in the form
                                broker_id_for_part1_replica1:broker_id_for_part1_replica2,
                                broker_id_for_part2_replica1:broker_id_for_part2_replica2, ...
--replication-factor <Integer>  The replication factor for each partition in
                                the topic being created.
--topic <String: topic>         The topic to be created, altered or
                                described. Can also accept a regular
                                expression, except with the --create option.
--topics-with-overrides         When describing topics, only show topics
                                that have overridden configs.
--unavailable-partitions        When describing topics, only show partitions
                                whose leader is not available.
--under-replicated-partitions   When describing topics, only show
                                under-replicated partitions.
--zookeeper <String: urls>      REQUIRED: The connection string for the
                                zookeeper connection, in the form host:port.
                                Multiple URLs can be given to allow
                                fail-over.
```

Note: REQUIRED marks mandatory parameters, such as the --zookeeper connection address above.

List all available topics:

```
kafka-topics.sh --zookeeper localhost:2181/mykafka --list
```

Create a topic:

```
kafka-topics.sh --zookeeper localhost/mykafka --create --topic topic_1 --partitions 1 --replication-factor 1
```

Notes: the zookeeper port can be omitted, in which case the default is used. --topic names the topic to create. --partitions sets how many partitions to create, for horizontal scaling. --replication-factor sets how many replicas each partition gets, for high availability. Note that with a single server and a single broker, replication is meaningless: if the server goes down, the data is gone too. The replicas must live on different Kafka servers for high availability to hold.

Check the available topics again:

```
kafka-topics.sh --zookeeper localhost:2181/mykafka --list
```

Show a topic's details:

```
kafka-topics.sh --zookeeper localhost/mykafka --describe --topic topic_1
```

This shows that topic_1 has a single partition 0, living on broker 0.

Example: create a topic_2 with 5 partitions and 1 replica per partition, then list and describe it:

```
kafka-topics.sh --zookeeper localhost/mykafka --create --topic topic_2 --partitions 5 --replication-factor 1
kafka-topics.sh --zookeeper localhost:2181/mykafka --list
kafka-topics.sh --zookeeper localhost/mykafka --describe --topic topic_2
```

All 5 partitions are on broker 0.
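Topic management can also be done programmatically. Below is a minimal sketch using the Kafka AdminClient from the standard kafka-clients library (the bootstrap address and topic mirror the CLI examples above; this is an illustration, not part of the original walkthrough):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // topic_2: 5 partitions, replication factor 1, as in the CLI example
            NewTopic topic = new NewTopic("topic_2", 5, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();

            // Equivalent of kafka-topics.sh --list
            System.out.println("topics: " + admin.listTopics().names().get());
        }
    }
}
```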
Consuming

For usage help on the consumer script, run it with no arguments; it prints the parameters below:

```
The console consumer is a tool that reads data from Kafka and outputs it to
standard output.
Option                                   Description
------                                   -----------
--blacklist <String: blacklist>          Blacklist of topics to exclude from
                                         consumption.
--bootstrap-server <String: server>      REQUIRED (unless old consumer is
                                         used): The server to connect to.
--consumer-property <String: prop>       A mechanism to pass user-defined
                                         properties in the form key=value to
                                         the consumer.
--consumer.config <String: config file>  Consumer config properties file. Note
                                         that [consumer-property] takes
                                         precedence over this config.
--csv-reporter-enabled                   If set, the CSV metrics reporter will
                                         be enabled.
--delete-consumer-offsets                If specified, the consumer path in
                                         zookeeper is deleted when starting up.
--enable-systest-events                  Log lifecycle events of the consumer
                                         in addition to logging consumed
                                         messages. (Specific to system tests.)
--formatter <String: class>              The name of a class to use for
                                         formatting kafka messages for display.
                                         (default: kafka.tools.DefaultMessageFormatter)
--from-beginning                         If the consumer does not already have
                                         an established offset to consume from,
                                         start with the earliest message
                                         present in the log rather than the
                                         latest message.
--group <String: consumer group id>      The consumer group id of the consumer.
--isolation-level <String>               Set to read_committed to filter out
                                         transactional messages which are not
                                         committed; set to read_uncommitted to
                                         read all messages.
                                         (default: read_uncommitted)
--key-deserializer <String>              Deserializer for keys.
--max-messages <Integer: num_messages>   The maximum number of messages to
                                         consume before exiting. If not set,
                                         consumption is continual.
--metrics-dir <String: metrics dir>      If csv-reporter-enabled is set and
                                         this parameter is set, the CSV metrics
                                         will be output here.
--new-consumer                           Use the new consumer implementation.
                                         This is the default, so this option is
                                         deprecated and will be removed in a
                                         future release.
--offset <String: consume offset>        The offset id to consume from (a
                                         non-negative number), or 'earliest'
                                         meaning from the beginning, or
                                         'latest' meaning from the end.
                                         (default: latest)
--partition <Integer: partition>         The partition to consume from.
                                         Consumption starts from the end of the
                                         partition unless '--offset' is
                                         specified.
--property <String: prop>                The properties to initialize the
                                         message formatter.
--skip-message-on-error                  If there is an error when processing a
                                         message, skip it instead of halting.
--timeout-ms <Integer: timeout_ms>       If specified, exit if no message is
                                         available for consumption for the
                                         specified interval.
--topic <String: topic>                  The topic id to consume on.
--value-deserializer <String>            Deserializer for values.
--whitelist <String: whitelist>          Whitelist of topics to include for
                                         consumption.
--zookeeper <String: urls>               REQUIRED (only when using the old
                                         consumer): The connection string for
                                         the zookeeper connection, in the form
                                         host:port. Multiple URLs can be given
                                         to allow fail-over.
```

REQUIRED marks mandatory parameters. The new consumer connects to the broker via --bootstrap-server; --zookeeper is required only when using the old consumer.

Connect a consumer to the Kafka server. With multiple Kafka servers, connecting to any one of them is enough. Note that the consumer now talks to port 9092:

```
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic_1
```

--bootstrap-server localhost:9092 specifies the Kafka server address and port, and --topic topic_1 specifies the topic to consume. The console then looks stuck; that is fine, it is waiting for messages.

The Consumer API takes the equivalent parameters as configuration; see the sketch below.
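For reference, a minimal sketch of the same consumption through the Java Consumer API (standard kafka-clients; the group id here is illustrative):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group"); // illustrative group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic_1"));
            while (true) {
                // poll(long) is the 1.0.x signature; newer clients use poll(Duration)
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```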
Producing

For usage help on the producer script, run it with no arguments; it prints the parameters below:

```
Read data from standard input and publish it to Kafka.
Option                                   Description
------                                   -----------
--batch-size <Integer: size>             Number of messages to send in a
                                         single batch if they are not being
                                         sent synchronously. (default: 200)
--broker-list <String: broker-list>      REQUIRED: The broker list string, in
                                         the form HOST1:PORT1,HOST2:PORT2.
--compression-codec [String]             The compression codec: 'none',
                                         'gzip', 'snappy', or 'lz4'. If
                                         specified without value, defaults to
                                         'gzip'.
--key-serializer <String>                The class name of the message encoder
                                         implementation used to serialize
                                         keys.
                                         (default: kafka.serializer.DefaultEncoder)
--line-reader <String: reader_class>     The class used for reading lines from
                                         standard in. By default each line is
                                         read as a separate message.
                                         (default: kafka.tools.ConsoleProducer$LineMessageReader)
--max-block-ms <Long>                    The max time that the producer will
                                         block during a send request.
                                         (default: 60000)
--max-memory-bytes <Long>                The total memory used by the producer
                                         to buffer records waiting to be sent
                                         to the server. (default: 33554432)
--max-partition-memory-bytes <Long>      The buffer size allocated for a
                                         partition. Records smaller than this
                                         size are optimistically grouped
                                         together until this size is reached.
                                         (default: 16384)
--message-send-max-retries <Integer>     Brokers can fail to receive a message
                                         for multiple reasons; being
                                         transiently unavailable is just one
                                         of them. This is the number of
                                         retries before the producer gives up
                                         and drops the message. (default: 3)
--metadata-expiry-ms <Long>              The period in milliseconds after
                                         which a metadata refresh is forced,
                                         even without any leadership changes.
                                         (default: 300000)
--old-producer                           Use the old producer implementation.
--producer-property <String>             A mechanism to pass user-defined
                                         key=value properties to the producer.
--producer.config <String: config file>  Producer config properties file.
                                         [producer-property] takes precedence
                                         over this config.
--property <String: prop>                User-defined key=value properties
                                         passed to the message reader,
                                         allowing custom configuration of a
                                         user-defined message reader.
--queue-enqueuetimeout-ms <Integer>      Timeout for event enqueue.
                                         (default: 2147483647)
--queue-size <Integer: queue_size>       In asynchronous mode, the maximum
                                         number of messages that will queue
                                         awaiting sufficient batch size.
                                         (default: 10000)
--request-required-acks <String>         The required acks of the producer
                                         requests. (default: 1)
--request-timeout-ms <Integer>           The ack timeout of the producer
                                         requests; must be non-negative and
                                         non-zero. (default: 1500)
--retry-backoff-ms <Integer>             Before each retry the producer
                                         refreshes the metadata of relevant
                                         topics. Since leader election takes a
                                         bit of time, this is how long the
                                         producer waits before refreshing.
                                         (default: 100)
--socket-buffer-size <Integer: size>     The size of the TCP RECV buffer.
                                         (default: 102400)
--sync                                   Send message requests to the brokers
                                         synchronously, one at a time as they
                                         arrive.
--timeout <Integer: timeout_ms>          In asynchronous mode, the maximum
                                         time in ms a message will queue
                                         awaiting sufficient batch size.
                                         (default: 1000)
--topic <String: topic>                  REQUIRED: The topic id to produce
                                         messages to.
--value-serializer <String>              The class name of the message encoder
                                         implementation used to serialize
                                         values.
                                         (default: kafka.serializer.DefaultEncoder)
```

Note: REQUIRED marks mandatory parameters.

Connect a producer to the Kafka service:

```
kafka-console-producer.sh --broker-list localhost:9092 --topic topic_1
```

--broker-list specifies the brokers; with many Kafka servers, listing two addresses is enough, and since there is only one server here, one is given. --topic specifies which topic to send messages to. The producer window now also looks stuck, which means it has entered the message-sending prompt.

The Producer API takes the equivalent parameters as configuration; see the sketch below.
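And the producer side through the Java Producer API, again as a minimal sketch (standard kafka-clients; key and value are illustrative):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "1"); // same default as --request-required-acks
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Asynchronous send with a completion callback
            producer.send(new ProducerRecord<>("topic_1", "key-1", "hello kafka"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("sent: partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        } // close() flushes any buffered records
    }
}
```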
Message send/receive test

Note: if the producer prints a warning like the following, check whether the topic name is wrong:

```
WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 8
```

List the available topics and confirm the consumer and producer are using the same topic:

```
kafka-topics.sh --zookeeper localhost:2181/mykafka --list
```

Test observation: if the consumer is shut down while the producer keeps sending, then after the consumer reconnects it only receives the messages the producer sent after the reconnection.

Consuming historical messages

To consume earlier messages, pass --from-beginning:

```
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic_1 --from-beginning
```

Inspecting the persisted data

```
cd /var/kafka/kafka-logs
```

There you can see many files named after offsets.

Server-side parameters

These are the main settings in the broker's server.properties configuration file.

zookeeper.connect

Configures the Zookeeper (cluster) address Kafka connects to. The value is a string of comma-separated Zookeeper addresses. A single Zookeeper address has the host:port form, and a chroot path for Kafka's root node inside Zookeeper may be appended at the end, for example:

```
zookeeper.connect=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181,192.168.1.4:2181/mykafka
```

Preferably list more than half of the ensemble's servers; the /mykafka path only needs to be written once, at the end.

listeners

Specifies the address(es) and port(s) on which the current broker publishes its service. Used together with advertised.listeners for internal/external network isolation. Note that the port numbers can also be changed.

Internal/external isolation configuration

listeners configures the URLs the broker listens on together with listener names, as a comma-separated list. Suppose, for example, the server has two IPs; the configuration is as below. Note that listener names must be unique and ports must differ. PLAINTEXT stands for the PLAINTEXT protocol but also doubles as the listener name, and since names cannot repeat, the mapping parameter listener.security.protocol.map is used. Even with listeners configured, startup still fails until the whole set below is in place (note that Kafka is using the PLAINTEXT protocol):

```
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
listeners=INTERNAL://10.0.2.15:9000,EXTERNAL://192.168.56.30:9001
advertised.listeners=EXTERNAL://192.168.56.30:9001
inter.broker.listener.name=EXTERNAL
```

listener.security.protocol.map: resolves the clash between listener names and protocol names.
listeners: the addresses the broker listens on.
advertised.listeners: the address and port exposed to consumers and producers (this address is published to Zookeeper for clients to use, which is needed when the client-facing address differs from listeners). The other endpoint, INTERNAL://10.0.2.15:9000, can then be used to manage Kafka internally.
inter.broker.listener.name: the name of the listener the brokers use to talk to each other; it must be one of the names defined in listeners.

listener.security.protocol.map

Maps listener names to security protocols, which is what makes the internal/external isolation configurable. For example, to keep internal and external traffic separate even when both use SSL, the naming clash described above is solved with:

```
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
```

INTERNAL and EXTERNAL are listener names; SSL means both use the SSL protocol. With this parameter in place, the listener-name/protocol conflict is resolved. Notes: each listener name may appear only once in the map; if a listener name is not itself a security protocol name, listener.security.protocol.map must be configured; and every listener must use a different network port.

Checking the Zookeeper information

Connect with the client script and inspect the broker registration:

```
zkCli.sh
get /mykafka/brokers/ids/0
```

List the available topics:

```
kafka-topics.sh --zookeeper localhost:2181/mykafka --list
```

broker.id

Uniquely identifies a Kafka broker; the value is an arbitrary integer. It matters a great deal when Kafka is deployed as a distributed cluster. It is best derived only from the broker's physical host: for a host named host1.yanxizhu.com, use broker.id=1; for a host at 192.168.56.30, use broker.id=30.

log.dir

Specifies the directories where Kafka keeps its log segments on disk. The value is a group of comma-separated local filesystem paths. If multiple paths are specified, the broker follows a "least used" rule and keeps all log segments of one partition under the same path. Note that the broker adds new partitions to the path currently holding the fewest partitions, not to the path with the most free disk space.
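Pulling these parameters together, a minimal server.properties sketch for the isolated setup described in this article (the broker.id value follows the host-naming convention above, and all addresses are the example IPs used earlier; adjust them to your hosts):

```
broker.id=30
listeners=INTERNAL://10.0.2.15:9000,EXTERNAL://192.168.56.30:9001
advertised.listeners=EXTERNAL://192.168.56.30:9001
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=EXTERNAL
log.dirs=/var/kafka/kafka-logs
zookeeper.connect=localhost:2181/mykafka
```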
2023-08-13 · 248 reads · 0 comments · 1 like
2022-03-08
Distributed Lock Solutions
What are the solutions for distributed locks?

1. Redis distributed locks (many large companies build their own extensions on top of Redis): SETNX key value, or Redisson.
2. ZooKeeper based: ephemeral sequential nodes.
3. Database based, e.g. MySQL: relying on the uniqueness of a primary key or unique index.

A sketch of the Redis variant follows below.
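As an illustration of the Redis approach, a minimal sketch using the Jedis client (assuming Jedis 3.x; the key name and TTL are illustrative). SET with NX and EX acquires the lock atomically, and a small Lua script releases it only if the caller still owns it:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

import java.util.Collections;
import java.util.UUID;

public class RedisLock {
    // Acquire: SET key token NX EX ttl, an atomic set-if-absent with expiry.
    // Returns the owner token on success, or null if the lock is already held.
    public static String tryLock(Jedis jedis, String lockKey, int ttlSeconds) {
        String token = UUID.randomUUID().toString();
        String reply = jedis.set(lockKey, token, SetParams.setParams().nx().ex(ttlSeconds));
        return "OK".equals(reply) ? token : null;
    }

    // Release: delete the key only if it still holds our token
    // (the Lua script keeps the check and the delete atomic).
    public static boolean unlock(Jedis jedis, String lockKey, String token) {
        String lua = "if redis.call('get', KEYS[1]) == ARGV[1] "
                   + "then return redis.call('del', KEYS[1]) else return 0 end";
        Object reply = jedis.eval(lua, Collections.singletonList(lockKey),
                Collections.singletonList(token));
        return Long.valueOf(1L).equals(reply);
    }
}
```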
2022-03-08 · 154 reads · 0 comments · 3 likes
2022-03-08
How a ZooKeeper-Based Distributed Lock Works
What is the principle behind a ZooKeeper-based distributed lock?

Sequential node property: using ZooKeeper's sequential nodes, if we create 3 nodes under the /lock/ directory, the ZK cluster creates them in the order the requests arrive, e.g. /lock/0000000001, /lock/0000000002, /lock/0000000003. The trailing number increases each time, and the node name is assigned by ZK itself.

Ephemeral node property: ZK also has a node type called the ephemeral node, created by a client and deleted automatically when that client disconnects from the ZK cluster. EPHEMERAL_SEQUENTIAL is the ephemeral sequential node, combining both properties.

Whether a node exists in ZK can therefore serve as the lock state, which gives a distributed lock with the following basic logic:

1. Client 1 calls create() to create an ephemeral sequential node named "/{businessId}/lock-".
2. Client 1 calls getChildren("/{businessId}") to fetch all child nodes created so far.
3. With all child paths in hand, if the node created in step 1 has the smallest sequence number of them all, i.e. its sequence number ranks first, the client is considered to hold the lock; no other client is ahead of it.
4. If its node does not have the smallest sequence number, the client watches the node whose sequence number is the largest of those smaller than its own, and waits. When the watched node changes, it fetches the children again and re-checks whether it now holds the lock.

Apache Curator packages exactly this recipe; see the sketch below.
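Curator's InterProcessMutex implements this recipe: an EPHEMERAL_SEQUENTIAL node plus a watch on the next-smallest sibling. A minimal sketch, assuming a local ZK at port 2181 and an illustrative lock path:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.TimeUnit;

public class ZkLockDemo {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Creates an ephemeral sequential node under /locks/demo and
        // waits on the predecessor node, as described above.
        InterProcessMutex lock = new InterProcessMutex(client, "/locks/demo");
        if (lock.acquire(5, TimeUnit.SECONDS)) {
            try {
                System.out.println("lock acquired, doing critical work");
            } finally {
                lock.release(); // deletes the node, waking the next waiter
            }
        }
        client.close();
    }
}
```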
2022-03-08 · 136 reads · 0 comments · 0 likes
2022-03-08
ZooKeeper vs. Redis for Distributed Locks
What is the difference between ZooKeeper and Redis when used for distributed locks?

Redis:
1. Redis only guarantees eventual consistency; replication between replicas is asynchronous (SET is a write and GET a read, and Redis clusters commonly use a read-write-splitting architecture with master-replica lag). After a master-replica failover, some data may not have been replicated yet, so the lock can be lost. Businesses requiring strong consistency should therefore avoid Redis and prefer ZK.
2. Redis cluster operations have the lowest response times of the two. Response time rises noticeably as concurrency and business volume grow (public-network clusters are affected more), but peak QPS is the highest, with essentially no errors.

ZooKeeper:
1. With a ZooKeeper cluster, the lock is built on ephemeral sequential nodes, whose lifetime ends when the client's session with the cluster ends. So if a client has network problems, disconnects from the ZK cluster and its session times out, the lock is likewise erroneously released (and may be wrongly held by another thread); hence ZooKeeper cannot guarantee complete correctness either.
2. ZK is quite stable: response-time jitter is small and no errors occur, but response time and QPS degrade noticeably as concurrency and business volume grow.

Summary:
1. Zookeeper has to create, and afterwards release, nodes for every lock operation, which costs a lot of time.
2. Redis performs only simple data operations and does not have this overhead.
2022-03-08 · 240 reads · 0 comments · 0 likes