Chongqing Xiaopan SEO Blog



A Detailed Introduction to Redis Cluster Configuration and Management (with Code)

Date: 2020-09-22 17:00:06  Author: Chongqing SEO Xiaopan  Source:

This article offers a detailed introduction to Redis cluster configuration and management, with code. It should serve as a useful reference for readers who need it, and we hope it helps you.

Redis has supported clustering since version 3.0, and after continual updates and optimization across the intervening releases, the cluster functionality in recent versions is very mature. This article briefly walks through the process of building a Redis cluster and its configuration. The Redis version used is 5.0.4, and the operating system is NeoKylin (whose kernel is essentially the same as CentOS).

1. Redis Cluster Principles

A Redis cluster is a set of processes that shares data across multiple Redis nodes. The nodes together form a decentralized network in which every node has equal standing; each node stores its own data and its own view of the cluster state. Nodes communicate with one another using the Gossip protocol, which keeps node state information in sync.

Cluster data is managed by partitioning: each node stores a subset of the cluster's data. Data is distributed using a scheme called hash slots, which differs from traditional consistent hashing. A Redis cluster has 16384 hash slots, and each key is assigned a slot by taking the CRC16 checksum of the key modulo 16384.
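To make the mapping concrete, here is a small sketch of the key-to-slot calculation. Redis Cluster uses the XMODEM variant of CRC16 (polynomial 0x1021, initial value 0); the function names below are our own, and for simplicity the sketch ignores hash tags (`{...}`), which real Redis honors.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16, XMODEM variant (poly 0x1021, init 0x0000), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Slot a key maps to: CRC16(key) mod 16384. Hash tags are ignored here."""
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("foo"))  # 12182, matching the redirect seen in the test section below
```

The Redis Cluster specification gives CRC16("123456789") = 0x31C3 as a check value, which this implementation reproduces.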

To keep the cluster available when some nodes fail or most nodes cannot communicate, the cluster uses a master-slave replication model. A request for a key is routed to the master that owns the key's hash slot; if that master goes down, one of its slave nodes is promoted to take over as master.

2. Environment Setup

Here we build a Redis cluster with 3 masters and 3 slaves on a single PC.

Create a new directory named rediscluster under /opt/ to hold the cluster node directories.
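The six per-node config files described next are nearly identical, so they can also be generated with a short script. This is our own convenience sketch, not part of the original setup; it writes under /tmp/rediscluster so you can inspect the output before copying it to /opt/rediscluster.

```python
import os

BASE = "/tmp/rediscluster"  # use /opt/rediscluster on the actual server
DIRS = ["server10", "server11", "server20", "server21", "server30", "server31"]

# Same settings as the hand-written config shown below, parameterized by port.
TEMPLATE = """port {port}
daemonize yes
pidfile /var/run/redis_{port}.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file nodes-{port}.conf
"""

for i, name in enumerate(DIRS):
    port = 6379 + i  # ports 6379 through 6384
    node_dir = os.path.join(BASE, name)
    os.makedirs(node_dir, exist_ok=True)
    with open(os.path.join(node_dir, "redis.conf"), "w") as f:
        f.write(TEMPLATE.format(port=port))
```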

Then create six directories, server10, server11, server20, server21, server30, and server31, for the six Redis nodes, which use ports 6379, 6380, 6381, 6382, 6383, and 6384 respectively. Taking server10 as an example, its configuration is:

port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file nodes-6379.conf

For the other nodes, simply change the port and file names in the same pattern. Once all six are configured, start the nodes:

[root@localhost rediscluster]# ./server10/redis-server ./server10/redis.conf &
[root@localhost rediscluster]# ./server11/redis-server ./server11/redis.conf &
[root@localhost rediscluster]# ./server20/redis-server ./server20/redis.conf &
[root@localhost rediscluster]# ./server21/redis-server ./server21/redis.conf &
[root@localhost rediscluster]# ./server30/redis-server ./server30/redis.conf &
[root@localhost rediscluster]# ./server31/redis-server ./server31/redis.conf &

Check that they are running:

[root@localhost rediscluster]# ps -ef | grep redis
root 11842     1  0 15:03 ?  00:00:12 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950     1  0 15:03 ?  00:00:13 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074     1  0 15:04 ?  00:00:13 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181     1  0 15:04 ?  00:00:12 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297     1  0 15:04 ?  00:00:12 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404     1  0 15:04 ?  00:00:12 ./server31/redis-server 127.0.0.1:6384 [cluster]

3. Cluster Configuration

Creating the cluster is very simple:

redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1

Here --cluster-replicas 1 means one slave per master:

[root@localhost rediscluster]# ./server10/redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   replicates efa84a74525749b8ea20585074dda81b852e9c29
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Creation is complete, with masters and slaves paired as follows:

Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381

4. Cluster Testing

Connect through the 6379 client and run a test; the request is redirected to 6381:

[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6379
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:6381
OK
127.0.0.1:6381> get foo
"bar"

Test again from a connection to 6381:

[root@localhost rediscluster]# ./server10/redis-cli -h 127.0.0.1 -c -p 6381
127.0.0.1:6381> get foo
"bar"

The results match, which shows the cluster is configured correctly.
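The redirect above follows directly from the slot allocation printed during cluster creation: "foo" hashes to slot 12182, which falls in the range 10923-16383 owned by 6381. A small sketch (helper names are our own) of the lookup a cluster-aware client performs:

```python
# Master slot ranges as allocated by `redis-cli --cluster create` above.
SLOT_RANGES = [
    (0, 5460, "127.0.0.1:6379"),
    (5461, 10922, "127.0.0.1:6380"),
    (10923, 16383, "127.0.0.1:6381"),
]

def owner_of(slot: int) -> str:
    """Return the master node that owns a given hash slot."""
    for low, high, node in SLOT_RANGES:
        if low <= slot <= high:
            return node
    raise ValueError(f"slot {slot} out of range")

# "foo" maps to slot 12182, so the client is redirected to 6381.
print(owner_of(12182))  # 127.0.0.1:6381
```

Real cluster clients fetch this slot map from the cluster (CLUSTER SLOTS) and refresh it when they receive a MOVED redirect.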

5. Adding Cluster Nodes

Create two more directories under rediscluster, server40 and server41, and configure two new Redis nodes on ports 6385 and 6386. Node 6385 will be the new master and 6386 its slave. Start the two nodes:

[root@localhost server41]# ps -ef | grep redis
root 11842     1  0 15:03 ?  00:00:18 ./server10/redis-server 127.0.0.1:6379 [cluster]
root 11950     1  0 15:03 ?  00:00:19 ./server11/redis-server 127.0.0.1:6380 [cluster]
root 12074     1  0 15:04 ?  00:00:18 ./server20/redis-server 127.0.0.1:6381 [cluster]
root 12181     1  0 15:04 ?  00:00:18 ./server21/redis-server 127.0.0.1:6382 [cluster]
root 12297     1  0 15:04 ?  00:00:17 ./server30/redis-server 127.0.0.1:6383 [cluster]
root 12404     1  0 15:04 ?  00:00:18 ./server31/redis-server 127.0.0.1:6384 [cluster]
root 30563     1  0 18:01 ?  00:00:00 ./redis-server 127.0.0.1:6385 [cluster]
root 30582     1  0 18:02 ?  00:00:00 ./redis-server 127.0.0.1:6386 [cluster]

Add the master node:

[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.

List the nodes:

[root@localhost server41]# ./redis-cli
127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064037664 0 connected
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064036000 1 connected 0-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064038666 3 connected 10923-16383
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064035000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064037000 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064037000 2 connected 5461-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064037000 5 connected

Add the slave node:

[root@localhost server41]# ./redis-cli --cluster add-node 127.0.0.1:6386 127.0.0.1:6379 --cluster-slave --cluster-master-id 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
>>> Adding node 127.0.0.1:6386 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385
   slots: (0 slots) master
M: d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 63e20c75984e493892265ddd2a441c81bcdc575c
S: ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384
   slots: (0 slots) slave
   replicates efa84a74525749b8ea20585074dda81b852e9c29
M: 63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383
   slots: (0 slots) slave
   replicates d9a79ed6204e558b2fcee78ea05218b4de006acd
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6386 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 127.0.0.1:6385.
[OK] New node added correctly.

Once both nodes are added, assign slots (and thus data) to the new master:

[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? 22e8a8e97d6f7cc7d627e577a986384d4d181a4f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

That completes the expansion. The new slot distribution can be inspected with the cluster nodes command:

127.0.0.1:6379> cluster nodes
22e8a8e97d6f7cc7d627e577a986384d4d181a4f 127.0.0.1:6385@16385 master - 0 1555064706000 7 connected 0-332 5461-5794 10923-11255
efa84a74525749b8ea20585074dda81b852e9c29 127.0.0.1:6379@16379 myself,master - 0 1555064707000 1 connected 333-5460
d9a79ed6204e558b2fcee78ea05218b4de006acd 127.0.0.1:6381@16381 master - 0 1555064705000 3 connected 11256-16383
7c24e205301b38caa1ff3cd8b270a1ceb7249a2e 127.0.0.1:6386@16386 slave 22e8a8e97d6f7cc7d627e577a986384d4d181a4f 0 1555064705000 7 connected
0469ec03b43e27dc2b7b4eb24de34e10969e3adf 127.0.0.1:6382@16382 slave 63e20c75984e493892265ddd2a441c81bcdc575c 0 1555064707000 4 connected
ddebc3ca467d15c7d25125e4e16bcc5576a13699 127.0.0.1:6384@16384 slave efa84a74525749b8ea20585074dda81b852e9c29 0 1555064707236 6 connected
63e20c75984e493892265ddd2a441c81bcdc575c 127.0.0.1:6380@16380 master - 0 1555064706000 2 connected 5795-10922
fd8ea61503e7c9b6e950894c0da41aed3ee19e7e 127.0.0.1:6383@16383 slave d9a79ed6204e558b2fcee78ea05218b4de006acd 0 1555064708238 5 connected

6. Removing Cluster Nodes
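Before scaling back down, it is worth sanity-checking the reshard result above: the new master ended up with three slot ranges taken roughly evenly from the three original masters, and they should add up to exactly the 1000 slots requested. A quick check, using the ranges from the cluster nodes output:

```python
# Slot ranges owned by the new master 127.0.0.1:6385 after resharding,
# taken from the `cluster nodes` output above.
moved = [(0, 332), (5461, 5794), (10923, 11255)]

# Ranges are inclusive on both ends, hence the +1.
sizes = [high - low + 1 for low, high in moved]
print(sizes)       # [333, 334, 333] -- roughly a third from each original master
print(sum(sizes))  # 1000 -- exactly the number of slots requested
```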

When shrinking the cluster, remove the slave node first:

[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6386 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e
>>> Removing node 7c24e205301b38caa1ff3cd8b270a1ceb7249a2e from cluster 127.0.0.1:6386
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Then move the master's slots back to the remaining masters:

[root@localhost server41]# ./redis-cli --cluster reshard 127.0.0.1:6385
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? efa84a74525749b8ea20585074dda81b852e9c29    (the node that will receive the slots)
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 22e8a8e97d6f7cc7d627e577a986384d4d181a4f    (the master being removed)
Source node #2: done

Finally, once all of its slots have been moved away, remove the now-empty master with the same del-node command:

[root@localhost server41]# ./redis-cli --cluster del-node 127.0.0.1:6385 22e8a8e97d6f7cc7d627e577a986384d4d181a4f

The scale-down is complete.