[2016-02-24 12:00:21,799: WARNING/MainProcess] celery@localhost.localdomain ready.
[2016-02-24 12:00:22,297: WARNING/MainProcess] Substantial drift from celery@centos7-181 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.297798 recv: 2016-02-24 11:59:12.438481]
[2016-02-24 12:00:22,300: WARNING/MainProcess] Substantial drift from celery@centos7-xiaoqiao may mean clocks are out of sync. Current drift is 764 seconds. [orig: 2016-02-24 12:00:22.300171 recv: 2016-02-24 11:47:38.863792]
[2016-02-24 12:00:22,302: WARNING/MainProcess] Substantial drift from celery@centos7-186 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.302378 recv: 2016-02-24 11:59:17.157844]
[2016-02-24 12:00:22,303: WARNING/MainProcess] Substantial drift from celery@centos7-182 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.303616 recv: 2016-02-24 11:59:12.633749]
[2016-02-24 12:00:22,306: WARNING/MainProcess] Substantial drift from celery@centos7-189 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.306499 recv: 2016-02-24 11:59:18.100351]
[2016-02-24 12:00:22,307: WARNING/MainProcess] Substantial drift from celery@centos7-188 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.307066 recv: 2016-02-24 11:59:17.197479]
[2016-02-24 12:00:22,310: WARNING/MainProcess] Substantial drift from celery@centos7-184 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.309941 recv: 2016-02-24 11:59:18.490792]
[2016-02-24 12:00:22,524: WARNING/MainProcess] Substantial drift from celery@centos7-183 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.524618 recv: 2016-02-24 11:59:18.005931]
[2016-02-24 12:00:22,525: WARNING/MainProcess] Substantial drift from celery@centos7-187 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.525161 recv: 2016-02-24 11:59:12.294425]
[2016-02-24 12:00:22,525: WARNING/MainProcess] Substantial drift from celery@centos7-185 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.525489 recv: 2016-02-24 11:59:17.309409]
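The drift each worker reports is simply the difference between the local timestamp (orig) and the timestamp carried in the received event (recv); syncing all hosts with ntpd or chrony makes the warnings go away. As a sanity check, the 70-second figure for centos7-181 can be reproduced with GNU date (a sketch; assumes GNU coreutils):

```shell
# Reproduce the reported drift: orig minus recv, in whole seconds.
orig=$(date -d "2016-02-24 12:00:22" +%s)   # local timestamp (orig)
recv=$(date -d "2016-02-24 11:59:12" +%s)   # remote timestamp (recv)
drift=$(( orig - recv ))
echo "$drift"   # 70
```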
host    "msf_database"  "msf_user"  127.0.0.1/32    md5
host    all             all         127.0.0.1/32    ident
After saving, initialize the database and start the service. Here I am running directly as root:
postgresql-setup initdb
systemctl start postgresql.service
su postgres
createuser msf_user -P
Enter password for new role: yourmsfpassword
Enter it again: yourmsfpassword
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
createdb --owner=msf_user msf_database
Then switch back to the root user and run msfconsole:
msf > db_status
[*] postgresql selected, no connection
msf > db_connect msf_user:yourmsfpassword@127.0.0.1:5432/msf_database
NOTICE: CREATE TABLE will create implicit sequence "hosts_id_seq" for serial column "hosts.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "hosts_pkey" for table "hosts"
[..]
NOTICE: CREATE TABLE will create implicit sequence "mod_refs_id_seq" for serial column "mod_refs.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "mod_refs_pkey" for table "mod_refs"
msf > db_status
[*] postgresql connected to msf_database
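db_connect takes its target in user:password@host:port/dbname form. A small sketch assembling that string from the values created above (the variable names are illustrative, not part of Metasploit):

```shell
# Build the db_connect argument from its parts.
db_user=msf_user
db_pass=yourmsfpassword
db_host=127.0.0.1
db_port=5432
db_name=msf_database
conn="${db_user}:${db_pass}@${db_host}:${db_port}/${db_name}"
echo "db_connect $conn"   # db_connect msf_user:yourmsfpassword@127.0.0.1:5432/msf_database
```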
[mysqld]
server_id=4
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3 (the IPs of all nodes)
wsrep_cluster_address=gcomm://192.168.0.152,192.168.0.154,192.168.0.153
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address (this node's own IP)
wsrep_node_address=192.168.0.154
# SST method (how the nodes synchronize state)
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method (account used for inter-node data sync)
wsrep_sst_auth="root:asdasd"
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_incoming_addresses   | 192.168.0.152:3306                   |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
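When scripting against this output, individual wsrep variables can be pulled out with awk. A minimal sketch run against a hard-coded sample of the table above (on a live node you would pipe `mysql -e "show status like 'wsrep%'"` in instead):

```shell
# Extract the value column for wsrep_ready from table-style output.
sample='| wsrep_cluster_size        | 1                  |
| wsrep_ready               | ON                 |'
ready=$(printf '%s\n' "$sample" | awk -F'|' '/wsrep_ready/ { gsub(/ /, "", $3); print $3 }')
echo "$ready"   # ON
```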
Once the first node has started successfully, start the remaining nodes. Note that the command at this point is:
/etc/init.d/mysql start
Normally startup completes quickly. If it instead hangs for a long time and then prints the following:
Shutting down MySQL (Percona XtraDB Cluster)..... SUCCESS!
Starting MySQL (Percona XtraDB Cluster)......................................... ERROR!
ERROR! MySQL (Percona XtraDB Cluster) server startup failed!
ERROR! Failed to restart server.
[root@test3 ~]# service mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Stale sst_in_progress file in datadir
Starting MySQL (Percona XtraDB Cluster)State transfer in progress, setting sleep higher
.. ERROR! The server quit without updating PID file (/var/lib/mysql/test3.pid).
ERROR! MySQL (Percona XtraDB Cluster) server startup failed!
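The usual way out of this state is to remove the stale lock file and the leftover sst_in_progress marker before retrying the start. The sketch below exercises the same two rm calls against a scratch directory rather than the real /var/lock/subsys/mysql and datadir paths:

```shell
# Scratch directory standing in for the real locations; on an actual node
# the targets are /var/lock/subsys/mysql and sst_in_progress in the datadir.
demo=$(mktemp -d)
touch "$demo/mysql" "$demo/sst_in_progress"

rm -f "$demo/mysql"             # stale subsys lock file
rm -f "$demo/sst_in_progress"   # leftover SST marker

ls -A "$demo"   # prints nothing: both stale files are gone
# ...then retry:  service mysql start
```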
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local0
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        http
    log         global
    option      tcplog
    option      dontlognull
#   option      http-server-close
#   option      forwardfor except 127.0.0.0/8
    option      redispatch
    retries     3
    maxconn     2000
    timeout connect 5s
    timeout client  50s
    timeout server  50s
#   timeout http-keep-alive 10s
    timeout check   10s

listen mysql-cluster 0.0.0.0:3306
    mode tcp
    balance roundrobin
    server node1 192.168.0.152:3306 check
    server node2 192.168.0.153:3306 check
    server node3 192.168.0.154:3306 check

listen status 192.168.0.151:8080
    stats enable
    stats uri /status
    stats auth admin:admin
    stats realm (haproxy\ statistic)
[root@test4 ~]# /etc/init.d/xinetd start
Starting xinetd:                                           [  OK  ]
At this point port 9200 should already be listening. Run the check command:
[root@test5 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
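clustercheck answers 200 when the node's wsrep_local_state is 4 (Synced) and 503 otherwise, which is what lets HAProxy's httpchk tell a healthy node from one that is still transferring state. A simplified sketch of that decision (the real script has further options, such as treating a donor node as available):

```shell
# Map a wsrep_local_state value to the HTTP status line clustercheck
# would answer with; 4 means Synced.
node_status() {
    if [ "$1" -eq 4 ]; then
        echo "HTTP/1.1 200 OK"
    else
        echo "HTTP/1.1 503 Service Unavailable"
    fi
}

node_status 4   # HTTP/1.1 200 OK
node_status 2   # HTTP/1.1 503 Service Unavailable
```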
Next, edit the configuration file on the HAProxy node, changing the listen section to:
listen mysql-cluster 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option httpchk
    server node1 192.168.0.152:3306 check port 9200 inter 12000 rise 3 fall 3
    server node2 192.168.0.153:3306 check port 9200 inter 12000 rise 3 fall 3
    server node3 192.168.0.154:3306 check port 9200 inter 12000 rise 3 fall 3
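With inter 12000 rise 3 fall 3, HAProxy probes each node every 12 seconds and needs 3 consecutive failed checks to mark it down (and 3 successful ones to bring it back), so a dead node can keep receiving connections for up to roughly 36 seconds. The arithmetic:

```shell
inter_ms=12000   # probe interval from the config, in milliseconds
fall=3           # consecutive failures before the node is marked down
down_after_s=$(( inter_ms * fall / 1000 ))
echo "${down_after_s}s"   # 36s
```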
zone "xxx.com" IN {
        type master;
        file "xxx.com.zone";
};
zone "2.168.192.in-addr.arpa" IN {
        type master;
        file "2.168.192.zone";
};
Note: in the options block, adjust listen-on port and allow-query. Both default to localhost; for testing they can be changed to any.
Create the xxx.com.zone and 2.168.192.zone files in the corresponding directory:
xxx.com.zone
$TTL 1D
@       IN SOA  xxx.com. root (
                        20140929        ; serial
                        1D              ; refresh
                        1H              ; retry
                        1W              ; expire
                        3H )            ; minimum
@       IN NS   ns1.xxx.com.
ns1     IN A    192.168.2.26
www     IN A    192.168.2.26
2.168.192.zone
$TTL 1D
@       IN SOA  xxx.com. root (
                        20140929        ; serial
                        1D              ; refresh
                        1H              ; retry
                        1W              ; expire
                        3H )            ; minimum
@       IN NS   ns1.xxx.com.
26      IN PTR  ns1.xxx.com.
26      IN PTR  www.xxx.com.
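In the reverse zone, the zone name 2.168.192.in-addr.arpa and the owner 26 combine into the full reverse name for 192.168.2.26: the octets of the address are simply reversed. A sketch of that mapping (assumes bash for the here-string):

```shell
# Turn an IPv4 address into its in-addr.arpa reverse-lookup name.
ip=192.168.2.26
IFS=. read -r o1 o2 o3 o4 <<< "$ip"
ptr_name="${o4}.${o3}.${o2}.${o1}.in-addr.arpa"
echo "$ptr_name"   # 26.2.168.192.in-addr.arpa
```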