Opening Flower's monitoring UI, I noticed that the "succeeded tasks" chart on the Monitor page stayed empty. Checking the logs turned up the following warnings:

[2016-02-24 12:00:21,799: WARNING/MainProcess] celery@localhost.localdomain ready.
[2016-02-24 12:00:22,297: WARNING/MainProcess] Substantial drift from celery@centos7-181 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.297798 recv: 2016-02-24 11:59:12.438481]
[2016-02-24 12:00:22,300: WARNING/MainProcess] Substantial drift from celery@centos7-xiaoqiao may mean clocks are out of sync. Current drift is 764 seconds. [orig: 2016-02-24 12:00:22.300171 recv: 2016-02-24 11:47:38.863792]
[2016-02-24 12:00:22,302: WARNING/MainProcess] Substantial drift from celery@centos7-186 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.302378 recv: 2016-02-24 11:59:17.157844]
[2016-02-24 12:00:22,303: WARNING/MainProcess] Substantial drift from celery@centos7-182 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.303616 recv: 2016-02-24 11:59:12.633749]
[2016-02-24 12:00:22,306: WARNING/MainProcess] Substantial drift from celery@centos7-189 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.306499 recv: 2016-02-24 11:59:18.100351]
[2016-02-24 12:00:22,307: WARNING/MainProcess] Substantial drift from celery@centos7-188 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.307066 recv: 2016-02-24 11:59:17.197479]
[2016-02-24 12:00:22,310: WARNING/MainProcess] Substantial drift from celery@centos7-184 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.309941 recv: 2016-02-24 11:59:18.490792]
[2016-02-24 12:00:22,524: WARNING/MainProcess] Substantial drift from celery@centos7-183 may mean clocks are out of sync. Current drift is 64 seconds. [orig: 2016-02-24 12:00:22.524618 recv: 2016-02-24 11:59:18.005931]
[2016-02-24 12:00:22,525: WARNING/MainProcess] Substantial drift from celery@centos7-187 may mean clocks are out of sync. Current drift is 70 seconds. [orig: 2016-02-24 12:00:22.525161 recv: 2016-02-24 11:59:12.294425]
[2016-02-24 12:00:22,525: WARNING/MainProcess] Substantial drift from celery@centos7-185 may mean clocks are out of sync. Current drift is 65 seconds. [orig: 2016-02-24 12:00:22.525489 recv: 2016-02-24 11:59:17.309409]

Taken literally, the warnings point to unsynchronized clocks. But USE_TZ = True and the matching timezone options were already set in settings, so it took some painful digging to find the real cause: each machine's system clock was simply different! I won't get into why machines in the same timezone drift apart; the fix is just to synchronize the clocks on every machine:

yum install ntpdate -y
ntpdate 0.asia.pool.ntp.org

Many NTP servers listed online are dead; these still work:

0.asia.pool.ntp.org
1.asia.pool.ntp.org
2.asia.pool.ntp.org
3.asia.pool.ntp.org

It's also best to set up a scheduled job to resync periodically.
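A minimal crontab sketch (assuming ntpdate is installed as above, using the pool host from earlier):

# edit root's crontab with `crontab -e` and add:
0 * * * * /usr/sbin/ntpdate 0.asia.pool.ntp.org >/dev/null 2>&1    # resync every hour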


Installing the Metasploit Framework on CentOS 7

Published in Centos

Metasploit needs no introduction: Kali ships with it, and ready-made installers exist for Windows and Mac. For whatever reason I've never warmed to apt-based distros, so here are the steps for installing the Metasploit Framework on CentOS 7; in theory they apply to any yum-based system.
First, run:

curl https://raw.githubusercontent.com/rapid7/metasploit-omnibus/master/config/templates/metasploit-framework-wrappers/msfupdate.erb > msfinstall
chmod 755 msfinstall
./msfinstall

Then install PostgreSQL:

yum install postgresql
yum install postgresql-server

Don't start the service after installing; there's a crucial step first: changing the authentication method. Edit /var/lib/pgsql/data/pg_hba.conf and add the md5 rule shown below, above the default ident rule (note that this file is only created by initdb, so if it doesn't exist yet, run the postgresql-setup initdb step from the next block first):

host    "msf_database"  "msf_user"      127.0.0.1/32    md5
host    all             all             127.0.0.1/32    ident

After saving, initialize the database and start the service. I did all of this as root:

postgresql-setup initdb
systemctl start postgresql.service
su postgres
createuser msf_user -P
Enter password for new role: yourmsfpassword
Enter it again: yourmsfpassword
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
createdb --owner=msf_user msf_database

Then switch back to the root user and run msfconsole:

msf > db_status
[*] postgresql selected, no connection
msf > db_connect msf_user:yourmsfpassword@127.0.0.1:5432/msf_database
NOTICE: CREATE TABLE will create implicit sequence "hosts_id_seq" for serial column "hosts.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "hosts_pkey" for table "hosts"
[..]
NOTICE: CREATE TABLE will create implicit sequence "mod_refs_id_seq" for serial column "mod_refs.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "mod_refs_pkey" for table "mod_refs"
msf > db_status
[*] postgresql connected to msf_database

After that you can play away happily. One more thing to watch out for: if you install with the method above and later run msfupdate, msfconsole will report "command not found" once the update finishes. For whatever reason the symlinks under /usr/bin get removed. The installer above puts everything under /opt/metasploit-framework/, with all the commands in /opt/metasploit-framework/bin, so you can recreate the links yourself.
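A minimal sketch of recreating the links (assuming the default /opt/metasploit-framework/bin install path):

# link every bundled binary back into /usr/bin
for f in /opt/metasploit-framework/bin/*; do
    ln -sf "$f" /usr/bin/"$(basename "$f")"
done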

Also, if reconnecting to the database every time you launch msfconsole feels tedious, you can set up an alias:

alias msfconsole='msfconsole -d db_connect -y /opt/framework/config/database.yml'

Adjust the path to match your setup.

Alternatively, drop database.yml into the ~/.msf4 directory.
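A minimal database.yml sketch matching the user and database created above (standard ActiveRecord-style fields; treat the values as placeholders for your own):

production:
  adapter: postgresql
  database: msf_database
  username: msf_user
  password: yourmsfpassword
  host: 127.0.0.1
  port: 5432
  pool: 75
  timeout: 5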


Upgrading to Docker 1.6 on CentOS 6

Published in Centos

Lately I've been plagued by deployment differences caused by Python versions, Django versions, and differing systems. I'd touched Docker before but never found time to study it properly, so this was a good opportunity to learn how to use it.


Setting up an online web proxy on CentOS 6

Published in Centos

I recently needed to set up an online web proxy. Too lazy to write one myself, I found two PHP programs:

  1. knProxy:https://github.com/jabbany/knProxy
  2. glype:https://www.glype.com/download.php

Both are easy to deploy and similar in functionality. I'll use the second one as the example here, deployed with nginx. First install nginx and php-fpm:

yum install nginx
yum install php-fpm
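Next, fetch glype and unpack it into the web root. A sketch, assuming you've already downloaded an archive from the site above (the filename is a placeholder; use whatever version you grabbed):

mkdir -p /var/www/html
# glype.com requires registering to download; adjust the filename to your copy
unzip glype-*.zip -d /var/www/html/glype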

With glype extracted to /var/www/html/glype, create a config file in /etc/nginx/conf.d/ as follows:

server {
    listen 8000;
    server_name 192.168.2.46;
    access_log /var/log/nginx-proxy-access.log;
    error_log /var/log/nginx-proxy-error.log;
    charset utf-8;
    default_type text/html;
    root /var/www/html/glype;

    location / {
        index index.php;
    }

    location ~* \.php$ {
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

And of course, open the firewall as your situation requires (see the sketch after the commands below). Then start php-fpm and nginx:

service php-fpm start
service nginx start
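For the firewall piece, a minimal iptables sketch (assuming CentOS 6 iptables and the port 8000 configured above):

iptables -I INPUT -p tcp --dport 8000 -j ACCEPT
service iptables save    # persist across reboots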

And that's a simple online web proxy. If the server happens to be overseas... you know what that's good for.


Add a new HA node at 192.168.0.155 with virtual IP 192.168.0.160; see above for the other IPs.

Install keepalived: yum install keepalived

After installing on both 151 and 155, edit /etc/keepalived/keepalived.conf. I use 151 as the master node and 155 as the backup.

Configuration on 151:

! Configuration File for keepalived
global_defs {
    notification_email {
        test@xxx.net.cn
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
###########
vrrp_script chk_ha {
    script "/opt/chk_ha.sh"
    interval 2
    weight 2
}
###########
vrrp_instance VI_1 {
    state MASTER                   # master node
    interface eth0                 # adjust for your NIC
    virtual_router_id 51
    mcast_src_ip 192.168.0.151     # this machine's IP
    priority 100                   # must be higher than the backup's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_ha
    }
    virtual_ipaddress {
        192.168.0.160              # the virtual IP
    }
}

Configuration on 155:

! Configuration File for keepalived
global_defs {
    notification_email {
        test@xxx.net.cn
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
###########
vrrp_script chk_ha {
    script "/opt/chk_ha.sh"
    interval 2
    weight 2
}
###########
vrrp_instance VI_1 {
    state BACKUP                   # backup node
    interface eth0                 # adjust for your NIC
    virtual_router_id 51
    mcast_src_ip 192.168.0.155     # this machine's IP
    priority 95                    # must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_ha
    }
    virtual_ipaddress {
        192.168.0.160              # the virtual IP
    }
}

Here chk_ha.sh is a script that checks whether HAProxy is alive:

#!/bin/bash
# if haproxy is not running, try to start it
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /etc/init.d/haproxy start
fi
sleep 2
# if it still isn't running, stop keepalived so the VIP fails over to the backup
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /etc/init.d/keepalived stop
fi

After making the script executable, add the virtual IP on both 151 and 155: ifconfig eth0:0 192.168.0.160 netmask 255.255.255.0 up

The result:

[root@test5 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr AE:A9:9C:02:C3:28
inet addr:192.168.0.160 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:17

The HAProxy configuration on 155 is exactly the same as 151's, except that the listen status address should be changed to the virtual IP: 192.168.0.160:8080.

Start the keepalived service:

[root@test5 ~]# service keepalived start
Starting keepalived: [ OK ]

Because the chk_ha script exists, the HAProxy service is started automatically.

At this point, even if the HA node on 151 fails, 155 automatically takes over its work.
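A quick way to verify the failover (a sketch; run each part on the node indicated):

# on 151: simulate failure by stopping keepalived (chk_ha would otherwise restart haproxy)
/etc/init.d/keepalived stop

# on 155: within a few seconds the VIP should be answering here
ip addr show eth0 | grep 192.168.0.160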

References:

  1. http://blog.chinaunix.net/uid-25266990-id-3989321.html
  2. http://www.cnblogs.com/dkblog/archive/2011/07/06/2098949.html
  3. http://blog.chinaunix.net/uid-25267728-id-3874670.html


The company recently needed data synchronization across geographically separate machine rooms. MySQL natively supports only master-master (dual-master) replication, not true multi-master, so I had to look elsewhere and found Percona XtraDB Cluster. You can think of it as MySQL with a patch applied so that it supports multi-master synchronization.

Test environment: CentOS 6.5

IP assignments:

  1. 192.168.0.154 (DB)
  2. 192.168.0.152 (DB)
  3. 192.168.0.153 (DB)
  4. 192.168.0.151 (HA)

First, install the Percona XtraDB Cluster repository:

yum install http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm

Then:

yum install Percona-XtraDB-Cluster-56

After installation, edit /etc/my.cnf:

[mysqld]
server_id=4
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL: contains the IPs of all nodes
wsrep_cluster_address=gcomm://192.168.0.152,192.168.0.154,192.168.0.153
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node address: this machine's IP
wsrep_node_address=192.168.0.154
# SST method: how nodes sync with each other
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method: credentials used for inter-node sync
wsrep_sst_auth="root:asdasd"
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log

Note: change server_id and wsrep_node_address on each node.

Then on 192.168.0.152, run /etc/init.d/mysql bootstrap-pxc

Some articles online say you need to set wsrep_cluster_address=gcomm:// (empty); that's no longer necessary in recent versions, since the command above is what bootstraps the cluster. The result:

[root@localhost ~]# /etc/init.d/mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)Starting MySQL (Percona XtraDB Cluster).. SUCCESS!

Then set the username and password used for synchronization to match the config file. Since this is a test environment, I lazily just used root:

mysqladmin -u root password asdasd

After entering the mysql shell, you can view the current status:

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | c2883338-834d-11e2-0800-03c9c68e41ec |
...
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
...
| wsrep_incoming_addresses | 192.168.0.152:3306 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
...
| wsrep_ready | ON |
+----------------------------+--------------------------------------+

Once the first node has started successfully, start the remaining nodes. Note that this time the command is:

/etc/init.d/mysql start

Normally startup finishes quickly. If instead it runs for a very long time and ends like this:

Shutting down MySQL (Percona XtraDB Cluster)..... SUCCESS!
Starting MySQL (Percona XtraDB Cluster)............................................................ ERROR!
ERROR! MySQL (Percona XtraDB Cluster) server startup failed!
ERROR! Failed to restart server.

but the log contains no related errors, then check whether SELinux is disabled and whether firewall ports 4444 and 4567 are open!!! (I forgot about the firewall and agonized over this for ages; a sketch for opening the ports follows.)
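A minimal sketch for opening the cluster ports with iptables on CentOS 6 (Galera uses 4567 for group communication and 4444 for SST, plus 3306 for MySQL itself; depending on version, 4568 for IST may also be needed):

iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4567,4568 -j ACCEPT
service iptables save
setenforce 0    # disables SELinux until reboot; edit /etc/selinux/config to make it permanent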

A successful start looks like this:

[root@test4 ~]# service mysql start
Starting MySQL (Percona XtraDB Cluster)...... SUCCESS!
[root@test4 ~]# netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 963/sshd
tcp 0 0 0.0.0.0:4567 0.0.0.0:* LISTEN 5184/mysqld
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 192.168.0.154:22 192.168.0.37:38500 ESTABLISHED 1080/sshd
tcp 0 0 192.168.0.154:4567 192.168.0.153:53681 ESTABLISHED 5184/mysqld
tcp 0 0 192.168.0.154:59348 192.168.0.152:4567 ESTABLISHED 5184/mysqld

As you can see, port 4567 is listening too. At this point, a database operation on any one machine is automatically replicated to the other two.
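A quick replication smoke test (a sketch; the demo database and table names are made up):

# on 192.168.0.154
mysql -uroot -pasdasd -e "CREATE DATABASE demo"
mysql -uroot -pasdasd -e "CREATE TABLE demo.t (id INT PRIMARY KEY) ENGINE=InnoDB"
mysql -uroot -pasdasd -e "INSERT INTO demo.t VALUES (1)"

# on 192.168.0.153: the row should already be there
mysql -uroot -pasdasd -e "SELECT * FROM demo.t"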

Next I manually shut down 152, then inserted data on 154; 153 synced it, and 152 was automatically dropped from the cluster:

| wsrep_incoming_addresses     | 192.168.0.154:3306,192.168.0.153:3306 |

After starting 152 again, the data synced back automatically as well.

If you're extremely unlucky and every node in the cluster goes down, then after repairs you must run the bootstrap-pxc command on the node that died last; that way you rescue as much data as possible.

If a restart then errors out with:

[root@test3 ~]# service mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Stale sst_in_progress file in datadir
Starting MySQL (Percona XtraDB Cluster)State transfer in progress, setting sleep higher
.. ERROR! The server quit without updating PID file (/var/lib/mysql/test3.pid).
ERROR! MySQL (Percona XtraDB Cluster) server startup failed!

just delete /var/lock/subsys/mysql. One more thing to note: tables must use the InnoDB engine rather than MyISAM, otherwise you'll see table structures replicate while the data does not.

Install HAProxy: yum install haproxy

Edit the configuration file at /etc/haproxy/haproxy.cfg:

#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log         127.0.0.1 local0
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode    http
    log     global
    option  tcplog
    option  dontlognull
    # option http-server-close
    # option forwardfor except 127.0.0.0/8
    option  redispatch
    retries 3
    maxconn 2000
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    # timeout http-keep-alive 10s
    timeout check   10s

listen mysql-cluster 0.0.0.0:3306
    mode tcp
    balance roundrobin
    server node1 192.168.0.152:3306 check
    server node2 192.168.0.153:3306 check
    server node3 192.168.0.154:3306 check

listen status 192.168.0.151:8080
    stats enable
    stats uri /status
    stats auth admin:admin
    stats realm (haproxy\ statistic)

After starting the service, visit 192.168.0.151:8080/status and log in to see the dashboard. External clients simply use 192.168.0.151:3306 to reach the database.

---- Updated 2015-01-20 ----

The HAProxy setup above monitors only at layer 4. In other words, if port 3306 is somehow open while MySQL itself isn't actually serving, HAProxy can't detect it. To simulate this, I stopped MySQL on one node and listened on port 3306 with nc, which fooled HAProxy completely. To handle this case, we need application-layer health checks.

First install xinetd on each DB node: yum install -y xinetd

Then edit /etc/services and add:

mysqlchk        9200/tcp
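The matching xinetd service definition normally ships with the Percona packages as /etc/xinetd.d/mysqlchk; if yours is missing, a sketch along these lines should do:

service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}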

Then edit /usr/bin/clustercheck, changing:

MYSQL_USERNAME="${1-root}"
MYSQL_PASSWORD="${2-asdasd}"

Here I lazily used root again; adjust to your situation. After saving, start the xinetd service:

[root@test4 ~]# /etc/init.d/xinetd start
Starting xinetd: [ OK ]

Port 9200 should now be listening; run the check command:

[root@test5 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.

Next, change the configuration on the HA node to:

listen mysql-cluster 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option httpchk
    server node1 192.168.0.152:3306 check port 9200 inter 12000 rise 3 fall 3
    server node2 192.168.0.153:3306 check port 9200 inter 12000 rise 3 fall 3
    server node3 192.168.0.154:3306 check port 9200 inter 12000 rise 3 fall 3

Save and restart HAProxy, and if you're using keepalived (see above) make the same change in the backup HA node's configuration. Monitoring now happens at the application layer.


Deploying Ganglia on CentOS

Published in Centos

The Ganglia + RRDtool combo makes a solid cluster-monitoring stack, and installing it on CentOS 6 is very simple. On the server side:

yum install rrdtool-devel
yum install ganglia-gmetad
yum install ganglia-web

The configuration file, at /etc/ganglia/gmetad.conf, barely needs any changes.

All that needs changing is adding your clients, for example:

data_source "proxy" 192.168.2.28:8749 192.168.2.36:8749 192.168.2.37:8749
data_source "dc" 192.168.2.45 192.168.2.44 192.168.2.43
data_source "v5" 192.168.2.32:8750 192.168.2.33:8750 192.168.2.62:8750 192.168.2.63:8750

If no port is given, the client's default port 8649 is used.

Client installation is just yum install ganglia-gmond; its config file is /etc/ganglia/gmond.conf. Note that if you want machines grouped, set the cluster name and port values: machines in the same group must share both values, and different groups must use different ones. The cluster name must match the corresponding data_source entry in gmetad.conf, and the port must be set identically in all three of udp_send_channel, udp_recv_channel, and tcp_accept_channel; take care the ports don't clash. A sketch follows.
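A minimal gmond.conf sketch for a node in the "proxy" group above (assuming unicast to the collector at 192.168.2.28; adjust hosts and ports to your network):

cluster {
  name = "proxy"        # must match the data_source name in gmetad.conf
}
udp_send_channel {
  host = 192.168.2.28   # where to send metrics
  port = 8749           # the same port in all three channel sections
}
udp_recv_channel {
  port = 8749
}
tcp_accept_channel {
  port = 8749
}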

Finally, start the services on both server and clients, then browse to the server's URL (e.g. http://127.0.0.1/ganglia/) to see the results.

The one problem I hit was that memory stats couldn't be read from certain clients; rebooting the client machine fixed it. Cause unknown.


Blocking an IP with iptables

Published in Centos

Logged into the server today and, well, look at this:

tcp        0      0 198.74.121.150:80           38.103.160.12:33873         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:45654         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:49337         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:35410         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:53982         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:55487         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:38964         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:39560         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:51861         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:60211         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:38490         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:48588         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:51625         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:47497         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:40164         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:42071         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:49687         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:59726         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:52097         TIME_WAIT   -
tcp        0      0 198.74.121.150:80           38.103.160.12:46378         TIME_WAIT   -

This is definitely not normal. To tally the state of each connection, you can use:

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

The result:

TIME_WAIT 10968

Wonderful. A quick look shows the IP is American, and it has even made the blocklist:

Date +-1 Min +0100:	Host:	                    Service:	    On Server:	        to:	Status:
15.11.2014 15:00:54 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
15.11.2014 12:00:51 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
14.11.2014 21:07:00 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
14.11.2014 18:05:50 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot 1 x blocked
12.11.2014 06:04:33 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
12.11.2014 03:07:11 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
12.11.2014 00:10:14 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
11.11.2014 21:10:53 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
11.11.2014 18:09:53 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
11.11.2014 15:05:37 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot blocked
11.11.2014 12:08:18 cpanel2.ospdx.com bruteforcelogin hacked-joomla/brobot 1 x blocked

Nothing more to discuss; just block the IP:

iptables -I INPUT -s ***.***.***.*** -j DROP

To unblock an IP, change -I to -D.
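Note that rules added this way vanish on reboot. On CentOS 6 you can persist them:

service iptables save    # writes the live rules to /etc/sysconfig/iptables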

Finally, you can view the current rule list with:

iptables --list


Installing DenyHosts on CentOS

Published in Centos

I don't know which deity I've offended lately, but someone keeps brute-forcing this site's SSH login password. Is it really that hard to just blog in peace... Oh well, let's install DenyHosts.

On CentOS it installs straight from yum: yum install denyhosts.noarch

You can also download and build it yourself, of course. Once installed, the configuration file is /etc/denyhosts.conf; the parameters are explained below:

SECURE_LOG = /var/log/secure                # SSH log file
HOSTS_DENY = /etc/hosts.deny                # write blocked IPs to hosts.deny
PURGE_DENY = 30m                            # how long until blocked IPs are purged
BLOCK_SERVICE = sshd                        # service to block
DENY_THRESHOLD_INVALID = 5                  # failed logins allowed for invalid users (not listed in /etc/passwd)
DENY_THRESHOLD_VALID = 10                   # failed logins allowed for normal users
DENY_THRESHOLD_ROOT = 1                     # failed logins allowed for root
DENY_THRESHOLD_RESTRICTED = 1               # failed logins allowed for restricted usernames
WORK_DIR = /usr/share/denyhosts/data        # directory where denied hosts/IPs are recorded
SUSPICIOUS_LOGIN_REPORT_ALLOWED_HOSTS = YES
HOSTNAME_LOOKUP = YES                       # do reverse DNS lookups
LOCK_FILE = /var/lock/subsys/denyhosts      # PID lock file, ensures clean startup and prevents multiple instances
ADMIN_EMAIL = a@b.com                       # administrator email address
SMTP_HOST = localhost
SMTP_PORT = 25
SMTP_FROM = DenyHosts nobody@localhost
SMTP_SUBJECT = DenyHosts Report
AGE_RESET_VALID = 5d
AGE_RESET_ROOT = 25d
AGE_RESET_RESTRICTED = 25d
AGE_RESET_INVALID = 10d
DAEMON_LOG = /var/log/denyhosts             # DenyHosts' own log file
DAEMON_SLEEP = 30s
DAEMON_PURGE = 1h                           # purge interval for hosts.deny entries; keep consistent with PURGE_DENY

When configuration is done, start the service and enable it at boot:

service denyhosts start
chkconfig denyhosts on

Then look at /etc/hosts.deny; it already has entries:

[root@miss_yi ~]# tail /etc/hosts.deny
# DenyHosts: Mon Oct 20 17:19:38 2014 | ALL: 74.221.172.28
ALL: 74.221.172.28
# DenyHosts: Mon Oct 20 17:19:38 2014 | ALL: 117.21.173.175
ALL: 117.21.173.175
# DenyHosts: Mon Oct 20 17:19:38 2014 | ALL: 112.78.3.196
ALL: 112.78.3.196
# DenyHosts: Mon Oct 20 17:19:38 2014 | ALL: 181.143.230.74
ALL: 181.143.230.74
# DenyHosts: Mon Oct 20 17:19:38 2014 | ALL: 216.151.221.194
ALL: 216.151.221.194
[root@miss_yi ~]# wc /etc/hosts.deny
771 4628 30294 /etc/hosts.deny
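One practical tip (worth double-checking for your version): since DenyHosts acts through TCP wrappers, you can whitelist your own address in /etc/hosts.allow so you never lock yourself out. A sketch with a hypothetical admin IP:

sshd: 192.168.1.100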


Setting up a DNS server on CentOS

Published in Centos

Install bind:

yum install bind

Edit /etc/named.conf and add the zone definitions:

zone "xxx.com" IN {
type master;
file "xxx.com.zone";
};
zone "2.168.192.in-addr.arpa" IN {
type master;
file "2.168.192.zone";
};

Note: also change listen-on port and allow-query in the options block; both default to localhost, and for testing you can set them to any, as sketched below.
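The relevant lines in the options block would look roughly like this (a sketch; keep the rest of the default block as-is):

options {
    listen-on port 53 { any; };
    allow-query     { any; };
    // ...remaining defaults unchanged
};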

Create the xxx.com.zone and 2.168.192.zone files in the corresponding directory:

xxx.com.zone

$TTL 1D
@       IN SOA  xxx.com. root (
                        20140929 ; serial
                        1D       ; refresh
                        1H       ; retry
                        1W       ; expire
                        3H )     ; minimum
@       IN NS   ns1.xxx.com.
ns1     IN A    192.168.2.26
www     IN A    192.168.2.26

2.168.192.zone

$TTL 1D
@       IN SOA  xxx.com. root (
                        20140929 ; serial
                        1D       ; refresh
                        1H       ; retry
                        1W       ; expire
                        3H )     ; minimum
@       IN NS   ns1.xxx.com.
26      IN PTR  ns1.xxx.com.
26      IN PTR  www.xxx.com.

After saving, remember to fix the file permissions! Otherwise /var/log/messages will report permission-denied errors.

Then adjust the firewall configuration and SELinux.

Start the service: service named start
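It's also worth validating the config and zone files (a sketch, assuming the zone files live in the default /var/named):

named-checkconf /etc/named.conf
named-checkzone xxx.com /var/named/xxx.com.zone
named-checkzone 2.168.192.in-addr.arpa /var/named/2.168.192.zone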

Verify forward resolution:

root@xsy:~# host www.xxx.com
www.xxx.com has address 192.168.2.26
root@xsy:~# nslookup www.xxx.com
Server: 192.168.2.222
Address: 192.168.2.222#53
Name: www.xxx.com
Address: 192.168.2.26

Reverse resolution:

root@xsy:~# nslookup 192.168.2.26
Server: 192.168.2.222
Address: 192.168.2.222#53
26.2.168.192.in-addr.arpa name = ns1.xxx.com.
26.2.168.192.in-addr.arpa name = www.xxx.com.

If the files check out and the service starts without errors, but clients still just show "connect time out", the cause is probably one of the following:

  1. Wrong zone file path.
  2. Wrong zone file permissions.
  3. Firewall and SELinux settings.
  4. The listen-on port and allow-query settings in options.

For causes 1 and 2 the log gives clear output, for example:
[root@localhost named]# tail -f /var/log/messages
Sep 28 17:15:04 localhost named[13020]: command channel listening on ::1#953
Sep 28 17:15:04 localhost named[13020]: zone 0.in-addr.arpa/IN: loaded serial 0
Sep 28 17:15:04 localhost named[13020]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
Sep 28 17:15:04 localhost named[13020]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
Sep 28 17:15:04 localhost named[13020]: zone xxx.com/IN: loading from master file xxx.com.zone failed: permission denied



Roy

WeChat official account: hi-roy


A programmer in the wild


China