
Deploying ELK on CentOS 6.5

1. Introduction

ELK is a real-time log analysis platform. Its main purpose is to give developers and operations staff real-time log analysis, so they can better understand the state of the system and problems in the code.

2. The E in ELK (Elasticsearch):

(2.1) First install the dependency; the official documentation calls for Java 1.8:

yum -y install java-1.8.0-openjdk

Install Elasticsearch:

tar zvxf elasticsearch-1.7.0.tar.gz

mv elasticsearch-1.7.0 /usr/local/elasticsearch

cd /usr/local/elasticsearch/config

cp elasticsearch.yml elasticsearch.yml.bak

vim elasticsearch.yml (edit the following settings)

cluster.name: elasticsearch

node.name: syk

node.master: true

node.data: true

index.number_of_shards: 5

index.number_of_replicas: 1 (number of replicas per shard)

path.data: /usr/local/elasticsearch/data

path.conf: /usr/local/elasticsearch/conf

path.work: /usr/local/elasticsearch/work

path.plugins: /usr/local/elasticsearch/plugins

path.logs: /usr/local/elasticsearch/logs

bootstrap.mlockall: true (lock the process memory to keep it from swapping)
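A side note (an addition, not part of the original steps): mlockall only takes effect if the process is allowed to lock memory. On CentOS 6 this usually means raising the memlock limit before starting Elasticsearch, roughly:

ulimit -l unlimited    # in the shell that starts Elasticsearch

(or persistently via a line such as "elasticsearch - memlock unlimited" in /etc/security/limits.conf)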

Start it: /usr/local/elasticsearch/bin/elasticsearch -d

Check with netstat -tlnp:

You should see Java processes listening on ports 9200 (HTTP) and 9300 (transport).

curl http://localhost:9200

This returns:

{

"status" : 200,

"name" : "syk",

"cluster_name" : "elasticsearch",

"version" : {

"number" : "1.7.0",

"build_hash" : "929b9739cae115e73c346cb5f9a6f24ba735a743",

"build_timestamp" : "2015-07-16T14:31:07Z",

"build_snapshot" : false,

"lucene_version" : "4.10.4"

},

"tagline" : "You Know, for Search"

}

(2.2) Use the servicewrapper startup script provided by the project:

Upload elasticsearch-servicewrapper-master.zip to the server with rz:

unzip elasticsearch-servicewrapper-master.zip

mv elasticsearch-servicewrapper-master/service/ /usr/local/elasticsearch/bin/

cd /usr/local/elasticsearch/bin/service

./elasticsearch install (automatically creates a service script under /etc/init.d)

/etc/init.d/elasticsearch restart

curl -XGET 'http://localhost:9200/_count?pretty' -d '

> {

> "query":{

> "match_all":{}

> }

> }

> '

It returns:

{

"count" : 0,

"_shards" : {

"total" : 0,

"successful" : 0,

"failed" : 0

}

}

(2.3) A REST-API-based interface (supports create, read, update, and delete)

Install the plugin: /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest (downloads and installs automatically)

Access it in a browser (typically http://<es-server-ip>:9200/_plugin/marvel).
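To try the create/read/update/delete operations mentioned above directly against the REST API, here is a minimal sketch (the index name "test" and the document below are made up for illustration):

curl -XPUT 'http://localhost:9200/test/doc/1' -d '{"msg": "hello"}'    # create (or update) document 1

curl -XGET 'http://localhost:9200/test/doc/1?pretty'    # read it back

curl -XDELETE 'http://localhost:9200/test/doc/1'    # delete it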

Install the cluster management plugin (head):

/usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head

Or install it manually:

unzip elasticsearch-head-master.zip

mv elasticsearch-head-master plugins/head

Access it in a browser (typically http://<es-server-ip>:9200/_plugin/head).

The web page shows your shards and shard replicas.
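Besides the head web page, the same shard and replica information can be checked from the command line, for example:

curl 'http://localhost:9200/_cluster/health?pretty'    # active and unassigned shard counts

curl 'http://localhost:9200/_cat/shards?v'    # one line per shard and replica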

3. The L in ELK (Logstash):

(3.1) Install Logstash:

i) The official yum installation method:

1. rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2. vim /etc/yum.repos.d/logstash.repo

Add the following:

[logstash-2.3]

name=Logstash repository for 2.3.x packages

baseurl=https://packages.elastic.co/logstash/2.3/centos

gpgcheck=1

gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch

enabled=1

3. yum --enablerepo=logstash-2.3 -y install logstash

ii) Or install from the tar package:

tar zvxf logstash-1.5.3.tar.gz

mv logstash-1.5.3 /usr/local/logstash

(3.2) Test

/usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{codec => rubydebug} }'

Type hehe; the output looks like:

Logstash startup completed

hehe

{

"message" => "hehe",

"@version" => "1",

"@timestamp" => "2016-08-07T17:46:10.836Z",

"host" => "web10.syk.com"

}

This means Logstash is working correctly.
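The same check can also be run non-interactively by piping a line into Logstash, for example:

echo hehe | /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{codec => rubydebug} }'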

(3.3) Write a Logstash configuration file

Note:

The config must contain both an input{} block and an output{} block.

Syntax: values are assigned with the => operator.

vim /etc/logstash.conf

input{

file {

path => "/var/log/syk.log"

}

}

output{

file {

path => "/tmp/%{+YYYY-MM-dd}.syk.gz"

gzip => true

}

}

Start Logstash: /usr/local/logstash/bin/logstash -f /etc/logstash.conf

cd /var/log

cat maillog >> syk.log (append the mail log to syk.log)

A compressed file named after the date, ending in .syk.gz, now appears under /tmp.
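To confirm what was written, the archive can be decompressed on the fly, for example:

zcat /tmp/*.syk.gz | tail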

(3.4) Ship Logstash output to Redis:

yum -y install redis (Redis runs on a separate server)

vim /etc/redis.conf (edit)

bind 192.168.137.52
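After restarting Redis with the new bind address, it is worth confirming it is reachable from the Logstash host, for example:

service redis restart

redis-cli -h 192.168.137.52 -p 6379 ping    # should return PONG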

Install Logstash on the 192.168.137.52 server as well.

Write the configuration file:

vim /etc/logstash.conf

input{

file {

path => "/var/log/syk.log"

}

}

output{

redis {

data_type => "list"

key => "system-messages"

host => "192.168.137.52"

port => "6379"

db => "1"

}

}

Start Logstash on the .52 server:

/usr/local/logstash/bin/logstash -f /etc/logstash.conf

cd /var/log

cat maillog >> syk.log (append the mail log to syk.log)

Go into Redis and check:

redis-cli -h 192.168.137.52 -p 6379

select 1

keys * (you should see the system-messages key)

llen system-messages (shows the length of the system-messages list)

(3.5) Forward the logs collected by Logstash to Elasticsearch

Write a Logstash configuration file on the 192.168.137.50 server:

vim /etc/logstash.conf

input {

redis {

data_type => "list"

key => "system-messages"

host => "192.168.137.52"

port => "6379"

db => "1"

}

}

output {

elasticsearch {

host => "192.168.137.50"

protocol => "http"

index => "system-messages-%{+YYYY.MM.dd}"

}

}

Start Logstash:

/usr/local/logstash/bin/logstash -f /etc/logstash.conf

Now check LLEN system-messages in Redis again; it has dropped to 0, which means the data has been shipped to Elasticsearch.
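The new index can also be confirmed from the command line, for example:

curl 'http://192.168.137.50:9200/_cat/indices?v'    # should list system-messages-2016.08.07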

Open the head plugin in a browser (typically http://192.168.137.50:9200/_plugin/head):

A new index named system-messages-2016.08.07, together with its shards and replicas, now appears.

4. The K in ELK (Kibana):

(4.1) Installation:

Just untar it and mv it into place, e.g. to /usr/local/kibana.

cd /usr/local/kibana/config/

vim kibana.yml and edit:

elasticsearch_url: "http://192.168.137.50:9200"

Start it:

nohup ./bin/kibana & (listens on port 5601 by default)

Access it in a browser (typically http://<kibana-server-ip>:5601).

The remaining operations are best explained with screenshots, so they are not covered here.


Experiment topology diagram (figure omitted)

1. Deploy the JDK environment (java-1.8.0-openjdk)
2. ELK software versions:
redis 2.8 (EPEL)
logstash 1.5 (rpm)
es 1.7 (rpm)
kibana 4.1 (rpm)
3. Installation and deployment
elk-node3: (logstash, nginx; 192.168.9.120)
# ~]# yum install /data/pkg/logstash-1.5.4-1.noarch.rpm nginx -y
# systemctl start nginx.service
# systemctl enable nginx.service
# vim /etc/logstash/conf.d/nginx-redis.conf
# input {
# file {
# path => ["/var/log/nginx/access.log"]
# type => "nginxlog"
# }
# }
# output {
# redis {
# host => "192.168.9.119"
# port => "6379"
# data_type => "list"
# key => "logstash-nginxlog"
# }
# }
# ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-redis.conf --configtest
# ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-redis.conf

elk-node2:(redis 192.168.9.119)
# vim /etc/redis.conf
# bind 0.0.0.0
# systemctl start redis.service
# systemctl enable redis.service

elk-node1:(logstash-server 192.168.9.118)
# ~]# yum install /data/pkg/logstash-1.5.4-1.noarch.rpm -y
# vi /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns/nginx
# NGUSERNAME [a-zA-Z\.\@\-\+_%]+
# NGUSER %{NGUSERNAME}
# NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
# ========
# vim /etc/logstash/conf.d/redis-grok-es.conf
# input {
# redis {
# host => "192.168.9.119"
# port => "6379"
# data_type => "list"
# key => "logstash-nginxlog"
# }
# }
# filter {
# grok {
# match => {"message" => "%{NGINXACCESS}"}
# }
# }
# output {
# elasticsearch {
# cluster => "loges"
# index => "logstash-%{+YYYY.MM.dd}"
# }
# }
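As with the agent config on elk-node3, it is worth validating this file before starting Logstash (a sketch, following the same pattern used above):
# ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-grok-es.conf --configtest
# ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis-grok-es.conf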
elk:(elasticsearch,kibana 192.168.9.77)
# yum install -y elasticsearch-1.7.2.noarch.rpm
# vi /etc/elasticsearch/elasticsearch.yml
# cluster.name: loges
# node.name: "elk"
# Install the head plugin: just upload it to the plugins directory and unzip it, then it is ready to use
# cd /usr/share/elasticsearch/plugins/
# unzip elasticsearch-head-latest.zip
# mv elasticsearch-head-master/ head
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service
Notes (simple deployment test):
1. Elasticsearch is the final, distributed data store.
2. logstash-server pulls data from Redis in real time.
3. logstash-agent pushes data to Redis in real time.
4. Once everything is running you will no longer see the data in Redis; keys shows no keys at all, because the data has already been pushed on to ES.
5. Structuring the web log data with grok is done on logstash-server (it could also be done on the agent, but in a large environment it is better to have logstash-server handle it centrally, reducing load and log-processing time on the front-end web servers); see the quick grok test sketched below.
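To sanity-check the NGINXACCESS pattern before wiring it into the pipeline, a quick stdin test can be run on elk-node1 (a sketch; the sample log line is made up):
# echo '192.168.9.1 - - [07/Aug/2016:17:46:10 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" -' | /opt/logstash/bin/logstash -e 'input { stdin{} } filter { grok { match => { "message" => "%{NGINXACCESS}" } } } output { stdout { codec => rubydebug } }'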
Viewing the data in ES (screenshot of the head plugin omitted):

# kibana
# tar xf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
# chown -R root.root kibana-4.1.2-linux-x64/
# ln -s kibana-4.1.2-linux-x64/ kibana
# vim kibana/config/kibana.yml
# elasticsearch_url: "http://localhost:9200"
# /usr/local/kibana/bin/kibana &
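A quick way to confirm Kibana is up (it listens on port 5601 by default) is, for example:
# ss -tlnp | grep 5601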
Kibana display (screenshot omitted):

Caution:
When I ran this lab, the system time was not synced with NTP and the timezone was wrong, so Kibana Discover showed no data at all. It is best to set the time and timezone first:
ntpdate cn.ntp.org.cn
timedatectl set-timezone Asia/Shanghai
Then query the last few days in Kibana and the data will appear.
