Open-Source Network Traffic and Log Analysis: ELK + ElastiFlow


Reposted from: https://zhuanlan.zhihu.com/p/561417540

Author: 攻城狮的手 (Zhihu)

Group member: Root



1. Install software dependencies

systemctl stop firewalld
sed -i 's/enforcing/disabled/g' /etc/selinux/config
setenforce 0
yum -y install vim bash-c* net-tools lrzsz wget unzip gcc gcc-c++ epel-release tree 
yum -y install java-openjdk  java-1.8.0-openjdk-devel
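
A quick optional check (not part of the original write-up) that the JDK installed and is on the PATH before the ELK packages go on:

java -version                    # should report an OpenJDK 1.8.0 runtime
readlink -f "$(which java)"      # confirms which JVM the services will use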

2. Install and enable ELK

wget -c http://itityunwei.cn/tools/elastiflow/elasticsearch-7.8.1-x86_64.rpm
wget -c http://itityunwei.cn/tools/elastiflow/logstash-7.8.1.rpm
wget -c http://itityunwei.cn/tools/elastiflow/kibana-7.8.1-x86_64.rpm
rpm -ivh elasticsearch-7.8.1-x86_64.rpm logstash-7.8.1.rpm kibana-7.8.1-x86_64.rpm
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl enable kibana.service
systemctl enable logstash.service
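
Optionally verify that all three packages installed and their systemd units are registered:

rpm -qa | grep -E 'elasticsearch|logstash|kibana'
systemctl list-unit-files | grep -E 'elasticsearch|kibana|logstash'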

3. Modify the ELK configuration files

vim /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
--------------------------------------------------------------------------------------
vim /etc/elasticsearch/jvm.options
-Xms4g
-Xmx8g
--------------------------------------------------------------------------------------
vim /etc/logstash/jvm.options
-Xms4g
-Xmx8g
--------------------------------------------------------------------------------------
vim /etc/logstash/startup.options
JAVACMD=/usr/bin/java
--------------------------------------------------------------------------------------
vim /etc/kibana/kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"
--------------------------------------------------------------------------------------
systemctl restart elasticsearch.service
systemctl restart kibana.service
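
A quick sanity check after the restart, assuming the default ports (9200 for Elasticsearch, 5601 for Kibana) are unchanged:

curl -s "http://localhost:9200/_cluster/health?pretty"    # expect status "green" or "yellow"
curl -sI "http://localhost:5601" | head -n 1               # Kibana may take a minute to respond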

4. Install Logstash plugins

/usr/share/logstash/bin/logstash-plugin install logstash-codec-sflow
/usr/share/logstash/bin/logstash-plugin install logstash-codec-netflow
/usr/share/logstash/bin/logstash-plugin install logstash-input-udp
/usr/share/logstash/bin/logstash-plugin install logstash-input-tcp
/usr/share/logstash/bin/logstash-plugin install logstash-filter-dns
/usr/share/logstash/bin/logstash-plugin install logstash-filter-geoip
/usr/share/logstash/bin/logstash-plugin install logstash-filter-translate
/usr/share/logstash/bin/system-install
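
To confirm the plugins are present (optional):

/usr/share/logstash/bin/logstash-plugin list | grep -E 'sflow|netflow|dns|geoip|translate'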

5. Set ELK file permissions

chown -R kibana:kibana /etc/kibana
chown -R kibana:kibana /usr/share/kibana
chown -R kibana:kibana /etc/default/kibana
chown -R elasticsearch:elasticsearch /etc/elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
chown -R elasticsearch:elasticsearch /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /etc/sysconfig/elasticsearch
chown -R logstash:logstash /etc/logstash
chown -R logstash:logstash /usr/share/logstash
chown -R logstash:logstash /var/lib/logstash
chown -R logstash:logstash /var/log/logstash
chown -R logstash:logstash /etc/default/logstash
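
Ownership can be spot-checked afterwards, for example:

ls -ld /etc/elasticsearch /var/lib/elasticsearch /etc/kibana /etc/logstash /var/lib/logstash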

6. Install ElastiFlow

cd /usr/local/src
wget -c http://itityunwei.cn/tools/elastiflow/elastiflow.tar.gz
tar -zxvf elastiflow.tar.gz
cd /usr/local/src/elastiflow
cp -r logstash/elastiflow /etc/logstash/
cp -r logstash.service.d /etc/systemd/system/
chown -R logstash:logstash /etc/logstash/elastiflow
--------------------------------------------------------
vim /etc/logstash/pipelines.yml
- pipeline.id: elastiflow
  path.config: "/etc/logstash/elastiflow/conf.d/*.conf"
# Keep the default "main" pipeline entry (path.config: "/etc/logstash/conf.d/*.conf")
# if the syslog pipeline from step 7 should also run under the logstash service.
--------------------------------------------------------
systemctl daemon-reload
systemctl enable logstash
systemctl start logstash
systemctl restart logstash
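
Once Logstash is running, the ElastiFlow pipeline should be listening for flow exports. A quick check, assuming ElastiFlow's default collector ports (2055 NetFlow, 6343 sFlow, 4739 IPFIX):

ss -lnup | grep -E '2055|6343|4739'
tail -f /var/log/logstash/logstash-plain.log     # watch for pipeline startup errors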

7. Logstash syslog collection configuration

cd /etc/logstash/conf.d
vim network.conf
------ Add the following content --------
input {
    udp {
        port => 3500
        type => "network"
    }
}
filter {
 if [type] == "network" {
        grok {
            match => {
                "message" => "<%{BASE10NUM:syslog_pri}>(?<switchtime>.*) %{DATA:hostname} %{DATA:ddModuleName}/%{POSINT:severity}/%{DATA:Brief}:%{GREEDYDATA:message}"
            }
            remove_field => [ "timestamp" ]
            add_field => {
                "severity_code" => "%{severity}"
            }
            overwrite => ["message"]
        }
  }
  mutate {
        # map the numeric syslog severity to its name (the numeric value is kept in severity_code)
        gsub => [
            "severity", "0", "Emergency",
            "severity", "1", "Alert",
            "severity", "2", "Critical",
            "severity", "3", "Error",
            "severity", "4", "Warning",
            "severity", "5", "Notice",
            "severity", "6", "Informational",
            "severity", "7", "Debug"
        ]
    }
}
output {
  #stdout { codec => rubydebug }
  if [type] == "network" {
        elasticsearch {
                hosts => ["http://127.0.0.1:9200"]
                index => "filebeat-syslog-%{+YYYY.MM.dd}"
#                ssl => true
#               cacert => "/etc/logstash/certs/ca.crt"
                user => "elastic"
                password => "xxxxx"

        }
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf     ### run
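
The configuration can also be syntax-checked without starting a pipeline (optional):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf --config.test_and_exit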

8. Switch sFlow and syslog configuration

Huawei:
sflow agent ip <switch IP>
sflow collector 1 ip <server IP>
sflow collector 1
int g0/0/1
sflow flow-sampling inbound
sflow flow-sampling outbound
sflow flow-sampling collector 1
sflow flow-sampling rate 256
-----------------------------------------
info-center loghost <switch IP>
info-center loghost <server IP> port 3500

Cisco:
flow record flow_record
 match ipv4 source address
 match ipv4 destination address
 match ipv4 protocol
 match transport source-port
 match transport destination-port
 match ipv4 tos
 match interface input
 collect interface output
 collect counter bytes layer2 long
 collect counter packets long

flow exporter flow_export
 destination <server IP>
 transport udp 2055
 template data timeout 60

flow monitor flow_monitor
 exporter flow_export
 cache timeout active 60
 record flow_record

int g0/0/1
ip flow monitor flow_monitor input

H3C:
sflow agent ip <switch IP>
sflow source ip <switch IP>
sflow collector 1 ip <server IP>
int g1/0/1
 sflow flow collector 1
 sflow sampling-rate 4000
 sflow counter collector 1
 sflow counter interval 120
-----------------------------------------
info-center loghost <switch IP>
info-center loghost <server IP> port 3500
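
On the collector server, tcpdump can confirm that flow and syslog packets are actually arriving from the switches (ports as configured above: 2055/6343 for flows, 3500 for syslog):

tcpdump -ni any udp port 2055 or udp port 6343 or udp port 3500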

9. Kibana configuration and results

Create a new index pattern named filebeat-syslog-* for the switch syslog data; the traffic-analysis (ElastiFlow) indices are generated automatically.
Copy the file elastiflow.kibana.7.8.x.ndjson from /usr/local/src/elastiflow/kibana to your local machine, then import it in Kibana (Stack Management → Saved Objects → Import).
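
Alternatively, the dashboards can be imported from the command line through Kibana's saved-objects API (a sketch, assuming Kibana listens on the default port 5601 with no authentication):

curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@elastiflow.kibana.7.8.x.ndjson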

That concludes the setup and the resulting dashboards.


Originally published on the WeChat official account 释然IT杂谈.
