Log System Configuration (updated over time)

A configuration/learning handbook for the Elastic stack products. Do not use local/loopback addresses anywhere an IP is required.
filebeat.yml

If both exclude_lines and include_lines are specified, Filebeat evaluates include_lines first and then applies exclude_lines to the lines that remain.
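As a minimal sketch of that ordering (the paths and patterns here are hypothetical), an input that keeps only error/warning lines and then drops deprecation noise might look like:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log          # hypothetical path
    include_lines: ['^ERR', '^WARN']  # evaluated first
    exclude_lines: ['deprecated']     # then applied to the surviving lines
```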
```yaml
filebeat.config:
  inputs:
    enabled: true
    path: inputs.d/*.yml
    reload.enabled: true
    reload.period: 10s
  modules:
    enabled: false
    path: modules.d/*.yml
    reload.enabled: true
    reload.period: 10s

# pretty: if set to true, events written to stdout are pretty-printed. Default is false.
# codec: output codec configuration. If the codec section is missing, events are
#        JSON-encoded honoring the "pretty" option.
# enabled: boolean switch to enable or disable the output. Set to false to disable it.
output.console:
  pretty: true

output.logstash:
  hosts: ["ip:5044"]

output.kafka:
  enabled: true
  hosts: ["192.168.81.210:9092", "192.168.81.220:9092", "192.168.81.230:9092"]
  topic: '%{[fields][log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

queue.mem:
  events: 256
  flush.min_events: 128
  flush.timeout: 1s

max_procs: 1

processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          var message = event.Get("message");
          message = message.replace(/\\x22/g, '"');
          message = message.replace(/\,-/g, '');
          event.Put("message", message.trim());
        }
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: false
      add_error_key: true
  - drop_fields:
      fields: ["log", "host", "input", "agent", "ecs", "message"]
      ignore_missing: false

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
logging.metrics.enabled: true
logging.metrics.period: 300s

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/logs/*info.log
    fields:
      filetype: info
    fields_under_root: true
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/logs/*error.log
    fields:
      filetype: error
    fields_under_root: true
```
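To make the processor chain above concrete, here is a sketch of a hypothetical event as it passes through the script processor and decode_json_fields (the sample message is invented for illustration):

```yaml
# as read from disk (hypothetical):
#   message: '{\x22user\x22:\x22alice\x22,\x22action\x22:\x22login\x22},-'
# after the script processor (\x22 restored to quotes, ",-" removed, trimmed):
#   message: '{"user":"alice","action":"login"}'
# after decode_json_fields with target: "" (keys promoted to the event root;
# drop_fields then removes the now-redundant message field):
user: alice
action: login
```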
```yaml
- type: log
  paths:
    - /xxx/logs/app/debug*.log
    - /xxx/logs/app/info*.log
    - /xxx/logs/app/error*.log
  fields:
    logtype: debug
    env: dev
    log_topic: log-data
  fields_under_root: true
  scan_frequency: 10s
  multiline.pattern: '^\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.?\d{0,3})\]'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 500ms
  multiline.flush_pattern: '^\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.?\d{0,3})\]'
- type: log
  ......
```
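The multiline pattern matches a leading `[YYYY-MM-DD HH:mm:ss.SSS]` timestamp; with `negate: true` and `match: after`, every line that does not start with such a timestamp is appended to the preceding event, which keeps stack traces together. A made-up excerpt:

```
[2021-03-01 12:00:00.123] ERROR request failed
java.lang.NullPointerException: user was null
    at com.example.Handler.handle(Handler.java:42)
[2021-03-01 12:00:01.456] INFO next request
```

The first three lines are shipped as a single event; the last line starts a new one.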
logstash.conf
```conf
input {
  beats {
    port => 5044
  }
  kafka {
    bootstrap_servers => "192.168.1.106:9092,192.168.1.107:9092,192.168.1.108:9092"
    auto_commit_interval_ms => 5000
    group_id => "logstash"
    client_id => "logstash-0"
    consumer_threads => 2
    auto_offset_reset => "latest"
    topics => ["topic1", "topic2"]
    add_field => { "logstash" => "192.168.1.143" }
    codec => json { charset => "UTF-8" }
  }
}

filter {
  if [logtype] == "bi" {
    mutate {
      # replace characters; remember to escape them
      gsub => [
        "message", '\\x22', '"',
        "message", '\,-', ''
      ]
    }
    # parse the message as JSON
    json {
      source => "message"
    }
  } else if [logtype] == "acc" {
    mutate {
      split => ["message", "~"]
      add_field => { "module" => "%{[message][1]}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.56.100:9200"]
    manage_template => false
    template => "/usr/share/logstash/templates/logstash.template.json"
    template_name => "sopei"
    template_overwrite => true
    index => "xxx-%{indexDay}-%{type}"
    codec => json
    user => "logstash_system"
    password => "xiaowu"
  }
  stdout {
    codec => rubydebug
  }
}
```
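For the `acc` branch above, `split` turns `message` into an array on `~`, and `%{[message][1]}` picks out its second element. With a made-up input line, the effect is:

```
# hypothetical input:  message = "2021-03-01 12:00:00~order-service~OK"
# after split:         message = ["2021-03-01 12:00:00", "order-service", "OK"]
# added field:         module  = "order-service"
```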
kibana.yml

```yaml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: "http://127.0.0.1:9200"
elasticsearch.username: 'kibana_system'
elasticsearch.password: 'xiaowu'
xpack.monitoring.ui.container.elasticsearch.enabled: true
# If you load-balance across multiple Kibana instances, every instance must use
# the same reporting encryption key.
xpack.reporting.encryptionKey: fd7c75cf-6abd-4704-a614-10a8679d64e7
xpack.reporting.capture.browser.chromium.disableSandbox: false
monitoring.cluster_alerts.email_notifications.email_address: xxx
server.publicBaseUrl: xxx
xpack.encryptedSavedObjects.encryptionKey: xxx
i18n.locale: "zh-CN"
```
elasticsearch.yml
```yaml
cluster.name: elasticsearch-cluster
node.name: es-node1
network.host: 0.0.0.0
network.publish_host: 192.168.56.100
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
discovery.type: single-node
node.master: true
node.data: true
# discovery.seed_hosts is not allowed together with discovery.type: single-node;
# use it (and drop discovery.type) only when forming a multi-node cluster.
# discovery.seed_hosts: ["192.168.56.100"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
bootstrap.memory_lock: true
```
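`bootstrap.memory_lock: true` only takes effect if the Elasticsearch process is allowed to lock memory. Assuming a systemd-managed install, that usually means a drop-in override like the following sketch (the path follows the standard systemd drop-in convention):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

After adding the override, reload systemd and restart the service for the limit to apply.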
logstash.yml

```yaml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: xiaowu
xpack.monitoring.elasticsearch.hosts: ["http://127.0.0.1:9200"]
```