ELK 5.X environment building and common plug-ins installation
Environment:
ip: 192.168.250.131
os: CentOS 7.1.1503 (Core)
Memory should not be too low: give the machine at least 4 GB, otherwise elasticsearch will fail to start.
Software and versions (note: all packages here are extracted under /opt):
logstash-5.4.0.tar.gz
elasticsearch-5.4.0.tar.gz
kibana-5.4.0-linux-x86_64.tar.gz
jdk-8u92-linux-x64.tar.gz
Preparation before installation:
(Uninstall any JDK older than 1.8 first, otherwise elasticsearch will report an error.)
Add the following to /etc/profile:
export jdk=/opt/jdk
export PATH=$jdk/bin:$PATH
export elasticsearch=/opt/elasticsearch
export PATH=$elasticsearch/bin:$PATH
export logstash=/opt/logstash
export PATH=$logstash/bin:$PATH
export kibana=/opt/kibana
export PATH=$kibana/bin:$PATH
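A quick sanity check (a minimal sketch; it assumes the extracted directories were renamed to /opt/jdk, /opt/elasticsearch, /opt/logstash and /opt/kibana as the exports above imply):
source /etc/profile
java -version                                  # should report the 1.8 JDK
which elasticsearch logstash kibana            # should resolve to the /opt/... bin directories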
Environment settings
Add the following to the /etc/security/limits.conf file:
* soft nofile 65536
* hard nofile 65536
* soft memlock unlimited
* hard memlock unlimited
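The limits only apply to new login sessions; after logging in again they can be verified, for example:
ulimit -n        # open files, should show 65536
ulimit -l        # max locked memory, should show unlimited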
Add the following to /etc/sysctl.conf (run sysctl -p for it to take effect):
fs.file-max = 183723 # default value on CentOS 7
vm.max_map_count = 262144
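To apply and verify the kernel settings:
sysctl -p                         # reload /etc/sysctl.conf
sysctl vm.max_map_count           # should print vm.max_map_count = 262144
sysctl fs.file-max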
cat /etc/hosts
192.168.250.131 elk.cluster1.com
192.168.250.128 elk.cluster2.com
192.168.250.127 elk.cluster3.com
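All three nodes should share the same entries; a simple resolution check (assuming the other nodes are reachable):
getent hosts elk.cluster1.com     # should return 192.168.250.131
ping -c 1 elk.cluster2.com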
First, install and deploy the software: elasticsearch, logstash, and kibana in turn.
elasticsearch:
useradd elasticsearch
chown -R elasticsearch. /opt/elasticsearch
Modify the configuration file /opt/elasticsearch/config/elasticsearch.yml (note that there must be a space after each ":", otherwise there will be a syntax error; vim changes the highlighting when the key/value pair is written correctly).
cluster.name: elk-cluster #Custom cluster name; nodes in the same cluster must use the same name
node.name: elk.cluster1.com #Custom node name; using the node's hostname is recommended
path.data: /opt/elasticsearch #elasticsearch data directory
path.logs: /opt/elasticsearch/logs #elasticsearch log directory
bootstrap.memory_lock: true #mlockall: keep ES memory from being swapped out
network.host: 192.168.250.131 #ES listen address; use "0.0.0.0" to listen on all interfaces
http.port: 9200 #ES listen port; optional, this is the default
discovery.zen.ping.unicast.hosts: ["elk.cluster1.com","elk.cluster2.com","elk.cluster3.com"] #Unicast discovery list of cluster nodes; IP addresses also work
discovery.zen.minimum_master_nodes: 3 #Minimum number of master-eligible nodes required to elect a master
The following two settings are preparation for the head plugin installed later:
http.cors.enabled: true #Enable cross-origin access; the default is false
http.cors.allow-origin: "*" #Origins allowed for cross-origin access; regular expressions are supported
su - elasticsearch -c "/opt/elasticsearch/bin/elasticsearch -d" #Start the service
Test whether the installation succeeded:
curl 192.168.250.131:9200
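Beyond the banner response, the standard ES 5.x cluster APIs give a quick health check:
curl '192.168.250.131:9200/_cluster/health?pretty'     # status should be green or yellow
curl '192.168.250.131:9200/_cat/nodes?v'               # lists the nodes that joined the cluster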
Install the head plugin for elasticsearch
yum -y install git npm xz #npm and xz are needed when installing the plugin
1. Download the plugin: git clone git://github.com/mobz/elasticsearch-head.git
2. Install Node
① Since the head plugin is essentially a Node.js project, you need to install Node and use npm to install its dependencies. (npm plays roughly the same role for Node.js as Maven does for Java.)
Download Node.js from the official site: nodejs.org/en/download/
② Then extract the Node.js package under /opt:
mv node-v6.10.3-linux-x64 node
# Set the node environment in /etc/profile
export NODE_HOME=/opt/node
export PATH=$PATH:$NODE_HOME/bin
source /etc/profile
Test:
echo $NODE_HOME
node -v
v6.10.3
npm -v
3.10.10
③ Install the head plugin
cd /opt/elasticsearch-head
npm install
3. Install grunt
grunt is a convenient build tool for packaging, compression, testing, execution, and so on. In 5.X the head plugin is started through grunt.
cd /opt/elasticsearch-head/node_modules/grunt/bin
[root@elk bin]# ls
grunt
[root@elk bin]# ./grunt -V
grunt-cli v1.2.0
grunt v1.0.1
Note: grunt is already installed by default when we run npm install. If you want to install it yourself, run npm install grunt-cli.
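If you do install it manually, a minimal sketch (installing grunt-cli globally so the grunt command is on PATH):
npm install -g grunt-cli
grunt --version        # should print grunt-cli v1.x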
4. Modify the head source code
Since the head code still targets version 2.6, running it directly is restricted in several ways, such as cross-machine access, so you need to modify two places:
① Modify the server listening address
vim /opt/elasticsearch-head/Gruntfile.js #Modify the connect section as follows
connect: {
        server: {
                options: {
                        port: 9100,
                        hostname: "*",
                        base: '.',
                        keepalive: true
                }
        }
}
Add the hostname attribute and set it to "*".
② Modify the connection address in /opt/elasticsearch-head/_site/app.js:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.250.131:9200"; ##Change localhost to your ES server address
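The same change can also be scripted; a sketch, assuming app.js still contains the default http://localhost:9200 string:
sed -i 's|http://localhost:9200|http://192.168.250.131:9200|g' /opt/elasticsearch-head/_site/app.js
grep -n 'base_uri' /opt/elasticsearch-head/_site/app.js    # confirm the new address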
5. Run head
Then, in the head directory, run npm install again so that the remaining dependency packages are downloaded:
pwd
/opt/elasticsearch-head
[root@elk elasticsearch-head]# npm install
Restart your elasticsearch
su - elasticsearch -c "/opt/elasticsearch/bin/elasticsearch -d"
Start the Node.js head service
cd /opt/elasticsearch-head/node_modules/grunt/bin
nohup ./grunt server & #run in the background
Now visit http://192.168.250.131:9100 to access the head plugin.
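If the page does not load, confirm that grunt is actually listening on 9100, and open the port if firewalld is enabled:
ss -tlnp | grep 9100                                                      # grunt should be listening here
firewall-cmd --permanent --add-port=9100/tcp && firewall-cmd --reload     # only needed if firewalld is running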
Interacting with RESTful APIs
Check the current indices and shards; the head plugin presents the same information graphically.
curl -i -XGET 'http://192.168.250.131:9200/_search?pretty' -d '{
"query": {
"match_all": {}
}
}'
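To have something to query, you can first index a test document; a minimal sketch using a made-up index name (test-index) and field:
curl -XPUT 'http://192.168.250.131:9200/test-index/doc/1' -d '{"message": "hello elk"}'
curl 'http://192.168.250.131:9200/_cat/indices?v'        # the new index and its shards appear here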
logstash:
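A minimal pipeline sketch that reads from stdin and ships events to the elasticsearch node above (the file name /opt/logstash/test.conf and its settings are illustrative only):
cat > /opt/logstash/test.conf <<'EOF'
input { stdin { } }
output {
    elasticsearch { hosts => ["192.168.250.131:9200"] }
    stdout { codec => rubydebug }
}
EOF
logstash -f /opt/logstash/test.conf    # type a line; it should appear in elasticsearch and on stdout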
kibana:
/opt/kibana/config/kibana.yml
server.host: "192.168.250.131"
server.maxPayloadBytes: 1048576
server.name: "elk.cluster1.com"
elasticsearch.url: "http://192.168.250.131:9200"
nohup kibana -c /opt/kibana/config/kibana.yml &
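Kibana listens on port 5601 by default (server.port is not overridden above); a quick check once it has started:
curl -I http://192.168.250.131:5601    # should return an HTTP response once kibana is up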