How to use ElasticSearch

This article mainly shows you how to use ElasticSearch. The content is easy to follow and clearly organized, and I hope it helps resolve your doubts. Follow along as the editor walks you through "How to use ElasticSearch".

1) Install ES

Download elasticsearch-(version number).tar.gz from the official website. After downloading, extract it and enter the directory:

tar -zvxf elasticsearch-1.1.0.tar.gz
cd elasticsearch-1.1.0

Optionally, install the Marvel plug-in, which is used for monitoring:

./bin/plugin -i elasticsearch/marvel/latest

If you want to know about this plugin, you can refer to the official documentation.

http://www.elasticsearch.org/guide/en/marvel/current/index.html

2) Run the program

./elasticsearch

If you see the following, it means success.

[2014-04-09 ...] [INFO] [node] [Lorna Dane] version[1.1.0], pid[839], build[2181e11/2014-03-25T15:59:51Z]
[2014-04-09 ...] [INFO] [node] [Lorna Dane] initializing ...
[2014-04-09 ...] [INFO] [plugins] [Lorna Dane] loaded [], sites []
[2014-04-09 ...] [INFO] [node] [Lorna Dane] initialized
[2014-04-09 ...] [INFO] [node] [Lorna Dane] starting ...
[2014-04-09 ...] [INFO] [transport] [Lorna Dane] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/XXXXXX:9300]}
[2014-04-09 ...] [INFO] [cluster.service] [Lorna Dane] new_master [Lorna Dane][Ml-gTu_ZTniHR2mkpbMQ_A][XXXXX][inet[/XXXXXX:9300]], reason: zen-disco-join (elected_as_master)
[2014-04-09 ...] [INFO] [discovery] [Lorna Dane] elasticsearch/Ml-gTu_ZTniHR2mkpbMQ_A
[2014-04-09 ...] [INFO] [http] [Lorna Dane] bound_address {...}, publish_address {inet[/XXXXX:9200]}
[2014-04-09 ...] [INFO] [gateway] [Lorna Dane] recovered [0] indices into cluster_state
[2014-04-09 ...] [INFO] [node] [Lorna Dane] started

If you want to run in the background, execute

./elasticsearch -d

To confirm that the program is running, check that its ports are open:

lsof -i:9200
lsof -i:9300

Port 9200 is the node's external (HTTP) service port, and port 9300 is the port used for communication between nodes (if there is a cluster).
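Another quick sanity check (a minimal sketch; localhost and the default HTTP port are assumed) is to query the HTTP endpoint directly:

curl 'http://localhost:9200/?pretty'

A running node answers with a small JSON document containing the node name, the version and a status field.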

3) Establish a cluster

The configuration file path is:

(your actual path)/config/elasticsearch.yml

By default, all configuration items are commented out.

The modified configuration items are as follows:

cluster.name: ctoes            # the name of the cluster
node.name: "QiangZiGeGe"       # the name of this node; note the double quotation marks
bootstrap.mlockall: true

Configuration items that are not mentioned here keep their default values; how to set specific parameters depends on your particular situation.

After the modification, start ES. If the printed messages show the names of the other nodes, the cluster has been established successfully.

Note: ES automatically discovers nodes with the same cluster name on the local area network.
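If multicast discovery does not work on your network, a commonly used alternative (a sketch; the host addresses are placeholders) is to disable multicast and list the nodes explicitly in elasticsearch.yml:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]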

To view the status of the cluster, you can use:

curl 'http://localhost:9200/_cluster/health?pretty'

The response is as follows:

{
  "cluster_name" : "ctoes",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
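For a per-node view, the cat API (available in this ES version line, as far as I know) is also handy:

curl 'http://localhost:9200/_cat/nodes?v'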

Next, let's use it to get an intuitive feeling.

4) Use it like a database to get a feel for it

Create an index (equivalent to creating a database)

Examples are as follows:

[deployer@XXXXXXX0013 ~]$ curl -XPUT 'http://localhost:9200/test1?pretty' -d '
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}'
{"acknowledged": true}

Note that the number_of_shards parameter is fixed once the index is created and can never be modified afterwards, while number_of_replicas can be changed later.
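For example, changing the replica count later is a simple settings update (a sketch; test1 and the value 2 are just placeholders):

curl -XPUT 'http://localhost:9200/test1/_settings' -d '
{
  "number_of_replicas": 2
}'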

The test1 in the URL above is the name of the index (database) being created. You can change it as needed.

Create a document

curl -XPUT 'http://localhost:9200/test1/table1/1' -d '
{
  "first": "dewmobile",
  "last": "technology",
  "age": 3000,
  "about": "hello,world",
  "interest": ["basketball", "music"]
}'

The response is as follows:

{"_index": "test1", "_type": "table1", "_id": "1", "_version": 1, "created": true}

This indicates that the document was created successfully.

test1: the name of the index (database) created above.

table1: the name of the type; a type corresponds to a table in a relational database.

1: the document ID (primary key) that you specify yourself; if you do not specify one, ES assigns an ID automatically.
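To confirm the document is really there, it can be fetched back by ID (a minimal check using the same index, type and ID as above):

curl -XGET 'http://localhost:9200/test1/table1/1?pretty'

The response contains the stored fields under _source.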

5) Install the database synchronization plug-in

Since our data source lives in MongoDB, we only cover data synchronization from a MongoDB data source here.

Plug-in source code: https://github.com/richardwilly98/elasticsearch-river-mongodb/

Introduction to the MongoDB River Plugin (author: Richard Louapre): this is a MongoDB synchronization plug-in. MongoDB must be set up as a replica set, because the plug-in works by periodically reading MongoDB's oplog to synchronize data.
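As a rough sketch of the replica-set requirement (rs0 is an arbitrary set name; adjust the data path and port to your setup), a standalone mongod can be turned into a single-member replica set like this:

mongod --replSet rs0 --dbpath /data/db --port 60004

Then, in the mongo shell connected to that instance, run rs.initiate() once so the oplog is created.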

How do you install and use it? Two plug-ins need to be installed.

1) plug-in 1

./plugin -install elasticsearch/elasticsearch-mapper-attachments/2.0.0

2) plug-in 2

./bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.0

The installation process is as follows:

./bin/plugin --install com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.0
-> Installing com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.0...
Trying http://download.elasticsearch.org/com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/elasticsearch-river-mongodb-2.0.0.zip...
Trying http://search.maven.org/remotecontent?filepath=com/github/richardwilly98/elasticsearch/elasticsearch-river-mongodb/2.0.0/elasticsearch-river-mongodb-2.0.0.zip...
Trying https://oss.sonatype.org/service/local/repositories/releases/content/com/github/richardwilly98/elasticsearch/elasticsearch-river-mongodb/2.0.0/elasticsearch-river-mongodb-2.0.0.zip...
Downloading ... DONE
Installed com.github.richardwilly98.elasticsearch/elasticsearch-river-mongodb/2.0.0 into /usr/local/elasticsearch_1.1.0/elasticsearch/elasticsearch-1.1.0/plugins/river-mongodb

3) Install the elasticsearch-MySQL (JDBC river) plug-in

For details, please refer to https://github.com/jprante/elasticsearch-river-jdbc, where you can also download the binary jar packages directly.

4) Install the MySQL driver jar package (required!)

With that, the plug-ins are installed.
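A hypothetical placement of the MySQL driver jar (the exact target directory depends on the plugin version; plugins/river-jdbc under the ES home and the connector version shown here are only assumptions):

cp mysql-connector-java-5.1.30.jar /usr/local/elasticsearch-1.1.0/plugins/river-jdbc/

Restart ES afterwards so the jar is picked up.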

6) Use the plug-ins to tell ES to add database-listening tasks

The template is as follows:

curl -XPUT 'localhost:9200/_river/mongo_resource/_meta' -d '
{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": "10.XX.XX.XX", "port": "60004" }
    ],
    "db": "zapya_api",
    "collection": "resources"
  },
  "index": {
    "name": "mongotest",
    "type": "resources"
  }
}'

If you see the following, the creation is successful.

{"_index": "_river", "_type": "mongodb", "_id": "_meta", "_version": 1, "created": true}

The data is then imported into ES and the index is established successfully.
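To verify that documents are actually flowing in, a quick count against the target index (mongotest, as named in the river definition above) is enough:

curl 'http://localhost:9200/mongotest/_count?pretty'

The count should grow as the river catches up with the oplog.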

~

If you are importing from MySQL, the template is as follows:

[deployer@XXX0014 ~]$ curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://localhost:3306/fastooth",
    "user": "XXX",
    "password": "XXX",
    "sql": "select *, base62Decode(display_name) as name from users"
  }
}'

In more detail:

{
  "jdbc": {
    "strategy": "simple",
    "url": null,
    "user": null,
    "password": null,
    "sql": null,
    "schedule": null,
    "poolsize": 1,
    "rounding": null,
    "scale": 2,
    "autocommit": false,
    "fetchsize": 10, /* Integer.MIN for MySQL */
    "max_rows": 0,
    "max_retries": 3,
    "max_retries_wait": "30s",
    "locale": Locale.getDefault().toLanguageTag(),
    "index": "jdbc",
    "type": "jdbc",
    "bulk_size": 100,
    "max_bulk_requests": 30,
    "bulk_flush_interval": "5s",
    "index_settings": null,
    "type_mapping": null
  }
}

For the schedule parameter, the format follows Quartz cron expressions; see:

http://www.quartz-scheduler.org/documentation/quartz-1.x/tutorials/crontrigger

https://github.com/jprante/elasticsearch-river-jdbc/issues/186
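For instance, a river that should re-run its SQL every ten minutes could add the following to the jdbc section of its definition (a sketch; the cron expression is only an example):

"schedule": "0 0/10 * * * ?"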

Official documents:

http://elasticsearch-users.115913.n3.nabble.com/Ann-JDBC-River-Plugin-for-ElasticSearch-td4019418.html

https://github.com/jprante/elasticsearch-river-jdbc/wiki/JDBC-River-parameters

https://github.com/jprante/elasticsearch-river-jdbc/wiki/Quickstart (including how to delete a task)

Appendix: http://my.oschina.net/wenhaowu/blog/215219#OSC_h3_7

During testing, the following error occurred:

[7]: index [yyyy], type [rrrr], id [1964986], message [RemoteTransportException[[2sdfsdf][inet[/xxxxxxxxxx:9300]][bulk/shard]]; nested: EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1@3e82ee89];]

Modify the configuration file and add at the end:

threadpool:
  bulk:
    type: fixed
    size: 60
    queue_size: 1000
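To check whether bulk requests are still being rejected after the change, the thread pool statistics can be inspected (a sketch; the cat API is assumed to be available in this version):

curl 'http://localhost:9200/_cat/thread_pool?v'

The rejected column for the bulk pool should stop growing.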

As for what these parameters mean, please consult the references below.

Reference:

http://stackoverflow.com/questions/20683440/elasticsearch-gives-error-about-queue-size

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-threadpool.html

~

For the client, we used the Play framework. Just as a database needs a driver package, ES needs a client library, which we found on the official website:

https://github.com/cleverage/play2-elasticsearch

For Chinese word segmentation, you can try to use Ansj.

~

About creating an index:

curl -i -XPUT 'XXX:9200/fasth' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'

~

Create a mapping:

curl -i -XPUT 'http://localhost:9200/fa/users/_mapping' -d '
{
  "properties": {
    "_id": {
      "type": "string",
      "index": "not_analyzed"
    },
    "name": {
      "type": "string"
    },
    "gender": {
      "type": "string",
      "index": "not_analyzed"
    },
    "primary_avatar": {
      "type": "string",
      "index": "not_analyzed"
    },
    "signature": {
      "type": "string",
      "index": "not_analyzed"
    }
  }
}'
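To double-check the result, the mapping can be read back (a minimal check using the same index and type):

curl -XGET 'http://localhost:9200/fa/users/_mapping?pretty'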

The full river task:

curl -XPUT 'xxx:9200/_river/mysql_users/_meta' -d '
{
  "type": "jdbc",
  "jdbc": {
    "url": "jdbc:mysql://XXX:3306/fastooth",
    "user": "XXX",
    "password": "XXX",
    "sql": "select distinct _id, base62Decode(display_name) as name, gender, primary_avatar, signature from users",
    "index": "XXX",
    "type": "XXX"
  }
}'
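If the river task ever needs to be removed (the Quickstart wiki linked above covers this; a minimal sketch using the same river name and host placeholder):

curl -XDELETE 'xxx:9200/_river/mysql_users/'

This deletes the river definition from the _river index, which stops the synchronization task.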

The above is all the content of "How to use ElasticSearch". Thank you for reading! I hope sharing this content has helped you gain a clearer understanding.
