Andrew Johnstone
www.ajohnstone.com
Example format
curl -s -XGET 'http://localhost:9200/index1,index2/typeA,typeB/_search' -d '{
"query": { "match_all": {} }
}'
Mapping
curl -s -XGET 'http://localhost:9200/_mapping?pretty=true'
GET /_cluster/health
GET /_cluster/health/index1,index2
GET /_cluster/nodes/stats
GET /_cluster/nodes/nodeId1,nodeId2/stats
POST /_cluster/nodes/nodeId1,nodeId2/_shutdown
POST /_cluster/reroute # Re-route shards and nodes
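The reroute API takes explicit commands in its request body; a minimal sketch (the index and node names here are placeholders, not from this deck):

```shell
# Move shard 0 of index "test" between two named nodes (example names).
curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    }
  ]
}'
```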
PUT /member {
"index": {
"number_of_shards": 3,
"number_of_replicas": 2
}}
curl -XPUT localhost:9200/test/_settings -d '{
"index.routing.allocation.include.tag" : "value1,value2"
}'
curl -XPUT localhost:9200/test/_settings -d '{
"index.routing.allocation.include.group1" : "xxx",
"index.routing.allocation.include.group2" : "yyy",
"index.routing.allocation.exclude.group3" : "zzz"
}'
curl -XPUT localhost:9200/_cluster/settings -d '{
"transient" : {
"cluster.routing.allocation.exclude._ip" : "10.0.0.1"
}
}'
curl -XPUT 'http://localhost:9200/_template/template_name/' -d '
{
"template": "match-*",
"mappings": {
"_default_": {
"_source": { "compress": "true" },
"_all" : {"enabled" : false}
}
}
}'
curl -XPOST 'http://localhost:9200/organisations/_optimize?max_num_segments=2'
Use max_num_segments with a value of 2 or 3.
(Setting max_num_segments is IO intensive.)
curl -XPUT localhost:9200/test/_warmer/warmer_1 -d '
{
"query":{
"match_all":{
}
},
"facets":{
"facet_1":{
"terms":{
"field":"field"
}
}
}
}'
# get warmer named warmer_1 on test index
curl -XGET localhost:9200/test/_warmer/warmer_1
# get all warmers that start with warm on test index
curl -XGET localhost:9200/test/_warmer/warm*
# get all warmers for test index
curl -XGET localhost:9200/test/_warmer/
curl -XPUT localhost:9200/_template/template_1 -d '
{
"template":"template*",
"settings":{
"number_of_shards":1
},
"mappings":{
"type1":{
"_source":{
"enabled":false
}
}
}
}'
Filters are very handy: they can perform an order of magnitude better than plain queries, because no scoring is performed and their results are automatically cached.
Filters are a great candidate for caching. Caching the result of a filter does not require much memory, and makes other queries that execute the same filter (with the same parameters) blazingly fast.
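To combine a filter with a query, wrap both in a filtered query; a sketch (the index name and status field are assumptions, not from this deck):

```shell
curl -XGET 'http://localhost:9200/test/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "term": { "status": "active" } }
    }
  }
}'
```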
GET /_analyze?analyzer=standard -d 'testing'
GET /_analyze?tokenizer=snowball&filters=lowercase -d 'testing'
GET /_analyze?text=testing
GET /_analyze?field=obj1.field1 -d 'testing'
analysis:
  filter:
    ngram_filter:
      type: "nGram"
      min_gram: 3
      max_gram: 8
  analyzer:
    ngram_analyzer:
      tokenizer: "whitespace"
      filter: ["ngram_filter"]
      type: "custom"



A river is a pluggable service running within an elasticsearch cluster that pulls data (or is pushed data), which is then indexed into the cluster.
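As a sketch, a river is registered by putting a _meta document under the _river index; this example assumes the CouchDB river plugin is installed, and the host, port, and database name are placeholders:

```shell
curl -XPUT 'http://localhost:9200/_river/my_db/_meta' -d '{
  "type": "couchdb",
  "couchdb": {
    "host": "localhost",
    "port": 5984,
    "db": "my_db"
  },
  "index": {
    "index": "my_db",
    "type": "my_db"
  }
}'
```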
bin/
| bootstrap.php
| search
| build_index.php
| index_all.php
application/libs/
| Application
| | Search
| | | Criterion
| | | | Members
| | | | | Country.php
| | | | README.md
| | | Data
| | | | Producer
| | | | Example.php
| | | | README.md
| | | Result.php
| | | Structure.php
| | Service
| Elastica -> /usr/share/php/Elastica
| Photobox
| Search
| | Criterion
| | | Boolean.php
| | | Integer.php
| | | Intersect.php
| | | Keyword.php
| | | String.php
| | | Type.php
| | | Union.php
| | Criterion.php
| | Data
| | | Producer.php
| | | Transfer.php
| | Engine
| | | Elasticsearch
| | | | StructureAbstract.php
| | | Elasticsearch.php
| | Engine.php
| | Index
| | | Builder.php
| | Result.php
| Service
| Search
| Search.php