Upper-case the value of the test field on every document in the index. Note that script bodies use "source"; the older "inline" key is deprecated:
POST my-index/_update_by_query
{
  "script": {
    "source": "ctx._source.test = ctx._source.test.toUpperCase()",
    "lang": "painless"
  }
}
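On a large index the same update can be launched in the background instead of blocking the request; wait_for_completion=false returns a task id that can be followed with the task APIs shown further down. A minimal sketch:

POST my-index/_update_by_query?wait_for_completion=false
{
  "script": {
    "source": "ctx._source.test = ctx._source.test.toUpperCase()",
    "lang": "painless"
  }
}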
Snapshot the index into the s3_system_index repository and wait for the snapshot to complete:
PUT /_snapshot/s3_system_index/index_backup?wait_for_completion=true
{
  "indices": "my-index",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "mcalves",
    "taken_because": "test"
  }
}
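To bring the data back from that snapshot, a minimal restore sketch (the target index must be closed or deleted first, or restored under a different name with rename_pattern/rename_replacement):

POST /_snapshot/s3_system_index/index_backup/_restore
{
  "indices": "my-index",
  "include_global_state": false
}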
Search all indexes for filebeat container logs belonging to a given client and cluster, restricted to a date-math range:
GET /_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "fields.client": "client" } },
        { "match": { "fields.cluster_name": "hpr" } },
        { "match": { "kubernetes.container.name": "filebeat" } }
      ],
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-5d/d",
              "lte": "now-3d/d"
            }
          }
        }
      ]
    }
  }
}
Delete every document in a fifteen-minute window:
POST my-index/_delete_by_query
{
  "query": {
    "bool": {
      "filter": {
        "range": {
          "@timestamp": {
            "gte": "2021-01-06T09:00:00.000Z",
            "lte": "2021-01-06T09:15:00.000Z",
            "format": "strict_date_optional_time"
          }
        }
      }
    }
  }
}
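A heavy delete-by-query can also run in the background, parallelized and throttled so it does not saturate the cluster; a sketch with illustrative parameter values:

POST my-index/_delete_by_query?wait_for_completion=false&slices=auto&requests_per_second=500
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2021-01-06T09:00:00.000Z",
        "lte": "2021-01-06T09:15:00.000Z"
      }
    }
  }
}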
List the delete-by-query tasks currently running on the cluster:
GET _tasks?actions=indices:data/write/delete/byquery
{
  "nodes" : {
    "atTwlxo0SXOBoVi9fLQKEw" : {
      "name" : "instance-0000000018",
      "transport_address" : "10.28.114.10:19892",
      "host" : "10.28.114.10",
      "ip" : "10.28.114.10:19892",
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "remote_cluster_client",
        "transform"
      ],
      "attributes" : {
        "logical_availability_zone" : "zone-0",
        "server_name" : "instance-0000000018.e6c12a416bfd4b379f16b69cc2079944",
        "availability_zone" : "Paris",
        "xpack.installed" : "true",
        "data" : "hot",
        "instance_configuration" : "69babbd5fa254640a99d976d6a96e6c5",
        "transform.node" : "true",
        "region" : "unknown-region"
      },
      "tasks" : {
        "atTwlxo0SXOBoVi9fLQKEw:178962026" : {
          "node" : "atTwlxo0SXOBoVi9fLQKEw",
          "id" : 178962026,
          "type" : "transport",
          "action" : "indices:data/write/delete/byquery",
          "start_time_in_millis" : 1635233212875,
          "running_time_in_nanos" : 365365276945,
          "cancellable" : true,
          "cancelled" : false,
          "headers" : { }
        }
      }
    }
  }
}
Follow a specific task by its id to track progress:
GET _tasks/atTwlxo0SXOBoVi9fLQKEw:178962026
{
  "completed" : false,
  "task" : {
    "node" : "atTwlxo0SXOBoVi9fLQKEw",
    "id" : 178962026,
    "type" : "transport",
    "action" : "indices:data/write/delete/byquery",
    "status" : {
      "total" : 16234361,
      "updated" : 0,
      "created" : 0,
      "deleted" : 183000,
      "batches" : 183,
      "version_conflicts" : 0,
      "noops" : 0,
      "retries" : {
        "bulk" : 0,
        "search" : 0
      },
      "throttled_millis" : 0,
      "requests_per_second" : -1.0,
      "throttled_until_millis" : 0
    },
    "description" : "delete-by-query [logs-example]",
    "start_time_in_millis" : 1635233212875,
    "running_time_in_nanos" : 574710736765,
    "cancellable" : true,
    "cancelled" : false,
    "headers" : { }
  }
}
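The response above reports "cancellable" : true, so the running delete-by-query can be stopped with the task cancel API:

POST _tasks/atTwlxo0SXOBoVi9fLQKEw:178962026/_cancel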
This request converts a string field to an integer and divides it by two:
POST example-*/_update_by_query
{
  "query": {
    "match": {
      "test": "example"
    }
  },
  "script": {
    "source": """
      int x = Integer.parseInt(String.valueOf(ctx._source.example_field));
      ctx._source.example_field = x / 2;
    """,
    "lang": "painless"
  }
}
When upgrading your Elastic cluster to a new version, you may run into a failure like this one:
Plan change failed: [ClusterFailure:IndicesLockedFailure]: The following indexes are blocking some operations. They must be unlocked to continue: [(.kibana_1,{"write":"true"});(.kibana_task_manager_1,{"write":"true"})] Details: [{"indices":"(.kibana_1,{\"write\":\"true\"});(.kibana_task_manager_1,{\"write\":\"true\"})"}]
Check the settings of the locked index, then remove its write block:
GET .kibana_1
PUT .kibana_1/_settings
{
  "index": {
    "blocks.write": false
  }
}
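The error message lists both system indexes, so the same unlock will likely be needed on the task manager index as well:

PUT .kibana_task_manager_1/_settings
{
  "index": {
    "blocks.write": false
  }
}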
Diagnose why a shard is not allocated:
GET /_cluster/allocation/explain
Here is an example of a shard allocation error:
"allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
"node_allocation_decisions": [
  {
    "node_id": "SGuWuLLeQpu4z-ERy9ukZg",
    "node_name": "instance-0000000085",
    "transport_address": "10.74.42.62:19311",
    "node_attributes": {
      "logical_availability_zone": "zone-1",
      "server_name": "instance-0000000085.35b51b61187c4e1db024d9f48e053304",
      "availability_zone": "paris",
      "xpack.installed": "true",
      "instance_configuration": "69babbd5fa254640a99d976d6a96e6c5",
      "transform.node": "true",
      "region": "unknown-region"
    },
    "node_decision": "no",
    "weight_ranking": 1,
    "deciders": [
      {
        "decider": "disk_threshold",
        "decision": "NO",
        "explanation": "the node is above the low watermark cluster setting [cluster.routing.allocation.disk.watermark.low=85%], using more disk space than the maximum allowed [85.0%], actual free: [14.746126979589464%]"
      }
    ]
  },
Temporarily raise the low watermark so shards can be allocated again:
PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.low" : "90%"
  }
}
Once disk space has been freed, clear the transient setting to fall back to the default:
PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.low" : null
  }
}
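Watermarks can also be expressed as absolute free-space amounts instead of percentages; when doing so, all levels must use byte values, since mixing percentage and byte watermarks is rejected. A sketch with illustrative values:

PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.disk.watermark.low" : "50gb",
    "cluster.routing.allocation.disk.watermark.high" : "25gb",
    "cluster.routing.allocation.disk.watermark.flood_stage" : "10gb"
  }
}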
Display the cluster settings, including defaults:
GET /_cluster/settings?include_defaults=true
Search for the watermark configuration; here is an example:
"disk": {
  "include_relocations": "true",
  "reroute_interval": "60s",
  "watermark": {
    "flood_stage.frozen.max_headroom": "20GB",
    "flood_stage": "95%",
    "high": "90%",
    "low": "85%",
    "enable_for_single_data_node": "true",
    "flood_stage.frozen": "95%"
  }
}
Manually move a shard from one node to another, or allocate a replica:
POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    },
    {
      "allocate_replica": {
        "index": "test", "shard": 1,
        "node": "node3"
      }
    }
  ]
}
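When shards stay unassigned because their allocation failed too many times, the retry counter can be reset so the cluster attempts allocation again:

POST /_cluster/reroute?retry_failed=true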
Monitor ongoing shard recoveries:
GET /_cat/recovery?active_only=true&v
index shard time type stage source_host source_node target_host target_node repository snapshot files files_recovered files_percent files_total bytes bytes_recovered bytes_percent bytes_total translog_ops translog_ops_recovered translog_ops_percent
logstash-my-index 0 1.1s peer done 10.74.42.62 instance-0000000080 10.74.42.73 instance-0000000085 n/a n/a 4 4 100.0% 4 35644 35644 100.0% 35644 0 0 100.0%
Another error that can appear during a Kibana migration, when the cluster has reached its maximum number of open shards:
FATAL Error: Unable to complete saved object migrations for the [.kibana_task_manager] index. Please check the health of your Elasticsearch cluster and try again. Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_task_manager_7.16.3_reindex_temp?wait_for_active_shards=all&timeout=60s error: [illegal_argument_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [2000]/[2000] maximum normal shards open;,
Display the cluster's default values:
GET /_cluster/settings?include_defaults=true
Raise the limit to 2000 shards per node:
PUT /_cluster/settings
{
"persistent": {
"cluster.max_shards_per_node": "2000"
}
}
Revert to the default value (1000) by clearing the setting:
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": null
  }
}
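To see how close the cluster is to the shard limit, the number of open shards can be read from the cluster health API:

GET /_cluster/health?filter_path=status,number_of_nodes,active_shards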