
Cat API: Add node name to _cat/recovery

Add source_node and target_node fields to the recovery cat API. Also fixed and updated the documentation, which was incomplete with regard to field names.

Closes #8041
tlrx, 11 years ago
commit e1c75bae87
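The new columns can be requested explicitly through the cat API's `h` parameter. A minimal sketch, assuming a node listening on localhost:9200; the output is illustrative:

[source,shell]
--------------------------------------------------------------------------------
> curl -XGET 'localhost:9200/_cat/recovery?v&h=index,shard,type,stage,source_node,target_node'
index shard type    stage source_node target_node
wiki  0     gateway done  Athena      Athena
--------------------------------------------------------------------------------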

+ 24 - 29
docs/reference/cat/recovery.asciidoc

@@ -13,13 +13,13 @@ As an example, here is what the recovery state of a cluster may look like when t
 are no shards in transit from one node to another:

 [source,shell]
-----------------------------------------------------------------------------
+-----------------------------------------------------------------------------------------------------------------------------------------------
 > curl -XGET 'localhost:9200/_cat/recovery?v'
-index shard time type    stage source target files percent bytes     percent
-wiki  0     73   gateway done  hostA  hostA  36    100.0%  24982806 100.0%
-wiki  1     245  gateway done  hostA  hostA  33    100.0%  24501912 100.0%
-wiki  2     230  gateway done  hostA  hostA  36    100.0%  30267222 100.0%
----------------------------------------------------------------------------
+index shard time type    stage source_host source_node target_host target_node repository snapshot files files_percent bytes    bytes_percent
+wiki  0     73   gateway done  hostA       Athena      hostA       Athena      n/a        n/a      36    100.0%        24982806 100.0%
+wiki  1     245  gateway done  hostA       Athena      hostA       Athena      n/a        n/a      33    100.0%        24501912 100.0%
+wiki  2     230  gateway done  hostA       Athena      hostA       Athena      n/a        n/a      36    100.0%        30267222 100.0%
+-----------------------------------------------------------------------------------------------------------------------------------------------

 In the above case, the source and target nodes are the same because the recovery
 type was gateway, i.e. they were read from local storage on node start.
@@ -29,19 +29,19 @@ of our index and bringing another node online to host the replicas, we can see
 what a live shard recovery looks like.

 [source,shell]
-----------------------------------------------------------------------------
+-----------------------------------------------------------------------------------------------------------------------------------------------
 > curl -XPUT 'localhost:9200/wiki/_settings' -d'{"number_of_replicas":1}'
 {"acknowledged":true}

-> curl -XGET 'localhost:9200/_cat/recovery?v'
-index shard time type    stage source target files percent bytes    percent
-wiki  0     1252 gateway done  hostA  hostA  4     100.0%  23638870 100.0%
-wiki  0     1672 replica index hostA  hostB  4     75.0%   23638870 48.8%
-wiki  1     1698 replica index hostA  hostB  4     75.0%   23348540 49.4%
-wiki  1     4812 gateway done  hostA  hostA  33    100.0%  24501912 100.0%
-wiki  2     1689 replica index hostA  hostB  4     75.0%   28681851 40.2%
-wiki  2     5317 gateway done  hostA  hostA  36    100.0%  30267222 100.0%
-----------------------------------------------------------------------------
+    > curl -XGET 'localhost:9200/_cat/recovery?v'
+index shard time type    stage source_host source_node target_host target_node repository snapshot files files_percent bytes    bytes_percent
+wiki  0     1252 gateway done  hostA       Athena      hostA       Athena      n/a        n/a      4     100.0%        23638870 100.0%
+wiki  0     1672 replica index hostA       Athena      hostB       Boneyard    n/a        n/a      4     75.0%         23638870 48.8%
+wiki  1     1698 replica index hostA       Athena      hostB       Boneyard    n/a        n/a      4     75.0%         23348540 49.4%
+wiki  1     4812 gateway done  hostA       Athena      hostA       Athena      n/a        n/a      33    100.0%        24501912 100.0%
+wiki  2     1689 replica index hostA       Athena      hostB       Boneyard    n/a        n/a      4     75.0%         28681851 40.2%
+wiki  2     5317 gateway done  hostA       Athena      hostA       Athena      n/a        n/a      36    100.0%        30267222 100.0%
+-----------------------------------------------------------------------------------------------------------------------------------------------

 We can see in the above listing that our 3 initial shards are in various stages
 of being replicated from one node to another. Notice that the recovery type is
@@ -52,19 +52,14 @@ made a backup of my index, I can restore it using the <<modules-snapshots,snapsh
 API.

 [source,shell]
---------------------------------------------------------------------------------
+-----------------------------------------------------------------------------------------------------------------------------------------------
 > curl -XPOST 'localhost:9200/_snapshot/imdb/snapshot_2/_restore'
 {"acknowledged":true}
 > curl -XGET 'localhost:9200/_cat/recovery?v'
-index shard time type     stage repository snapshot files percent bytes percent
-imdb  0     1978 snapshot done  imdb       snap_1   79    8.0%    12086 9.0%
-imdb  1     2790 snapshot index imdb       snap_1   88    7.7%    11025 8.1%
-imdb  2     2790 snapshot index imdb       snap_1   85    0.0%    12072 0.0%
-imdb  3     2796 snapshot index imdb       snap_1   85    2.4%    12048 7.2%
-imdb  4     819  snapshot init  imdb       snap_1   0     0.0%    0     0.0%
---------------------------------------------------------------------------------
-
-
-
-
-
+index shard time type     stage  source_host source_node target_host target_node  repository snapshot files files_percent bytes    bytes_percent
+imdb  0     1978 snapshot done   n/a         n/a         hostA       Athena       imdb       snap_1   79    8.0%          12086    9.0%
+imdb  1     2790 snapshot index  n/a         n/a         hostA       Athena       imdb       snap_1   88    7.7%          11025    8.1%
+imdb  2     2790 snapshot index  n/a         n/a         hostA       Athena       imdb       snap_1   85    0.0%          12072    0.0%
+imdb  3     2796 snapshot index  n/a         n/a         hostA       Athena       imdb       snap_1   85    2.4%          12048    7.2%
+imdb  4     819  snapshot init   n/a         n/a         hostA       Athena       imdb       snap_1   0     0.0%          0        0.0%
+-----------------------------------------------------------------------------------------------------------------------------------------------
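Since the table is now considerably wider, the available column names can be confirmed with the cat API's `help` parameter. The listing below is illustrative and trimmed to the host/node columns; the aliases and descriptions come from the handler's cell definitions:

[source,shell]
--------------------------------------------------------------------------------
> curl -XGET 'localhost:9200/_cat/recovery?help'
source_host | shost | source host
source_node | snode | source node name
target_host | thost | target host
target_node | tnode | target node name
--------------------------------------------------------------------------------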

+ 2 - 0
rest-api-spec/test/cat.recovery/10_basic.yaml

@@ -31,7 +31,9 @@
                 (gateway|replica|snapshot|relocating)     \s+   # type
                 (init|index|start|translog|finalize|done) \s+   # stage
                 [-\w./]+    \s+                                 # source_host
+                [-\w./]+    \s+                                 # source_node
                 [-\w./]+    \s+                                 # target_host
+                [-\w./]+    \s+                                 # target_node
                 [-\w./]+    \s+                                 # repository
                 [-\w./]+    \s+                                 # snapshot
                 \d+         \s+                                 # files
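As a rough sanity check outside the test suite, the extended pattern's node matchers can be exercised against a sample row from the documentation above (this assumes GNU grep with PCRE support via -P):

[source,shell]
--------------------------------------------------------------------------------
> echo 'wiki 0 73 gateway done hostA Athena hostA Athena n/a n/a 36 100.0% 24982806 100.0%' | \
    grep -P '(gateway|replica|snapshot|relocating)\s+(init|index|start|translog|finalize|done)\s+([-\w./]+\s+){4}'
wiki 0 73 gateway done hostA Athena hostA Athena n/a n/a 36 100.0% 24982806 100.0%
--------------------------------------------------------------------------------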

+ 4 - 0
src/main/java/org/elasticsearch/rest/action/cat/RestRecoveryAction.java

@@ -86,7 +86,9 @@ public class RestRecoveryAction extends AbstractCatAction {
                 .addCell("type", "alias:ty;desc:recovery type")
                 .addCell("stage", "alias:st;desc:recovery stage")
                 .addCell("source_host", "alias:shost;desc:source host")
+                .addCell("source_node", "alias:snode;desc:source node name")
                 .addCell("target_host", "alias:thost;desc:target host")
+                .addCell("target_node", "alias:tnode;desc:target node name")
                 .addCell("repository", "alias:rep;desc:repository")
                 .addCell("snapshot", "alias:snap;desc:snapshot")
                 .addCell("files", "alias:f;desc:number of files")
@@ -142,7 +144,9 @@ public class RestRecoveryAction extends AbstractCatAction {
                 t.addCell(state.getType().toString().toLowerCase(Locale.ROOT));
                 t.addCell(state.getStage().toString().toLowerCase(Locale.ROOT));
                 t.addCell(state.getSourceNode() == null ? "n/a" : state.getSourceNode().getHostName());
+                t.addCell(state.getSourceNode() == null ? "n/a" : state.getSourceNode().getName());
                 t.addCell(state.getTargetNode().getHostName());
+                t.addCell(state.getTargetNode().getName());
                 t.addCell(state.getRestoreSource() == null ? "n/a" : state.getRestoreSource().snapshotId().getRepository());
                 t.addCell(state.getRestoreSource() == null ? "n/a" : state.getRestoreSource().snapshotId().getSnapshot());
                 t.addCell(state.getIndex().totalFileCount());
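Because addCell registers the snode and tnode aliases, the new columns can also be selected by alias, and the null guard for the source node shows up as n/a on snapshot-based recoveries. A sketch against a local node, with illustrative output:

[source,shell]
--------------------------------------------------------------------------------
> curl -XGET 'localhost:9200/_cat/recovery?v&h=index,shard,type,stage,snode,tnode'
index shard type     stage snode tnode
imdb  0     snapshot done  n/a   Athena
--------------------------------------------------------------------------------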