
[DOC] Remove obsolete node names from documentation

Funny node names were removed in #19456 and replaced by UUIDs. This commit removes the obsolete node names from the documentation and replaces them with real UUIDs.

closes #20065
Tanguy Leroux 9 years ago
commit 656596c2a9
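Since #19456 the default node name is derived from the node's UUID; the fielddata examples below show names such as `bGG90GE`, which is simply the first seven characters of a node ID like `bGG90GEiSGeezlbrcugAYQ`. A minimal sketch for checking this on a live cluster, assuming the `_cat/nodes` `full_id` parameter is available and using `localhost:9200` as a placeholder endpoint:

[source,sh]
--------------------------------------------------
# Print full node IDs next to node names; with default names the name
# is a short prefix of the node's UUID.
% curl 'localhost:9200/_cat/nodes?v&h=id,name&full_id=true'
--------------------------------------------------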

+ 5 - 5
docs/reference/cat.asciidoc

@@ -59,11 +59,11 @@ only those columns to appear.
 [source,sh]
 --------------------------------------------------
 % curl 'n1:9200/_cat/nodes?h=ip,port,heapPercent,name'
-192.168.56.40 9300 40.3 Captain Universe
-192.168.56.20 9300 15.3 Kaluu
-192.168.56.50 9300 17.0 Yellowjacket
-192.168.56.10 9300 12.3 Remy LeBeau
-192.168.56.30 9300 43.9 Ramsey, Doug
+192.168.56.40 9300 40.3 bGG90GE
+192.168.56.20 9300 15.3 H5dfFeA
+192.168.56.50 9300 17.0 I8hydUG
+192.168.56.10 9300 12.3 DKDM97B
+192.168.56.30 9300 43.9 6-bjhwl
 --------------------------------------------------
 
 You can also request multiple columns using simple wildcards like

+ 3 - 3
docs/reference/cat/allocation.asciidoc

@@ -8,9 +8,9 @@ and how much disk space they are using.
 --------------------------------------------------
 % curl '192.168.56.10:9200/_cat/allocation?v'
 shards disk.indices disk.used disk.avail disk.total disk.percent host          ip            node
-     1        3.1gb     5.6gb     72.2gb     77.8gb          7.8 192.168.56.10 192.168.56.10 Jarella
-     1        3.1gb     5.6gb     72.2gb     77.8gb          7.8 192.168.56.30 192.168.56.30 Solarr
-     1        3.0gb     5.5gb     72.3gb     77.8gb          7.6 192.168.56.20 192.168.56.20 Adam II
+     1        3.1gb     5.6gb     72.2gb     77.8gb          7.8 192.168.56.10 192.168.56.10 bGG90GE
+     1        3.1gb     5.6gb     72.2gb     77.8gb          7.8 192.168.56.30 192.168.56.30 I8hydUG
+     1        3.0gb     5.5gb     72.3gb     77.8gb          7.6 192.168.56.20 192.168.56.20 H5dfFeA
 --------------------------------------------------
 
 Here we can see that each node has been allocated a single shard and

+ 18 - 18
docs/reference/cat/fielddata.asciidoc

@@ -7,13 +7,13 @@ on every data node in the cluster.
 [source,sh]
 --------------------------------------------------
 % curl '192.168.56.10:9200/_cat/fielddata?v'
-id                     host    ip            node          field   size
-c223lARiSGeezlbrcugAYQ myhost1 10.20.100.200 Jessica Jones body    159.8kb
-c223lARiSGeezlbrcugAYQ myhost1 10.20.100.200 Jessica Jones text    225.7kb
-waPCbitNQaCL6xC8VxjAwg myhost2 10.20.100.201 Adversary     body    159.8kb
-waPCbitNQaCL6xC8VxjAwg myhost2 10.20.100.201 Adversary     text    275.3kb
-yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip     body    109.2kb
-yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip     text    175.3kb
+id                     host    ip            node    field size
+bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body  159.8kb
+bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE text  225.7kb
+H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body  159.8kb
+H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA text  275.3kb
+I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body  109.2kb
+I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG text  175.3kb
 --------------------------------------------------
 
 Fields can be specified either as a query parameter, or in the URL path:
@@ -21,19 +21,19 @@ Fields can be specified either as a query parameter, or in the URL path:
 [source,sh]
 --------------------------------------------------
 % curl '192.168.56.10:9200/_cat/fielddata?v&fields=body'
-id                     host    ip            node          field   size
-c223lARiSGeezlbrcugAYQ myhost1 10.20.100.200 Jessica Jones body    159.8kb
-waPCbitNQaCL6xC8VxjAwg myhost2 10.20.100.201 Adversary     body    159.8kb
-yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip     body    109.2kb
+id                     host    ip            node    field size
+bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body  159.8kb
+H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body  159.8kb
+I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body  109.2kb
 
 % curl '192.168.56.10:9200/_cat/fielddata/body,text?v'
-id                     host    ip            node          field   size
-c223lARiSGeezlbrcugAYQ myhost1 10.20.100.200 Jessica Jones body    159.8kb
-c223lARiSGeezlbrcugAYQ myhost1 10.20.100.200 Jessica Jones text    225.7kb
-waPCbitNQaCL6xC8VxjAwg myhost2 10.20.100.201 Adversary     body    159.8kb
-waPCbitNQaCL6xC8VxjAwg myhost2 10.20.100.201 Adversary     text    275.3kb
-yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip     body    109.2kb
-yaDkp-G3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 Microchip     text    175.3kb
+id                     host    ip            node    field size
+bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE body  159.8kb
+bGG90GEiSGeezlbrcugAYQ myhost1 10.20.100.200 bGG90GE text  225.7kb
+H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA body  159.8kb
+H5dfFeANQaCL6xC8VxjAwg myhost2 10.20.100.201 H5dfFeA text  275.3kb
+I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG body  109.2kb
+I8hydUG3R0q1AJ-HUEvkSQ myhost3 10.20.100.202 I8hydUG text  175.3kb
 --------------------------------------------------
 
 The output shows the individual fielddata for the `body` and `text` fields, one row per field per node.

+ 4 - 4
docs/reference/cat/master.asciidoc

@@ -8,7 +8,7 @@ master's node ID, bound IP address, and node name.
 --------------------------------------------------
 % curl 'localhost:9200/_cat/master?v'
 id                     ip            node
-Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
+Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA
 --------------------------------------------------
 
 This information is also available via the `nodes` command, but this
@@ -19,9 +19,9 @@ all nodes agree on the master:
 --------------------------------------------------
 % pssh -i -h list.of.cluster.hosts curl -s localhost:9200/_cat/master
 [1] 19:16:37 [SUCCESS] es3.vm
-Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
+Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA
 [2] 19:16:37 [SUCCESS] es2.vm
-Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
+Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA
 [3] 19:16:37 [SUCCESS] es1.vm
-Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
+Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 H5dfFeA
 --------------------------------------------------
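If `pssh` is not installed, a plain shell loop gives the same cross-check; a minimal sketch, assuming the host names sit one per line in `list.of.cluster.hosts`:

[source,sh]
--------------------------------------------------
# Ask every node who it thinks the master is; all lines should agree.
% for host in $(cat list.of.cluster.hosts); do echo -n "$host: "; curl -s "$host:9200/_cat/master"; done
--------------------------------------------------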

+ 10 - 10
docs/reference/cat/nodeattrs.asciidoc

@@ -6,9 +6,9 @@ The `nodeattrs` command shows custom node attributes.
 ["source","sh",subs="attributes,callouts"]
 --------------------------------------------------
 % curl 192.168.56.10:9200/_cat/nodeattrs
-node       host    ip          attr  value
-Black Bolt epsilon 192.168.1.8 rack  rack314
-Black Bolt epsilon 192.168.1.8 azone us-east-1
+node    host    ip          attr  value
+DKDM97B epsilon 192.168.1.8 rack  rack314
+DKDM97B epsilon 192.168.1.8 azone us-east-1
 --------------------------------------------------
 
 The first few columns give you basic info per node.
@@ -16,9 +16,9 @@ The first few columns give you basic info per node.
 
 ["source","sh",subs="attributes,callouts"]
 --------------------------------------------------
-node       host    ip
-Black Bolt epsilon 192.168.1.8
-Black Bolt epsilon 192.168.1.8
+node    host    ip
+DKDM97B epsilon 192.168.1.8
+DKDM97B epsilon 192.168.1.8
 --------------------------------------------------
 
 
@@ -52,15 +52,15 @@ mode (`v`). The header name will match the supplied value (e.g.,
 ["source","sh",subs="attributes,callouts"]
 --------------------------------------------------
 % curl 192.168.56.10:9200/_cat/nodeattrs?v&h=name,pid,attr,value
-name       pid   attr  value
-Black Bolt 28000 rack  rack314
-Black Bolt 28000 azone us-east-1
+name    pid   attr  value
+DKDM97B 28000 rack  rack314
+DKDM97B 28000 azone us-east-1
 --------------------------------------------------
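The `rack` and `azone` attributes above are user-defined. A minimal sketch of how such attributes might be supplied, assuming the 5.x `node.attr.*` settings (the same keys can live in `elasticsearch.yml` instead of on the command line):

[source,sh]
--------------------------------------------------
# Start a node with two custom attributes; they then appear in _cat/nodeattrs.
% ./bin/elasticsearch -Enode.attr.rack=rack314 -Enode.attr.azone=us-east-1
--------------------------------------------------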
 
 [cols="<,<,<,<,<",options="header",subs="normal"]
 |=======================================================================
 |Header |Alias |Appear by Default |Description |Example
-|`node`|`name`|Yes|Name of the node|Black Bolt
+|`node`|`name`|Yes|Name of the node|DKDM97B
 |`id` |`nodeId` |No |Unique node ID |k0zy
 |`pid` |`p` |No |Process ID |13061
 |`host` |`h` |Yes |Host name |n1

+ 2 - 2
docs/reference/cat/plugins.asciidoc

@@ -7,8 +7,8 @@ The `plugins` command provides a view per node of running plugins. This informat
 ------------------------------------------------------------------------------
 % curl 'localhost:9200/_cat/plugins?v'
 name    component       version        description
-Abraxas discovery-gce   5.0.0          The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism. 
-Abraxas lang-javascript 5.0.0          The JavaScript language plugin allows to have javascript as the language of scripts to execute.
+I8hydUG discovery-gce   5.0.0          The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism.
+I8hydUG lang-javascript 5.0.0          The JavaScript language plugin allows to have javascript as the language of scripts to execute.
 -------------------------------------------------------------------------------
 
 We can tell quickly how many plugins per node we have and which versions.
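Plugins are installed per node, which is why each node reports its own list. A minimal sketch, assuming the 5.x `elasticsearch-plugin` tool and one of the plugin names shown above:

[source,sh]
--------------------------------------------------
# Install a plugin on this node (repeat on every node, then restart it).
% ./bin/elasticsearch-plugin install discovery-gce
--------------------------------------------------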

+ 7 - 6
docs/reference/cat/recovery.asciidoc

@@ -15,12 +15,13 @@ are no shards in transit from one node to another:
 [source,sh]
 ----------------------------------------------------------------------------
 > curl -XGET 'localhost:9200/_cat/recovery?v'
-index shard time type  stage source_host source_node target_host target_node repository snapshot files files_percent bytes bytes_percent total_files total_bytes translog translog_percent total_translog
-index 0     87ms store done  127.0.0.1        Athena      127.0.0.1        Athena      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
-index 1     97ms store done  127.0.0.1        Athena      127.0.0.1        Athena      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
-index 2     93ms store done  127.0.0.1        Athena      127.0.0.1        Athena      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
-index 3     90ms store done  127.0.0.1        Athena      127.0.0.1        Athena      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
-index 4     9ms  store done  127.0.0.1        Athena      127.0.0.1        Athena      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
+index shard time type  stage source_host source_node target_host target_node repository snapshot files files_percent bytes bytes_percent total_files total_bytes translog translog_percent total_translog
+index 0     87ms store done  127.0.0.1        I8hydUG      127.0.0.1        I8hydUG      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
+index 1     97ms store done  127.0.0.1        I8hydUG      127.0.0.1        I8hydUG      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
+index 2     93ms store done  127.0.0.1        I8hydUG      127.0.0.1        I8hydUG      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
+index 3     90ms store done  127.0.0.1        I8hydUG      127.0.0.1        I8hydUG      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
+index 4     9ms  store done  127.0.0.1        I8hydUG      127.0.0.1        I8hydUG      n/a        n/a      0     0.0%          0     0.0%          0           0           0        100.0%           0
 ---------------------------------------------------------------------------
 
 In the above case, the source and target nodes are the same because the recovery

+ 23 - 23
docs/reference/cat/shards.asciidoc

@@ -10,9 +10,9 @@ Here we see a single index, with three primary shards and no replicas:
 [source,sh]
 --------------------------------------------------
 % curl 192.168.56.20:9200/_cat/shards
-wiki1 0 p STARTED 3014 31.1mb 192.168.56.10 Stiletto
-wiki1 1 p STARTED 3013 29.6mb 192.168.56.30 Frankie Raye
-wiki1 2 p STARTED 3973 38.1mb 192.168.56.20 Commander Kraken
+wiki1 0 p STARTED 3014 31.1mb 192.168.56.10 H5dfFeA
+wiki1 1 p STARTED 3013 29.6mb 192.168.56.30 bGG90GE
+wiki1 2 p STARTED 3973 38.1mb 192.168.56.20 I8hydUG
 --------------------------------------------------
 
 [float]
@@ -26,9 +26,9 @@ some bandwidth by supplying an index pattern to the end.
 [source,sh]
 --------------------------------------------------
 % curl 192.168.56.20:9200/_cat/shards/wiki*
-wiki2 0 p STARTED 197 3.2mb 192.168.56.10 Stiletto
-wiki2 1 p STARTED 205 5.9mb 192.168.56.30 Frankie Raye
-wiki2 2 p STARTED 275 7.8mb 192.168.56.20 Commander Kraken
+wiki2 0 p STARTED 197 3.2mb 192.168.56.10 H5dfFeA
+wiki2 1 p STARTED 205 5.9mb 192.168.56.30 bGG90GE
+wiki2 2 p STARTED 275 7.8mb 192.168.56.20 I8hydUG
 --------------------------------------------------
 
 
@@ -44,8 +44,8 @@ shards.  Where are they from and where are they going?
 % curl 192.168.56.10:9200/_cat/health
 1384315316 20:01:56 foo green 3 3 12 6 2 0 0
 % curl 192.168.56.10:9200/_cat/shards | fgrep RELO
-wiki1 0 r RELOCATING 3014 31.1mb 192.168.56.20 Commander Kraken -> 192.168.56.30 Frankie Raye
-wiki1 1 r RELOCATING 3013 29.6mb 192.168.56.10 Stiletto -> 192.168.56.30 Frankie Raye
+wiki1 0 r RELOCATING 3014 31.1mb 192.168.56.20 I8hydUG -> 192.168.56.30 bGG90GE
+wiki1 1 r RELOCATING 3013 29.6mb 192.168.56.10 H5dfFeA -> 192.168.56.30 bGG90GE
 --------------------------------------------------
 
 [float]
@@ -60,12 +60,12 @@ Before a shard can be used, it goes through an `INITIALIZING` state.
 % curl -XPUT 192.168.56.20:9200/_settings -d'{"number_of_replicas":1}'
 {"acknowledged":true}
 % curl 192.168.56.20:9200/_cat/shards
-wiki1 0 p STARTED      3014 31.1mb 192.168.56.10 Stiletto
-wiki1 0 r INITIALIZING    0 14.3mb 192.168.56.30 Frankie Raye
-wiki1 1 p STARTED      3013 29.6mb 192.168.56.30 Frankie Raye
-wiki1 1 r INITIALIZING    0 13.1mb 192.168.56.20 Commander Kraken
-wiki1 2 r INITIALIZING    0   14mb 192.168.56.10 Stiletto
-wiki1 2 p STARTED      3973 38.1mb 192.168.56.20 Commander Kraken
+wiki1 0 p STARTED      3014 31.1mb 192.168.56.10 H5dfFeA
+wiki1 0 r INITIALIZING    0 14.3mb 192.168.56.30 bGG90GE
+wiki1 1 p STARTED      3013 29.6mb 192.168.56.30 bGG90GE
+wiki1 1 r INITIALIZING    0 13.1mb 192.168.56.20 I8hydUG
+wiki1 2 r INITIALIZING    0   14mb 192.168.56.10 H5dfFeA
+wiki1 2 p STARTED      3973 38.1mb 192.168.56.20 I8hydUG
 --------------------------------------------------
 
 If a shard cannot be assigned, for example you've overallocated the
@@ -78,17 +78,17 @@ will remain `UNASSIGNED` with the <<reason-unassigned,reason code>> `ALLOCATION_
 % curl 192.168.56.20:9200/_cat/health
 1384316325 20:18:45 foo yellow 3 3 9 3 0 0 3
 % curl 192.168.56.20:9200/_cat/shards
-wiki1 0 p STARTED    3014 31.1mb 192.168.56.10 Stiletto
-wiki1 0 r STARTED    3014 31.1mb 192.168.56.30 Frankie Raye
-wiki1 0 r STARTED    3014 31.1mb 192.168.56.20 Commander Kraken
+wiki1 0 p STARTED    3014 31.1mb 192.168.56.10 H5dfFeA
+wiki1 0 r STARTED    3014 31.1mb 192.168.56.30 bGG90GE
+wiki1 0 r STARTED    3014 31.1mb 192.168.56.20 I8hydUG
 wiki1 0 r UNASSIGNED ALLOCATION_FAILED
-wiki1 1 r STARTED    3013 29.6mb 192.168.56.10 Stiletto
-wiki1 1 p STARTED    3013 29.6mb 192.168.56.30 Frankie Raye
-wiki1 1 r STARTED    3013 29.6mb 192.168.56.20 Commander Kraken
+wiki1 1 r STARTED    3013 29.6mb 192.168.56.10 H5dfFeA
+wiki1 1 p STARTED    3013 29.6mb 192.168.56.30 bGG90GE
+wiki1 1 r STARTED    3013 29.6mb 192.168.56.20 I8hydUG
 wiki1 1 r UNASSIGNED ALLOCATION_FAILED
-wiki1 2 r STARTED    3973 38.1mb 192.168.56.10 Stiletto
-wiki1 2 r STARTED    3973 38.1mb 192.168.56.30 Frankie Raye
-wiki1 2 p STARTED    3973 38.1mb 192.168.56.20 Commander Kraken
+wiki1 2 r STARTED    3973 38.1mb 192.168.56.10 H5dfFeA
+wiki1 2 r STARTED    3973 38.1mb 192.168.56.30 bGG90GE
+wiki1 2 p STARTED    3973 38.1mb 192.168.56.20 I8hydUG
 wiki1 2 r UNASSIGNED ALLOCATION_FAILED
 --------------------------------------------------
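To find out why a copy stays `UNASSIGNED`, the cluster allocation explain API (whose documentation is also updated below) can be queried directly; a minimal sketch, assuming index `wiki1`, shard `0`, and a replica copy:

[source,sh]
--------------------------------------------------
# Ask the cluster why this replica shard copy has not been assigned.
% curl -XGET '192.168.56.20:9200/_cluster/allocation/explain' -d '{
  "index" : "wiki1",
  "shard" : 0,
  "primary" : false
}'
--------------------------------------------------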
 

+ 6 - 6
docs/reference/cluster/allocation-explain.asciidoc

@@ -41,7 +41,7 @@ The response looks like:
   "remaining_delay_ms" : 0,                       <5>
   "nodes" : {
     "V-Spi0AyRZ6ZvKbaI3691w" : {
-      "node_name" : "node1",
+      "node_name" : "H5dfFeA",
       "node_attributes" : {                       <6>
         "bar" : "baz"
       },
@@ -58,7 +58,7 @@ The response looks like:
       } ]
     },
     "Qc6VL8c5RWaw1qXZ0Rg57g" : {
-      "node_name" : "node2",
+      "node_name" : "bGG90GE",
       "node_attributes" : {
         "bar" : "baz",
         "foo" : "bar"
@@ -76,7 +76,7 @@ The response looks like:
       } ]
     },
     "PzdyMZGXQdGhqTJHF_hGgA" : {
-      "node_name" : "node3",
+      "node_name" : "DKDM97B",
       "node_attributes" : { },
       "store" : {
         "shard_copy" : "NONE"
@@ -122,7 +122,7 @@ For a shard that is already assigned, the output looks similar to:
   "remaining_delay_ms" : 0,
   "nodes" : {
     "V-Spi0AyRZ6ZvKbaI3691w" : {
-      "node_name" : "Susan Storm",
+      "node_name" : "bGG90GE",
       "node_attributes" : {
         "bar" : "baz"
       },
@@ -139,7 +139,7 @@ For a shard that is already assigned, the output looks similar to:
       } ]
     },
     "Qc6VL8c5RWaw1qXZ0Rg57g" : {
-      "node_name" : "Slipstream",
+      "node_name" : "I8hydUG",
       "node_attributes" : {
         "bar" : "baz",
         "foo" : "bar"
@@ -157,7 +157,7 @@ For a shard that is already assigned, the output looks similar to:
       } ]
     },
     "PzdyMZGXQdGhqTJHF_hGgA" : {
-      "node_name" : "The Symbiote",
+      "node_name" : "H5dfFeA",
       "node_attributes" : { },
       "store" : {
         "shard_copy" : "NONE"

+ 1 - 1
docs/reference/cluster/tasks.asciidoc

@@ -28,7 +28,7 @@ The result will look similar to the following:
 {
   "nodes" : {
     "oTUltX4IQMOUUVeiohTt8A" : {
-      "name" : "Tamara Rahn",
+      "name" : "H5dfFeA",
       "transport_address" : "127.0.0.1:9300",
       "host" : "127.0.0.1",
       "ip" : "127.0.0.1:9300",

+ 2 - 2
docs/reference/docs/delete-by-query.asciidoc

@@ -244,7 +244,7 @@ The responses looks like:
 {
   "nodes" : {
     "r1A2WoRbTwKZ516z6NEs5A" : {
-      "name" : "Tyrannus",
+      "name" : "r1A2WoR",
       "transport_address" : "127.0.0.1:9300",
       "host" : "127.0.0.1",
       "ip" : "127.0.0.1:9300",
@@ -314,7 +314,7 @@ POST _tasks/taskid:1/_cancel
 
 The `task_id` can be found using the tasks API above.
 
-Cancelation should happen quickly but might take a few seconds. The task status
+Cancellation should happen quickly but might take a few seconds. The task status
 API above will continue to list the task until it wakes to cancel itself.
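A minimal sketch of locating that `task_id` for a running delete-by-query, assuming the `*/delete/byquery` action filter:

[source,sh]
--------------------------------------------------
# List running delete-by-query tasks; the node:id pairs are the task_ids.
% curl 'localhost:9200/_tasks?detailed=true&actions=*/delete/byquery'
--------------------------------------------------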
 
 

+ 1 - 1
docs/reference/docs/reindex.asciidoc

@@ -528,7 +528,7 @@ The responses looks like:
 {
   "nodes" : {
     "r1A2WoRbTwKZ516z6NEs5A" : {
-      "name" : "Tyrannus",
+      "name" : "r1A2WoR",
       "transport_address" : "127.0.0.1:9300",
       "host" : "127.0.0.1",
       "ip" : "127.0.0.1:9300",

+ 2 - 2
docs/reference/docs/update-by-query.asciidoc

@@ -306,7 +306,7 @@ The responses looks like:
 {
   "nodes" : {
     "r1A2WoRbTwKZ516z6NEs5A" : {
-      "name" : "Tyrannus",
+      "name" : "r1A2WoR",
       "transport_address" : "127.0.0.1:9300",
       "host" : "127.0.0.1",
       "ip" : "127.0.0.1:9300",
@@ -379,7 +379,7 @@ POST _tasks/taskid:1/_cancel
 
 The `task_id` can be found using the tasks API above.
 
-Cancelation should happen quickly but might take a few seconds. The task status
+Cancellation should happen quickly but might take a few seconds. The task status
 API above will continue to list the task until it wakes to cancel itself.
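Besides cancelling, a running update-by-query can usually be re-throttled while it runs; a minimal sketch, assuming the `_rethrottle` endpoint is available in your version and using the placeholder `taskid:1` from above:

[source,sh]
--------------------------------------------------
# Remove the throttle (-1 means unlimited) from a running update-by-query task.
% curl -XPOST 'localhost:9200/_update_by_query/taskid:1/_rethrottle?requests_per_second=-1'
--------------------------------------------------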
 
 

+ 31 - 17
docs/reference/getting-started.asciidoc

@@ -40,7 +40,8 @@ Note that it is valid and perfectly fine to have a cluster with only a single no
 [float]
 === Node
 
-A node is a single server that is part of your cluster, stores your data, and participates in the cluster's indexing and search capabilities. Just like a cluster, a node is identified by a name which by default is a random Marvel character name that is assigned to the node at startup. You can define any node name you want if you do not want the default.  This name is important for administration purposes where you want to identify which servers in your network correspond to which nodes in your Elasticsearch cluster.
+A node is a single server that is part of your cluster, stores your data, and participates in the cluster's indexing and search
+capabilities. Just like a cluster, a node is identified by a name which by default is a random Universally Unique IDentifier (UUID) that is assigned to the node at startup. You can define any node name you want if you do not want the default.  This name is important for administration purposes where you want to identify which servers in your network correspond to which nodes in your Elasticsearch cluster.
 
 A node can be configured to join a specific cluster by the cluster name. By default, each node is set up to join a cluster named `elasticsearch` which means that if you start up a number of nodes on your network and--assuming they can discover each other--they will all automatically form and join a single cluster named `elasticsearch`.
 
@@ -144,20 +145,33 @@ If everything goes well, you should see a bunch of messages that look like below
 ["source","sh",subs="attributes,callouts"]
 --------------------------------------------------
 ./elasticsearch
-[2014-03-13 13:42:17,218][INFO ][node           ] [New Goblin] version[{version}], pid[2085], build[5c03844/2014-02-25T15:52:53Z]
-[2014-03-13 13:42:17,219][INFO ][node           ] [New Goblin] initializing ...
-[2014-03-13 13:42:17,223][INFO ][plugins        ] [New Goblin] loaded [], sites []
-[2014-03-13 13:42:19,831][INFO ][node           ] [New Goblin] initialized
-[2014-03-13 13:42:19,832][INFO ][node           ] [New Goblin] starting ...
-[2014-03-13 13:42:19,958][INFO ][transport      ] [New Goblin] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.8.112:9300]}
-[2014-03-13 13:42:23,030][INFO ][cluster.service] [New Goblin] new_master [New Goblin][rWMtGj3dQouz2r6ZFL9v4g][mwubuntu1][inet[/192.168.8.112:9300]], reason: zen-disco-join (elected_as_master)
-[2014-03-13 13:42:23,100][INFO ][discovery      ] [New Goblin] elasticsearch/rWMtGj3dQouz2r6ZFL9v4g
-[2014-03-13 13:42:23,125][INFO ][http           ] [New Goblin] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.8.112:9200]}
-[2014-03-13 13:42:23,629][INFO ][gateway        ] [New Goblin] recovered [1] indices into cluster_state
-[2014-03-13 13:42:23,630][INFO ][node           ] [New Goblin] started
---------------------------------------------------
-
-Without going too much into detail, we can see that our node named "New Goblin" (which will be a different Marvel character in your case) has started and elected itself as a master in a single cluster. Don't worry yet at the moment what master means. The main thing that is important here is that we have started one node within one cluster.
+[2016-09-16T14:17:51,251][INFO ][o.e.n.Node               ] [] initializing ...
+[2016-09-16T14:17:51,329][INFO ][o.e.e.NodeEnvironment    ] [6-bjhwl] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [317.7gb], net total_space [453.6gb], spins? [no], types [ext4]
+[2016-09-16T14:17:51,330][INFO ][o.e.e.NodeEnvironment    ] [6-bjhwl] heap size [1.9gb], compressed ordinary object pointers [true]
+[2016-09-16T14:17:51,333][INFO ][o.e.n.Node               ] [6-bjhwl] node name [6-bjhwl] derived from node ID; set [node.name] to override
+[2016-09-16T14:17:51,334][INFO ][o.e.n.Node               ] [6-bjhwl] version[{version}], pid[21261], build[f5daa16/2016-09-16T09:12:24.346Z], OS[Linux/4.4.0-36-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_60/25.60-b23]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [aggs-matrix-stats]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [ingest-common]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-expression]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-groovy]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-mustache]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [lang-painless]
+[2016-09-16T14:17:51,967][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [percolator]
+[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [reindex]
+[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [transport-netty3]
+[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded module [transport-netty4]
+[2016-09-16T14:17:51,968][INFO ][o.e.p.PluginsService     ] [6-bjhwl] loaded plugin [mapper-murmur3]
+[2016-09-16T14:17:53,521][INFO ][o.e.n.Node               ] [6-bjhwl] initialized
+[2016-09-16T14:17:53,521][INFO ][o.e.n.Node               ] [6-bjhwl] starting ...
+[2016-09-16T14:17:53,671][INFO ][o.e.t.TransportService   ] [6-bjhwl] publish_address {192.168.8.112:9300}, bound_addresses {192.168.8.112:9300}
+[2016-09-16T14:17:53,676][WARN ][o.e.b.BootstrapCheck     ] [6-bjhwl] max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
+[2016-09-16T14:17:56,718][INFO ][o.e.c.s.ClusterService   ] [6-bjhwl] new_master {6-bjhwl}{6-bjhwl4TkajjoD2oEipnQ}{8m3SNKoFR6yQl1I0JUfPig}{192.168.8.112}{192.168.8.112:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
+[2016-09-16T14:17:56,731][INFO ][o.e.h.HttpServer         ] [6-bjhwl] publish_address {192.168.8.112:9200}, bound_addresses {[::1]:9200}, {192.168.8.112:9200}
+[2016-09-16T14:17:56,732][INFO ][o.e.g.GatewayService     ] [6-bjhwl] recovered [0] indices into cluster_state
+[2016-09-16T14:17:56,748][INFO ][o.e.n.Node               ] [6-bjhwl] started
+--------------------------------------------------
+
+Without going too much into detail, we can see that our node named "6-bjhwl" (which will be a different set of characters in your case) has started and elected itself as a master in a single cluster. Don't worry yet about what master means. The main thing that is important here is that we have started one node within one cluster.
 
 As mentioned previously, we can override either the cluster or node name. This can be done from the command line when starting Elasticsearch as follows:
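A minimal sketch, assuming the 5.x `-E` setting syntax and the placeholder names `my_cluster_name` and `my_node_name`:

[source,sh]
--------------------------------------------------
# Override the default cluster and node names at startup.
% ./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name
--------------------------------------------------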
 
@@ -218,10 +232,10 @@ And the response:
 --------------------------------------------------
 curl 'localhost:9200/_cat/nodes?v'
 host         ip        heap.percent ram.percent load node.role master name
-mwubuntu1    127.0.1.1            8           4 0.00 d         *      New Goblin
+mwubuntu1    127.0.1.1            8           4 0.00 d         *      I8hydUG
 --------------------------------------------------
 
-Here, we can see our one node named "New Goblin", which is the single node that is currently in our cluster.
+Here, we can see our one node named "I8hydUG", which is the single node that is currently in our cluster.
 
 === List All Indices
 

+ 1 - 1
docs/reference/setup/stopping.asciidoc

@@ -23,7 +23,7 @@ From the Elasticsearch startup logs:
 
 [source,sh]
 --------------------------------------------------
-[2016-07-07 12:26:18,908][INFO ][node                     ] [Reaper] version[5.0.0-alpha4], pid[15399], build[3f5b994/2016-06-27T16:23:46.861Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_92/25.92-b14]
+[2016-07-07 12:26:18,908][INFO ][node                     ] [I8hydUG] version[5.0.0-alpha4], pid[15399], build[3f5b994/2016-06-27T16:23:46.861Z], OS[Mac OS X/10.11.5/x86_64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_92/25.92-b14]
 --------------------------------------------------
 
 Or by specifying a location to write a PID file to on startup (`-p <path>`):
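A minimal sketch of that approach, assuming a writable `/tmp/elasticsearch-pid` path and the `-p`/`-d` startup flags:

[source,sh]
--------------------------------------------------
# Start in the background, recording the PID, then stop the node gracefully.
% ./bin/elasticsearch -p /tmp/elasticsearch-pid -d
% kill -SIGTERM $(cat /tmp/elasticsearch-pid)
--------------------------------------------------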