
Uppercasing some docs section title (#37781)

Section titles are mostly uppercase; they should be lowercase only in the few cases where query DSL parameters or Java method names are used as the title.
Christoph Büscher 6 years ago
parent
commit
967de04257

+ 1 - 1
docs/community-clients/index.asciidoc

@@ -131,7 +131,7 @@ The following project appears to be abandoned:
   Node.js client for the Elasticsearch REST API

 [[kotlin]]
-== kotlin
+== Kotlin

 * https://github.com/mbuhot/eskotlin[ES Kotlin]:
   Elasticsearch Query DSL for kotlin based on the {client}/java-api/current/index.html[official Elasticsearch Java client].

+ 3 - 3
docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc

@@ -337,7 +337,7 @@ The JLH score can be used as a significance score by adding the parameter
 
 
 The scores are derived from the doc frequencies in _foreground_ and _background_ sets. The _absolute_ change in popularity (foregroundPercent - backgroundPercent) would favor common terms whereas the _relative_ change in popularity (foregroundPercent/ backgroundPercent) would favor rare terms. Rare vs common is essentially a precision vs recall balance and so the absolute and relative changes are multiplied to provide a sweet spot between precision and recall.

-===== mutual information
+===== Mutual information
 Mutual information as described in "Information Retrieval", Manning et al., Chapter 13.5.1 can be used as significance score by adding the parameter

 [source,js]
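The JLH trade-off described in the context above (absolute change in popularity multiplied by relative change) can be sketched in a few lines. This is an illustrative calculation of the stated formula, not the aggregation's exact implementation:

```python
def jlh_score(fg_count, fg_total, bg_count, bg_total):
    """Illustrative JLH-style score: absolute change in popularity
    (foregroundPercent - backgroundPercent) multiplied by relative
    change (foregroundPercent / backgroundPercent)."""
    fg_percent = fg_count / fg_total
    bg_percent = bg_count / bg_total
    # Only terms that are more frequent in the foreground are significant.
    if bg_percent == 0 or fg_percent <= bg_percent:
        return 0.0
    absolute_change = fg_percent - bg_percent   # favors common terms
    relative_change = fg_percent / bg_percent   # favors rare terms
    return absolute_change * relative_change

# A rare term with a large relative lift vs a common term with a
# large absolute lift:
print(jlh_score(5, 100, 10, 100_000))       # rare term
print(jlh_score(30, 100, 20_000, 100_000))  # common term
```

Multiplying the two factors rewards terms that score well on both axes, which is the precision/recall "sweet spot" the paragraph describes.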
@@ -373,7 +373,7 @@ Chi square as described in "Information Retrieval", Manning et al., Chapter 13.5
 Chi square behaves like mutual information and can be configured with the same parameters `include_negatives` and `background_is_superset`.


-===== google normalized distance
+===== Google normalized distance
 Google normalized distance  as described in "The Google Similarity Distance", Cilibrasi and Vitanyi, 2007 (http://arxiv.org/pdf/cs/0412098v3.pdf) can be used as significance score by adding the parameter

 [source,js]
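The chi-square heuristic mentioned in the hunk above follows the standard 2×2 contingency-table form from Manning et al., Chapter 13.5.2. A textbook sketch, assuming the background set is a superset of the foreground set (not the aggregation's exact implementation):

```python
def chi_square(subset_freq, subset_size, superset_freq, superset_size):
    """Textbook chi-square over a 2x2 contingency table of
    (term present / absent) x (foreground / background-only) counts.
    Assumes the superset contains the subset (background_is_superset)."""
    a = subset_freq                          # term present, foreground
    b = superset_freq - subset_freq          # term present, background only
    c = subset_size - subset_freq            # term absent, foreground
    d = (superset_size - subset_size) - b    # term absent, background only
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom
```

A term distributed identically in foreground and background scores zero (the counts are independent), while a term concentrated in the foreground scores high.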
@@ -412,7 +412,7 @@ It is hard to say which one of the different heuristics will be the best choice
 
 
 If none of the above measures suits your use case, then another option is to implement a custom significance measure:

-===== scripted
+===== Scripted
 Customized scores can be implemented via a script:

 [source,js]

+ 2 - 2
docs/reference/migration/migrate_7_0/mappings.asciidoc

@@ -67,8 +67,8 @@ should also be changed in the template to explicitly define `tree` to one of `ge
 or `quadtree`. This will ensure compatibility with previously created indexes.

 [float]
-==== deprecated `geo_shape` parameters
+==== Deprecated `geo_shape` parameters

 The following type parameters are deprecated for the `geo_shape` field type: `tree`,
 `precision`, `tree_levels`, `distance_error_pct`, `points_only`, and `strategy`. They
-will be removed in a future version.
+will be removed in a future version.

+ 2 - 2
docs/reference/testing/testing-framework.asciidoc

@@ -8,7 +8,7 @@ Testing is a crucial part of your application, and as information retrieval itse
 
 
 
 
 [[why-randomized-testing]]
-=== why randomized testing?
+=== Why randomized testing?

 The key concept of randomized testing is not to use the same input values for every testcase, but still be able to reproduce it in case of a failure. This allows to test with vastly different input variables in order to make sure, that your implementation is actually independent from your provided test data.
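The reproducibility idea in the paragraph above — fresh random inputs on every run, but replayable from a seed on failure — can be sketched generically. This is an illustration of the concept, not the Elasticsearch test framework's API; the function name is hypothetical:

```python
import random

def run_randomized_test(seed=None):
    """Run one test iteration on random input. The seed is reported in
    the failure message so a failing run can be replayed exactly."""
    if seed is None:
        seed = random.randrange(2**32)   # different inputs every run
    rng = random.Random(seed)            # but fully determined by the seed
    values = [rng.randint(-1000, 1000) for _ in range(rng.randint(1, 50))]
    # The property under test: sorting is idempotent.
    assert sorted(sorted(values)) == sorted(values), f"reproduce with seed={seed}"
    return seed

# A normal run picks a random seed; re-running with the returned seed
# replays exactly the same input values.
seed = run_randomized_test()
run_randomized_test(seed)
```

Real frameworks (e.g. the randomized testing used by Elasticsearch) follow the same pattern, threading the reported seed through every random choice a test makes.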
 
 
@@ -48,7 +48,7 @@ We provide a few classes that you can inherit from in your own test classes whic
 
 
 
 
 [[unit-tests]]
-=== unit tests
+=== Unit tests

 If your test is a well isolated unit test which doesn't need a running Elasticsearch cluster, you can use the `ESTestCase`. If you are testing lucene features, use `ESTestCase` and if you are testing concrete token streams, use the `ESTokenStreamTestCase` class. Those specific classes execute additional checks which ensure that no resources leaks are happening, after the test has run.