[DOCS] Documentation update for creating plugins (#93413)

* [DOCS] Documentation for the stable plugin API

* Removed references to rivers

* Add link to Cloud docs for managing plugins

* Add caveat about needing to update plugins

* Remove reference to site plugins

* Wording and clarifications

* Fix test

* Add link to text analysis docs

* Text analysis API dependencies

* Remove reference to REST endpoints and fix list

* Move plugin descriptor file to its own page

* Typos

* Review feedback

* Delete unused properties file

* Changed 'elasticsearchVersion' into 'pluginApiVersion'

* Swap 'The analysis plugin API' and 'Plugin file structure' sections

* Update docs/plugins/authors.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-non-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-non-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-non-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/example-text-analysis-plugin.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/plugin-descriptor-file.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/plugin-script.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-non-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Update docs/plugins/development/creating-non-text-analysis-plugins.asciidoc

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>

* Rewording

* Add modulename and extended.plugins descriptions for descriptor file

* Add link to existing plugins in Github

* Review feedback

* Use 'stable' and 'classic' plugin naming

* Fix capitalization

* Review feedback

---------

Co-authored-by: Arianna Laudazzi <46651782+alaudazzi@users.noreply.github.com>
Co-authored-by: William Brafford <william.brafford@elastic.co>
Abdon Pijpelink · 2 years ago · commit f93a94009f

+ 10 - 10
docs/plugins/analysis-icu.asciidoc

@@ -1,5 +1,5 @@
 [[analysis-icu]]
-=== ICU Analysis Plugin
+=== ICU analysis plugin
 
 The ICU Analysis plugin integrates the Lucene ICU module into {es},
 adding extended Unicode support using the https://icu.unicode.org/[ICU]
@@ -27,7 +27,7 @@ characters.
 include::install_remove.asciidoc[]
 
 [[analysis-icu-analyzer]]
-==== ICU Analyzer
+==== ICU analyzer
 
 The `icu_analyzer` analyzer performs basic normalization, tokenization and character folding, using the
 `icu_normalizer` char filter, `icu_tokenizer` and `icu_folding` token filter
@@ -45,7 +45,7 @@ The following parameters are accepted:
     Normalization mode. Accepts `compose` (default) or `decompose`.
 
 [[analysis-icu-normalization-charfilter]]
-==== ICU Normalization Character Filter
+==== ICU normalization character filter
 
 Normalizes characters as explained
 https://unicode-org.github.io/icu/userguide/transforms/normalization/[here].
@@ -100,7 +100,7 @@ PUT icu_sample
 <2> Uses the customized `nfd_normalizer` token filter, which is set to use `nfc` normalization with decomposition.
 
 [[analysis-icu-tokenizer]]
-==== ICU Tokenizer
+==== ICU tokenizer
 
 Tokenizes text into words on word boundaries, as defined in
 https://www.unicode.org/reports/tr29/[UAX #29: Unicode Text Segmentation].
@@ -199,7 +199,7 @@ The above `analyze` request returns the following:
 
 
 [[analysis-icu-normalization]]
-==== ICU Normalization Token Filter
+==== ICU normalization token filter
 
 Normalizes characters as explained
 https://unicode-org.github.io/icu/userguide/transforms/normalization/[here]. It registers
@@ -254,7 +254,7 @@ PUT icu_sample
 
 
 [[analysis-icu-folding]]
-==== ICU Folding Token Filter
+==== ICU folding token filter
 
 Case folding of Unicode characters based on `UTR#30`, like the
 {ref}/analysis-asciifolding-tokenfilter.html[ASCII-folding token filter]
@@ -324,7 +324,7 @@ PUT icu_sample
 
 
 [[analysis-icu-collation]]
-==== ICU Collation Token Filter
+==== ICU collation token filter
 
 [WARNING]
 ======
@@ -333,7 +333,7 @@ This token filter has been deprecated since Lucene 5.0. Please use
 ======
 
 [[analysis-icu-collation-keyword-field]]
-==== ICU Collation Keyword Field
+==== ICU collation keyword field
 
 Collations are used for sorting documents in a language-specific word order.
 The `icu_collation_keyword` field type is available to all indices and will encode
@@ -385,7 +385,7 @@ GET /my-index-000001/_search <3>
     a single token doc values, and applies the German ``phonebook'' order.
 <3> An example query which searches the `name` field and sorts on the `name.sort` field.
 
-===== Parameters for ICU Collation Keyword Fields
+===== Parameters for ICU collation keyword fields
 
 The following parameters are accepted by `icu_collation_keyword` fields:
 
@@ -488,7 +488,7 @@ Hiragana characters in `quaternary` strength.
 
 
 [[analysis-icu-transform]]
-==== ICU Transform Token Filter
+==== ICU transform token filter
 
 Transforms are used to process Unicode text in many different ways, such as
 case mapping, normalization, transliteration and bidirectional text handling.
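
A note on transform IDs: the IDs accepted by the `icu_transform` token filter are standard ICU transliterator IDs. As a minimal sketch outside {es} (assuming a `com.ibm.icu:icu4j` dependency, which is not part of this commit), the same transform can be exercised directly with ICU4J:

[source,java]
----
import com.ibm.icu.text.Transliterator;

public class TransformDemo {
    public static void main(String[] args) {
        // Romanize any script, decompose, strip diacritical marks, recompose.
        Transliterator transliterator =
            Transliterator.getInstance("Any-Latin; NFD; [:Nonspacing Mark:] Remove; NFC");
        // Prints "Dusseldorf Moskva"
        System.out.println(transliterator.transliterate("Düsseldorf Москва"));
    }
}
----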

+ 2 - 2
docs/plugins/analysis-kuromoji.asciidoc

@@ -1,7 +1,7 @@
 [[analysis-kuromoji]]
-=== Japanese (kuromoji) Analysis Plugin
+=== Japanese (kuromoji) analysis plugin
 
-The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis
+The Japanese (kuromoji) analysis plugin integrates Lucene kuromoji analysis
 module into {es}.
 
 :plugin_name: analysis-kuromoji

+ 1 - 1
docs/plugins/analysis-nori.asciidoc

@@ -1,5 +1,5 @@
 [[analysis-nori]]
-=== Korean (nori) Analysis Plugin
+=== Korean (nori) analysis plugin
 
 The Korean (nori) Analysis plugin integrates Lucene nori analysis
 module into elasticsearch. It uses the https://bitbucket.org/eunjeon/mecab-ko-dic[mecab-ko-dic dictionary]

+ 1 - 1
docs/plugins/analysis-phonetic.asciidoc

@@ -1,5 +1,5 @@
 [[analysis-phonetic]]
-=== Phonetic Analysis Plugin
+=== Phonetic analysis plugin
 
 The Phonetic Analysis plugin provides token filters which convert tokens to
 their phonetic representation using Soundex, Metaphone, and a variety of other

+ 1 - 1
docs/plugins/analysis-smartcn.asciidoc

@@ -1,5 +1,5 @@
 [[analysis-smartcn]]
-=== Smart Chinese Analysis Plugin
+=== Smart Chinese analysis plugin
 
 The Smart Chinese Analysis plugin integrates Lucene's Smart Chinese analysis
 module into elasticsearch.

+ 2 - 2
docs/plugins/analysis-stempel.asciidoc

@@ -1,7 +1,7 @@
 [[analysis-stempel]]
-=== Stempel Polish Analysis Plugin
+=== Stempel Polish analysis plugin
 
-The Stempel Analysis plugin integrates Lucene's Stempel analysis
+The Stempel analysis plugin integrates Lucene's Stempel analysis
 module for Polish into elasticsearch.
 
 :plugin_name: analysis-stempel

+ 2 - 2
docs/plugins/analysis-ukrainian.asciidoc

@@ -1,7 +1,7 @@
 [[analysis-ukrainian]]
-=== Ukrainian Analysis Plugin
+=== Ukrainian analysis plugin
 
-The Ukrainian Analysis plugin integrates Lucene's UkrainianMorfologikAnalyzer into elasticsearch.
+The Ukrainian analysis plugin integrates Lucene's UkrainianMorfologikAnalyzer into elasticsearch.
 
 It provides stemming for Ukrainian using the https://github.com/morfologik/morfologik-stemming[Morfologik project].
 

+ 1 - 1
docs/plugins/analysis.asciidoc

@@ -1,5 +1,5 @@
 [[analysis]]
-== Analysis Plugins
+== Analysis plugins
 
 Analysis plugins extend Elasticsearch by adding new analyzers, tokenizers,
 token filters, or character filters to Elasticsearch.

+ 1 - 1
docs/plugins/api.asciidoc

@@ -1,5 +1,5 @@
 [[api]]
-== API Extension Plugins
+== API extension plugins
 
 API extension plugins add new functionality to Elasticsearch by adding new APIs or features, usually to do with search or mapping.
 

+ 20 - 109
docs/plugins/authors.asciidoc

@@ -1,116 +1,27 @@
 [[plugin-authors]]
-== Help for plugin authors
+== Creating an {es} plugin
 
-:plugin-properties-files: {elasticsearch-root}/build-tools/src/main/resources
+{es} plugins are modular bits of code that add functionality to
+{es}. Plugins are written in Java and implement Java interfaces that
+are defined in the source code. Plugins are composed of JAR files and metadata
+files, compressed in a single zip file.
 
-The Elasticsearch repository contains {es-repo}tree/main/plugins/examples[examples of plugins]. Some of these include:
+There are two ways to create a plugin:
 
-* a plugin with {es-repo}tree/main/plugins/examples/custom-settings[custom settings]
-* adding {es-repo}tree/main/plugins/examples/rest-handler[custom rest endpoints]
-* adding a {es-repo}tree/main/plugins/examples/rescore[custom rescorer]
-* a script {es-repo}tree/main/plugins/examples/script-expert-scoring[implemented in Java]
+<<creating-stable-plugins>>:: 
+Text analysis plugins can be developed against the stable plugin API to provide
+{es} with custom Lucene analyzers, token filters, character filters, and
+tokenizers.
 
-These examples provide the bare bones needed to get started. For more
-information about how to write a plugin, we recommend looking at the plugins
-listed in this documentation for inspiration.
+<<creating-classic-plugins>>::
+Other plugins can be developed against the classic plugin API to provide custom
+authentication, authorization, or scoring mechanisms, and more.
 
-[discrete]
-=== Plugin descriptor file
+:plugin-type: stable
+include::development/creating-stable-plugins.asciidoc[]
+include::development/example-text-analysis-plugin.asciidoc[]
+:!plugin-type:
 
-All plugins must contain a file called `plugin-descriptor.properties`.
-The format for this file is described in detail in this example:
-
-["source","properties"]
---------------------------------------------------
-include::{plugin-properties-files}/plugin-descriptor.properties[]
---------------------------------------------------
-
-Either fill in this template yourself or, if you are using Elasticsearch's Gradle build system, you
-can fill in the necessary values in the `build.gradle` file for your plugin.
-
-[discrete]
-==== Mandatory elements for plugins
-
-
-[cols="<,<,<",options="header",]
-|=======================================================================
-|Element                    | Type   | Description
-
-|`description`              |String  | simple summary of the plugin
-
-|`version`                  |String  | plugin's version
-
-|`name`                     |String  | the plugin name
-
-|`classname`                |String  | the name of the class to load, fully-qualified.
-
-|`java.version`             |String  | version of java the code is built against.
-Use the system property `java.specification.version`. Version string must be a sequence
-of nonnegative decimal integers separated by "."'s and may have leading zeros.
-
-|`elasticsearch.version`    |String  | version of Elasticsearch compiled against.
-
-|=======================================================================
-
-Note that only jar files at the root of the plugin are added to the classpath for the plugin!
-If you need other resources, package them into a resources jar.
-
-[IMPORTANT]
-.Plugin release lifecycle
-==============================================
-
-You will have to release a new version of the plugin for each new Elasticsearch release.
-This version is checked when the plugin is loaded so Elasticsearch will refuse to start
-in the presence of plugins with the incorrect `elasticsearch.version`.
-
-==============================================
-
-
-[discrete]
-=== Testing your plugin
-
-When testing a Java plugin, it will only be auto-loaded if it is in the
-`plugins/` directory. Use `bin/elasticsearch-plugin install file:///path/to/your/plugin`
-to install your plugin for testing.
-
-You may also load your plugin within the test framework for integration tests.
-Read more in {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration].
-
-
-[discrete]
-[[plugin-authors-jsm]]
-=== Java Security permissions
-
-Some plugins may need additional security permissions. A plugin can include
-the optional `plugin-security.policy` file containing `grant` statements for
-additional permissions. Any additional permissions will be displayed to the user
-with a large warning, and they will have to confirm them when installing the
-plugin interactively. So if possible, it is best to avoid requesting any
-spurious permissions!
-
-If you are using the Elasticsearch Gradle build system, place this file in
-`src/main/plugin-metadata` and it will be applied during unit tests as well.
-
-Keep in mind that the Java security model is stack-based, and the additional
-permissions will only be granted to the jars in your plugin, so you will have
-write proper security code around operations requiring elevated privileges.
-It is recommended to add a check to prevent unprivileged code (such as scripts)
-from gaining escalated permissions. For example:
-
-[source,java]
---------------------------------------------------
-// ES permission you should check before doPrivileged() blocks
-import org.elasticsearch.SpecialPermission;
-
-SecurityManager sm = System.getSecurityManager();
-if (sm != null) {
-  // unprivileged code such as scripts do not have SpecialPermission
-  sm.checkPermission(new SpecialPermission());
-}
-AccessController.doPrivileged(
-  // sensitive operation
-);
---------------------------------------------------
-
-See https://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE]
-for more information.
+:plugin-type: classic
+include::development/creating-classic-plugins.asciidoc[]
+:!plugin-type:

+ 94 - 0
docs/plugins/development/creating-classic-plugins.asciidoc

@@ -0,0 +1,94 @@
+[[creating-classic-plugins]]
+=== Creating classic plugins
+
+Classic plugins provide {es} with mechanisms for custom authentication,
+authorization, scoring, and more.
+
+[IMPORTANT]
+.Plugin release lifecycle
+==============================================
+
+Classic plugins require you to build a new version for each new {es} release.
+This version is checked when the plugin is installed and when it is loaded. {es}
+will refuse to start in the presence of plugins with the incorrect
+`elasticsearch.version`.
+
+==============================================
+
+[discrete]
+==== Classic plugin file structure
+
+Classic plugins are ZIP files composed of JAR files and
+<<plugin-descriptor-file-{plugin-type},a metadata file called
+`plugin-descriptor.properties`>>, a Java properties file that describes the
+plugin.
+
+Note that only JAR files at the root of the plugin are added to the classpath
+for the plugin. If you need other resources, package them into a resources JAR.
+
+[discrete]
+==== Example plugins
+
+The {es} repository contains {es-repo}tree/main/plugins/examples[examples of plugins]. Some of these include:
+
+* a plugin with {es-repo}tree/main/plugins/examples/custom-settings[custom settings]
+* adding {es-repo}tree/main/plugins/examples/rest-handler[custom REST endpoints]
+* adding a {es-repo}tree/main/plugins/examples/rescore[custom rescorer]
+* a script {es-repo}tree/main/plugins/examples/script-expert-scoring[implemented in Java]
+
+These examples provide the bare bones needed to get started. For more
+information about how to write a plugin, we recommend looking at the 
+{es-repo}tree/main/plugins/[source code of existing plugins] for inspiration.
+
+[discrete]
+==== Testing your plugin
+
+Use `bin/elasticsearch-plugin install file:///path/to/your/plugin`
+to install your plugin for testing. The Java plugin is auto-loaded only if it's in the
+`plugins/` directory.
+
+You may also load your plugin within the test framework for integration tests.
+Check {ref}/integration-tests.html#changing-node-configuration[Changing Node Configuration] for more information.
+
+[discrete]
+[[plugin-authors-jsm]]
+==== Java Security permissions
+
+Some plugins may need additional security permissions. A plugin can include
+the optional `plugin-security.policy` file containing `grant` statements for
+additional permissions. Any additional permissions will be displayed to the user
+with a large warning, and they will have to confirm them when installing the
+plugin interactively. So if possible, it is best to avoid requesting any
+spurious permissions!
+
+If you are using the {es} Gradle build system, place this file in
+`src/main/plugin-metadata` and it will be applied during unit tests as well.
+
+The Java security model is stack-based, and additional
+permissions are granted only to the JARs in your plugin, so you have to
+write proper security code around operations requiring elevated privileges.
+You should add a check to prevent unprivileged code (such as scripts)
+from gaining escalated permissions. For example:
+
+[source,java]
+--------------------------------------------------
+// ES permission you should check before doPrivileged() blocks
+import org.elasticsearch.SpecialPermission;
+
+import java.security.AccessController;
+import java.security.PrivilegedAction;
+
+SecurityManager sm = System.getSecurityManager();
+if (sm != null) {
+  // unprivileged code such as scripts do not have SpecialPermission
+  sm.checkPermission(new SpecialPermission());
+}
+AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
+  // sensitive operation
+  return null;
+});
+--------------------------------------------------
+
+Check https://www.oracle.com/technetwork/java/seccodeguide-139067.html[Secure Coding Guidelines for Java SE]
+for more information.
+
+[[plugin-descriptor-file-classic]]
+==== The plugin descriptor file for classic plugins
+
+include::plugin-descriptor-file.asciidoc[]
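
The `doPrivileged` pattern above can be centralized in a small helper so each privileged call site stays short. A minimal sketch, assuming a hypothetical `Elevated` utility class (the class and method names are illustrative; only `SpecialPermission` is a real {es} class):

[source,java]
----
import org.elasticsearch.SpecialPermission;

import java.security.AccessController;
import java.security.PrivilegedAction;

// Hypothetical helper that performs the SpecialPermission check before
// running a sensitive operation with elevated privileges.
public final class Elevated {
    private Elevated() {}

    public static <T> T run(PrivilegedAction<T> action) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // unprivileged code such as scripts does not have SpecialPermission
            sm.checkPermission(new SpecialPermission());
        }
        return AccessController.doPrivileged(action);
    }
}
----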

+ 131 - 0
docs/plugins/development/creating-stable-plugins.asciidoc

@@ -0,0 +1,131 @@
+[[creating-stable-plugins]]
+=== Creating text analysis plugins with the stable plugin API
+
+Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene 
+analyzers, token filters, character filters, and tokenizers]. 
+
+[discrete]
+==== The stable plugin API
+
+Text analysis plugins can be developed against the stable plugin API. This API
+consists of the following dependencies:
+
+* `plugin-api` - an API used by plugin developers to implement custom {es} 
+plugins.
+* `plugin-analysis-api` - an API used by plugin developers to implement analysis
+plugins and integrate them into {es}.
+* `lucene-analysis-common` - a dependency of `plugin-analysis-api` that contains
+core Lucene analysis interfaces like `Tokenizer`, `Analyzer`, and `TokenStream`.
+
+For new versions of {es} within the same major version, plugins built against
+this API do not need to be recompiled. Future versions of the API will be
+backwards compatible and plugins are binary compatible with future versions of
+{es}. In other words, once you have a working artifact, you can re-use it when
+you upgrade {es} to a new bugfix or minor version.
+
+A text analysis plugin can implement four factory classes that are provided by
+the analysis plugin API.
+
+* `AnalyzerFactory` to create a Lucene analyzer
+* `CharFilterFactory` to create a Lucene character filter
+* `TokenFilterFactory` to create a Lucene token filter
+* `TokenizerFactory` to create a Lucene tokenizer
+
+The key to implementing a stable plugin is the `@NamedComponent` annotation.
+Many of {es}'s components have names that are used in configurations. For
+example, the keyword analyzer is referenced in configuration with the name
+`"keyword"`. Once your custom plugin is installed in your cluster, your named
+components may be referenced by name in these configurations as well.
+
+You can also create a text analysis plugin as a <<creating-classic-plugins,
+classic plugin>>. However, classic plugins are pinned to a specific version of
+{es}. You need to recompile them when upgrading {es}. Because classic plugins
+are built against internal APIs that can change, upgrading to a new version may
+require code changes.
+
+[discrete]
+==== Stable plugin file structure
+
+Stable plugins are ZIP files composed of JAR files and two metadata files:
+
+* `stable-plugin-descriptor.properties` - a Java properties file that describes 
+the plugin. Refer to <<plugin-descriptor-file-{plugin-type}>>.
+* `named_components.json` - a JSON file mapping interfaces to key-value pairs 
+of component names and implementation classes.
+
+Note that only JAR files at the root of the plugin are added to the classpath
+for the plugin. If you need other resources, package them into a resources JAR.
+
+[discrete]
+==== Development process
+
+Elastic provides a Gradle plugin, `elasticsearch.stable-esplugin`, that makes it
+easier to develop and package stable plugins. The steps in this section assume
+you use this plugin. However, you don't need Gradle to create plugins.
+
+The {es} GitHub repository contains
+{es-repo}tree/main/plugins/examples/stable-analysis[an example analysis plugin].
+The example `build.gradle` build script provides a good starting point for 
+developing your own plugin.
+
+[discrete]
+===== Prerequisites
+
+Plugins are written in Java, so you need to install a Java Development Kit
+(JDK). Install Gradle if you want to use it.
+
+[discrete]
+===== Step by step
+
+. Create a directory for your project. 
+. Copy the example `build.gradle` build script to your project directory. Note
+that this build script uses the `elasticsearch.stable-esplugin` Gradle plugin to
+build your plugin.
+. Edit the `build.gradle` build script:
+** Add a definition for the `pluginApiVersion` and matching `luceneVersion` 
+variables to the top of the file. You can find these versions in the 
+`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch 
+GitHub repository].
+** Edit the `name` and `description` in the `esplugin` section of the build 
+script. This will create the plugin descriptor file. If you're not using the 
+`elasticsearch.stable-esplugin` Gradle plugin, refer to 
+<<plugin-descriptor-file-{plugin-type}>> to create the file manually.
+** Add module information (a `module-info.java` sketch is shown below).
+** Ensure you have declared the following compile-time dependencies. These 
+dependencies are compile-time only because {es} will provide these libraries at 
+runtime.
+*** `org.elasticsearch.plugin:elasticsearch-plugin-api`
+*** `org.elasticsearch.plugin:elasticsearch-plugin-analysis-api`
+*** `org.apache.lucene:lucene-analysis-common`
+** For unit testing, ensure these dependencies have also been added to the 
+`build.gradle` script as `testImplementation` dependencies.
+. Implement an interface from the analysis plugin API, annotating it with 
+`NamedComponent`. Refer to <<example-text-analysis-plugin>> for an example.
+. You should now be able to assemble a plugin ZIP file by running:
++
+[source,sh]
+----
+gradle bundlePlugin
+----
+The resulting plugin ZIP file is written to the `build/distributions` 
+directory.
+
+[discrete]
+===== YAML REST tests
+
+The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your 
+plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework]. 
+These tests use a YAML-formatted domain language to issue REST requests against
+an internal {es} cluster that has your plugin installed, and to check the 
+results of those requests. The structure of a YAML REST test directory is as 
+follows:
+
+* A test suite class, defined under `src/yamlRestTest/java`. This class should 
+extend `ESClientYamlSuiteTestCase`.
+* The YAML tests themselves should be defined under 
+`src/yamlRestTest/resources/test/`.
+
+[[plugin-descriptor-file-stable]]
+==== The plugin descriptor file for stable plugins
+
+include::plugin-descriptor-file.asciidoc[]
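
For the "Add module information" step above, a stable plugin can carry a `module-info.java` at the root of its sources. A minimal sketch follows; the plugin module name is your own choice, and the names in the `requires` clauses are assumptions to verify against the actual module names of the API JARs you compile against:

[source,java]
----
// Hypothetical module declaration for a stable analysis plugin.
// The requires clauses are assumptions; check the module names of the
// plugin API and Lucene JARs on your module path.
module org.example.helloworld {
    requires org.elasticsearch.plugin;
    requires org.elasticsearch.plugin.analysis;
    requires org.apache.lucene.analysis.common;
}
----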

+ 222 - 0
docs/plugins/development/example-text-analysis-plugin.asciidoc

@@ -0,0 +1,222 @@
+[[example-text-analysis-plugin]]
+==== Example text analysis plugin
+
+This example shows how to create a simple "Hello world" text analysis plugin
+using the stable plugin API. The plugin provides a custom Lucene token filter
+that strips all tokens except for "hello" and "world". 
+
+Elastic provides a Gradle plugin, `elasticsearch.stable-esplugin`, that makes it
+easier to develop and package stable plugins. The steps in this guide assume you
+use this plugin. However, you don't need Gradle to create plugins.
+
+. Create a new directory for your project.
+. In this example, the source code is organized under the `main` and 
+`test` directories. In your project's home directory, create `src/`, `src/main/`,
+and `src/test/` directories.
+. Create the following `build.gradle` build script in your project's home 
+directory:
++
+[source,gradle]
+----
+ext.pluginApiVersion = '8.7.0-SNAPSHOT'
+ext.luceneVersion = '9.5.0-snapshot-d19c3e2e0ed'
+
+buildscript {
+  ext.pluginApiVersion = '8.7.0-SNAPSHOT'
+  repositories {
+    maven {
+      url = 'https://snapshots.elastic.co/maven/'
+    }
+    mavenCentral()
+  }
+  dependencies {
+    classpath "org.elasticsearch.gradle:build-tools:${pluginApiVersion}"
+  }
+}
+
+apply plugin: 'elasticsearch.stable-esplugin'
+apply plugin: 'elasticsearch.yaml-rest-test'
+
+esplugin {
+  name 'my-plugin'
+  description 'My analysis plugin'
+}
+
+group 'org.example'
+version '1.0-SNAPSHOT'
+
+repositories {
+  maven {
+    url = "https://s3.amazonaws.com/download.elasticsearch.org/lucenesnapshots/d19c3e2e0ed/"
+  }
+  maven {
+    url = 'https://snapshots.elastic.co/maven/'
+  }
+  mavenLocal()
+  mavenCentral()
+}
+
+dependencies {
+
+  // Compile-time only dependencies: Elasticsearch provides these at runtime
+  compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
+  compileOnly "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
+  compileOnly "org.apache.lucene:lucene-analysis-common:${luceneVersion}"
+
+  // The same APIs must also be declared as test dependencies for unit tests
+  testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-api:${pluginApiVersion}"
+  testImplementation "org.elasticsearch.plugin:elasticsearch-plugin-analysis-api:${pluginApiVersion}"
+  testImplementation "org.apache.lucene:lucene-analysis-common:${luceneVersion}"
+
+  testImplementation ('junit:junit:4.13.2'){
+    exclude group: 'org.hamcrest'
+  }
+  testImplementation 'org.mockito:mockito-core:4.4.0'
+  testImplementation 'org.hamcrest:hamcrest:2.2'
+
+}
+----
+. In `src/main/java/org/example/`, create `HelloWorldTokenFilter.java`. This
+file provides the code for a token filter that strips all tokens except for 
+"hello" and "world":
++
+[source,java]
+----
+package org.example;
+
+import org.apache.lucene.analysis.FilteringTokenFilter;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
+
+import java.util.Arrays;
+
+public class HelloWorldTokenFilter extends FilteringTokenFilter {
+    private final CharTermAttribute term = addAttribute(CharTermAttribute.class);
+
+    public HelloWorldTokenFilter(TokenStream input) {
+        super(input);
+    }
+
+    @Override
+    public boolean accept() {
+        if (term.length() != 5) return false;
+        return Arrays.equals(term.buffer(), 0, 5, "hello".toCharArray(), 0, 5)
+                || Arrays.equals(term.buffer(), 0, 5, "world".toCharArray(), 0, 5);
+    }
+}
+----
+. This filter can be provided to Elasticsearch using the following
+`HelloWorldTokenFilterFactory.java` factory class. The `@NamedComponent`
+annotation is used to give the filter the `hello_world` name. This is the name
+you can use to refer to the filter, once the plugin has been deployed.
++
+[source,java]
+----
+package org.example;
+
+import org.apache.lucene.analysis.TokenStream;
+import org.elasticsearch.plugin.analysis.TokenFilterFactory;
+import org.elasticsearch.plugin.NamedComponent;
+
+@NamedComponent(value = "hello_world")
+public class HelloWorldTokenFilterFactory implements TokenFilterFactory {
+
+    @Override
+    public TokenStream create(TokenStream tokenStream) {
+        return new HelloWorldTokenFilter(tokenStream);
+    }
+
+}
+----
+. Unit tests may go under the `src/test` directory. You will have to add
+dependencies for your preferred testing framework (a sample test is sketched
+below).
+
+. Run:
++
+[source,sh]
+----
+gradle bundlePlugin
+----
+This builds the JAR file, generates the metadata files, and bundles them into a 
+plugin ZIP file. The resulting ZIP file will be written to the 
+`build/distributions` directory.
+. <<plugin-management,Install the plugin>>.
+. You can use the `_analyze` API to verify that the `hello_world` token filter 
+works as expected:
++
+[source,console]
+----
+GET /_analyze
+{
+  "text": "hello to everyone except the world",
+  "tokenizer": "standard",
+  "filter":  ["hello_world"]
+}
+----
+// TEST[skip:would require this plugin to be installed]
+
+[discrete]
+===== YAML REST tests
+
+If you are using the `elasticsearch.stable-esplugin` plugin for Gradle, you can
+use {es}'s YAML REST test framework. This framework allows you to load your
+plugin in a running test cluster and issue real REST API queries against it. The
+full syntax for this framework is beyond the scope of this tutorial, but there
+are many examples in the Elasticsearch repository. Refer to the
+{es-repo}tree/main/plugins/examples/stable-analysis[example analysis plugin] in
+the {es} GitHub repository for an example.
+
+. Create a `yamlRestTest` directory in the `src` directory.
+. Under the `yamlRestTest` directory, create a `java` folder for Java sources
+and a `resources` folder.
+. In `src/yamlRestTest/java/org/example/`, create 
+`HelloWorldPluginClientYamlTestSuiteIT.java`. This class implements 
+`ESClientYamlSuiteTestCase`.
++
+[source,java]
+----
+import com.carrotsearch.randomizedtesting.annotations.Name;
+import com.carrotsearch.randomizedtesting.annotations.ParametersFactory;
+import org.elasticsearch.test.rest.yaml.ClientYamlTestCandidate;
+import org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase;
+
+public class HelloWorldPluginClientYamlTestSuiteIT extends ESClientYamlSuiteTestCase {
+
+    public HelloWorldPluginClientYamlTestSuiteIT(
+            @Name("yaml") ClientYamlTestCandidate testCandidate
+    ) {
+        super(testCandidate);
+    }
+
+    @ParametersFactory
+    public static Iterable<Object[]> parameters() throws Exception {
+        return ESClientYamlSuiteTestCase.createParameters();
+    }
+}
+----
+. In `src/yamlRestTest/resources/rest-api-spec/test/plugin`, create the 
+`10_token_filter.yml` YAML file:
++
+[source,yaml]
+----
+## Sample rest test
+---
+"Hello world plugin test - removes all tokens except hello and world":
+  - do:
+      indices.analyze:
+        body:
+          text: hello to everyone except the world
+          tokenizer: standard
+          filter:
+            - type: "hello_world"
+  - length: { tokens: 2 }
+  - match:  { tokens.0.token: "hello" }
+  - match:  { tokens.1.token: "world" }
+
+----
+. Run the test with:
++
+[source,sh]
+----
+gradle yamlRestTest
+----
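
For the unit testing step above, a minimal JUnit 4 test for `HelloWorldTokenFilter` could look like the sketch below. It uses only the `junit`, `hamcrest`, and `lucene-analysis-common` test dependencies already declared in the example `build.gradle`:

[source,java]
----
package org.example;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.junit.Test;

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.contains;

public class HelloWorldTokenFilterTests {

    @Test
    public void filterKeepsOnlyHelloAndWorld() throws Exception {
        Tokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader("hello to everyone except the world"));

        // Collect the tokens that survive the filter.
        List<String> tokens = new ArrayList<>();
        try (TokenStream stream = new HelloWorldTokenFilter(tokenizer)) {
            CharTermAttribute term = stream.getAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                tokens.add(term.toString());
            }
            stream.end();
        }

        assertThat(tokens, contains("hello", "world"));
    }
}
----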

+ 58 - 0
docs/plugins/development/plugin-descriptor-file.asciidoc

@@ -0,0 +1,58 @@
+ifeval::["{plugin-type}" ==  "stable"]
+The stable plugin descriptor file is a Java properties file called 
+`stable-plugin-descriptor.properties`
+endif::[]
+ifeval::["{plugin-type}" == "classic"]
+The classic plugin descriptor file is a Java properties file called 
+`plugin-descriptor.properties`
+endif::[]
+that describes the plugin. The file is automatically created if you are 
+using {es}'s Gradle build system. If you're not using the Gradle plugin, you 
+can create it manually using the following template.
+
+:plugin-properties-files: {elasticsearch-root}/build-tools/src/main/resources
+
+[source,properties]
+----
+include::{plugin-properties-files}/plugin-descriptor.properties[]
+----
+
+[discrete]
+==== Properties
+
+
+[cols="<,<,<",options="header",]
+|=======================================================================
+|Element                    | Type   | Description
+
+|`description`              |String  | simple summary of the plugin
+
+|`version`                  |String  | plugin's version
+
+|`name`                     |String  | the plugin name
+
+ifeval::["{plugin-type}" ==  "stable"]
+|`classname`                |String  | this property is for classic plugins. Do
+not include this property for stable plugins.
+endif::[]
+
+ifeval::["{plugin-type}" == "classic"]
+|`classname`                |String  | the name of the class to load, 
+fully-qualified.
+
+|`extended.plugins`         |String  | other plugins this plugin extends through
+SPI.
+
+|`modulename`               |String  | the name of the module to load the
+classname from. Optional; applies only to "isolated" plugins. Specifying it
+causes the plugin to be loaded as a module.
+endif::[]
+
+|`java.version`             |String  | version of java the code is built against.
+Use the system property `java.specification.version`. Version string must be a
+sequence of nonnegative decimal integers separated by "."'s and may have leading
+zeros.
+
+|`elasticsearch.version`    |String  | version of {es} compiled against.
+
+|=======================================================================
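
The `java.version` row above says to use the system property `java.specification.version`. A quick way to print the value for the JDK you build with:

[source,java]
----
public class PrintJavaSpecVersion {
    public static void main(String[] args) {
        // Prints the running JDK's specification version, e.g. "17".
        // Use this value for java.version in the plugin descriptor.
        System.out.println(System.getProperty("java.specification.version"));
    }
}
----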

+ 3 - 3
docs/plugins/discovery-azure-classic.asciidoc

@@ -1,5 +1,5 @@
 [[discovery-azure-classic]]
-=== Azure Classic Discovery Plugin
+=== Azure Classic discovery plugin
 
 The Azure Classic Discovery plugin uses the Azure Classic API to identify the
 addresses of seed hosts.
@@ -11,7 +11,7 @@ include::install_remove.asciidoc[]
 
 
 [[discovery-azure-classic-usage]]
-==== Azure Virtual Machine Discovery
+==== Azure Virtual Machine discovery
 
 Azure VM discovery allows to use the Azure APIs to perform automatic discovery.
 Here is a simple sample configuration:
@@ -376,7 +376,7 @@ sudo systemctl start elasticsearch
 If anything goes wrong, check your logs in `/var/log/elasticsearch`.
 
 [[discovery-azure-classic-scale]]
-==== Scaling Out!
+==== Scaling out!
 
 You need first to create an image of your previous machine.
 Disconnect from your machine and run locally the following commands:

+ 1 - 1
docs/plugins/discovery-ec2.asciidoc

@@ -1,5 +1,5 @@
 [[discovery-ec2]]
-=== EC2 Discovery Plugin
+=== EC2 Discovery plugin
 
 The EC2 discovery plugin provides a list of seed addresses to the
 {ref}/modules-discovery-hosts-providers.html[discovery process] by querying the

+ 2 - 2
docs/plugins/discovery-gce.asciidoc

@@ -1,5 +1,5 @@
 [[discovery-gce]]
-=== GCE Discovery Plugin
+=== GCE Discovery plugin
 
 The Google Compute Engine Discovery plugin uses the GCE API to identify the
 addresses of seed hosts.
@@ -8,7 +8,7 @@ addresses of seed hosts.
 include::install_remove.asciidoc[]
 
 [[discovery-gce-usage]]
-==== GCE Virtual Machine Discovery
+==== GCE Virtual Machine discovery
 
 Google Compute Engine VM discovery allows to use the google APIs to perform
 automatic discovery of seed hosts. Here is a simple sample configuration:

+ 1 - 1
docs/plugins/discovery.asciidoc

@@ -1,5 +1,5 @@
 [[discovery]]
-== Discovery Plugins
+== Discovery plugins
 
 Discovery plugins extend Elasticsearch by adding new seed hosts providers that
 can be used to extend the {ref}/modules-discovery.html[cluster formation

+ 1 - 4
docs/plugins/index.asciidoc

@@ -31,10 +31,7 @@ the Elasticsearch project. They are provided by individual developers or private
 companies and have their own licenses as well as their own versioning system.
 Issues and bug reports can usually be reported on the community plugin's web site.
 
-For advice on writing your own plugin, see <<plugin-authors>>.
-
-IMPORTANT: Site plugins -- plugins containing HTML, CSS and JavaScript -- are
-no longer supported.
+For advice on writing your own plugin, refer to <<plugin-authors>>.
 
 include::plugin-script.asciidoc[]
 

+ 0 - 7
docs/plugins/integrations.asciidoc

@@ -22,13 +22,6 @@ Integrations are not plugins, but are external tools or modules that make it eas
 * https://extensions.xwiki.org/xwiki/bin/view/Extension/Elastic+Search+Macro/[XWiki Next Generation Wiki]:
   XWiki has an Elasticsearch and Kibana macro allowing to run Elasticsearch queries and display the results in XWiki pages using XWiki's scripting language as well as include Kibana Widgets in XWiki pages
 
-[discrete]
-[[data-integrations]]
-=== Data import/export and validation
-
-NOTE: Rivers were used to import data from external systems into Elasticsearch prior to the 2.0 release. Elasticsearch
-releases 2.0 and later do not support rivers.
-
 [discrete]
 ==== Supported by Elastic:
 

+ 1 - 1
docs/plugins/mapper-annotated-text.asciidoc

@@ -1,5 +1,5 @@
 [[mapper-annotated-text]]
-=== Mapper Annotated Text Plugin
+=== Mapper annotated text plugin
 
 experimental[]
 

+ 1 - 1
docs/plugins/mapper-murmur3.asciidoc

@@ -1,5 +1,5 @@
 [[mapper-murmur3]]
-=== Mapper Murmur3 Plugin
+=== Mapper murmur3 plugin
 
 The mapper-murmur3 plugin provides the ability to compute hash of field values
 at index-time and store them in the index. This can sometimes be helpful when

+ 1 - 1
docs/plugins/mapper-size.asciidoc

@@ -1,5 +1,5 @@
 [[mapper-size]]
-=== Mapper Size Plugin
+=== Mapper size plugin
 
 The mapper-size plugin provides the `_size` metadata field which, when enabled,
 indexes the size in bytes of the original

+ 1 - 1
docs/plugins/mapper.asciidoc

@@ -1,5 +1,5 @@
 [[mapper]]
-== Mapper Plugins
+== Mapper plugins
 
 Mapper plugins allow new field data types to be added to Elasticsearch.
 

+ 17 - 7
docs/plugins/plugin-script.asciidoc

@@ -1,5 +1,14 @@
 [[plugin-management]]
-== Plugin Management
+== Plugin management
+
+[discrete]
+=== Managing plugins on {ess}
+
+Refer to the {cloud}/ec-adding-plugins.html[{ess} documentation] for information
+about managing plugins on {ecloud}.
+
+[discrete]
+=== Managing plugins for self-managed deployments
 
 Use the `elasticsearch-plugin` command line tool to install, list, and remove plugins. It is
 located in the `$ES_HOME/bin` directory by default but it may be in a
@@ -34,7 +43,7 @@ If you run {es} using Docker, you can manage plugins using a
 <<manage-plugins-using-configuration-file,configuration file>>.
 
 [[installation]]
-=== Installing Plugins
+=== Installing plugins
 
 The documentation for each plugin usually includes specific installation
 instructions for that plugin, but below we document the various available
@@ -139,7 +148,7 @@ that all the plugins will be installed, or none of the plugins will be installed
 if any installation fails.
 
 [[mandatory-plugins]]
-=== Mandatory Plugins
+=== Mandatory plugins
 
 If you rely on some plugins, you can define mandatory plugins by adding
 `plugin.mandatory` setting to the `config/elasticsearch.yml` file, for
@@ -153,7 +162,7 @@ plugin.mandatory: analysis-icu,lang-js
 For safety reasons, a node will not start if it is missing a mandatory plugin.
 
 [[listing-removing-updating]]
-=== Listing, Removing and Updating Installed Plugins
+=== Listing, removing and updating installed plugins
 
 [discrete]
 === Listing plugins
@@ -202,8 +211,9 @@ sudo bin/elasticsearch-plugin remove [pluginname] [pluginname] ... [pluginname]
 [discrete]
 === Updating plugins
 
-Plugins are built for a specific version of Elasticsearch, and therefore must be reinstalled
-each time Elasticsearch is updated.
+Except for text analysis plugins that are created using the
+<<creating-stable-plugins,stable plugin API>>, plugins are built for a specific
+version of {es}, and must be reinstalled each time {es} is updated.
 
 [source,shell]
 -----------------------------------
@@ -216,7 +226,7 @@ sudo bin/elasticsearch-plugin install [pluginname]
 The `plugin` scripts supports a number of other command line parameters:
 
 [discrete]
-=== Silent/Verbose mode
+=== Silent/verbose mode
 
 The `--verbose` parameter outputs more debug information, while the `--silent`
 parameter turns off all output including the progress bar. The script may

+ 7 - 7
docs/plugins/repository-hdfs.asciidoc

@@ -1,5 +1,5 @@
 [[repository-hdfs]]
-=== Hadoop HDFS Repository Plugin
+=== Hadoop HDFS repository plugin
 
 The HDFS repository plugin adds support for using HDFS File System as a repository for
 {ref}/modules-snapshots.html[Snapshot/Restore].
@@ -20,7 +20,7 @@ Using Apache Hadoop on Windows is problematic and thus it is not recommended. Fo
 plugin folder and point `HADOOP_HOME` variable to it; this should minimize the amount of permissions Hadoop requires (though one would still have to add some more).
 
 [[repository-hdfs-config]]
-==== Configuration Properties
+==== Configuration properties
 
 Once installed, define the configuration for the `hdfs` repository through the
 {ref}/modules-snapshots.html[REST API]:
@@ -79,7 +79,7 @@ include::repository-shared-settings.asciidoc[]
 
 [[repository-hdfs-availability]]
 [discrete]
-===== A Note on HDFS Availability
+===== A note on HDFS availability
 When you initialize a repository, its settings are persisted in the cluster state. When a node comes online, it will
 attempt to initialize all repositories for which it has settings. If your cluster has an HDFS repository configured, then
 all nodes in the cluster must be able to reach HDFS when starting. If not, then the node will fail to initialize the
@@ -87,9 +87,9 @@ repository at start up and the repository will be unusable. If this happens, you
 repository or restart the offending node.
 
 [[repository-hdfs-security]]
-==== Hadoop Security
+==== Hadoop security
 
-The HDFS Repository Plugin integrates seamlessly with Hadoop's authentication model. The following authentication
+The HDFS repository plugin integrates seamlessly with Hadoop's authentication model. The following authentication
 methods are supported by the plugin:
 
 [horizontal]
@@ -107,7 +107,7 @@ methods are supported by the plugin:
 
 [[repository-hdfs-security-keytabs]]
 [discrete]
-===== Principals and Keytabs
+===== Principals and keytabs
 Before attempting to connect to a secured HDFS cluster, provision the Kerberos principals and keytabs that the
 Elasticsearch nodes will use for authenticating to Kerberos. For maximum security and to avoid tripping up the Kerberos
 replay protection, you should create a service principal per node, following the pattern of
@@ -138,7 +138,7 @@ host!
 // Setup at runtime (principal name)
 [[repository-hdfs-security-runtime]]
 [discrete]
-===== Creating the Secure Repository
+===== Creating the secure repository
 Once your keytab files are in place and your cluster is started, creating a secured HDFS repository is simple. Just
 add the name of the principal that you will be authenticating as in the repository settings under the
 `security.principal` option:

+ 1 - 1
docs/plugins/repository.asciidoc

@@ -1,5 +1,5 @@
 [[repository]]
-== Snapshot/Restore Repository Plugins
+== Snapshot/restore repository plugins
 
 Repository plugins extend the {ref}/modules-snapshots.html[Snapshot/Restore]
 functionality in Elasticsearch by adding repositories backed by the cloud or

+ 1 - 1
docs/plugins/store-smb.asciidoc

@@ -1,5 +1,5 @@
 [[store-smb]]
-=== Store SMB Plugin
+=== Store SMB plugin
 
 The Store SMB plugin works around for a bug in Windows SMB and Java on windows.
 

+ 1 - 1
docs/plugins/store.asciidoc

@@ -1,5 +1,5 @@
 [[store]]
-== Store Plugins
+== Store plugins
 
 Store plugins offer alternatives to default Lucene stores.