
[[update-connector-pipeline-api]]
=== Update connector pipeline API
++++
<titleabbrev>Update connector pipeline</titleabbrev>
++++

preview::[]

Updates the `pipeline` configuration of a connector.

When you create a new connector, the configuration of an <<ingest-pipeline-search-details-generic-reference, ingest pipeline>> is populated with default settings.
[[update-connector-pipeline-api-request]]
==== {api-request-title}

`PUT _connector/<connector_id>/_pipeline`
[[update-connector-pipeline-api-prereq]]
==== {api-prereq-title}

* To sync data using self-managed connectors, you need to deploy the {enterprise-search-ref}/build-connector.html[Elastic connector service] on your own infrastructure. This service runs automatically on Elastic Cloud for native connectors.
* The `connector_id` parameter should reference an existing connector.
[[update-connector-pipeline-api-path-params]]
==== {api-path-parms-title}

`<connector_id>`::
(Required, string)
[role="child_attributes"]
[[update-connector-pipeline-api-request-body]]
==== {api-request-body-title}

`pipeline`::
(Required, object) The pipeline configuration of the connector. The pipeline determines how data is processed during ingestion into Elasticsearch.
Pipeline configuration must include the following attributes:
+
- `extract_binary_content` (Required, boolean) A flag indicating whether to extract binary content during ingestion.
- `name` (Required, string) The name of the ingest pipeline.
- `reduce_whitespace` (Required, boolean) A flag indicating whether to reduce extra whitespace in the ingested content.
- `run_ml_inference` (Required, boolean) A flag indicating whether to run machine learning inference on the ingested content.
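Since all four attributes are required, it can be convenient to validate them client-side before issuing the request. The following is a minimal Python sketch; the `build_pipeline_body` helper is hypothetical, not part of any Elastic client library:

```python
import json

# Required pipeline attributes and their expected JSON types,
# per the request body documentation above.
REQUIRED_FIELDS = {
    "extract_binary_content": bool,
    "name": str,
    "reduce_whitespace": bool,
    "run_ml_inference": bool,
}

def build_pipeline_body(**attrs):
    """Validate the pipeline attributes and return the request body
    for PUT _connector/<connector_id>/_pipeline."""
    for field, expected in REQUIRED_FIELDS.items():
        if field not in attrs:
            raise ValueError(f"missing required pipeline attribute: {field}")
        if not isinstance(attrs[field], expected):
            raise TypeError(f"{field} must be of type {expected.__name__}")
    return {"pipeline": attrs}

body = build_pipeline_body(
    extract_binary_content=True,
    name="my-connector-pipeline",
    reduce_whitespace=True,
    run_ml_inference=True,
)
print(json.dumps(body, indent=2))
```

The serialized `body` is what the example request below sends as the payload.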
[[update-connector-pipeline-api-response-codes]]
==== {api-response-codes-title}

`200`::
Connector `pipeline` field was successfully updated.

`400`::
The `connector_id` was not provided or the request payload was malformed.

`404` (Missing resources)::
No connector matching `connector_id` could be found.
[[update-connector-pipeline-api-example]]
==== {api-examples-title}

The following example updates the `pipeline` property for the connector with ID `my-connector`:
////
[source, console]
--------------------------------------------------
PUT _connector/my-connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
--------------------------------------------------
// TESTSETUP

[source,console]
--------------------------------------------------
DELETE _connector/my-connector
--------------------------------------------------
// TEARDOWN
////
[source,console]
----
PUT _connector/my-connector/_pipeline
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
----
[source,console-result]
----
{
    "result": "updated"
}
----