[role="xpack"]
[[transform-limitations]]
= {transform-cap} limitations
[subs="attributes"]
++++
<titleabbrev>Limitations</titleabbrev>
++++

The following limitations and known problems apply to the {version} release of
the Elastic {transform} feature. The limitations are grouped into the following
categories:

* <<transform-config-limitations>> apply to the configuration process of the
{transforms}.
* <<transform-operational-limitations>> affect the behavior of the {transforms}
that are running.
* <<transform-ui-limitations>> only apply to {transforms} managed via the user
interface.

[discrete]
[[transform-config-limitations]]
== Configuration limitations

[discrete]
[[transforms-underscore-limitation]]
=== Field names prefixed with underscores are omitted from latest {transforms}

If you use the `latest` type of {transform} and the source index has field names
that start with an underscore (_) character, they are assumed to be internal
fields. Those fields are omitted from the documents in the destination index.

[discrete]
[[transforms-ccs-limitation]]
=== {transforms-cap} support {ccs} if the remote cluster is configured properly

If you use <<modules-cross-cluster-search,{ccs}>>, the remote cluster must
support the search and aggregations you use in your {transforms}.
{transforms-cap} validate their configuration; if you use {ccs} and the
validation fails, make sure that the remote cluster supports the query and
aggregations you use.
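
For illustration only, a remote index is referenced in the {transform} source
with the `<cluster-alias>:<index>` syntax. The cluster alias, index, and field
names below are hypothetical; a minimal sketch of a pivot {transform} reading
from a remote cluster could look like this:

[source,console]
----
PUT _transform/example-remote-transform
{
  "source": {
    "index": "remote_cluster:example-orders"
  },
  "dest": {
    "index": "example-orders-summary"
  },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "total_quantity": { "sum": { "field": "quantity" } }
    }
  }
}
----

If validation fails at creation time, verify that the remote cluster version
supports every query and aggregation in the configuration.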

[discrete]
[[transform-painless-limitation]]
=== Using scripts in {transforms}

{transforms-cap} support scripting in every case where aggregations support it.
However, there are certain factors you might want to consider when using scripts
in {transforms}:

* {transforms-cap} cannot deduce index mappings for output fields when the
fields are created by a script. In this case, you might want to create the
mappings of the destination index yourself prior to creating the {transform}, as
shown in the sketch after this list.
* Scripted fields may increase the runtime of the {transform}.
* {transforms-cap} cannot optimize queries when you use scripts for all the
groupings defined in `group_by`. You will receive a warning message when you
use scripts this way.
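
As a minimal sketch, assuming a script produces a numeric output field named
`error_rate` (a hypothetical field and index name), you could create the
destination index with an explicit mapping before creating the {transform}:

[source,console]
----
PUT example-transform-dest
{
  "mappings": {
    "properties": {
      "error_rate": { "type": "double" }
    }
  }
}
----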

[discrete]
[[transform-painless-warning-limitation]]
=== Deprecation warnings for Painless scripts in {transforms}

If a {transform} contains Painless scripts that use deprecated syntax,
deprecation warnings are displayed when the {transform} is previewed or started.
However, it is not possible to check for deprecation warnings across all
{transforms} as a bulk action because running the required queries might be a
resource-intensive process. Therefore, any deprecation warnings due to deprecated
Painless syntax are not available in the Upgrade Assistant.

[discrete]
[[transform-runtime-field-limitation]]
=== {transforms-cap} perform better on indexed fields

{transforms-cap} sort data by a user-defined time field, which is frequently
accessed. If the time field is a {ref}/runtime.html[runtime field], the
performance impact of calculating field values at query time can significantly
slow the {transform}. Use an indexed field as a time field when using
{transforms}.

[discrete]
[[transform-scheduling-limitations]]
=== {ctransform-cap} scheduling limitations

A {ctransform} periodically checks for changes to source data. The functionality
of the scheduler is currently limited to a basic periodic timer whose `frequency`
can range from 1s to 1h. The default is 1m. This is designed to run little and
often. When choosing a `frequency` for this timer, consider your ingest rate
along with the impact that the {transform} search/index operations have on other
users in your cluster. Also note that retries occur at the `frequency` interval.
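
As a sketch, `frequency` is a top-level property of the {transform}
configuration. Assuming an existing {transform} named `example-transform`
(a hypothetical name), it can be changed with the update API:

[source,console]
----
POST _transform/example-transform/_update
{
  "frequency": "5m"
}
----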

[discrete]
[[transform-operational-limitations]]
== Operational limitations

[discrete]
[[transform-aggresponse-limitations]]
=== Aggregation responses may be incompatible with destination index mappings

When a pivot {transform} is first started, it deduces the mappings required for
the destination index. This process is based on the field types of the source
index and the aggregations used. If the fields are derived from
<<search-aggregations-metrics-scripted-metric-aggregation,`scripted_metrics`>>
or <<search-aggregations-pipeline-bucket-script-aggregation,`bucket_scripts`>>,
<<dynamic-mapping,dynamic mappings>> are used. In some instances the deduced
mappings may be incompatible with the actual data. For example, numeric
overflows might occur or dynamically mapped fields might contain both numbers
and strings. Check the {es} logs if you think this may have occurred.

You can view the deduced mappings by using the
<<preview-transform,preview transform API>>. See the `generated_dest_index`
object in the API response.

If required, you may define custom mappings prior to starting the {transform} by
creating a custom destination index using the
<<indices-create-index,create index API>>. As deduced mappings cannot be
overwritten by an index template, use the create index API to define custom
mappings. Index templates only apply to fields derived from scripts that use
dynamic mappings.
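
As a sketch, assuming a {transform} named `example-transform` that has been
created but not yet started (the name is hypothetical), recent versions let you
preview it and inspect the deduced mappings under `generated_dest_index`:

[source,console]
----
GET _transform/example-transform/_preview
----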

[discrete]
[[transform-batch-limitations]]
=== Batch {transforms} may not account for changed documents

A batch {transform} uses a
<<search-aggregations-bucket-composite-aggregation,composite aggregation>>
which allows efficient pagination through all buckets. Composite aggregations
do not yet support a search context. Therefore, if the source data is changed
(deleted, updated, added) while the batch {transform} is in progress, the
results may not include these changes.

[discrete]
[[transform-consistency-limitations]]
=== {ctransform-cap} consistency does not account for deleted or updated documents

While the process for {transforms} allows the continual recalculation of the
{transform} as new data is being ingested, it does also have some limitations.

Changed entities are only identified if their time field has also been updated
and falls within the range of the action to check for changes. This has been
designed in principle for, and is suited to, the use case where new data is
given a timestamp at the time of ingest.

If the indices that fall within the scope of the source index pattern are
removed, for example when deleting historical time-based indices, then the
composite aggregation performed in consecutive checkpoint processing will search
over different source data, and entities that only existed in the deleted index
will not be removed from the {transform} destination index.

Depending on your use case, you may wish to recreate the {transform} entirely
after deletions. Alternatively, if your use case is tolerant to historical
archiving, you may wish to include a max ingest timestamp in your aggregation.
This allows you to exclude results that have not been recently updated when
viewing the destination index.
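
As a sketch, assuming an ingest timestamp field named `event.ingested`
(hypothetical, as are the index and field names below), a pivot can include a
`max` aggregation on that field so that stale entities are easy to filter out
when querying the destination index:

[source,console]
----
POST _transform/_preview
{
  "source": { "index": "example-events" },
  "pivot": {
    "group_by": {
      "user_id": { "terms": { "field": "user_id" } }
    },
    "aggregations": {
      "last_ingest": { "max": { "field": "event.ingested" } }
    }
  }
}
----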

[discrete]
[[transform-deletion-limitations]]
=== Deleting a {transform} does not delete the destination index or {kib} index pattern

When deleting a {transform} using `DELETE _transform/index`, neither the
destination index nor the {kib} index pattern, should one have been created, is
deleted. These objects must be deleted separately.
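
As a sketch, assuming a {transform} named `example-transform` that writes to a
destination index named `example-dest` (both hypothetical), the cleanup requires
separate requests; any associated {kib} index pattern is removed through {kib}:

[source,console]
----
DELETE _transform/example-transform

DELETE /example-dest
----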

[discrete]
[[transform-aggregation-page-limitations]]
=== Handling dynamic adjustment of aggregation page size

During the development of {transforms}, control was favoured over performance.
In the design considerations, it is preferred for the {transform} to take longer
to complete quietly in the background rather than to finish quickly and take
precedence in resource consumption.

Composite aggregations are well suited for high cardinality data and enable
pagination through the results. If a <<circuit-breaker,circuit breaker>> memory
exception occurs when performing the composite aggregated search, the search is
retried with a reduced number of buckets requested. This circuit breaker is
calculated based upon all activity within the cluster, not just activity from
{transforms}, so it may only be a temporary resource availability issue.

For a batch {transform}, the number of buckets requested is only ever adjusted
downwards. Lowering this value may result in a longer duration for the
{transform} checkpoint to complete. For {ctransforms}, the number of buckets
requested is reset back to its default at the start of every checkpoint and it
is possible for circuit breaker exceptions to occur repeatedly in the {es} logs.

The {transform} retrieves data in batches, which means it calculates several
buckets at once. By default, this is 500 buckets per search/index operation. The
default can be changed using `max_page_search_size`; the minimum value is 10.
If failures still occur once the number of buckets requested has been reduced to
its minimum, the {transform} is set to a failed state.
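
As a sketch, `max_page_search_size` lives under `settings` in the {transform}
configuration. Assuming an existing {transform} named `example-transform`
(a hypothetical name), it can be adjusted with the update API:

[source,console]
----
POST _transform/example-transform/_update
{
  "settings": {
    "max_page_search_size": 1000
  }
}
----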

[discrete]
[[transform-dynamic-adjustments-limitations]]
=== Handling dynamic adjustments for many terms

For each checkpoint, entities are identified that have changed since the last
time the check was performed. This list of changed entities is supplied as a
<<query-dsl-terms-query,terms query>> to the {transform} composite aggregation,
one page at a time. Then updates are applied to the destination index for each
page of entities.

The page `size` is defined by `max_page_search_size` which is also used to
define the number of buckets returned by the composite aggregation search. The
default value is 500, the minimum is 10.

The index setting <<dynamic-index-settings,`index.max_terms_count`>> defines
the maximum number of terms that can be used in a terms query. The default value
is 65536. If `max_page_search_size` exceeds `index.max_terms_count`, the
{transform} will fail.

Using smaller values for `max_page_search_size` may result in a longer duration
for the {transform} checkpoint to complete.
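
As a sketch, before raising `max_page_search_size` you can check the effective
limit on the source index (the index name below is hypothetical):

[source,console]
----
GET /example-source-index/_settings?include_defaults=true&filter_path=**.max_terms_count
----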

[discrete]
[[transform-failed-limitations]]
=== Handling of failed {transforms}

Failed {transforms} remain as a persistent task and should be handled
appropriately, either by deleting them or by resolving the root cause of the
failure and restarting them.

When using the API to delete a failed {transform}, first stop it using
`_stop?force=true`, then delete it.
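
As a sketch, for a failed {transform} named `example-transform` (a hypothetical
name):

[source,console]
----
POST _transform/example-transform/_stop?force=true

DELETE _transform/example-transform
----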

[discrete]
[[transform-availability-limitations]]
=== {ctransforms-cap} may give incorrect results if documents are not yet available to search

After a document is indexed, there is a very small delay until it is available
to search.

A {ctransform} periodically checks for changed entities between the time it last
checked and `now` minus `sync.time.delay`. This time window moves without
overlapping. If the timestamp of a recently indexed document falls within this
time window but the document is not yet available to search, then this entity
will not be updated.

If you use a `sync.time.field` that represents the data ingest time and a zero
second or very small `sync.time.delay`, it is more likely that this issue will
occur.
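
As a sketch, assuming an ingest timestamp field named `event.ingested` and an
existing {transform} named `example-transform` (both hypothetical), a more
conservative delay can be configured in the `sync` section:

[source,console]
----
POST _transform/example-transform/_update
{
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "60s"
    }
  }
}
----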

[discrete]
[[transform-date-nanos]]
=== Support for date nanoseconds data type

If your data uses the <<date_nanos,date nanosecond data type>>, aggregations
are nonetheless performed with millisecond resolution. This limitation also
affects the aggregations in your {transforms}.

[discrete]
[[transform-data-streams-destination]]
=== Data streams as destination indices are not supported

{transforms-cap} update data in the destination index, which requires writing
into the destination. <<data-streams>> are designed to be append-only, which
means you cannot send update or delete requests directly to a data stream. For
this reason, data streams are not supported as destination indices for
{transforms}.

[discrete]
[[transform-ilm-destination]]
=== ILM as destination index may cause duplicated documents

Using an <<index-lifecycle-management,ILM>>-managed index as a {transform}
destination index is not recommended. {transforms-cap} update documents in the
current destination index and cannot delete documents in the indices previously
used by ILM. This may lead to duplicated documents when you use {transforms}
combined with ILM in case of a rollover.

If you use ILM to have time-based indices, consider using the
<<date-index-name-processor>> instead. The processor works without duplicating
documents if your {transform} contains a `group_by` based on `date_histogram`.
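
As a minimal sketch, assuming documents carry a `@timestamp` field and monthly
destination indices are wanted (the pipeline name and index prefix below are
hypothetical), an ingest pipeline using the processor could look like this and
can then be referenced from the `dest.pipeline` setting of the {transform}:

[source,console]
----
PUT _ingest/pipeline/example-monthly-index
{
  "processors": [
    {
      "date_index_name": {
        "field": "@timestamp",
        "index_name_prefix": "example-dest-",
        "date_rounding": "M"
      }
    }
  ]
}
----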

[discrete]
[[transform-ui-limitations]]
== Limitations in {kib}

[discrete]
[[transform-space-limitations]]
=== {transforms-cap} are visible in all {kib} spaces

{kibana-ref}/xpack-spaces.html[Spaces] enable you to organize your source and
destination indices and other saved objects in {kib} and to see only the objects
that belong to your space. However, a {transform} is a long-running task which
is managed at the cluster level and is therefore not limited in scope to certain
spaces. Space awareness can be implemented for a {data-source} under
**Stack Management > Kibana**, which allows privileges to be granted on the
{transform} destination index.

[discrete]
[[transform-kibana-limitations]]
=== Up to 1,000 {transforms} are listed in {kib}

The {transforms} management page in {kib} lists up to 1,000 {transforms}.

[discrete]
[[transform-ui-support]]
=== {kib} might not support every {transform} configuration option

There might be configuration options available via the {transform} APIs that are
not supported in {kib}. For an exhaustive list of configuration options, refer
to the <<transform-apis,documentation>>.