[role="xpack"]
[[dataframe-limitations]]
== {transform-cap} limitations
[subs="attributes"]
++++
<titleabbrev>Limitations</titleabbrev>
++++

beta[]

The following limitations and known problems apply to the 7.4 release of
the Elastic {dataframe} feature:

[float]
[[df-compatibility-limitations]]
=== Beta {transforms} do not have guaranteed backwards or forwards compatibility

While {transforms} are beta, it is not guaranteed that a {transform} created in
a previous version of the {stack} will be able to start and operate in a future
version. Nor can support be provided for {transform} tasks operating in a
cluster with mixed node versions.

Please note that the output of a {transform} is persisted to a destination
index. This is a normal {es} index and is not affected by the beta status.

[float]
[[df-ui-limitation]]
=== {dataframe-cap} UI will not work during a rolling upgrade from 7.2

If your cluster contains mixed version nodes, for example during a rolling
upgrade from 7.2 to a newer version, and {transforms} have been created in 7.2,
the {dataframe} UI will not work. Please wait until all nodes have been
upgraded to the newer version before using the {dataframe} UI.

[float]
[[df-datatype-limitations]]
=== {dataframe-cap} data type limitation

{dataframes-cap} do not (yet) support fields containing arrays, either in the
UI or the API. If you try to create a {transform} on such fields, the UI will
fail to show the source index table.

[float]
[[df-ccs-limitations]]
=== {ccs-cap} is not supported

{ccs-cap} is not supported for {transforms}.

[float]
[[df-kibana-limitations]]
=== Up to 1,000 {transforms} are supported

A single cluster will support up to 1,000 {transforms}. When using the
{ref}/get-data-frame-transform.html[GET {transforms} API], a total `count` of
{transforms} is returned. Use the `size` and `from` parameters to enumerate
through the full list.

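For example, the following request pages through the configured {transforms}
in blocks of 100 (the parameter values are illustrative):

[source,console]
----
GET _data_frame/transforms?from=0&size=100
----
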
[float]
[[df-aggresponse-limitations]]
=== Aggregation responses may be incompatible with destination index mappings

When a {transform} is first started, it will deduce the mappings required for
the destination index. This process is based on the field types of the source
index and the aggregations used. If the fields are derived from
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[`scripted_metric`]
or {ref}/search-aggregations-pipeline-bucket-script-aggregation.html[`bucket_script`]
aggregations, {ref}/dynamic-mapping.html[dynamic mappings] will be used. In
some instances the deduced mappings may be incompatible with the actual data.
For example, numeric overflows might occur or dynamically mapped fields might
contain both numbers and strings. Please check {es} logs if you think this may
have occurred. As a workaround, you may define custom mappings prior to
starting the {transform}. For example,
{ref}/indices-create-index.html[create a custom destination index] or
{ref}/indices-templates.html[define an index template].

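For instance, the following sketch pre-creates a destination index with
explicit mappings before the {transform} is started; the index and field
names are hypothetical:

[source,console]
----
PUT my_transform_dest
{
  "mappings": {
    "properties": {
      "customer_id": { "type": "keyword" },
      "total_spend": { "type": "double" }
    }
  }
}
----
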
[float]
[[df-batch-limitations]]
=== Batch {transforms} may not account for changed documents

A batch {transform} uses a
{ref}/search-aggregations-bucket-composite-aggregation.html[composite aggregation]
which allows efficient pagination through all buckets. Composite aggregations
do not yet support a search context, therefore if the source data is changed
(deleted, updated, added) while the batch {dataframe} is in progress, then the
results may not include these changes.

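To illustrate, each page of a composite aggregation is requested with the
`after` key returned by the previous page, so every page is a separate search
against the index as it exists at that moment; the index and field names below
are hypothetical:

[source,console]
----
GET source_index/_search
{
  "size": 0,
  "aggs": {
    "by_customer": {
      "composite": {
        "size": 500,
        "sources": [
          { "customer_id": { "terms": { "field": "customer_id" } } }
        ],
        "after": { "customer_id": "customer_0042" }
      }
    }
  }
}
----
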
[float]
[[df-consistency-limitations]]
=== {cdataframe-cap} consistency does not account for deleted or updated documents

While the process for {transforms} allows the continual recalculation of the
{transform} as new data is being ingested, it does have some limitations.

Changed entities will only be identified if their time field has also been
updated and falls within the time range of the check for changes. This has been
designed in principle for, and is suited to, the use case where new data is
given a timestamp for the time of ingest.

If the indices that fall within the scope of the source index pattern are
removed, for example when deleting historical time-based indices, then the
composite aggregation performed in consecutive checkpoint processing will search
over different source data, and entities that only existed in the deleted index
will not be removed from the {dataframe} destination index.

Depending on your use case, you may wish to recreate the {transform} entirely
after deletions. Alternatively, if your use case is tolerant to historical
archiving, you may wish to include a max ingest timestamp in your aggregation.
This will allow you to exclude results that have not been recently updated when
viewing the {dataframe} destination index.

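For example, a `max` aggregation on an ingest timestamp field can be included
in the pivot configuration. The following is a minimal sketch; the transform
ID, index names, and field names are hypothetical:

[source,console]
----
PUT _data_frame/transforms/my_transform
{
  "source": { "index": "source_index" },
  "dest": { "index": "my_transform_dest" },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "last_updated": { "max": { "field": "event.ingested" } }
    }
  }
}
----

Results whose `last_updated` value is older than a chosen cutoff can then be
filtered out when querying the destination index.
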
[float]
[[df-deletion-limitations]]
=== Deleting a {transform} does not delete the {dataframe} destination index or {kib} index pattern

When deleting a {transform} using `DELETE _data_frame/transforms/<transform_id>`,
neither the {dataframe} destination index nor the {kib} index pattern, should
one have been created, is deleted. These objects must be deleted separately.

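For example, the destination index can be removed with the delete index API
(the index name is hypothetical); a {kib} index pattern must be removed
separately in {kib} Management:

[source,console]
----
DELETE my_transform_dest
----
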
[float]
[[df-aggregation-page-limitations]]
=== Handling dynamic adjustment of aggregation page size

During the development of {transforms}, control was favoured over performance.
In the design considerations, it is preferred for the {transform} to take
longer to complete quietly in the background rather than to finish quickly and
take precedence in resource consumption.

Composite aggregations are well suited for high cardinality data, enabling
pagination through results. If a {ref}/circuit-breaker.html[circuit breaker]
memory exception occurs when performing the composite aggregation search, then
the search is retried with a reduced number of requested buckets. This circuit
breaker is calculated based upon all activity within the cluster, not just
activity from {transforms}, so it may only be a temporary resource availability
issue.

For a batch {transform}, the number of buckets requested is only ever adjusted
downwards. Lowering this value may result in a longer duration for the
{transform} checkpoint to complete. For {cdataframes}, the number of buckets
requested is reset back to its default at the start of every checkpoint and it
is possible for circuit breaker exceptions to occur repeatedly in the {es}
logs.

The {transform} retrieves data in batches, which means it calculates several
buckets at once. By default, this is 500 buckets per search/index operation.
The default can be changed using `max_page_search_size` and the minimum value
is 10. If failures still occur once the number of buckets requested has been
reduced to its minimum, then the {transform} will be set to a failed state.

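For example, a smaller page size can be set in the `pivot` configuration when
creating the {transform}. This sketch assumes the 7.4 `_data_frame/transforms`
API; the transform ID, index names, and field names are hypothetical:

[source,console]
----
PUT _data_frame/transforms/my_transform
{
  "source": { "index": "source_index" },
  "dest": { "index": "my_transform_dest" },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "total_spend": { "sum": { "field": "price" } }
    },
    "max_page_search_size": 100
  }
}
----
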
[float]
[[df-dynamic-adjustments-limitations]]
=== Handling dynamic adjustments for many terms

For each checkpoint, entities are identified that have changed since the last
time the check was performed. This list of changed entities is supplied as a
{ref}/query-dsl-terms-query.html[terms query] to the {transform} composite
aggregation, one page at a time. Then updates are applied to the destination
index for each page of entities.

The page `size` is defined by `max_page_search_size`, which is also used to
define the number of buckets returned by the composite aggregation search. The
default value is 500 and the minimum is 10.

The index setting
{ref}/index-modules.html#dynamic-index-settings[`index.max_terms_count`] defines
the maximum number of terms that can be used in a terms query. The default value
is 65536. If `max_page_search_size` exceeds `index.max_terms_count`, the
{transform} will fail.

Using smaller values for `max_page_search_size` may result in a longer duration
for the {transform} checkpoint to complete.

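Because `index.max_terms_count` is a dynamic index setting, it can be raised
on the source indices if a larger `max_page_search_size` is required; the
index name and value here are illustrative:

[source,console]
----
PUT source_index/_settings
{
  "index.max_terms_count": 100000
}
----
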
[float]
[[df-scheduling-limitations]]
=== {cdataframe-cap} scheduling limitations

A {cdataframe} periodically checks for changes to source data. The functionality
of the scheduler is currently limited to a basic periodic timer which can be
within the `frequency` range from 1s to 1h. The default is 1m. This is designed
to run little and often. When choosing a `frequency` for this timer, consider
your ingest rate along with the impact that the {transform} search/index
operations have on other users in your cluster. Also note that retries occur at
the `frequency` interval.

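For example, a five minute check interval can be set with the `frequency`
property when creating the {transform}; this sketch uses hypothetical names:

[source,console]
----
PUT _data_frame/transforms/my_transform
{
  "source": { "index": "source_index" },
  "dest": { "index": "my_transform_dest" },
  "frequency": "5m",
  "pivot": {
    "group_by": { "customer_id": { "terms": { "field": "customer_id" } } },
    "aggregations": { "total_spend": { "sum": { "field": "price" } } }
  }
}
----
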
[float]
[[df-failed-limitations]]
=== Handling of failed {transforms}

Failed {transforms} remain as persistent tasks and should be handled
appropriately, either by deleting them or by resolving the root cause of the
failure and restarting them.

When using the API to delete a failed {transform}, first stop it using
`_stop?force=true`, then delete it.

If starting a failed {transform} after the root cause has been resolved, the
`_start?force=true` parameter must be specified.

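For example, to remove a failed {transform} via the API (the transform ID is
hypothetical):

[source,console]
----
POST _data_frame/transforms/my_transform/_stop?force=true
DELETE _data_frame/transforms/my_transform
----

Alternatively, once the root cause is resolved, restart it with
`POST _data_frame/transforms/my_transform/_start?force=true`.
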
[float]
[[df-availability-limitations]]
=== {cdataframes-cap} may give incorrect results if documents are not yet available to search

After a document is indexed, there is a very small delay until it is available
to search.

A {ctransform} periodically checks for changed entities between the time it
last checked and `now` minus `sync.time.delay`. This time window moves without
overlapping. If the timestamp of a recently indexed document falls within this
time window but the document is not yet available to search, then this entity
will not be updated.

If you use a `sync.time.field` that represents the data ingest time together
with a zero second or very small `sync.time.delay`, then it is more likely
that this issue will occur.

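A larger `sync.time.delay` reduces the chance of recently indexed documents
being missed. The following sketch configures a one minute delay; the
transform ID, index names, and field names are hypothetical:

[source,console]
----
PUT _data_frame/transforms/my_transform
{
  "source": { "index": "source_index" },
  "dest": { "index": "my_transform_dest" },
  "sync": {
    "time": {
      "field": "event.ingested",
      "delay": "60s"
    }
  },
  "pivot": {
    "group_by": { "customer_id": { "terms": { "field": "customer_id" } } },
    "aggregations": { "total_spend": { "sum": { "field": "price" } } }
  }
}
----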