[role="xpack"]
[[ml-delayed-data-detection]]
= Handling delayed data

Delayed data are documents that are indexed late. That is to say, it is data
related to a time that your {dfeed} has already processed and it is therefore
never analyzed by your {anomaly-job}.

When you create a {dfeed}, you can specify a
{ref}/ml-put-datafeed.html#ml-put-datafeed-request-body[`query_delay`] setting.
This setting enables the {dfeed} to wait for some time past real-time, which
means any "late" data in this period is fully indexed before the {dfeed} tries
to gather it. However, if the setting is set too low, the {dfeed} may query for
data before it has been indexed and consequently miss those documents. Conversely,
if it is set too high, analysis drifts farther away from real-time. The right
balance depends on your use case and the environmental factors of the cluster.

== Why worry about delayed data?

This is a particularly pertinent question. If data are delayed randomly (and
consequently are missing from analysis), the results of certain types of
functions are not really affected. In these situations, it all comes out okay in
the end as the delayed data are distributed randomly. An example would be a `mean`
metric for a field in a large collection of data. In this case, checking for
delayed data may not provide much benefit. If data are consistently delayed,
however, {anomaly-jobs} with a `low_count` function may provide false positives.
In this situation, it would be useful to see whether data comes in after an
anomaly is recorded so that you can determine the next course of action.

== How do we detect delayed data?

In addition to the `query_delay` field, there is a delayed data check config,
which enables you to configure the {dfeed} to look in the past for delayed data.
Every 15 minutes or every `check_window`, whichever is smaller, the {dfeed}
triggers a document search over the configured indices.
This search looks over a time span with a length of `check_window` ending with
the latest finalized bucket. That time span is partitioned into buckets, whose
length equals the bucket span of the associated {anomaly-job}. The `doc_count`
of those buckets is then compared with the job's finalized analysis buckets to
see whether any data has arrived since the analysis. If data is indeed missing
due to ingest delay, the end user is notified. For example, you can see
annotations in {kib} for the periods where these delays occur.

== What to do about delayed data?

The most common course of action is simply to do nothing. For many functions
and situations, ignoring the data is acceptable. However, if the amount of
delayed data is too great or the situation calls for it, the next course of
action to consider is increasing the `query_delay` of the {dfeed}. This
increased delay allows more time for data to be indexed. If you have real-time
constraints, however, an increased delay might not be desirable, in which case
you would have to {ref}/tune-for-indexing-speed.html[tune for better indexing speed].
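As an illustration, the delayed data check described above can be enabled when
you create the {dfeed}. The following is only a sketch: the {dfeed} ID, job ID,
index name, and the specific `query_delay` and `check_window` values are
hypothetical and must be adapted to your environment:

[source,console]
--------------------------------------------------
PUT _ml/datafeeds/datafeed-low-requests
{
  "job_id": "low-requests",
  "indices": ["server-metrics"],
  "query": { "match_all": {} },
  "query_delay": "90s",
  "delayed_data_check_config": {
    "enabled": true,
    "check_window": "2h"
  }
}
--------------------------------------------------

With this configuration, the {dfeed} waits 90 seconds past real-time before
querying, and the delayed data check searches the last two hours of finalized
buckets for documents that arrived after they were analyzed.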
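If you decide to increase the `query_delay`, you can change it with the update
{dfeed} API. This is a sketch with a hypothetical {dfeed} ID and value; note
that, depending on your version, the {dfeed} may need to be stopped before the
update is accepted:

[source,console]
--------------------------------------------------
POST _ml/datafeeds/datafeed-low-requests/_update
{
  "query_delay": "5m"
}
--------------------------------------------------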