
[DOCS] Updates custom rules example (#52731)

Lisa Cawley · 5 years ago
commit cd069a861c
1 changed file with 14 additions and 8 deletions:

docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc (+14, -8)

@@ -10,10 +10,11 @@ of following its default behavior. To specify the _when_ a rule uses
 a `scope` and `conditions`. You can think of `scope` as the categorical
 specification of a rule, while `conditions` are the numerical part.
 A rule can have a scope, one or more conditions, or a combination of
-scope and conditions.
-
-Let us see how those can be configured by examples.
+scope and conditions. For the full list of specification details, see the
+{ref}/ml-put-job.html#put-customrules[`custom_rules` object] in the create
+{anomaly-jobs} API.
 
+[[ml-custom-rules-scope]]
 ==== Specifying custom rule scope
 
 Let us assume we are configuring an {anomaly-job} in order to detect DNS data
@@ -29,7 +30,8 @@ to achieve this.
 First, we need to create a list of our safe domains. Those lists are called 
 _filters_ in {ml}. Filters can be shared across {anomaly-jobs}.
 
-We create our filter using the {ref}/ml-put-filter.html[put filter API]:
+You can create a filter in **Anomaly Detection > Settings > Filter Lists** in 
+{kib} or by using the {ref}/ml-put-filter.html[put filter API]:
 
 [source,console]
 ----------------------------------
@@ -42,7 +44,7 @@ PUT _ml/filters/safe_domains
 // TEST[skip:needs-licence]
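
For reference, a minimal put filter request of this kind takes the following shape; the domain names below are illustrative placeholders, not values from the original example:

[source,console]
----------------------------------
PUT _ml/filters/safe_domains
{
  "description": "A list of safe domains",
  "items": ["safe.com", "trusted.com"]
}
----------------------------------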
 
 Now, we can create our {anomaly-job} specifying a scope that uses the
-`safe_domains`  filter for the `highest_registered_domain` field:
+`safe_domains` filter for the `highest_registered_domain` field:
 
 [source,console]
 ----------------------------------
@@ -73,7 +75,8 @@ PUT _ml/anomaly_detectors/dns_exfiltration_with_rule
 // TEST[skip:needs-licence]
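
The scope in that job ties the rule to the filter; a sketch of the relevant `custom_rules` fragment of the detector, assuming a `skip_result` action, looks like this:

[source,js]
----------------------------------
"custom_rules": [
  {
    "actions": ["skip_result"],
    "scope": {
      "highest_registered_domain": {
        "filter_id": "safe_domains",
        "filter_type": "include"
      }
    }
  }
]
----------------------------------

With `filter_type` set to `include`, results are skipped only when the field value appears in the filter; `exclude` inverts that behavior.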
 
 As time advances and we see more data and more results, we might encounter new 
-domains that we want to add in the filter. We can do that by using the 
+domains that we want to add to the filter. We can do that in
+**Anomaly Detection > Settings > Filter Lists** in {kib} or by using the 
 {ref}/ml-update-filter.html[update filter API]:
 
 [source,console]
@@ -127,6 +130,7 @@ PUT _ml/anomaly_detectors/scoping_multiple_fields
 Such a detector will skip results when the values of all 3 scoped fields
 are included in the referenced filters.
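
Returning to the filter update mentioned above, a sketch of such a call could be as simple as the following; the added domain is a placeholder:

[source,console]
----------------------------------
POST _ml/filters/safe_domains/_update
{
  "add_items": ["another-safe-domain.com"]
}
----------------------------------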
 
+[[ml-custom-rules-conditions]]
 ==== Specifying custom rule conditions
 
 Imagine a detector that looks for anomalies in CPU utilization.
@@ -206,7 +210,8 @@ PUT _ml/anomaly_detectors/rule_with_range
 ----------------------------------
 // TEST[skip:needs-licence]
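
A condition pairs `applies_to` with an `operator` and a `value`; as a sketch, a rule that skips results when the actual CPU utilization is below a threshold (the threshold value is illustrative) could look like this:

[source,js]
----------------------------------
"custom_rules": [
  {
    "actions": ["skip_result"],
    "conditions": [
      {
        "applies_to": "actual",
        "operator": "lt",
        "value": 0.20
      }
    ]
  }
]
----------------------------------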
 
-==== Custom rules in the life-cycle of a job
+[[ml-custom-rules-lifecycle]]
+==== Custom rules in the lifecycle of a job
 
 Custom rules only affect results created after the rules were applied.
 Let us imagine that we have configured an {anomaly-job} and it has been running
@@ -214,8 +219,9 @@ for some time. After observing its results we decide that we can employ
 rules in order to get rid of some uninteresting results. We can use
 the {ref}/ml-update-job.html[update {anomaly-job} API] to do so. However, the
 rule we added will only be in effect for any results created from the moment we
-added  the rule onwards. Past results will remain unaffected.
+added the rule onwards. Past results will remain unaffected.
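
As a sketch, attaching such a rule to the first detector of an existing job through the update {anomaly-job} API could look like the following; the job name and threshold are placeholders:

[source,console]
----------------------------------
POST _ml/anomaly_detectors/cpu_utilization_job/_update
{
  "detectors": [
    {
      "detector_index": 0,
      "custom_rules": [
        {
          "actions": ["skip_result"],
          "conditions": [
            { "applies_to": "actual", "operator": "lt", "value": 0.20 }
          ]
        }
      ]
    }
  ]
}
----------------------------------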
 
+[[ml-custom-rules-filtering]]
 ==== Using custom rules vs. filtering data
 
 It might appear like using rules is just another way of filtering the data