
[DOCS] Expand ES|QL DISSECT and GROK documentation (#101225)

* Add 'Process data with DISSECT and GROK' page

* Expand DISSECT docs

* More DISSECT and GROK enhancements

* Improve examples

* Fix CSV tests

* Review feedback

* Reword
Abdon Pijpelink 1 year ago
parent
commit
284f81873f

+ 2 - 1
docs/reference/esql/esql-language.asciidoc

@@ -12,6 +12,7 @@ Detailed information about the {esql} language:
 * <<esql-functions>>
 * <<esql-multivalued-fields>>
 * <<esql-metadata-fields>>
+* <<esql-process-data-with-dissect-and-grok>>
 * <<esql-enrich-data>>
 
 include::esql-syntax.asciidoc[]
@@ -19,5 +20,5 @@ include::esql-commands.asciidoc[]
 include::esql-functions-operators.asciidoc[]
 include::multivalued-fields.asciidoc[]
 include::metadata-fields.asciidoc[]
+include::esql-process-data-with-dissect-grok.asciidoc[]
 include::esql-enrich-data.asciidoc[]
-

+ 258 - 0
docs/reference/esql/esql-process-data-with-dissect-grok.asciidoc

@@ -0,0 +1,258 @@
+[[esql-process-data-with-dissect-and-grok]]
+=== Data processing with `DISSECT` and `GROK`
+
+++++
+<titleabbrev>Data processing with `DISSECT` and `GROK`</titleabbrev>
+++++
+
+Your data may contain unstructured strings that you want to structure. This
+makes it easier to analyze the data. For example, log messages may contain IP
+addresses that you want to extract so you can find the most active IP addresses.
+
+image::images/esql/unstructured-data.png[align="center",width=75%]
+
+{es} can structure your data at index time or query time. At index time, you can
+use the <<dissect-processor,Dissect>> and <<grok-processor,Grok>> ingest
+processors, or the {ls} {logstash-ref}/plugins-filters-dissect.html[Dissect] and
+{logstash-ref}/plugins-filters-grok.html[Grok] filters. At query time, you can
+use the {esql} <<esql-dissect>> and <<esql-grok>> commands.
+
+[[esql-grok-or-dissect]]
+==== `DISSECT` or `GROK`? Or both?
+
+`DISSECT` works by breaking up a string using a delimiter-based pattern. `GROK`
+works similarly, but uses regular expressions. This makes `GROK` more powerful,
+but generally also slower. `DISSECT` works well when data is reliably repeated.
+`GROK` is a better choice when you really need the power of regular expressions,
+for example when the structure of your text varies from row to row.
+
+You can use both `DISSECT` and `GROK` for hybrid use cases, for example when a
+section of the line is reliably repeated, but the entire line is not. `DISSECT`
+can deconstruct the section of the line that is repeated. `GROK` can process the
+remaining field values using regular expressions.
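+
+The following hypothetical query sketches this hybrid approach (the log format
+and column names are invented for illustration): `DISSECT` deconstructs the
+reliably repeated prefix of the line, and `GROK` then applies a regular
+expression to the free-form remainder:
+
+[source,esql]
+----
+ROW message = "1.2.3.4 [2023-01-23T12:15:00.000Z] Disconnected: read timeout"
+| DISSECT message "%{clientip} [%{@timestamp}] %{event}"
+| GROK event "%{WORD:outcome}: %{GREEDYDATA:reason}"
+----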
+
+[[esql-process-data-with-dissect]]
+==== Process data with `DISSECT`
+
+The <<esql-dissect>> processing command matches a string against a
+delimiter-based pattern, and extracts the specified keys as columns.
+
+For example, the following pattern:
+[source,txt]
+----
+%{clientip} [%{@timestamp}] %{status} 
+----
+
+matches a log line of this format:
+[source,txt]
+----
+1.2.3.4 [2023-01-23T12:15:00.000Z] Connected
+----
+
+and results in adding the following columns to the input table:
+
+[%header.monospaced.styled,format=dsv,separator=|]
+|===
+clientip:keyword | @timestamp:keyword | status:keyword
+1.2.3.4 | 2023-01-23T12:15:00.000Z | Connected
+|===
+
+[[esql-dissect-patterns]]
+===== Dissect patterns
+
+include::../ingest/processors/dissect.asciidoc[tag=intro-example-explanation]
+
+An empty key `%{}` or a <<esql-named-skip-key,named skip key>> can be used to
+match values, but exclude the value from the output.
+
+All matched values are output as keyword string data types. Use the
+<<esql-type-conversion-functions>> to convert to another data type.
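+
+For example, this sketch reuses the earlier example log line and converts the
+extracted `@timestamp` string into a datetime:
+
+[source,esql]
+----
+ROW a = "1.2.3.4 [2023-01-23T12:15:00.000Z] Connected"
+| DISSECT a "%{clientip} [%{@timestamp}] %{status}"
+| EVAL @timestamp = TO_DATETIME(@timestamp)
+----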
+
+Dissect also supports <<esql-dissect-key-modifiers,key modifiers>> that can
+change dissect's default behavior. For example, you can instruct dissect to
+ignore certain fields, append fields, skip over padding, etc.
+
+[[esql-dissect-terminology]]
+===== Terminology
+
+dissect pattern::
+the set of fields and delimiters describing the textual 
+format. Also known as a dissection. 
+The dissection is described using a set of `%{}` sections:
+`%{a} - %{b} - %{c}`
+
+field::
+the text from `%{` to `}` inclusive.
+
+delimiter::
+the text between `}` and the next `%{` characters.
+Any set of characters other than `%{`, `'not }'`, or `}` is a delimiter.
+
+key::
++
+--
+the text between the `%{` and `}`, exclusive of the `?`, `+`, `&` prefixes 
+and the ordinal suffix. 
+
+Examples:
+
+* `%{?aaa}` - the key is `aaa` 
+* `%{+bbb/3}` - the key is `bbb` 
+* `%{&ccc}` - the key is `ccc` 
+--
+
+[[esql-dissect-examples]]
+===== Examples
+
+include::processing-commands/dissect.asciidoc[tag=examples]
+
+[[esql-dissect-key-modifiers]]
+===== Dissect key modifiers
+
+include::../ingest/processors/dissect.asciidoc[tag=dissect-key-modifiers]
+
+[[esql-dissect-key-modifiers-table]]
+.Dissect key modifiers
+[options="header",role="styled"]
+|======
+| Modifier      | Name               | Position       | Example                      | Description                                                  | Details
+| `->`          | Skip right padding | (far) right    | `%{keyname1->}`  | Skips any repeated characters to the right                   | <<esql-dissect-modifier-skip-right-padding,link>>
+| `+`           | Append             | left           | `%{+keyname} %{+keyname}`    | Appends two or more fields together                          | <<esql-append-modifier,link>>
+| `+` with `/n` | Append with order  | left and right | `%{+keyname/2} %{+keyname/1}` | Appends two or more fields together in the order specified   | <<esql-append-order-modifier,link>>
+| `?`           | Named skip key     | left           | `%{?ignoreme}`  | Skips the matched value in the output. Same behavior as `%{}`| <<esql-named-skip-key,link>>
+| `*` and `&`   | Reference keys     | left           | `%{*r1} %{&r1}`    | Sets the output key as value of `*` and output value of `&`  | <<esql-reference-keys,link>>
+|======
+
+[[esql-dissect-modifier-skip-right-padding]]
+====== Right padding modifier (`->`)
+include::../ingest/processors/dissect.asciidoc[tag=dissect-modifier-skip-right-padding]
+
+[[esql-append-modifier]]
+====== Append modifier (`+`)
+include::../ingest/processors/dissect.asciidoc[tag=append-modifier]
+
+[[esql-append-order-modifier]]
+====== Append with order modifier (`+` and `/n`)
+include::../ingest/processors/dissect.asciidoc[tag=append-order-modifier]
+
+[[esql-named-skip-key]]
+====== Named skip key (`?`)
+include::../ingest/processors/dissect.asciidoc[tag=named-skip-key]
+
+[[esql-reference-keys]]
+====== Reference keys (`*` and `&`)
+include::../ingest/processors/dissect.asciidoc[tag=reference-keys]
+
+[[esql-process-data-with-grok]]
+==== Process data with `GROK`
+
+The <<esql-grok>> processing command matches a string against a pattern based on
+regular expressions, and extracts the specified keys as columns.
+
+For example, the following pattern:
+[source,txt]
+----
+%{IP:ip} \[%{TIMESTAMP_ISO8601:@timestamp}\] %{GREEDYDATA:status}
+----
+
+matches a log line of this format:
+[source,txt]
+----
+1.2.3.4 [2023-01-23T12:15:00.000Z] Connected
+----
+
+and results in adding the following columns to the input table:
+
+[%header.monospaced.styled,format=dsv,separator=|]
+|===
+@timestamp:keyword | ip:keyword | status:keyword
+2023-01-23T12:15:00.000Z | 1.2.3.4 | Connected
+|===
+
+[[esql-grok-patterns]]
+===== Grok patterns
+
+The syntax for a grok pattern is `%{SYNTAX:SEMANTIC}`.
+
+The `SYNTAX` is the name of the pattern that matches your text. For example,
+`3.44` is matched by the `NUMBER` pattern and `55.3.244.1` is matched by the
+`IP` pattern. The syntax is how you match.
+
+The `SEMANTIC` is the identifier you give to the piece of text being matched.
+For example, `3.44` could be the duration of an event, so you could call it
+simply `duration`. Further, a string `55.3.244.1` might identify the `client`
+making a request.
+
+By default, matched values are output as keyword string data types. To convert a
+semantic's data type, suffix it with the target data type. For example,
+`%{NUMBER:num:int}` converts the `num` semantic from a string to an
+integer. Currently the only supported conversions are `int` and `float`. For
+other types, use the <<esql-type-conversion-functions>>.
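+
+For example, the following sketch (the sample string is invented for
+illustration) extracts a float duration and an IP address:
+
+[source,esql]
+----
+ROW a = "took 3.44 ms from 55.3.244.1"
+| GROK a "took %{NUMBER:duration:float} ms from %{IP:client}"
+----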
+
+For an overview of the available patterns, refer to
+{es-repo}/blob/{branch}/libs/grok/src/main/resources/patterns[GitHub]. You can
+also retrieve a list of all patterns using a <<grok-processor-rest-get,REST
+API>>.
+
+[[esql-grok-regex]]
+===== Regular expressions
+
+Grok is based on regular expressions. Any regular expression is valid in grok
+as well. Grok uses the Oniguruma regular expression library. Refer to
+https://github.com/kkos/oniguruma/blob/master/doc/RE[the Oniguruma GitHub
+repository] for the full supported regexp syntax.
+
+[NOTE]
+====
+Special regex characters like `[` and `]` need to be escaped with a `\`. For 
+example, in the earlier pattern:
+[source,txt]
+----
+%{IP:ip} \[%{TIMESTAMP_ISO8601:@timestamp}\] %{GREEDYDATA:status}
+----
+
+In {esql} queries, the backslash character itself is a special character that
+needs to be escaped with another `\`. For this example, the corresponding {esql}
+query becomes:
+[source.merge.styled,esql]
+----
+include::{esql-specs}/docs.csv-spec[tag=grokWithEscape]
+----
+====
+
+[[esql-custom-patterns]]
+===== Custom patterns
+
+If grok doesn't have a pattern you need, you can use the Oniguruma syntax for
+named capture which lets you match a piece of text and save it as a column:
+[source,txt]
+----
+(?<field_name>the pattern here)
+----
+
+For example, postfix logs have a `queue id` that is a 10 or 11-character
+hexadecimal value. This can be captured to a column named `queue_id` with:
+[source,txt]
+----
+(?<queue_id>[0-9A-F]{10,11})
+----
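+
+In an {esql} query, such a named capture can be used directly in the `GROK`
+pattern (the sample string is invented for illustration):
+
+[source,esql]
+----
+ROW a = "queue 5A1B2C3D4E accepted"
+| GROK a "queue (?<queue_id>[0-9A-F]{10,11}) %{GREEDYDATA:status}"
+----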
+
+[[esql-grok-examples]]
+===== Examples
+
+include::processing-commands/grok.asciidoc[tag=examples]
+
+[[esql-grok-debugger]]
+===== Grok debugger
+
+To write and debug grok patterns, you can use the
+{kibana-ref}/xpack-grokdebugger.html[Grok Debugger]. It provides a UI for
+testing patterns against sample data. Under the covers, it uses the same engine
+as the `GROK` command.
+
+[[esql-grok-limitations]]
+===== Limitations
+
+The `GROK` command does not support configuring <<custom-patterns,custom
+patterns>>, or <<trace-match,multiple patterns>>. The `GROK` command is not
+subject to <<grok-watchdog,Grok watchdog settings>>.

+ 47 - 7
docs/reference/esql/processing-commands/dissect.asciidoc

@@ -2,18 +2,58 @@
 [[esql-dissect]]
 === `DISSECT`
 
-`DISSECT` enables you to extract structured data out of a string. `DISSECT`
-matches the string against a delimiter-based pattern, and extracts the specified
-keys as columns.
+**Syntax**
 
-Refer to the <<dissect-processor,dissect processor documentation>> for the
-syntax of dissect patterns.
+[source,txt]
+----
+DISSECT input "pattern" [ append_separator="<separator>"]
+----
+
+*Parameters*
+
+`input`::
+The column that contains the string you want to structure. If the column has
+multiple values, `DISSECT` will process each value.
+
+`pattern`::
+A dissect pattern.
+
+`append_separator="<separator>"`::
+A string used as the separator between appended values, when using the <<esql-append-modifier,append modifier>>.
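+
+For instance, a hypothetical query combining `append_separator` with the
+<<esql-append-modifier,append modifier>> (the sample string is invented):
+
+[source,esql]
+----
+ROW a = "john jacob"
+| DISSECT a "%{+name} %{+name}" append_separator=" "
+----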
+
+*Description*
+
+`DISSECT` enables you to <<esql-process-data-with-dissect-and-grok,extract
+structured data out of a string>>. `DISSECT` matches the string against a
+delimiter-based pattern, and extracts the specified keys as columns.
+
+Refer to <<esql-process-data-with-dissect>> for the syntax of dissect patterns.
+
+*Example*
+
+// tag::examples[]
+The following example parses a string that contains a timestamp, some text, and
+an IP address:
+
+[source.merge.styled,esql]
+----
+include::{esql-specs}/docs.csv-spec[tag=basicDissect]
+----
+[%header.monospaced.styled,format=dsv,separator=|]
+|===
+include::{esql-specs}/docs.csv-spec[tag=basicDissect-result]
+|===
+
+By default, `DISSECT` outputs keyword string columns. To convert to another
+type, use <<esql-type-conversion-functions>>:
 
 [source.merge.styled,esql]
 ----
-include::{esql-specs}/dissect.csv-spec[tag=dissect]
+include::{esql-specs}/docs.csv-spec[tag=dissectWithToDatetime]
 ----
 [%header.monospaced.styled,format=dsv,separator=|]
 |===
-include::{esql-specs}/dissect.csv-spec[tag=dissect-result]
+include::{esql-specs}/docs.csv-spec[tag=dissectWithToDatetime-result]
 |===
+
+// end::examples[]

+ 54 - 8
docs/reference/esql/processing-commands/grok.asciidoc

@@ -2,20 +2,66 @@
 [[esql-grok]]
 === `GROK`
 
-`GROK` enables you to extract structured data out of a string. `GROK` matches
-the string against patterns, based on regular expressions, and extracts the
-specified patterns as columns.
+**Syntax**
 
-Refer to the <<grok-processor,grok processor documentation>> for the syntax for
-of grok patterns.
+[source,txt]
+----
+GROK input "pattern"
+----
+
+*Parameters*
+
+`input`::
+The column that contains the string you want to structure. If the column has
+multiple values, `GROK` will process each value.
+
+`pattern`::
+A grok pattern.
+
+*Description*
+
+`GROK` enables you to <<esql-process-data-with-dissect-and-grok,extract
+structured data out of a string>>. `GROK` matches the string against patterns,
+based on regular expressions, and extracts the specified patterns as columns.
+
+Refer to <<esql-process-data-with-grok>> for the syntax of grok patterns.
+
+*Examples*
+
+// tag::examples[]
+The following example parses a string that contains a timestamp, an IP address,
+an email address, and a number:
+
+[source.merge.styled,esql]
+----
+include::{esql-specs}/docs.csv-spec[tag=basicGrok]
+----
+[%header.monospaced.styled,format=dsv,separator=|]
+|===
+include::{esql-specs}/docs.csv-spec[tag=basicGrok-result]
+|===
+
+By default, `GROK` outputs keyword string columns. `int` and `float` types can
+be converted by appending `:type` to the semantics in the pattern. For example,
+`%{NUMBER:num:int}`:
+
+[source.merge.styled,esql]
+----
+include::{esql-specs}/docs.csv-spec[tag=grokWithConversionSuffix]
+----
+[%header.monospaced.styled,format=dsv,separator=|]
+|===
+include::{esql-specs}/docs.csv-spec[tag=grokWithConversionSuffix-result]
+|===
 
-For example:
+For other type conversions, use <<esql-type-conversion-functions>>:
 
 [source.merge.styled,esql]
 ----
-include::{esql-specs}/grok.csv-spec[tag=grok]
+include::{esql-specs}/docs.csv-spec[tag=grokWithToDatetime]
 ----
 [%header.monospaced.styled,format=dsv,separator=|]
 |===
-include::{esql-specs}/grok.csv-spec[tag=grok-result]
+include::{esql-specs}/docs.csv-spec[tag=grokWithToDatetime-result]
 |===
+// end::examples[]

BIN
docs/reference/images/esql/unstructured-data.png


+ 18 - 4
docs/reference/ingest/processors/dissect.asciidoc

@@ -44,11 +44,13 @@ and result in a document with the following fields:
 --------------------------------------------------
 // NOTCONSOLE
 
-A dissect pattern is defined by the parts of the string that will be discarded. In the example above the first part
-to be discarded is a single space. Dissect finds this space, then assigns the value of `clientip` is everything up
+// tag::intro-example-explanation[]
+A dissect pattern is defined by the parts of the string that will be discarded. In the previous example, the first part
+to be discarded is a single space. Dissect finds this space, then assigns the value of `clientip` to everything up
 until that space.
-Later dissect matches the `[` and then `]` and then assigns `@timestamp` to everything in-between `[` and `]`.
-Paying special attention the parts of the string to discard will help build successful dissect patterns.
+Next, dissect matches the `[` and then `]` and then assigns `@timestamp` to everything in-between `[` and `]`.
+Paying special attention to the parts of the string to discard will help build successful dissect patterns.
+// end::intro-example-explanation[]
 
 Successful matches require all keys in a pattern to have a value. If any of the `%{keyname}` defined in the pattern do
 not have a value, then an exception is thrown and may be handled by the <<handling-pipeline-failures,`on_failure`>> directive.
@@ -85,9 +87,11 @@ include::common-options.asciidoc[]
 
 [[dissect-key-modifiers]]
 ==== Dissect key modifiers
+// tag::dissect-key-modifiers[]
 Key modifiers can change the default behavior for dissection. Key modifiers may be found on the left or right
 of the `%{keyname}` always inside the `%{` and `}`. For example `%{+keyname ->}` has the append and right padding
 modifiers.
+// end::dissect-key-modifiers[]
 
 [[dissect-key-modifiers-table]]
 .Dissect Key Modifiers
@@ -104,6 +108,7 @@ modifiers.
 [[dissect-modifier-skip-right-padding]]
 ===== Right padding modifier (`->`)
 
+// tag::dissect-modifier-skip-right-padding[]
 The algorithm that performs the dissection is very strict in that it requires all characters in the pattern to match
 the source string. For example, the pattern `%{fookey} %{barkey}` (1 space), will match the string "foo{nbsp}bar"
 (1 space), but will not match the string "foo{nbsp}{nbsp}bar" (2 spaces) since the pattern has only 1 space and the
@@ -137,10 +142,12 @@ Right padding modifier with empty key example
 * ts = 1998-08-10T17:15:42,466
 * level = WARN
 |======
+// end::dissect-modifier-skip-right-padding[]
 
 [[append-modifier]]
 ===== Append modifier (`+`)
 [[dissect-modifier-append-key]]
+// tag::append-modifier[]
 Dissect supports appending two or more results together for the output.
 Values are appended left to right. An append separator can be specified.
 In this example the append_separator is defined as a space.
@@ -152,10 +159,12 @@ Append modifier example
 | *Result*  a|
 * name = john jacob jingleheimer schmidt
 |======
+// end::append-modifier[]
 
 [[append-order-modifier]]
 ===== Append with order modifier (`+` and `/n`)
 [[dissect-modifier-append-key-with-order]]
+// tag::append-order-modifier[]
 Dissect supports appending two or more results together for the output.
 Values are appended based on the order defined (`/n`). An append separator can be specified.
 In this example the append_separator is defined as a comma.
@@ -167,10 +176,12 @@ Append with order modifier example
 | *Result*  a|
 * name = schmidt,john,jingleheimer,jacob
 |======
+// end::append-order-modifier[]
 
 [[named-skip-key]]
 ===== Named skip key (`?`)
 [[dissect-modifier-named-skip-key]]
+// tag::named-skip-key[]
 Dissect supports ignoring matches in the final result. This can be done with an empty key `%{}`, but for readability
 it may be desired to give that empty key a name.
 
@@ -182,10 +193,12 @@ Named skip key modifier example
 * clientip = 1.2.3.4
 * @timestamp = 30/Apr/1998:22:00:52 +0000
 |======
+// end::named-skip-key[]
 
 [[reference-keys]]
 ===== Reference keys (`*` and `&`)
 [[dissect-modifier-reference-keys]]
+// tag::reference-keys[]
 Dissect supports using parsed values as the key/value pairings for the structured content. Imagine a system that
 partially logs in key/value pairs. Reference keys allow you to maintain that key/value relationship.
 
@@ -199,3 +212,4 @@ Reference key modifier example
 * ip = 1.2.3.4
 * error = REFUSED
 |======
+// end::reference-keys[]

+ 0 - 4
x-pack/plugin/esql/qa/testFixtures/src/main/resources/dissect.csv-spec

@@ -15,17 +15,13 @@ foo bar   | null       | null
 
 
 complexPattern
-// tag::dissect[]
 ROW a = "1953-01-23T12:15:00Z - some text - 127.0.0.1;" 
 | DISSECT a "%{Y}-%{M}-%{D}T%{h}:%{m}:%{s}Z - %{msg} - %{ip};" 
 | KEEP Y, M, D, h, m, s, msg, ip
-// end::dissect[]
 ;
 
-// tag::dissect-result[]
 Y:keyword | M:keyword | D:keyword | h:keyword | m:keyword | s:keyword | msg:keyword  | ip:keyword
 1953      | 01        | 23        | 12        | 15        | 00        | some text    | 127.0.0.1
-// end::dissect-result[]
 ;
 
 

+ 86 - 0
x-pack/plugin/esql/qa/testFixtures/src/main/resources/docs.csv-spec

@@ -468,4 +468,90 @@ count:long | languages:integer
 19         |2
 15         |1
 // end::countAll-result[]
+;
+
+basicGrok
+// tag::basicGrok[]
+ROW a = "2023-01-23T12:15:00.000Z 127.0.0.1 some.email@foo.com 42" 
+| GROK a "%{TIMESTAMP_ISO8601:date} %{IP:ip} %{EMAILADDRESS:email} %{NUMBER:num}" 
+| KEEP date, ip, email, num
+// end::basicGrok[]
+;
+
+// tag::basicGrok-result[]
+date:keyword              | ip:keyword    | email:keyword       | num:keyword
+2023-01-23T12:15:00.000Z  | 127.0.0.1     | some.email@foo.com  | 42
+// end::basicGrok-result[]
+;
+
+grokWithConversionSuffix
+// tag::grokWithConversionSuffix[]
+ROW a = "2023-01-23T12:15:00.000Z 127.0.0.1 some.email@foo.com 42" 
+| GROK a "%{TIMESTAMP_ISO8601:date} %{IP:ip} %{EMAILADDRESS:email} %{NUMBER:num:int}" 
+| KEEP date, ip, email, num
+// end::grokWithConversionSuffix[]
+;
+
+// tag::grokWithConversionSuffix-result[]
+date:keyword              | ip:keyword    | email:keyword       | num:integer
+2023-01-23T12:15:00.000Z  | 127.0.0.1     | some.email@foo.com  | 42
+// end::grokWithConversionSuffix-result[]
+;
+
+grokWithToDatetime
+// tag::grokWithToDatetime[]
+ROW a = "2023-01-23T12:15:00.000Z 127.0.0.1 some.email@foo.com 42" 
+| GROK a "%{TIMESTAMP_ISO8601:date} %{IP:ip} %{EMAILADDRESS:email} %{NUMBER:num:int}" 
+| KEEP date, ip, email, num
+| EVAL date = TO_DATETIME(date)
+// end::grokWithToDatetime[]
+;
+
+// tag::grokWithToDatetime-result[]
+ip:keyword    | email:keyword       | num:integer | date:date
+127.0.0.1     | some.email@foo.com  | 42          | 2023-01-23T12:15:00.000Z
+// end::grokWithToDatetime-result[]
+;
+
+grokWithEscape
+// tag::grokWithEscape[]
+ROW a = "1.2.3.4 [2023-01-23T12:15:00.000Z] Connected"
+| GROK a "%{IP:ip} \\[%{TIMESTAMP_ISO8601:@timestamp}\\] %{GREEDYDATA:status}"
+// end::grokWithEscape[]
+| KEEP @timestamp
+;
+
+// tag::grokWithEscape-result[]
+@timestamp:keyword
+2023-01-23T12:15:00.000Z
+// end::grokWithEscape-result[]
+;
+
+basicDissect
+// tag::basicDissect[]
+ROW a = "2023-01-23T12:15:00.000Z - some text - 127.0.0.1" 
+| DISSECT a "%{date} - %{msg} - %{ip}"
+| KEEP date, msg, ip
+// end::basicDissect[]
+;
+
+// tag::basicDissect-result[]
+date:keyword             | msg:keyword  | ip:keyword
+2023-01-23T12:15:00.000Z | some text    | 127.0.0.1
+// end::basicDissect-result[]
+;
+
+dissectWithToDatetime
+// tag::dissectWithToDatetime[]
+ROW a = "2023-01-23T12:15:00.000Z - some text - 127.0.0.1" 
+| DISSECT a "%{date} - %{msg} - %{ip}" 
+| KEEP date, msg, ip
+| EVAL date = TO_DATETIME(date)
+// end::dissectWithToDatetime[]
+;
+
+// tag::dissectWithToDatetime-result[]
+msg:keyword  | ip:keyword | date:date
+some text    | 127.0.0.1  | 2023-01-23T12:15:00.000Z
+// end::dissectWithToDatetime-result[]
 ;

+ 1 - 6
x-pack/plugin/esql/qa/testFixtures/src/main/resources/grok.csv-spec

@@ -15,17 +15,12 @@ foo bar   | null
 
 
 complexPattern
-// tag::grok[]
 ROW a = "1953-01-23T12:15:00Z 127.0.0.1 some.email@foo.com 42" 
 | GROK a "%{TIMESTAMP_ISO8601:date} %{IP:ip} %{EMAILADDRESS:email} %{NUMBER:num:int}" 
-| KEEP date, ip, email, num
-// end::grok[]
-;
+| KEEP date, ip, email, num;
 
-// tag::grok-result[]
 date:keyword          | ip:keyword    | email:keyword       | num:integer
 1953-01-23T12:15:00Z  | 127.0.0.1     | some.email@foo.com  | 42
-// end::grok-result[]
 ;