[[elasticsearch-intro]]
= Elasticsearch introduction

[partintro]
--
_**You know, for search (and analysis)**_

{es} is the distributed search and analytics engine at the heart of
the {stack}. {ls} and {beats} facilitate collecting, aggregating, and
enriching your data and storing it in {es}. {kib} enables you to
interactively explore, visualize, and share insights into your data and manage
and monitor the stack. {es} is where the indexing, search, and analysis
magic happens.

{es} provides near real-time search and analytics for all types of data. Whether you
have structured or unstructured text, numerical data, or geospatial data,
{es} can efficiently store and index it in a way that supports fast searches.
You can go far beyond simple data retrieval and aggregate information to discover
trends and patterns in your data. And as your data and query volume grows, the
distributed nature of {es} enables your deployment to grow seamlessly right
along with it.

While not _every_ problem is a search problem, {es} offers speed and flexibility
to handle data in a wide variety of use cases:

* Add a search box to an app or website
* Store and analyze logs, metrics, and security event data
* Use machine learning to automatically model the behavior of your data in real
time
* Automate business workflows using {es} as a storage engine
* Manage, integrate, and analyze spatial information using {es} as a geographic
information system (GIS)
* Store and process genetic data using {es} as a bioinformatics research tool

We’re continually amazed by the novel ways people use search. But whether
your use case is similar to one of these, or you're using {es} to tackle a new
problem, the way you work with your data, documents, and indices in {es} is
the same.
--

[[documents-indices]]
== Data in: documents and indices

{es} is a distributed document store. Instead of storing information as rows of
columnar data, {es} stores complex data structures that have been serialized
as JSON documents. When you have multiple {es} nodes in a cluster, stored
documents are distributed across the cluster and can be accessed immediately
from any node.

When a document is stored, it is indexed and fully searchable in near
real-time--within 1 second. {es} uses a data structure called an
inverted index that supports very fast full-text searches. An inverted index
lists every unique word that appears in any document and identifies all of the
documents each word occurs in.
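
For example, here is a minimal sketch of storing and then retrieving a JSON
document over the REST API (the `customer` index name and its fields are
hypothetical placeholders):

[source,console]
----
PUT /customer/_doc/1
{
  "name": "John Doe",
  "message": "Trying out Elasticsearch"
}
----

[source,console]
----
GET /customer/_doc/1
----

Shortly after the `PUT`, the same document is also discoverable through
`_search` requests, not just by ID.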

An index can be thought of as an optimized collection of documents and each
document is a collection of fields, which are the key-value pairs that contain
your data. By default, {es} indexes all data in every field and each indexed
field has a dedicated, optimized data structure. For example, text fields are
stored in inverted indices, and numeric and geo fields are stored in BKD trees.
The ability to use the per-field data structures to assemble and return search
results is what makes {es} so fast.

{es} also has the ability to be schema-less, which means that documents can be
indexed without explicitly specifying how to handle each of the different fields
that might occur in a document. When dynamic mapping is enabled, {es}
automatically detects and adds new fields to the index. This default
behavior makes it easy to index and explore your data--just start
indexing documents and {es} will detect and map booleans, floating point and
integer values, dates, and strings to the appropriate {es} datatypes.
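
As an illustrative sketch (the `logs` index and its fields are invented), you
can index a document into an index that does not exist yet, then inspect the
mapping that dynamic mapping generated:

[source,console]
----
PUT /logs/_doc/1
{
  "status_code": 200,
  "timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000"
}

GET /logs/_mapping
----

The response shows that `status_code` was detected as a numeric field,
`timestamp` as a `date`, and `message` as a `text` field.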

Ultimately, however, you know more about your data and how you want to use it
than {es} can. You can define rules to control dynamic mapping and explicitly
define mappings to take full control of how fields are stored and indexed.

Defining your own mappings enables you to:

* Distinguish between full-text string fields and exact value string fields
* Perform language-specific text analysis
* Optimize fields for partial matching
* Use custom date formats
* Use data types such as `geo_point` and `geo_shape` that cannot be automatically
detected

It’s often useful to index the same field in different ways for different
purposes. For example, you might want to index a string field as both a text
field for full-text search and as a keyword field for sorting or aggregating
your data. Or, you might choose to use more than one language analyzer to
process the contents of a string field that contains user input.
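
As a sketch of such a mapping (the `products` index and `title` field are
hypothetical), the `title` field below is indexed as `text` for full-text
search, with a `keyword` sub-field for sorting and aggregations:

[source,console]
----
PUT /products
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword" }
        }
      }
    }
  }
}
----

Queries can then target `title` for full-text matching and `title.raw` when an
exact, sortable value is needed.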

The analysis chain that is applied to a full-text field during indexing is also
used at search time. When you query a full-text field, the query text undergoes
the same analysis before the terms are looked up in the index.
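
You can see what an analyzer produces for a given input with the `_analyze`
API. For example (the sample text is arbitrary):

[source,console]
----
GET /_analyze
{
  "analyzer": "standard",
  "text": "The QUICK brown foxes!"
}
----

The response lists the terms that would actually be stored in the inverted
index: `the`, `quick`, `brown`, `foxes`.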

[[search-analyze]]
== Information out: search and analyze

While you can use {es} as a document store and retrieve documents and their
metadata, the real power comes from being able to easily access the full suite
of search capabilities built on the Apache Lucene search engine library.

{es} provides a simple, coherent REST API for managing your cluster and indexing
and searching your data. For testing purposes, you can easily submit requests
directly from the command line or through the Developer Console in {kib}. From
your applications, you can use the
https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client]
for your language of choice: Java, JavaScript, Go, .NET, PHP, Perl, Python,
or Ruby.

[float]
[[search-data]]
=== Searching your data

The {es} REST APIs support structured queries, full text queries, and complex
queries that combine the two. Structured queries are
similar to the types of queries you can construct in SQL. For example, you
could search the `gender` and `age` fields in your `employee` index and sort the
matches by the `hire_date` field. Full-text queries find all documents that
match the query string and return them sorted by _relevance_--how good a
match they are for your search terms.
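
As a sketch of combining the two (reusing the hypothetical `employee` index
from above, plus an invented `about` field), a `bool` query can pair a
full-text `match` clause with a structured `range` filter:

[source,console]
----
GET /employee/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "about": "data analysis" } }
      ],
      "filter": [
        { "range": { "age": { "gte": 30 } } }
      ]
    }
  },
  "sort": [
    { "hire_date": "desc" }
  ]
}
----

The `match` clause contributes to each document's relevance score, while the
`filter` clause only includes or excludes documents.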

In addition to searching for individual terms, you can perform phrase searches,
similarity searches, and prefix searches, and get autocomplete suggestions.

Have geospatial or other numerical data that you want to search? {es} indexes
non-textual data in optimized data structures that support
high-performance geo and numerical queries.

You can access all of these search capabilities using {es}'s
comprehensive JSON-style query language (<<query-dsl, Query DSL>>). You can also
construct <<sql-overview, SQL-style queries>> to search and aggregate data
natively inside {es}, and JDBC and ODBC drivers enable a broad range of
third-party applications to interact with {es} via SQL.
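
For instance, a similar query expressed through the SQL interface (again
assuming the hypothetical `employee` index):

[source,console]
----
POST /_sql?format=txt
{
  "query": "SELECT hire_date, age FROM employee WHERE age >= 30 ORDER BY hire_date DESC LIMIT 10"
}
----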

[float]
[[analyze-data]]
=== Analyzing your data

{es} aggregations enable you to build complex summaries of your data and gain
insight into key metrics, patterns, and trends. Instead of just finding the
proverbial “needle in a haystack”, aggregations enable you to answer questions
like:

* How many needles are in the haystack?
* What is the average length of the needles?
* What is the median length of the needles, broken down by manufacturer?
* How many needles were added to the haystack in each of the last six months?

You can also use aggregations to answer more subtle questions, such as:

* What are your most popular needle manufacturers?
* Are there any unusual or anomalous clumps of needles?

Because aggregations leverage the same data structures used for search, they are
also very fast. This enables you to analyze and visualize your data in real time.
Your reports and dashboards update as your data changes so you can take action
based on the latest information.

What’s more, aggregations operate alongside search requests. You can search
documents, filter results, and perform analytics at the same time, on the same
data, in a single request. And because aggregations are calculated in the
context of a particular search, you’re not just displaying a count of all
size 70 needles, you’re displaying a count of the size 70 needles
that match your users' search criteria--for example, all size 70 _non-stick
embroidery_ needles.
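
A minimal sketch of a combined search and aggregation (the `needles` index and
its fields are invented to match the running example):

[source,console]
----
GET /needles/_search
{
  "query": {
    "match": { "description": "non-stick embroidery" }
  },
  "aggs": {
    "avg_length": {
      "avg": { "field": "length_mm" }
    },
    "by_manufacturer": {
      "terms": { "field": "manufacturer" }
    }
  }
}
----

A single response returns the matching documents plus the average needle length
and a per-manufacturer breakdown, both computed only over the documents that
match the query.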

[float]
[[more-features]]
==== But wait, there’s more

Want to automate the analysis of your time-series data? You can use
{ml-docs}/ml-overview.html[machine learning] features to create accurate
baselines of normal behavior in your data and identify anomalous patterns. With
machine learning, you can detect:

* Anomalies related to temporal deviations in values, counts, or frequencies
* Statistical rarity
* Unusual behaviors for a member of a population

And the best part? You can do this without having to specify algorithms, models,
or other data science-related configurations.
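
To give a flavor of how little configuration is involved, here is a hedged
sketch of creating an anomaly detection job (the job name, `response_time`
field, and bucket span are invented). You describe which metric to model, not
which algorithm to use:

[source,console]
----
PUT _ml/anomaly_detectors/response-times
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "response_time" }
    ]
  },
  "data_description": {
    "time_field": "timestamp"
  }
}
----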

[[scalability]]
== Scalability and resilience: clusters, nodes, and shards
++++
<titleabbrev>Scalability and resilience</titleabbrev>
++++

{es} is built to be always available and to scale with your needs. It does this
by being distributed by nature. You can add servers (nodes) to a cluster to
increase capacity and {es} automatically distributes your data and query load
across all of the available nodes. No need to overhaul your application--{es}
knows how to balance multi-node clusters to provide scale and high availability.
The more nodes, the merrier.

How does this work? Under the covers, an {es} index is really just a logical
grouping of one or more physical shards, where each shard is actually a
self-contained index. By distributing the documents in an index across multiple
shards, and distributing those shards across multiple nodes, {es} can ensure
redundancy, which both protects against hardware failures and increases
query capacity as nodes are added to a cluster. As the cluster grows (or shrinks),
{es} automatically migrates shards to rebalance the cluster.

There are two types of shards: primaries and replicas. Each document in an index
belongs to one primary shard. A replica shard is a copy of a primary shard.
Replicas provide redundant copies of your data to protect against hardware
failure and increase capacity to serve read requests
like searching or retrieving a document.

The number of primary shards in an index is fixed at the time that an index is
created, but the number of replica shards can be changed at any time, without
interrupting indexing or query operations.
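
For instance, a sketch of creating an index with explicit shard settings and
later adjusting the replica count (the index name and values are illustrative):

[source,console]
----
PUT /my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
----

[source,console]
----
PUT /my-index/_settings
{
  "number_of_replicas": 2
}
----

The second request can be issued at any time; only the number of primary
shards is fixed at index creation.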

[float]
[[it-depends]]
=== It depends...

There are a number of performance considerations and trade-offs with respect
to shard size and the number of primary shards configured for an index. The more
shards, the more overhead there is simply in maintaining those indices. The
larger the shard size, the longer it takes to move shards around when {es}
needs to rebalance a cluster.

Querying lots of small shards makes the processing per shard faster, but more
queries mean more overhead, so querying a smaller
number of larger shards might be faster. In short...it depends.

As a starting point:

* Aim to keep the average shard size between a few GB and a few tens of GB. For
use cases with time-based data, it is common to see shards in the 20GB to 40GB
range.
* Avoid the gazillion shards problem. The number of shards a node can hold is
proportional to the available heap space. As a general rule, the number of
shards per GB of heap space should be less than 20.

The best way to determine the optimal configuration for your use case is
through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries].

[float]
[[disaster-ccr]]
=== In case of disaster

For performance reasons, the nodes within a cluster need to be on the same
network. Balancing shards in a cluster across nodes in different data centers
simply takes too long. But high-availability architectures demand that you avoid
putting all of your eggs in one basket. In the event of a major outage in one
location, servers in another location need to be able to take over. Seamlessly.
The answer? {ccr-cap} (CCR).

CCR provides a way to automatically synchronize indices from your primary cluster
to a secondary remote cluster that can serve as a hot backup. If the primary
cluster fails, the secondary cluster can take over. You can also use CCR to
create secondary clusters to serve read requests in geo-proximity to your users.

{ccr-cap} is active-passive. The index on the primary cluster is
the active leader index and handles all write requests. Indices replicated to
secondary clusters are read-only followers.
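
A hedged sketch of what starting replication looks like on the secondary
cluster, assuming a remote cluster connection named `leader` has already been
configured and the leader index `server-metrics` exists (both names are
illustrative):

[source,console]
----
PUT /server-metrics-copy/_ccr/follow
{
  "remote_cluster": "leader",
  "leader_index": "server-metrics"
}
----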

[float]
[[admin]]
=== Care and feeding

As with any enterprise system, you need tools to secure, manage, and
monitor your {es} clusters. Security, monitoring, and administrative features
that are integrated into {es} enable you to use {kibana-ref}/introduction.html[{kib}]
as a control center for managing a cluster. Features like <<rollup-overview,
data rollups>> and <<index-lifecycle-management, index lifecycle management>>
help you intelligently manage your data over time.