
[[index-modules-store]]
== Store

The store module allows you to control how index data is stored.

The index can either be stored in memory (no persistence) or on disk
(the default). In-memory indices provide better performance at the cost
of limiting the index size to the amount of available physical memory.

When using a local gateway (the default), file system storage with *no*
in-memory storage is required to maintain index consistency. This is
required since the local gateway constructs its state from the local
index state of each node.

Another important aspect of memory based storage is that Elasticsearch
supports storing the index in memory *outside of the JVM heap space*,
using the "Memory" storage type (see below). As a result, there is no
need for an extra-large JVM heap (with its own consequences) just to
hold the index in memory.

[float]
[[store-throttling]]
=== Store Level Throttling

Lucene, the IR library that elasticsearch uses under the covers, works
by creating immutable segments (up to deletes) and constantly merging
them (the merge policy settings control how those merges happen). The
merge process runs asynchronously, without blocking indexing or search.
The problem, especially on systems with limited IO capacity, is that
merging can be expensive and degrade search and indexing simply because
the machine is taxed with additional IO.

The store module allows throttling to be configured for merges (or for
all store activity), either at the node level or at the index level.
Node level throttling ensures that, across all the shards allocated on
a node, the merge process will not exceed the configured number of
bytes per second. It is enabled by setting
`indices.store.throttle.type` to `merge`, and setting
`indices.store.throttle.max_bytes_per_sec` to something like `5mb`. The
node level settings can be changed dynamically using the cluster update
settings API. The default is `20mb` with type `merge`.
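
As a sketch, the node level throttle could be raised through the
cluster update settings API. The host `localhost:9200` and the `50mb`
value are only illustrative:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient" : {
        "indices.store.throttle.max_bytes_per_sec" : "50mb"
    }
}'
--------------------------------------------------

Using `transient` applies the change until the next full cluster
restart; `persistent` would keep it across restarts.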

If specific index level configuration is needed, regardless of the node
level settings, it can be set as well using `index.store.throttle.type`
and `index.store.throttle.max_bytes_per_sec`. The default value for the
type is `node`, meaning the index throttles based on the node level
settings and participates in the global throttling. Both settings can
be changed dynamically using the index update settings API.
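
For example, a single index could be throttled independently of the
node level settings through the index update settings API. The index
name `my_index` and the `5mb` value here are hypothetical:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '{
    "index.store.throttle.type" : "merge",
    "index.store.throttle.max_bytes_per_sec" : "5mb"
}'
--------------------------------------------------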

The following sections list all the supported storage types.

[float]
[[file-system]]
=== File System

File system based storage is the default. There are several
implementations, or storage types. The best one for the operating
environment is chosen automatically: `mmapfs` on
Solaris/Linux/Windows 64bit, `simplefs` on Windows 32bit, and `niofs`
for the rest.

The following are the different file system based storage types:

[float]
==== Simple FS

The `simplefs` type is a straightforward implementation of file system
storage (maps to Lucene `SimpleFSDirectory`) using a random access
file. This implementation has poor concurrent performance (multiple
threads will bottleneck). It is usually better to use `niofs` when you
need index persistence.

[float]
==== NIO FS

The `niofs` type stores the shard index on the file system (maps to
Lucene `NIOFSDirectory`) using NIO. It allows multiple threads to read
from the same file concurrently. It is not recommended on Windows
because of a bug in the Sun Java implementation.

[float]
==== MMap FS

The `mmapfs` type stores the shard index on the file system (maps to
Lucene `MMapDirectory`) by mapping a file into memory (mmap). Memory
mapping uses a portion of the virtual memory address space of your
process equal to the size of the file being mapped. Before using this
storage type, be sure you have plenty of virtual address space.
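
The storage type can also be set explicitly, overriding the automatic
choice, for example when creating an index. A minimal sketch, assuming
an index named `my_index` (any of the types above could be used in
place of `niofs`):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '{
    "settings" : {
        "index.store.type" : "niofs"
    }
}'
--------------------------------------------------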

[float]
[[store-memory]]
=== Memory

The `memory` type stores the index in main memory.

There are also *node* level settings that control the caching of
buffers (important when using direct buffers):

[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`cache.memory.direct` |Whether the memory should be allocated outside
of the JVM heap. Defaults to `true`.

|`cache.memory.small_buffer_size` |The small buffer size, defaults to
`1kb`.

|`cache.memory.large_buffer_size` |The large buffer size, defaults to
`1mb`.

|`cache.memory.small_cache_size` |The small buffer cache size, defaults
to `10mb`.

|`cache.memory.large_cache_size` |The large buffer cache size, defaults
to `500mb`.
|=======================================================================