[[index-modules-translog]]
== Translog

Changes to a shard are only persisted to disk when the shard is ``flushed'',
which is a relatively heavy operation and so cannot be performed after every
index or delete operation. Instead, changes are accumulated in an in-memory
indexing buffer and only written to disk periodically. On its own, this would
mean that the contents of the in-memory buffer could be lost in the event of
power failure or some other hardware crash.

To prevent this data loss, each shard has a _transaction log_ or write ahead
log associated with it. Any index or delete operation is first written to the
translog before being processed by the internal Lucene index. This translog is
only cleared once the shard has been flushed and the data in the in-memory
buffer persisted to disk as a Lucene segment.

In the event of a crash, recent transactions can be replayed from the
transaction log when the shard recovers.
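
A flush can also be triggered manually. As a minimal sketch (the index name
`my_index` is illustrative), the flush API persists the in-memory buffer as a
Lucene segment and clears the translog for that index:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/my_index/_flush'
--------------------------------------------------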

[float]
=== Flush settings

The following <<indices-update-settings,dynamically updatable>> settings
control how often the in-memory buffer is flushed to disk (see the example
after this list for updating one of them at runtime):

`index.translog.flush_threshold_size`::

Once the translog hits this size, a flush will happen. Defaults to `512mb`.

`index.translog.flush_threshold_ops`::

After how many operations to flush. Defaults to `unlimited`.

`index.translog.flush_threshold_period`::

How long to wait before triggering a flush regardless of translog size.
Defaults to `30m`.

`index.translog.interval`::

How often to check if a flush is needed, randomized between the interval value
and 2x the interval value. Defaults to `5s`.
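
For example, here is a sketch of lowering the operation threshold on a
hypothetical index named `my_index` with the update settings API (the value
shown is illustrative, not a recommendation):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index.translog.flush_threshold_ops": 5000
}'
--------------------------------------------------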

[float]
=== Translog settings

The translog itself is only persisted to disk when it is ++fsync++ed. Until
then, data recently written to the translog may only exist in the file system
cache and could potentially be lost in the event of hardware failure.

The following <<indices-update-settings,dynamically updatable>> settings
control the behaviour of the transaction log (a configuration sketch follows
the list):

`index.translog.sync_interval`::

How often the translog is ++fsync++ed to disk. Defaults to `5s`.

`index.translog.fs.type`::

Either a `buffered` translog (default) which buffers 64kB in memory before
writing to disk, or a `simple` translog which writes every entry to disk
immediately. Whichever is used, these writes are only ++fsync++ed according
to the `sync_interval`.

The `buffered` translog is written to disk when it reaches 64kB in size, or
whenever an `fsync` is triggered by the `sync_interval`.
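
As a sketch, these settings could also be supplied when creating an index
(the index name `my_index` and the values are illustrative assumptions):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '
{
    "settings": {
        "index.translog.sync_interval": "10s",
        "index.translog.fs.type": "simple"
    }
}'
--------------------------------------------------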

.Why don't we `fsync` the translog after every write?
******************************************************

The disk is the slowest part of any server. An `fsync` ensures that data in
the file system buffer has been physically written to disk, but this
persistence comes with a performance cost.

However, the translog is not the only persistence mechanism in Elasticsearch.
Any index or update request is first written to the primary shard, then
forwarded in parallel to any replica shards. The primary waits for the action
to be completed on the replicas before reporting success to the client.

If the node holding the primary shard dies for some reason, its transaction
log could be missing the last 5 seconds of data. However, that data should
already be available on a replica shard on a different node. Of course, if
the whole data centre loses power at the same time, then it is possible that
you could lose the last 5 seconds (or `sync_interval`) of data.

******************************************************