
Default S3 Chunk Size to 5TB (#64980)

Just like we did for Azure and GCS, we should go with the
maximum possible chunk size in S3 as well.
Armin Braun 5 years ago
parent commit b873e1d65d

+ 1 - 1
docs/plugins/repository-s3.asciidoc

@@ -270,7 +270,7 @@ The following settings are supported:
 
     Big files can be broken down into chunks during snapshotting if needed.
     Specify the chunk size as a value and unit, for example:
-    `1GB`, `10MB`, `5KB`, `500B`. Defaults to `1GB`.
+    `1TB`, `1GB`, `10MB`. Defaults to the maximum size of a blob in S3, which is `5TB`.
 
 `compress`::
 

+ 3 - 3
plugins/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3Repository.java

@@ -126,10 +126,10 @@ class S3Repository extends MeteredBlobStoreRepository {
         Setting.byteSizeSetting("buffer_size", DEFAULT_BUFFER_SIZE, MIN_PART_SIZE_USING_MULTIPART, MAX_PART_SIZE_USING_MULTIPART);
 
     /**
-     * Big files can be broken down into chunks during snapshotting if needed. Defaults to 1g.
+     * Big files can be broken down into chunks during snapshotting if needed. Defaults to 5tb.
      */
-    static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING = Setting.byteSizeSetting("chunk_size", new ByteSizeValue(1, ByteSizeUnit.GB),
-            new ByteSizeValue(5, ByteSizeUnit.MB), new ByteSizeValue(5, ByteSizeUnit.TB));
+    static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING = Setting.byteSizeSetting("chunk_size", MAX_FILE_SIZE_USING_MULTIPART,
+            new ByteSizeValue(5, ByteSizeUnit.MB), MAX_FILE_SIZE_USING_MULTIPART);
 
     /**
      * Sets the S3 storage class type for the backup files. Values may be standard, reduced_redundancy,
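Below is a minimal, self-contained sketch (not part of the commit) of how the bounded byte-size setting above behaves after this change, assuming the 7.x-era package layout and that MAX_FILE_SIZE_USING_MULTIPART is the 5 TB S3 object-size limit defined elsewhere in S3Repository:

    // Hypothetical standalone demo; names mirror S3Repository, but this is not the plugin code itself.
    import org.elasticsearch.common.settings.Setting;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.common.unit.ByteSizeUnit;
    import org.elasticsearch.common.unit.ByteSizeValue;

    public class ChunkSizeSettingDemo {
        // 5 TB: the maximum size of a single S3 object (and of a multipart upload).
        static final ByteSizeValue MAX_FILE_SIZE_USING_MULTIPART = new ByteSizeValue(5, ByteSizeUnit.TB);

        static final Setting<ByteSizeValue> CHUNK_SIZE_SETTING = Setting.byteSizeSetting(
            "chunk_size",
            MAX_FILE_SIZE_USING_MULTIPART,          // default: 5tb (previously 1gb)
            new ByteSizeValue(5, ByteSizeUnit.MB),  // minimum allowed value
            MAX_FILE_SIZE_USING_MULTIPART);         // maximum allowed value

        public static void main(String[] args) {
            // No explicit chunk_size -> the new 5tb default applies.
            System.out.println(CHUNK_SIZE_SETTING.get(Settings.EMPTY));
            // Any value within [5mb, 5tb] is still accepted, e.g. the old default:
            System.out.println(CHUNK_SIZE_SETTING.get(
                Settings.builder().put("chunk_size", "1gb").build()));
            // Values outside the bounds (e.g. "1mb" or "6tb") throw IllegalArgumentException.
        }
    }

Because the default now equals the upper bound, snapshot files are effectively no longer split into chunks unless a user explicitly configures a smaller chunk_size, in line with the earlier Azure and GCS changes mentioned in the commit message.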