Bloom Filter Library Comparison

Current problems

  • The storage service is deployed with a planned memory budget, but after running for a while it hits OOM.
  • Strangely, when the storage service OOMs, the dumped heap is often several GB smaller than the planned memory.
  • After running for a while, the storage service is prone to full GCs.

Background

The storage system performs global deduplication; each node manages the dedup index for its own data. The index is currently stored directly in files on SSD, with open addressing used to resolve hash collisions.

Whether a file is full is determined by the number of hash collisions and the total number of hashes already stored in that file. The maximum number of hashes one index file may hold is:

the next prime after ((Integer.MAX_VALUE / ENTRY_SIZE) – 100)

the next prime after ((2**31 - 1) / 37.0 - 100), which is 58039999

Where does this formula come from? All I can say is that it was already like this the first time I saw it. Presumably it accounts for the fact that Java cannot memory-map a file region larger than Integer.MAX_VALUE bytes.

A keys file is therefore at most hash_count_limit * ENTRY_SIZE bytes long, roughly 2GB.
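The limit can be reproduced with BigInteger (a sketch; ENTRY_SIZE = 37 is taken from the formula above):

```java
import java.math.BigInteger;

public class HashCountLimit {
    static final int ENTRY_SIZE = 37; // bytes per index entry, per the formula above

    public static void main(String[] args) {
        // ((2**31 - 1) / 37.0) - 100, truncated: 58039998
        long base = (long) ((Integer.MAX_VALUE / (double) ENTRY_SIZE) - 100);
        // first (probable) prime strictly greater than base
        long limit = BigInteger.valueOf(base).nextProbablePrime().longValueExact();
        System.out.println(limit);              // 58039999
        System.out.println(limit * ENTRY_SIZE); // ~2GB, still below Integer.MAX_VALUE
    }
}
```

The point of staying under Integer.MAX_VALUE is that `FileChannel.map` takes its size as a long but rejects anything above Integer.MAX_VALUE, so the whole file remains mappable in one go.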

Bloom Filter Library Selection

The project currently uses guava-19.0, which is quite old. This test covers:

  1. guava-19.0 — the guava version used by the stable storage release
  2. guava-28.1-jre — the guava version used by the storage development branch
  3. bloom-filter-for-scala

Memory usage when creating one bloom filter sized per the project's parameters. JVM options:

-javaagent:/home/wjh/.m2/repository/com/google/code/java-allocation-instrumenter/java-allocation-instrumenter/3.3.0/java-allocation-instrumenter-3.3.0.jar
-Xms2048m
-Xmx2048m
-XX:G1HeapRegionSize=16m

Memory allocations are observed with Google's java-allocation-instrumenter:

import com.google.monitoring.runtime.instrumentation.AllocationRecorder;

AllocationRecorder.addSampler((count, desc, newObj, size) -> {
    if (size < 1024) {
        return;
    }
    System.out.println("I just allocated the object " + newObj
            + " of type " + desc + " whose size is " + size);
    if (count != -1) {
        System.out.println("It's an array of size " + count);
    }
});

Generating the bloom filters

  List<FLBF> list = new ArrayList<>();
  for (int i = 0; i < bfNum; i++) {
      scan.nextLine();
      int c = i + 1;
      System.out.println("build bf : " + c);
      list.add(genFLBF());
  }

    //guava
    private static FLBF genFLBF() {
        FLBF flbf = FLBF.createNew(58039999, 0.01f);
        for (int i = 0; i < 100; i++) {
            flbf.put(genRandHash());
        }
        return flbf;
    }

    //bloomfilter-scala
    private static FLBF genFLBF() {
        long expectedElements = 58039999;
        double falsePositiveRate = 0.01f;
        BloomFilter<byte[]> flbf = BloomFilter.apply(
                expectedElements,
                falsePositiveRate,
                CanGenerateHashFrom.CanGenerateHashFromByteArray$.MODULE$);

        for (int i = 0; i < 100; i++) {
            byte[] element = genRandHash();
            flbf.add(element);
        }
        return new FLBF(flbf);
    }
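bloom-filter-for-scala keeps its bit array off-heap (the library allocates native memory via sun.misc.Unsafe), which is why its footprint shows up in process RSS rather than in the Java heap in the measurements below. A minimal, hypothetical sketch of the same idea using a direct ByteBuffer — not the library's actual code:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of an off-heap bit array, illustrating the technique
// bloom-filter-for-scala uses for its backing store.
public class OffHeapBits {
    private final ByteBuffer bits;

    OffHeapBits(long numBits) {
        // A direct buffer lives outside the Java heap: it raises process RSS
        // but adds no GC pressure and never becomes a G1 humongous object.
        bits = ByteBuffer.allocateDirect((int) ((numBits + 7) / 8));
    }

    void set(long idx) {
        int b = (int) (idx >>> 3);
        bits.put(b, (byte) (bits.get(b) | (1 << (int) (idx & 7))));
    }

    boolean get(long idx) {
        return (bits.get((int) (idx >>> 3)) & (1 << (int) (idx & 7))) != 0;
    }

    public static void main(String[] args) {
        OffHeapBits s = new OffHeapBits(1024);
        s.set(42);
        System.out.println(s.get(42) + " " + s.get(43)); // true false
    }
}
```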

guava-19.0

Each bloom filter created allocates 69539616 bytes (~66MB):

// program output
wait 5 sec create bf
I just allocated the object [J@47f6473 of type long whose size is 69539616
It's an array of size 8692450

// heap usage before creating the bf
➜  ~ jmap  -histo 3138150|tail -n 3
 800:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@11.0.11)
 801:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@11.0.11)
Total       2074027      394242576
// heap usage after creating the bf
➜  ~ jmap  -histo 3138150|tail -n 3
 800:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@11.0.11)
 801:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@11.0.11)
Total       2074121      463782192

guava-28.0-jre

Memory usage matches guava-19.0:

  • Each bloom filter created allocates 69539616 bytes (66MB).
  • top RES also grows by 66MB.
  • Young-gen usage is unchanged, while old-gen usage grows by about 80MB — roughly 1.2x the actual object size. The extra is wasted heap that can no longer be used: the 66MB long[] is a humongous object, so G1 has to dedicate whole 16MB regions to it.
I just allocated the object [J@2d6d8735 of type long whose size is 69539616
It's an array of size 8692450
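The 80MB / 1.2x figure follows from G1's humongous allocation rule: an object larger than half a region is placed in dedicated contiguous regions, rounded up to a whole number of regions. A quick check with the numbers above:

```java
public class HumongousWaste {
    public static void main(String[] args) {
        long objectSize = 69_539_616L;       // the long[] backing one bloom filter
        long regionSize = 16L * 1024 * 1024; // -XX:G1HeapRegionSize=16m
        // G1 rounds humongous objects up to whole regions
        long regions = (objectSize + regionSize - 1) / regionSize;
        long occupied = regions * regionSize;
        System.out.println(regions + " regions = " + occupied / (1024 * 1024) + " MB");
        System.out.printf("waste factor: %.2f%n", (double) occupied / objectSize); // ~1.21
    }
}
```

The tail of the fifth region (80MB − 66MB ≈ 14MB per filter) is unusable by anything else, which is also why an OOM heap dump can come out several GB smaller than the configured heap.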

➜  ~ jmap  -histo 3205663|head -n 3
 num     #instances         #bytes  class name (module)
-------------------------------------------------------
➜  ~ jmap  -histo 3205663|tail -n 3
 815:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@11.0.11)
 816:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@11.0.11)
Total       2192716      377265048
➜  ~ jmap  -histo 3205663|tail -n 3
 815:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@11.0.11)
 816:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@11.0.11)
Total       2192791      446804664

// detailed breakdown
// heap usage before creating the bf
➜  ~ jstat -gcnew 3205663 1000 3
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT
   0.0 16384.0    0.0 16384.0 15  15 8192.0 1294336.0 147456.0      1    0.011
   0.0 16384.0    0.0 16384.0 15  15 8192.0 1294336.0 147456.0      1    0.011

➜  ~ jstat -gcold 3205663 1000 3
   MC       MU      CCSC     CCSU       OC          OU       YGC    FGC    FGCT     GCT
 17664.0  17119.5   1280.0   1137.4    786432.0    394069.6      1     0    0.000    0.011
 17664.0  17119.5   1280.0   1137.4    786432.0    394069.6      1     0    0.000    0.011
// heap usage after creating the bf
➜  ~ jstat -gcnew 3205663 1000 3
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT
   0.0 16384.0    0.0 16384.0 15  15 8192.0 1294336.0 147456.0      1    0.011
   0.0 16384.0    0.0 16384.0 15  15 8192.0 1294336.0 147456.0      1    0.011
➜  ~ jstat -gcold 3205663 1000 3
   MC       MU      CCSC     CCSU       OC          OU       YGC    FGC    FGCT     GCT
 17664.0  17119.5   1280.0   1137.4    786432.0    475989.6      1     0    0.000    0.011
 17664.0  17119.5   1280.0   1137.4    786432.0    475989.6      1     0    0.000    0.011

bloom-filter-for-scala

  • The bf is allocated in direct (off-heap) memory.
  • Each bloom filter created grows process memory by 66MB.

# memory usage before creating the bf
➜  ~ cat /proc/3243013/statm
1711985 269642 7878 2 0 749802 0

# memory usage after creating the bf
➜  ~ cat /proc/3243013/statm
1728963 286604 7878 2 0 766780 0

# (286604 - 269642) * 4 / 1024.0 = 66MB (statm counts 4KB pages)
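/proc/<pid>/statm reports sizes in pages (4KB on this host), so the resident-set delta converts to MB as:

```java
public class StatmDelta {
    public static void main(String[] args) {
        long rssBefore = 269_642, rssAfter = 286_604; // statm field 2: resident pages
        long pageSizeKb = 4;                          // 4KB pages
        double deltaMb = (rssAfter - rssBefore) * pageSizeKb / 1024.0;
        System.out.printf("%.2f MB%n", deltaMb);      // ~66 MB
    }
}
```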

Adjusting fpp on the Guava bloom filter

Actual allocation sizes as the Guava bloom filter's fpp is adjusted:

  • fpp = 0.01d
    I just allocated the object [J@1ef7fe8e of type long whose size is 69539616
    It's an array of size 8692450
    
  • fpp = 0.02d
    I just allocated the object [J@1ef7fe8e of type long whose size is 59072864
    It's an array of size 7384106
    
  • fpp = 0.1d
    I just allocated the object [J@1ef7fe8e of type long whose size is 34769816
    It's an array of size 4346225
    

Bloom filter performance

Bloom filter rebuild speed

40 keys files, each reporting Max size: 58039999, Curr size: 43529999, Ratio: 0.7499999956926257. The two libraries differ little; the scala version is a bit faster.

  • guava: 7m09s
    2021-06-03 09:03:28,771 [I] org.octopus.mdfs.fuse.MountMDFS:setup:147 [main] -> ################### FSS BOOT ###################
    2021-06-03 09:10:37,219 [I] org.octopus.mdfs.fuse.MDFSFileSystem:<init>:77 [MDFS-Mount] -> ################### MOUNTED ###################
    

    NOTE:

    After rebuilding the bloom filters, top shows the whole Java process using 18GB; after stopping and restarting the storage service, it uses 9.5GB. The storage service should therefore be restarted after a rebuild.

    After rebuild:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    412753 root      20   0  112.7g  18.1g  24656 S   0.0 38.6  47:46.78 jsvc
    
    
    After restart:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    486540 root      20   0  112.7g   9.5g  24608 S   0.0 20.3   0:58.28 jsvc
    
  • scala: 6m16s
    2021-06-03 10:08:02,050 [I] org.octopus.mdfs.fuse.MountMDFS:setup:147 [main] -> ################### FSS BOOT ###################
    2021-06-03 10:14:18,553 [I] org.octopus.mdfs.fuse.MDFSFileSystem:<init>:77 [MDFS-Mount] -> ################### MOUNTED ###################
    

Bloom filter false positive rate

  1. Create a bloom filter with expected size 102400 and fpp 0.01, then insert 1024 distinct keys.
  2. Feed the bloom filter 1024000 keys with no duplicates at all.

  3. The observed false positive rates both match expectations:

    GUAVA :might in bf count 102762
    GUAVA :no exist in bf count 10137238
    SCALA :might in bf count 102948
    SCALA :no exist in bf count 10137052
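As a sanity check on the printed counts: each run performed 102762 + 10137238 = 10240000 membership lookups on keys that were never inserted, so every "might in bf" hit is a false positive and the observed rate should sit close to the configured 0.01:

```java
public class FppCheck {
    public static void main(String[] args) {
        long[][] runs = {
            {102_762, 10_137_238},  // GUAVA: might-in / not-in counts
            {102_948, 10_137_052},  // SCALA: might-in / not-in counts
        };
        for (long[] r : runs) {
            long total = r[0] + r[1];            // 10,240,000 lookups each
            double observed = (double) r[0] / total;
            System.out.printf("observed fpp = %.5f%n", observed); // ~0.0100
        }
    }
}
```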
    

Compatibility test

  1. Initialize mdfs with the guava bloom filter, start mdfs, write 1GB of data, stop mdfs, and record each node's data-block md5 sums to /tmp/x_1.
  2. Start mdfs, then kill it with killall -9 jsvc so that the next restart triggers a bloom filter rebuild.

  3. Switch the configuration to the SCALA bloom filter and start mdfs. Write the same 1GB of data again, stop mdfs, and record each node's data-block md5 sums to /tmp/x_2.

  4. Finally, compare /tmp/x_1 and /tmp/x_2 on every node:

    [root@node1 hui]# ssh node3 md5sum  /tmp/x*
    6c7523e1961ccc474e4adefb90a37986  /tmp/x_1
    6c7523e1961ccc474e4adefb90a37986  /tmp/x_2
    [root@node1 hui]# ssh node2 md5sum  /tmp/x*
    ff6053978148dbc81feded2745507557  /tmp/x_1
    ff6053978148dbc81feded2745507557  /tmp/x_2
    [root@node1 hui]# ssh node1 md5sum  /tmp/x*
    27e5e61911a20d6a3b5892c7eb616c1b  /tmp/x_1
    27e5e61911a20d6a3b5892c7eb616c1b  /tmp/x_2
    [root@node1 hui]# tail -f /tmp/x_1
    0523cf6e9afea4823685f96d69e0b6b7  /data/octopus/chunks/f/9/1/0e797bfb-d5a3-11eb-8caa-08002758d32b_2.block
    3cb40c648b7fc548d7106961c7a7c498  /data/octopus/chunks/f/9/1/0e797bfb-d5a3-11eb-8caa-08002758d32b_3.block
    1b6d69764509e7d4327ac0ba03e7e9fe  /data/octopus/chunks/f/9/b/fe1550fd-d5a2-11eb-8caa-08002758d32b_0.block
    ....
    
    
  5. This verifies that swapping the bloom filter library does not affect dedup of already-stored data.

Memory usage after the storage service loads the bloom filters, per library

Memory usage comparison after startup, once the bloom filter cache files have been loaded:

  • guava
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1134604 root      20   0  125.9g  10.2g  24752 S   0.0 21.8   0:37.58 jsvc
    
    [12.652s][info][gc             ] GC(28) Pause Cleanup 13230M->13230M(16384M) 0.275ms
    
  • scala
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1222234 root      20   0  122.2g   8.8g  24812 S   0.0 18.8   1:23.47 jsvc
    
    [28.336s][info][gc             ] GC(9) Pause Cleanup 861M->861M(5120M) 0.072ms
    

Memory usage after a read test, per library

Actual memory usage of the storage service; the data-block storage service (dse) is given 16GB of memory.

  • guava
# dse memory after the read test

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
1129494 root      20   0  127.1g  12.1g  25132 S   0.3 25.8   2:22.28 jsvc

# the whole read test triggered 4 full GCs
[root@node2 ~]# /usr/lib/jvm/java-11/bin/jstat -gccause 1129494  1000
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT    CGC    CGCT     GCT    LGCC                 GCC
  0.00   0.00  59.26  87.74  96.42  89.33     65    1.174     4    0.235    68    0.051    1.460 G1 Evacuation Pause  No GC
  0.00   0.00  59.26  87.74  96.42  89.33     65    1.174     4    0.235    68    0.051    1.460 G1 Evacuation Pause  No GC
  0.00   0.00  59.26  87.74  96.42  89.33     65    1.174     4    0.235    68    0.051    1.460 G1 Evacuation Pause  No GC

# humongous objects in the Java heap keep growing
[0.136s][info][gc,heap        ] GC(0) Humongous regions: 0->0
[2.036s][info][gc,heap        ] GC(1) Humongous regions: 0->0
[10.636s][info][gc,heap        ] GC(2) Humongous regions: 250->250
[11.383s][info][gc,heap        ] GC(3) Humongous regions: 412->412
[11.627s][info][gc,heap        ] GC(5) Humongous regions: 484->484
[11.955s][info][gc,heap        ] GC(7) Humongous regions: 572->572
[12.040s][info][gc,heap        ] GC(9) Humongous regions: 594->594
[12.116s][info][gc,heap        ] GC(11) Humongous regions: 612->612
[12.175s][info][gc,heap        ] GC(13) Humongous regions: 626->626
[12.238s][info][gc,heap        ] GC(15) Humongous regions: 640->640
[12.334s][info][gc,heap        ] GC(17) Humongous regions: 668->668
[12.397s][info][gc,heap        ] GC(19) Humongous regions: 676->676
[12.488s][info][gc,heap        ] GC(21) Humongous regions: 706->706
[12.553s][info][gc,heap        ] GC(23) Humongous regions: 716->716
[12.661s][info][gc,heap        ] GC(25) Humongous regions: 750->750
[208.162s][info][gc,heap        ] GC(27) Humongous regions: 772->772
...
...
...
[280.593s][info][gc,heap        ] GC(104) Humongous regions: 772->772

## region usage after the read test
[280.593s][info][gc,heap        ] GC(104) Eden regions: 0->0(51)
[280.593s][info][gc,heap        ] GC(104) Survivor regions: 0->0(7)
[280.593s][info][gc,heap        ] GC(104) Old regions: 252->100
[280.593s][info][gc,heap        ] GC(104) Humongous regions: 772->772

## Full gc
[root@node2 ~]# grep 'Pause Full' /tmp/dse.log
[0.110s][info][gc,start     ] GC(0) Pause Full (System.gc())
[0.136s][info][gc             ] GC(0) Pause Full (System.gc()) 30M->18M(16384M) 26.765ms
[252.043s][info][gc,start       ] GC(59) Pause Full (G1 Evacuation Pause)
[252.123s][info][gc             ] GC(59) Pause Full (G1 Evacuation Pause) 16332M->13616M(16384M) 80.576ms
[266.588s][info][gc,start       ] GC(82) Pause Full (G1 Evacuation Pause)
[266.651s][info][gc             ] GC(82) Pause Full (G1 Evacuation Pause) 16094M->13616M(16384M) 63.386ms
[280.530s][info][gc,start       ] GC(104) Pause Full (G1 Evacuation Pause)
[280.593s][info][gc             ] GC(104) Pause Full (G1 Evacuation Pause) 16065M->13616M(16384M) 63.244ms
  • scala, with the octopus storage service using 16GB of memory
# The scala bloom filter's memory is allocated by the OS, and dse keeps its 16GB heap, so GC activity is low.
# Only 17 young GCs occurred.
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
1016318 root      20   0  134.5g  19.6g  25100 S   0.0 41.7   2:58.69 jsvc

[root@node2 ~]# /usr/lib/jvm/java-11/bin/jstat -gccause 1016318  1000
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT    CGC    CGCT     GCT    LGCC                 GCC
  0.00 100.00  72.37  39.48  97.07  91.88     17    0.376     1    0.026     2    0.002    0.404 G1 Evacuation Pause  No GC
  0.00 100.00  72.37  39.48  97.07  91.88     17    0.376     1    0.026     2    0.002    0.404 G1 Evacuation Pause  No GC
  0.00 100.00  72.37  39.48  97.07  91.88     17    0.376     1    0.026     2    0.002    0.404 G1 Evacuation Pause  No GC
  0.00 100.00  72.37  39.48  97.07  91.88     17    0.376     1    0.026     2    0.002    0.404 G1 Evacuation Pause  No GC

## region usage after the read test
[255.533s][info][gc,heap        ] GC(18) Eden regions: 578->0(577)
[255.533s][info][gc,heap        ] GC(18) Survivor regions: 36->37(77)
[255.533s][info][gc,heap        ] GC(18) Old regions: 153->153
[255.533s][info][gc,heap        ] GC(18) Humongous regions: 0->0

  • Since the scala version's memory is allocated directly by the OS, reduce the octopus storage service's heap to 5GB:
[root@node2 ~]# ps -ef |grep dse
root     1024630       1  0 15:51 ?        00:00:00 jsvc.exec -server -outfile /var/log/octopus/dse.out -errfile &1 -pidfile /var/run/octopus-dse.pid -home /usr/lib/jvm/java-11 -cp /usr/share/octopus/lib/* -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Dlog4j.configurationFile=/etc/octopus/log4j2-octopus.xml -Djgroups.logging.log_factory_class=org.octopus.logging.JGroupsLogger -Xmx5120M -Xms5120M -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1HeapWastePercent=10 -XX:HeapDumpPath=/data/octopus/oom/storage -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:+PrintGC -XX:+PrintGCDetails -Xloggc:/tmp/dse.log -agentpath:/usr/share/octopus/bin/libemmet.so org.octopus.mdfs.network.ClusteredHCServer -c /etc/octopus/dse-cfg.xml
root     1024632 1024630 39 15:51 ?        00:03:05 jsvc.exec -server -outfile /var/log/octopus/dse.out -errfile &1 -pidfile /var/run/octopus-dse.pid -home /usr/lib/jvm/java-11 -cp /usr/share/octopus/lib/* -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -Dlog4j.configurationFile=/etc/octopus/log4j2-octopus.xml -Djgroups.logging.log_factory_class=org.octopus.logging.JGroupsLogger -Xmx5120M -Xms5120M -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1HeapWastePercent=10 -XX:HeapDumpPath=/data/octopus/oom/storage -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:+PrintGC -XX:+PrintGCDetails -Xloggc:/tmp/dse.log -agentpath:/usr/share/octopus/bin/libemmet.so org.octopus.mdfs.network.ClusteredHCServer -c /etc/octopus/dse-cfg.xml

# dse memory after the read test. No full GC occurred, though mixed GCs did.
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
1024632 root      20   0  123.2g  12.3g  25132 S   0.0 26.3   3:05.43 jsvc

[root@node2 ~]# /usr/lib/jvm/java-11/bin/jstat -gccause 1024632  1000
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT    CGC    CGCT     GCT    LGCC                 GCC
  0.00 100.00  25.45  66.03  97.01  91.86     49    0.938     1    0.014    16    0.015    0.967 G1 Evacuation Pause  No GC
  0.00 100.00  25.45  66.03  97.01  91.86     49    0.938     1    0.014    16    0.015    0.967 G1 Evacuation Pause  No GC
  0.00 100.00  25.45  66.03  97.01  91.86     49    0.938     1    0.014    16    0.015    0.967 G1 Evacuation Pause  No GC
  0.00 100.00  25.45  66.03  97.01  91.86     49    0.938     1    0.014    16    0.015    0.967 G1 Evacuation Pause  No GC

[162.416s][info][gc,phases      ] GC(57)   Post Evacuate Collection Set: 0.4ms
[162.416s][info][gc,phases      ] GC(57)   Other: 0.1ms
[162.416s][info][gc,heap        ] GC(57) Eden regions: 74->0(51)
[162.416s][info][gc,heap        ] GC(57) Survivor regions: 2->10(10)
[162.416s][info][gc,heap        ] GC(57) Old regions: 146->171
[162.416s][info][gc,heap        ] GC(57) Humongous regions: 0->0


# Mixed GC
[root@node2 ~]# grep 'Pause Young (Mixed)' /tmp/dse.log.2
[146.911s][info][gc,start       ] GC(26) Pause Young (Mixed) (G1 Evacuation Pause)
[146.923s][info][gc             ] GC(26) Pause Young (Mixed) (G1 Evacuation Pause) 2390M->2330M(5120M) 11.833ms
[149.069s][info][gc,start       ] GC(30) Pause Young (Mixed) (G1 Evacuation Pause)
[149.081s][info][gc             ] GC(30) Pause Young (Mixed) (G1 Evacuation Pause) 2482M->2310M(5120M) 11.751ms
[151.064s][info][gc,start       ] GC(34) Pause Young (Mixed) (G1 Evacuation Pause)
[151.076s][info][gc             ] GC(34) Pause Young (Mixed) (G1 Evacuation Pause) 2626M->2334M(5120M) 12.391ms
[152.981s][info][gc,start       ] GC(38) Pause Young (Mixed) (G1 Evacuation Pause)
[152.994s][info][gc             ] GC(38) Pause Young (Mixed) (G1 Evacuation Pause) 2626M->2334M(5120M) 12.527ms
[155.814s][info][gc,start       ] GC(44) Pause Young (Mixed) (G1 Evacuation Pause)
[155.828s][info][gc             ] GC(44) Pause Young (Mixed) (G1 Evacuation Pause) 2802M->2358M(5120M) 13.447ms
[158.460s][info][gc,start       ] GC(50) Pause Young (Mixed) (G1 Evacuation Pause)
[158.473s][info][gc             ] GC(50) Pause Young (Mixed) (G1 Evacuation Pause) 2778M->2366M(5120M) 12.995ms
[161.027s][info][gc,start       ] GC(56) Pause Young (Mixed) (G1 Evacuation Pause)
[161.041s][info][gc             ] GC(56) Pause Young (Mixed) (G1 Evacuation Pause) 2806M->2326M(5120M) 13.409ms
[583.224s][info][gc,start       ] GC(67) Pause Young (Mixed) (G1 Evacuation Pause)
[583.236s][info][gc             ] GC(67) Pause Young (Mixed) (G1 Evacuation Pause) 2438M->2230M(5120M) 12.031ms
[589.197s][info][gc,start       ] GC(73) Pause Young (Mixed) (G1 Evacuation Pause)
[589.208s][info][gc             ] GC(73) Pause Young (Mixed) (G1 Evacuation Pause) 2982M->2426M(5120M) 11.158ms
[589.436s][info][gc,start       ] GC(74) Pause Young (Mixed) (G1 Evacuation Pause)
[589.448s][info][gc             ] GC(74) Pause Young (Mixed) (G1 Evacuation Pause) 2646M->2510M(5120M) 12.461ms
[592.562s][info][gc,start       ] GC(80) Pause Young (Mixed) (G1 Evacuation Pause)
[592.575s][info][gc             ] GC(80) Pause Young (Mixed) (G1 Evacuation Pause) 2956M->2442M(5120M) 13.475ms
[592.875s][info][gc,start       ] GC(81) Pause Young (Mixed) (G1 Evacuation Pause)
[592.889s][info][gc             ] GC(81) Pause Young (Mixed) (G1 Evacuation Pause) 2664M->2574M(5120M) 14.505ms
[595.598s][info][gc,start       ] GC(87) Pause Young (Mixed) (G1 Evacuation Pause)
[595.611s][info][gc             ] GC(87) Pause Young (Mixed) (G1 Evacuation Pause) 2850M->2342M(5120M) 13.480ms
[595.811s][info][gc,start       ] GC(88) Pause Young (Mixed) (G1 Evacuation Pause)
[595.823s][info][gc             ] GC(88) Pause Young (Mixed) (G1 Evacuation Pause) 2562M->2510M(5120M) 11.801ms
[598.483s][info][gc,start       ] GC(94) Pause Young (Mixed) (G1 Evacuation Pause)
[598.498s][info][gc             ] GC(94) Pause Young (Mixed) (G1 Evacuation Pause) 2890M->2410M(5120M) 15.002ms
[598.653s][info][gc,start       ] GC(95) Pause Young (Mixed) (G1 Evacuation Pause)
[598.666s][info][gc             ] GC(95) Pause Young (Mixed) (G1 Evacuation Pause) 2630M->2534M(5120M) 13.046ms
[601.089s][info][gc,start       ] GC(101) Pause Young (Mixed) (G1 Evacuation Pause)
[601.103s][info][gc             ] GC(101) Pause Young (Mixed) (G1 Evacuation Pause) 2818M->2318M(5120M) 14.206ms
[604.047s][info][gc,start       ] GC(108) Pause Young (Mixed) (G1 Evacuation Pause)
[604.060s][info][gc             ] GC(108) Pause Young (Mixed) (G1 Evacuation Pause) 2870M->2342M(5120M) 12.531ms
[604.242s][info][gc,start       ] GC(109) Pause Young (Mixed) (G1 Evacuation Pause)
[604.255s][info][gc             ] GC(109) Pause Young (Mixed) (G1 Evacuation Pause) 2564M->2506M(5120M) 12.905ms
[606.729s][info][gc,start       ] GC(115) Pause Young (Mixed) (G1 Evacuation Pause)
[606.743s][info][gc             ] GC(115) Pause Young (Mixed) (G1 Evacuation Pause) 2778M->2326M(5120M) 14.019ms
[609.866s][info][gc,start       ] GC(122) Pause Young (Mixed) (G1 Evacuation Pause)
[609.877s][info][gc             ] GC(122) Pause Young (Mixed) (G1 Evacuation Pause) 2778M->2214M(5120M) 11.045ms
[610.337s][info][gc,start       ] GC(123) Pause Young (Mixed) (G1 Evacuation Pause)
[610.346s][info][gc             ] GC(123) Pause Young (Mixed) (G1 Evacuation Pause) 2435M->2318M(5120M) 8.438ms
