rocksdb-doc-cn

What changed in the RocksDB community code, 2024-01-01 to 01-21

Some Java API enhancements, tooling improvements, and code cleanups are not listed here.

This assert can catch the bug: a refactor changed a field, and during traversal the corresponding key was not updated in time.

Iterator traversal and ParseKey are now folded into a single helper function (a rough sketch of the idea follows).
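A self-contained sketch of the pattern, not the actual RocksDB code: SimpleIter, ParsedKey, and AdvanceAndParse are made-up names standing in for the real iterator and ParseInternalKey. Once advancing and parsing live in one helper, a call site can no longer move the iterator and forget to re-parse the key.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct ParsedKey {
  std::string user_key;
  uint64_t sequence = 0;
};

// Made-up stand-in for an internal iterator.
struct SimpleIter {
  std::vector<std::string> keys;
  size_t pos = 0;
  bool Valid() const { return pos < keys.size(); }
  void Next() { ++pos; }
  const std::string& key() const { return keys[pos]; }
};

// Made-up stand-in for ParseInternalKey(): split "user_key@seq".
bool ParseKey(const std::string& raw, ParsedKey* out) {
  const size_t at = raw.find('@');
  if (at == std::string::npos) return false;
  out->user_key = raw.substr(0, at);
  out->sequence = std::stoull(raw.substr(at + 1));
  return true;
}

// The refactor: advance and parse in one place, so the parsed key can never
// go stale relative to the iterator position.
bool AdvanceAndParse(SimpleIter* it, ParsedKey* out) {
  it->Next();
  if (!it->Valid()) return false;
  return ParseKey(it->key(), out);
}

int main() {
  SimpleIter it{{"a@1", "b@2", "c@3"}};
  ParsedKey k;
  assert(ParseKey(it.key(), &k));  // parse the initial position
  while (AdvanceAndParse(&it, &k)) {
    // k always matches the current iterator position here.
  }
  return 0;
}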

The change itself is quite simple:

  void SaveBlobFilesTo(VersionStorageInfo* vstorage) const {
    assert(vstorage);

    assert(base_vstorage_);
    vstorage->ReserveBlob(base_vstorage_->GetBlobFiles().size() +
                          mutable_blob_file_metas_.size());

    const uint64_t oldest_blob_file_with_linked_ssts =
        GetMinOldestBlobFileNumber();

+    // If there are no blob files with linked SSTs, meaning that there are no
+    // valid blob files
+    if (oldest_blob_file_with_linked_ssts == kInvalidBlobFileNumber) {
+      return;
+    }
+

This case was not handled before: without this check, 0 (kInvalidBlobFileNumber) would also get merged into the blob file metadata, creating a spurious link.

This one changes the behavior of GetPendingCompactionBytesForCompactionSpeedup.

The original logic:

      } else if (vstorage->estimated_compaction_needed_bytes() >=
                 GetPendingCompactionBytesForCompactionSpeedup(
                     mutable_cf_options, vstorage)) {
        write_controller_token_ =
            write_controller->GetCompactionPressureToken();
        ROCKS_LOG_INFO(
            ioptions_.logger,
            "[%s] Increasing compaction threads because of estimated pending "
            "compaction "
            "bytes %" PRIu64,
            name_.c_str(), vstorage->estimated_compaction_needed_bytes());
      } else {

Clearly, the smaller the value returned by GetPendingCompactionBytesForCompactionSpeedup, the earlier compaction speedup kicks in, which helps sustain write throughput.

uint64_t GetPendingCompactionBytesForCompactionSpeedup(
    const MutableCFOptions& mutable_cf_options,
    const VersionStorageInfo* vstorage) {
  // Compaction debt relatively large compared to the stable (bottommost) data
  // size indicates compaction fell behind.
  const uint64_t kBottommostSizeDivisor = 8;
  // Meaningful progress toward the slowdown trigger is another good indication.
  const uint64_t kSlowdownTriggerDivisor = 4;

  uint64_t bottommost_files_size = 0;
  for (const auto& level_and_file : vstorage->BottommostFiles()) {
    bottommost_files_size += level_and_file.second->fd.GetFileSize();
  }

  // Slowdown trigger might be zero but that means compaction speedup should
  // always happen (undocumented/historical), so no special treatment is needed.
  uint64_t slowdown_threshold =
      mutable_cf_options.soft_pending_compaction_bytes_limit /
      kSlowdownTriggerDivisor;

  // Size of zero, however, should not be used to decide to speedup compaction.
  if (bottommost_files_size == 0) {
    return slowdown_threshold;
  }

  uint64_t size_threshold = bottommost_files_size / kBottommostSizeDivisor;
  return std::min(size_threshold, slowdown_threshold);
}

In the original algorithm the second-to-last line was a multiplication; it has been changed to a division, so the resulting number is smaller...

How were these values chosen? Purely by experience and gut feeling.
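Just to gauge the magnitude, a back-of-the-envelope check assuming the common 64 GiB default for soft_pending_compaction_bytes_limit (the real value is per-column-family configuration): dividing by kSlowdownTriggerDivisor gives a 16 GiB threshold, whereas multiplying by the same constant would give 256 GiB, so the compaction pressure token now kicks in much earlier.

#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t kGiB = 1ull << 30;
  // Assumed default; check your own soft_pending_compaction_bytes_limit.
  const uint64_t soft_limit = 64 * kGiB;
  const uint64_t kSlowdownTriggerDivisor = 4;
  std::printf("divide:   %llu GiB\n",
              (unsigned long long)(soft_limit / kSlowdownTriggerDivisor / kGiB));
  std::printf("multiply: %llu GiB\n",
              (unsigned long long)(soft_limit * kSlowdownTriggerDivisor / kGiB));
  return 0;
}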

The previous logic could silently skip some corrupted files; now it reports an error outright.
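A minimal sketch of that behavioral change, using entirely hypothetical names (CheckFile, VerifyAll) and a toy Status type rather than rocksdb::Status: instead of skipping a file that fails its check, the loop now propagates the error so corruption is surfaced to the caller.

#include <string>
#include <vector>

struct Status {
  bool ok_ = true;
  std::string msg_;
  static Status OK() { return {}; }
  static Status Corruption(std::string m) { return {false, std::move(m)}; }
  bool ok() const { return ok_; }
};

// Hypothetical per-file check.
Status CheckFile(const std::string& name) {
  if (name == "bad.sst") return Status::Corruption(name + " is corrupted");
  return Status::OK();
}

Status VerifyAll(const std::vector<std::string>& files) {
  for (const auto& f : files) {
    Status s = CheckFile(f);
    // Old behavior (roughly): `if (!s.ok()) continue;` -- the bad file was
    // silently dropped from consideration.
    if (!s.ok()) {
      return s;  // new behavior: surface the corruption to the caller
    }
  }
  return Status::OK();
}

int main() {
  Status s = VerifyAll({"ok1.sst", "bad.sst", "ok2.sst"});
  return s.ok() ? 0 : 1;
}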

These two changes refactor FilePrefetchBuffer; they are mostly readability improvements, with no major changes to the data structures.

If you have suggestions, questions, or spot any mistakes, please leave a comment! Many thanks, your comments really matter! Likes, bookmarks, and shares are also much appreciated! If you think this was worth reading, tips are welcome: begging online, WeChat transfer.