[Serialization] Don't read all declaration IDs eagerly when merging the tables #95506

Open · wants to merge 1 commit into base: main
Conversation

ChuanqiXu9
Member

See the post-commit discussion in #92083 for the rationale.

Previously, when we merged the lookup tables, we read every table completely to get the data. The page above shows that this can be problematic in cases where there are a lot of DeclIDs to read.

In this patch, we mitigate the problem by not reading the contents of the merged tables eagerly. The trade-off is additional memory usage.
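The lazy-merge idea described above can be sketched in a small, self-contained form (all names here, such as `LazyMergedTable`, are illustrative stand-ins, not the patch's actual types): at merge time only the keys and the locations of their serialized payloads are recorded, and a payload is decoded only when its key is actually looked up.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative sketch of the patch's idea: record where each key's data
// lives, but decode it only on first lookup instead of at merge time.
struct LazyMergedTable {
  // Materialized (already-decoded) values.
  std::unordered_map<std::string, std::vector<int>> Data;
  // Raw, not-yet-decoded payloads per key (stand-in for UnreadDataInfo).
  std::unordered_map<std::string, std::vector<std::vector<int>>> Unread;

  // "Merging" a table just records the raw payload; no decoding happens.
  void merge(const std::string &Key, std::vector<int> RawPayload) {
    (void)Data[Key]; // make sure every merged key is visible immediately
    Unread[Key].push_back(std::move(RawPayload));
  }

  // Lookup decodes lazily: pending payloads are folded into Data first.
  const std::vector<int> &find(const std::string &Key) {
    auto It = Unread.find(Key);
    if (It != Unread.end()) {
      auto &Dest = Data[Key];
      for (auto &Payload : It->second)
        Dest.insert(Dest.end(), Payload.begin(), Payload.end());
      Unread.erase(It);
    }
    return Data[Key];
  }
};
```

The upside mirrors the patch: keys that are never queried never pay the decode cost. The downside also mirrors it: the raw payloads (and, in the real patch, the merged on-disk tables backing them) must stay alive until they are consumed.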

@llvmbot added the clang (Clang issues not falling into any other category) and clang:modules (C++20 modules and Clang Header Modules) labels on Jun 14, 2024
@llvmbot
Member

llvmbot commented Jun 14, 2024

@llvm/pr-subscribers-clang-modules

@llvm/pr-subscribers-clang

Author: Chuanqi Xu (ChuanqiXu9)

Full diff: https://github.com/llvm/llvm-project/pull/95506.diff

1 file affected:

  • (modified) clang/lib/Serialization/MultiOnDiskHashTable.h (+54-10)
diff --git a/clang/lib/Serialization/MultiOnDiskHashTable.h b/clang/lib/Serialization/MultiOnDiskHashTable.h
index a0d75ec3a9e76..f98394e904549 100644
--- a/clang/lib/Serialization/MultiOnDiskHashTable.h
+++ b/clang/lib/Serialization/MultiOnDiskHashTable.h
@@ -73,6 +73,51 @@ template<typename Info> class MultiOnDiskHashTable {
   struct MergedTable {
     std::vector<file_type> Files;
     llvm::DenseMap<internal_key_type, data_type> Data;
+
+    struct UnreadDataInfo {
+      Info *InfoObj;
+      const unsigned char *start;
+      unsigned DataLen;
+    };
+
+    llvm::DenseMap<internal_key_type, llvm::SmallVector<UnreadDataInfo, 16>>
+        UnReadData;
+
+    std::vector<OnDiskTable *> MergedOnDiskTables;
+
+    void clear() {
+      for (OnDiskTable *ODT : MergedOnDiskTables)
+        delete ODT;
+
+      MergedOnDiskTables.clear();
+    }
+
+    void readAll() {
+      for (const auto &Iter : UnReadData) {
+        internal_key_type Key = Iter.first;
+        data_type_builder ValueBuilder(Data[Key]);
+
+        for (const UnreadDataInfo &I : Iter.second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+      }
+
+      UnReadData.clear();
+      clear();
+    }
+
+    data_type find(internal_key_type Key) {
+      auto UnreadIter = UnReadData.find(Key);
+      if (UnreadIter != UnReadData.end()) {
+        data_type_builder ValueBuilder(Data[Key]);
+        for (auto &I : UnreadIter->second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+        UnReadData.erase(UnreadIter);
+      }
+
+      return Data[Key];
+    }
+
+    ~MergedTable() { clear(); }
   };
 
   using Table = llvm::PointerUnion<OnDiskTable *, MergedTable *>;
@@ -159,13 +204,14 @@ template<typename Info> class MultiOnDiskHashTable {
         // FIXME: Don't rely on the OnDiskHashTable format here.
         auto L = InfoObj.ReadKeyDataLength(LocalPtr);
         const internal_key_type &Key = InfoObj.ReadKey(LocalPtr, L.first);
-        data_type_builder ValueBuilder(Merged->Data[Key]);
-        InfoObj.ReadDataInto(Key, LocalPtr + L.first, L.second,
-                             ValueBuilder);
+        // Make sure Merged->Data contains every key.
+        (void)Merged->Data[Key];
+        Merged->UnReadData[Key].push_back(
+            {&InfoObj, LocalPtr + L.first, L.second});
       }
 
       Merged->Files.push_back(ODT->File);
-      delete ODT;
+      Merged->MergedOnDiskTables.push_back(ODT);
     }
 
     Tables.clear();
@@ -239,11 +285,8 @@ template<typename Info> class MultiOnDiskHashTable {
     internal_key_type Key = Info::GetInternalKey(EKey);
     auto KeyHash = Info::ComputeHash(Key);
 
-    if (MergedTable *M = getMergedTable()) {
-      auto It = M->Data.find(Key);
-      if (It != M->Data.end())
-        Result = It->second;
-    }
+    if (MergedTable *M = getMergedTable())
+      Result = M->find(Key);
 
     data_type_builder ResultBuilder(Result);
 
@@ -268,6 +311,7 @@ template<typename Info> class MultiOnDiskHashTable {
       removeOverriddenTables();
 
     if (MergedTable *M = getMergedTable()) {
+      M->readAll();
       for (auto &KV : M->Data)
         Info::MergeDataInto(KV.second, ResultBuilder);
     }
@@ -327,7 +371,7 @@ class MultiOnDiskHashTableGenerator {
         // Add all merged entries from Base to the generator.
         for (auto &KV : Merged->Data) {
           if (!Gen.contains(KV.first, Info))
-            Gen.insert(KV.first, Info.ImportData(KV.second), Info);
+            Gen.insert(KV.first, Info.ImportData(Merged->find(KV.first)), Info);
         }
       } else {
         Writer.write<uint32_t>(0);

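One consequence of the lazy scheme visible in the diff's `findAll` path: before enumerating `Merged->Data`, every deferred payload must be materialized (`readAll()`), because full iteration cannot skip unread keys. That pattern can be illustrated in miniature (all names here, such as `DeferredMap`, are illustrative, not from the patch):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Deferred-decode map: values start as thunks and are forced either
// one at a time (point lookup) or all at once (full enumeration).
struct DeferredMap {
  std::map<int, std::vector<int>> Data;
  std::map<int, std::function<std::vector<int>()>> Pending;

  void addLazy(int Key, std::function<std::vector<int>()> Thunk) {
    (void)Data[Key]; // the key itself is visible immediately
    Pending[Key] = std::move(Thunk);
  }

  // Point lookup: force only this key (cheap when few keys are touched).
  const std::vector<int> &find(int Key) {
    auto It = Pending.find(Key);
    if (It != Pending.end()) {
      Data[Key] = It->second();
      Pending.erase(It);
    }
    return Data[Key];
  }

  // Full enumeration: everything must be forced first, like readAll().
  void readAll() {
    for (auto &KV : Pending)
      Data[KV.first] = KV.second();
    Pending.clear();
  }
};
```

This also shows why the win is workload-dependent, which is the crux of the review discussion below: if most keys end up enumerated anyway, the deferral buys little and only adds bookkeeping.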
@ilya-biryukov
Contributor

ilya-biryukov commented Jun 17, 2024

@ChuanqiXu9 are you still planning to chase this given that fixing the hashing function would fix performance for the aforementioned patch?

The trade-off we are making here is hard to assess without benchmarks that show how much latency we win and how much more memory we spend. I am not sure if doing the benchmarks would be high on anyone's priority list, but maybe I'm wrong?

@ChuanqiXu9
Member Author

> @ChuanqiXu9 are you still planning to chase this given that fixing the hashing function would fix performance for the aforementioned patch?
>
> The trade-off we are making here is hard to assess without benchmarks that show how much latency we win and how much more memory we spend. I am not sure if doing the benchmarks would be high on anyone's priority list, but maybe I'm wrong?

Yeah, I think this patch may be conceptually good except for the extra memory use. As for benchmarking, I tried it locally but didn't find observable effects. So I'd like to land this after the 19 branch is cut to give it more baking time.

@ilya-biryukov
Contributor

> Yeah, I think this patch may be conceptually good except for the extra memory use. As for benchmarking, I tried it locally but didn't find observable effects. So I'd like to land this after the 19 branch is cut to give it more baking time.

Do you mean that there are no large increases in memory use, or that things are not getting faster either?
If it's the latter, I would vouch for keeping the code as-is to avoid adding extra complexity.
If this does lead to significant performance wins, though, it's a different story.

@ChuanqiXu9
Member Author

> Yeah, I think this patch may be conceptually good except for the extra memory use. As for benchmarking, I tried it locally but didn't find observable effects. So I'd like to land this after the 19 branch is cut to give it more baking time.
>
> Do you mean that there are no large increases in memory use, or that things are not getting faster either? If it's the latter, I would vouch for keeping the code as-is to avoid adding extra complexity. If this does lead to significant performance wins, though, it's a different story.

Yeah, it is the latter. But it might not be too complex, since all of its uses are limited to this file. My point is the opposite: if it doesn't lead to a significant regression, we should do it. It is rare that a single patch achieves heroic optimizations; sometimes the performance gains come from the long tail.

@ilya-biryukov
Contributor

I feel that the increased complexity is not a good trade-off if it does not significantly improve performance.
However, I don't feel too strongly about it, so I suggest you get a second opinion if you feel strongly that we should land this.

@ChuanqiXu9
Member Author

> I feel that the increased complexity is not a good trade-off if it does not significantly improve performance. However, I don't feel too strongly about it, so I suggest you get a second opinion if you feel strongly that we should land this.

Yeah, and I don't have a strong opinion either.
