[Serialization] Don't read all declaration IDs eagerly when merging the tables #95506
base: main
Conversation
See the post-commit message in llvm#92083 for the rationale. Previously, when we merged the lookup tables, we read them completely to get the data. The discussion above shows that this can be problematic in cases where there are a lot of DeclIDs to read. This patch mitigates the problem by not reading the contents of the merged tables eagerly; in exchange, we may pay with additional memory usage.
@llvm/pr-subscribers-clang-modules @llvm/pr-subscribers-clang

Author: Chuanqi Xu (ChuanqiXu9)

Changes

See the post-commit message in llvm#92083 for the rationale. Previously, when we merged the lookup tables, we read them completely to get the data. The discussion above shows that this can be problematic in cases where there are a lot of DeclIDs to read. This patch mitigates the problem by not reading the contents of the merged tables eagerly; in exchange, we may pay with additional memory usage.

Full diff: https://github.com/llvm/llvm-project/pull/95506.diff

1 File Affected:
diff --git a/clang/lib/Serialization/MultiOnDiskHashTable.h b/clang/lib/Serialization/MultiOnDiskHashTable.h
index a0d75ec3a9e76..f98394e904549 100644
--- a/clang/lib/Serialization/MultiOnDiskHashTable.h
+++ b/clang/lib/Serialization/MultiOnDiskHashTable.h
@@ -73,6 +73,51 @@ template<typename Info> class MultiOnDiskHashTable {
struct MergedTable {
std::vector<file_type> Files;
llvm::DenseMap<internal_key_type, data_type> Data;
+
+ struct UnreadDataInfo {
+ Info *InfoObj;
+ const unsigned char *start;
+ unsigned DataLen;
+ };
+
+ llvm::DenseMap<internal_key_type, llvm::SmallVector<UnreadDataInfo, 16>>
+ UnReadData;
+
+ std::vector<OnDiskTable *> MergedOnDiskTables;
+
+ void clear() {
+ for (OnDiskTable *ODT : MergedOnDiskTables)
+ delete ODT;
+
+ MergedOnDiskTables.clear();
+ }
+
+ void readAll() {
+ for (const auto &Iter : UnReadData) {
+ internal_key_type Key = Iter.first;
+ data_type_builder ValueBuilder(Data[Key]);
+
+ for (const UnreadDataInfo &I : Iter.second)
+ I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+ }
+
+ UnReadData.clear();
+ clear();
+ }
+
+ data_type find(internal_key_type Key) {
+ auto UnreadIter = UnReadData.find(Key);
+ if (UnreadIter != UnReadData.end()) {
+ data_type_builder ValueBuilder(Data[Key]);
+ for (auto &I : UnreadIter->second)
+ I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+ UnReadData.erase(UnreadIter);
+ }
+
+ return Data[Key];
+ }
+
+ ~MergedTable() { clear(); }
};
using Table = llvm::PointerUnion<OnDiskTable *, MergedTable *>;
@@ -159,13 +204,14 @@ template<typename Info> class MultiOnDiskHashTable {
// FIXME: Don't rely on the OnDiskHashTable format here.
auto L = InfoObj.ReadKeyDataLength(LocalPtr);
const internal_key_type &Key = InfoObj.ReadKey(LocalPtr, L.first);
- data_type_builder ValueBuilder(Merged->Data[Key]);
- InfoObj.ReadDataInto(Key, LocalPtr + L.first, L.second,
- ValueBuilder);
+ // Make sure Merged->Data contains every key.
+ (void)Merged->Data[Key];
+ Merged->UnReadData[Key].push_back(
+ {&InfoObj, LocalPtr + L.first, L.second});
}
Merged->Files.push_back(ODT->File);
- delete ODT;
+ Merged->MergedOnDiskTables.push_back(ODT);
}
Tables.clear();
@@ -239,11 +285,8 @@ template<typename Info> class MultiOnDiskHashTable {
internal_key_type Key = Info::GetInternalKey(EKey);
auto KeyHash = Info::ComputeHash(Key);
- if (MergedTable *M = getMergedTable()) {
- auto It = M->Data.find(Key);
- if (It != M->Data.end())
- Result = It->second;
- }
+ if (MergedTable *M = getMergedTable())
+ Result = M->find(Key);
data_type_builder ResultBuilder(Result);
@@ -268,6 +311,7 @@ template<typename Info> class MultiOnDiskHashTable {
removeOverriddenTables();
if (MergedTable *M = getMergedTable()) {
+ M->readAll();
for (auto &KV : M->Data)
Info::MergeDataInto(KV.second, ResultBuilder);
}
@@ -327,7 +371,7 @@ class MultiOnDiskHashTableGenerator {
// Add all merged entries from Base to the generator.
for (auto &KV : Merged->Data) {
if (!Gen.contains(KV.first, Info))
- Gen.insert(KV.first, Info.ImportData(KV.second), Info);
+ Gen.insert(KV.first, Info.ImportData(Merged->find(KV.first)), Info);
}
} else {
Writer.write<uint32_t>(0);
@ChuanqiXu9 are you still planning to chase this, given that fixing the hashing function would fix performance for the aforementioned patch? The trade-off we are making here is hard to assess without benchmarks that show how much latency we win and how much more memory we spend. I am not sure if doing the benchmarks would be high on anyone's priority list, but maybe I'm wrong?
Yeah, I think this patch may be conceptually good except for the extra memory usage. As for benchmarking, I tried it locally but didn't find observable effects. So I'd like to land this after the 19 branch is cut, to give it more baking time.
Do you mean that there are no large increases in memory use, or that things are not getting faster either?
Yeah, it is the latter. But it might not be too complex, since all its uses are limited to this file only. My point is, by contrast, that if it doesn't lead to a significant regression, we should do it. It is rare that a single patch achieves heroic optimizations; sometimes the performance gains come from the long tail.
I feel the increased complexity does not seem like a good trade-off if it does not significantly improve performance.
Yeah, and I don't have a strong opinion either.