core: Provide mechanism to cache manifest file content #4518
Conversation
485e2ab Switch ManifestCache class to use Guava Cache instead. This is easier to do in iceberg-api since we can simply include the Guava cache classes into the shadow jar of iceberg-bundled-guava.
Switching to use more Guava classes is probably not a good idea. @rizaon, do you have use cases where this has helped? If so, what were they? This adds quite a bit of complexity and memory overhead that we purposely avoided up to now, so I want to make sure it is worth the change. Have you considered adding a caching FileIO instance? That could be used for more use cases than just manifest files and job planning. For example, a FileIO read-through cache could cache delete files that might be reused for tasks. This could be configured by file path, making it easy to determine what to cache. Plus, we could use more options than just in-memory, like a distributed cache or local disk.
Hi @rdblue, thank you for your feedback. We found a slow query compilation issue against Iceberg tables in our recent Apache Impala build. Impala uses Iceberg's HiveCatalog and a HadoopFileIO instance with an S3A input stream to access data from S3. We did a full 10 TB TPC-DS benchmark and found that query compilation can take several seconds, while it used to be less than a second with native Hive tables. This slowness in single-query compilation is due to the requirement to call planFiles several times, even for scan nodes targeting the same table. We also see several socket read operations that spend hundreds of milliseconds during planFiles, presumably due to S3A HTTP HEAD request overhead and backward-seek overhead (issue #4508). This especially hurts fast-running queries. We tried this caching solution and it sped up Impala query compilation on Iceberg tables by almost 5x compared to running without it. Our original solution, however, was to put a Caffeine cache as a singleton in AvroIO.java. I thought it would be better to supply the cache from outside. I have not considered the solution of adding a caching FileIO instance. I'm pretty new to the Iceberg codebase but interested in following up on that if it can yield a better integration. Will it require a new class of Catalog/Table as well, or can we improve on the existing HiveCatalog & HadoopFileIO?
A relevant Impala JIRA is here:
@rizaon, caching in the FileIO layer would be much more general. You could do things like detect that the file size is less than some threshold and cache it in memory, or detect file names under …
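The read-through idea described in the comment above can be sketched without the real Iceberg API. This is a hypothetical, simplified stand-in (plain `String` paths and `byte[]` contents instead of `FileIO`/`InputFile`): contents are loaded on a miss and kept in memory only when the file is below a size threshold.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a read-through content cache keyed by file path.
// Small files are cached in memory; large files pass through uncached.
class ReadThroughContentCache {
  private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
  private final long maxContentLength;
  private final Function<String, byte[]> loader; // stands in for the real FileIO read

  ReadThroughContentCache(long maxContentLength, Function<String, byte[]> loader) {
    this.maxContentLength = maxContentLength;
    this.loader = loader;
  }

  byte[] read(String path) {
    byte[] cached = cache.get(path);
    if (cached != null) {
      return cached; // hit: no storage round trip
    }
    byte[] content = loader.apply(path);
    if (content.length <= maxContentLength) {
      cache.put(path, content); // small file: keep it for later readers
    }
    return content; // large file: pass through without caching
  }

  boolean isCached(String path) {
    return cache.containsKey(path);
  }
}
```

A real implementation would also bound total memory and expire idle entries, which is where a library like Caffeine or Guava comes in.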
Force-pushed from e019c6a to c032b7a
c032b7a implements caching as a new FileIO class, CachingHadoopFileIO. A new Tables class, CachingHadoopTables, is also added to assist with testing. We tried to avoid …
core/src/main/java/org/apache/iceberg/hadoop/HadoopInputFile.java
Hello @rdblue. This PR is ready for review. Please let me know if there is any new feedback or request. I'm happy to follow up.
core/src/main/java/org/apache/iceberg/hadoop/CachingHadoopFileIO.java
core/src/main/java/org/apache/iceberg/hadoop/CachingHadoopTables.java
core/src/main/java/org/apache/iceberg/hadoop/ConfigProperties.java
 */
public class CachingFileIO implements FileIO, HadoopConfigurable {
  private static final Logger LOG = LoggerFactory.getLogger(CachingFileIO.class);
  private static ContentCache sharedCache;
Is there a way to pass in the cache rather than making it static?
@jackye1995, do you have any ideas here?
This remains static in 47b8008. Please let me know if there is a better way to access the cache from the Catalog object or outside.
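One hedged alternative to a static `sharedCache` field, in the spirit of the reviewer's question, is to key caches weakly by the owning object (here a generic stand-in for FileIO), so each owner gets its own cache and the cache can be garbage-collected with its owner. This is a sketch of the general technique, not what the PR ultimately did.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.function.Function;

// Sketch: a registry of per-owner caches with weak keys, so no global
// static cache is needed and caches die with their owners.
class PerOwnerCaches<O, C> {
  private final Map<O, C> caches = Collections.synchronizedMap(new WeakHashMap<>());

  C cacheFor(O owner, Function<O, C> factory) {
    synchronized (caches) {
      C cache = caches.get(owner);
      if (cache == null) {
        cache = factory.apply(owner); // build lazily on first use
        caches.put(owner, cache);
      }
      return cache;
    }
  }
}
```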
@rizaon, there are a couple of PRs that should help you with this. #4608 adds …
Got it, thank you. Will rebase and update this PR once they are merged.
      CatalogProperties.IO_MANIFEST_CACHE_MAX_TOTAL_BYTES_DEFAULT);
}

public static long cacheMaxContentLength(FileIO io) {
This can use package-level protection.
Done.
Hey @rizaon I think this is getting real close now (thanks for sticking with us on this). Now that we've gotten this far, I see there's a little more we can do to consolidate the logic in ContentCache. For example, if you pull the property configuration into the ContentCache constructor, we can remove a lot from ManifestFiles:

public ContentCache(FileIO fileio) {
  this.fileio = fileio;
  long expireAfterAccessMs = PropertyUtil.propertyAsLong(
      fileio.properties(),
      CatalogProperties.IO_MANIFEST_CACHE_EXPIRATION_INTERVAL_MS,
      CatalogProperties.IO_MANIFEST_CACHE_EXPIRATION_INTERVAL_MS_DEFAULT);
  long maxTotalBytes = PropertyUtil.propertyAsLong(
      fileio.properties(),
      CatalogProperties.IO_MANIFEST_CACHE_MAX_TOTAL_BYTES,
      CatalogProperties.IO_MANIFEST_CACHE_MAX_TOTAL_BYTES_DEFAULT);
  long maxContentLength = PropertyUtil.propertyAsLong(
      fileio.properties(),
      CatalogProperties.IO_MANIFEST_CACHE_MAX_CONTENT_LENGTH,
      CatalogProperties.IO_MANIFEST_CACHE_MAX_CONTENT_LENGTH_DEFAULT);
  ValidationException.check(expireAfterAccessMs >= 0, "expireAfterAccessMs is less than 0");
  ValidationException.check(maxTotalBytes > 0, "maxTotalBytes is equal or less than 0");
  ValidationException.check(maxContentLength > 0, "maxContentLength is equal or less than 0");
  this.expireAfterAccessMs = expireAfterAccessMs;
  this.maxTotalBytes = maxTotalBytes;
  this.maxContentLength = maxContentLength;
}

You can also pull … I played around with moving more of the logic into …
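The consolidation proposed above boils down to a common pattern: parse settings from a string-keyed properties map with defaults, then validate them in the constructor. Here is a self-contained sketch of that pattern outside the real Iceberg classes; the property keys and default values below are illustrative placeholders, not the actual CatalogProperties constants.

```java
import java.util.Map;

// Sketch of constructor-driven configuration: read longs from string
// properties with defaults, then validate before assigning fields.
class CacheConfig {
  final long expireAfterAccessMs;
  final long maxTotalBytes;
  final long maxContentLength;

  CacheConfig(Map<String, String> props) {
    this.expireAfterAccessMs =
        propertyAsLong(props, "io.manifest.cache.expiration-interval-ms", 60_000L);
    this.maxTotalBytes =
        propertyAsLong(props, "io.manifest.cache.max-total-bytes", 100L * 1024 * 1024);
    this.maxContentLength =
        propertyAsLong(props, "io.manifest.cache.max-content-length", 8L * 1024 * 1024);
    if (expireAfterAccessMs < 0) {
      throw new IllegalArgumentException("expireAfterAccessMs is less than 0");
    }
    if (maxTotalBytes <= 0) {
      throw new IllegalArgumentException("maxTotalBytes is equal or less than 0");
    }
    if (maxContentLength <= 0) {
      throw new IllegalArgumentException("maxContentLength is equal or less than 0");
    }
  }

  // Stand-in for Iceberg's PropertyUtil.propertyAsLong
  private static long propertyAsLong(Map<String, String> props, String key, long defaultValue) {
    String value = props.get(key);
    return value != null ? Long.parseLong(value) : defaultValue;
  }
}
```

The trade-off discussed in the thread is whether this parsing lives in the cache's constructor or with the manifest-specific settings in ManifestFiles.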
Hi @danielcweeks, that looks like a nice cleanup, but I have two concerns:
Good points. Let me take one more pass (I was just hoping to consolidate more of the logic in ContentCache, but you're probably right that it makes more sense to keep the manifest settings with ManifestFiles).
Force-pushed from 4175090 to ba7329d
In the last GitHub check runs, there was a checkstyle issue and an exception-handling issue when given … I have rebased this PR and added ba7329d to fix these issues.
@rizaon Thanks for your contribution here and keeping this updated! I'm excited to see how this works out.
@danielcweeks thank you for accepting this PR!
This is a draft PR for the ManifestCache implementation.

The ManifestCache interface closely follows the com.github.benmanes.caffeine.cache.Cache interface, with the addition of maxContentLength(). If a stream's length is longer than maxContentLength(), AvroIO will skip caching it to avoid memory pressure. An example implementation, CaffeineManifestCache, can be seen in TestAvroFileSplit.java.

I will tidy this up and add documentation as we go. Looking forward to any feedback.
Closes #4508