The Deduplicator is an awesome tool. But when there are many duplicates in the results to act on, deciding every deduplication action manually becomes a long process (when a "Delete-All" operation can't be trusted).
This could be greatly assisted by some additional functionality:
Ability to set (and/or apply) bulk file-operation rules on results (a rule-pass sketch follows below this list), e.g.:
Delete oldest duplicates
Delete all but newest duplicate
Delete all duplicates within X\Y\Z path
Delete duplicates with *X* in filename
If all duplicates reside in the same folder, auto-delete.... _____.
Ability to export the results list to CSV/TXT (for sorting/analyzing the results externally, to spot path/location patterns among the duplicates)
Regex capability
Ability to save Deduplicator result sessions for later reviewing/actioning. Searches can take a long time depending on storage size, so acting on the results may not happen until the next day, by which point the app may have lost them due to battery, reboots, etc.
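On the export and session points, even simple file dumps would go a long way. A minimal sketch, assuming the results are a list of groups of byte-identical file paths (the file names, formats, and column layout here are just illustrations, not anything the app currently has):

```python
import csv
import json
import os

# Hypothetical shape of the Deduplicator's results: a list of groups,
# each group a list of absolute paths to byte-identical files.

def export_csv(groups, out_path="duplicates.csv"):
    """One row per duplicate file (group id, path, size, mtime),
    so the list can be sorted/analyzed externally."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["group", "path", "size_bytes", "mtime"])
        for gid, group in enumerate(groups):
            for path in group:
                st = os.stat(path)
                writer.writerow([gid, path, st.st_size, st.st_mtime])

def save_session(groups, path="dedupe_session.json"):
    """Persist the results right after the scan finishes, so they
    survive battery death/reboots and can be reviewed the next day."""
    with open(path, "w") as f:
        json.dump(groups, f)

def load_session(path="dedupe_session.json"):
    with open(path) as f:
        return json.load(f)
```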
Of the above suggestions, the first would be the most helpful and desired, since it would allow automating much of the deduplication decision-making/processing.
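To illustrate, here's a minimal sketch of one such rule pass, "delete all but newest duplicate" (again assuming the hypothetical group-of-paths result structure from above):

```python
import os

def keep_newest(groups, dry_run=True):
    """For each duplicate group, keep the most recently modified copy
    and delete the rest ("delete all but newest duplicate")."""
    for group in groups:
        newest = max(group, key=os.path.getmtime)
        for path in group:
            if path == newest:
                continue
            if dry_run:
                print(f"would delete: {path}")
            else:
                os.remove(path)
```

A dry-run default like this, showing what a rule pass *would* delete before committing, would also make bulk rules much easier to trust.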
As an example, I have hundreds of duplicates to process. I can't allow a bulk delete-all operation since the results are in varied locations, many of them folders I don't want left as the sole holder of a file, e.g. any directory containing "cache" in its path; nor do I want to keep any duplicates in *\data\com.*? folders, etc.
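To make that concrete, a sketch of how regex rules could drive the choice of which copy to keep (the patterns are adapted from my examples above, written Unix-style; the list and function names are hypothetical):

```python
import re

# Locations that should never end up holding the sole surviving copy.
UNTRUSTED = [
    re.compile(r"cache", re.IGNORECASE),  # any path containing "cache"
    re.compile(r"/data/com\."),           # app-private data folders
]

def pick_keeper(group):
    """Prefer a copy that lives outside every untrusted location;
    fall back to the first path if no such copy exists."""
    for path in group:
        if not any(pat.search(path) for pat in UNTRUSTED):
            return path
    return group[0]
```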