Joerg Jaspert :debian:<p>Ok. Remotely cleaning a huge (>2 TB, many, many files and subdirs) <a href="https://fulda.social/tags/Nextcloud" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Nextcloud</span></a>-hosted folder (not the whole user) is *painful*. Without access to the host it runs on, I am limited to either the web interface - which breaks - or using <a href="https://fulda.social/tags/webdav" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>webdav</span></a> with a tool like <a href="https://fulda.social/tags/rclone" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>rclone</span></a>.</p><p><a href="https://fulda.social/tags/rclone" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>rclone</span></a> purge breaks (timeout), so rclone delete it is. Which is *slow*, really slow - probably because the remote moves every deleted file into the (in this case useless) trash bin, which can't be turned off.</p><p>At least one can use <a href="https://fulda.social/tags/xargs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>xargs</span></a> to run multiple rclones in parallel: first get a list of entries in the to-be-deleted dir (rclone lsf), format them the way rclone expects (basically put the name of the remote in front), and run something like `xargs -n 1 -P0 rclone delete -v --rmdirs` on it (see the sketch below).</p><p>Still, it has been running since late yesterday afternoon and we are only down to 1.4 TB left of 2 TB. Even in parallel, the webdav shit manages to delete only 2 to 4 files a second.</p>
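<p>A minimal sketch of that pipeline, assuming the WebDAV remote is configured as `nc:` and the folder to clear is `big-folder` (both placeholder names); the `-d '\n'` is an extra safeguard for entry names containing spaces:</p>
<pre><code># list the entries directly under the folder, prefix each with the remote name,
# then fan out one "rclone delete" per entry, with as many in parallel as possible
rclone lsf nc:big-folder \
  | sed 's|^|nc:big-folder/|' \
  | xargs -d '\n' -n 1 -P 0 rclone delete -v --rmdirs
</code></pre>
<p>The `--rmdirs` flag makes each rclone run remove the directories it has emptied, so subdirs don't linger once their files are gone.</p>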