Since I am still looking for a good solution for a customer and myself, I am getting back to this. Currently I am testing SBP vs. Arq and other solutions.
Kostas wrote:
The limitation is not imposed by the Dropbox API per se, but during our tests we encountered erratic behaviour (heavy throttling and dropped connections) when trying to do so.
Arq seems to use two concurrent connections. While this is less than what the Dropbox app uses, it at least raises the 0% throughput phases I see with SBP to about 50% with large files. No erratic behaviour here, so maybe you could give this another try in the form of an option that lets me set the number of connections myself, with a default of 1?!
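For illustration, a configurable connection count could look something like the worker-pool sketch below. The `upload_chunk` function is a hypothetical stand-in for the real upload call, not SBP's or Dropbox's actual API; the point is only that a default of 1 preserves today's single-connection behaviour while letting the user raise it.

```python
import concurrent.futures

def upload_chunk(chunk_id: int, data: bytes) -> int:
    """Hypothetical stand-in for uploading one chunk to cloud storage."""
    # A real client would PUT `data` to the storage endpoint here.
    return len(data)

def parallel_upload(chunks: list[bytes], max_connections: int = 1) -> int:
    """Upload chunks over a configurable number of concurrent connections."""
    total = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_connections) as pool:
        futures = [pool.submit(upload_chunk, i, c) for i, c in enumerate(chunks)]
        for f in concurrent.futures.as_completed(futures):
            total += f.result()
    return total

# max_connections=1 keeps the current behaviour;
# max_connections=2 mirrors what Arq appears to do.
print(parallel_upload([b"a" * 10, b"b" * 20], max_connections=2))  # prints 30
```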
Having said that, it's easy to understand why the Delta didn't help much in your case. When files are uploaded to the cloud storage via the website or the provider's native sync tool, the metadata for those files is not available in SyncBackPro's database. SyncBackPro therefore assumes the files at your Source and Destination are different and proposes to recopy them to your Destination. During that first profile run it builds the database for the recopied files and for any new or changed files uploaded to your Destination. In subsequent runs it will not propose to recopy the files again, as the cloud database then contains the details of the files on the server.
It should be possible to make SBP scan the files on both sides (and then notice that the files are one and the same). The whole reason for me uploading the initial set of files via the web interface or Dropbox's own utility is that SBP uploads them so much slower. This does not seem like such an unusual scenario, especially since some cloud services still let customers send in an initial seed on a hard drive. Since the Dropbox utility is able to deal with this, I would expect the API to enable third-party software to deal with it as well?!
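For what it's worth, Dropbox does document a way to notice that a local and a remote file are one and the same: the `content_hash` it returns in file metadata is the SHA-256 of each 4 MiB block's SHA-256 digest, concatenated and hashed again. A third-party tool could compute this locally and compare it against the API's metadata to skip re-uploading seeded files. A minimal sketch of the documented algorithm:

```python
import hashlib

def dropbox_content_hash(path: str, block_size: int = 4 * 1024 * 1024) -> str:
    """Compute Dropbox's documented content_hash for a local file:
    SHA-256 each 4 MiB block, concatenate the digests, SHA-256 the result."""
    block_digests = b""
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            block_digests += hashlib.sha256(block).digest()
    return hashlib.sha256(block_digests).hexdigest()
```

If this value matches the `content_hash` field in the remote metadata, the local and remote copies are identical and the upload can be skipped.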
Unfortunately I encountered another issue: with SBP, compression (and encryption) only seems to utilize a single CPU core (out of 8 logical cores). Obviously this slows down a large backup job considerably compared to solutions that fully utilize the CPU during compression.
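Multi-core compression is usually achieved by splitting the data into independent chunks and compressing them concurrently. As a rough sketch (these names are illustrative, not SBP's actual internals): `zlib.compress` releases the GIL on large buffers, so even a thread pool can keep several cores busy, at the cost of a slightly worse ratio since each chunk is its own stream.

```python
import concurrent.futures
import zlib

def compress_chunks(data: bytes, chunk_size: int = 1 << 20, workers: int = 8) -> list[bytes]:
    """Compress fixed-size chunks in parallel.

    Each chunk becomes an independent zlib stream, so chunks can be
    compressed (and later decompressed) concurrently."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))

def decompress_chunks(parts: list[bytes]) -> bytes:
    """Reassemble the original data from independently compressed chunks."""
    return b"".join(zlib.decompress(p) for p in parts)
```

The trade-off is deliberate: independent streams lose a little compression ratio at chunk boundaries but cut wall-clock time roughly in proportion to the number of cores.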
Statistics: Posted by Timur Born — Tue May 24, 2016 6:10 am