I am trying to understand how uploads and downloads of large files work.
From looking at the code, it appears to me that the actual transfer over the wire is an atomic operation, i.e. it cannot be interrupted once it has started. I have been looking at the WebDAV logic, but I believe all of the providers behave this way. For the transfer to be non-atomic, the underlying provider API
would have to support loading the data in chunks.
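To illustrate the distinction I mean, here is a rough sketch of what a chunk-based upload path could look like. This is purely hypothetical: the `IChunkedUploadProvider` interface and its method names are made up by me, not part of this library or any real provider SDK, and I am assuming C#/.NET since that is what the code appears to be.

```csharp
using System.IO;
using System.Threading;

// Hypothetical provider surface: a chunk-based API like this would allow
// the transfer to be interrupted between chunks, whereas a single-shot
// upload call forces an all-or-nothing transfer.
public interface IChunkedUploadProvider
{
    string BeginUpload(string remotePath);
    void UploadChunk(string sessionId, byte[] buffer, int count, long offset);
    void CommitUpload(string sessionId);
}

public static class ChunkedUploader
{
    public static void Upload(IChunkedUploadProvider provider, string remotePath,
                              Stream source, CancellationToken token,
                              int chunkSize = 4 * 1024 * 1024)
    {
        var sessionId = provider.BeginUpload(remotePath);
        var buffer = new byte[chunkSize];
        long offset = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Cancellation is only possible at these chunk boundaries; each
            // individual UploadChunk call is still atomic from the caller's
            // point of view.
            token.ThrowIfCancellationRequested();
            provider.UploadChunk(sessionId, buffer, read, offset);
            offset += read;
        }
        provider.CommitUpload(sessionId);
    }
}
```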
What can be interrupted is the copying of data from one stream to another, i.e. CopyStreamData. My expectation is that the time spent will be dominated by the transfer operation itself, by orders of magnitude.
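For reference, this is the kind of loop I picture CopyStreamData implementing. It is a minimal sketch of my own, not the actual implementation, checking for interruption between buffer-sized reads and writes:

```csharp
using System.IO;
using System.Threading;

public static class StreamCopy
{
    // Copies source to target, honouring cancellation between chunks.
    public static long CopyWithCancellation(Stream source, Stream target,
                                            CancellationToken token,
                                            int bufferSize = 81920)
    {
        var buffer = new byte[bufferSize];
        long total = 0;
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Cancellation is observed only here; once a Read or Write has
            // started, it runs to completion.
            token.ThrowIfCancellationRequested();
            target.Write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```

So even if this loop can be cancelled, the individual reads and writes against the provider's wire stream would still be where nearly all of the time goes.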
I am just looking for confirmation that my understanding is correct (or not).
Thanks for the good work on abstracting out the various cloud providers.