We're currently using Syncdocs at numerous client locations and it's been working generally quite well. But we've run into problems with a client whose data store is around 800 GB, which is a very large amount of data, though not unusual for a business. In this case Syncdocs has needed about 3 months to move half the store up to Google and has run into numerous snags. For example, each time the host machine is rebooted, Syncdocs has to run through hundreds of thousands of files again to determine what needs to be done. It frequently seems to get stuck in loops, showing messages like 'processing local changes' that can last for days, and the only cure is to exit and restart Syncdocs.

One big problem we've confronted is Syncdocs' inability to selectively sync subfolders. We've discovered that we can speed things up by not trying to sync the whole store at once, but not being able to choose subfolders forces us to sync very large numbers of files anyway.
We see these issues as to some degree inherent in the limitations of syncing large stores to the cloud, but our testing indicates two changes that would help significantly: not rescanning the entire store every time Syncdocs starts up, and allowing subfolder selection to minimize the amount of data being handled, at least until the entire store has been uploaded for the first time.
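To make the first suggestion concrete, here is a minimal Python sketch of the kind of persisted state we have in mind: a saved manifest of each file's size and modification time, so that on startup only files whose metadata has changed need any further processing. This is purely illustrative; the file name, function names, and approach are our own assumptions and are not part of how Syncdocs actually works.

```python
# Illustrative sketch only: persist a path -> (size, mtime) manifest so that
# a restart only has to stat files, not reprocess the whole store.
# "manifest.json" and all function names here are hypothetical.
import json
import os
from pathlib import Path

MANIFEST = Path("manifest.json")  # hypothetical location for the saved state


def load_manifest() -> dict:
    """Load the previously saved path -> [size, mtime] map, if any."""
    if MANIFEST.exists():
        return json.loads(MANIFEST.read_text())
    return {}


def save_manifest(state: dict) -> None:
    """Persist the current state so the next startup can diff against it."""
    MANIFEST.write_text(json.dumps(state))


def find_changed_files(root: str) -> tuple[list, dict]:
    """Walk the store, but report only files whose size or mtime differ
    from the saved manifest; unchanged files can be skipped on startup."""
    previous = load_manifest()
    current = {}
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            current[path] = [stat.st_size, stat.st_mtime]
            if previous.get(path) != current[path]:
                changed.append(path)
    return changed, current


if __name__ == "__main__":
    to_sync, state = find_changed_files(".")
    print(f"{len(to_sync)} files need attention out of {len(state)} total")
    save_manifest(state)
```

Even a simple scheme like this would cut a post-reboot pass down to a quick metadata check, instead of the multi-day 'processing local changes' cycles we're seeing now.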