You cannot use Fast Backup with SmartSync (enabling one disables the other), for the simple reason that Fast Backup works by side-stepping the need to read Destination/Right at all (semi-permanently, or 'mostly'). If you are syncing (looking for changes on either side to replicate to the other), you obviously cannot/should not skip scanning that side, or you will never see the changes on it. There are potentially other technical reasons (to do with how that scanning is sidestepped), but the need to scan both sides when syncing trumps them anyway.
Synchronisation is strictly two-sided, so anything beyond 'two devices' requires a set of profiles referencing the devices in rotation, covering every pairing (and preferably run without any edits happening in the meantime). Even then, you are likely to get collisions if you edit the same file (same path\name, anyhow) stored on more than one machine, unless the file is stored centrally (so there is only 'one such file'). Of course, that means you may not be able to edit the shared file simultaneously from two or more machines, but that's life. If you could (by keeping separate copies), you still have the problem of how to resolve the separate changes, bearing in mind that our software cannot resolve them at the 'content' level: it can only overwrite one whole file with another (thus losing one set of changes).
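Just to illustrate the arithmetic (a throw-away sketch, nothing to do with the product itself): because each sync profile links exactly two devices, covering N devices means one profile per pair, and that count grows quickly.

```python
from itertools import combinations

def sync_profiles_needed(devices):
    """Each sync profile links exactly two devices, so covering N devices
    means one profile per unordered pair: N * (N - 1) / 2 in total."""
    return list(combinations(devices, 2))

# Example: four machines already needs six profiles, run in rotation,
# and every extra machine adds more.
pairs = sync_profiles_needed(["PC1", "PC2", "PC3", "Laptop"])
print(len(pairs))  # 6
for left, right in pairs:
    print(f"Profile: {left} <-> {right}")
```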
I suggest you consider using a central copy of the shared file/s (on the NAS) and backing it up using Versioning* to one device/service, with at least 1 Version retained for (say) 1 week (handy for retrieving accidental deletions). More Versions / longer retention is obviously available. You can then make a separate profile/s to regularly back up the same NAS file-set, either as a simple disaster-recovery full backup and/or using Incremental or Differential backups (for any 'full rollback' you might require).
* Versioning is not suitable for mass roll-back (you have to cherry-pick each file; there is no process to automate that, so it's suitable for 'only as many files as you're prepared to cherry-pick'). For full roll-back capability ('all files to last Tuesday') you need Incremental or Differential backups (or rotating sets of full backups, one per day, and so on).
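If it helps to picture the Incremental vs Differential distinction, here is a minimal, purely illustrative sketch (not how the software itself works), selecting files by modification time. The paths and timestamps are placeholders.

```python
from pathlib import Path

def files_to_copy(source_dir, last_full, last_backup, mode):
    """Purely illustrative: select files by modification time (epoch seconds).
    - 'incremental': files changed since the previous backup of any kind
    - 'differential': files changed since the last FULL backup
    Rolling everything back to a date then means restoring the last full
    backup before that date plus the incrementals up to it (or just the one
    differential), rather than cherry-picking file by file."""
    cutoff = last_backup if mode == "incremental" else last_full
    return [p for p in Path(source_dir).rglob("*")
            if p.is_file() and p.stat().st_mtime > cutoff]

# Example call (placeholder path and epoch timestamps):
# changed = files_to_copy(r"\\NAS\shared", last_full=1700000000,
#                         last_backup=1700600000, mode="incremental")
```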
Bear in mind that you can use the PCs themselves as extra repositories (additional 'drives') to store extra backups of the central set. But trying to keep a copy of file-X on every device (and edit it locally/separately), and then trying to resolve those disparate edits, is pretty much a non-starter IMHO. Which means the concept of 'synchronisation' (copying changes in both directions) probably shouldn't apply: you need to make sure all changes (file writes) happen in only one place, then back them up. When editing, the changes then only flow one way (PC > NAS), so Sync is irrelevant. And the copying from PC to NAS (that is, 'saving' the edited file) is handled by your application anyway, so it is effectively instant.
How quickly the result (the central file-set) is backed up thereafter is up to you. Bear in mind that Pro can't run on a NAS, so at least one PC will need to be awake and partially 'dedicated' to the task, and the backup would likely need to run 'after hours' to avoid issues with trying to back up files that were still open/locked. (Of course, if you had a Windows server, it could do that part itself... and likely get around the open/locked aspect too, by using shadow copies, though what it actually backed up might be 'one change behind' - but that's better than no copy at all.)
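If you ever script anything of your own around the open/locked issue, the usual approach is simply 'try, skip, retry later'. A rough Python sketch (purely illustrative, nothing to do with the product; the share and destination paths are placeholders):

```python
import shutil
from pathlib import Path

SOURCE = Path(r"\\NAS\shared")     # placeholder UNC path, adjust to suit
DEST = Path(r"D:\Backups\shared")  # placeholder destination

skipped = []
for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    target = DEST / src.relative_to(SOURCE)
    target.parent.mkdir(parents=True, exist_ok=True)
    try:
        shutil.copy2(src, target)  # copy file contents plus timestamps
    except OSError:                # includes PermissionError for open/locked files
        skipped.append(src)        # note it and try again on the next run

print(f"Skipped {len(skipped)} open/locked file(s)")
```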