It appears that all subject-to-subject copies are processed by a single task queue with only one consumer. I was copying from one subject to another using the REST subject copy API, and it took 11 minutes. During that time, I tried to use Copy Content to copy material from one (unrelated) Ultra subject to another, and that copy didn't start processing until the REST operation had completed.
As copying between Ultra subjects is the only way to reuse Ultra documents, and many subjects have several mostly-similar offerings, these subject-to-subject copies are likely to become increasingly frequent. The fact that a single user can tie up the whole system is likely to be a problem. Would it be possible to add an additional consumer to the task queue? Even one extra queue dedicated to selective content copies would allow full subject copies to run concurrently with the 'quick' in-Ultra copies.
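To illustrate why even one extra consumer helps, here is a minimal sketch (my own illustration, not Learn's actual queue implementation) that simulates a FIFO task queue with a configurable number of consumers, using the durations from the scenario above:

```python
import heapq

def completion_times(durations, consumers):
    """Simulate a FIFO task queue with N consumers.

    Each task is handed to whichever consumer frees up first;
    returns the finish time (seconds) of each task in submission order.
    """
    free = [0] * consumers  # time at which each consumer next becomes idle
    heapq.heapify(free)
    finish = []
    for d in durations:
        start = heapq.heappop(free)  # earliest-available consumer
        end = start + d
        finish.append(end)
        heapq.heappush(free, end)
    return finish

# A 660 s (11 min) full subject copy queued just ahead of a 30 s in-Ultra copy:
print(completion_times([660, 30], consumers=1))  # [660, 690]
print(completion_times([660, 30], consumers=2))  # [660, 30]
```

With a single consumer, the quick in-Ultra copy sits behind the full copy and finishes after 690 seconds; with a second consumer it completes in its own 30 seconds.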
It also seems unreasonable that a full subject copy takes this long. There are no large files being moved (they are all in the content collection, so copying should amount to hardlinks), and the data associated with the subject is MB scale, not GB, so there is probably room for optimisation. I ran a couple of database queries that pulled all the related data I could think of, and they executed in under 5 seconds. Perhaps I am being terribly naive, but I think an efficient copy should take less than 10 seconds.
On a related note, subject-to-subject copies in Ultra poll for completion: the UI checks '/learn/api/v1/scheduledTasks/_<id>_1' 90 seconds after the copy is submitted, and then every 60 seconds thereafter. When the queue is not backed up, most content copies complete in far less time than that. Would it be possible to shorten the poll schedule to, say, 5 seconds initially and every 30 seconds thereafter?
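For comparison, a small sketch (again my own; the 90 s / 60 s figures are the observed behaviour, and 5 s / 30 s is just the suggestion above) of when each polling schedule would first notice a finished copy:

```python
def detection_time(duration, first_delay, interval):
    """Seconds after submission at which polling first observes completion,
    given a first poll at `first_delay` and further polls every `interval`."""
    t = first_delay
    while t < duration:
        t += interval
    return t

# A copy that actually finishes in 20 s:
print(detection_time(20, first_delay=90, interval=60))  # 90
print(detection_time(20, first_delay=5, interval=30))   # 35
```

Under the current schedule a 20-second copy is not reported done until the 90-second mark; under the suggested schedule the user would see it at 35 seconds at worst.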
|Product Version (if applicable):|1|