The answer is: No, it doesn't make a lot of sense.
Note that we are talking about the software implementation of a StoreOnce server within Data Protector. In this case administrators have full access to the file system where the Store Root is located, so a defragmentation could in principle be run on it.
Just think about the way data is written to this file system. Typical for StoreOnce is that data is split into relatively small chunks. Only when a chunk has not been written before (checked using a hashing algorithm) is it compressed (making it even smaller) and then written to the file system somewhere in one of the 24 dvol directories. So when a file is stored, some of its chunks will be written and others won't. When restoring the file, all of its chunks have to be read again: the ones that were newly written when the file was stored, but possibly also older ones that already existed in the store.
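To make that principle concrete, here is a minimal Python sketch of such a deduplicating chunk store. This is not Data Protector's actual code: the fixed chunk size, the SHA-1 fingerprints, the zlib compression and the way a dvol directory is picked from the hash are all assumptions made purely for illustration.

```python
# Minimal sketch of the dedup principle described above (not the real StoreOnce code).
# Assumptions: fixed-size chunks, SHA-1 fingerprints, zlib compression, and a store
# root with 24 "dvol" subdirectories chosen from the chunk hash.
import hashlib
import os
import zlib

CHUNK_SIZE = 4096          # assumed chunk size for the illustration
STORE_ROOT = "store_root"  # hypothetical Store Root location
NUM_DVOLS = 24             # the data is spread over 24 dvol directories


def chunk_path(digest: str) -> str:
    """Place a chunk in one of the 24 dvol directories, picked from its hash."""
    dvol = int(digest, 16) % NUM_DVOLS
    return os.path.join(STORE_ROOT, f"dvol{dvol}", digest)


def store(data: bytes) -> list[str]:
    """Split data into chunks and write only chunks not seen before.
    Returns the ordered list of chunk hashes needed to restore the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha1(chunk).hexdigest()
        recipe.append(digest)
        path = chunk_path(digest)
        if not os.path.exists(path):                 # duplicate chunks are skipped
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(zlib.compress(chunk))        # new chunks are compressed first
    return recipe


def restore(recipe: list[str]) -> bytes:
    """Re-read every chunk in the recipe; they may be scattered across dvols and may
    have been written by much older backups, hence the effectively random reads."""
    parts = []
    for digest in recipe:
        with open(chunk_path(digest), "rb") as f:
            parts.append(zlib.decompress(f.read()))
    return b"".join(parts)


if __name__ == "__main__":
    payload = os.urandom(CHUNK_SIZE) * 3 + b"unique tail"  # repeated + new data
    recipe = store(payload)
    assert restore(recipe) == payload
```

Even in this toy version you can see that restoring a freshly backed-up file pulls chunks from wherever earlier backups happened to leave them, which is exactly why the reads end up scattered across the store.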
By its nature this system typically results in random reads on the file system, so the data being read could be seen as "fragmented" anyhow. That's why defragmenting this file system will typically not result in a big performance gain.