Pointers in your memmapped file would be invalid after the second load, so it can't be the exact same structure that you would normally use in memory.
You could do it with relative offsets if you wanted; that's getting pretty close to pre-heating a cache from a snapshot. That way the file would not contain pointers, but you could still traverse it relatively quickly by adding the offsets to the base address of the whole file.
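A minimal sketch of that idea in C (the layout and names here are just illustrative, not any real format): nodes store offsets from the start of the mapping instead of raw pointers, so the structure works no matter where the file gets mapped.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t next_off;   /* offset of next node from the file base, 0 = end */
        uint64_t value;
    } node_t;

    /* Turn a file-relative offset into a pointer for this particular mapping. */
    static node_t *at(void *base, uint64_t off)
    {
        return off ? (node_t *)((char *)base + off) : NULL;
    }

    static uint64_t sum_list(void *base, uint64_t head_off)
    {
        uint64_t sum = 0;
        for (node_t *n = at(base, head_off); n; n = at(base, n->next_off))
            sum += n->value;
        return sum;
    }

The one extra add per dereference is the only cost over plain pointers, which is the base+offset overhead mentioned further down.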
Pointers could be valid if each file in this new OS resided at a specific location, say in a 64-bit address space. You could allocate, for example, 4GB of virtual memory to each file and guarantee that a file will always be found at the same location, with all the pointers intact and valid.
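Roughly how that could look with mmap, assuming a made-up slot layout (the SLOT_BASE/SLOT_SIZE scheme below is purely hypothetical; MAP_FIXED itself is real):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>

    #define SLOT_SIZE (4ULL << 30)   /* 4 GB reserved per file              */
    #define SLOT_BASE (1ULL << 40)   /* start of the hypothetical file area */

    /* Map fd at the fixed address reserved for slot_id, so absolute
     * pointers stored inside the file stay valid across loads. */
    static void *map_file_slot(int fd, uint64_t slot_id, size_t len)
    {
        void *addr = (void *)(SLOT_BASE + slot_id * SLOT_SIZE);
        return mmap(addr, len, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_FIXED, fd, 0);
    }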
That sounds pretty hacky, but it could work. Better to do it right: make the whole thing relocatable and make the software addressing it aware of that. If you don't do it that way, you may end up with some very interesting bugs, such as when your code also gets loaded into a 4GB segment (not all machines are 64-bit) and now all those pointers are valid throughout your memory image. That's bound to lead to confusion. Most CPUs can do base+offset with very little overhead anyway, so there would be little or no gain.
I don't think relocations are necessary at all. Within one system each file has a fixed place. Relocation and resolving happen only when files are copied across systems, where serialization should happen anyway. So e.g. once you receive a binary from the network, you resolve it, place it somewhere in the 64-bit space, and then just execute it from there every time. The inode number essentially becomes the file's physical address.
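A sketch of that one-time "resolve on arrival" step, assuming a hypothetical image format that carries a table of locations holding file-relative offsets; once the file has been placed at its permanent base address, each entry is patched into an absolute pointer and never touched again:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t reloc_count;
        uint64_t reloc_table[];   /* offsets of the slots that need patching */
    } image_header_t;

    static void resolve_image(void *base)
    {
        image_header_t *hdr = base;
        for (uint64_t i = 0; i < hdr->reloc_count; i++) {
            /* slot currently holds an offset from base; rewrite it as absolute */
            uint64_t *slot = (uint64_t *)((char *)base + hdr->reloc_table[i]);
            *slot = (uint64_t)(uintptr_t)base + *slot;
        }
    }

After this pass the file can be mapped at its fixed address forever and used with plain pointer dereferences, no base+offset arithmetic needed.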
Base+offset segmentation does have an overhead, since you'd need some extra CPU registers for that, if I understand it correctly.