Sort of, Dan, but not quite.
Your analysis of relative colorimetric is right on. RGB colour spaces are actually 3D (red, green and blue axes). The shape of the enclosing surface is referred to as the "hull" of the colour space, and any out-of-gamut colour is brought into gamut by clipping the offending R, G or B value back to the hull. The "extra" colour data is discarded, hence the process is not reversible.
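To see the idea in miniature, here is a toy Python sketch of that kind of clipping. It is not a real colour engine, and the example colour is made up; it just shows why the discarded data can't be recovered:

```python
# Toy illustration of gamut clipping (not a real CMM).
# Suppose a colour from a wider space, expressed in sRGB terms, lands
# outside the 0..1 range that sRGB can actually encode.
def clip_to_gamut(rgb):
    """Relative-colorimetric-style clipping: pull each channel back to the hull."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

wide_gamut_green = (-0.12, 1.05, 0.20)    # out of gamut once expressed as sRGB
print(clip_to_gamut(wide_gamut_green))    # (0.0, 1.0, 0.2) -- the "extra" saturation is gone

# Two different out-of-gamut colours can clip to the same in-gamut value,
# which is exactly why the step is not reversible.
```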
Perceptual is different: both the out-of-gamut colours and the in-gamut colours are remapped to new values. The problem, though, is that there is no consistent approach to how this is achieved between different pieces of software. The software used is referred to as a RIP (Raster Image Processor), and each RIP will have its own algorithm. Without knowing that algorithm, reversal is impossible. Even if it is known, there will be rounding errors: an Adobe RGB file represents roughly 50% of the colours humans can see, whereas sRGB covers around 33%, yet both colour spaces are represented by the same number of values (256 x 256 x 256 in 8-bit). I see no possible way of rebuilding exactly the same data as in the original data set. And even that would only be possible if the conversion algorithm were known; taking, say, the rendering profile from an OnOne or Capture One product and applying it with an Adobe product will not produce meaningful results.
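To illustrate the rounding point, here is a toy round trip in Python. The compression curve is invented purely for the example (a real RIP's curve is proprietary); the 8-bit quantisation is what makes the trip back inexact:

```python
# Toy "perceptual" remap: compress everything toward mid-grey so that
# out-of-gamut colours fit, then quantise to 8 bits per channel.
# The 0.8 factor is made up -- a stand-in for an unknown RIP algorithm.
def perceptual_compress(rgb, k=0.8):
    return tuple(0.5 + k * (c - 0.5) for c in rgb)

def perceptual_expand(rgb, k=0.8):
    return tuple(0.5 + (c - 0.5) / k for c in rgb)

def to_8bit(rgb):
    return tuple(round(c * 255) for c in rgb)

def from_8bit(rgb):
    return tuple(c / 255 for c in rgb)

original   = (0.7213, 0.3341, 0.0515)
round_trip = perceptual_expand(from_8bit(to_8bit(perceptual_compress(original))))
print(original)     # (0.7213, 0.3341, 0.0515)
print(round_trip)   # close, but not identical -- the 8-bit rounding cannot be undone
```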
That being said, this is NOT the way that colour space conversions work; these simply take the sRGB values and convert them to the equivalent Adobe RGB values, but the result stays within the hull defined by the sRGB colour space.
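For the curious, a rough Python sketch of that kind of conversion, going through XYZ with the commonly published D65 matrices (rounded here) and approximating both transfer curves as a plain 2.2 gamma, which is not exact for sRGB:

```python
import numpy as np

# Rounded, commonly published D65 matrices; gamma simplified to 2.2 for brevity.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
XYZ_TO_ADOBE = np.array([[ 2.0416, -0.5650, -0.3447],
                         [-0.9692,  1.8760,  0.0416],
                         [ 0.0134, -0.1184,  1.0152]])

def srgb_to_adobergb(rgb8):
    linear = (np.array(rgb8) / 255.0) ** 2.2                 # decode to linear light
    adobe_linear = XYZ_TO_ADOBE @ (SRGB_TO_XYZ @ linear)     # sRGB -> XYZ -> Adobe RGB
    return np.round(255 * np.clip(adobe_linear, 0, 1) ** (1 / 2.2)).astype(int)

# Pure sRGB red comes out around (219, 0, 0): the same colour, but a lower code
# value because Adobe RGB spreads its 256 steps per channel over a wider gamut.
print(srgb_to_adobergb((255, 0, 0)))
```

The converted file still only contains the colours that sRGB could hold in the first place; the conversion doesn't invent any of the wider Adobe RGB gamut.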