Ted - I think your assessment is bang on.
Typical modern cameras (Sigmas aside?) tend to capture a good 12 or more stops of dynamic range and can encode data at 12 to 14 bits. We even have to fit them with UV and IR filters to ensure they only capture the "visible" range of light. In other words, they can significantly outperform any of the devices we view the output on. The best commonly available screens approach 10-bit colour (their gamut being AdobeRGB compliant), and I suspect a dynamic range of around 9 or 10 stops (I'm not sure if you have better data, based on your research).
Printers (depending on the specific printer being used) can have a slightly larger colour space than AdobeRGB, but the dynamic range of a print is probably in the order of 5 or 6 stops.
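To put rough numbers on those stops, here's a quick back-of-envelope in Python. It just uses the standard rule of thumb that each stop is a doubling of luminance (so a contrast ratio of 2^stops); the stop figures themselves are only the estimates I gave above, not measured values:

```python
# Rough conversion: one stop = a doubling of luminance,
# so dynamic range in stops corresponds to a contrast ratio of 2**stops.
devices = {
    "modern camera sensor": 12,   # ~12+ stops, per the discussion above (estimate)
    "good 10-bit screen":   9.5,  # my guess of roughly 9-10 stops
    "print on paper":       5.5,  # roughly 5-6 stops
}

for name, stops in devices.items():
    ratio = 2 ** stops
    print(f"{name:22s} ~{stops:>4} stops = contrast ratio of about {ratio:,.0f}:1")
```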
So in theory, best case, our modern cameras can likely capture most of the colours visible to humans, i.e. the reference CIE 1931 colour space that defines what an "average" person can see. I usually care less about the total millions or billions of colours that the various devices can reproduce, and more about the boundary conditions that define the edge of what can be reproduced. Based on what I've been able to determine, the highest estimate I've seen suggests that, at best, humans can distinguish around 10 million individual shades, not the billions of shades that can be recorded and reproduced by the equipment we use.
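For anyone curious where the "billions" come from, it's just 2 to the power of the bits per channel, cubed for the three channels. A quick Python sketch, with the ~10 million figure being the estimate I mentioned above rather than a hard fact:

```python
# Total encodable colours for a given bit depth per channel (3 channels).
HUMAN_SHADES = 10_000_000  # the "around 10 million" estimate mentioned above

for bits in (8, 10, 12, 14):
    levels = 2 ** bits       # tonal levels per channel
    colours = levels ** 3    # all R,G,B combinations
    print(f"{bits:2d}-bit: {levels:>6,} levels/channel, "
          f"{colours:>22,} colours (~{colours / HUMAN_SHADES:,.1f}x the ~10M we can see)")
```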
So in the scheme of things we collect a lot more data than we will ever use, in most cases. In casual editing, chances are we will never use all the data available to us. It's when we totally blow our exposure, mess up the white balance, or use some fairly sophisticated techniques to recover shadow and highlight details in post-processing that we might want to dip into this "surplus" of data in the raw files.
Where we can get into trouble with 8-bit data is when we make extreme edits that result in the editing software merging distinct colours into discrete, visible steps. This can show up anywhere, but banding in the sky is where we often notice it. Editing in 16-bit can definitely be one way of solving this issue. I feel that Ted is quite right: if we do those extreme edits in our raw editor, the heavy lifting happens in 16-bit and we avoid the issue; then it really doesn't matter whether we use 8-bit or 16-bit in the pixel-based editor.
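If you want to see the banding effect in numbers rather than in a sky, here's a small numpy sketch. It's an artificial example (a made-up shadow-lifting curve, not any particular editor's maths): the same extreme edit is applied to a smooth dark gradient stored as 8-bit and as 16-bit, and we count how many distinct tones survive.

```python
import numpy as np

# A smooth dark gradient (the bottom ~5% of the tonal range), stored two ways.
ramp = np.linspace(0.0, 0.05, 10_000)

img8  = np.round(ramp * 255).astype(np.uint8)      # 8-bit storage
img16 = np.round(ramp * 65535).astype(np.uint16)   # 16-bit storage

def lift_shadows(img, max_val, gamma=0.3):
    """An extreme edit: a strong gamma-style curve that pulls the shadows way up."""
    norm = img / max_val
    return np.round((norm ** gamma) * max_val).astype(img.dtype)

out8, out16 = lift_shadows(img8, 255), lift_shadows(img16, 65535)

# Fewer distinct output tones stretched over a wider range = visible steps (banding).
print("distinct tones after the edit, 8-bit :", len(np.unique(out8)))
print("distinct tones after the edit, 16-bit:", len(np.unique(out16)))
```

The 8-bit version comes out of that edit with only a dozen or so distinct tones stretched across a much wider output range, while the 16-bit version keeps thousands; those big jumps between neighbouring tones are exactly what shows up as stripes in a sky.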
The only place I have found a clear advantage to working in a wider colour space, using a raw file as a starting point, is when printing on a professional photo printer (one for which the paper manufacturers publish ICC profiles), especially if there are brilliant colours in the image. I've always used a 16-bit workflow here, so I can't comment on how well 8-bit would work.