On Thu, Aug 28, 2025 at 8:05 PM Bruce Kellett <[email protected]> wrote:

> On Fri, Aug 29, 2025 at 2:47 AM Jesse Mazer <[email protected]> wrote:
>
>>
>> You were discussing a case of this form: "This is easily seen if one
>> considers a wave function with a binary outcome, |0> and |1> for example.
>> After N repeated trials, one has 2^N strings of possible outcome sequences.
>> One can count the number of, say, ones in each possible outcome sequence."
>>
>> If we are interested in statistics for N trials, let's define a
>> "supertrial" as a sequence of N trials of the individual measurement, and
>> say that we are repeating many supertrials and recording the results of all
>> the individual trials in each supertrial using some kind of physical memory
>> (persistent 'pointer states'). Each supertrial has 2^N possible outcomes,
>> and for a given supertrial outcome O (like up, down, up, up, up, down for
>> N=6) you can define a measurement operator on the pointer states whose
>> eigenvalues correspond to what the records would tell you about the
>> fraction of supertrials where the outcome was O. If I'm understanding the
>> result in those references correctly, then if one models the interaction
>> between quantum system, measuring apparatus, and records using only the
>> deterministic Schrodinger equation, without any collapse assumption or Born
>> rule, one can show that in the limit as the number of supertrials goes to
>> infinity, all the amplitude for the whole system including the records
>> becomes concentrated on state vectors that are parallel to the eigenvector
>> of the measurement operator with the eigenvalue that exactly matches the
>> frequency of outcome O that would have been predicted if you *had* used the
>> collapse assumption and Born rule for individual measurements. And this
>> should be true even if the probability for up vs. down on individual
>> measurements was not 50/50 given the experimental setup.
>>
>
> I haven't looked into this in any detail, but it seems to be a recasting
> of an idea that has been around for a long time. This idea hasn't made it
> into the mainstream because the details failed to work out.
>

Can you point to any sources that explain specific ways the details fail to
work out? David Z Albert is very knowledgeable about results relevant to the
interpretation of QM, so I'd be surprised if he missed any technical
critique. Of course there is the philosophical argument that this doesn't
resolve the measurement problem, because it doesn't lead to definite results
for individual trials (or supertrials); but that isn't taking issue with the
technical claim about the measured frequencies of results in the limit of
infinite trials. Albert raises this philosophical objection in the last
paragraph before section VI at
https://books.google.com/books?id=_HgF3wfADJIC&lpg=PP1&pg=PA238 , and in
section VI he goes on to explain why he thinks this objection means the fact
about frequencies in the limit doesn't really resolve the measurement
problem.
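To make the technical claim concrete, here is a small numerical check (my own sketch, not taken from Albert's or Mittelstaedt's actual derivations): for a qubit with amplitudes (sqrt(1-p), sqrt(p)), build the N-trial tensor-power state, treat the fraction of 1s in each basis string as the eigenvalue of a diagonal relative-frequency operator, and compute how much squared amplitude sits away from the Born-rule frequency p. The off-frequency weight is exactly p(1-p)/N, which vanishes as N goes to infinity:

```python
# Toy check that the squared amplitude off the Born-rule frequency
# shrinks as the number of trials N grows (a finite-N illustration of
# the infinite-trial concentration claim, under my own conventions).
import numpy as np

def off_frequency_weight(p: float, N: int) -> float:
    """Return || (F_N - p) |psi>^(x N) ||^2, where F_N is the
    relative-frequency operator (diagonal in the outcome basis)."""
    psi = np.array([np.sqrt(1 - p), np.sqrt(p)])
    state = psi
    for _ in range(N - 1):
        state = np.kron(state, psi)          # build |psi>^{tensor N}
    # frequency eigenvalue of basis string i = (number of 1 bits in i) / N
    freqs = np.array([bin(i).count("1") / N for i in range(2 ** N)])
    return float(np.sum((freqs - p) ** 2 * state ** 2))

# The exact value is p(1-p)/N, so it vanishes in the infinite-trial limit:
for N in (4, 8, 16):
    assert abs(off_frequency_weight(0.3, N) - 0.3 * 0.7 / N) < 1e-9
```

Nothing here uses the Born rule as an input; the weights come straight from the squared amplitudes of the deterministic tensor-power state, which is the point of the frequency-operator arguments.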



> There are all sorts of problems with the idea, and it doesn't appear to
> translate well to the argument I am making. The 2^N sequences that result
> from repeated measurements on the basic binary system do not form a
> measurement in themselves. There is no operator for this, and no
> eigenfunctions and there is no obvious outcome.
>

I had thought that for any measurable quantity, including coarse-grained
statistical ones, it was possible to construct a measurement operator in QM.
Doing some googling, it may be that for some coarse-grained quantities one
has to use a "positive operator valued measure" (POVM)--see the answer at
https://physics.stackexchange.com/a/791442/59406 ; and according to
https://quantumcomputing.stackexchange.com/a/29326 , a POVM is not itself an
operator, though it is a function defined in terms of a collection of
positive operators. The lecture notes at
https://www.damtp.cam.ac.uk/user/hsr1000/stat_phys_lectures.pdf also mention
that in quantum statistical mechanics, macrostates can be defined in terms
of the density operator, which is used to describe mixed states (ones where
we don't know the precise quantum microstate and just assign classical
probabilities to different possible microstates). I don't know whether
either formalism was used here, but p. 13 of the paper I mentioned at
https://www.academia.edu/6975159/Quantum_dispositions_and_the_notion_of_measurement
indicates that some type of operator was used to derive the result about
frequencies in the limit:

"The ingenious method of introducing a quantum-mechanical equivalent of
probabilities that Mittelstaedt follows in his approach relies on a new
operator F^N_k whose ‘intuitive’ role is to measure the relative frequency
of the outcome a_k in a given sequence of N outcomes."
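I haven't read Mittelstaedt's actual definition, but an operator of this kind is easy to guess at: on N qubits, average the single-site projectors onto outcome |1>, so the eigenvalues are exactly the possible relative frequencies 0/N, 1/N, ..., N/N. A toy reconstruction in that spirit:

```python
# Hypothetical reconstruction of a relative-frequency operator in the
# spirit of F^N_k (my own guess at the structure, not Mittelstaedt's
# exact definition): F = (1/N) * sum over sites of I x ... x P1 x ... x I.
import numpy as np

def frequency_operator(N: int) -> np.ndarray:
    P1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # projector onto outcome |1>
    I2 = np.eye(2)
    F = np.zeros((2 ** N, 2 ** N))
    for site in range(N):
        term = np.array([[1.0]])
        for j in range(N):
            term = np.kron(term, P1 if j == site else I2)
        F += term
    return F / N                              # eigenvalues are k/N

N = 3
F = frequency_operator(N)
# F is diagonal in the outcome basis; the eigenvalue of a basis string
# is its fraction of 1s, i.e. the relative frequency of that outcome.
diag = np.diag(F)
assert all(abs(diag[i] - bin(i).count("1") / N) < 1e-12 for i in range(2 ** N))
```

So there is a perfectly ordinary Hermitian operator whose eigenvalues are outcome frequencies, contrary to the claim that no operator exists for the 2^N sequences.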

The full details would presumably be in Mittelstaedt's book The
Interpretation of Quantum Mechanics and the Measurement Process, listed in
the paper's bibliography.
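As an aside, the POVM and density-operator notions from the links above can be illustrated with a toy example of my own (not taken from those answers): a POVM is just a collection of positive operators E_i summing to the identity, with outcome probabilities Tr(rho E_i) for a density operator rho. "Unsharp" smeared projectors give a POVM that is not a projective measurement:

```python
# Toy POVM example (my own illustration): positive effects E0, E1 that
# sum to the identity, applied to a mixed state via Tr(rho E_i).
import numpy as np

eta = 0.8                                     # made-up detector "sharpness"
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])       # projector |0><0|
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])       # projector |1><1|
E0 = eta * P0 + (1 - eta) * P1                # unsharp "0" effect
E1 = eta * P1 + (1 - eta) * P0                # unsharp "1" effect
assert np.allclose(E0 + E1, np.eye(2))        # POVM elements sum to identity

# Mixed state: 50/50 classical mixture of |0> and |+>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer([1.0, 0.0], [1.0, 0.0]) + 0.5 * np.outer(plus, plus)
p0, p1 = np.trace(rho @ E0), np.trace(rho @ E1)
assert abs(p0 + p1 - 1) < 1e-12               # probabilities sum to 1
```

The effects E0 and E1 here commute, but in general POVM elements need not, which is why a POVM isn't itself a single observable with orthogonal eigenprojectors.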

Jesse
