
There seems to be a similar question (Java Specification: reads see writes that occur later in the execution order), but it focuses on a different example.


The Java Memory Model states in §17.4.8:

Although allowing reads to see writes that come later in the execution order is sometimes undesirable, it is also sometimes necessary.

It then refers to a program and an execution trace from §17.4.5 as an example where this behaviour is necessary.

Here's the program:

Thread 1        Thread 2
B = 1;          A = 2;
r2 = A;         r1 = B;

And the corresponding execution trace:

1: r2 = A;  // sees write of A = 2
3: r1 = B;  // sees write of B = 1
2: B = 1;
4: A = 2;

The specification explains:

The trace in Table 17.4.5-A requires some reads to see writes that occur later in the execution order. Since the reads come first in each thread, the very first action in the execution order must be a read. If that read cannot see a write that occurs later, then it cannot see any value other than the initial value for the variable it reads. This is clearly not reflective of all behaviors.

Could someone help clarify what this means? Why can't we simply declare executions that allow reads to observe future writes as illegal?

For instance, if we disallow such executions, wouldn't we still be able to produce the result r1 == 1 and r2 == 2 through the following legal execution?

B = 1;
A = 2;
r2 = A;  // sees write of A = 2
r1 = B;  // sees write of B = 1

What would be lost or break if we restricted the model to disallow reads from seeing future writes?
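As a concrete check (the class name is mine), the proposed legal execution can be run as a single sequential program, and it does produce the result in question:

```java
public class SequentialOrder {
    public static void main(String[] args) {
        int A = 0, B = 0, r1, r2;

        // The proposed legal execution, run as one sequential
        // interleaving: both writes first, then both reads.
        B = 1;
        A = 2;
        r2 = A;   // sees write of A = 2
        r1 = B;   // sees write of B = 1

        System.out.println("r1 = " + r1 + ", r2 = " + r2);  // prints r1 = 1, r2 = 2
    }
}
```

So the question is not whether this particular outcome is reachable without future-seeing reads, but why the model as a whole cannot forbid them.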

    I think getting into the details of "what would be lost or break" would be too complex for this site. Basically the optimizer needs to optimize code, and it needs to do some weird things to be able to do that. The flip side: it is possible to disallow reads from seeing future writes. That's what proper synchronization does. Just use synchronizes-with and Bob's your uncle. What you are looking at is an example of a data race, and those have no useful semantics. The section is explaining to you why you need to synchronize your code. Commented Dec 23, 2024 at 1:26
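A minimal sketch of the commenter's point (class name and iteration count are mine): declaring `A` and `B` `volatile` creates synchronizes-with edges, and under the resulting sequentially consistent semantics the outcome where both reads see the stale zeros becomes impossible.

```java
public class VolatileLitmus {
    static volatile int A, B;   // volatile accesses create synchronizes-with edges
    static int r1, r2;          // written by one thread each, read after join()

    public static void main(String[] args) throws InterruptedException {
        int iterations = 5_000;
        for (int i = 0; i < iterations; i++) {
            A = 0; B = 0;
            Thread t1 = new Thread(() -> { B = 1; r2 = A; });
            Thread t2 = new Thread(() -> { A = 2; r1 = B; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // With volatile fields, at least one thread's read must see the
            // other thread's write, so (r1, r2) == (0, 0) cannot occur.
            if (r1 == 0 && r2 == 0) {
                throw new AssertionError("outcome forbidden under volatile semantics");
            }
        }
        System.out.println("no forbidden outcome in " + iterations + " runs");
    }
}
```

Without the `volatile` keyword this program has a data race, and the `(0, 0)` outcome is allowed.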

2 Answers


There are two sides of things here: the specific section you quoted, and the general motivation for defining things this way.

The line you've quoted is confusingly worded. What it means is this: when the spec constructs the formal notion of a well-formed execution and defines how actions can be committed, those concepts must permit reads to observe the values of later writes, or they would fail to reflect the program behaviors allowed and disallowed in the earlier sections. Disallowing this would break consistency with those earlier sections of the spec.

That's all that section is saying. But there's the broader issue of why the whole spec wasn't written to disallow this. The answer there is simply that the performance impact would be prohibitive: it would inhibit compiler optimizations and force constant memory barriers, roughly like making every field in the entire program volatile and locking around every array indexing operation. And even that still wouldn't be enough to make a program thread-safe.


Comments


Let's look at this code:

Thread 1
B = 1;
r2 = A;

For now let's assume that this code is for a single-threaded program.
Then note:

  1. B = 1 and r2 = A can be reordered: they are independent operations, so the reordering is invisible in a single-threaded program.
  2. reordering of B = 1 and r2 = A improves performance, because:
    • both memory reads and writes take time (often a looooot of time) on CPUs
    • reads are more important than writes, because we need the result of a read to continue, while writes can just complete in the background

As a result, moving reads as early as possible is one of the most important optimizations for single-threaded code both in compilers and in hardware.
This optimization exists since long before multi-threading.
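A sketch of how that read-hoisting becomes visible across threads, as the classic store-buffering litmus test (class name and iteration count are mine; whether any reordered outcomes actually appear depends on the JIT and the hardware, and a dedicated harness such as jcstress is far better at provoking them):

```java
public class StoreBufferingTest {
    static int A, B;        // plain fields: reads may effectively move before writes
    static int r1, r2;      // written by one thread each, read after join()

    public static void main(String[] args) throws InterruptedException {
        int reordered = 0;
        int iterations = 5_000;
        for (int i = 0; i < iterations; i++) {
            A = 0; B = 0;
            Thread t1 = new Thread(() -> { B = 1; r2 = A; });
            Thread t2 = new Thread(() -> { A = 2; r1 = B; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // (r1, r2) == (0, 0) can only happen if at least one thread's
            // read was effectively performed before its preceding write.
            if (r1 == 0 && r2 == 0) reordered++;
        }
        System.out.println("reordered outcomes: " + reordered + " of " + iterations);
    }
}
```

This outcome is exactly a read "seeing" a state as if it ran before a write that precedes it in program order, which is what the model must account for.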

Had the JVM developers simply

declare executions that allow reads to observe future writes as illegal

then they would have had to remove this optimization from both:

  • JIT compiler: here the optimization can be removed in the JVM code
  • hardware: this would require inserting memory barriers around every memory access

That would have tanked the performance even for single-threaded programs.

Comments
