
I've read this and this answer. I've also searched the book C++ Concurrency in Action and found no discussion of volatile, nor any example using it. It looks like volatile isn't designed for concurrency at all. So for concurrent programming, is it sufficient to just use atomic, mutex, etc., and forget about volatile? Are there any cases where volatile may be needed for concurrency?
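
To be concrete about what I mean by "atomic, mutex, etc.", here is a minimal example I put together myself (not code from the linked answers): an atomic counter plus a mutex-protected container.

#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

std::atomic<int> counter{0};  // updated from many threads without a lock
std::mutex results_mutex;
std::vector<int> results;     // non-atomic, so it is protected by the mutex

void worker(int value) {
  counter.fetch_add(1, std::memory_order_relaxed);  // atomic update
  std::lock_guard<std::mutex> lock(results_mutex);  // lock around the container
  results.push_back(value);
}

int main() {
  std::thread t1(worker, 1), t2(worker, 2);
  t1.join();
  t2.join();
}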

  • No, there aren't any -- volatile was designed more for things like handling memory-mapped hardware registers. For concurrent programming, it's simply not the right tool for the job. Commented Feb 18, 2018 at 19:07
  • There’s a reason it’s not mentioned, and it’s the same reason that pirate ships and bananas and Wittgenstein’s Tractatus aren’t mentioned. Commented Feb 18, 2018 at 19:09
  • @nolbdnilo Arrrr! Commented Feb 18, 2018 at 22:44
  • 1
    This question has been asked and answered a lot on SO. Problem is, some of the green check-mark answers with many thumbs up are just wrong. The short answer is, if you are not interfacing with memory-mapped hardware, just forget you ever heard of volatile. What you don't remember you know won't hurt you. Commented Feb 19, 2018 at 0:56

2 Answers


No. In C++, the volatile keyword tells the compiler that every access to the variable is observable and must not be optimized away. This is useful when dealing with memory that can be changed from outside your own code, e.g. a hardware register on a custom board.
For a more in-depth guide to volatile, read "volatile vs. volatile" by Herb Sutter.
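
As a rough illustration (the address and register layout below are made up, not taken from any real board), the classic use case looks like this; without volatile, the compiler would be free to hoist the load out of the loop and spin forever on a stale value:

#include <cstdint>

// Hypothetical memory-mapped status register at a made-up address.
volatile std::uint32_t* const STATUS_REG =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000);

void wait_for_ready() {
  // Because the pointee is volatile, each iteration re-reads the register;
  // the compiler may not cache the value.
  while ((*STATUS_REG & 0x1u) == 0) {
    // busy-wait until the hardware sets the ready bit
  }
}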




volatile and atomic are two orthogonal concepts, so combining them changes the program's semantics; otherwise they would not be orthogonal!

atomicity places constraints on sequencing (including the atomicity of reads and writes).

volatility places constraints on whether accesses to the variable can be elided.

volatile does not cause any sequencing between threads; nevertheless, it is still useful. Consider this program, which shows a progress bar: the computation is performed in one thread, while another thread is responsible for the graphics:

//thread A
extern std::atomic<int> progress;

void compute(){
  progress=0;
  //do some work
  progress=1;
  //do some work
  progress=2;
  //[...] and so on, 100 times
}

Inside the function compute, the compiler notices that progress is never read, only written many times. So it can optimize the code to:

void compute(){
  //do all the work
  progress=100;
}

By declaring volatile std::atomic<int> progress;, none of the writes can be collapsed, and the memory ordering ensures that the intervening operations are performed in between these writes.
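
For concreteness, here is a sketch of what that suggestion looks like (my own restatement of the example above, not code from the question):

#include <atomic>

// atomic for thread-safe access, volatile so the individual stores
// cannot be elided or collapsed by the optimizer.
volatile std::atomic<int> progress{0};

void compute(){
  for (int i = 1; i <= 100; ++i) {
    //do one unit of work
    progress = i;  // each store must be emitted
  }
}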


8 Comments

I am pretty sure this is not correct. Volatile makes absolutely no memory order promises. Atomic DOES. Every language lawyer I have heard chime in on this, including the Right Hon. Bjarne Stroustrup, has said not to use volatile on anything higher than silicon and sparks, no exceptions.
Nonsense: the compiler cannot optimize accesses to atomic objects away unless it can prove that there is no other thread possibly accessing the values.
@JiveDadson Unfortunately there is always a gap between what should be and necessity.
@DietmarKühl Try to prove what you say here: in this case an optimizer has the right to collapse all the writes to progress.
@DietmarKühl Implementations should make atomic stores visible to atomic loads within a reasonable amount of time. What do you believe this sentence means? Do you believe that each time there is a store, a load should happen? A store is visible if a load is performed on that store or on a store performed subsequently to that store!
