/reɪs kənˈdɪʃən/
noun — "outcome depends on timing, not logic."
A race condition is a concurrency error that occurs when the behavior or final state of a system depends on the relative timing or interleaving of multiple executing threads or processes accessing shared resources. In a race condition, two or more execution paths “race” to read or modify shared data, and the result varies depending on which one happens to run first. This makes the system nondeterministic: the same code, given the same inputs, may produce different results across executions.
Technically, a race condition arises when three conditions are present simultaneously. First, multiple execution units run concurrently. Second, they share mutable state, such as memory, files, or hardware registers. Third, access to that shared state is not properly coordinated using synchronization mechanisms. When these conditions align, operations that were assumed to be logically atomic are instead split into smaller steps that can interleave unpredictably.
A classic example is incrementing a shared counter. The operation “counter = counter + 1” is not a single indivisible action at the machine level. It involves reading the current value, adding 1, and writing the result back. If two threads perform this sequence concurrently without synchronization, both may read the same initial value and overwrite each other’s updates, resulting in a lost increment.
# conceptual sequence without synchronization
Thread A reads counter = 10
Thread B reads counter = 10
Thread A writes counter = 11
Thread B writes counter = 11 # one increment lost
From the system’s perspective, nothing illegal occurred. Each instruction executed correctly. The error emerges only at the semantic level, where the intended invariant “each increment increases the counter by 1” is violated. This is why race conditions are particularly dangerous: they often escape detection during testing and appear only under specific timing, load, or hardware conditions.
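The lost update above can be reproduced in real code. The following is a minimal sketch in Python using the standard threading module; the name worker is illustrative, and depending on interpreter version and timing the loss may not appear on every run, which is exactly why such bugs escape testing.

# minimal sketch of the lost-update race (Python, standard threading module)
import threading

counter = 0

def worker():
    global counter
    for _ in range(100_000):
        counter += 1  # read, add, write: three steps that can interleave

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 400000; often less, because increments are lost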
Race conditions are not limited to memory. They can occur with file systems, network sockets, hardware devices, or any shared external resource. For example, two processes checking whether a file exists before creating it may both observe that the file is absent and then both attempt to create it, leading to corruption or failure. This class of bug is sometimes called a time-of-check to time-of-use (TOCTOU) race.
Preventing a race condition requires enforcing ordering or exclusivity. This is typically achieved using synchronization primitives such as mutexes, semaphores, or atomic operations. These tools ensure that critical sections of code execute as if they were indivisible, even though they may involve multiple low-level instructions. In well-designed systems, synchronization also establishes memory visibility guarantees, ensuring that updates made by one execution context are observed consistently by others.
However, eliminating race conditions is not just about adding locks everywhere. Over-synchronization can reduce concurrency and harm performance, while incorrect lock ordering can introduce deadlocks. Effective design minimizes shared mutable state, favors immutability where possible, and clearly defines ownership of resources. Many modern programming models encourage message passing or functional paradigms precisely because they reduce the surface area for race conditions.
Conceptually, a race condition is like two people editing the same document at the same time without coordination. Each person acts rationally, but the final document depends on whose changes happen to be saved last. The problem is not intent or correctness of individual actions, but the absence of rules governing their interaction.
See Synchronization, Mutex, Thread, Deadlock.