/ˌsɪŋkrənaɪˈzeɪʃən/

noun — "coordination of concurrent execution."

Synchronization is the set of techniques used in computing to coordinate the execution of concurrent threads or processes so they can safely share resources, exchange data, and maintain correct ordering of operations. Its primary purpose is to prevent race conditions, ensure consistency, and impose well-defined execution relationships in systems where multiple units of execution operate simultaneously.

Technically, synchronization addresses the fundamental problem that concurrent execution introduces nondeterminism. When multiple threads access shared memory or devices, the final outcome can depend on timing, scheduling, or hardware behavior. Synchronization mechanisms impose constraints on execution order, ensuring that critical sections are entered in a controlled way and that memory updates become visible across execution contexts in a predictable order.
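
To see where the nondeterminism comes from, consider what a single unguarded update of a shared counter typically expands to on most hardware (a C sketch; counter is assumed to be a shared variable):

int tmp = counter;   /* 1. load the shared value into a thread-private register */
tmp = tmp + 1;       /* 2. increment the private copy                           */
counter = tmp;       /* 3. store it back, overwriting any update another        */
                     /*    thread made between steps 1 and 3                    */

If two threads interleave these steps, both can read the same initial value and write back the same result, so one of the increments is lost.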

Common synchronization primitives include mutexes, semaphores, condition variables, barriers, and atomic operations. A mutex enforces mutual exclusion, allowing only one thread at a time to enter a critical section. Semaphores generalize this concept by allowing a bounded number of concurrent accesses. Condition variables allow threads to wait for specific conditions to become true, while barriers force a group of threads to reach a synchronization point before any may proceed. Atomic operations perform small read-modify-write updates indivisibly, without blocking.
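
As one concrete sketch, the following C fragment shows the usual POSIX condition-variable pattern: the waiting side rechecks its condition in a loop (wakeups may be spurious), and the signaling side changes the condition while holding the same mutex. The names ready, mtx, and cond are illustrative:

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
bool ready = false;                       /* the condition being waited on */

void consumer(void) {
    pthread_mutex_lock(&mtx);
    while (!ready)                        /* re-check: wakeups may be spurious  */
        pthread_cond_wait(&cond, &mtx);   /* atomically releases mtx and sleeps */
    /* ... use the data published by the producer ... */
    pthread_mutex_unlock(&mtx);
}

void producer(void) {
    pthread_mutex_lock(&mtx);
    ready = true;                         /* establish the condition */
    pthread_cond_signal(&cond);           /* wake one waiting thread */
    pthread_mutex_unlock(&mtx);
}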

At the hardware level, synchronization relies on atomic instructions provided by the CPU, such as compare-and-swap or test-and-set. These instructions guarantee that certain operations complete indivisibly, even in the presence of interrupts or multiple cores. Higher-level synchronization constructs are built on top of these primitives, often with support from the operating system kernel to manage blocking, waking, and scheduling.
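
As a minimal sketch of that layering, the spinlock below is built on C11's atomic_flag, which exposes a test-and-set operation; a production lock would add backoff, yielding, and operating-system support for blocking:

#include <stdatomic.h>

typedef struct { atomic_flag held; } spinlock_t;
/* initialize with: spinlock_t lock = { ATOMIC_FLAG_INIT }; */

void spin_lock(spinlock_t *l) {
    /* test-and-set: atomically set the flag and return its previous value.
       Keep retrying until the previous value was "clear" (lock was free). */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;                                  /* busy-wait */
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}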

Memory visibility is a critical aspect of synchronization. Modern processors may reorder instructions or cache memory locally for performance reasons. Synchronization primitives act as memory barriers, ensuring that writes performed by one thread become visible to others in a defined order. Without proper synchronization, a program may appear to work under light testing but fail unpredictably under load or on different hardware architectures.
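
As a sketch of this, the following C11 fragment uses a release store and an acquire load to publish data through a flag; payload and ready are illustrative names, and without the release/acquire ordering the reader could observe the flag before the data:

#include <stdatomic.h>
#include <stdbool.h>

int payload;                        /* ordinary shared data             */
atomic_bool ready = false;          /* flag used to publish the payload */

void writer(void) {
    payload = 42;                                    /* write the data        */
    atomic_store_explicit(&ready, true,
                          memory_order_release);     /* all earlier writes    */
                                                     /* become visible before */
                                                     /* the flag does         */
}

void reader(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                           /* once the flag is observed, the acquire */
                                    /* ordering makes the payload visible     */
    /* payload is guaranteed to read 42 here */
}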

A simplified example of synchronized access to a shared counter, written here with POSIX mutex calls:

pthread_mutex_lock(&mutex);      /* block until exclusive access is granted      */
counter = counter + 1;           /* critical section: read, modify, write        */
pthread_mutex_unlock(&mutex);    /* release so the next waiting thread can enter */

In this example, synchronization guarantees that each increment operation is applied correctly, even if multiple threads attempt to update the counter concurrently. Without the mutex, increments could overlap and produce incorrect results.
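
Filling in the surrounding setup, a minimal complete C version might look as follows; removing the two lock calls makes the final total usually fall short of the expected 200000:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&mutex);    /* enter the critical section */
        counter = counter + 1;
        pthread_mutex_unlock(&mutex);  /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex */
    return 0;
}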

Operationally, synchronization involves a trade-off between correctness and performance. Excessive synchronization can reduce parallelism and throughput, while insufficient synchronization can lead to subtle, hard-to-debug errors. Effective system design minimizes the scope and duration of synchronized regions while preserving correctness.
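
One common tactic, sketched below with illustrative names, is to perform the expensive, thread-private work outside the lock and hold it only for the brief shared update:

#include <pthread.h>

static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_total = 0;

/* Each thread sums its own batch privately, then takes the lock only
   for the single shared update, keeping the serialized region small. */
void add_batch(const int *values, int n) {
    long local = 0;
    for (int i = 0; i < n; i++)        /* no lock held: thread-private work */
        local += values[i];

    pthread_mutex_lock(&stats_lock);
    shared_total += local;             /* brief critical section */
    pthread_mutex_unlock(&stats_lock);
}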

Conceptually, synchronization is like a set of traffic signals at a busy intersection. The signals restrict movement at certain times, not to slow everything down arbitrarily, but to prevent collisions and ensure that all participants eventually move safely and predictably.

See Mutex, Thread, Race Condition, Deadlock.