/paɪp/
noun — “the secret tunnel your data sneaks through between programs.”
Pipe is an inter-process communication mechanism that feeds the output of one program directly to another as input, in a sequential, stream-oriented fashion. Common in Unix-like systems and command-line scripting, Pipes let commands be chained into powerful, modular workflows without intermediate files.
Technically, a Pipe is a buffer managed by the operating system that temporarily stores data written by the sending process until the receiving process reads it. By default, reads and writes block when the buffer is empty or full, which synchronizes the two processes; non-blocking operation is also possible (for example, via the `O_NONBLOCK` flag). Named pipes (FIFOs) extend this concept with a persistent filesystem endpoint that unrelated processes can open by path.
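As a minimal sketch of a named pipe in action (the path `/tmp/demo.fifo` is just an illustrative name), two commands that share no parent process can meet at the FIFO:

```shell
mkfifo /tmp/demo.fifo            # create a persistent pipe endpoint in the filesystem
echo "hello" > /tmp/demo.fifo &  # writer blocks until a reader opens the other end
cat /tmp/demo.fifo               # reader drains the kernel buffer and prints "hello"
rm /tmp/demo.fifo                # the endpoint persists until explicitly removed
```

Note the `&`: because opening a FIFO blocks until both ends are attached, the writer is backgrounded so the reader can open the other end.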
Pipes integrate seamlessly with Standard Output and Standard Input, allowing flexible data routing. For example, `ls | grep txt | sort` streams the file listing into `grep` and then into `sort`, all in memory, without creating temporary files. This efficiency makes Pipes foundational for shell scripting, automation, and batch processing.
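The same chaining can be made fully deterministic by substituting `printf` for `ls` (a stand-in used here so the input is fixed); each stage reads the previous stage's Standard Output with no temporary files:

```shell
# grep keeps lines containing "txt"; sort orders the survivors
printf 'b.txt\na.log\nc.txt\n' | grep txt | sort
# prints:
# b.txt
# c.txt
```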
Pipes also combine with Redirection and File Descriptors to route both the output and error streams between processes — for example, `2>&1` merges Standard Error into Standard Output before the data enters the pipe. Careful management of Pipes prevents deadlocks, where a process waits indefinitely for data that is never produced or for a buffer that is never drained.
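The file-descriptor routing above is a standard POSIX shell idiom; a short sketch of merging both streams into one pipeline:

```shell
# fd 1 (stdout) already feeds the pipe; 2>&1 duplicates fd 2 (stderr)
# onto fd 1, so both streams flow through to sort
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | sort
# prints:
# to stderr
# to stdout
```

Without the `2>&1`, the line written to Standard Error would bypass the pipe entirely and land on the terminal.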
Conceptually, Pipe is like a water slide for bits: data enters at one end, curves through the system, and exits cleanly at the other side.
Pipe is like a secret tunnel for your commands — faster, quieter, and much cooler than leaving a trail of temporary files.
See Redirection, I/O Stream, Shell Scripting, Standard Input, Standard Output.