Bibliographic Information: Appeared in Proceedings of the 37th Annual IEEE Conference on Foundations of Computer Science, pp. 154-163, 1996.
We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time $t$ on $w$ processors can be performed on a faulty machine in the coded model using $w \log^{O(1)} w$ processors and time $t \log^{O(1)} w$. The failure probability of the computation will be at most $t \cdot \exp(-w^{1/4})$. We show that any computation that can be performed efficiently by a parallel machine can be performed in the coded model by an exponentially reliable fault-tolerant circuit whose size is greater by only a polylogarithmic factor: any circuit of width $w$ and height $h$ can be simulated in the coded model by a circuit of size $h w \log^{O(1)} w$ that will fail with probability at most $h \cdot \exp(-w^{1-1/\log \log w})$.
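For intuition only, the sketch below shows the classical baseline that this result improves upon: simple replication with majority voting (in the spirit of von Neumann's construction), in which every wire is carried by a bundle of noisy copies that is periodically restored by a vote. This is not the paper's coded-model construction, which achieves polylogarithmic total overhead with near-exponential reliability; the names `simulate_noisy_layer`, `REPLICAS`, and the fault rate are illustrative assumptions.

```python
import random

# Illustrative assumptions (not from the paper): each simulated wire is
# replicated REPLICAS times, every gate output is flipped independently
# with probability FAULT_RATE, and a majority vote restores each bundle.
REPLICAS = 31
FAULT_RATE = 0.05

def noisy(bit):
    """A gate output that is wrong with constant probability."""
    return bit ^ (random.random() < FAULT_RATE)

def majority(bits):
    return int(sum(bits) * 2 > len(bits))

def simulate_noisy_layer(bundles, gate):
    """Apply a 2-input gate to two replicated wire bundles, then restore
    the bundle by majority voting.  This replication baseline pays a
    multiplicative factor per wire, in contrast to the polylogarithmic
    total overhead achieved in the coded model."""
    a, b = bundles
    raw = [noisy(gate(x, y)) for x, y in zip(a, b)]
    vote = majority(raw)
    return [noisy(vote) for _ in range(REPLICAS)]

if __name__ == "__main__":
    xor = lambda x, y: x ^ y
    a = [1] * REPLICAS
    b = [0] * REPLICAS
    out = simulate_noisy_layer((a, b), xor)
    print("decoded output:", majority(out))  # 1 with high probability
```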
The codes used to communicate with our fault-tolerant circuits can be encoded and decoded in $O(n \log^{2} n)$ sequential time and are independent of the circuit they are used to communicate with.
We show how techniques from this paper can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
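As a point of reference, self-correction of a single linear function is usually done by a random self-reduction in the style of Blum, Luby, and Rubinfeld: to recover $f(x)$ from a table that is correct on most inputs, query it at random shifts and take a majority vote. The sketch below shows this standard sequential routine for a linear map over $\mathbb{Z}_p$; the paper's contribution is performing many such corrections in parallel with arbitrarily small overhead, which this sketch does not attempt. The names `noisy_table`, `P`, and the query count are illustrative.

```python
import random
from collections import Counter

P = 101  # illustrative prime modulus

def self_correct(noisy_table, x, queries=15):
    """Recover f(x) for a linear f: Z_p -> Z_p from a table correct on
    most (but not all) inputs, via the standard random self-reduction:
    f(x) = f(x + r) - f(r) for random r, decided by majority vote."""
    votes = Counter()
    for _ in range(queries):
        r = random.randrange(P)
        votes[(noisy_table[(x + r) % P] - noisy_table[r]) % P] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # f(x) = 7x mod P, with a small fraction of corrupted table entries.
    table = [(7 * x) % P for x in range(P)]
    for i in random.sample(range(P), 5):      # corrupt ~5% of entries
        table[i] = random.randrange(P)
    print(self_correct(table, 42), (7 * 42) % P)  # agree with high probability
```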