Computing the probability for data loss in two-dimensional parity RAIDs
Conference contribution posted on 06.02.2018 by Lars Nagel, Tim Süß
Parity RAIDs are used to protect storage systems against disk failures. The idea is to add redundancy to the system by storing the parity of subsets of disks on extra parity disks. A simple two-dimensional scheme arranges the data disks in a rectangular grid and extends every row and every column by one disk that stores the parity of that row or column. In this paper we describe several two-dimensional parity RAIDs and analyse, for each of them, the probability of data loss given that f random disks fail. This probability can be used to determine the overall probability of data loss using the model of Hafner and Rao. We reduce subsets of the forest counting problem to the different cases and show that the generalised problem is #P-hard. Furthermore, we adapt an exact algorithm by Stones to some of the problems; its worst-case runtime is exponential, but it is very efficient for small fixed f and thus sufficient for all real-world applications.
This work was partly developed in the ADA-FS project funded by the DFG Priority Program “Software for Exascale Computing” (SPPEXA, SPP 1648).
- Computer Science