import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 0]])
for x in A[1, :]:
    if x < failure_tolerance:
        x = 0
This obviously doesn't work; there seems to be something going on with writeability, but I can't figure out what.
The problem is that only x is altered here. x does not refer to a specific cell in the array; it merely holds the value of an element (not the cell that contains it), so assigning to x rebinds the local variable and never writes back to A. That means the loop has no effect.
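If you really want an explicit loop, you have to index into the array itself and assign to the cells, for example by iterating over the positions (a minimal sketch, assuming failure_tolerance is defined; the value 5 is only used for illustration):

import numpy as np

failure_tolerance = 5  # assumed example threshold
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 0]])
for i in range(A.shape[1]):          # walk over the columns of row 1
    if A[1, i] < failure_tolerance:  # compare the actual cell value
        A[1, i] = 0                  # write back into the array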
Nevertheless, one usually uses numpy's vectorized operations to perform such calculations on matrices in bulk:
A = np.array([[1,2,3], [4,5,6], [7,8,0]])
A[1,A[1,:] < failure_tolerance] = 0
Here A[1,:] < failure_tolerance constructs a boolean mask that is True for every value in the second row that is less than failure_tolerance. Next we set all those values in A to 0.
For example (with failure_tolerance = 5):
>>> failure_tolerance = 5
>>> A[1,A[1,:] < failure_tolerance] = 0
>>> A
array([[1, 2, 3],
       [0, 5, 6],
       [7, 8, 0]])
As you can see, the first element of the second row (4) has been replaced by 0, since it is smaller than 5.
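For completeness, the mask itself (computed on the original second row, before the assignment) looks like this:

>>> np.array([4, 5, 6]) < failure_tolerance
array([ True, False, False])

Only the position that holds 4 is True, and that is exactly the cell that gets overwritten.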
Usually numpy code like this runs faster, since numpy does not execute the element-wise work in Python but uses high-performance data structures and algorithms implemented in C. For (very) small matrices there won't be any noticeable difference (numpy may even be slower because of the call overhead), but if you work with huge matrices, numpy will definitely outperform a solution written in pure Python.
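If you want to verify this on your own machine, a rough benchmark could look like the following (just a sketch; the row size and threshold are arbitrary, and the exact numbers will vary):

import numpy as np
from timeit import timeit

failure_tolerance = 0.5
row = np.random.rand(1_000_000)  # one large row of random values

def python_loop():
    out = row.copy()
    for i in range(out.shape[0]):    # element-by-element in Python
        if out[i] < failure_tolerance:
            out[i] = 0
    return out

def vectorized():
    out = row.copy()
    out[out < failure_tolerance] = 0  # boolean-mask assignment in C
    return out

print('loop      :', timeit(python_loop, number=10))
print('vectorized:', timeit(vectorized, number=10))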
Furthermore, this syntax is quite declarative: an experienced numpy developer will immediately see that you are setting the values that are less than failure_tolerance to 0.
EDIT:
In case you want multiple conditions, you can use | as an element-wise logical or and & as an element-wise logical and; note that each condition needs its own parentheses, because these operators bind more tightly than the comparisons. For instance:
A = np.array([[1,2,3], [4,5,6], [7,8,0]])
A[1,(A[1,:] < failure_tolerance) & (A[1,:] > at_least_value)] = 0
This will set all values of the second row of A to 0, provided those values lie strictly between at_least_value and failure_tolerance.
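For example, with at_least_value = 3 and failure_tolerance = 5 (values chosen purely for illustration), only the 4 in the second row satisfies both conditions:

>>> at_least_value = 3
>>> failure_tolerance = 5
>>> A = np.array([[1,2,3], [4,5,6], [7,8,0]])
>>> A[1,(A[1,:] < failure_tolerance) & (A[1,:] > at_least_value)] = 0
>>> A
array([[1, 2, 3],
       [0, 5, 6],
       [7, 8, 0]])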