Let me restate the problem. We're given an array $A_0$ and a sequence of index pairs $(\ell_1,r_1),\dots,(\ell_m,r_m)$. Set $A_i=\text{reverse}(A_{i-1},\ell_i,r_i)$ for $i=1,\dots,m$. The goal is to compute the array $A_m$.
The naive way to solve this is to just apply the reverse operations one at a time, i.e., invoke Yuval's reverse algorithm $m$ times. The running time of the naive solution is $\Theta\bigl(\sum_{i=1}^m (r_i-\ell_i+1)\bigr)$. The question is, can we do better?
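For concreteness, here's the naive baseline as a short sketch (I've substituted a plain in-place swap loop for Yuval's routine, and I use 1-indexed bounds to match the notation above):

```python
def reverse_in_place(A, l, r):
    """Reverse A[l..r] in place (1-indexed, inclusive); Theta(r - l + 1) time."""
    l, r = l - 1, r - 1              # convert to 0-indexed
    while l < r:
        A[l], A[r] = A[r], A[l]
        l, r = l + 1, r - 1

def naive(A0, ops):
    """Apply the reversals one at a time; total time Theta(sum of (r_i - l_i + 1))."""
    A = list(A0)
    for l, r in ops:
        reverse_in_place(A, l, r)
    return A
```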
The answer is, yes, in some cases. It's possible to build a data structure that processes the sequence of reversal operations faster than the naive solution. With sufficient cleverness, you can get the total running time down to $O(n + m \lg n)$ -- in comparison, the naive algorithm can be as bad as $\Theta(nm)$ in the worst case. I'll explain how to do it below.
We can model the desired rearrangement by a permutation $\pi$ on the array indices. Such a permutation acts on arrays: given any array $A$ and any permutation $\pi$ on the indices, we can obtain an array $\pi A$, namely, the array $A'$ given by $A'[\pi(j)]=A[j]$ (so $\pi(j)$ is the final position of the element that started at position $j$). Notice that the operation $\text{reverse}(A,\ell,r)$ returns the array $\pi A$, where $\pi(j)=\ell+r-j$ for $\ell \le j \le r$ and $\pi(j)=j$ for all other $j$. Let $\text{rev}(\ell,r)$ denote this permutation.
This gives us a new strategy for solving the problem: first, compute $\pi = \text{rev}(\ell_m,r_m) \circ \cdots \circ \text{rev}(\ell_1,r_1)$; then, compute $A_m = \pi A_0$. The latter step can be done in $O(n)$ time, with essentially no space beyond the output array. So, how quickly can we compute $\pi$?
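To pin down the convention, here's what the two ingredients look like when $\pi$ is stored explicitly as an array (a sketch; the function names are mine):

```python
def rev_perm(n, l, r):
    """The permutation rev(l, r): j -> l + r - j on [l, r], identity elsewhere (1-indexed values)."""
    return [l + r - j if l <= j <= r else j for j in range(1, n + 1)]

def apply_permutation(pi, A):
    """Compute pi A, i.e. B with B[pi(j)] = A[j]; a single O(n) pass."""
    B = [None] * len(A)
    for j, x in enumerate(A):
        B[pi[j] - 1] = x             # pi[j] is the 1-indexed final position of A[j]
    return B
```

For instance, `apply_permutation(rev_perm(5, 2, 4), list('abcde'))` gives `['a', 'd', 'c', 'b', 'e']`, the same as reversing positions 2 through 4 directly.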
Here is one way to compute $\pi$. We will represent a permutation $\pi$ as an interval tree, i.e., a binary tree where each leaf holds an interval of indices. The intervals on the leaves will be pairwise disjoint, their union will be $\{1,2,\dots,n\}$, and they will be in sorted order (from left to right). Each internal node will hold an interval representing the union of the intervals on all of its descendants. Also, we'll annotate each leaf with an affine function of the form $f(x)=x+\alpha$ or $f(x)=\alpha-x$. The idea is that if we have a leaf with interval $[a,b]$ and function $f$, then $\pi(j)=f(j)$ for $j=a,a+1,\dots,b$.
In this way, we can represent a permutation as this sort of interval tree. Notice that the identity permutation can be represented as an interval tree with one leaf, with interval $[1,n]$ and function $f(x)=x$. Also, the permutation $\text{rev}(\ell_m,r_m)$ corresponding to a single reversal operation can be represented as an interval tree with three leaves, namely $[1,\ell_m-1]$ with $f(x)=x$, $[\ell_m,r_m]$ with $f(x)=\ell_m+r_m-x$, and $[r_m+1,n]$ with $f(x)=x$ (omitting the first or last leaf if it is empty).
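In code, I'll flatten this tree to a sorted list of leaves, each written as a tuple $(a, b, \alpha, s)$ meaning $\pi(j) = \alpha + s\,j$ on $[a, b]$ with $s = \pm 1$ (a simplification that's enough to convey the idea; the names are mine):

```python
def identity_pieces(n):
    """The identity permutation: one leaf covering [1, n] with pi(j) = j."""
    return [(1, n, 0, 1)]

def single_reversal_pieces(n, l, r):
    """rev(l, r) as at most three leaves, omitting empty ones at the ends."""
    pieces = []
    if l > 1:
        pieces.append((1, l - 1, 0, 1))      # pi(j) = j
    pieces.append((l, r, l + r, -1))         # pi(j) = l + r - j
    if r < n:
        pieces.append((r + 1, n, 0, 1))      # pi(j) = j
    return pieces
```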
Now suppose we have an interval tree representing $\sigma = \text{rev}(\ell_m,r_m) \circ \cdots \circ \text{rev}(\ell_{i+1},r_{i+1})$. Let me explain how to derive an interval tree representing $\sigma \circ \text{rev}(\ell_i,r_i)$. Basically, we search the interval tree to find all leaves whose interval overlaps $[\ell_i,r_i]$, and split the (at most two) leaves that only partially overlap it into two pieces, one piece outside $[\ell_i,r_i]$ and one piece inside. Now let $g(x)=\ell_i+r_i-x$, and note that $(\sigma \circ \text{rev}(\ell_i,r_i))(j)=\sigma(g(j))$ for $j \in [\ell_i,r_i]$. So, for each leaf contained in $[\ell_i,r_i]$, if it currently has interval $[a,b]$ and function $f$, it now describes the new permutation on the mirrored interval $[g(b),g(a)]=[\ell_i+r_i-b,\ell_i+r_i-a]$ with the function $f \circ g$ (which is again of the form $x+\alpha$ or $\alpha-x$). In other words, the leaves inside $[\ell_i,r_i]$ keep their functions up to pre-composition with $g$, their intervals get mirrored, and their left-to-right order within $[\ell_i,r_i]$ is reversed; everything outside $[\ell_i,r_i]$ is untouched.
So, we iteratively do this, for $i=m-1,m-2,\dots,1$. When we're done, we have an interval tree representation of $\pi$. Now we can compute $\pi A_0$ in linear time. The running time to construct the interval tree depends on how many leaves there are. Notice that leaves only ever get split, never merged, and each reversal introduces at most two new split points, so the final tree has $k \le \min(n,2m+1)$ leaves. Intuitively, two adjacent positions end up in the same leaf iff they are always treated the same, i.e., no reversal $[\ell_i,r_i]$ ever separates them (where positions are taken in the array $A_{i-1}$ at the time of reversal $i$); each such maximal run of positions is one leaf in the final tree. Each reversal touches $O(k)$ leaves, so the total running time is $O(mk+n)$. A sketch of this basic version is below.
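To keep the sketch short I use a flat sorted list of leaves rather than an actual tree (so locating the affected leaves is a linear scan, which doesn't change the $O(mk+n)$ bound), with the same $(a, b, \alpha, s)$ tuples as above; the function names are mine:

```python
def compose_with_reversal(pieces, l, r):
    """Replace pi by pi o rev(l, r): split the boundary leaves, then mirror the
    leaves inside [l, r] and pre-compose g(x) = l + r - x into each of them."""
    before, inside, after = [], [], []
    for (a, b, alpha, s) in pieces:
        if b < l or a > r:                       # leaf is disjoint from [l, r]
            (before if b < l else after).append((a, b, alpha, s))
            continue
        if a < l:                                # split off the part left of l
            before.append((a, l - 1, alpha, s)); a = l
        if b > r:                                # split off the part right of r
            after.append((r + 1, b, alpha, s)); b = r
        inside.append((a, b, alpha, s))
    # (pi o rev)(j) = pi(l + r - j) on [l, r]: a leaf that covered [a, b] now
    # describes the result on the mirrored interval [l+r-b, l+r-a], with function
    # alpha + s*(l + r - j); the left-to-right order of the inside leaves flips.
    mirrored = [(l + r - b, l + r - a, alpha + s * (l + r), -s)
                for (a, b, alpha, s) in reversed(inside)]
    return before + mirrored + after

def reversal_permutation(n, ops):
    """Build pi = rev(l_m, r_m) o ... o rev(l_1, r_1) as a sorted leaf list."""
    pieces = [(1, n, 0, 1)]                      # start from the identity
    for l, r in reversed(ops):                   # fold in rev_m first, rev_1 last
        pieces = compose_with_reversal(pieces, l, r)
    return pieces

def apply_pieces(pieces, A):
    """Compute pi A, i.e. B with B[pi(j)] = A[j]; O(n) once pi is built."""
    B = [None] * len(A)
    for (a, b, alpha, s) in pieces:
        for j in range(a, b + 1):
            B[alpha + s * j - 1] = A[j - 1]      # 1-indexed pi, 0-indexed lists
    return B
```

For example, `apply_pieces(reversal_permutation(3, [(1, 2), (2, 3)]), ['a', 'b', 'c'])` returns `['b', 'c', 'a']`, the same thing the naive algorithm produces.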
It's possible to speed this up even further by annotating the internal nodes as well, with affine maps that are applied lazily to everything beneath them.
The idea is that an internal node may carry a pending affine map $h$, which records that every leaf below it still needs to be updated: such a leaf's function $f$ should become $f \circ h$ and its interval $[a,b]$ should become $h^{-1}([a,b])$ (written as a sorted interval), and if $h$ has slope $-1$, the left-to-right order of everything below the node flips. For a single reversal, the pending map is just the $g$ from above (which is its own inverse), so this is exactly the leaf update from the previous paragraph, deferred. Since maps of the form $x+\alpha$ and $\alpha-x$ are closed under composition, two pending maps combine into one in $O(1)$ time, so annotations can be pushed down a level whenever we walk past a node; equivalently, the effective function at a leaf is its own function pre-composed with the annotations on the path up to the root, applied root-first to the index.
This tree representation is even better: it allows you to derive the tree for $\text{rev}(\ell_m,r_m) \circ \cdots \circ \text{rev}(\ell_i,r_i)$ from the tree for $\text{rev}(\ell_m,r_m) \circ \cdots \circ \text{rev}(\ell_{i+1},r_{i+1})$ in $O(\lg k)$ time, where $k$ is the number of leaves, provided the tree is kept balanced (e.g., a treap, splay tree, or red-black tree over the leaves, with split and join). Basically, we may have to split at most two leaves (one that partially overlaps $[\ell_i,r_i]$ on the left, and one on the right). Then we split the tree into three parts -- the leaves before $\ell_i$, the leaves inside $[\ell_i,r_i]$, and the leaves after $r_i$ -- record the pending map $g(x)=\ell_i+r_i-x$ (and the order flip) at the root of the middle part, which is a single $O(1)$ annotation, and join the three parts back together. Splitting and joining a balanced tree take $O(\lg k)$ time, and pending annotations get pushed down as we descend, so the whole update costs $O(\lg k)$.
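Here's a sketch of that variant, written as a treap (randomized balanced BST) over the leaves. The leaves are keyed implicitly by position (each node stores the total width of its subtree), so mirrored intervals never need to be relabeled; each node is one leaf, storing its width and the arithmetic progression of final positions it maps to. The names and structure are mine, and any balanced tree with split/join would do:

```python
import random

class Piece:
    """A leaf: w consecutive positions whose images are v0, v0+s, ..., v0+s*(w-1)."""
    def __init__(self, w, v0, s):
        self.w, self.v0, self.s = w, v0, s
        self.prio = random.random()    # heap priority, keeps the treap balanced
        self.left = self.right = None
        self.size = w                  # total width of this subtree
        self.flip = False              # lazy flag: children still need mirroring

def width(t): return t.size if t else 0

def pull(t): t.size = t.w + width(t.left) + width(t.right)

def mirror(t):
    """Lazily reverse subtree t: swap children, flip this leaf, defer the rest."""
    if t:
        t.left, t.right = t.right, t.left
        t.v0, t.s = t.v0 + t.s * (t.w - 1), -t.s
        t.flip = not t.flip

def push(t):
    if t.flip:
        mirror(t.left); mirror(t.right)
        t.flip = False

def merge(a, b):
    if not a or not b: return a or b
    if a.prio > b.prio:
        push(a); a.right = merge(a.right, b); pull(a); return a
    push(b); b.left = merge(a, b.left); pull(b); return b

def split(t, k):
    """Split t into (first k positions, the rest), cutting one leaf in two if needed."""
    if not t: return None, None
    push(t)
    if width(t.left) >= k:
        a, b = split(t.left, k); t.left = b; pull(t); return a, t
    k -= width(t.left)
    if k >= t.w:
        a, b = split(t.right, k - t.w); t.right = a; pull(t); return t, b
    pre = Piece(k, t.v0, t.s)          # the boundary falls inside this leaf: cut it
    pre.prio = t.prio
    pre.left, t.left = t.left, None
    t.w, t.v0 = t.w - k, t.v0 + t.s * k
    pull(pre); pull(t)
    return pre, t

def reverse_range(root, l, r):
    """Fold rev(l, r) into the permutation in O(lg k) expected time."""
    a, rest = split(root, l - 1)
    mid, c = split(rest, r - l + 1)
    mirror(mid)                        # the whole update is one O(1) lazy annotation
    return merge(a, merge(mid, c))

def apply_tree(root, A):
    """Compute B with B[pi(j)] = A[j] by one in-order walk; O(n) time."""
    B, pos = [None] * len(A), 0
    def walk(t):
        nonlocal pos
        if not t: return
        push(t)
        walk(t.left)
        for i in range(t.w):
            B[t.v0 + t.s * i - 1] = A[pos]; pos += 1
        walk(t.right)
    walk(root)
    return B

def reversals_fast(A0, ops):
    """Start from the identity (one leaf) and fold in the reversals from last to first."""
    root = Piece(len(A0), 1, 1)
    for l, r in reversed(ops):
        root = reverse_range(root, l, r)
    return apply_tree(root, A0)
```

Each `reverse_range` call is two splits, one $O(1)$ lazy `mirror`, and two joins, so it takes $O(\lg k)$ expected time; `apply_tree` then produces $A_m$ in a single $O(n)$ pass.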
Consequently, the total running time of this improved solution is $O(n + m \lg k)$, where $k$ is the number of leaves in the final interval tree. Since $k \le n$, a conservative upper bound is $O(n + m \lg n)$ to perform $m$ reversal operations on an array of size $n$. This is better than the naive algorithm, whose running time can be as bad as $\Theta(nm)$.