Don't defer to run-time what you can check at compile-time
When I started playing with your Array, I kept getting weird segfaults until I realized that I had forgotten to set the second template parameter correctly. This is something your constructor could easily check at compile-time with no overhead.
template <typename... Sizes>
Array(Sizes... sizes) : linearSize_(1)
{
    static_assert(sizeof...(sizes) == Dimensions, "wrong number of size arguments");
    allocate(1, sizes...);
}
Likewise for operator():
template <typename... Ns>
typename std::enable_if<(sizeof...(Ns) == Dimensions), T&>::type
operator()(Ns... ns)
{
    const auto idx = getIndex(1, ns...);
    return data_[idx];
}
C++14:
Your question is tagged c++11 but if you can use C++14, you can replace
typename std::enable_if<(sizeof...(Ns) == Dimensions), T&>::type
with
std::enable_if_t<(sizeof...(Ns) == Dimensions), T&>
which will do the exact same thing but look far less intimidating.
Note that I have taken the liberty to rename your dimension non-type template parameter to Dimensions. I found the singular very confusing and ambiguous with other usages of the term “dimension” in the code. I'd also recommend that you make it a habit to use CamelCase names for template parameters to distinguish them from other variables.
Avoid arbitrary restrictions
You've commented that your application currently only needs multi-dimensional arrays of at most 3 dimensions. This is fine, but it's no reason to arbitrarily restrict your Array if doing so doesn't make its implementation easier. The only place where you refer to the limit is inside getSize, and there the restriction shows up in the least helpful way. Your code actually allows me to create an Array<T, 42>, but when I ask for the size of the 5th dimension, it crashes at run-time. On the other hand, asking for the size of the 3rd dimension of a 2-dimensional Array won't be caught at all and invokes undefined behavior.
What you should do instead is check getSize's argument against the actual number of dimensions of the Array.
int getSize(int dim = 1) const
{
    if (dim < 1 || dim > Dimensions)
        throw std::runtime_error("invalid dimension");
    else
        return dimSize_[dim - 1];
}
But I have more to say about this function later…
If you really need to restrict the maximum number of dimensions of an Array, do it statically at class scope. However, instead of disallowing more than 3 dimensions, I'd rather enforce it has at least 1.
template <typename T, int Dimensions>
class Array
{
    static_assert(Dimensions >= 1, "Array must have at least 1 dimension");
    // …
};
Use 0-based indexing
It really confused me that getSize expects 1-based indexing of dimensions. This choice also makes your implementation more complicated because now you have to internally map the indices back to 0-based indices for subscripting the dimSize_ array.
Avoid non-intuitive default parameters
Defaulting parameters to reasonable values can help make your interfaces easier to use. However, if the default is non-obvious, it makes the interface more confusing and hides bugs. For example, I don't see why getSize should give me the size of the first dimension if I forget to specify one. Likewise, what justifies defaulting the number of dimensions of the Array to 1? If the default isn't obvious, I'd embrace the guideline that “explicit is better than implicit”.
Consider a re-design of Array::getSize
Combining some of the above thoughts, I would recommend a re-design of getSize. Zeroth, change the index to be 0-based. First, I would make the function a template that takes no arguments and a non-type template parameter for the dimension instead. Once you do this, you can statically check the dimension (against the actual number of dimensions).
template <int Dim>
typename std::enable_if<(Dim >= 0) && (Dim < Dimensions), int>::type
getSize() const noexcept
{
    return dimSize_[Dim];
}
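To see the effect, here is a minimal, self-contained sketch. This is a simplified stand-in for your class (extents stored as plain ints, no data), not your actual implementation:

```cpp
#include <array>
#include <type_traits>

template <typename T, int Dimensions>
class Array
{
    static_assert(Dimensions >= 1, "Array must have at least 1 dimension");

public:
    template <typename... Sizes>
    explicit Array(Sizes... sizes) : dimSize_{{sizes...}}
    {
        static_assert(sizeof...(Sizes) == Dimensions, "wrong number of size arguments");
    }

    // Out-of-range dimensions are rejected at compile-time: no overload
    // exists for Dim < 0 or Dim >= Dimensions.
    template <int Dim>
    typename std::enable_if<(Dim >= 0) && (Dim < Dimensions), int>::type
    getSize() const noexcept
    {
        return dimSize_[Dim];
    }

private:
    std::array<int, Dimensions> dimSize_;
};

// Array<double, 2> grid(3, 4);
// grid.getSize<0>();  // 3
// grid.getSize<1>();  // 4
// grid.getSize<2>();  // error: no matching member function
```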
Now we could offer an overload that returns the total number of elements in the Array which might be useful sometimes.
int
getSize() const noexcept
{
    return static_cast<int>(data_.size());
}
Consider using std::size_t for indices
Notice the ugly static_cast in the above snippet? Some people argue that it was a mistake to use unsigned types for array sizes and indices, but their use in the standard library is now so prevalent that converting back and forth generally causes more trouble than it does good.
The linearSize_ member of Array is redundant
See the above implementation of getSize. The std::vector already knows how big it is.
Array::operator() should always return a reference
I'd be consistent with the standard library containers and always return a reference from subscripting functions (including the const overloads). If T == int or a similarly cheap-to-copy type, it doesn't really matter, but people might also want to put std::strings or other expensive types into your Array. Since your implementation is based on std::vector and the latter always returns a reference anyway, simply passing it through doesn't cost you anything extra.
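A minimal sketch of that convention, using a hypothetical 1-dimensional wrapper so the index mapping stays out of the way (Array1D and its members are made up for the example):

```cpp
#include <cstddef>
#include <string>
#include <vector>

template <typename T>
class Array1D
{
public:
    explicit Array1D(std::size_t n) : data_(n) { }

    // Mutable access returns T&.
    T& operator()(std::size_t i) { return data_[i]; }

    // Const access still returns a reference, just a const one. No copy of
    // a potentially expensive T (such as std::string) is ever made.
    const T& operator()(std::size_t i) const { return data_[i]; }

private:
    std::vector<T> data_;
};
```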
Make Array a container
I think your Array would be significantly more useful if it modeled the Container concept from the standard library. You can look the exact requirements up yourself. Suffice it to say that implementing them would for the most part be an exercise in simply delegating to the underlying std::vector. For example:
typename std::vector<T>::const_iterator
begin() const { return data_.begin(); }

typename std::vector<T>::const_iterator
end() const { return data_.end(); }
C++14:
In C++14, you can replace the tedious typename std::vector<T>::const_iterator with simply auto.
Note how this matches with the earlier suggestion of adding a getSize member function that takes no arguments and returns the total number of elements in the Array. You would have to rename it to size to be standards-compliant, of course.
Being able to iterate the elements of your multi-dimensional array in sequential order could be a huge performance gain for algorithms like adding two Arrays (of equal dimensions) element-wise. Instead of repetitively mapping tuples of indices to offsets in the linear array, we could simply iterate over the linear memory in one blast. Not to mention that we wouldn't have to do this ourselves but could leverage the standard library algorithms.
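As an illustration of the pay-off, here is an element-wise addition over the flattened storage, sketched with plain std::vector<double> standing in for the linear memory of two equally-sized Arrays (the function name is made up for the example):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Adds two flattened arrays of equal size in a single linear pass --
// no tuple-of-indices-to-offset mapping involved.
std::vector<double> addElementwise(const std::vector<double>& a,
                                   const std::vector<double>& b)
{
    std::vector<double> sum(a.size());
    std::transform(a.begin(), a.end(), b.begin(), sum.begin(),
                   std::plus<double>{});
    return sum;
}
```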
Consider exposing allocator support
All standard library containers that dynamically allocate memory are customizable via an additional template parameter AllocatorT that defaults to std::allocator<T>, which simply uses the global operator new and operator delete. While the allocator interface is a little clumsy, supporting this customization would be very easy in your case since all you'd have to do is pass the allocator on to the std::vector.
template
<
    typename T,
    std::size_t Dimensions,
    typename AllocatorT = std::allocator<T>
>
class Array
{
    static_assert((Dimensions >= 1), "...");

public:

    Array() = default;

    Array(const AllocatorT& alloc) : data_(alloc) { }

    template <typename... SizeTs>
    Array(SizeTs... sizes)
    {
        static_assert((sizeof...(SizeTs) == Dimensions), "...");
        resize(sizes...);
    }

    template <typename... SizeTs>
    typename std::enable_if<(sizeof...(SizeTs) == Dimensions)>::type
    resize(SizeTs... sizes);

    // …

private:

    std::array<std::size_t, Dimensions> dimSize_;
    std::vector<T, AllocatorT> data_;

    // …

};

template <typename T, std::size_t Dimensions, typename AllocatorT>
template <typename... SizeTs>
typename std::enable_if<(sizeof...(SizeTs) == Dimensions)>::type
Array<T, Dimensions, AllocatorT>::resize(SizeTs... sizes)
{
    // If you want to be very careful, check that the sizes can be converted
    // to std::size_t without loss.
    const std::array<std::size_t, Dimensions> dimsize = {{ std::size_t(sizes)... }};
    const auto newsize = std::accumulate(std::begin(dimsize), std::end(dimsize),
                                         std::size_t {1},
                                         std::multiplies<std::size_t> {});
    // Zero the extents first so the Array never claims more elements than
    // data_ actually holds if the resize throws.
    dimSize_.fill(0);
    data_.resize(newsize);
    dimSize_ = dimsize;
}
Since there is no clean way to accept an additional allocator argument in the constructor that takes the dimensions as variable template parameter pack, you'll probably want to add a resize member function so you can first construct an empty Array with a supplied allocator (if the default-constructed allocator is not good enough) and then resize it afterwards.
Why am I taking the allocator by const-reference and not by value? Maybe because the standard library containers make the same mistake.
C++14:
In C++14 you can use std::multiplies<> instead of std::multiplies<T> which is so much better.
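The linear-size computation inside resize is easy to check in isolation. Here is a standalone version for a fixed 3-dimensional case (the function name is made up for the example):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <numeric>

// Total number of elements of an array with extents {d0, d1, d2}:
// the product of all extents.
std::size_t linearSize(const std::array<std::size_t, 3>& extents)
{
    return std::accumulate(extents.begin(), extents.end(),
                           std::size_t{1},
                           std::multiplies<std::size_t>{});
}
```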
Consider iteration instead of recursion
Recursion is often said to be elegant, but I doubt that it is a net gain in this case. On the contrary, the recursive implementations of getIndex and computeSubSize rather confused me.
For comparison, I'll show you an iterative approach from the implementation of a multi-dimensional array class I once wrote myself.
// This is a private helper function
template <typename... Indices>
std::size_t
position_(Indices... idcs) const
{
    const std::array<std::size_t, Dims> indices = {{idcs...}};
    assert(this->indices_valid_(indices));
    auto pos = std::size_t {indices[0]};
    for (auto dim = std::size_t {1}; dim < Dims; ++dim)
        pos = pos * this->extents_[dim] + indices[dim];
    return pos;
}
Dims is the equivalent of your dimension and extents_ is what you call dimSize_.
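To make the mapping concrete, here is a standalone version of the same loop for a fixed 2-dimensional, row-major case (names are made up for the example):

```cpp
#include <array>
#include <cstddef>

// Row-major offset of (i, j) in an array with the given extents:
// pos = i * extents[1] + j, built up exactly like the loop above.
std::size_t position(const std::array<std::size_t, 2>& extents,
                     std::size_t i, std::size_t j)
{
    std::size_t pos = i;
    pos = pos * extents[1] + j;
    return pos;
}
```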
Offer bounds-checking for indexing
Subscripting errors in multi-dimensional arrays are hard to debug. For example, if you accidentally swap two indices, chances are high that you still land on a valid index of the backing std::vector, so no amount of “debug-STL” will help you there. Notice the assert in the code above? The function indices_valid_ is implemented like this.
// This is a private helper function
bool
indices_valid_(const std::array<std::size_t, Dims>& indices) const noexcept
{
    for (auto dim = std::size_t {}; dim < Dims; ++dim)
    {
        if (indices[dim] >= this->extents_[dim])
            return false;
    }
    return true;
}
If the code is compiled with -DNDEBUG, the assert and all associated overhead will go away.
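The check itself is trivial to exercise in isolation. Here is a standalone version for a fixed 2-dimensional case (the free-function name is made up for the example):

```cpp
#include <array>
#include <cstddef>

// True if every index is strictly below the corresponding extent.
bool indicesValid(const std::array<std::size_t, 2>& extents,
                  const std::array<std::size_t, 2>& indices)
{
    for (std::size_t dim = 0; dim < 2; ++dim)
    {
        if (indices[dim] >= extents[dim])
            return false;
    }
    return true;
}
```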
Redundant inlines
You don't have to declare templates as inline, that's implicit. In the same way, functions defined inside the class body are also implicitly inline.
Random thoughts about the Field…
I'm not sure I can say many useful things about the Field class template. My first thought is: Do you really need this type? I'm having a hard time thinking of an algorithm that can work on arrays of different numbers of dimensions. It seems likely to me that you could refactor your application such that the number of dimensions is always known statically. I'd have to know more about your application to say more about this, of course.
Assuming you really need to make the number of dimensions a run-time value, there is not much that static type-checking can do for you. You'll have to implement your own dynamic checking that can at best terminate your program if it detects a bug. I recommend you use assert again for this. It has the correct semantics and users can disable the checking in favor of more speed in already-debugged builds.
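What such a dynamic check could look like, sketched with made-up member names (dims_ and dimensions() are assumptions, not your actual Field):

```cpp
#include <cassert>

class Field
{
public:
    explicit Field(int dims) : dims_(dims)
    {
        // Dynamic counterpart of the static_assert: fires in debug builds
        // and compiles away entirely under -DNDEBUG.
        assert(dims >= 1 && dims <= 3 && "Field supports 1 to 3 dimensions");
    }

    int dimensions() const noexcept { return dims_; }

private:
    int dims_;
};
```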
It also irks me a little that there are always three Array members inside Field only one of which will be valid at any time. You could consider using a union or boost::variant to reduce the size of the Field.
What I would not do is use polymorphism through inheritance and virtual functions. It seems unlikely to me that it would make the code easier to use, and it would probably add the most overhead.
A final thought would be to always use an Array<double, 3> but set the first dimension to 1 if the data is actually only 2-dimensional and likewise for 1-dimensional data.