Rejection sampling
You can use rejection sampling. The method below does this by sampling in a space of one dimension lower than the original space.
- Step 1: sample x(1), x(2), ..., x(n-1) independently, each from a uniform distribution on [-1, 1].
- Step 2: if the sum S = x(1) + x(2) + ... + x(n-1) is below 0 or above 2, reject and start again at Step 1. (This is exactly the condition that x(n) = 1 - S lies in [-1, 1].)
- Step 3: compute the n-th variable as x(n) = 1 - S.
Intuition
You can view the vector (x(1), x(2), ..., x(n-1), x(n)) as a point in the interior of an n-dimensional cube with vertices at (±1, ±1, ..., ±1), so that the constraints -1 <= x(i) <= 1 are satisfied.
The additional constraint that the sum of the coordinates must equal 1 restricts the coordinates to a smaller space than the hypercube: the intersection of the cube with a hyperplane of dimension n-1.
If you do regular rejection sampling, drawing all n coordinates from a uniform distribution, you will never hit the constraint: the probability that the sampled point lies exactly on the hyperplane is zero. Therefore you consider a subspace of n-1 coordinates, where rejection sampling becomes possible.
Visually
Say you have dimension 4; then you can plot 3 of the 4 coordinates. This plot (homogeneously) fills a polyhedron. Below this is illustrated by plotting the polyhedron in slices. Each slice corresponds to a different sum S = x(1) + x(2) + x(3) and thus a different value for x(4).

Image: domain for 3 coordinates. Each colored surface relates to a different value for the 4th coordinate.
Marginal distributions
For large dimensions, rejection sampling will become less efficient because the fraction of rejections grows with the number of dimensions.
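To make this concrete, here is a short Python sketch (the helper name and the chosen dimensions are my own) that estimates the acceptance rate of Step 2 for a few dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

def acceptance_rate(n, trials=100_000):
    # Fraction of candidate draws x(1), ..., x(n-1) ~ U(-1, 1) whose sum S
    # falls in [0, 2], i.e. for which x(n) = 1 - S also lands in [-1, 1].
    S = rng.uniform(-1, 1, size=(trials, n - 1)).sum(axis=1)
    return np.mean((S >= 0) & (S <= 2))

for n in (3, 4, 6, 10):
    print(n, acceptance_rate(n))
```

For n = 3 the acceptance rate is exactly 1/2 by symmetry; for large n it decays roughly like 1/sqrt(n), since S concentrates around a normal with standard deviation sqrt((n-1)/3) while the acceptance window [0, 2] stays fixed.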
One way to 'solve' this would be to sample from the marginal distributions directly. However, these marginal distributions are somewhat tedious to compute. For comparison: a similar algorithm exists for generating samples from a Dirichlet distribution, but in that case the marginal distributions are relatively easy. (It is not impossible to derive them here, though; see 'Relationship with Irwin Hall distribution' below.)
In the example above, the marginal distribution of the x(4) coordinate corresponds to the surface area of the cuts. So for 4 dimensions you might be able to work out the computation from that figure (you would need the areas of those irregular polygons), but it gets more complicated in higher dimensions.
Relationship with Irwin Hall distribution
To get the marginal distributions you can use truncated Irwin Hall distributions. The Irwin Hall distribution is the distribution of a sum of independent uniformly distributed variables, and its density is a piecewise polynomial. This is demonstrated below for one example.
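For reference, the Irwin Hall density can be written down directly. A small Python sketch (my own helper, separate from the R functions below):

```python
from math import comb, factorial, floor

def irwin_hall_pdf(x, n):
    # Density of the sum of n independent U(0, 1) variables:
    # a piecewise polynomial of degree n - 1, supported on [0, n].
    if x < 0 or x > n:
        return 0.0
    return sum((-1)**k * comb(n, k) * (x - k)**(n - 1)
               for k in range(floor(x) + 1)) / factorial(n - 1)

print(irwin_hall_pdf(1.5, 3))  # peak of the n = 3 density: 0.75
```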
Code
Since my Python is rusty I will mostly add R code. The algorithm is very basic, so any Python coder should be able to adapt it easily; the hard part of the question seems to be the algorithm rather than how to code it in Python.

Image: output from sampling. The four black curves are the marginal distributions for the four coordinates; the red curve is a computation based on an Irwin Hall distribution. This can be extended to a sampling method that computes directly from these distributions instead of rejection sampling.
The rejection sampling in Python
import numpy as np

def sampler(size):
    S = -1
    # a while loop that performs step 1 (sample) and step 2 (compare sum)
    while (S < 0) or (S > 2):
        x = np.random.uniform(-1, 1, size - 1)  # step 1
        S = np.sum(x)
    x = np.append(x, 1 - S)  # step 3
    return x

y = sampler(5)
print(y, np.sum(y))
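If you need many samples, the one-at-a-time loop can be vectorized. This is a sketch of one way to batch the rejection step (the function name and the oversampling factor of 2 are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sampler_batch(n, size):
    # Draw candidate (n-1)-tuples in bulk, keep those with sum S in [0, 2],
    # and repeat until `size` samples have been accepted.
    accepted = []
    need = size
    while need > 0:
        x = rng.uniform(-1, 1, size=(2 * need, n - 1))
        S = x.sum(axis=1)
        keep = x[(S >= 0) & (S <= 2)][:need]
        accepted.append(keep)
        need -= len(keep)
    x = np.concatenate(accepted)
    return np.column_stack([x, 1 - x.sum(axis=1)])  # x(n) = 1 - S

samples = sampler_batch(4, 1000)  # 1000 samples, each row summing to 1
```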
Some more code in R, including the comparison with the Irwin Hall distribution. This distribution can be used to compute the marginal distributions and to devise an algorithm that is more efficient than rejection sampling.
### function to do rejection sampling
samp <- function(n) {
  S <- -1
  ## a while loop that performs step 1 (sample) and step 2 (compare sum)
  while ((S < 0) || (S > 2)) {
    x <- runif(n - 1, -1, 1)
    S <- sum(x)
  }
  x <- c(x, 1 - S)  ## step 3 (generate n-th coordinate)
  x
}
### compute 10^5 samples
y <- replicate(10^5,samp(4))
### plot histograms
h1 <- hist(y[1,], breaks = seq(-1,1,0.05))
h2 <- hist(y[2,], breaks = seq(-1,1,0.05))
h3 <- hist(y[3,], breaks = seq(-1,1,0.05))
h4 <- hist(y[4,], breaks = seq(-1,1,0.05))
### histograms together in a line plot
plot(h1$mids, h1$density, type = 'l', ylim = c(0,1),
     xlab = "x[i]", ylab = "frequency", main = "marginal distributions")
lines(h2$mids,h2$density)
lines(h3$mids,h3$density)
lines(h4$mids,h4$density)
### add distribution based on Irwin Hall distribution
### Irwin Hall PDF
dih <- function(x, n = 3) {
  k <- 0:floor(x)
  terms <- (-1)^k * choose(n, k) * (x - k)^(n - 1)
  sum(terms) / prod(1:(n - 1))
}
dih <- Vectorize(dih)
### Irwin Hall CDF
pih <- function(x, n = 3) {
  k <- 0:floor(x)
  terms <- (-1)^k * choose(n, k) * (x - k)^n
  sum(terms) / prod(1:n)
}
pih <- Vectorize(pih)
### adding the line
### (note we need to scale the variable for the Irwin Hall distribution)
xn <- seq(-1,1,0.001)
range <- c(-1,1)
cum <- pih(1.5+(1-range)/2,3)
scale <- 0.5/(cum[1]-cum[2]) ### renormalize
### (the factor 0.5 is due to the scale difference)
lines(xn,scale*dih(1.5+(1-xn)/2,3),col = 2)