I am trying to plot asymmetric error bars that are really 95% confidence intervals. The output I get is not the desired outcome, and I am not sure which part of the code is causing the problem.
import numpy as np
import matplotlib.pyplot as plt
x = (18, 20, 22, 24, 26, 28, 30, 32, 34)
apo_average = (1933.877, 1954.596, 2058.192, 2244.664, 2265.383, 2265.383, 2306.821, 2534.731, 2576.169)
std_apo = (35.88652754, 0, 179.4326365, 35.88652754, 0, 0, 35.88652754, 35.88652696, 0)
error = np.array(apo_average)
# 95% confidence bounds: mean -/+ t(0.975, df=2) * std / sqrt(n), with n = 3
lower_error_apo = error - ((4.303 * np.array(std_apo)) / np.sqrt(3))
higher_error_apo = error + ((4.303 * np.array(std_apo)) / np.sqrt(3))
asymmetric_error_apo = [lower_error_apo, higher_error_apo]
fig = plt.figure()
ax = fig.add_subplot(111)
plt.scatter(x, apo_average, marker='o', label="0 Cu", color='none', edgecolor='blue', linewidth=1)
ax.errorbar(x, apo_average, yerr=asymmetric_error_apo, markerfacecolor='blue', markeredgecolor='blue')
plt.show()
This is quite unexpected. For instance, I intended the lower end of the first error bar to be 1844.723, which doesn't agree with what's shown in the picture. The same discrepancy appears for every error bar.
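For reference, a minimal sketch of how I arrived at 1844.723 for the first point (assuming n = 3 replicates and the two-sided t critical value 4.303 for 2 degrees of freedom; the variable names here are just for illustration):

import numpy as np
mean_first = 1933.877                          # apo_average[0]
std_first = 35.88652754                        # std_apo[0]
half_width = 4.303 * std_first / np.sqrt(3)    # half-width of the 95% CI for n = 3
print(mean_first - half_width)                 # prints roughly 1844.723

So 1844.723 is where I expect the bottom of the first error bar to sit.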
