I know how to activate the GPU in the runtime type, but I'm used to doing machine learning with sklearn or XGBoost, which make use of the GPU automatically. Now I've written my own machine learning algorithm, and I don't know how to force it to do its computations on the GPU. I need the extra RAM from the GPU runtime type, but I don't know how to benefit from the speed of the GPU... This is what I tried:
@jit(target="cuda")
popsize = 1000
File "<ipython-input-82-7cb543a75250>", line 2
popsize = 1000
^
SyntaxError: invalid syntax
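From reading the Numba docs, I think part of my problem is that the decorator has to sit on a function definition, not on a bare statement like popsize = 1000, and that newer Numba versions expose the GPU path through @cuda.jit rather than @jit(target="cuda"). Here is a minimal sketch of the pattern I believe I'm supposed to follow (add_one and the sizes below are just placeholders, not my actual algorithm):

import numpy as np
from numba import cuda

@cuda.jit
def add_one(arr):
    # each GPU thread handles one element of the array
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1.0

popsize = 1000                                    # plain Python setup stays undecorated
population = np.zeros(popsize, dtype=np.float32)

threads_per_block = 128
blocks = (popsize + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](population)    # launch the kernel on the GPU

My understanding is that Numba copies the NumPy array to the GPU for the kernel launch and copies it back afterwards, so only the decorated function itself actually runs on the GPU. Is that the right way to move my own algorithm's computations onto the GPU, or am I missing something?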