
I'm trying to train a neural network that I wrote, but it seems that Colab is not recognizing the GTX 1050 on my laptop. I can't use their cloud GPUs for this task, because I run into memory constraints.

print(cuda.is_available())

is returning False
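For context, `cuda.is_available()` matches PyTorch's `torch.cuda` API, so a minimal self-contained version of the check (a sketch, assuming PyTorch is the framework in use) would be:

```python
import importlib.util

# Hedged check: probe for torch before importing it, so this also
# runs in environments where PyTorch is not installed.
if importlib.util.find_spec("torch") is not None:
    import torch
    # On a hosted Colab runtime this reflects Google's VM,
    # not the laptop's GTX 1050.
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # Name of the first visible GPU, e.g. "GeForce GTX 1050"
        print("Device:", torch.cuda.get_device_name(0))
else:
    print("PyTorch is not installed in this environment")
```

Note that a hosted runtime can never see a local GPU: the notebook code executes on Google's server, so the laptop's GTX 1050 is only visible through a local runtime.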

  • Since you did not mention it in the question, I have to ask: have you configured local runtimes (research.google.com/colaboratory/local-runtimes.html)? Did it report any error? Commented Aug 8, 2019 at 0:22
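The local-runtimes page linked in the comment boils down to roughly the following commands (a sketch based on Google's instructions, assuming a local Jupyter installation; the package and flag names come from that page, not from this thread):

```shell
# Install and enable the extension that lets Colab talk to a local
# Jupyter server over WebSockets
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start the server so Colab can connect to it; paste the URL it
# prints into Colab's "Connect to local runtime" dialog
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0
```

With a local runtime connected, the notebook executes on the laptop itself, so PyTorch can see the GTX 1050 (provided the CUDA-enabled build of PyTorch and the NVIDIA driver are installed locally).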

1 Answer


Indeed, you have to select the runtime accelerator to use GPUs or TPUs. Go to Runtime, then Change runtime type, as in the picture:

[screenshot: Runtime menu with "Change runtime type" highlighted]

Then set the hardware accelerator to GPU (it takes a few seconds to connect):

[screenshot: hardware accelerator set to GPU]


