Laptop RTX 3060 GPU vs Google Colaboratory comparison
When you start down the path of machine learning and deep learning, several questions come to mind. One of them: do I need to buy a PC or laptop with an expensive GPU, or can I use a free option such as the free tier of Google Colaboratory? Will there be any performance impact?
Let's look at some stats comparing Google Colaboratory against a laptop GPU. In this test a laptop with an NVIDIA GeForce RTX 3060 was used. Its GPU has 6GB of VRAM, which is half that of its desktop counterpart. The laptop also has an AMD Ryzen 7 5800H CPU and 16GB of RAM. The test environment ran TensorFlow 2.0 with GPU support.
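Before timing anything, it's worth confirming that TensorFlow actually sees the GPU; otherwise training silently falls back to the CPU. A minimal check, assuming a TensorFlow 2.x install with matching CUDA/cuDNN libraries:

import tensorflow as tf

# Should print at least one GPU device if TensorFlow can see the RTX 3060.
# Note: on TF 2.0 specifically, the same call lives under tf.config.experimental.
print(tf.config.list_physical_devices('GPU'))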
The following results were obtained from the test.
On the laptop, the test took a total of 1 min 27 s.
Now let's check the Google Colaboratory results.
On Colaboratory, the test took a total of 2 min 26 s.
As you can see, the laptop GPU is significantly faster, which is not surprising when you consider that the GPU instance provided by Google Colaboratory is a Tesla K80.
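If you want to confirm which GPU your Colaboratory session was assigned (it can vary between sessions), you can run nvidia-smi from a notebook cell:

# The leading '!' runs a shell command from the notebook;
# the output shows the GPU model, driver version, and memory.
!nvidia-smi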
So you may conclude that spending on a PC or laptop with a GPU is worthwhile, but there are some drawbacks. As you advance in the field, the models you train may grow, and 6GB or 12GB of VRAM may no longer be enough. A cloud instance is useful in such scenarios because it is easily scalable, but you may need to pay for it based on usage.
A free cloud instance may also get disconnected at times, and tests won't keep running once you close the browser tab.
So if you have some money to spend, it's handy to have a machine with a GPU: you can prototype on a small training set locally and finalize everything in a larger cloud environment.
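One simple way to support that workflow is Keras' standard save/load API: train a prototype locally, save it, and reload it in the cloud. A minimal sketch, where the file name is just an example and model stands for any compiled Keras model:

# Locally: persist the trained model to a single HDF5 file
model.save('fashion_mnist_prototype.h5')

# In the cloud environment: reload it and continue training at full scale
from tensorflow import keras
model = keras.models.load_model('fashion_mnist_prototype.h5')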
The following code was used to conduct the test, using the fashion_mnist dataset from Keras.
# load the Fashion-MNIST dataset from Keras
import tensorflow as tf
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
print(train_images.shape)
print(train_labels[0])
# checking images
import matplotlib.pyplot as plt

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.imshow(train_images[0])
class_names[train_labels[0]]
# scaling
train_images_scaled = train_images / 255.0
test_images_scaled = test_images / 255.0
def get_model(hidden_layers=1):
    # Flatten layer for input
    layers = [keras.layers.Flatten(input_shape=(28, 28))]
    # hidden layers
    for i in range(hidden_layers):
        layers.append(keras.layers.Dense(500, activation='relu'))
    # output layer
    layers.append(keras.layers.Dense(10, activation='sigmoid'))

    model = keras.Sequential(layers)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
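Before timing the run, you can sanity-check what get_model builds by printing a summary; for example, with the same five hidden layers used in the benchmark:

# build the benchmark configuration and inspect layer shapes and parameter counts
get_model(hidden_layers=5).summary()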
%%timeit -n1 -r1
with tf.device('/GPU:0'):
gpu_model = get_model(hidden_layers=5)
gpu_model.fit(train_images_scaled, train_labels, epochs=10)
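For reference, the same run can be pinned to the CPU with an analogous cell. This CPU run was not part of the timings above; it's just an easy way to extend the comparison:

%%timeit -n1 -r1
with tf.device('/CPU:0'):
    # identical model and training setup, forced onto the CPU
    cpu_model = get_model(hidden_layers=5)
    cpu_model.fit(train_images_scaled, train_labels, epochs=10)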