Sunday, February 26, 2017

tensorflow cifar10

https://www.tensorflow.org/tutorials/deep_cnn

The first full training run is very slow:
On a GTX 970, ~33 hours.
On a GTX 860M, ~40 hours.
On a Raspberry Pi 3, ~200 hours.

I decided to train on the GTX 860M and will post an update after the ~40 hours. (2017-02-26, 9 AM)

Alternatively, I am thinking of distributing the training across multiple GTX 860M cards on multiple PCs; I have three of them. A rough sketch follows the link below.
https://www.tensorflow.org/deploy/distributed
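
I have not tried this yet; the following is only a rough sketch, assuming the TF 1.x between-graph pattern from the distributed TensorFlow guide. The addresses, ports, and task indices are placeholders (here all on localhost); in practice each of the three PCs would run the script with its own LAN address and task_index.

import tensorflow as tf

# Placeholder cluster: one parameter server and three workers.
# In a real run these would be the addresses of the three PCs.
cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224", "localhost:2225"],
})

# Each machine starts its own server with its own job_name / task_index,
# e.g. the first worker uses job_name="worker", task_index=0.
job_name, task_index = "worker", 0
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()
else:
    # Pin variables to the parameter server and ops to this worker.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        # ... build the CIFAR-10 graph here (images, logits, loss, train_op) ...
        pass

    # The chief worker (task 0) initializes variables; the others wait for it.
    # with tf.train.MonitoredTrainingSession(master=server.target,
    #                                        is_chief=(task_index == 0)) as sess:
    #     while not sess.should_stop():
    #         sess.run(train_op)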

cifar10_image.py: a short script that displays the 32x32 images from a CIFAR-10 .bin batch file, enlarged 8x.
import numpy as np
import PIL.Image

# Display one 3073-byte record as an 8x-enlarged (256x256) image.
def DisplayPicture(byte, fmt='jpeg'):
    print(byte[0])  # first byte is the class label (0-9)
    a = np.frombuffer(byte[1:3073], dtype=np.uint8)
    b = np.reshape(a, (3, 1024))    # rows are the R, G, B planes
    c = b.T                         # (1024, 3): one RGB triple per pixel
    d = np.concatenate([c, c, c, c, c, c, c, c], axis=1)  # repeat each pixel 8x horizontally
    e = np.reshape(d, [32, 32*3*8])                       # 32 rows of 256 RGB pixels
    f = np.concatenate([e, e, e, e, e, e, e, e], axis=1)  # repeat each row 8x vertically
    g = np.reshape(f, [1, 3072*8*8])
    im = PIL.Image.frombytes('RGB', (32*8, 32*8), g.tobytes())
    im.show()

# Open the batch file and read one record at a time.
# Record format: 1 byte label, 1024 bytes red, 1024 bytes green, 1024 bytes blue.
# Each plane is 32 x 32, row-major.
f = open('data_batch_1.bin', 'rb')
byte = f.read(3073)
while len(byte) == 3073:
    DisplayPicture(byte)
    byte = f.read(3073)
f.close()
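
For comparison, here is an alternative sketch of my own (not from the tutorial) that decodes the same 3073-byte record with a single reshape/transpose and lets PIL do the 8x enlargement. It assumes Python 3.

import numpy as np
import PIL.Image

def display_record(record):
    label = record[0]                                  # 1-byte class label (0-9)
    pixels = np.frombuffer(record, dtype=np.uint8, count=3072, offset=1)
    # (3, 32, 32) channel planes -> (32, 32, 3) interleaved RGB
    img = pixels.reshape(3, 32, 32).transpose(1, 2, 0).copy()
    print(label)
    im = PIL.Image.fromarray(img, 'RGB')
    im.resize((256, 256), PIL.Image.NEAREST).show()    # blocky 8x upscale

with open('data_batch_1.bin', 'rb') as f:
    record = f.read(3073)
    while len(record) == 3073:
        display_record(record)
        record = f.read(3073)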




Further reading:
https://www.cs.toronto.edu/~kriz/cifar.html
http://groups.csail.mit.edu/vision/TinyImages/
