Tuesday, November 13, 2018

[TensorFlow] An explanation of averaging gradients by example in data parallelism

When studying examples of training a model on multiple GPUs (in data parallelism), an average-gradients function always appears in some form. Here is a simple version:



import tensorflow as tf

def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)

            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)

        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)

        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So .. we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
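
In a typical multi-GPU training graph (such as TensorFlow's classic CIFAR-10 multi-GPU tutorial), this function sits between per-tower gradient computation and the weight update. Below is a minimal usage sketch, not the full tutorial code; num_gpus, batches and build_loss() are placeholder names I made up for the model-construction part:

import tensorflow as tf

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

tower_grads = []
for i in range(num_gpus):
  with tf.device('/gpu:%d' % i):
    # build_loss() stands in for building one tower's model and loss.
    loss = build_loss(batches[i])
    # compute_gradients() returns a list of (gradient, variable) pairs.
    tower_grads.append(opt.compute_gradients(loss))

# Average the per-tower gradients, then apply a single update
# to the shared variables.
train_op = opt.apply_gradients(average_gradients(tower_grads))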

The purpose of this function is to take the per-GPU gradients of each trainable variable and average them. Below, I use fake data to show what average_gradients does, printing the intermediate results in detail. I hope this helps readers understand it.
At least, it works for me!

average_grads = []

# This is the fake data for tower_grads.
# We assume the model has 3 variables and training uses 4 GPUs,
# so tower_grads will look like the following list:
tower_grads = [
[('grad0_gpu0', 'var0_gpu0'), ('grad1_gpu0', 'var1_gpu0'), ('grad2_gpu0', 'var2_gpu0')],
[('grad0_gpu1', 'var0_gpu1'), ('grad1_gpu1', 'var1_gpu1'), ('grad2_gpu1', 'var2_gpu1')],
[('grad0_gpu2', 'var0_gpu2'), ('grad1_gpu2', 'var1_gpu2'), ('grad2_gpu2', 'var2_gpu2')],
[('grad0_gpu3', 'var0_gpu3'), ('grad1_gpu3', 'var1_gpu3'), ('grad2_gpu3', 'var2_gpu3')]]


for grad_and_vars in zip(*tower_grads):
  grads = []
  for g, _ in grad_and_vars:
    # The real function calls tf.expand_dims(g, 0) here to add a
    # 'tower' dimension; with string placeholders we simply collect them.
    grads.append(g)

  # The real function averages over the 'tower' dimension with
  # tf.concat + tf.reduce_mean; here we just show what would be averaged.
  grad = "Avg: " + str(grads)
  print(grad)

  # The variables are shared across towers, so take the first tower's.
  v = grad_and_vars[0][1]
  grad_and_var = (grad, v)
  average_grads.append(grad_and_var)

print(average_grads)

<<<grad>>>
Avg: ['grad0_gpu0', 'grad0_gpu1', 'grad0_gpu2', 'grad0_gpu3']
Avg: ['grad1_gpu0', 'grad1_gpu1', 'grad1_gpu2', 'grad1_gpu3']
Avg: ['grad2_gpu0', 'grad2_gpu1', 'grad2_gpu2', 'grad2_gpu3']


<<<average_grads>>>
[
(Avg: ['grad0_gpu0', 'grad0_gpu1', 'grad0_gpu2', 'grad0_gpu3'], 'var0_gpu0'),
(Avg: ['grad1_gpu0', 'grad1_gpu1', 'grad1_gpu2', 'grad1_gpu3'], 'var1_gpu0'),
(Avg: ['grad2_gpu0', 'grad2_gpu1', 'grad2_gpu2', 'grad2_gpu3'], 'var2_gpu0')
]
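
To see the actual arithmetic rather than string placeholders, you can run the same loop with numeric fake gradients, using NumPy as a stand-in for the TF ops. This is only a sketch I made up (2 variables, 3 GPUs, 2-element gradient vectors):

import numpy as np

# Numeric fake data: 2 variables, 3 GPUs.
tower_grads = [
[(np.array([1.0, 2.0]), 'var0'), (np.array([10.0, 20.0]), 'var1')],   # gpu0
[(np.array([3.0, 4.0]), 'var0'), (np.array([30.0, 40.0]), 'var1')],   # gpu1
[(np.array([5.0, 6.0]), 'var0'), (np.array([50.0, 60.0]), 'var1')]]   # gpu2

for grad_and_vars in zip(*tower_grads):
  # Same steps as the TF version: stack along a new 'tower' axis,
  # then average over that axis.
  grads = [np.expand_dims(g, 0) for g, _ in grad_and_vars]
  grad = np.mean(np.concatenate(grads, 0), 0)
  print(grad_and_vars[0][1], grad)

var0 [3. 4.]
var1 [30. 40.]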



P.S.:
Here is a simple example showing Python's built-in zip function:

accum_slots = [ "accm_g1", "accm_g2", "accm_g3", "accm_g4", "accm_g5", "accm_g6", "accm_g7"]
grads_and_vars = [ ("g1", "v1"), ("g2", "v2"), ("g3", "v3"), ("g4", "v4"), ("g5", "v5"), ("g6", "v6"), ("g7", "v7")]

for s, (g, _) in zip(accum_slots, grads_and_vars):
  print(s, g)

accm_g1 g1
accm_g2 g2
accm_g3 g3
accm_g4 g4
accm_g5 g5
accm_g6 g6
accm_g7 g7
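
Note that average_gradients actually uses the unpacking form zip(*tower_grads). Passing the rows as separate arguments makes zip behave like a transpose, which is how the per-GPU lists get regrouped into per-variable tuples. A tiny sketch:

rows = [[1, 2, 3],
        [4, 5, 6]]

# zip(*rows) passes each row as a separate argument to zip,
# so the columns come back as tuples: a transpose of the nested list.
print(list(zip(*rows)))

[(1, 4), (2, 5), (3, 6)]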
