I have two nested for loops which mostly run over large data. I want to optimise this and improve the speed as much as possible.
source = [['row1', 'row2', 'row3'],['Product', 'Cost', 'Quantity'],['Test17', '3216', '17'], ['Test18' , '3217' , '18' ], ['Test19', '3218', '19' ], ['Test20', '3219', '20']]
import operator

# create an iterator over the rows
it = iter(source)
variables = ['row2', 'row3']
variables_indices = [1, 2]
key_indices = [0]  # assumed: key_indices is not defined in the original snippet
getkey = rowgetter(*key_indices)

# the loop must live inside a generator function for yield to work
def itertuples():
    for row in it:
        k = getkey(row)
        # unpack both the variable name and its column index
        for v, i in zip(variables, variables_indices):
            try:
                o = list(k)
                o.append(v)
                o.append(row[i])
                yield tuple(o)
            except IndexError:
                pass
def rowgetter(*indices):
    if len(indices) == 0:
        return lambda row: tuple()
    elif len(indices) == 1:
        index = indices[0]
        return lambda row: (row[index],)
    else:
        return operator.itemgetter(*indices)
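One way to speed this up (a sketch, not a drop-in replacement: `iter_melt` and its parameters are my names) is to hoist all loop-invariant work out of the row loop, pre-build the (variable, index) pairs once, and concatenate tuples directly instead of building an intermediate list on every yield:

```python
import operator

def iter_melt(source, key_indices, variables, variables_indices):
    # build the key extractor once, outside the row loop
    if len(key_indices) == 1:
        idx = key_indices[0]
        getkey = lambda row: (row[idx],)
    else:
        getkey = operator.itemgetter(*key_indices)
    # pair each variable name with its column index once, up front
    pairs = list(zip(variables, variables_indices))
    for row in source:
        k = getkey(row)
        for v, i in pairs:
            try:
                # tuple concatenation avoids the list -> append -> tuple dance
                yield k + (v, row[i])
            except IndexError:
                pass
```

Most of the savings come from not rebuilding the zip and the intermediate list on every iteration; the per-yield work shrinks to one tuple concatenation.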
This returns a tuple per iteration, but it is slow: around 100 seconds on average for 100,000 rows (the source above is just a small example). Can anyone help reduce this time?
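To check whether a change actually helps, a quick harness with the standard timeit module may be useful (`melt` here is a hypothetical stand-in for whichever generator is being measured):

```python
import timeit

def melt(source):
    # stand-in generator: replace with the function being measured
    for row in source:
        for i in (1, 2):
            yield (row[0], row[i])

# synthetic data shaped like the example, at the problem size
source = [['Test%d' % n, str(3200 + n), str(n)] for n in range(100000)]
elapsed = timeit.timeit(lambda: sum(1 for _ in melt(source)), number=1)
print('%.3f seconds for %d rows' % (elapsed, len(source)))
```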
Note: I also tried inlining the loops as a single comprehension, but it does not yield a result for each iteration.
What I have tried:
slist = (yieldfun(getkey(row), v, row[i]) for row in it for v, i in zip(variables, variables_indices) if row)
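A generator expression can produce one tuple per iteration if the tuple is built inline rather than through a separate function; a sketch, assuming the key is column 0 and that short rows should simply be skipped:

```python
import operator

source = [['Test17', '3216', '17'], ['Test18', '3217', '18']]
variables = ['row2', 'row3']
variables_indices = [1, 2]
getkey = operator.itemgetter(0)

slist = (
    (getkey(row), v, row[i])
    for row in source
    for v, i in zip(variables, variables_indices)
    if i < len(row)  # length guard replaces the try/except IndexError
)
result = list(slist)
print(result)
```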