Hi,
I have a dataset that I multiply elementwise with another dataset. The
number of multiplications is >5000 but constant.
The computing time varies (~0.1-0.2 s) between datasets, even though
they are all the same size. What is the reason for this variation? Is
it because of the zeros in the dataset, i.e. is multiplication by zero
faster than any other multiplication, so the more zeros the faster?
Or is it maybe a memory problem?
Thanks,
Peter Vermeer
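One way to check the zero hypothesis is to time the two cases directly. The original post doesn't say which language or environment is in use, so the sketch below is a hypothetical Python harness (the function name `time_multiply`, the 50% zero fraction, and the dataset size are illustrative assumptions, not from the post):

```python
import time
import random

def time_multiply(a, b, repeats=10):
    # Time elementwise multiplication of two equal-length lists,
    # keeping the minimum wall-clock time over several runs to
    # reduce noise from other processes.
    assert len(a) == len(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = [x * y for x, y in zip(a, b)]
        best = min(best, time.perf_counter() - t0)
    return best

n = 5000  # roughly the number of multiplications mentioned in the post
random.seed(0)
other = [random.random() for _ in range(n)]
dense = [random.random() for _ in range(n)]                      # no zeros
sparse = [0.0 if i % 2 else random.random() for i in range(n)]   # half zeros

t_dense = time_multiply(dense, other)
t_sparse = time_multiply(sparse, other)
print(f"dense (no zeros):  {t_dense:.6f} s")
print(f"sparse (50% zeros): {t_sparse:.6f} s")
```

If the two timings come out essentially equal, the zeros are not the cause and the variation more likely comes from memory or caching effects, or from other load on the machine.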