
Calculating vs. lookup tables for sine value performance

Published 2020-02-17 09:23

Question:

Let's say you had to calculate the sine (cosine or tangent - whatever) where the domain is between 0.01 and 360.01. (using C#)

What would be more performant?

  1. Using Math.Sin
  2. Using a lookup array with precalculated values

I would anticipate that, given the domain, option 2 would be much faster. At what point in the precision of the domain (0.0000n) does the performance of the calculation exceed that of the lookup?

Answer 1:

Update: read through to the end. It looks like the lookup table is faster than Math.Sin after all.

I would guess that the lookup approach would be faster than Math.Sin. I would also say that it would be a lot faster, but Robert's answer made me think that I would still want to benchmark this to be sure. I do a lot of audio buffer processing, and I've noticed that a method like this:

for (int i = 0; i < audiodata.Length; i++)
{
    audiodata[i] *= 0.5; 
}

will execute significantly faster than

for (int i = 0; i < audiodata.Length; i++)
{
    audiodata[i] = Math.Sin(audiodata[i]);
}

If the difference between Math.Sin and a simple multiplication is substantial, I would guess that the difference between Math.Sin and a lookup would also be substantial.

I dunno, though, and my computer with Visual Studio is in the basement, and I'm too tired to take the 2 minutes it would take to determine this.

Update: OK, it took more than 2 minutes (more like 20) to test this, but it looks like Math.Sin is at least twice as fast as a lookup table (using a Dictionary). Here's the class that does Sin using Math.Sin or a lookup table:

public class SinBuddy
{
    private Dictionary<double, double> _cachedSins
        = new Dictionary<double, double>();
    private const double _cacheStep = 0.01;
    private double _factor = Math.PI / 180.0;

    public SinBuddy()
    {
        for (double angleDegrees = 0; angleDegrees <= 360.0; 
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _cachedSins.Add(angleDegrees, Math.Sin(angleRadians));
        }
    }

    public double CacheStep
    {
        get
        {
            return _cacheStep;
        }
    }

    public double SinLookup(double angleDegrees)
    {
        double value;
        if (_cachedSins.TryGetValue(angleDegrees, out value))
        {
            return value;
        }
        else
        {
            throw new ArgumentException(
                String.Format("No cached Sin value for {0} degrees",
                angleDegrees));
        }
    }

    public double Sin(double angleDegrees)
    {
        double angleRadians = angleDegrees * _factor;
        return Math.Sin(angleRadians);
    }
}

And here's the test/timing code:

SinBuddy buddy = new SinBuddy();

System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
int loops = 200;

// Math.Sin
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0; 
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.Sin(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// lookup
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinLookup(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

Using a step value of 0.01 degrees and looping through the full range of values 200 times (as in this code) takes about 1.4 seconds using Math.Sin, and about 3.2 seconds using a Dictionary lookup table. Lowering the step value to 0.001 or 0.0001 makes the lookup perform even worse against Math.Sin. Also, this result is even more in favor of using Math.Sin, since SinBuddy.Sin does a multiplication to turn the angle in degrees into the angle in radians on every call, while SinBuddy.SinLookup just does a straight lookup.

This is on a cheap laptop (no dual cores or anything fancy). Robert, you da man! (But I still think I should get the check, coz I did the work).

Update 2: OK, I am an idiot... It turns out stopping and restarting the Stopwatch doesn't reset the elapsed milliseconds, so the lookup only seemed half as fast because its time included the time for the Math.Sin calls. Also, I reread the question and realized you were talking about caching the values in a simple array, rather than using a Dictionary. Here is my modified code (I'm leaving the old code up as a warning to future generations):

public class SinBuddy
{
    private Dictionary<double, double> _cachedSins
        = new Dictionary<double, double>();
    private const double _cacheStep = 0.01;
    private double _factor = Math.PI / 180.0;

    private double[] _arrayedSins;

    public SinBuddy()
    {
        // set up dictionary
        for (double angleDegrees = 0; angleDegrees <= 360.0; 
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _cachedSins.Add(angleDegrees, Math.Sin(angleRadians));
        }

        // set up array
        int elements = (int)(360.0 / _cacheStep) + 1;
        _arrayedSins = new double[elements];
        int i = 0;
        for (double angleDegrees = 0; angleDegrees <= 360.0;
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _arrayedSins[i] = Math.Sin(angleRadians);
            i++;
        }
    }

    public double CacheStep
    {
        get
        {
            return _cacheStep;
        }
    }

    public double SinArrayed(double angleDegrees)
    {
        int index = (int)(angleDegrees / _cacheStep);
        return _arrayedSins[index];
    }

    public double SinLookup(double angleDegrees)
    {
        double value;
        if (_cachedSins.TryGetValue(angleDegrees, out value))
        {
            return value;
        }
        else
        {
            throw new ArgumentException(
                String.Format("No cached Sin value for {0} degrees",
                angleDegrees));
        }
    }

    public double Sin(double angleDegrees)
    {
        double angleRadians = angleDegrees * _factor;
        return Math.Sin(angleRadians);
    }
}

And the test/timing code:

SinBuddy buddy = new SinBuddy();

System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
int loops = 200;

// Math.Sin
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0; 
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.Sin(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// lookup
timer = new System.Diagnostics.Stopwatch();
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinLookup(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// arrayed
timer = new System.Diagnostics.Stopwatch();
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinArrayed(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

These results are quite different. Using Math.Sin takes about 850 milliseconds, the Dictionary lookup table takes about 1300 milliseconds, and the array-based lookup table takes about 600 milliseconds. So it appears that a (properly-written [gulp]) lookup table is actually a bit faster than using Math.Sin, but not by much.

Please verify these results yourself, since I have already demonstrated my incompetence.
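A side note that is not part of the original answer: the double-counting happened because a stopped Stopwatch resumes accumulating when Start is called again. Creating a new Stopwatch (as in the corrected code above) works, but calling Restart, or Reset followed by Start, does the same thing. A minimal sketch:

var timer = new System.Diagnostics.Stopwatch();

timer.Start();
// ... first timed section (e.g. the Math.Sin loop) ...
timer.Stop();
Console.WriteLine(timer.ElapsedMilliseconds);

timer.Restart();   // Restart = Reset + Start; clears the elapsed time
// ... second timed section (e.g. the lookup loop) ...
timer.Stop();
Console.WriteLine(timer.ElapsedMilliseconds);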



Answer 2:

It used to be that an array lookup was a good optimization to perform fast trig calculations.

But with modern CPU caches, built-in math coprocessors (which use table lookups themselves), and other performance improvements, it is best to time your specific code yourself to determine which will perform better.
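For example, one way to do that timing today is a small BenchmarkDotNet harness. This sketch is not part of the original answer; the table granularity (0.01 degrees) and the test angle are arbitrary, and the index is rounded to avoid floating-point truncation:

using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class SinBenchmarks
{
    // precomputed table at 0.01-degree steps over 0..360 degrees
    private readonly double[] _table = new double[36001];
    private double _angleDegrees = 123.45;   // arbitrary test angle

    public SinBenchmarks()
    {
        for (int i = 0; i < _table.Length; i++)
            _table[i] = Math.Sin(i * 0.01 * Math.PI / 180.0);
    }

    [Benchmark(Baseline = true)]
    public double MathSin() => Math.Sin(_angleDegrees * Math.PI / 180.0);

    [Benchmark]
    public double TableLookup() => _table[(int)Math.Round(_angleDegrees / 0.01)];
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<SinBenchmarks>();
}

Run it in a Release build; BenchmarkDotNet handles warm-up and statistical noise that a hand-rolled Stopwatch loop does not.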



Answer 3:

For performance questions, the only right answer is the one you reach after testing. But, before you test, you need to determine whether the effort of the test is worth your time - meaning that you've identified a performance issue.

If you're just curious, you can easily write a test to compare the speeds. However, you'll need to remember that using memory for the lookup table can affect paging in larger apps. So, even if the lookup is faster in your small test, it could slow things down in a larger app that uses more memory.



Answer 4:

Since you mention Fourier transforms as an application, you might also consider computing your sines/cosines using the identities

sin(x+y) = sin(x)cos(y) + cos(x)sin(y)

cos(x+y) = cos(x)cos(y) - sin(x)sin(y)

That is, you can compute sin(n * x), cos(n * x) for n = 0, 1, 2, ... iteratively from sin((n-1) * x), cos((n-1) * x) and the constants sin(x), cos(x), using four multiplications per step. Of course, that only works if you have to evaluate sin and cos on an arithmetic sequence.
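A small sketch of that recurrence, not from the original answer (the step size and count are arbitrary example values):

// Generate sin(n*x) and cos(n*x) for n = 0, 1, 2, ... using the
// angle-addition identities; four multiplications per step.
double x = 2.0 * Math.PI / 1024.0;             // step of the arithmetic sequence
double sinX = Math.Sin(x), cosX = Math.Cos(x); // computed once

double s = 0.0, c = 1.0;                       // sin(0), cos(0)
for (int n = 0; n < 1024; n++)
{
    // here s == sin(n*x) and c == cos(n*x); use them as needed

    double sNext = s * cosX + c * sinX;        // sin((n+1)*x)
    double cNext = c * cosX - s * sinX;        // cos((n+1)*x)
    s = sNext;
    c = cNext;
}

Note that rounding error accumulates slowly over the sequence, so for long runs it is common to re-seed occasionally with a direct Math.Sin/Math.Cos call.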

Comparing the approaches without the actual implementation is difficult. It depends a lot on how well your tables fit into the caches.



Answer 5:

The answer to this depends entirely on how many values are in your lookup table. You say "the domain is between 0.01 and 360.01", but you don't say how many values in that range might be used, or how accurate you need the answers to be. Forgive me for not expecting to see significant digits used to convey implicit meaning in a non-scientific context.

More information is still needed to answer this question. What is the expected distribution of values between 0.01 and 360.01? Are you processing a lot of data other than the simple sin( ) computation?

36,000 double-precision values take about 281 KB of memory (36,000 × 8 bytes), so the lookup table is too large to fit in the L1 cache on most machines; if you're running straight through the table, you'll miss L1 once per sizeof(cacheline)/sizeof(double) accesses, and probably hit L2. If, on the other hand, your table accesses are more or less random, you will be missing L1 almost every time you do a lookup.

It also depends a lot on the math library of the platform that you're on. Common i386 implementations of the sin function, for example, range from ~40 cycles up to 400 cycles or even more, depending on your exact microarchitecture and library vendor. I haven't timed the Microsoft library, so I don't know exactly where the C# Math.Sin implementation would fall.

Since loads from L2 are generally faster than 40 cycles on a sane platform, one reasonably expects the lookup table to be faster considered in isolation. However, I doubt you're computing sin( ) in isolation; if your arguments to sin( ) jump all over the table, you will be blowing other data needed for other steps of your computation out of the cache; although the sin( ) computation gets faster, the slowdown to other parts of your computation may more than outweigh the speedup. Only careful measurement can really answer this question.

Am I to understand from your other comments that you're doing this as part of an FFT computation? Is there a reason that you need to roll your own FFT instead of using one of the numerous extremely high-quality implementations that already exist?



Answer 6:

Math.Sin is faster. The people who wrote it are smart: they use table lookups where those are accurate and faster, and direct computation where that is faster. And there's nothing about that domain that makes it particularly faster; the first thing most trig implementations do is reduce the argument to a favorable range anyway.
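To illustrate that last point, here is a rough sketch of argument reduction. It is not how Math.Sin is actually implemented (real libraries use far more careful reduction for very large arguments), but it shows why the input domain barely matters:

// Map an arbitrary angle in radians into [0, 2*pi) before evaluating.
static double ReduceTo2Pi(double x)
{
    double twoPi = 2.0 * Math.PI;
    double r = x % twoPi;            // remainder keeps the sign of x
    return r < 0 ? r + twoPi : r;    // shift negative remainders into range
}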



Answer 7:

As you may have thousands of values in your lookup table, what you may want to do is use a dictionary as a cache: when you calculate a value with the C# function, put it in the dictionary, so each value is only calculated once.

But, there is no reason to recalculate the same value over and over.
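A minimal sketch of that memoization, with illustrative names that are not from the answer:

using System;
using System.Collections.Generic;

public class MemoizedSin
{
    private readonly Dictionary<double, double> _cache = new Dictionary<double, double>();

    public double Sin(double angleDegrees)
    {
        double result;
        if (!_cache.TryGetValue(angleDegrees, out result))
        {
            // first time we see this angle: compute and remember it
            result = Math.Sin(angleDegrees * Math.PI / 180.0);
            _cache[angleDegrees] = result;
        }
        return result;
    }
}

Keep in mind that the Dictionary lookup was the slowest option in the timings above, and keying on a double requires the incoming angles to repeat bit-for-bit, so this only pays off when the same exact values really do recur.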