Compiler Performance with Compile-Time Array?

Andrew Tomazos

The following C++11 program (with N=1000000) causes GCC 4.7 to run out of memory and/or crash at compile time. For N=10000 it works fine.

It looks like it is using O(N^2) memory somehow; however, there should be only O(N log N) template instantiations and template parameters in total. I am curious where the extra memory usage comes from.
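(The estimate comes from the shape of the recursion: the S<n> template below splits its pack in half, so at every depth of the recursion tree the index packs partition the full range and sum to about N parameters, across roughly log2 N depths. As a sketch:)

```latex
% At each of the ~ceil(log2 N) depths of the S<n> recursion, the index
% packs partition {0, ..., N-1}, contributing about N template parameters:
\sum_{k=0}^{\lceil \log_2 N \rceil - 1} N \;=\; N \lceil \log_2 N \rceil \;=\; O(N \log N)
```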

The purpose of the program is to initialize a std::array at compile time with a constexpr function of each element's index, i.e.:

array x = { f(0), f(1), f(2), ..., f(N-1) }

such that the calculated data is stored in the application image static data (.rodata), in the same way as a C array is when given an initializer list.
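(For comparison, the same goal can be written directly with C++14's std::make_index_sequence; this is just a sketch of the intended pattern and was not available in the C++11/GCC 4.7 setting of the question:)

```cpp
#include <array>
#include <cstddef>
#include <utility>

constexpr int f(int x) { return x * 2; }

// Expand an index pack I... into { f(0), f(1), ..., f(N-1) }.
template <std::size_t... I>
constexpr std::array<int, sizeof...(I)> make_f_array(std::index_sequence<I...>) {
    return {{ f(static_cast<int>(I))... }};
}

template <std::size_t N>
constexpr std::array<int, N> make_f_array() {
    return make_f_array(std::make_index_sequence<N>{});
}

// Small N for illustration; the question uses N = 1000000.
constexpr auto X8 = make_f_array<8>();
static_assert(X8[3] == 6, "X8[k] == f(k)");
```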

===== CUT HERE =====

#include <iostream>
#include <array>
using namespace std;

constexpr int N = 1000000;
constexpr int f(int x) { return x*2; }

typedef array<int, N> A;

// F carries an index pack and materializes the array { f(0), f(1), ... }.
template<int... i> struct F {
    static constexpr A f() { return A{{ ::f(i)... }}; }
};

// C concatenates two packs, shifting the right pack by the left pack's size.
template<class L, class R> struct C {};
template<int... i, int... j>
struct C<F<i...>, F<j...>> : F<i..., (sizeof...(i)+j)...> {
    using T = F<i..., (sizeof...(i)+j)...>;
};

// S<n>::T is F<0, 1, ..., n-1>, built by divide and conquer.
template<int n> struct S : C<typename S<n/2>::T, typename S<n-n/2>::T> {};
template<> struct S<1> : F<0> { using T = F<0>; };

constexpr auto X = S<N>::f();

int main()
{
    cout << X[3] << endl;
}

===== CUT HERE =====
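(As a sanity check that the recursion builds the pack it should, here is a minimal standalone sketch that reproduces F, C, and S from the program above with the array member stripped out, and verifies S<n>::T at compile time for small n:)

```cpp
#include <type_traits>

template<int... i> struct F {};  // pack carrier; the f() member is omitted here

template<class L, class R> struct C {};
// Concatenate: left pack unchanged, right pack shifted by the left pack's size.
template<int... i, int... j>
struct C<F<i...>, F<j...>> : F<i..., (sizeof...(i) + j)...> {
    using T = F<i..., (sizeof...(i) + j)...>;
};

// Divide and conquer: S<n>::T is F<0, 1, ..., n-1>.
template<int n> struct S : C<typename S<n/2>::T, typename S<n - n/2>::T> {};
template<> struct S<1> : F<0> { using T = F<0>; };

static_assert(std::is_same<S<1>::T, F<0>>::value, "base case");
static_assert(std::is_same<S<2>::T, F<0, 1>>::value, "even split");
static_assert(std::is_same<S<4>::T, F<0, 1, 2, 3>>::value, "power of two");
static_assert(std::is_same<S<5>::T, F<0, 1, 2, 3, 4>>::value, "odd split");
```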

Any thoughts appreciated.

Regards,
Andrew.
 

Melzzzzz

The following C++11 program (with N=1000000) causes GCC 4.7 to run
out of memory and/or crash at compile time. For N=10000 it works
fine.

For N=100000 it goes out of memory too (I have 16 GB of RAM and 5 GB of swap ;)

I don't know what this is supposed to do, but it expands beyond
16 GB of RAM even if I put n/2, n/2 and N = 100000.
I have a feeling that this expands even faster than n^2.
 

Melzzzzz


Seems that it is a g++ thing; clang 3.1 compiles fine even
with N=1000000 (takes around 2 GB).
 
