Re: on goto

Discussion in 'C++' started by Dave Harris, May 10, 2010.

  1. Dave Harris

    Dave Harris Guest

    (Nathan Baker) wrote (abridged):
    > > Yes. It led to simple code, but that was partly because all the
    > > complexity was hidden behind an iterator. Nathan Baker's version
    > > exposes some of that complexity, and I think illustrates how hard
    > > it is to get it right.

    >
    > Perhaps the words "simple" and "complexity" have different meanings
    > to different people because of their different training and the goals
    > they've been conditioned to target??
    >
    > Juha's code uses 3 loop constructs, 4 conditionals, and 2 exit
    > points.
    >
    > Mine uses 1 loop construct, 2 conditionals, and 1 exit point.


    But your code was broken in the ways pointed out by other posters. In
    particular, it accesses data[0][0][0] even if data[0].size() is zero,
    which will give a bounds error or a crash. It also failed to return the
    address where the item was found, and was incapable of searching for
    value 0. It also did huge amounts of unnecessary work, in that the inner
    loop tested all of the conditionals. (And by my count it uses a lot more
    than 2 conditionals even as posted.) You need to fix all the bugs in your
    code before we can even consider its complexity.

    When I try to write code myself that uses your approach, but correctly
    handles all the cases the original code handles, I get a horrible mess.
    Specifically, you need to be aware that data[0][0].size() may be
    different to data[1][0].size() or data[0][1].size(), and that any of them
    may be zero meaning that (eg) data[0][0][0] does not exist and must not
    be accessed. Similarly for data[0].size() and data.size(), and data[0][0]
    and data[0].

    This seems to mean you need a loop to find the first accessible value
    before you even look at data[xInd][yInd][zInd].
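
    For what it's worth, here is a rough, untested sketch of the kind of
    thing I end up with, assuming data is a
    std::vector<std::vector<std::vector<Value_t> > >. Keeping a single
    loop and a single exit means advancing three indices by hand and
    skipping empty rows, which is exactly the bookkeeping the nested
    loops (or an iterator) would otherwise hide:

    ,---
    Value_t* MyClass::findValue(const Value_t& value)
    {
        Value_t* result = 0;
        size_t x = 0, y = 0, z = 0;
        while(result == 0 && x < data.size())
        {
            if(y >= data[x].size())          // this plane exhausted: next x
            {
                ++x; y = 0; z = 0;
            }
            else if(z >= data[x][y].size())  // this row exhausted: next y
            {
                ++y; z = 0;
            }
            else if(data[x][y][z] == value)  // safe: all three sizes checked
            {
                result = &data[x][y][z];
            }
            else
            {
                ++z;
            }
        }
        return result;
    }
    `---

    Whether that is simpler than the three nested loops is, I think, the
    real question.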

    -- Dave Harris, Nottingham, UK.
     
    Dave Harris, May 10, 2010
    #1

  2. Nathan

    Nathan Guest

    On May 10, 9:04 am, (Dave Harris) wrote:
    > (Nathan Baker) wrote (abridged):
    >
    > > > Yes. It led to simple code, but that was partly because all the
    > > > complexity was hidden behind an iterator. Nathan Baker's version
    > > > exposes some of that complexity, and I think illustrates how hard
    > > > it is to get it right.

    >
    > > Perhaps the words "simple" and "complexity" have different meanings
    > > to different people because of their different training and the goals
    > > they've been conditioned to target??

    >
    > > Juha's code uses 3 loop constructs, 4 conditionals, and 2 exit
    > > points.

    >
    > > Mine uses 1 loop construct, 2 conditionals, and 1 exit point.

    >
    > But your code was broken in the ways pointed out by other posters. In
    > particular, it accesses data[0][0][0] even if data[0].size() is zero,
    > which will give a bounds error or a crash. It also failed to return the
    > address where the item was found, and was incapable of searching for
    > value 0. It also did huge amounts of unnecessary work, in that the inner
    > loop tested all of the conditionals. (And by my count it uses a lot more
    > than 2 conditionals even as posted.) You need to fix all the bugs in your
    > code before we can even consider its complexity.
    >
    > When I try to write code myself that uses your approach, but correctly
    > handles all the cases the original code handles, I get a horrible mess.
    > Specifically, you need to be aware that data[0][0].size() may be
    > different to data[1][0].size() or data[0][1].size(), and that any of them
    > may be zero meaning that (eg) data[0][0][0] does not exist and must not
    > be accessed. Similarly for data[0].size() and data.size(), and data[0][0]
    > and data[0].
    >
    > This seems to mean you need a loop to find the first accessible value
    > before you even look at data[xInd][yInd][zInd].
    >


    Then let's just start with Juha's original code and see where some
    simple improvements can be made, shall we?

    Given, the original...

    ,---
    Value_t* MyClass::findValue(const Value_t& value)
    {
        for(size_t xInd = 0; xInd < data.size(); ++xInd)
        for(size_t yInd = 0; yInd < data[xInd].size(); ++yInd)
        for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
        {
            if(data[xInd][yInd][zInd] == value)
                return &data[xInd][yInd][zInd];
        }

        return 0;
    }
    `---

    First, we can remove the nested loops like so...

    ,---
    Value_t* MyClass::findValue(const Value_t& value)
    {
        size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
        for(size_t xInd = 0; xInd < xyzMax; ++xInd)
        {
            if(data[xInd] == value)
                return &data[xInd];
        }

        return 0;
    }
    `---

    Next, we remove the nested exit point by changing it into a signal to
    exit the loop...

    ,---
    Value_t* MyClass::findValue(const Value_t& value)
    {
        size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
        result = 0;
        for(size_t xInd = 0; xInd < xyzMax; ++xInd)
        {
            if(data[xInd] == value)
                result = &data[xInd];
            xInd = xyzMax;
        }

        return result;
    }
    `---

    Juha asked for an alternative that didn't make use of multiple exit
    points. This is one way to achieve that -- fold the inner loops
    (those indices are meaningless) and signal the end to the 'for' loop.

    So,

    3 loop constructs are reduced to 1
    4 conditionals are reduced to 2
    2 exit points are reduced to 1

    Simple, isn't it?

    Nathan.
     
    Nathan, May 10, 2010
    #2

  3. [I've trimmed the groups. Sorry if this is the wrong thing to do]

    Nathan <> writes:
    <snip>
    > Then let's just start with Juha's original code and see where some
    > simple improvements can be made, shall we?
    >
    > Given, the original...
    >
    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > for(size_t xInd = 0; xInd < data.size(); ++xInd)
    > for(size_t yInd = 0; yInd < data[xInd].size(); ++yInd)
    > for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
    > {
    > if(data[xInd][yInd][zInd] == value)
    > return &data[xInd][yInd][zInd];
    > }
    >
    > return 0;
    > }
    > `---
    >
    > First, we can remove the nested loops like so...
    >
    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();


    Did you mean to multiply here?

    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)


    You can't assume that the data is contiguous or, to be a little more
    general, that a single [] operation means the same as three of them.
    This is, after all, C++ not C.

    > return &data[xInd];
    > }
    >
    > return 0;
    > }
    > `---
    >
    > Next, we remove the nested exit point by changing it into a signal to
    > exit the loop...
    >
    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
    > result = 0;
    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)
    > result = &data[xInd];
    > xInd = xyzMax;
    > }
    >
    > return result;
    > }
    > `---
    >
    > Juha asked for an alternative that didn't make use of multiple exit
    > points. This is one way to achieve that -- fold the inner loops
    > (those indices are meaningless) and signal the end to the 'for' loop.
    >
    > So,
    >
    > 3 loop constructs are reduced to 1
    > 4 conditionals are reduced to 2


    These two are the same, in effect. It distorts the picture if you count
    the reduction in loops and then the inevitable consequences of that
    reduction.

    > 2 exit points are reduced to 1


    And you've added the potential for confusion -- maybe even future bugs.
    E.g. what, if anything, can we say about xInd after the loop?

    Rather than signal the end by setting xInd, what objection could you have
    to a 'break' statement there? How is setting xInd better?
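
    By way of illustration, and keeping the original nested vectors
    rather than the flattened index (this is untested), a 'break' plus a
    result variable gives the single exit without assigning to the loop
    variable:

    ,---
    Value_t* MyClass::findValue(const Value_t& value)
    {
        Value_t* result = 0;
        for(size_t xInd = 0; result == 0 && xInd < data.size(); ++xInd)
        for(size_t yInd = 0; result == 0 && yInd < data[xInd].size(); ++yInd)
        for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
        {
            if(data[xInd][yInd][zInd] == value)
            {
                result = &data[xInd][yInd][zInd];
                break;               // leaves only the innermost loop
            }
        }
        return result;
    }
    `---

    The break only leaves the zInd loop; the outer loops stop because
    their conditions re-check result.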

    > Simple, isn't it?


    I don't think it's correct yet so its simplicity is not really the main
    issue.

    --
    Ben.
     
    Ben Bacarisse, May 10, 2010
    #3
  4. On 10.05.2010 16:28, Nathan wrote:
    > On May 10, 9:04 am, (Dave Harris) wrote:
    >> This seems to mean you need a loop to find the first accessible value
    >> before you even look at data[xInd][yInd][zInd].
    >>

    >
    > Then let's just start with Juha's original code and see where some
    > simple improvements can be made, shall we?
    >
    > Given, the original...
    >
    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > for(size_t xInd = 0; xInd < data.size(); ++xInd)
    > for(size_t yInd = 0; yInd < data[xInd].size(); ++yInd)
    > for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
    > {
    > if(data[xInd][yInd][zInd] == value)
    > return &data[xInd][yInd][zInd];
    > }
    >
    > return 0;
    > }
    > `---


    This code iterates over a 3D matrix, so it uses 3 indices.

    > First, we can remove the nested loops like so...
    >
    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)
    > return &data[xInd];
    > }
    >
    > return 0;
    > }
    > `---


    This code applies only one index operation to data. data[xInd] still has
    2 dimensions and can't be compared to the scalar 'value'; the types
    don't match.
    Also, you don't go over x_size*y_size*z_size elements but only
    x_size+y_size+z_size of them.

    The code doesn't do the same as the other one.
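
    To make that concrete, assuming data is a
    std::vector<std::vector<std::vector<Value_t> > > (which is what the
    calls to size() suggest), here is a small sketch with int standing in
    for Value_t:

    ,---
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        typedef int Value_t;   // stand-in for the real Value_t
        std::vector<std::vector<std::vector<Value_t> > >
            data(10, std::vector<std::vector<Value_t> >(
                         10, std::vector<Value_t>(10)));

        // data[0] is a std::vector<std::vector<Value_t> >, not a Value_t,
        // so "data[0] == Value_t()" does not even compile.

        std::size_t xyzMax = data.size() + data[0].size() + data[0][0].size();
        std::size_t total  = data.size() * data[0].size() * data[0][0].size();

        std::cout << xyzMax << "\n";   // 30   -- what the rewrite visits
        std::cout << total  << "\n";   // 1000 -- what has to be visited
    }
    `---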

    [...]
    > Juha asked for an alternative that didn't make use of multiple exit
    > points. This is one way to achieve that -- fold the inner loops
    > (those indices are meaningless) and signal the end to the 'for' loop.
    >
    > So,
    >
    > 3 loop constructs are reduced to 1
    > 4 conditionals are reduced to 2
    > 2 exit points are reduced to 1
    >
    > Simple, isn't it?


    We can also remove all the loops and the function would be even more
    efficient:

    Value_t* MyClass::findValue(const Value_t& value)
    {
        return 0; // Single Exit point!
    }

    Much more simple, isn't it?

    --
    Thomas
     
    Thomas J. Gritzan, May 10, 2010
    #4
  5. Seebs

    Seebs Guest

    On 2010-05-10, Nathan <> wrote:
    > Then let's just start with Juha's original code and see where some
    > simple improvements can be made, shall we?


    > Given, the original...


    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > for(size_t xInd = 0; xInd < data.size(); ++xInd)
    > for(size_t yInd = 0; yInd < data[xInd].size(); ++yInd)
    > for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
    > {
    > if(data[xInd][yInd][zInd] == value)
    > return &data[xInd][yInd][zInd];
    > }
    >
    > return 0;
    > }
    > `---


    > First, we can remove the nested loops like so...


    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)
    > return &data[xInd];
    > }
    >
    > return 0;
    > }
    > `---


    Note the crossposts. Over here in C-land, we don't have the option of
    overriding data[] in that way. Actually, I'm not even sure you do in
    C++ land. Imagine that the array is a 10x10x10 array. What is
    data[5]? What is data[15]? How does data[5] know that data[5].size()
    should be the size of the sixth row, but that (data[5] == value) should
    be testing against the sixth element of the first row? Also, where are
    xInd and yInd being declared, that we can use data[xInd][yInd].size()?

    In short, this is just plain wrong. You're using undefined values. You
    haven't established that the rows are the same size -- the original code
    works even if each row is a different length!

    So far as I can tell, for a 10x10x10 array, you seem to be calculating
    the value 30, and then indexing the first 30 rows of the array, only 10
    of which exist, and comparing them to values. That seems... very confused.
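
    To make the "different lengths" point concrete (in C++ terms, since
    that's what the original is; in C you'd need arrays of row pointers
    to get the same shape), this is the sort of thing the original triple
    loop copes with and a precomputed flat bound can't describe:

    ,---
    #include <vector>

    int main()
    {
        std::vector<std::vector<std::vector<int> > > data(3);
        data[0].resize(2);
        data[0][0].push_back(7);   // data[0][0] has one element
                                   // data[0][1] is empty
        data[1].resize(1);
        data[1][0].assign(5, 0);   // data[1][0] has five elements
                                   // data[2] has no rows at all

        // The original findValue() re-checks size() at every level, so it
        // never touches an element that does not exist.
    }
    `---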

    > Next, we remove the nested exit point by changing it into a signal to
    > exit the loop...


    > ,---
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
    > result = 0;
    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)
    > result = &data[xInd];
    > xInd = xyzMax;
    > }
    >
    > return result;
    > }
    > `---


    I would never, ever, approve this code. Even apart from all the issues
    above, modifying loop variables like this is a worse control structure
    than anything this side of longjmp().

    Modifying loop variables by, say, incrementing them when you grab the next
    data item can make sense. Setting the loop variable to a magic value is
    crazy.

    Also, you omitted {}, so the loop will always exit after the first
    iteration, never mind the various other problems.

    > Juha asked for an alternative that didn't make use of multiple exit
    > points. This is one way to achieve that -- fold the inner loops
    > (those indices are meaningless) and signal the end to the 'for' loop.


    Except you've not established that those indices are "meaningless".

    > Simple, isn't it?


    Simple, but completely wrong. Even if it were adjusted to compile, it'd
    still be wrong; you haven't addressed the fact that the original supported
    a grid in which different elements can be of different sizes.

    -s
    --
    Copyright 2010, all wrongs reversed. Peter Seebach /
    http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
    http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
     
    Seebs, May 10, 2010
    #5
  6. Seebs

    Seebs Guest

    On 2010-05-10, Ben Bacarisse <> wrote:
    > [I've trimmed the groups. Sorry if this is the wrong thing to do]


    No worries, I don't have two of those groups either.

    > You can't assume that the data is contiguous or, to be a little more
    > general, that a single [] operation means the same as three of them.
    > This is, after all, C++ not C.


    In C++ it's at least conceivable that someone could come up with some
    exceptionally sneaky overloading of [] to make it viable. In C, it's
    just plain wrong. In general, you cannot assume that
    data[x][y][z]
    and
    ((type *) data)[(x*YSIZE*ZSIZE) + (y*ZSIZE) + z]

    are the same.
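
    The one place a single flat index is unambiguously fine is when you
    laid the data out as one contiguous block yourself (C++ syntax below,
    but the layout point is the same in C):

    ,---
    #include <cassert>
    #include <vector>

    int main()
    {
        enum { XSIZE = 2, YSIZE = 3, ZSIZE = 4 };

        // One contiguous block that we index by hand: the flattened
        // formula is simply how this block is addressed.
        int flat[XSIZE * YSIZE * ZSIZE] = {};
        flat[1*YSIZE*ZSIZE + 2*ZSIZE + 3] = 42;      // "element (1,2,3)"
        assert(flat[1*YSIZE*ZSIZE + 2*ZSIZE + 3] == 42);

        // A vector of vectors is not that block: every row is a separate
        // allocation, so no single base pointer plus offset reaches every
        // element.
        std::vector<std::vector<int> > rows(XSIZE, std::vector<int>(YSIZE));
        (void)rows;
    }
    `---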

    -s
    --
    Copyright 2010, all wrongs reversed. Peter Seebach /
    http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
    http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
     
    Seebs, May 10, 2010
    #6
  7. Seebs

    Seebs Guest

    On 2010-05-10, Nathan Baker <> wrote:
    > "Thomas J. Gritzan" <> wrote in message
    > news:hs981d$ot3$...
    >> This code uses only 1 index operation to data. data[xInd] still has 2
    >> dimensions and can't be compared to the scalar 'value'; the types don't
    >> match.


    > A computer's RAM has only one dimension.


    You are making a flawed assumption.

    > Let us consider the following 2-D
    > array:


    > x | 0 | 1 | 2
    > 0 | a | b | c
    > 1 | d | e | f
    > 2 | g | h | i
    >
    > In RAM, it looks like this:
    >
    > (0,0) a (0,1) b (0,2) c (1,0) d (1,1) e (1,2) f (2,0) g (2,1) h (2,2) i


    Maybe.

    It could be that it's not really a 2D array, but an array of pointers to
    non-contiguous blocks.

    > ...which we can also index as...
    >
    > (0) a (1) b (2) c (3) d (4) e (5) f (6) g (7) h (8) i


    > ...so, you can easily see that 'data[2][1]' and 'data[7]' reference the same
    > value.


    Wrong.

    Let's look more closely at data[1][0] and data[3]. data[3] refers to the
    block of memory just after 'i'. data[1][0] refers to 'b'.

    See, indexing doesn't magically change from one type to another without any
    intervention; it follows the type of the thing you're indexing. If
    data[2].size() tells you the size of the 'cfi' row, then data[2] is not 'b'.

    > Using one index, rather than three, decreases the complexity of our algo.


    It would if it worked, but it is not always the case that it would work, and
    this is one of the cases where it's very unlikely to.
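
    A quick way to see it, assuming for the sake of argument that the
    grid is a std::vector of rows rather than a built-in 2D array:

    ,---
    #include <iostream>
    #include <vector>

    int main()
    {
        // 3x3 grid as a vector of rows: each row owns its own heap block.
        std::vector<std::vector<char> > data(3, std::vector<char>(3, 'x'));

        // Within one row the elements are contiguous...
        std::cout << static_cast<const void*>(&data[0][0]) << " "
                  << static_cast<const void*>(&data[0][2]) << "\n";

        // ...but one past the end of row 0 is generally not the start of
        // row 1, so a single running index over one block cannot walk the
        // whole grid.
        std::cout << static_cast<const void*>(&data[0][2] + 1) << " "
                  << static_cast<const void*>(&data[1][0]) << "\n";
    }
    `---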

    -s
    --
    Copyright 2010, all wrongs reversed. Peter Seebach /
    http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
    http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
     
    Seebs, May 10, 2010
    #7
  8. Dave Harris

    Dave Harris Guest

    (Nathan) wrote (abridged):
    > > Specifically, you need to be aware that
    > > data[0][0].size() may be different to data[1][0].size() or
    > > data[0][1].size(), and that any of them may be zero meaning
    > > that (eg) data[0][0][0] does not exist and must not be accessed.

    >
    > [...]
    > Value_t* MyClass::findValue(const Value_t& value)
    > {
    > size_t xyzMax = data.size() + data[xInd].size() + data[xInd][yInd].size();
    > for(size_t xInd = 0; xInd < xyzMax; ++xInd)
    > {
    > if(data[xInd] == value)
    > return &data[xInd];
    > }
    >
    > return 0;
    > }


    But this does not do the same thing at all. You have done what I warned
    you against, which is to suppose that data[0][0].size() is the same as
    data[0][1].size().

    You also assume the values are contiguous in memory, which won't be true
    if data[0] is a std::vector (and it is almost certainly some kind of
    data structure; given that it defines the size() method it won't be a
    plain array).

    And you've introduced additional bugs, for example adding the sizes when
    you need to multiply them. Your later code misses out some braces so that
    the loop always terminates after one iteration. You don't declare or
    initialise all your variables. Frankly, you are not gaining any
    credibility here.


    > [...]
    > So,
    >
    > 3 loop constructs are reduced to 1
    > 4 conditionals are reduced to 2
    > 2 exit points are reduced to 1
    >
    > Simple, isn't it?


    When you can solve the original problem correctly, we can talk about
    whether your solution is simpler.

    -- Dave Harris, Nottingham, UK.
     
    Dave Harris, May 10, 2010
    #8
  9. Seebs <> writes:

    > On 2010-05-10, Ben Bacarisse <> wrote:

    <snip>
    >> You can't assume that the data is contiguous or, to be a little more
    >> general, that a single [] operation means the same as three of them.
    >> This is, after all, C++ not C.

    >
    > In C++ it's at least conceivable that someone could come up with some
    > exceptionally sneaky overloading of [] to make it viable.


    That's why I said "you can't assume...". It is conceivable but it is
    not a sensible assumption for the poster[1] to make. The post was about
    a re-write to simplify some code. If that involves a "sneaky
    overloading of []" any argument about simplification is blown away.

    [1] Was it Nathan? All earlier attributions have gone.

    --
    Ben.
     
    Ben Bacarisse, May 11, 2010
    #9
  10. Lie Ryan

    Lie Ryan Guest

    On 05/11/10 09:56, Ben Bacarisse wrote:
    > Seebs <> writes:
    >
    >> On 2010-05-10, Ben Bacarisse <> wrote:

    > <snip>
    >>> You can't assume that the data is contiguous or, to be a little more
    >>> general, that a single [] operation means the same as three of them.
    >>> This is, after all, C++ not C.

    >>
    >> In C++ it's at least conceivable that someone could come up with some
    >> exceptionally sneaky overloading of [] to make it viable.

    >
    > That's why I said "you can't assume...". It is conceivable but it is
    > not a sensible assumption for the poster[1] to make. The post was about
    > a re-write to simplify some code. If that involves a "sneaky
    > overloading of []" any argument about simplification is blown away.


    If it hides the complexity, then it could be argued that it's simpler,
    at least from the outside. C/C++ hides the complexity of assembly; who
    would argue with that?

    I think there are two separate definitions of complexity:

    - subjective complexity: how complex the source code looks
    - objective complexity: how complex the machine's actual work is

    Nowadays, as machine prices go down, the tendency is to reduce
    subjective complexity, while potentially increasing machine complexity
    slightly.
     
    Lie Ryan, May 12, 2010
    #10
  11. Lie Ryan <> writes:

    > On 05/11/10 09:56, Ben Bacarisse wrote:
    >> Seebs <> writes:
    >>
    >>> On 2010-05-10, Ben Bacarisse <> wrote:

    >> <snip>
    >>>> You can't assume that the data is contiguous or, to be a little more
    >>>> general, that a single [] operation means the same as three of them.
    >>>> This is, after all, C++ not C.
    >>>
    >>> In C++ it's at least conceivable that someone could come up with some
    >>> exceptionally sneaky overloading of [] to make it viable.

    >>
    >> That's why I said "you can't assume...". It is conceivable but it is
    >> not a sensible assumption for the poster[1] to make. The post was about
    >> a re-write to simplify some code. If that involves a "sneaky
    >> overloading of []" any argument about simplification is blown away.

    >
    > If it hides the complexity, then it could be argued that it's simpler,
    > at least from the outside.


    For the purposes of evaluating the complexity of a solution, you have to
    look at all the code. If a re-definition of [] is involved, then
    that is part of the solution and we never got to see it.

    I said that the argument is blown away -- not necessarily the
    conclusion. All code can be simplified to "find_solution_to_problem()"
    (or some variation thereof) but that's not a sound argument.

    > C/C++ hides the complexity of assembly; who would argue with that?
    >
    > I think there are two separate definitions of complexity:


    Only two? :)

    > - subjective complexity: how complex the source code looks
    > - objective complexity: how complex the machine's actual work is


    <snip>
    --
    Ben.
     
    Ben Bacarisse, May 12, 2010
    #11
  12. Lie Ryan

    Lie Ryan Guest

    On 05/12/10 21:42, Ben Bacarisse wrote:
    > Lie Ryan <> writes:
    >
    >> On 05/11/10 09:56, Ben Bacarisse wrote:
    >>> Seebs <> writes:
    >>>
    >>>> On 2010-05-10, Ben Bacarisse <> wrote:
    >>> <snip>
    >>>>> You can't assume that the data is contiguous or, to be a little more
    >>>>> general, that a single [] operation means the same as three of them.
    >>>>> This is, after all, C++ not C.
    >>>>
    >>>> In C++ it's at least conceivable that someone could come up with some
    >>>> exceptionally sneaky overloading of [] to make it viable.
    >>>
    >>> That's why I said "you can't assume...". It is conceivable but it is
    >>> not a sensible assumption for the poster[1] to make. The post was about
    >>> a re-write to simplify some code. If that involves a "sneaky
    >>> overloading of []" any argument about simplification is blown away.

    >>
    >> If it hides the complexity, then it could be argued that it's simpler,
    >> at least from the outside.

    >
    > For the purposes of evaluating the complexity of a solution, you have to
    > look at all the code. If a re-definition of [] is involved, then
    > that is part of the solution and we never got to see it.


    That is objective complexity.

    > I said that the argument is blown away -- not necessarily the
    > conclusion. All code can be simplified to "find_solution_to_problem()"
    > (or some variation thereof) but that's not a sound argument.


    If that function exists, then at that specific level of viewpoint, the
    specific problem is trivial. In subjective complexity, the current level
    of viewpoint is all that matters; the fact that one of the functions
    being called is complex is considered irrelevant.

    In short, if you break your code into many small, trivial functions, it
    would generally score very well in terms of subjective complexity, even
    though its objective complexity may actually be equal to (or even
    slightly higher than) what it is when the code is all in one place.
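
    For instance, back in findValue() terms (a sketch only, assuming the
    nested-vector layout discussed above): push the inner search into a
    helper and the top level reads as almost nothing, even though exactly
    the same loops run.

    ,---
    #include <cstddef>
    #include <vector>

    typedef int Value_t;   // stand-in element type for the sketch
    typedef std::vector<std::vector<std::vector<Value_t> > > Grid;

    // A small helper that searches one 2D slice.
    static Value_t* findInSlice(std::vector<std::vector<Value_t> >& slice,
                                const Value_t& value)
    {
        for(std::size_t y = 0; y < slice.size(); ++y)
            for(std::size_t z = 0; z < slice[y].size(); ++z)
                if(slice[y][z] == value)
                    return &slice[y][z];
        return 0;
    }

    // The caller now looks trivial; the machine still does the same work.
    Value_t* findValue(Grid& data, const Value_t& value)
    {
        for(std::size_t x = 0; x < data.size(); ++x)
            if(Value_t* p = findInSlice(data[x], value))
                return p;
        return 0;
    }
    `---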

    >> C/C++ hides the complexity of assembly; who would argue with that?
    >>
    >> I think there are two separate definitions of complexity:

    >
    > Only two? :)


    correction: "at least two"
     
    Lie Ryan, May 12, 2010
    #12
