multi dimensional arrays as one dimension array


James Kuyper

James Tursa wrote:
....
OK, that's fine for objects, but that doesn't answer my question. What
is it about 2-dimensional (or multi-dimensional) arrays of double that
does not allow them to be stepped through with a double* ?

Ultimately, nothing more or less than the fact that the standard says
that the behavior is undefined. Because the behavior is undefined,
compilers are allowed to generate code that might fail if such stepping
is attempted (though this is rather unlikely). More importantly,
compilers are allowed to generate code that assumes that such stepping
will not be attempted, and therefore fails catastrophically if it
actually is attempted - the most plausible mode of failure is a failure
to check for aliasing.

Specific details:

Given

double array[2][1];
double *p = (double*)array;

If there is code which sets array[1][i] to one value, and p[j] to
another value, the compiler is not required to consider the possibility
that p[j] and array[1][i] might point at the same location in memory.
It's allowed to keep either value in a register, or to keep the two
values in different registers. It's not required to make the next
reference to array[1][i] give the same value as the next reference to p[j].

This is because the behavior would be undefined if 'i' and 'j' had
values that might ordinarily cause you to expect array[1][i] and p[j]
to refer to the same location. Note: this convoluted wording is
necessary, because if 'i' and 'j' have such values, then at least one of
the two expressions has undefined behavior, rendering it meaningless to
talk about which location that expression actually refers to.
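
[Illustrative sketch of the failure mode described above -- the function
and names are invented, and whether any given compiler actually behaves
this way is implementation-specific:]

#include <stddef.h>

double array[2][1];
double *p = (double *)array;

/* Under the reading above, a compiler may assume p[j] and array[1][i]
   never alias, keep array[1][i] cached in a register, and return a
   stale value here. */
double demo(size_t i, size_t j)
{
    array[1][i] = 1.0;
    p[j] = 2.0;             /* not assumed to alias array[1][i] */
    return array[1][i];     /* might still yield 1.0            */
}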



... And
ultimately, I would also ask if it is safe/conforming to use memcpy or
the like to copy values from/to such an array wholesale. e.g., is it

Yes, it is, and the reason is that the standard explicitly allows access
to the entirety of an object through lvalues of "unsigned char", and the
behavior of memcpy() is defined in terms of operations on "unsigned
char" lvalues. There is no similar exemption for "double*".
 

Old Wolf

Doesn't memcpy and the like do just that? See below.

No; typically when you use 'memcpy' you are
not trying to copy past the bounds of an object.
When I increment dp, how can the compiler possibly know where it
originally came from with certainty in order to do meaningful bounds
checking?

how can the
compiler possibly set up bounds checking inside the function at
compile time if it has no way of knowing what version will be passed?

The storage of 'dp' would contain something like:
Object address: 0x123456
Object size: 0x0C
Current offset: 0x04

James Kanze refers to pointers that contain
bounds-checking information as 'fat pointers'.
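
[A minimal sketch of what such a 'fat pointer' might look like; the
struct and field names are hypothetical, not part of any real ABI:]

#include <stddef.h>

struct fat_ptr {
    unsigned char *base;   /* start of the object dp points into */
    size_t         size;   /* size of that object, in bytes      */
    size_t         offset; /* current offset of dp within it     */
};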
And then I would wonder about memcpy and the like. How does that work
with regards to passing it d? Are you saying that I would have to pass
&d instead of d to ensure that I didn't invoke undefined behavior
inside the routine?
Yes

How would memcpy even know the difference?

memcpy doesn't need to know anything. It just
copies data where you request. It's up to you
to not request a copy outside the bounds of
an object.
 

Keith Thompson

Here's a concrete example:

#include <stdio.h>
int main(void)
{
    int obj = 123;
    unsigned char *ptr = (unsigned char*)&obj;
    int i;
    for (i = 0; i < sizeof obj; i++) {
        printf("ptr = 0x%x\n", ptr[i]);
    }
    return 0;
}


To be more concrete, I suggest size_t i; instead of int i; (with
INT_MAX < sizeof (int) your example is not concrete anymore, but
that's not likely)

Also, even if i is changed to size_t, if sizeof (int) == SIZE_MAX,
that would be an infinite loop...


Agreed, size_t is a little better here than int -- not because of any
real risk that sizeof(int) will equal or exceed either SIZE_MAX or
INT_MAX, but just because sizeof yields a result of type size_t, and
it's usually better to use consistent types unless you have a good
reason not to.

On the other hand, it's not entirely unreasonable to use int for
quantities that you can be sure won't exceed 32767.
...
size_t i;
for (printf("ptr = 0x%x\n", ptr[i = 0]); i && i < sizeof obj; i++)
    printf("ptr = 0x%x\n", ptr[i]);


That's not *quite* convoluted enough for the IOCCC, but keep at it.
8-)}
 

Keith Thompson

pete said:
Keith said:
That's how the string functions work.

N869
7.21 String handling <string.h>
7.21.1 String function conventions
[#1] The header <string.h> declares one type and several
functions, and defines one macro useful for manipulating
arrays of character type and other objects treated as arrays
of character type.
The latter refers to the mem* functions (memcpy, memmove, memcmp,
memchr, memset). Though they're described in the "String handling"
section and declared in <string.h>, I wouldn't call them string
functions.

Brian W. Kernighan and Dennis M. Ritchie
would call them string functions.

K&R2, Appendix B3:
There are two groups of string functions
defined in the header <string.h>.
The first have names beginning with str;
the second have names beginning with mem.

It's not just me and the standard.

Ok, then I disagree with you *and* the standard *and* Kernighan *and*
Ritchie. Except that I generally accept the standard's definitions
for terms, whether I think they're sensible or not, for the sake
of consistent communication.

On the other hand, though the standard does use the term "string
functions", it doesn't actually define it. It does, however, provide
definitions for "string" and "function" -- and memcpy, for example, is
a function that doesn't have anything directly to do with strings.

oh, well.
 

Old Wolf

The reasoning seems wrong when you go down to 1D arrays.  With double
d[2][3]; d is &d[0] and you are saying that this pointer can't be used
to access beyond that first element of d.  But if I declare double
x[2]; I most certainly can use x (== &x[0]) to access beyond that
first element.

I don't really have a good answer to that point,
but here goes:

Text like 6.5.6#8 specifies that if you do have
a pointer to an element of an array, then you
can increment or decrement it in order to point
to other elements of the array; this is definitely
legal too:
double (*p)[3] = &d[0];
++p; ++p;

this is analogous to the following, from your example:
double *q = &d[0][0];

In this case, "p" is not yet limited to d[0],
it is a pointer to elements of "d".

However, once you go:
double *r = (double *)&d[0];

things change. We now have a pointer
that can only be used to access values
inside the specific object "d[0]".

This is because 'double' objects are not
elements of the array "d"; "r" is not
pointing to elements of the array "d".

We have to fall back on the aliasing rules:
the value at d[0][0] is being accessed through
an lvalue of type "double", which is legal;
but now "r" is a pointer into the object d[0],
and not a pointer to an element of the object d.

The standard often talks about pointers
pointing into objects, but I can't find
text that explicitly says that the result of
the expression:
(double *)&d[0]
(where d[0] is not already of type double) is
a pointer that's constrained to the bounds of
d[0] -- which is what I would like to find in
order to back up everything I've said so far
in this post.

However the case at hand (arrays containing
arrays) does not seem different in principle
to something like this:

struct S
{
    double a[5][6];
    double b[5];
};

....

struct S s;

if ( sizeof(struct S) == 35 * sizeof(double) )
{
    double *p = (double *)&s.a[0];
    p[31] = 0;
}

which should definitely be illegal in my view,
in the case where the 'if' clause is satisfied.
So I am confident that the intent of the standard
is as I've presented in this post.
 

James Tursa

OK. I have learned three things today based on everyone's posts:

1)
double d[2][3];
double *dp = malloc(sizeof(d));
memcpy(dp,d,sizeof(d));

This is conforming because memcpy accesses the elements as an unsigned
char array and the standard guarantees this will work ...

OR

it is not conforming because I am requesting access to memory beyond
the d[0] array inside memcpy and there are no special rules in the
standard that guarantee this will work. I need to use &d instead.

2)
double d[2][3];
double *dp = (double *) d;
int i;
for( i=0; i<6; i++ )
    dp = (double *)(((unsigned char *)dp) + sizeof(double));

This is conforming because I am generating addresses using unsigned
char and am not calculating/dereferencing an address beyond the end of
d so the standard guarantees this will work ...

OR

it is not conforming because I am generating an address beyond the
d[0] array partway into the loop and there are no special rules in the
standard that guarantee this will work. I need to use &d instead when
defining dp.

3)
unsigned char uc[1];
unsigned char *ucp = (unsigned char *) uc;
unsigned char c, d;
uc[0] = 'a';
c = uc[0]; // (1)
ucp[0] = 'b'; // (2)
d = uc[0]; // (3)

This is conforming because the standard guarantees I can access the
elements of uc through the unsigned char pointer ucp and get
predictable results ...

OR

it is not conforming because the compiler is free to optimize and hold
the uc[0] value in (1) in a register or cache for later use in (3)
without checking to see if it was changed in (2). The value of d is
undefined.


I have a headache ... I'm going to bed ...

James Tursa
 

James Tursa

(I must confess that I don't see the fascination with deliberately
confusing type issues, though. If I want a one-dimensional array, I'll
build one. If I want a two-dimensional array, I'll build one of those
instead. Why mix them up?)

Because I want to copy a C 2-dimensional array (or multi-dimensional
array) to a MATLAB mxArray data pointer in a conforming way. Sounds
simple enough. Just copy the contents of a 2-dimensional array to a
data area accessed through a double *. The MATLAB functions will give
me a double * for the target data area and I have assumed (apparently
incorrectly) that a simple memcpy using the C variable name is
conforming. Apparently (according to some) I need to use &name instead
of just name to be conforming.
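
[For concreteness, a minimal MEX-style sketch of the copy being
discussed; mex.h, mxCreateDoubleMatrix and mxGetPr are the usual MEX
API, and whether the second memcpy argument should be d or &d is
exactly the point in dispute. MATLAB stores its data column-major,
which is a separate issue from the conformance question:]

#include <string.h>
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double d[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
    double *target;

    /* MATLAB supplies the target data area through a double *. */
    plhs[0] = mxCreateDoubleMatrix(2, 3, mxREAL);
    target = mxGetPr(plhs[0]);

    /* The copy under discussion. */
    memcpy(target, d, sizeof d);
}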

I'm just trying to understand the rules. I'm not trying to make up
deliberately confusing type issues. But as you can see I am getting
two different answers to many fairly simple questions (at least I
thought they were simple at the outset).

James Tursa
 

Tim Rentsch

James Kuyper said:
(e-mail address removed) wrote:
The subject might be misleading.
Regardless, is this code valid:
#include <stdio.h>
void f(double *p, size_t size) { while(size--) printf("%f\n", *p++); }
int main(void) {
    double array[2][1] = { { 3.14 }, { 42.6 } };
    f((double *)array, sizeof array / sizeof **array);
    return 0;
}
Assuming casting double [2][1] to double * is implementation defined
or undefined behavior, replace the cast with (void *).
Since arrays are not allowed to have padding bytes in between of
elements, it seems valid to me.
Stepping through a one dimensional array
and on through a consecutive one dimensional array
is only defined for elements of character type
and only because
any object can be treated as an array of character type.

So as I understand it you are saying my code invokes undefined
behavior.
In which hypothetical (or real, if such implementation does exist)
implementation my code won't work, and why?

The key point is the pointer conversion. At the point where that
conversion occurs, the compiler knows that (double*)array == array[0].
It's undefined if any number greater than 1 is added to that pointer
value, and also if that pointer is dereferenced after adding 1 to it.

This conclusion doesn't fit a consistent reading of the standard.

Certainly, if we have

double a[5][3];
double (*xa)[3];
xa = a;

then the valid index values for xa are 0, 1, 2, 3, 4. The variable
xa points into the multidimensional array object a; the "extent"
of xa is all of a.

The extent of xa is not changed by converting it to void *. It's
legal to sort xa using qsort, by

qsort( xa, 5, sizeof *xa, suitable_function );

The conversion of xa to void * must preserve access to the entire
array a. And void * isn't special in this regard; converting to
unsigned char * must allow access to the entire original array a,
so that elements can be swapped (either in qsort or in another
sorting function with a similar interface).
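
[A hedged sketch of what the 'suitable_function' in the qsort call
above might be; the ordering criterion is illustrative only. qsort
hands it pointers to whole rows, i.e. to double[3] objects:]

#include <stdlib.h>

/* Order rows by their first entry. */
static int suitable_function(const void *pa, const void *pb)
{
    const double *a = pa;   /* first double of one row     */
    const double *b = pb;   /* first double of another row */
    return (a[0] > b[0]) - (a[0] < b[0]);
}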

The very same argument applies if we don't use xa but just use
a directly; the value '(void*)a' has the same extent as the
array a. And so must '(double*)a', or any other pointer
conversion of a, provided of course that alignment requirements
are satisfied.
 

Ben Bacarisse

Richard Heathfield said:
James Tursa said:
Because I want to copy a C 2-dimensional array (or multi-dimensional
array) to a MATLAB mxArray data pointer in a conforming way. Sounds
simple enough. Just copy the contents of a 2-dimensional array to a
data area accessed through a double *. The MATLAB functions will give
me a double * for the target data area and I have assumed (apparently
incorrectly) that a simple memcpy using the C variable name is
conforming. Apparently (according to some) I need to use &name instead
of just name to be conforming.

If all you want is a solution that is guaranteed not to break any rules,
it's pretty easy. If MATLAB provides a space into which you need only copy
the data, you can do so in a simple loop:

/* We assume that arr is defined as double arr[ROWS][COLS]. We
further assume that p is of type double *, and points to
space at least ROWS * COLS * sizeof(double) bytes in size.
*/
double *t = p;
size_t thisrow = 0;
while(thisrow < ROWS)
{
    memcpy(t, arr[thisrow], sizeof arr[thisrow]);
    t += COLS; /* move t on by COLS doubles */
    ++thisrow;
}

Are you of the opinion that one or both of memcpy(p, arr, sizeof arr)
and memcpy(p, &arr, sizeof arr) are undefined?

What puzzles me is that arr (in a context where it converts to
&arr[0]) is supposed to be a pointer constrained legally to range over
only the first (array) element of arr, yet if I have

double x[ROWS];

x (in places where it converts to &x[0]) is permitted to range beyond
that first (non-array) element of x. Now, in the first case you have
an array pointer that gets further converted and in the second you
don't so this may be where the difference comes from, but I am having
trouble seeing the wording in the standard.

What is it about the conversions in memcpy(p, arr, sizeof arr) that
causes trouble when those in memcpy(p, x, sizeof x) do not?
 

Tim Rentsch

The subject might be misleading.
Regardless, is this code valid:

#include <stdio.h>

void f(double *p, size_t size) { while(size--) printf("%f\n", *p++); }
int main(void) {
    double array[2][1] = { { 3.14 }, { 42.6 } };
    f((double *)array, sizeof array / sizeof **array);
    return 0;
}

Assuming casting double [2][1] to double * is implementation defined
or undefined behavior, replace the cast with (void *).

Since arrays are not allowed to have padding bytes in between of
elements, it seems valid to me.

Despite what some other people have been saying,
this is valid. If foo is an array, doing (some_type *)foo
gives access to all of foo. Since 'array' is made up
(ultimately) of double's, using '(double*)array' will
work just fine.
 

Tim Rentsch

Barry Schwarz said:
The subject might be misleading.
Regardless, is this code valid:

#include <stdio.h>

void f(double *p, size_t size) { while(size--) printf("%f\n", *p++); }
int main(void) {
    double array[2][1] = { { 3.14 }, { 42.6 } };
    f((double *)array, sizeof array / sizeof **array);
    return 0;
}

Assuming casting double [2][1] to double * is implementation defined
or undefined behavior, replace the cast with (void *). [snip]
Since arrays are not allowed to have padding bytes in between of
elements, it seems valid to me.

If your system has built in hardware assist for bounds checking, it
would be reasonable for the "bounds registers" to contain the start
and end addresses of array[0]. Eventually your p++ would be outside
this range (even though it is still within array as a whole). While
this is a perfectly valid value, attempts to dereference it should be
trapped by the bounds checking logic in the hardware.

As explained elsethread, what's being converted is 'array', so
the converted pointer value must allow access to all of array.
 

vippstar

The subject might be misleading.
Regardless, is this code valid:
#include <stdio.h>
void f(double *p, size_t size) { while(size--) printf("%f\n", *p++); }
int main(void) {
    double array[2][1] = { { 3.14 }, { 42.6 } };
    f((double *)array, sizeof array / sizeof **array);
    return 0;
}
Assuming casting double [2][1] to double * is implementation defined
or undefined behavior, replace the cast with (void *).
Since arrays are not allowed to have padding bytes in between of
elements, it seems valid to me.

Despite what some other people have been saying,
this is valid. If foo is an array, doing (some_type *)foo
gives access to all of foo. Since 'array' is made up
(ultimately) of double's, using '(double*)array' will
work just fine.

Well, now I'm at a loss again. I think the only way to settle this is to
provide quotes from the standard that agree (or disagree) with you.
 

Tim Rentsch

James Kuyper said:
James said:
The key point is the pointer conversion. At the point where that
conversion occurs, the compiler knows that (double*)array == array[0].
It's undefined if any number greater than 1 is added to that pointer
value, and also if that pointer is dereferenced after adding 1 to it.

Trying to understand your answer as it relates to the original post. I
don't see how the original function gets an address 2 beyond the end,
or 1 beyond the end and attempts to dereference it, as you seem to be
saying. Can you point this out? Did I misunderstand you?

Quite possibly. The key point you need to understand is what array the
pointer points at. It's important to understand that, given the
following declaration:

double array[2][1];

"array" is not an array of "double". The element type for "array" is
"double[1]". On the other hand, array[0] is itself an array; the element
type for that array is "double".

The rules governing the behavior of pointer arithmetic are described by
6.5.6p8 in terms of an array whose element type is the type that the
pointer points at; they make no sense when interpreted in terms of an
array with any other element type.
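
[A small illustration of the distinction being drawn; the variable
names are invented:]

double array[2][1];

double (*row)[1] = array;     /* pointer to an element of array    */
double *elem     = array[0];  /* pointer to an element of array[0] */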

The standard does NOT clearly state where it is that (double*)array
points. I will assume what everyone "knows", which is that it points at
the same location in memory as the original pointer.

All good so far....

There is only one array with an element type of "double" that starts at
that location. It isn't "array", it's "array[0]". Therefore, the rules
concerning pointer arithmetic are described relative to array[0]. Since
array[0] has a length of 1, the behavior is undefined if any integer
other than 0 or 1 is added to it, and it is not legal to dereference it
after 1 has been added to it; the same must also be true of (double*)array.

The flaw in the reasoning here is the assumption that it must be an array
of double that is being converted into a double *. It need not be; what's
being converted is array, and it's being converted to double *. Whatever
the type of elements of an array X, doing (some_type*) X treats all
of X as though it has elements of some_type (with the usual caveats
about alignment).
 

James Kuyper

Tim Rentsch wrote:
....
The very same argument applies if we don't use xa but just use
a directly; the value '(void*)a' has the same extent as the
array a. And so must '(double*)a', or any other pointer
conversion of a, provided of course that alignment requirements
are satisfied.

That's where your argument breaks down. A double* is governed by rules
about the limits of pointer addition that char* is specifically exempted
from, and which are meaningless for void*. I've described the problem in
more detail in another branch of this discussion, so I won't repeat the
description here.
 

James Kuyper

Tim Rentsch wrote:
....
converted is array, and it's being converted to double *. Whatever
the type of elements of an array X, doing (some_type*) X treats all
of X as though it has elements of some_type (with the usual caveats
about alignment).

The standard says nothing about that. It says remarkably little about
what (sometype*)X does in general. Most of what it does say is that,
under some circumstances (none of which are relevant to this case),
conversion back to the original type returns a pointer that compares
equal to the original. Of the few cases where it says anything more than
that about what the result of (sometype*)X is, none apply here.
 

Tim Rentsch

[restoring snipped portion]
int main(void) {
    char array[2][1] = { { 'a' }, { 'b' } };
    f((char *)array, sizeof array / sizeof **array);
    return 0;
}
OK, that's fine for objects, but that doesn't answer my question. What
is it about 2-dimensional (or multi-dimensional) arrays of double that
does not allow them to be stepped through with a double* ?

The fact that double[2][3] doesn't have elements such as x[0][5]. There
must be a valid double, 5*sizeof(double) bytes into x. However, x[0][5]
doesn't mean just that. x[0][5] (or ((double*)x)[5]) means you're looking
5*sizeof(double) bytes into x[0]. x[0] doesn't have that many elements.

That doesn't matter since array isn't being accessed as a two-dimensional
array. Converting array (not array[0], but array) gives a pointer that
has access to all the same memory as array.

The machine will almost certainly let you get away with it, unless the
compiler specifically inserts instructions to stop this (bounds checking
implementations, as has been mentioned). The optimiser is less likely to.
The optimiser may assume, for example, that storing a value in x[0][5]
won't alter the value of x[1][2], or vice versa, and may re-order code
based on that assumption. If I recall correctly, there are situations
where at least gcc does this.

If some of the accesses to x are through a converted pointer then
other aliases can occur. If we do this:

double *xx = (double*)x;

then any store into xx (at unknown index value) must be
treated as though it might affect any x[i][j], and
similarly vice versa.
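
[An illustrative sketch of that consequence; the function name is
invented, and the point holds only if the conversion is valid, which
is what is under debate:]

#include <stddef.h>

double x[2][3];
double *xx = (double *)x;

double read_after_store(size_t k, size_t i, size_t j)
{
    double before = x[i][j];
    xx[k] = 0.0;              /* may alias any x[i][j]        */
    return before + x[i][j];  /* x[i][j] must be re-read here */
}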
 

Tim Rentsch

James Kuyper said:
James Tursa wrote:
...
OK, that's fine for objects, but that doesn't answer my question. What
is it about 2-dimensional (or multi-dimensional) arrays of double that
does not allow them to be stepped through with a double* ?

Ultimately, nothing more or less than the fact that the standard says
that the behavior is undefined. Because the behavior is undefined,
compilers are allowed to generate code that might fail if such stepping
is attempted (though this is rather unlikely). More importantly,
compilers are allowed to generate code that assumes that such stepping
will not be attempted, and therefore fails catastrophically if it
actually is attempted - the most plausible mode of failure is a failure
to check for aliasing.

Specific details:

Given

double array[2][1];
double *p = (double*)array;

If there is code which sets array[1][i] to one value, and p[j] to
another value, the compiler is not required to consider the possibility
that p[j] and array[1][i] might point at the same location in memory.
It's allowed to keep either value in a register, or to keep the two
values in different registers. It's not required to make the next
reference to array[1][i] give the same value as the next reference to p[j].

This is because the behavior would be undefined if 'i' and 'j' had
values that might ordinarily cause you to expect array[1][i] and p[j]
to refer to the same location. Note: this convoluted wording is
necessary, because if 'i' and 'j' have such values, then at least one of
the two expressions has undefined behavior, rendering it meaningless to
talk about which location that expression actually refers to.


You're starting with the conclusion, and then "proving" the
conclusion. This conclusion isn't consistent with other
behavior and language in the standard.
Yes, it is, and the reason is that the standard explicitly allows access
to the entirety of an object through lvalues of "unsigned char", and the
behavior of memcpy() is defined in terms of operations on "unsigned
char" lvalues. There is no similar exemption for "double*".

Irrelevant, because that's talking about whether a memory access
can have undefined behavior because of an invalid representation.
It's just as illegal to access outside of an array using unsigned
char as it is using double. The only question is, what memory
may be accessed. Since 'array' is what was converted, any memory in
array may be accessed.
 

Ben Bacarisse

pete said:
Richard said:
Ben Bacarisse said:
Richard Heathfield writes:
If all you want is a solution that is guaranteed not to break any rules,
it's pretty easy. If MATLAB provides a space into which you need only
copy the data, you can do so in a simple loop:

/* We assume that arr is defined as double arr[ROWS][COLS]. We
further assume that p is of type double *, and points to
space at least ROWS * COLS * sizeof(double) bytes in size.
*/
double *t = p;
size_t thisrow = 0;
while(thisrow < ROWS)
{
    memcpy(t, arr[thisrow], sizeof arr[thisrow]);
    t += COLS; /* move t on by COLS doubles */
    ++thisrow;
}
Are you of the opinion that one or both of memcpy(p, arr, sizeof arr)
and memcpy(p, &arr, sizeof arr) are undefined?

<weasel>
I'm of the opinion that the above code represents a squeaky-clean
way of converting a two-dimensional array into a one-dimensional
array.
</weasel>

A slightly less weaselly answer to your question would be that I'm
not entirely sure that a pedant (i.e. plenty of people in this
newsgroup, including myself) could not construct an argument that
the memcpy route could exhibit undefined behaviour.
What puzzles me is that arr (in a context where it converts to
&arr[0]) is supposed to be a pointer constrained legally to range over
only the first (array) element of arr, yet if I have

double x[ROWS];

x (in places where it converts to &x[0]) is permitted to range beyond
that first (non-array) element of x. Now, in the first case you have
an array pointer that gets further converted and in the second you
don't so this may be where the difference comes from, but I am having
trouble seeing the wording in the standard.

What is it about the conversions in memcpy(p, arr, sizeof arr) that
causes trouble when those in memcpy(p, x, sizeof x) do not?

There can't be any difference.
In both cases the third argument is the number of bytes
of the object referred to by the second argument.
And in both cases the second parameter is initialised
to the address of the lowest addressable byte
of the object referred to by the second argument.

Firstly, that wording (about lowest addressable byte) applies only to
conversion to "pointer to character" types. All sane people know it
applies to void * too (and hence memcpy) but it is very hard to prove
it from the wording in the standard.

Secondly, there is a hair's breadth of difference between the cases.
In one (passing arr) a pointer of type double (*)[COLS] is converted
to void *; in another (passing &arr) a pointer of type double
(*)[ROWS][COLS] is converted to void *; and in my third case (passing
x) a double * is converted to void *. It is possible that some
wording somewhere allows an implementation to restrict the range of
some of these converted pointers and not others. I don't believe so,
but that is what I think is being claimed by some.
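
[A small sketch of the three conversions being compared; ROWS and COLS
are assumed macros, and the names are illustrative:]

#define ROWS 2
#define COLS 3

double arr[ROWS][COLS];
double x[ROWS];

void show_conversions(void)
{
    void *v1 = arr;    /* double (*)[COLS]       -> void * */
    void *v2 = &arr;   /* double (*)[ROWS][COLS] -> void * */
    void *v3 = x;      /* double *               -> void * */

    /* The question: may an implementation give these converted
       pointers different extents, or must each allow access to the
       whole object it was derived from? */
    (void)v1; (void)v2; (void)v3;
}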

I don't understand the aversion to using a nested loop,
using the assignment operator for the type double values.

I never use memcpy when I can use an assignment operator instead.

I think people are just trying to work out what is allowed and what is
not but I can see some value for some numerical applications where
utility functions could be written to be "size neutral" without
needing VLAs.
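
[For reference, a minimal sketch of the nested-loop, assignment-based
copy pete mentions above; COLS and the names follow Richard
Heathfield's example, and COLS is given an example value here:]

#include <stddef.h>

#define COLS 3

/* Assumes p points to space for at least rows * COLS doubles. */
void copy_by_assignment(double *p, double arr[][COLS], size_t rows)
{
    size_t r, c;
    for (r = 0; r < rows; r++)
        for (c = 0; c < COLS; c++)
            p[r * COLS + c] = arr[r][c];  /* plain double assignment */
}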
 

Ben Bacarisse

James Kuyper said:
Tim Rentsch wrote:
...

That's where your argument breaks down. A double* is governed by rules
about the limits of pointer addition that char* is specifically
exempted from, and which are meaningless for void*. I've described the
problem in more detail in another branch of this discussion, so I
won't repeat the description here.

I can't find the other message so I'll have to ask here. Are you
saying that converting a double * to, say, unsigned char * permits one
to access parts of an array that are not accessible via the double *?
What are the limits on addition that one is exempted from and where is
this permission granted?

You go on to say that these limits are meaningless for void *, but at
some point, useful void *s are converted back. Do the limits that are
then imposed derive from the original pointer or can they get lost?
I.e. is (double *)(void *)dp different in what it can access to
(double *)(void *)(unsigned char *)dp?
 

Harald van Dijk

[restoring snipped portion]
int main(void) {
    char array[2][1] = { { 'a' }, { 'b' } };
    f((char *)array, sizeof array / sizeof **array);
    return 0;
}
OK, that's fine for objects, but that doesn't answer my question.
What is it about 2-dimensional (or multi-dimensional) arrays of
double that does not allow them to be stepped through with a double*
?

The fact that double[2][3] doesn't have elements such as x[0][5]. There
must be a valid double, 5*sizeof(double) bytes into x. However, x[0][5]
doesn't mean just that. x[0][5] (or ((double*)x)[5]) means you're
looking 5*sizeof(double) bytes into x[0]. x[0] doesn't have that many
elements.

That doesn't matter since array isn't being accessed as a
two-dimensional array. Converting array (not array[0], but array) gives
a pointer that has access to all the same memory as array.

With the exception of character types, does the standard describe the
conversion of an array to anything other than a pointer to its initial
element?
Strictly speaking, I can't even find where the standard describes the
result of converting double(*)[3] to double* at all, but the only way to
perform that conversion indirectly is by taking the address of the first
element of the first sub-array, and I accept that a direct conversion
should mean the same thing. If you can point out where more permissions
are given, please do so.
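
[For reference, a sketch of the two routes to the same pointer value
being contrasted here; d is assumed declared as earlier in the thread:]

double d[2][3];

/* Indirect route: only double objects are involved. */
double *q = &d[0][0];

/* Direct route: a double (*)[3] is converted to double *.  The
   question is whether this grants the same access as q. */
double *r = (double *)&d[0];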
 
