Christopher
I am confused as I am converting my code to run on 64 bit. I know most
of the internals are specific to the compiler and implementation etc.
I can't decide how to handle standard types vs. compiler-specific
types.
I've thus far always opted to make a best effort to provide interfaces
that use standard types. However, I am now running into problems doing
that.
All my code that returned sizes now returns 64-bit values, since
MS has made size_t 64 bits. My code used to treat these as 32-bit
unsigned integers. Heck, even some of the Windows APIs expect 32-bit
unsigned integers as parameters, but now I have 64-bit unsigned
integers to pass.
Anyway, is it bad practice for me to just give in and start using the
defined types MS provides instead of standard types? I am beginning to
believe it might make my job a lot easier, even if my code is no longer
portable.
Also, should I start using 64-bit types whenever possible instead of
32-bit types, even if I don't think the values will grow that large?
Does it make a difference?
For instance I used to have a class something like this:
#include <vector>

struct UDT { /* ... */ };

class MySpecialContainer
{
public:
    unsigned GetNumElements() const
    {
        return m_udts.size(); // size_t -> unsigned: truncates on x64
    }
private:
    typedef std::vector<UDT> UDTVec;
    UDTVec m_udts;
};
Now the return value of size() is 64 bits and no longer converts to
unsigned without a truncation warning.
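One option I've toyed with is having the accessor return the container's own size_type instead of unsigned (just a sketch, with a placeholder UDT; it really only pushes the question out to the callers):

```cpp
#include <vector>

struct UDT { int value; };

class MySpecialContainer
{
public:
    // std::vector<UDT>::size_type is std::size_t, so this matches
    // whatever size() returns -- 64 bits on x64, no warning.
    std::vector<UDT>::size_type GetNumElements() const
    {
        return m_udts.size();
    }
private:
    std::vector<UDT> m_udts;
};
```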
I could go and change everything to return size_t, but then I have to
change everywhere it is passed as a parameter, and half the APIs I
call ask for a UINT, even the Windows APIs, I guess because they
expect the value never to grow that big. So then I have to
static_cast<unsigned>(mySize_T) back again. I don't know what kind of
rules to adopt for these situations.
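If I do end up casting at those API boundaries, I figure a small checked helper beats raw static_casts scattered everywhere (a hypothetical ToUInt of my own, not anything MS provides):

```cpp
#include <cstddef>
#include <limits>
#include <stdexcept>

// Hypothetical helper: narrow a size_t to unsigned, failing loudly
// instead of silently truncating if the value really did grow too big.
unsigned ToUInt(std::size_t n)
{
    if (n > std::numeric_limits<unsigned>::max())
        throw std::overflow_error("size_t value too large for unsigned");
    return static_cast<unsigned>(n);
}
```

Then a call like SomeWin32Func(ToUInt(m_udts.size())) documents the narrowing in one place instead of at every call site.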