can't convert from type A* to type B*


carl.eichert

I need to change DMapEntry::pData from a char* to a class DMapData that contains the original pointer but still be able to refer to &pData[offset] in DMapEntry without changing it. Is this possible?

#include "stdafx.h"

class DMapData {
    char* pData;
public:
    char* operator->() { return pData; }
    char operator[](size_t offset) { return pData[offset]; }

    friend class DMapEntry;
};

class DMapEntry {
    char* pStr;
public:
    DMapData pData;
    /*----->*/ void getStr(size_t offset) { pStr = &pData[offset]; }
};

int _tmain(int argc, _TCHAR* argv[])
{
    DMapEntry a;
    return 0;
}

Carl
Lomita, CA
 

Ian Collins

> I need to change DMapEntry::pData from a char* to a class DMapData
> that contains the original pointer but still be able to refer to
> &pData[offset] in DMapEntry without changing it. Is this possible?
>
> #include "stdafx.h"

?

> class DMapData {
>     char* pData;
> public:
>     char* operator->() { return pData; }

Why return a char* here?

>     char operator[](size_t offset) { return pData[offset]; }
>
>     friend class DMapEntry;
> };
>
> class DMapEntry {
>     char* pStr;
> public:
>     DMapData pData;
>     /*----->*/ void getStr(size_t offset) { pStr = &pData[offset]; }

You can't take the address of a temporary return value.

Why prefix a member that doesn't return anything with "get"?

> };
>
> int _tmain(int argc, _TCHAR* argv[])

Yuck!
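
A minimal sketch of one way to make the marked line compile, assuming the goal is a char* pointing into the buffer that DMapData owns: have operator[] return a reference rather than a copy, so taking its address yields a pointer to the original data instead of the address of a temporary.

#include <cstddef>

class DMapData {
    char* pData;
public:
    char* operator->() { return pData; }
    char& operator[](std::size_t offset) { return pData[offset]; }  // reference, not a copy
    friend class DMapEntry;
};

class DMapEntry {
    char* pStr;
public:
    DMapData pData;
    void getStr(std::size_t offset) { pStr = &pData[offset]; }  // & now applies to a real char
};

With the value-returning operator[], pData[offset] is a prvalue copy, and applying & to a prvalue is ill-formed.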
 

Stuart

Carl Eichert wrote:
>> int _tmain(int argc, _TCHAR* argv[])
>
> That's a completely valid function declaration (if _TCHAR has been
> declared somewhere.) What's so "yuck!" about it?

I think that you know perfectly well what is so "yuck" about it: it's
platform-dependent code. Readers who are not familiar with the
Windows platform will probably wonder what this strange "tmain" is
supposed to be, and whether the OP could _please_ post a compilable
example (or direct the question to a newsgroup that is dedicated to C++
with Visual Studio).

Of course, the OP may not know what is so special about "tmain", so I have
to admit that "yuck" is probably not very accurate. Which is why I am
sending this reply.

Regards,
Stuart
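
For reference, the portable equivalent of the OP's entry point is a one-line change (a sketch; the body is the OP's own):

int main(int argc, char* argv[])
{
    DMapEntry a;
    return 0;
}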
 

James Kanze

Ian Collins said:
>>> int _tmain(int argc, _TCHAR* argv[])
>> Yuck!
> That's a completely valid function declaration (if _TCHAR has been
> declared somewhere.) What's so "yuck!" about it?

Actually, it's not, at least not at namespace scope. The symbol
_tmain is in the implementation namespace (as is _TCHAR). In
this regard, Microsoft actually did the right thing, and avoided
polluting the user's namespace. (Assuming that it *is* the
Microsoft extension, which seems a fairly safe bet.)

As for Yuck!... I'm tempted to say rather: another victim.
Except that in this case, I don't think Microsoft was actually
trying to do anything wrong. They just didn't realize that it
doesn't buy you anything; that just changing a typedef and
a function signature won't suddenly make a program handling
narrow characters Unicode compliant.
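
For readers unfamiliar with the extension: <tchar.h> simply maps the names one way or the other at compile time, roughly like this (a simplified sketch, not the actual header):

#ifdef _UNICODE
typedef wchar_t _TCHAR;   // wide build: _tmain becomes wmain
#define _tmain wmain
#else
typedef char _TCHAR;      // narrow build: _tmain is plain main
#define _tmain main
#endif

which is exactly why flipping the typedef alone cannot make narrow-character code Unicode compliant.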
 

Öö Tiib

> What Microsoft's funny TCHAR etc. do is let you, with care, write a
> program that will function in both narrow and unicode implementations.

Wasted care. The Windows ANSI functions (names ending with A) are
converting wrappers around the Unicode functions (names ending with W),
so "narrow" means overhead for no gain. IOW, one of that "both"
is pointless when the other one works.
 

Nobody

> Wasted care. The Windows ANSI functions (names ending with A) are
> converting wrappers around the Unicode functions (names ending with W),
> so "narrow" means overhead for no gain.

That is the case in current versions of Windows, but it wasn't always that
way. TCHAR originated when Microsoft was still selling versions of Windows
which didn't support Unicode. Binaries which use the W versions won't run
on 95/98/ME (at least, not without unicows.dll, which didn't appear on the
scene until after ME was released).
 

Öö Tiib

> That is the case in current versions of Windows, but it wasn't always that
> way. TCHAR originated when Microsoft was still selling versions of Windows
> which didn't support Unicode. Binaries which use the W versions won't run
> on 95/98/ME (at least, not without unicows.dll, which didn't appear on the
> scene until after ME was released).

Yes, it was not always wasted care; at the current moment, however, the
care it lets you take is basically wasted. ;-)

Even Microsoft itself agrees: "The TEXT and TCHAR macros are less useful today,
because all applications should use Unicode. However, you might see them
in older code and in some of the MSDN code examples."
http://msdn.microsoft.com/en-us/library/ff381407(VS.85).aspx
 

James Kanze

> What Microsoft's funny TCHAR etc. do is let you, with care, write a
> program that will function in both narrow and unicode implementations.

What Microsoft's funny TCHAR, etc. do is give you the illusion
that you can write a program that will function in both narrow
and wide character implementations. (Most of the code I write
today is narrow character, but fully supports Unicode. There's
no real relationship between the width of the character type and
whether it is Unicode or not.)

And it very definitely is an illusion, since the wide character
version of TCHAR is UTF-16, which requires special handling for
surrogates, etc.
 

James Kanze

> Yes, it was not always wasted care; at the current moment, however, the
> care it lets you take is basically wasted. ;-)
>
> Even Microsoft itself agrees: "The TEXT and TCHAR macros are less useful today,
> because all applications should use Unicode. However, you might see them
> in older code and in some of the MSDN code examples."
> http://msdn.microsoft.com/en-us/library/ff381407(VS.85).aspx

It's interesting that I write a lot of code which supports
Unicode, but never uses anything other than char. As I said, I
think this is a case of Microsoft trying to do the right thing,
but not understanding all of the real issues. Anyone concerned
with internationalization, be it under Windows or under Unix,
will use UTF-8 on plain char at the external interface, and
depending on what they are doing inside the code, either UTF-8
or UTF-32 (on uint_least32_t, or char32_t if they have C++11).
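
A minimal C++11 sketch of that model, assuming the standard <codecvt> facet is available (it was current at the time of this thread): UTF-8 on plain char at the boundary, converted to char32_t only when code-point-level work is needed.

#include <codecvt>
#include <locale>
#include <string>

// Convert external UTF-8 to UTF-32 for internal per-code-point work.
std::u32string to_utf32(const std::string& utf8)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    return conv.from_bytes(utf8);  // throws std::range_error on malformed UTF-8
}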
 

Balog Pal

> What Microsoft's funny TCHAR, etc. do is give you the illusion
> that you can write a program that will function in both narrow
> and wide character implementations. (Most of the code I write
> today is narrow character, but fully supports Unicode. There's
> no real relationship between the width of the character type and
> whether it is Unicode or not.)
>
> And it very definitely is an illusion, since the wide character
> version of TCHAR is UTF-16, which requires special handling for
> surrogates, etc.

At the time the idea was born it was UCS2, and surrogates were considered
not worth bothering with. Eventually the latter assumption fell, leaving
behind a system that puts together all the disadvantages without any of
the benefits... Well.

On the bright side, a C++ program is not forced to use the _UNICODE
model; the API functions and even many classes can be called with an A or W
postfix, mixed and matched.
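
For instance, the explicit-suffix pairs can be mixed freely in one translation unit, regardless of whether UNICODE/_UNICODE is defined (a sketch; MessageBoxA/MessageBoxW are real Win32 entry points, the strings are illustrative):

#include <windows.h>

void demo()
{
    // ANSI wrapper: converts the narrow strings, then calls the W version.
    MessageBoxA(NULL, "narrow text", "caption", MB_OK);
    // Native UTF-16 entry point, called directly.
    MessageBoxW(NULL, L"wide text", L"caption", MB_OK);
}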
 

Öö Tiib

> It's interesting that I write a lot of code which supports
> Unicode, but never uses anything other than char. As I said, I
> think this is a case of Microsoft trying to do the right thing,
> but not understanding all of the real issues. Anyone concerned
> with internationalization, be it under Windows or under Unix,
> will use UTF-8 on plain char at the external interface, and
> depending on what they are doing inside the code, either UTF-8
> or UTF-32 (on uint_least32_t, or char32_t if they have C++11).

What each module does is basically input, processing and
output. Input and output are often interactive, so together they
are called the "interface".

What a module can use for text in its interface depends on how the
interface is specified.

Various text-based interfaces (XML, JSON) use UTF-8.

Microsoft's language-neutral COM interfaces use UTF-16 BSTR. It
is painful to use UTF-8 there instead.

If the interface is with a GUI module written in Qt (it is
C++ too) then that likely uses UTF-16 QString internally. It is
tempting to use UTF-16 in the interface with such a module.

Modules written in Java or C# also use UTF-16 strings internally.
Again tempting.

As for processing text internally within a module: use one
Unicode encoding consistently. It is straightforward to
convert it into some other Unicode encoding. UTF-8 is the most
natural choice. Maybe that is what you meant by saying that you
never use anything but char? As for illusions with UTF-16 ...
just do not have wrong illusions about it and everything works.
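
A sketch of crossing such a UTF-16 boundary from UTF-8 internals, using the Win32 conversion routine (a real API; error handling omitted for brevity):

#include <windows.h>
#include <string>

// Convert internal UTF-8 to UTF-16 for a COM/Qt/C# style interface.
std::wstring utf8_to_utf16(const std::string& s)
{
    int n = MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), NULL, 0);
    std::wstring w(n, L'\0');
    if (n > 0)
        MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), &w[0], n);
    return w;
}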
 

James Kanze

> What each module does is basically input, processing and
> output. Input and output are often interactive, so together they
> are called the "interface".
>
> What a module can use for text in its interface depends on how the
> interface is specified.
>
> Various text-based interfaces (XML, JSON) use UTF-8.
>
> Microsoft's language-neutral COM interfaces use UTF-16 BSTR. It
> is painful to use UTF-8 there instead.

If you're interfacing to some external functions, then you
obviously have to use the encoding format which they require.

> As for processing text internally within a module: use one
> Unicode encoding consistently. It is straightforward to
> convert it into some other Unicode encoding. UTF-8 is the most
> natural choice. Maybe that is what you meant by saying that you
> never use anything but char? As for illusions with UTF-16 ...
> just do not have wrong illusions about it and everything works.

Just a nit (because I think we really agree here), but there is
only one Unicode encoding. What you mean is the encoding form:
how the encoding is represented. Depending on what you are
doing, you may choose different encoding forms. For anything
external, you should use UTF-8. For interfacing with a third
party library, you must use the form that the library expects.
Internally, depending on what you are doing, it may be more
convenient to convert the UTF-8 to UTF-32. Or not: there's a
lot you can effectively do in UTF-8.
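
One example of what can be done directly in UTF-8: counting code points needs no conversion at all, because continuation bytes are recognizable from their top two bits (a sketch):

#include <cstddef>
#include <string>

std::size_t count_code_points(const std::string& utf8)
{
    std::size_t n = 0;
    for (std::string::size_type i = 0; i != utf8.size(); ++i)
        if ((static_cast<unsigned char>(utf8[i]) & 0xC0) != 0x80)
            ++n;  // count lead bytes only, skip 10xxxxxx continuations
    return n;
}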
 

James Kanze

Balog Pal wrote:
> At the time the idea was born it was UCS2, and surrogates were considered
> not worth bothering with. Eventually the latter assumption fell, leaving
> behind a system that puts together all the disadvantages without any of
> the benefits... Well.

Yes. I hope I made it clear that the motivation was _not_
something negative from Microsoft. Their decisions were made at
the wrong time, when it seemed that UCS2 would be the universal
solution. Time has shown it to be a wrong decision, but they
were trying to do the right thing.
> On the bright side, a C++ program is not forced to use the _UNICODE
> model; the API functions and even many classes can be called with an A or W
> postfix, mixed and matched.

Globally speaking, you should always use the A versions (which
are, I think, the default), and convert internally to what is needed.
 

Balog Pal

> Globally speaking, you should always use the A versions (which
> are, I think, the default), and convert internally to what is needed.

Where they do the same, yes. I recall stumbling on some functions where
the A versions had limitations like the MAX_PATH length while the W version
did not. But those are hopefully a minority.
 

Tobias Müller

Paavo Helde said:
> One cannot use the A versions of the Windows SDK because these are not
> Unicode-capable. The A here means ANSI codepage, and it is not possible
> to use UTF-8 as the ANSI codepage in Windows.

It's called ANSI everywhere (including MSDN), but actually it is the
current codepage, which can be selected. This happens to be an ANSI codepage
on most systems because no one changes it in the Windows world.
To close this subthread, here is a link that explains everything:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd317752(v=vs.85).aspx
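
A tiny sketch making that point concrete: the "ANSI" code page is simply whatever the system reports (GetACP() is the real Win32 call; 1252 is only the common Western default):

#include <windows.h>
#include <cstdio>

int main()
{
    // Prints e.g. 1252 on most Western systems; it is a setting, not a fixed encoding.
    printf("Active \"ANSI\" code page: %u\n", GetACP());
    return 0;
}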
 
