Should UA string spoofing be treated as a trademark violation?

VK

I am wondering about the common practice of some UA producers of
spoofing the UA string to pretend to be another browser (most often IE).

Shouldn't it be considered a trademark violation against the relevant
name's owner? If I made a whisky and called it "Jack Daniel's", I would
most probably have some serious legal problems. The name "Mozilla"
partially came about because NCSA stopped Netscape from using "Mosaic"
in the UA string.

Is the current spoofing somehow a different situation?


P.S. And no, I am not advocating browser sniffing. But the impact on
usage statistics is obvious.
 
cwdjrxyz

VK said:
I am wondering about the common practice of some UA producers of
spoofing the UA string to pretend to be another browser (most often IE).

Shouldn't it be considered a trademark violation against the relevant
name's owner? If I made a whisky and called it "Jack Daniel's", I would
most probably have some serious legal problems. The name "Mozilla"
partially came about because NCSA stopped Netscape from using "Mosaic"
in the UA string.

Is the current spoofing somehow a different situation?


P.S. And no, I am not advocating browser sniffing. But the impact on
usage statistics is obvious.

It seems that this problem would be of concern mainly to Microsoft, and
Microsoft has plenty of good lawyers and usually will use them for the
least little copyright violation they see. I would thus guess that
either there is no copyright violation or Microsoft considers this
problem unimportant. Of course copyright laws can vary somewhat around
the world, and they are not enforced very well in some countries. I am
quite willing to leave the copyright aspects of this "problem" to
Microsoft. However, I think that every browser variation should have a
unique ID number assigned by an international agency, and that any
browser that does not have such an ID should be blocked from the web.
This would allow meaningful browser detection on the rare occasions
when it is really needed - for example, when the browser has a bug that
other browsers do not have. However, I am sure that this is wishful
thinking on my part, just as is a requirement that all new pages be
blocked from the web unless they completely validate at the W3C HTML
and CSS validators.
 
Richard Cornford

It seems that this problem would be of concern mainly to Microsoft, and
Microsoft has plenty of good lawyers and usually will use them for the
least little copyright violation they see. I would thus guess that
either there is no copyright violation or Microsoft considers this
problem unimportant.

Don't be silly; Microsoft couldn't take action against anyone, as they
virtually invented UA string spoofing, and every browser they have
released since IE 4 has spoofed Netscape 4 (hence 'Mozilla/4.0' at the
start of their UA string).
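
For illustration, the UA string that IE 6 on Windows XP typically sends
(the exact tokens vary with the system configuration) is:

    Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

The leading 'Mozilla/4.0' token is the Netscape 4 spoof; the browser's
real identity is relegated to the parenthesised comment.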

It was Microsoft's action in spoofing Netscape that resulted in the
change between HTTP 1.0 and 1.1, where the latter no longer specifies
the User-Agent header as a source of information, only suggests that it
could be used as such. By the time HTTP 1.1 was written, the horse had
long since bolted.
... , and that any browser that does not have such an ID should be
blocked from the web.

At which point the people writing the browsers you have never heard
of, and so would assume are incapable of anything, start spoofing
browser IDs. We just end up back where we are now, with lots of people
wasting their time thinking about browser IDs in the same way they
have been wasting their time assuming that user agent strings could be
a source of information.
This would allow meaningful browser detection on the rare occasions
when it is really needed

Many more people declare a need for browser detection than are actually
capable of coming up with an example where feature detection could not
answer the question when asked.
- for example, when the browser has a bug that
other browsers do not have.

Don't all browsers have a bug that other browsers do not have? But most
significant bugs can be tested for without browser detection. If you
think otherwise you are welcome to suggest a concrete example and see
if it can't be feature detected.
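
To illustrate the pattern (a minimal sketch, not tied to any particular
browser or bug): test for the capability itself and branch on the
result, rather than guessing from the UA string:

    // Attach an event listener without sniffing the browser.
    function addListener(el, type, fn) {
        if (el.addEventListener) {           // W3C DOM event model
            el.addEventListener(type, fn, false);
        } else if (el.attachEvent) {         // older IE event model
            el.attachEvent('on' + type, fn);
        } else {
            el['on' + type] = fn;            // last-resort fallback
        }
    }

Where a bug (rather than a missing feature) is the problem, the same
idea applies: probe for the bug's observable symptom at runtime instead
of testing for the browser's name.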
However, I am sure that this is wishful thinking
<snip>

Yes it is.

Richard.
 
VK

Richard said:
Don't all browsers have a bug that other browsers do not have? But most
significant bugs can be tested for without browser detection. If you
think otherwise you are welcome to suggest a concrete example and see
if it can't be feature detected.

Not to be nasty - but purely as a matter of "burden of proof":

The current SVG Cairo engine used in Firefox 1.5.x cannot render
textPath elements under Windows 98. Even nastier: it simply stops SVG
rendering at the first occurrence of a textPath. This is mentioned on
Mozilla's site, but that doesn't help in practice.

While SVG Cairo support itself can be detected by
if (document.implementation.hasFeature('org.w3c.dom.svg', '1.0'))
for the Windows 98 adjustments I have to sniff for "Win98" in the UA
string.

Though this example is not perfectly "clean", as I'm sniffing for the
OS, not the UA.
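
A minimal sketch of the workaround (the variable names are only
illustrative; what is done with the result is omitted):

    // Feature-detect SVG support, then fall back to an OS sniff
    // only for the known Windows 98 textPath problem.
    var svgSupported = document.implementation &&
        document.implementation.hasFeature('org.w3c.dom.svg', '1.0');
    var isWin98 = navigator.userAgent.indexOf('Win98') != -1;
    var useTextPaths = svgSupported && !isWin98;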
 
cwdjrxyz

Richard said:
At which point the people writing the browsers you have never heard
of, and so would assume are incapable of anything, start spoofing
browser IDs. We just end up back where we are now, with lots of people
wasting their time thinking about browser IDs in the same way they
have been wasting their time assuming that user agent strings could be
a source of information.

Of course you are right. So the international agency that assigns
browser IDs would have to have the ability to enforce the standards and
heavily fine or otherwise penalize browser writers who violate them. On
a more general level, we have to have international and national
standards for broadcasting radio and TV to avoid chaos. However, the
situation on the web has approached anarchy in many respects, resulting
in unnecessary problems for both the writers of web pages and users.
The technical control of radio and TV broadcasting on an international
and national basis is not perfect, and a few rogue countries have
jammed broadcasts from time to time, for example. However, in my
opinion, the situation is far better in the broadcast field than it now
is on the web.

I am only talking about enforcement of technical standards. I do not
think that regulation of content standards is a good thing in most
cases, although there could be rare exceptions. The problem here is
that what is acceptable in one society may not be so in another. A good
example is China. Both Google and Yahoo have recently attracted the
attention of the US Congress, and others, concerning the handing over
of personal information about users that Chinese officials demand. In
some cases such information has apparently been used to jail people who
do not agree with some official Chinese policy - in other words, for a
political "crime". That is apparently the price of doing business in
China, but there are many who object strongly on moral grounds and
claim that if giving up personal information is the cost of doing
business in China, then the company should not operate there.
 
Richard Cornford

cwdjrxyz said:
Of course you are right. So the international agency that
assigns browser IDs would have to have the ability to enforce
the standards and heavily fine or otherwise penalize browser
writers who violate them.

That is fine so long as it cuts both ways and any web author who is
caught excluding a browser because it identifies itself is subject to
equivalent fines and penalties. Anything short of that and you are
inviting a browser monopoly that would not be in the public interest.

.... I am only talking about enforcement of technical
standards.
<snip>

Aren't you the 'cwdjrxyz' who blew his credibility in alt.html by
championing a content negotiation script that disregarded the mechanism
laid out in the HTTP 1.1 specification and actually failed so badly that
it would send XHTML to browsers that explicitly declared their rejection
of it:-

<
I don't think I will have much regard for any assertions you may make in
favour of technical standards until after I have seen some evidence that
you follow them yourself.

Richard.
 
Richard Cornford

VK said:
Not to be nasty - but purely as a matter of "burden of proof":

The current SVG Cairo ...
<snip>

That is not a concrete example, it is a hearsay report from the most
unreliable source available.

Richard.
 
cwdjrxyz

Richard said:
That is fine so long as it cuts both ways and any web author who is
caught excluding a browser because it identifies itself is subject to
equivalent fines and penalties. Anything short of that and you are
inviting a browser monopoly that would not be in the public interest.


<snip>

Aren't you the 'cwdjrxyz' who blew his credibility in alt.html by
championing a content negotiation script that disregarded the mechanism
laid out in the HTTP 1.1 specification and actually failed so badly that
it would send XHTML to browsers that explicitly declared their rejection
of it:-

<
I don't think I will have much regard for any assertions you may make in
favour of technical standards until after I have seen some evidence that
you follow them yourself.

I do not see what bringing up an unrelated reference to another group
has to do with this. You quote only one post in a very long thread. In
summary, I use a php include to force a browser to accept true xhtml 1.1
if it reports it will accept it at all in the header exchange. It is up
to the browser maker to decide if they want to allow true xhtml using
the mime type for xhtml+xml or not. If they do not allow it then my php
include reverts to html 4.01 strict. If I did not do that, my pages
would not work on IE6! Thus I do not send xhtml to browsers that do not
indicate that they will accept it!

In some cases the browser says it will accept either the mime type for
true xhtml or the mime type for html. In some of these cases it says it
prefers html. In those cases I have found that the common browsers that
will accept both html and true xhtml, but "prefer" html, work just fine
if you force the xhtml path in the header exchange. My guess is that
some browser makers specify that they prefer html just to be on the safe
side. One should not confuse a "preference" of the browser with the code
that can be used to indicate that preference in the header exchange, if
a browser writer so wishes. In addition, a few lesser-used browsers do
not indicate what they will accept in the header exchange, although they
sometimes really will accept true xhtml just as well as html. Apple's
Safari comes to mind here. In that case, I err on the safe side and use
html 4.01 strict, because browser detection of some of these browsers is
not safe, as they can spoof another browser.

I now have dozens of pages served as described above, and they all
validate perfectly as xhtml 1.1 or html 4.01 strict at the W3C
depending on what path is selected by the header exchange. Furthermore,
the pages work properly for the xhtml 1.1 or html 4.01 strict path
selected by the header exchange I use. You can see several such pages
by going to http://www.cwdjr.info/media/playersRoot.php .
 
VK

Richard said:
<snip>

That is not a concrete example, it is a hearsay report

You imply that I do not use SVG but am just making up a problem? It is
not clear how you came to this conclusion - unless you think yourself
telepathic.

Here is the feature detection block I'm currently using, and believe me,
I did not write it just to post it here:

....
/**
 * Feature detection block.
 *
 * The conditional compilation sections below are executed only by
 * IE (JScript); other browsers treat them as ordinary comments and
 * run the branch following the @else marker instead.
 */
/*@cc_on @*/
/*@if (@_jscript_version >= 5.5)
    // IE 5.5+: use VML, registering its namespace if necessary.
    if (document.namespaces['v'] == null) {
        document.namespaces.add('v',
            'urn:schemas-microsoft-com:vml', '#default#VML');
    }
    SVL.UA = 'IE';
    SVL.VL = 'VML';
@elif (@_jscript)
    // Older IE: identified, but no vector language available.
    SVL.UA = 'IE';
@else @*/
// Non-IE browsers:
if (document.implementation) {
    if (window.opera) {
        SVL.UA = 'Opera';
        SVL.VL = 'SVG';
    }
    else if (document.implementation.hasFeature('org.w3c.dom.svg', '1.0')) {
        SVL.UA = 'Gecko';
        SVL.VL = 'SVG';
    }
    else if ((window.netscape) && (window.netscape.security)) {
        SVL.UA = 'Gecko';
    }
    else {
        /*NOP*/
    }
}
/*@end @*/
....
 
VK

cwdjrxyz said:
Of course you are right. So the international agency that assigns
browser IDs would have to have the ability to enforce the standards and
heavily fine or otherwise penalize browser writers who violate them.

Wow! Like 10 years in jail for not supporting XHTML? :) Form a W3C
International Police Corps exempt from national legislatures? "Your
browser doesn't support RegExp properly! Up against the wall, bastards!"
:)

If you mean "technical" standards (like cryptography strength, proper
UA string reporting, etc.) then maybe... But I still tend to believe
that this is better handled at the level of each particular country.
That is apparently the price of doing business in China, but there are
many who object strongly on moral grounds and claim that if giving up
personal information is the cost of doing business in China, then the
company should not operate there.

I'm aware of the China/search-engines story and find it very sad. But
it is off-topic for UA features. That was done by IP tracking - which is
a core feature of the WWW, used by governments in many countries even
against their own citizens. Think of Carnivore (USA), SORM-2 (Russia),
or some very interesting organizations acting under the EUCD in the EU.
 
Michael Winter

Richard Cornford wrote:

[snipped quote]

Seems that you still don't know how to trim quoted posts.
I do not see what bringing up an unrelated reference to another group
has to do with this.

I think Richard made his point rather well: one cannot pontificate about
enforcing standards unless one is willing to implement them oneself.
You quote only one post in a very long thread.

Readers are welcome to view the rest of the thread, I'm sure; Google's
archived it all. However, what other insight do you expect them to gain?
That it wasn't just me who told you that you were wrong, but that others
did, too?
In summary, I use a php include to force a browser to accept true
xhtml 1.1

Which you shouldn't.
if it reports it will accept it at all in the header exchange.

That isn't the test you use at all. It's quite obvious from inspecting
its behaviour that all you do is look for the mere mention of the
string, application/xhtml+xml, as a substring of the Accept header value
(and a substring match is patently wrong). You make no effort to parse
the header whatsoever, and this is plain for all to see.

It is up to the browser maker to decide if they want to allow true
xhtml using the mime type for xhtml+xml or not.

Indeed, which is why quality values can be used to indicate not only a
preference for a certain media type, but an explicit rejection of it as
well. If you had read and understood sections 14.1 and 3.9 of RFC 2616,
you would know this.
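
For instance, a minimal sketch of what those sections require - not the
script under discussion, and written in javascript for this group,
although the same logic applies in php:

    // Return true only if application/xhtml+xml is acceptable per
    // the given Accept header: listed, with a q-value above zero.
    // Wildcards such as */* and preference comparison against
    // text/html are deliberately omitted from this sketch.
    function acceptsXhtml(acceptHeader) {
        var ranges = acceptHeader.split(',');
        for (var i = 0; i < ranges.length; i++) {
            var parts = ranges[i].split(';');
            var type = parts[0].replace(/^\s+|\s+$/g, '');
            if (type == 'application/xhtml+xml') {
                var q = 1; // default quality (RFC 2616, section 3.9)
                for (var j = 1; j < parts.length; j++) {
                    var m = parts[j].match(/^\s*q\s*=\s*(\d+(?:\.\d+)?)/);
                    if (m) {
                        q = parseFloat(m[1]);
                    }
                }
                return q > 0; // q=0 means 'explicitly not acceptable'
            }
        }
        return false; // the type is not listed at all
    }

A bare substring test gives the wrong answer for a header such as
'text/html, application/xhtml+xml;q=0', which mentions the type only to
reject it; the function above correctly returns false for it.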
If they do not allow it then my php include reverts to html 4.01
strict.

But it doesn't. If a browser explicitly states that it cannot accept
XHTML, you'll serve it anyway.
If I did not do that, my pages would not work on IE6!

You only send HTML to IE 6 because it doesn't include
application/xhtml+xml in its Accept header value, not because your
negotiation mechanism is written correctly.
Thus I do not send xhtml to browsers that do not indicate that they
will accept it!

Repeating something false does not make it true.
In some cases the browser says it will accept either the mime type
for true xhtml or the mime type for html. In some of these cases it
says it prefers html. In those cases I have found that the common
browsers that will accept both html and true xhtml, but "prefer"
html, work just fine if you force the xhtml path in the header
exchange.

There would be little point in advertising the ability to process XHTML
if in fact the user agent can't. However, that doesn't mean there isn't
a reason to prefer another media type.
My guess is [...]

Your guess is irrelevant. One does not incorrectly implement HTTP based
on a guess.

Though a rigid interpretation of quality values is not a requirement, it
is strongly recommended and you are clearly in no position to override
that recommendation.

[snip]
You can see several such pages by going to
http://www.cwdjr.info/media/playersRoot.php .

There isn't much there of which one should be proud. I see a useless
document title, a broken and superfluous meta element, poor use of class
attributes, badly chosen structural elements, and a very fetching uneven
white border.

Mike
 
Richard Cornford

VK said:
You imply that I do not use SVG but am just making up a problem?

I do not imply that you don't "use" SVG; I stated that your assertion
was a hearsay report from the most unreliable source available.
It is not clear how you came to this conclusion -
unless you think yourself telepathic.

There is no need for telepathy; all I have to do is observe that:-

1. You don't understand javascript sufficiently well to understand
the code that you write yourself.
2. You write bug-filled, convoluted, difficult to maintain, even
dangerous code, disregarding conventions and still failing to
fully address the applicable situation.
3. You are unwilling to take the advice of others on how to better
understand javascript, even when those individuals are the ones
capable of explaining the hows and whys of javascript.
4. You are incapable of comprehending technical explanations of
    javascript, or of formulating questions asking for clarification
    of what you don't understand.
5. You resist evidence and demonstrations that you are wrong far
beyond the point where any rational observer would accept reality.
6. When you find yourself in a minority of one in a technical
discussion involving many genuine experts on a subject you prefer
to conclude that everyone else is wrong and you alone are the
only person who really understands the technology.
7. You author code on a basis of mystical incantation, including
things 'because they work' but without any understanding of what
they actually do or ability to explain why you are using them.
8. You declare things to be 'bugs' because you don't like/understand
them when they are actually completely normal and expected (even
technically specified).
9. You don't understand computers (even to the extent of seeing why
the bit widths of data and address registers have nothing to do
with the precision of number representations in computer systems).
10. You bury your head in the sand whenever you are faced with the
possibility that things could be done better, or how they might
be done better.
11. You spend time testing browsers and end up knowing less about
them than when you started.
12. You don't understand logic.
13. You follow irrational thought processes to false conclusions and
    then maintain that you are correct in the face of any arguments.
14. You see relevance in the irrelevant and unrelated, but can never
justify it, preferring to characterise those who see the
irrelevant as irrelevant as beyond understanding.
15. The majority of your statements are too incoherent to convey
meaning.
16. When your statements are clear enough to convey meaning the
majority are irrational, technically false or made up off the
top of your head.
17. You use English terms outside of their accepted meanings and
    apply those incorrect meanings to your interpretation of English.
18. You use technical language outside of its specified meaning.
19. You are incapable of consistently creating well-formed Usenet
posts.
20. You tend to regard people pointing out your inadequacies as
personally motivated rather than the reasoned responses to your
own misguided actions/behaviour that they actually are.

And given the above, when you make a statement that something is so it
would not be sensible for anyone to conclude that it is so, and even if
it were so it would be more reasonable to attribute it to shortcomings
in the programmer than anything else. I.e., if you are not capable of
rendering something sufficiently concrete that it can be reproduced by
others, then it makes more sense to disregard it as just more irrational
raving from your deranged mind.
Here is the feature detection block ...
<snip>

That is not a feature detection block, and it is still irrelevant to the
'issue' you mentioned.

Richard.
 
Richard Cornford

cwdjrxyz said:
I do not see what bringing up an unrelated reference to
another group has to do with this.

It is a thread that demonstrates someone who is apparently keen to
stress their championing of technical standards disregarding RFC 2616
(Hypertext Transfer Protocol -- HTTP/1.1), which just happens to be one
of the most pivotal technical standards that exists for the Internet.

I particularly enjoyed the point in the thread where Michael Winter
proposed you actually read RFC 2616 and you declared (at considerable
length) that you didn't take technical advice from people posting to
Usenet and instead would get your advice through technical and computing
journals. While everyone observing the conversation knew full well that
if going through technical journals was a worthwhile practice at all it
must inevitably lead you all the way back to RFC 2616, as that is the
applicable technical standard for content negotiation. You had spent
your residual credibility in the group by the end of that post.
You quote only one post in a very long thread.

Anyone who cares will be able to reference the thread from any single
message ID within it.
In summary, I use a php include to force a browser to accept
true xhtml 1.1 if it reports it will accept it at all in the
header exchange.

It was precisely the fact that you were not doing that, but instead
serving XHTML to any UA that included the character sequence
"application/xhtml+xml" in its Accept header, that was the reason for
the criticism you received in that thread. Because, as anyone familiar
with the technical standards for content negotiation, as laid out in RFC
2616, already knows, a UA may include the character sequence
"application/xhtml+xml" in its Accept header in order to express its
absolute rejection of the MIME type (and that is without even
considering that it may include the sequence in a way that expresses a
strong preference for text/html or some other type).
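
For illustration, consider two hypothetical (but RFC 2616 conformant)
headers:

    Accept: text/html, application/xhtml+xml;q=0
    Accept: application/xhtml+xml;q=0.8, text/html;q=0.9

The first mentions the type only to reject it outright; the second
accepts it while expressing a preference for text/html. A substring
check treats both as straightforward acceptance.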

Content negotiation is the subject of a formal technical specification,
and your simplistic efforts completely disregard that specification, to
the extent that your system is capable of doing the opposite of what it
should do: serving XHTML to a UA that reports that it cannot accept it.
It is up to the browser maker to decide if they want to
allow true xhtml using the mime type for xhtml+xml or not.

Yes, and their mechanism for doing that is providing an HTTP Accept
header that conforms with RFC 2616's specification for an Accept header.
And it is the responsibility of the person writing software to do
content negotiation to interpret that Accept header in accordance with
the technical specification, rather than making up their own rules based
on superficial observations of a few actual Accept headers.
If they do not allow it then my php include reverts to
html 4.01 strict.

It can only do that if you are aware of what constitutes 'not allow' as
laid down in RFC 2616.
If I did not do that, my pages would not work on IE6! Thus
I do not send xhtml to browsers that do not indicate that
they will accept it!

But the consequence of your implementation was that you will also send
XHTML to browsers that do assert that they do 'not allow' it. That is
poor, and it is in disregard of the applicable technical specification.
In some cases the browser says it will accept either the
mime type for true xhtml or the mime type for html.

And it may express a preference for one or the other. For a while Opera
expressed a preference for text/html, which was fair enough as their
XHTML could not sensibly be scripted at the time, only rendered, so HTML
was the better content type to accept. Your script would have pushed
XHTML at it regardless, because it had no conception of the specified
mechanism for content negotiation. And for other browsers in the same
situation your script is still delivering the inferior choice when it
could send the superior one.
In some of these cases it says it prefers html. In those cases
I have found that the common browsers that will accept both
html and true xhtml, but "prefer" html, work just fine if you
force the xhtml path in the header exchange.

Interworking specifications are not about what works in 'common
browsers'; they are about creating systems that deliver acceptable
outcomes for everyone. After all, you are the one proposing that UAs
identify themselves so you can recognise the uncommon browsers when they
show up, yet you are only acting to accommodate the 'common browsers',
regardless of how well the uncommon browsers conform to the applicable
technical specification that you would rather disregard.
My guess is that some browser makers specify that they
prefer html just to be on the safe side.

I think that Opera made their choice thinking that the user may prefer
the option of functional scripts to broken ones. Browser manufacturers
may also think that the user may prefer progressive rendering to only
having access to a page's contents once the page has fully loaded, or
that a user may prefer the output of an old and well tested/debugged
HTML parser to a brand new, hardly tested and experimental XHTML parser.
The browser manufacturers are in a good position to judge the relative
acceptability of various content types in their browsers, and they have
a specified mechanism to express it in their Accept headers. It doesn't
make sense to put that aside because superficial testing does not expose
any problems.
One should not confuse a "preference" of the browser with
the code that can be used to indicate that preference in
the header exchange, if a browser writer so wishes.

That doesn't make sense.
In addition, a few lesser-used browsers do not indicate what they
will accept in the header exchange, although they sometimes really
will accept true xhtml just as well as html. Apple's Safari comes
to mind here. In that case, I err on the safe side and use html
4.01 strict, because browser detection of some of these browsers
is not safe, as they can spoof another browser.

I now have dozens of pages served as described above,

And because you serve them without any regard for the RFC 2616 specified
content negotiation mechanism any statement you may make about the
'enforcement of technical standards' will be hypocrisy.
and they all validate perfectly as xhtml 1.1 or html 4.01 strict
at the W3C depending on what path is selected by the header
exchange. Furthermore, the pages work properly for the xhtml 1.1
or html 4.01 strict path selected by the header exchange I use. ...

There is little point talking of a "header exchange" if the UAs are
sending headers in accordance with RFC 2616 and you are interpreting
them in accordance with superficial rules derived from a few
observations and a lot of blanket assumptions. It is not an exchange,
let alone negotiation, if you are not even talking the same language.

Richard.
 
cwdjrxyz

Michael said:
On 14/04/2006 04:41, cwdjrxyz wrote:
a lot.

I find most of your discussion without merit, and consider it just
another troll post. I am not going to waste my time on you again. Bye.
 
cwdjrxyz

Richard Cornford wrote:

a lot.

I find most of your discussion without merit, and consider it just
another troll post. I am not going to waste my time on you again. Bye.
 
Randy Webb

Richard Cornford said the following on 4/14/2006 9:59 AM:
cwdjrxyz wrote:


Anyone who cares will be able to reference the thread from any single
message ID within it.

I read the entire thread based on that one reference, and it was worth
it for the laugh I got from reading it.

I can now safely add cwdjrxyz to the people in the VK File. He was
halfway there, but that thread finished it.
 
VK

Richard said:
1. You don't understand javascript sufficiently well to understand
the code that you write yourself.
<snip>

Rather a strong statement from a person who only recently learned how
to add <script> elements to a page (see the relevant thread)

;-)
 
