David Mark
This one was posted as a response to a Twitter conversation.
http://webreflection.blogspot.com/2011/10/on-user-agent-sniffing.html
I would have commented there, but moderation is turned on (meaning
criticism will likely be deleted) and I don't want to sign in to a
blog site anyway.
"Oh well, who was following me on twitter today is already bored about
this topic (I guess) but probably other developers would like to read
this too so ..."
Here's hoping that other developers do not read this latest defense of
UA sniffing.
"What Is UA Sniffing
UserAgent sniffing means that a generic software is relying into a
generic string representation of the underlying system. The User Agent
is basically considered a unique identifier of "the current software
or hardware that is running the app".
In native applications world UA could be simply the platform name ...
where if it's "Darwin" it means we are in a Mac platform, while if
it's Win32 or any other "/^Win.*$/" environment, the app reacts,
compile, execute, as if it is in a Windows machine ... and so on with
Linux and relative distributions."
I really prefer the description in Richard's article:-
http://jibbering.com/faq/faq_notes/not_browser_detect.html
"The Modern Web Behavior
Recently things changed quite a lot on web side and only few companies
are redirecting via server side User Agent sniffing. We have now
something called runtime features detections, something that supposes
to test indeed runtime browser capabilities and understand, still
runtime, if the browser should be redirected or not into a hopefully
meaningful fallback or degraded service."
Redirecting by server side UA sniff? That's hardly the opposite of
feature detection/testing. I sense this is heading off into the
weeds...
"Features Detections Good Because
Well, specially because the browsers fragmentation is massive, FD can
tell us what we need from the current one, without penalizing in
advance anybody.
The potential redirection or message only if necessary, informing the
user his/her browser is not capable of features required to grant a
decent experience in the current online application/service.
FDs are also widely suggested for future compatibility with new
browsers we may not be able to test, or recognize, with any sort of
list present in our server side logic, the one that is not directly
able to understand if the current browser may run the application/
service or not.
Of course to be automatically compatible with newer browsers is both
business value, as "there before we know", and simplified maintenance
of the application/logic itself, since if it was working accordingly
with certain features, of course it's going to work accordingly with
newer or improved features we need.
As summary, runtime features detections can be extremely valuable for
our business ... but
Features Detections Bad Because
Not sure I have to tell you that the first browser with disabled
JavaScript support will fail all detections even if theoretically
capable ... but lets ignore these cases for now, right?
Well, it's kinda right, 'cause we may have detected browsers with JS
disabled already in the server side thanks to user headers or specific
agent ... should I mention Lynx browser ? Try to detect that one via
JavaScript ... "
I had a feeling.
"Back to "real world cases", all techniques used today for runtime
features detections are kinda weak ... or better, extremely weak!
I give you an example:"
// the "shimmable"
if (!("forEach" in []) || !Array.prototype.forEach) {
// you wish this gonna fix everything, uh? ...
Array.prototype.forEach = function () { ... };
}
That's a pretty lousy example.
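For contrast, here is a guard that checks the property's type rather
than its mere presence, along with a working fallback. A minimal
sketch only; the body skips the callable check and length coercion
the specification calls for:

// Check the type, not just the presence, of the method
if (typeof Array.prototype.forEach != "function") {
    Array.prototype.forEach = function (fn, thisArg) {
        for (var i = 0, length = this.length; i < length; i++) {
            if (i in this) { // skip holes, per the spec
                fn.call(thisArg, this[i], i, this);
            }
        }
    };
}

Note that the typeof test makes his "never executed" complaint moot:
if the property is a callable function, there is nothing to shim; if
it isn't, the shim goes in.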
// the unshimmable
if (!document.createElement("canvas").getContext("2d")) {
// no canvas support ... you wish to know here ...
}
What does that mean?
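Whatever it means, as written it can throw rather than branch: where
canvas is unsupported, the created element has no getContext method
to call. A minimal detection sketch that yields false instead,
assuming a usable context object is what we are after:

// Returns false, rather than throwing, where canvas or its
// 2D context is missing
function has2DContext() {
    var canvas = document.createElement("canvas");
    if (!canvas || typeof canvas.getContext == "undefined") {
        return false;
    }
    try {
        return !!canvas.getContext("2d");
    } catch (e) {
        return false;
    }
}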
"Not because I want to disappoint you but you gonna be potentially
wrong in both cases ... why that?"
No, he is going to be potentially wrong. It is, after all, his code.
"Even if Array.prototype.forEach is exposed and this is the only Array
extra you need, things may go wrong. As example, the first shim will
never be executed in a case where "forEach" in [] is true, even if
that shim would have solved our problem."
Hard to understand why he included that "in" test at all (or how it
will lead to his perceived downfall).
"That bug I have filed few days ago demonstrated that we cannot really
trust the fact a method is somewhere since we should write a whole
test suite for a single method in order to be sure everything will
work as expected OR we gonna write unit, acceptance, integration, and
functional tests to be sure that a bloody browser works as expected in
our application."
What bug? And don't test *everything*; test what you need for your
app.
"Same is valid for classic canvas capability ... once we have that, do
we really test that every method works as expected?"
No.
"And if we need only a single method out of the canvas, how can we
understand that method is there and is working as expected without
involving, for the single test, part of the API that may not work but
even though we don't care since we need only the very first one?"
If it doesn't work as needed, proper feature testing should detect
that and avoid disaster.
"I am talking about drawImage, as example, in old Symbian browsers,
where canvas is exposed but drawImage does not visually draw anything
on the element ... nice, isn't it?"
If you can't test whether it drew anything, attempt to draw two
identical images at the outset and let the user choose which one looks
best. Then you can rightfully blame the user if they end up with empty
images.
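Sarcasm aside, that particular failure is testable without polling
the user: draw a known source onto a second canvas and read the
pixel back. A minimal sketch, with the stated assumption that
getImageData itself behaves (which, yes, drags yet another part of
the API into the test):

// Fills a 1x1 source canvas, draws it onto a destination
// canvas, then reads the pixel back; false on any failure
function testDrawImage() {
    try {
        var src = document.createElement("canvas");
        var dst = document.createElement("canvas");
        src.width = src.height = dst.width = dst.height = 1;
        var srcContext = src.getContext("2d");
        var dstContext = dst.getContext("2d");
        srcContext.fillStyle = "#f00";
        srcContext.fillRect(0, 0, 1, 1);
        dstContext.drawImage(src, 0, 0);
        var pixel = dstContext.getImageData(0, 0, 1, 1).data;
        // Did the red pixel arrive intact?
        return pixel[0] === 255 && pixel[3] === 255;
    } catch (e) {
        return false;
    }
}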
"You Cannot Detect Everything Runtime
.... or better, if you do, most likely any user has to wait few minutes
before the whole test suite becomes green, specially in mobile
browsers where any of these tests take ages burning battery life, CPU
clocks, RAM, and everything else before the page can be even
visualized since we would like to redirect the user before he can see
the experience is already broken, isn't it?"
The whole test suite for what?
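For the record, the handful of checks an application actually needs
can run once, up front, with the results cached; nothing is
re-detected per call or per page view. A minimal sketch, reusing the
hypothetical has2DContext and testDrawImage helpers sketched above:

// Run the needed checks once at load; everything else
// consults the cached flags
var supports = {
    forEach: typeof Array.prototype.forEach == "function",
    canvas2d: has2DContext(),
    drawImage: testDrawImage()
};

if (!supports.canvas2d || !supports.drawImage) {
    // One message or redirect, up front; no per-call testing
}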
"IT Is Not Black Or White"
No, it's feature testing, feature detection, and object inference, in
that order of proficiency. Notice that UA sniffing does not appear in
the list.
"... you think so? I think IT is more about "what's the most
convenient solution for this problem", assuming there is, generally
speaking, no best solution to a specific problem, since every problem
can be solved differently and in a better way, accordingly with the
surrounding environment.
So how do we brainstorm all these possible edge cases that cannot
obviously be solved runtime in a meaningful, reliable way?"
I suppose we can try.
"I want provide same experience to as many users as possible but
thanks to my tests I have already found user X, Y, and Z, that cannot
possibly be compatible with the application/service I am trying to
offer."
Now that is a shame.
"If I detect runtime everything I need for my app, assuming this is
possible, every browser I already know has no problems there will be
penalized for non updated, low market share, problematic
alternatives."
It's an imperfect Web. This is one of the reasons why context is so
important to the design. You test only what needs to be tested and
nothing more.
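Testing only what needs testing starts with a reliable existence
check for host methods, as typeof alone misleads: some older IE
versions report "object" for host methods and "unknown" for ActiveX
methods. A minimal sketch of such a gateway:

// Host methods can typeof as "function", "object" (some older
// IE methods) or "unknown" (ActiveX); accept all three
var isHostMethod = function (object, property) {
    var t = typeof object[property];
    return t == "function" ||
        (t == "object" && !!object[property]) ||
        t == "unknown";
};

// Gate on exactly what the application calls, nothing more
if (isHostMethod(document, "getElementById")) {
    // document.getElementById is safe to call here
}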
"If I sniff the User Agent with a list of browsers I already know I
cannot possibly support due lack of unshimmable features, how faster
will be on startup time every other browser I am interested on?"
Who cares how fast you can shoot yourself in the foot? You don't know
when those bugs might be fixed or what other agents have them and you
sure as hell can't determine anything from the UA string.
"Just think about it "
I thought about it in the late 90's and I haven't changed my mind.
You think about it, as you are just confusing the "issue" (and
there's more than enough confusion out there already).