Not really.
Unless there really is an advantage to going with the non-standard
solution, you would go for the standard even for the less important
stuff.
usually, there are advantages in edge-cases.
in cases where a standard technology exists which does the job fairly
well, it usually makes the most sense to use it.
for example, PNG and JPEG are pretty good, so there are few reasons not
to use them (for most things image-storage related).
like, not being chained to standards does not mean making a habit of
avoiding them either.
where it makes more sense to ignore the standardized technologies is
when they either aren't very good, or are notably bad (and actually
using them would likely leave the product worse off).
it is like, trying to take VRML seriously as a 3D model format.
VRML is standardized, but the standards committee at the time seemingly
managed to get nearly everything wrong in the design. then later, they
tried again with X3D, which sort of competes against COLLADA, which
AFAICT is much more popular (despite X3D being shoved into a larger
number of other standards, like HTML5 and MPEG-4...).
likewise for the OSI protocols, ... people were largely just like
"whatever" and continued using TCP/IP (IETF largely won this battle).
never mind that, officially (as per the standards), JPEG was replaced by
JPEG-2000 around 13 years ago, and more recently there is JPEG-XR.
meanwhile, the original JPEG remains the more well-supported format by
most software (much easier to find apps which read/write JPEG images
than JP2 or JXR images).
....
sometimes, using a standard technology may actually make things worse
in other ways.
for example, it is popular at present for people to make various file
formats consisting of XML documents or similar packaged up into a
ZIP-based container.
while this is easier to approve as "open" or "standard", it comes with a
drawback:
some applications are prone to detect the ZIP-related magic values, and
automatically change the file extension to ZIP, which can sometimes
prove rather annoying (whereas, if a non-ZIP container format were used,
these tools would more often leave the file alone).
it also makes little sense if the intention is actually for the
application to keep the data to itself, such as for a proprietary
file-format, where it may actually be to one's advantage if unaware
parties (such as competitors, ...) have little idea what the file contains.
They can.
But the home-made solutions often promise to be better yet rarely
deliver.
I have generally had good results with various custom-designed
technologies.
but, this again comes back to cost/benefit tradeoffs:
if the results of the choice don't pay off well, it means they did not
make a good choice, not that having had the option to choose was to
blame.
like, having the freedom to make a choice does not mean freedom from the
consequences of that choice; it also means the freedom to shoot oneself
in the foot.
sometimes, a simple direct solution can also be better than a bigger
"standard" solution, for example:
passing simple lists or arrays for internal messages, vs using DOM or
similar;
using a HashMap or similar, vs using an RDBMS, to store key/value pairs
or similar;
passing plain data, rather than using RPC or similar;
....
like, a possible premise:
don't use a sledgehammer to do what can easily enough be done with a
tack-hammer.
like, even if the standard solution is to store data in a DBMS, the
HashMap may be simpler, easier, and also potentially significantly
faster, ...
BTW: I have been, mostly for the hell of it, in the process of porting
my stuff to work on Native Client. most of the work thus far is in
trying to migrate the stupid 3D renderer from full OpenGL to OpenGL ES.
to make this work, I am also having to make a "hand-made solution":
namely, migrating much of the code from using normal OpenGL calls to
using a set of wrapper functions (which will then fake things and pass
the results off to GLES). (too much of the code still relies on the
existence of the "fixed function pipeline", so for GLES it all needs to
be faked...).
I guess the more standard solution here would be to rewrite all the code
directly (rather than forcing it onto wrappers), but this would be more
work.
but, then again, it is notably easier in this case to go with a
non-standard technology (Native Client), even if largely tied to a
single browser, than to go with a more standardized technology (IOW:
trying to rewrite a 3D engine into HTML5+JS+WebGL to shove it into a
browser).
granted, for targeting a browser up-front (writing a new engine
ground-up or similar), the HTML5+JS+WebGL route could probably make more
sense (at least assuming "general" things, like that the browsers are
smart enough to know how to cache compiled code and similar...).