Cross-Browser onmousedown JavaScript

Martin Rinehart

I recently tried the little test page at the bottom of this message.
This is what I found out about left button presses.

Tested: recent Chrome, Firefox, IE7, Opera, Safari - all on Windows XP Pro

Well known: all but IE report via e.which.

Less well known:

None but Opera passes the mousedown to document. Is this an Opera
problem, or is Opera the only one that gets it right?

Move the onmousedown assignment (for the div) from JS to the markup
and Firefox (only) reports nothing. (No event is passed.) Firefox bug?

----------------------------------------------------
<html>
<body>
<div id='foo' style='height:50px; width:150px; border:1px solid
#000000; background:#f0f0ff'>mouse down here
</div>

<script>
dv = document.getElementById( 'foo' );
dv.onmousedown = mouseDown;

document.onmousedown = 'alert( "document.onmousedown" )';

function mouseDown( e ) {
e = e || event;
if (e.which) { alert( 'e.which = ' + e.which ); }
else if (e.button) { alert( 'e.button = ' + e.button ); }
}
</script> </body> </html>
 
Elegie

Martin Rinehart wrote:

Hello,
I recently tried the little test page at the bottom of this message.
This is what I found out about left button presses.

None but Opera passes the mousedown to document. Is this an Opera
problem, or is Opera the only one that gets it right?

Not really; the problem lies with your test code. You assign a string to
the onmousedown property, whereas you should assign a function[1].
Opera is just nice enough to turn the string into a handler (creating a
function that uses the provided text as its body).

Replace

---
document.onmousedown = 'alert( "document.onmousedown" )';
---

by

---
document.onmousedown = function (evt) {
alert( "document.onmousedown" );
}
---
Move the onmousedown assignment (for the div) from JS to the markup
and Firefox (only) reports nothing. (No event is passed.) Firefox bug?

Without your test case it's hard to tell, but I suspect you forgot to
pass the event object when defining the attribute. Try the following
construct:

<div ... onmousedown="mouseDown(event);">


HTH,
Elegie.

[1] You can also assign an object implementing a handleEvent method, but
that's not really supported across user agents IIRC; better stick to the
function handler.
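The dispatch rule behind that footnote can be sketched outside the browser. This is a hypothetical simulation with invented names (dispatch, fakeEvent), not real DOM code, but it shows how the two listener forms differ, including what `this` refers to in each:

```javascript
// Simulates how DOM event dispatch treats the two listener forms:
// a plain function, or an object implementing handleEvent.
function dispatch(listener, event) {
    if (typeof listener === "function") {
        // 'this' is the element the listener was attached to
        listener.call(event.currentTarget, event);
    } else if (listener && typeof listener.handleEvent === "function") {
        // 'this' is the listener object itself, not the element
        listener.handleEvent(event);
    }
}

var log = [];
var fakeEvent = { type: "mousedown", currentTarget: { id: "foo" } };

dispatch(function (e) { log.push("function: " + e.type); }, fakeEvent);
dispatch({ handleEvent: function (e) { log.push("object: " + e.type); } },
         fakeEvent);

console.log(log.join(" | ")); // function: mousedown | object: mousedown
```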
 
Martin Rinehart

You are SO right. Firefox gets it right. The others are more
forgiving, which may not, in fact, be a real kindness.
 
Elegie

Elegie wrote:

Hi again,
Without your test case it's hard to tell, but I suspect you forgot to
pass the event object when defining the attribute. Try the following
construct:

To clarify this "event" thing a bit: Firefox does supply the event
argument (fortunately!). However, since it wraps the attribute content
into an anonymous handler, you have to retrieve the argument yourself,
as you would a regular local variable. What you think is the handler,
defined by yourself, is actually just part of the body of this wrapper.

Let's try to illustrate this with the following code:

---
<div id="foo" onmousedown="showHandler();">
Click here, and view the handler
</div>

<script type="text/javascript">
function showHandler (evt) {
var foo = document.getElementById("foo");
foo.innerHTML = foo.onmousedown.toString().replace(/\n/g, "<br>");
}
</script>
---

.... you'll see that the div.onmousedown handler has been created by
Firefox as:

---
function onmousedown(event) {
showHandler();
}
---
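Outside a browser, that wrapping can be mimicked with the Function constructor. This is a rough sketch of the Firefox-style behavior only (real user agents differ in detail, as the rest of the thread shows):

```javascript
// Mimic a Firefox-style wrapper: a function whose single parameter is
// named 'event' and whose body is the attribute text.
var seen = [];

// new Function evaluates its body in the global scope, so the callee
// must be global too (a top-level function in a browser page already is).
globalThis.showHandler = function (evt) { seen.push(evt); };

var wrapper = new Function("event", "showHandler();");
wrapper({ type: "mousedown" });   // the wrapper receives the event...
console.log(typeof seen[0]);      // ...but showHandler does not: "undefined"

// Naming the wrapper's parameter in the attribute text forwards it:
var forwarding = new Function("event", "showHandler(event);");
forwarding({ type: "mousedown" });
console.log(seen[1].type);        // "mousedown"
```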
 
Elegie

kangax wrote:

Hello,
Those are not exactly identical.

He he, you're going a bit fast here. I'm just printing the output of the
test case I provided, not comparing it with a similar handler created in
javascript (whose scope chain would, as you have pointed out, be
different, all the more so since we may use scope-altering mechanisms
such as closures or "with" statements when defining the javascript
handler).
Don't forget that in the former case, the scope of a function (the body
of which is created from the `onmousedown` attribute value) is augmented
with both the `document` and the `div` (the one the event handler is
attached to) elements.

This scope chain alteration has always puzzled me; it makes so little
sense, and it is not consistent across user agents! Fortunately, good
coding practices should keep the programmer from writing much code (if
any) directly into the attribute, thereby limiting the potentially
harmful name collisions.

Cheers,
Elegie.
 
Martin Rinehart

Elegie,

Thanks so much for your explanation. Using your technique in the little
tester below I get these:

FF, Opera, Safari (with minor format differences):

function onmousedown(event) {
mouseDown(event);
}

IE:

function anonymous()
{
mouseDown( event )
}

Chrome:

function onmousedown(evt) {
with (this.ownerDocument ? this.ownerDocument : {}) {
with (this.form ? this.form : {}) {
with (this) {
return (function(evt){mouseDown( event )}).call(this, evt);
}
}
}
}

IE looks like an error and Chrome looks like a more complex way to
repeat the IE error.

All of this leads me to return to my original position. The cross-
browser way to do it is to use JS to assign your function to
onmousedown, not the markup.
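That conclusion pairs naturally with a small normalization helper, factored here so it can be exercised without a DOM. A hedged sketch following the thread's findings (e.which in most browsers, e.button in IE's model); isLeftButton is an invented name:

```javascript
// Cross-browser left-button check, per the thread's findings:
// most browsers report e.which (1 = left), old IE reports e.button
// (1 = left in IE's model) on window.event.
function isLeftButton(e) {
    if (typeof e.which === "number") { return e.which === 1; }
    return e.button === 1;   // old-IE path
}

// Exercised with plain objects standing in for event objects:
console.log(isLeftButton({ which: 1 }));   // true  (Firefox-style)
console.log(isLeftButton({ button: 1 }));  // true  (IE-style)
console.log(isLeftButton({ which: 3 }));   // false (right button)
```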
------------------------------------
<html>
<body>
<div id='foo' style='height:50px; width:150px; border:1px solid
#000000; background:#f0f0ff'
onmousedown='mouseDown( event )'>mouse down here
</div>

<div id='debug' style='position: absolute; top: 200px'>

</div>

<script>
function mouseDown( e ) {
e = e || event;

f = gebi( 'foo' );
say( f.onmousedown.toString().replace(/\n/g, '<br>') );
}

function gebi( id ) { return document.getElementById( id ); }
function say( s ) { gebi( 'debug' ).innerHTML += s; }
</script>

</body> </html>
 
Elegie

Martin Rinehart wrote:

Hello Martin,
Thanks so much for your explanation.

<snip>

You're welcome. Note that this behavior is AFAIK not properly
documented, so user agents are free to do whatever they want, most of
the time bearing the heavy burden of past browser wars...
IE looks like an error and Chrome looks like a more complex way to
repeat the IE error.

<snip>

I didn't know about this Chrome thing; it's a bit disappointing to see
them stray from other major vendors' implementations when the benefit
of doing so appears nonexistent.

As for IE, no error, the event object referenced is simply window.event.
All of this leads me to return to my original position. The cross-
browser way to do it is to use JS to assign your function to
onmousedown, not the markup.

<snip>

It should indeed be the preferred way, as it is the one that is properly
specified (more or less), and it offers better design flexibility.
However, JFTR, it remains possible to execute the handler call in the
attribute, with something like the following.

---
<div onmousedown="foo(arguments && arguments[0]);">
<script type="text/javascript">
function foo (evt) {
evt = evt || window.event ;
if (evt) { /*...*/ }
}
</script>
---
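Why `arguments && arguments[0]` covers both wrapper styles can be simulated with the Function constructor. A sketch only: the wrapper shapes are idealized from the outputs posted earlier in the thread, and the globals are set up by hand:

```javascript
var received = [];

// Globals, because new Function bodies run in the global scope.
globalThis.window = { event: null };
globalThis.foo = function (evt) {
    evt = evt || globalThis.window.event;   // IE fallback
    received.push(evt.type);
};

// W3C-style wrapper: the event is the wrapper's first argument.
var w3cWrapper = new Function("event", "foo(arguments && arguments[0]);");
w3cWrapper({ type: "w3c-mousedown" });

// IE-style wrapper: no parameters; the event lives on window.event.
var ieWrapper = new Function("foo(arguments && arguments[0]);");
globalThis.window.event = { type: "ie-mousedown" };
ieWrapper();   // arguments[0] is undefined, so foo falls back to window.event

console.log(received.join(", "));   // "w3c-mousedown, ie-mousedown"
```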
 
Henry

On Jan 23, 3:16 pm, Elegie wrote:
... . However, JFTR, it should remain possible to execute the
handler call in the attribute, with something like the following.
<snip>

What is the problem with using the - event - Identifier in the
attribute? It is a bit odd that Chrome does not use 'event' as the
name of the formal parameter in the function it creates, but as long
as it also provides a global event object everything will work (and if
things were not working to this extent someone (even someone at
Google) would have noticed by now).
 
Elegie

Henry wrote:

Hello Richard,
<snip>

What is the problem with using the - event - Identifier in the
attribute? It is a bit odd that Chrome does not use 'event' as the
name of the formal parameter in the function it creates, but as long
as it also provides a global event object everything will work (and if
things were not working to this extent someone (even someone at
Google) would have noticed by now).

Ah, I thought Chrome did not provide the global event object (a faulty
assumption on my part), but since it does, there's really no problem at
all.

I've been away from javascript for a while; I'm installing the Chrome
thing now, and I have also noticed the existence of the ES3.1
specification. Are things changing for the best? :)

I hope you've been doing well for all that time!

Cheers,
Elegie.
 
Elegie

Richard Cornford wrote:

Hello,

Just like Opera and Safari (etc., etc.) before it, Chrome wants to 'work'
with as much of the Internet as it can and so is attempting to be as
'compatible' with existing browsers, including IE, as it can. To that end
it has the global - event - property so that code that was written to
only 'work' with IE will function sensibly when exposed to it.

IE will probably never abandon its own event model (or implement W3C
Event specification), so it could be a relevant investment for any user
agent to implement IE's model as well as the W3C's.

Still, implementing only the W3C's model would have seemed a fine choice
to me: the W3C event model is stronger than IE's (this is much more
debatable for the Range specification, for instance) and it is now well
known, so I don't think a browser would lose any audience by not
implementing IE's model. So far, Mozilla has been successful in this
regard.

Also, code relying only on window.event is likely to be poor with regard
to generally accepted coding practices, and is therefore likely to use
other IE-only features, which the browser may not implement, and so
stumble. The choice of implementing window.event, as part of a broader
IE-compatibility strategy, is a heavy one, and if only partly carried
out, a risky one.
It is also doing many other things to fool various types of browser
sniffing into thinking it is an 'acceptable' browser. You will recall
the many times we have seen the inference - if(window.external) - used
to branch for IE. Well Chrome has - window.external -, and so becomes
the first non-IE browser I have seen with that property. Another
illustration of how today's browser sniffing can expect to be subverted
by new browsers in the future (with the usual implications for the
future maintenance of code that employs it).

That was to be expected, and is a good illustration of why browser
sniffing has been advocated against for years now. Still, a browser
just spoofing a property, without implementing the relevant
functionality behind it, would be a foolish thing.

Of course, an innovative functionality, proposed by some vendor, is fine
to copy; this is the way progress can be accomplished.
It is good to see you back, and I hope it will be more than a short visit.

Thank you, it's nice to read you again! I'm not sure I'll stick around
for long though, as you know javascript was just a hobby to me (one I
enjoyed very much), and I have now other projects I'm dedicated to.
Still, I just happen to have some free time currently, and I really
wanted to give my greetings to old fellow regulars :)
I don't know. There are interesting additions (standardised
getters/setters, constants, etc.) and some of the more ambiguous stuff
is getting cleaned up/clarified. There is also a 'strict' mode for a
more (externally imposed) 'disciplined' approach to javascript
programming, where exceptions will be thrown when the non-strict mode
would just fail silently.

I have downloaded and started to read all sorts of ES3.1-related
documents. I still have a long way to go, but I sure feel intrigued when
I see you quoting the 'disciplined' word. IMHO, discipline should
normally relate to practices, not tools; I'll have a look at that
'strict' mode.
On the other hand, things that probably should
be done are not going to get done in this version, such as sorting out
the try-catch system so that exceptions can be selectively caught,
and/or effectively identified when caught.

The exception sorting would be a very good thing to me, it's actually
quite fundamental when we adopt a try/catch programming style.
In the end there had to be change/progress, and given that historically
it takes about half a decade for the landscape of browsers in use to
catch up with the latest script standard (recall that general try-catch
use would not have been recommended here at all prior to about mid 2005)
it seems like a good idea to get the next ECMAScript version out of the
door as soon as practical.

I can only agree with that! The effort of producing a specification
leads to the effort of reading and discussing it, and reinforces among
programmers the feeling that things are evolving. That helps everyone
stay alert.

Interestingly there have been perception changes about the direction
javascript should take. At the turn of the century the direction was
supposed to be toward being Java-like, now javascript is being accepted
as a functional programming language and planned developments point in
that direction.

I honestly rejoice about it. While a lot of good work had been put into
the ES4 specification, I was somehow a bit sad to see that the quite
unique javascript paradigm, a clever mix of object and functional
programming, remained underestimated, unexplored. Java is a good thing;
but "javascript!=java" means much more than many originally thought.
It is unlikely you will ever get the credit you deserve
for that as there does not seem to be much appreciation of how old some
of the 'new ideas' are, or where they came from.

Well, all I did was simply enjoy (and enjoy very much) discussing and
trying out patterns, not to mention bothering you as much as I could :)
Receiving credit for this would be unexpected, and inappropriate.

I hope, however, that *you* will receive the credit you're entitled to,
for being the first to describe in depth javascript closures and related
patterns (ah, these Russian dolls...).
Generally, very well, with the drawback that the demands on my time have
increased so I don't have the time to devote to participating in this
group that I once had (there have been long periods where I have not
even been able to keep up with reading the traffic, let alone
participating), and I don't have time for the amount of research and
experimentation that I once put into the subject.

That's success for you. Good luck, and take care!

Cheers,
Yep.
 
Richard Cornford

Elegie wrote:

IE will probably never abandon its own event model (or
implement W3C Event specification),

I don't think that is necessarily true. Browsers like Opera, Safari,
Chrome, etc. demonstrate that the two event models can happily co-
exist. Microsoft's problem is that they have a large number of
(presumably very profitable) corporate customers who have made their
own investment in internal web applications, many of which have been
written to be IE specific. If Microsoft change IE so that it is not
back-compatible with its previous versions they risk alienating their
commercial customers by breaking significant proportions of their
existing Intranet software and so imposing avoidable additional costs
on them (in repairing/replacing that software). As things stand the
investment in that software is one added incentive to stick with the
Windows operating system, but if that software is to be broken anyway
then the relative costs of switching to another operating system swing
in a direction Microsoft would not appreciate.

In changing IE, Microsoft can add anything new without risking much,
but they cannot significantly change what they already have. The W3C
event model is pure addition, unlike, for example, the proposed W3C
standard for position reporting in the DOM (which defines things like
- offsetLeft - properties in a way that is incompatible with what IE
traditionally does).

Microsoft seem to have become convinced that they should be adopting,
and adhering to, standards in IE more completely than they have in the
past, so the W3C event model might get in. In other areas the business
considerations probably will surface and neutralise the W3C's less
reasoned efforts.
so it could be a relevant investment for any user
agent to implement IE's model as well as the W3C's.

That has always been true. So much code has been written to be IE-only
that emulating as much of IE's object model as possible lets them
look less broken in the face of a web that is not at all tolerant of
new/unexpected browsers.
Still, implementing only the W3C's model would have been a
fine choice to me, the W3C event model is stronger than IE's
model

Mostly yes. There is the inverse possibility that some script may
infer that the browser was Firefox/Mozilla from the absence of the IE
model and then attempt to employ some of the many Firefox/Mozilla-
specific features. I don't recall ever seeing anyone do that, so it
probably isn't a big risk.
(this is much more debatable for the Range Specification,
for instance), it is now well-known, so I don't think a browser
would lose any audience not implementing the IE's model.

The browser would have to arrange that when code guessed at the type
of browser, the guess it made was that the browser was Mozilla/Firefox.
That may well mean not implementing IE-originating features, and so
sacrificing compatibility with IE-only code.
So far, Mozilla has been successful in this regard.

But even Mozilla has seen it as expedient to make concessions for
compatibility with IE-only code, with its invisible support for -
document.all - proving a portent for a now extensive exercise in
pragmatism.
Also, code relying only on window.event is likely to be
poor, in regards of generally accepted coding practices,
and therefore is likely to use other IE-only features, which
the browser may not implement and stumble upon.

Yes, but if the alternative to the path of providing partial
compatibility with IE and so supporting, say, 80% of code designed to
be IE only, is to fail when exposed to any of it then that is likely
to seem like the better option.
The choice of implementing window.event, as part of a broader
IE-compatibility strategy, is a heavy one, and if only partly
completed, a risky one.

Yes, there are a number of things that have to go with it. One example
being having callable collections, as IE only code has a strong
tendency to use those.
That was to be expected, and is a good illustration of why
browser sniffing has been advocated against since years now.
Still, a browser just spoofing a property without implementing
the relevant functionality behind it, would be a foolish thing.

That depends a bit on what uses the object/feature is put to. Numerous
objects have been employed in object inference browser sniffing in a
way that made that use more significant than their intended purpose.
For example, IceBrowser had a global - ActiveXObject - function, which
could hardly be expected to instantiate ActiveX objects in a Java
based browser running on non-Windows systems. It seems the object was
there to confound the, then fashionable, strategy of inferring that
the browser was IE 5+ from the existence of that constructor. A
situation much like that of - window.external - use.
Of course, an innovative functionality, proposed by some
vendor, is fine to copy; this is the way progress can be
accomplished.

Very true, and why we have pretty much standardised AJAX without any
ratified standard for the interfaces it uses.
Thank you, it's nice to read you again! I'm not sure I'll stick
around for long though, as you know javascript was just a hobby
to me (one I enjoyed very much), and I have now other projects
I'm dedicated to. Still, I just happen to have some free time
currently, and I really wanted to give my greetings to old
fellow regulars :)



I have downloaded and started to read all sorts of ES3.1 related
documents. I still have a long way to go, but I sure feel
intrigued when I see your quoting the 'disciplined' word. IMHO,
discipline should normally relate to practices, not tools;

It probably was not the best choice of word, but nothing better sprang
to mind at the time.
I'll have a look to that 'strict' mode.

I will have to find the time to have a really good look at it myself
(that is, go through all the algorithms and see how normal cross-
browser code will be handled by it, and what could be done instead
where it cannot).
The exception sorting would be a very good thing to me, it's
actually quite fundamental when we adopt a try/catch programming
style.

Yes, if the language is going to be expected to throw more exceptions
(at least in strict mode) then the ability to handle them becomes more
important. I wouldn't mind the former if the try-catch system were up to
the job, but as it is now there is every chance that the dangerous
habit of just suppressing errors rather than handling them becomes the
norm. Identifying exceptions that have been thrown is particularly
problematic, and if more exceptions were being thrown while trying
to examine characteristics of the first exception in order to
identify it then the catch blocks are going to get very complex.
I can only agree with that! The effort of producing a specification
leads to the effort of reading and discussing it, and reinforces among
programmers the feeling that things are evolving. That helps everyone
stay alert.



I honestly rejoice about it. While a lot of good work had been
put into the ES4 specification, I was somehow a bit sad to see
that the quite unique javascript paradigm, a clever mix of
object and functional programming, remained underestimated,
unexplored.

I haven't had much to do with Java recently, but I gather that it has
grown (or is growing) closures, so moving in the javascript
direction.
Java is a good thing; but "javascript!=java" means
much more than many originally thought.


Well, all I did was simply enjoy (and enjoy very much)
discussing and trying out patterns, not to mention bothering
you as much as I could :) Receiving credit for this would be
unexpected, and inappropriate.

Was anyone else doing any more than playing around with the
possibilities because they found that entertaining?
I hope, however, that *you* will receive the credit you're
entitled to, for being the first to describe in depth javascript
closures and related patterns (ah, these Russian dolls...).
<snip>

In the end I suspect that the thing that will make the difference is
whether my ideas about how client-side code should be designed/
structured get more widely adopted. Recently both the Prototype.js and
JQuery libraries have (more or less) recognised the inherent
superiority of feature detection and set about replacing their browser
sniffing based branching code. These are the creations of people who
have, in the past, attempted to defend UA string based browser
sniffing as the only viable strategy in client-side code authoring
(regardless of being shown that it wasn't). If they can come to see
that one of their fundamental assumptions about how things should be
done was mistaken, then perhaps they can be persuaded to, at least,
question some of their others, and so become open to the idea that
there may be advantages in structures that are not large-scale,
interdependent code libraries.
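The feature-detection style being contrasted with browser sniffing can be sketched as follows. A hedged example with plain objects standing in for elements; makeAddListener is an invented name, not code from the thread:

```javascript
// Test for the capability, not the browser: pick the attachment
// mechanism by probing the target object itself.
function makeAddListener(target) {
    if (target.addEventListener) {              // W3C model
        return function (type, fn) { target.addEventListener(type, fn, false); };
    }
    if (target.attachEvent) {                   // old-IE model
        return function (type, fn) { target.attachEvent("on" + type, fn); };
    }
    return function (type, fn) { target["on" + type] = fn; };   // fallback
}

// Exercised with a fake old-IE-style element:
var attached = [];
var ieLike = { attachEvent: function (name, fn) { attached.push(name); } };
makeAddListener(ieLike)("mousedown", function () {});
console.log(attached[0]);   // "onmousedown"
```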

Richard.
 
Elegie

Richard Cornford wrote:

Hello,

In changing IE, Microsoft can add anything new without risking much,
but they cannot significantly change what they already have.

<snip>

I completely agree, of course; changing the existing model would greatly
damage customers, because of the lost backward-compatibility with
existing applications. The cost, be it financial or image-related, would
definitely be unacceptable.

As for adding new features, the impact may not be negligible; a script,
for instance, may use the "evt=evt||window.event" test, in which case
the wrong properties could be read at a later stage if the feature
testing is improper.
Microsoft seem to have become convinced that they should be adopting,
and adhering to, standards in IE more completely than they have in the
past, so the W3C event model might get in. In other areas the business
considerations probably will surface and neutralise the W3C's less
reasoned efforts.

I would say that the cost argument applies here as well; other than the
image of respecting so-called standards, I can see no benefit for
Microsoft in implementing W3C events, only an additional cost of
development and maintenance, all the more so since IE's model could be
argued to be even more of a "standard", being apparently more used (or
perceived as more used) than the W3C's. It would only make sense to
implement the W3C event model if there were a demand for currently
unused event features (such as custom events); and if that were
happening, nothing would prevent Microsoft from extending the current
model rather than implementing the W3C's.

But even Mozilla has seen it as expedient to make concessions for
compatibility with IE-only code, with its invisible support for -
document.all - proving a portent for a now extensive exercise in
pragmatism.

Being an idealist trying to resist cynicism, I naturally have mixed
feelings about this pragmatism issue.

This starts from a set of badly designed scripts, by incompetent
programmers. (Designing for IE only cannot be defended at all; designing
with a universal approach ensures superior robustness, at no additional
cost.)

Given this situation, two solutions are available: either change/correct
the scripts, or change/evolve the browser's model. The choice of
changing the model looks like the less painful one; after all, the
scripts do not need to be updated, and the browser ends up with more
capabilities, right?

The consequences of such a choice, however, are deplorable: incompetent
programmers do not feel encouraged to learn more about browser
scripting, or simply to improve their skills, and, more importantly,
their /attitude/ does not change - and this amateurish, sometimes
arrogant attitude, which has been here for so long, is definitely the
biggest issue of all, way more important than overlapping scripting
models or wavering standards, as it delays the emergence of true
professionals (not even talking about experts). And the cost of this,
hidden in project or maintenance expenses, is simply enormous.

Implementing "document.all" was all good for Mozilla; it was not for the
community.

(sorry for the rant)
Yes, but if the alternative to the path of providing partial
compatibility with IE and so supporting, say, 80% of code designed to
be IE only, is to fail when exposed to any of it then that is likely
to seem like the better option.

Well, yes, probably :(

I haven't had much to do with Java recently, but I gather that it has
grown (or is growing) closures, so moving in the javascript
direction.

I don't know either, but *that* would be funny :)

Recently both the Prototype.js and
JQuery libraries have (more or less) recognised the inherent
superiority of feature detection and set about replacing their browser
sniffing based branching code. These are the creations of people who
have, in the past, attempted to defend UA string based browser
sniffing as the only viable strategy in client-side code authoring
(regardless of being shown that it wasn't).

Well, user agents and even proxies can spoof the UA string (I was
happily "using IE8" back in 2002); feature detection is really the
natural way of doing things... Browser detection in itself is extremely
rarely needed.
If they can come to see
that one of their fundamental assumptions about how things should be
done then perhaps they can be persuaded to, at least, question some of
their others, and so become open to the idea that there may be
advantages in structures that are not large-scale, interdependent code
libraries.

As a matter of taste, I'd rather re-use some components I have
previously written and tested than a full library (my reasons being
easier maintenance, simpler integration, better quality assessment and
fewer constraints on programming style). I would not be hostile to
library projects, though; I just "feel" that (and it's a paradox) they
lack flexibility, given browser-scripting subtleties.


Regards,
Elegie.
 
Garrett Smith

Elegie said:
As a matter of taste, I'd rather re-use some components I have
previously written and tested, rather than a full library (my reasons
being easier maintenance, simpler integration, better quality assessment
and less constraints on programming style). I would not be hostile to
library projects though, I just "feel" that (and it's a paradox) it
lacks flexibility given browser-scripting subtleties.

A problem with an all-in-one general-purpose library is that the whole
file is sent to the user.

The REP (Reuse/Release Equivalence Principle) is applicable to RIA
development, particularly because of the download issue.

Why send thousands of LOC to the end user when most of that code will
never be executed?

If there are many small, decoupled files, a program can be used to
combine only those files that are needed for the application (as a
whole) and/or the page. This reduces the amount downloaded. ANT can
create static files, or another program could create a file at request
time (<script src="/myLib?f=Event&f=Dom&f=Calendar" ...>).
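A request-time combiner of that shape might look like this sketch. The module names come from the post's example URL; the registry contents and the function name are invented:

```javascript
// Registry of small, decoupled "files" (inlined here for the sketch).
var registry = {
    Event:    "/* Event module */",
    Dom:      "/* Dom module */",
    Calendar: "/* Calendar module */"
};

// Build one script from the f=... parameters of a query string,
// e.g. "f=Event&f=Dom&f=Calendar" as in the post.
function combine(query) {
    return query.split("&")
        .filter(function (p) { return p.indexOf("f=") === 0; })
        .map(function (p) { return registry[p.slice(2)] || ""; })
        .join("\n");
}

var bundle = combine("f=Event&f=Dom");
console.log(bundle);   // only the Event and Dom sources are sent
```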

Benefits of modular design include ease of testing and refactoring. The
point of refactoring is to make development and maintenance easier.

Garrett
 
Elegie

Garrett Smith wrote:

Hi,
A problem with all-in-one general purpose library is that the whole file
is sent to the user.

The principle REP is applicable to RIA development and particularly
because of the download issue.

Why send thousands of LOC to the end user where the code will not ever
be executed?

That is a good point.

All-in-one monolithic libraries constitute a single unit of release, and
since they integrate everything, they are subject to more frequent
releases to accommodate bug corrections. This leads to more frequent and
more extensive (complex) testing by the library user, who has to
re-test everything at each release (a release he is likely to want
to adopt, as any bug correction may impact the functions he uses,
given the library's integrated nature).

On the other hand, package-based libraries, which would bundle
components together and permit custom imports, are more flexible, since
we have a set of units of release which we can adopt one by one (and the
smaller the unit of release, the smaller the number of bugs and number
of releases).

Designers of javascript libraries have necessarily been faced with this
question of library structure, and some of them have chosen the
monolithic way of building the library. I believe this was in the old
days, when feature detection was not employed or accepted. Designers
were probably thinking of browser-based libraries, or at least of a DOM
normalization layer on which the real features would be built. With
regard to current practices, this would probably be better addressed
with packaged cross-browser components, proposing encapsulated base
functionalities (such as traversing, measuring) and applications (such
as calendars, table utilities). I am not a library expert though, so
these are just light thoughts.
If there are many small, decoupled, files, a program can be used to
combine only those files that are needed for the application (as a
whole), and/or page. This reduces the amount of stuff downloaded. ANT
can create static files. Another program could create a file at request
time (<script src="/myLib?f=Event&f=Dom&f=Calendar" ...)

This indeed looks like a fine way to create a package-based library.

<snip>

Regards,
Elegie.
 
