UTF-8 without external modules on Perl 5.005


Yohan N. Leder

Hi,

I have a problem for which I've created:
- a recent thread: <http://minilien.fr/a0juc6>,
- another one in the alt.html group: <http://minilien.fr/a0jubn>,
- an online script which shows the problem:
<http://yohannl.tripod.com/cgi-bin/form2dump.pl>.

Well, that said, maybe a solution would be to use the UTF-8 charset for
my generated HTML pages... But this is a problem for me, for these
reasons:

- Some target servers are using Perl 5.00503 under FreeBSD and there's
nothing about UTF-8 encoding/decoding in the stock modules of this
release.

- On those old servers, only stock Perl modules are authorized, even in
a personal /cgi-bin directory. I'm aware it's a big constraint, but I
have no way to change that decision: we have to live with it!

- HTML forms generated by the Perl scripts must be able to handle
everything that is usually typed in English and French, including the
euro sign.

- These Perl scripts contain a configurable part where different people
(some of them not developers) will be able to change some strings
(stored as constants: "use constant NAMEOFCONSTANT => "The string people
can write, rewrite and manage by themselves as if it were a
configuration feature";"), and we can't ask them to type character
entities rather than special or accented characters when there are any
(e.g. &agrave;, etc.). So, if I chose to use UTF-8, I would at the same
time have to find a way (without external modules) to encode these
"configurable strings" before displaying them in any browser.

How can I manage UTF-8 under these conditions?
 

Peter J. Holzer

Yohan said:
I have a problem for which I've created:
- a recent thread: <http://minilien.fr/a0juc6>,
- another one in the alt.html group: <http://minilien.fr/a0jubn>,
- an online script which shows the problem:
<http://yohannl.tripod.com/cgi-bin/form2dump.pl>.

Well, that said, maybe a solution would be to use the UTF-8 charset for
my generated HTML pages... But this is a problem for me, for these
reasons:

- Some target servers are using Perl 5.00503 under FreeBSD and there's
nothing about UTF-8 encoding/decoding in the stock modules of this
release.

perl 5.005 also doesn't know about wide characters. A character is a
byte, so there is no way to have a character outside of the range
0..255. So you don't need any decoding routines because you couldn't
decode a euro sign anyway :).

So if you need to work with unicode strings in perl 5.005, the best way
is probably to work with raw UTF-8-encoded strings. That means that a
single logical character can be represented by one, two or three perl
characters, which means you have to be careful with all operations which
work on individual characters. For example, you can't do m/[$£€¥]/, but
have to use m/(\$|£|€|¥)/ instead (and you probably shouldn't type them
verbatim in the script but make variables with their UTF-8 byte sequence
to avoid garbling them when someone edits the script with a non-UTF-8
editor). You also cannot use uc and lc, as that would change the case
on each individual byte of a multi-byte character, etc. (It may work
if you have a working UTF-8 locale, but I doubt this is the case on an
old server). Most things will work just fine, but you will have to go
over your scripts with a fine-toothed comb to find the spots which
aren't multi-byte-character clean.
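
To make that concrete, here is a rough sketch (the variable names and the
sample string are invented, not anything prescribed):

  #!/usr/bin/perl -w
  use strict;

  # Keep each multi-byte character as an explicit UTF-8 byte string,
  # so a non-UTF-8 editor cannot garble it.
  my $EURO  = "\xE2\x82\xAC";   # U+20AC EURO SIGN
  my $POUND = "\xC2\xA3";       # U+00A3 POUND SIGN
  my $YEN   = "\xC2\xA5";       # U+00A5 YEN SIGN

  my $input = "Total: 42$EURO";

  # A character class would match single bytes of these sequences;
  # alternation matches each sequence as a whole.
  if ($input =~ /(\$|$POUND|$EURO|$YEN)/) {
      print "found a currency sign\n";
  }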

- On those old servers, only stock Perl modules are authorized, even in
a personal /cgi-bin directory. I'm aware it's a big constraint, but I
have no way to change that decision: we have to live with it!

This is quite a silly policy: if you can do something stupid or harmful
with a module, you can do the same thing with a script.
But I know that some sites have such a policy, and they probably won't
change it, so you're probably stuck with it.

- HTML forms generated by the Perl scripts must be able to handle
everything that is usually typed in English and French, including the
euro sign.

If you only need English and French (and won't be needing Czech next
year because your company opens a branch office in Prague) you are
probably better off using an 8-bit character set which covers those two
languages. ISO-8859-15 and Windows-1252 come immediately to mind.

- These Perl scripts contain a configurable part where different people
(some of them not developers) will be able to change some strings
(stored as constants: "use constant NAMEOFCONSTANT => "The string people
can write, rewrite and manage by themselves as if it were a
configuration feature";"), and we can't ask them to type character
entities rather than special or accented characters when there are any
(e.g. &agrave;, etc.).

Where do people edit these strings? Directly on the server? Or do they
edit the file on their Windows machine and then upload it to the server
via FTP (or whatever)?

hp
 

Alan J. Flavell

Peter J. Holzer wrote:

[ in a context where an old version of Perl has been imposed ... ]
If you only need English and French (and won't be needing Czech next
year because your company opens a branch office in Prague) you are
probably better off using an 8-bit character set which covers those
two languages. ISO-8859-15 and Windows-1252 come immediately to
mind.

Yes, this could indeed resolve the stated problem. (Windows-1252
could handle Czech also, no? - but not Polish etc). It's what the
search engines' query pages (altavista, google etc.) were doing some
years ago, before general browser support for utf-8 was adequate.
Users could select an 8-bit web page encoding appropriate to their
language, and then submit their query - the browser would submit their
input using that same encoding.

This is probably the wrong place to go into any detail on that, but if
I might mention
http://ppewww.ph.gla.ac.uk/~flavell/charset/form-i18n.html

There's absolutely nothing you can do to prevent users from typing or
copy/pasting oddball characters into the form, and browsers react in
various ways when they attempt to submit characters which cannot be
expressed in the chosen encoding. So it's necessary to design server
side scripts to be able to cope in some way when that happens - if
only to recognise the defective input and politely refuse it.
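
One cheap way to recognise such input, sketched below on the assumption
that the field value has already been URL-decoded: many browsers
substitute numeric character references (&#8364; and the like) for
characters they cannot express in the form's encoding, so their presence
in a submitted field is a useful warning sign.

  # Returns true if a decoded form value contains a numeric character
  # reference, i.e. something the browser could not express in the
  # form's 8-bit encoding.
  sub has_unrepresentable {
      my ($value) = @_;
      return $value =~ /&#(?:\d+|[xX][0-9a-fA-F]+);/;
  }

  print has_unrepresentable('prix: 100 &#8364;') ? "refuse\n" : "ok\n";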

Depending on the circumstances, it may be that file upload would be a
preferable implementation, as you hinted?

With sufficiently modern software, on the other hand, I'd strongly
recommend getting things to work with utf-8. Practically any browser
of any consequence today can deal with that (as you may deduce from
the fact that the search engine queries no longer bother the user with
the encoding options, but simply use utf-8 without further comment).

best
 

Yohan N. Leder

perl 5.005 also doesn't know about wide characters. A character is a
byte, so there is no way to have a character outside of the range
0..255. So you don't need any decoding routines because you couldn't
decode a euro sign anyway :).

Hmm, indeed, I didn't realize all the aspects of this charset
problem. In fact, my first idea was that I could do this:

1/ indicate (simply by a comment) that the strings in the code (the
configurable ones I mentioned and others written by me) have to use
characters from an iso-8859-* table only.

2/ declare a charset of utf-8 for the generated html pages and convert
everything to utf-8 before printing to the browser.

3/ take everything which comes from html forms as being utf-8 and then
convert it back to iso-8859-* immediately on receipt, to stay Perl
5.00503 compliant.

And for this I found a pure Perl module called Unicode::UTF8simple
containing to/from conversion subs I could copy/paste into my own script
(crediting the original author in the header, of course)... But as you
state: how to convert from UTF-8 to an iso-8859-* charset when the given
UTF-8 character (like the euro sign) is not representable in the target
charset?

What do you think? Is this approach definitely out, or is there a
workaround?
So if you need to work with unicode strings in perl 5.005, the best way
is probably to work with raw UTF-8-encoded strings. That means that a...

Reading your list of needed changes, I'm not really ready to head into
that nightmare. Well, maybe I could develop two versions:

- One for Perl 5.00503, with a solution not yet found: it depends
greatly on your reply about the way (if any) to use the UTF8simple
converter above, or on your iso-8859-15 solution below.

- A more evolved one for more recent interpreters. So, just a question:
how simple is it in those Perl releases? Do I just have to write
"use utf8;" and that's all? It's not clear in my mind.
This is a quite silly policy: If you can do something stupid or harmful
with a module, you can do the same thing with a script.
But I know that some sites have such a policy, and they probably won't
change it, so you're probably stuck with it.

The reason is very simple: the team of developers who do most of the
work on these servers are PHP people, and they have convinced the
management of the company to favour PHP *against* Perl. So, not silly,
just unkind to the other developers who have to use Perl one day or
another!
If you only need English and French (and won't be needing Czech next
year because your company opens a branch office in Prague) you are
probably better off using an 8-bit character set which covers those two
languages. ISO-8859-15 and Windows-1252 come immediately to mind.

Yes, we will only target English and French, and even if things could be
accessed by people from countries where these languages are not native,
they will input using these two languages (and will read in these two
languages too, of course). Well, indeed, the choice of a single-byte
charset could be something that would make me happy... if it really
works! Also, two questions:

1/ I found some people (in newsgroups and on the web) who said
iso-8859-15 was not a good choice, but I don't know exactly why. What
could be wrong with this charset?

2/ Windows-1252 seems to be chosen rather rarely: why? Because of the
"Windows" in its name?
Where do people edit these strings? Directly on the server? Or do they
edit the file on their Windows machine and then upload it to the server
via FTP (or whatever)?

Both :-( These scripts will be edited under Win32 and Unix flavors, and
will run under Win32 and Unix flavors.

Thank you for your help, Peter; things are a little less confused after
your post.
 

Yohan N. Leder

[ in a context where an old version of Perl has been imposed ... ]

Imposed, yes: that's the right word :(
Yes, this could indeed resolve the stated problem. (Windows-1252
could handle Czech also, no? - but not Polish etc). It's what the
search engines' query pages (altavista, google etc.) were doing some
years ago, before general browser support for utf-8 was adequate.
Users could select an 8-bit web page encoding appropriate to their
language, and then submit their query - the browser would submit their
input using that same encoding.

Hmm, no, it sounds like a good solution if the final audience were a
technical one... But the fact is that the final users will be commercial
people, and they will surely just ignore the list or pick the first
entry to fill it in quickly. The only signs they will not omit are the
euro, percentages and sums!
This is probably the wrong place to go into any detail on that, but if
I might mention
http://ppewww.ph.gla.ac.uk/~flavell/charset/form-i18n.html

I've bookmarked it and started to read it: not an easy subject. I'll
read it in depth just after I've solved my current problematic target:
Perl 5.00503, which is a little bit forgotten everywhere (understandably).
There's absolutely nothing you can do to prevent users from typing or
copy/pasting oddball characters into the form, and browsers react in
various ways when they attempt to submit characters which cannot be
expressed in the chosen encoding. So it's necessary to design server
side scripts to be able to cope in some way when that happens - if
only to recognise the defective input and politely refuse it.

Hey, aren't you talking about the alt.html thread where I started to
describe my problem... Well, we agree: the user will do what he wants!
And I have to refuse any malformed multipart/form-data (since these same
forms will handle file upload). I'm thinking about that :)
Depending on the circumstances, it may be that file upload would be a
preferable implementation, as you hinted?

Oh, I didn't mention file upload because of the field data, but because
users will have the possibility of uploading a file (with their
commercial results) in parallel. So, the multipart/form-data will
contain the text field data and the uploaded binary file.
With sufficiently modern software, on the other hand, I'd strongly
recommend getting things to work with utf-8. Practically any browser
of any consequence today can deal with that (as you may deduce from
the fact that the search engine queries no longer bother the user with
the encoding options, but simply use utf-8 without further comment).

I would like to, but how? Please just take a look at the reply (and the
questions) I sent to Peter.

Also, thanks to you too, Alan, for your kind replies here and in
alt.html.
 

Mumia W.

Yohan said:
[ in a context where an old version of Perl has been imposed ... ]

Imposed, yes: that's the right word :(
Yes, this could indeed resolve the stated problem. (Windows-1252
could handle Czech also, no? - but not Polish etc). It's what the
search engines' query pages (altavista, google etc.) were doing some
years ago, before general browser support for utf-8 was adequate.
Users could select an 8-bit web page encoding appropriate to their
language, and then submit their query - the browser would submit their
input using that same encoding.

Hmm, no, it sounds like a good solution if the final audience were a
technical one... But the fact is that the final users will be commercial
people, and they will surely just ignore the list or pick the first
entry to fill it in quickly. The only signs they will not omit are the
euro, percentages and sums!
[...]

I'm not sure, but I think that Alan Flavell is saying that your visitor
could choose his or her language on your web page, and you could assume
a particular character set based upon the chosen language.
 

Alan J. Flavell

Yohan said:
Hmm, no, it sounds like a good solution if the final audience were a
technical one... But the fact is that the final users will be
commercial people, and they will surely just ignore the list or pick
the first entry to fill it in quickly. The only signs they will not
omit are the euro, percentages and sums! [...]

I'm not sure, but I think that Alan Flavell is saying that your
visitor could choose his or her language on your web page,

That's probably the right practical implementation, indeed: user
selects the language, and your server-side implementation sends them
the HTML form page in an appropriate 8-bit encoding for that language.
They then submit their input, and in practice the browser will then
submit the form input using that same encoding.
and you could assume a particular character set based upon the
chosen language.

You would have some way to remind the server-side software of which
character encoding is being used, yes. Since the browser isn't likely
to send this information by itself (as I discuss on my previously
cited page), it's useful to include a hidden field in the form which
carries this information back as part of the submission. Don't rely
on "assume"-ing too much.

You do, however, have to assume that they are capable of typing or
copy/pasting the data into the page correctly. It's not unknown for
users to paste material out of some other application that's in an
incompatible encoding, and to disregard that what they're seeing in
the form input field isn't what they intended to send. In MS Windows,
the MS-DOS window has been a particular source of confusion in this
regard.

h t h
 

Yohan N. Leder

I'm not sure, but I think that Alan Flavell is saying that your visitor
could choose his or her language on your web page, and you could assume
a particular character set based upon the chosen language.
Yes, understood... But the only target languages for these scripts are
French and English, including special signs like the euro. So, at this
point, the choice to make is: 1/ going to utf-8 (but how, with Perl
5.00503, as asked in my reply to Peter) or 2/ using ISO-8859-15 or
Windows-1252, without knowing the drawbacks and advantages of each.
 

Yohan N. Leder

That's probably the right practical implementation, indeed: user
selects the language, and your server-side implementation sends them
the HTML form page in an appropriate 8-bit encoding for that language.
They then submit their input, and in practice the browser will then
submit the form input using that same encoding.

Yes, but there will be only two languages (French and English), and
maybe ISO-8859-15 or Windows-1252 would be the right choice to handle
both: what do you think about those?

Also, if users may choose different languages (charset encodings), the
fact is that what they submit will go into a common log file on the
server side, with the possibility for them to display this file in their
browser. So, if I go toward the multi-charset solution, how do I store
it in the file? Using UTF-8, we fall into the same problem as before;
using a different charset for every part of the file, with a header
indicating the charset of every part, how will every user be able to
display the parts which come from a different charset?
 

Mumia W.

Yohan said:
[...]
Also, if users may choose different languages (charset encodings), the
fact is that what they submit will go into a common log file on the
server side, with the possibility for them to display this file in their
browser

So store the language code along with the data.
 

Bart Lateur

Yohan said:
- Some target servers are using Perl 5.00503 under FreeBSD and there's
nothing about UTF-8 encoding/decoding in the stock modules of this
release.

Years ago, I wrote a UTF8 module for use with 5.005. It was largely made
obsolete by the arrival of 5.6, and even more so of 5.8, but I'm planning
on releasing it on CPAN really soon anyway. This question just comes up
far too often.

In the meantime, you can start building your own stuff based on the info
you can find on Czyborra.com, more specifically on
<http://czyborra.com/utf/#UTF-8>.
- On those old servers, only stock Perl modules are authorized, even in
a personal /cgi-bin directory. I'm aware it's a big constraint, but I
have no way to change that decision: we have to live with it!

That's baloney. I can understand a restriction on XS modules, but for
the rest... *there's no difference between a module and a script!*
They're both plain Perl source code.
 

Yohan N. Leder

So store the language code along with the data.

OK, that's what I said: "a header which would indicate the charset for
every part". But it doesn't answer: "how will every user be able to
display the parts which come from a different charset?"

These charsets may be unrepresentable in a specific browser, and you can
only specify one (and only one) charset per HTML page.

So the choice of a single fixed charset sounds easier to achieve, I
think: i.e. iso-8859-15, windows-1252 or utf-8.
 

Alan J. Flavell

Yohan said:
[...] Also, if users may choose different languages (charset
encoding), the fact is that what they submit will go in a common
log file on server side, with the possibility for them to display
this file in their browser

So store the language code along with the data.

Don't confuse language with character encoding. They are two
fundamentally different things, even though they are somewhat related.
Some languages can be quite officially written in two or even three
different writing systems ("language scripts" as MS calls them), and
anyway, (for example) Japanese (language) is still Japanese even when
transcribed into Latin characters.

Browsers cannot display different character encodings in the same web
page. If you're forced to do this, you'd need to resort to iframe or
object for each snippet of different encoding to be "embedded" as a
separate object in the page - with quite a number of implementation
bugs and general usability shortcomings.

There's surely something to be said for converting the various input
encodings into a unicode representation for storage and logging.
Quite how one achieves that with a back-level Perl is however an
exercise for the student!
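
One way the "exercise" might go without modules on a back-level perl,
sketched for ISO-8859-1 input (whose bytes map one-to-one onto the first
256 code points, so only the two-byte UTF-8 rule is needed; ISO-8859-15
additionally needs a small lookup for its eight deviations from Latin-1):

  sub latin1_to_utf8 {
      my ($s) = @_;
      # Bytes 0x80-0xFF become the corresponding two-byte UTF-8 sequence;
      # ASCII bytes pass through unchanged.
      $s =~ s{([\x80-\xFF])}
             { chr(0xC0 | (ord($1) >> 6)) . chr(0x80 | (ord($1) & 0x3F)) }ge;
      return $s;
  }

  print latin1_to_utf8("r\xE9sum\xE9"), "\n";   # UTF-8 bytes for "résumé"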
 

Yohan N. Leder

Years ago, I wrote a UTF8 module for use with 5.005. It was largely made
obsolete by the arrival of 5.6, and even more so of 5.8, but I'm planning
on releasing it on CPAN really soon anyway. This question just comes up
far too often.

Is your work close to the Unicode::UTF8simple one at
<http://search.cpan.org/~gus/Unicode-UTF8simple-1.06/> ?

As I said to Peter, I've thought for a while about using this module
(even if I have to copy/paste rather than use it externally; I agree
with you it's stupid, but no choice), but I don't really know what to do
about strings which come from the code. The objective, in this case,
would be to convert to utf-8 before outputting anything to the browser
and to convert from utf-8 everything which comes from html forms... This
to *support* UTF-8 without having to rewrite regexes and wrappers around
the built-in functions... Well, for this to work, I would need an
*internal* encoding which would be able to map anything from utf-8, and
that sounds impossible... I don't really see the way out at this time...

Surely I've forgotten something important: could you explain to me the
mechanism of using UTF8simple or [YourModule] to add UTF-8 support in a
script written to be Perl 5.00503 compatible?

Maybe I'll choose ISO-8859-15 for now and will think about UTF-8 quietly
In the meantime, you can start building your own stuff
based on the info you can find on Czyborra.com, more specifically on
<http://czyborra.com/utf/#UTF-8>.

For sure, I'm not well enough armed yet for this kind of project.
That's baloney. I can understand a restriction on XS modules, but for
the rest... *there's no difference between a module and a script!*
They're both plain Perl source code.

I know, but it's a matter of the PHP developers' influence, as I briefly
explained to Peter. Maybe one day we'll succeed in our request to
upgrade the Perl interpreter, but for now I have to finish my script for
5.00503 :(
 

Ben Morrow

Quoth "Alan J. Flavell said:
Don't confuse language with character encoding. They are two
fundamentally different things, even though they are somewhat related.
Some languages can be quite officially written in two or even three
different writing systems ("language scripts" as MS calls them), and
anyway, (for example) Japanese (language) is still Japanese even when
transcribed into Latin characters.

And, of course, every language can now be written in the same script
in at least two different encodings: the traditional 8-bit/EUC/whatever
encoding and UTF8.
Browsers cannot display different character encodings in the same web
page. If you're forced to do this, you'd need to resort to iframe or
object for each snippet of different encoding to be "embedded" as a
separate object in the page - with quite a number of implementation
bugs and general usability shortcomings.

Or put the characters in as numeric entity refs, which are always in
Unicode. Reasonably modern browsers (IME) handle this correctly.

Ben
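
A possible sketch of that approach for a pre-Unicode perl: walk the UTF-8
byte sequences and emit &#NNNN; references (two- and three-byte sequences
are enough for Latin text plus the euro):

  sub utf8_to_ncr {
      my ($s) = @_;
      # Two-byte sequences: U+0080..U+07FF
      $s =~ s{([\xC2-\xDF])([\x80-\xBF])}
             { '&#' . (((ord($1) & 0x1F) << 6) | (ord($2) & 0x3F)) . ';' }ge;
      # Three-byte sequences: U+0800..U+FFFF
      $s =~ s{([\xE0-\xEF])([\x80-\xBF])([\x80-\xBF])}
             { '&#' . (((ord($1) & 0x0F) << 12) |
                       ((ord($2) & 0x3F) << 6)  |
                        (ord($3) & 0x3F)) . ';' }ge;
      return $s;
  }

  print utf8_to_ncr("caf\xC3\xA9 \xE2\x82\xAC"), "\n";  # caf&#233; &#8364;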
 

Peter J. Holzer

Yohan said:
Hmm, indeed, I didn't realize all the aspects of this charset
problem. In fact, my first idea was that I could do this:

[use an 8-bit charset internally and convert network traffic to/from UTF-8]
And for this I found a pure Perl module called Unicode::UTF8simple
containing to/from conversion subs I could copy/paste into my own script
(crediting the original author in the header, of course)... But as you
state: how to convert from UTF-8 to an iso-8859-* charset when the given
UTF-8 character (like the euro sign) is not representable in the target
charset?

What do you think? Is this approach definitely out, or is there a
workaround?

It is definitely possible if you can treat characters outside of your
chosen charset as errors. If you accept UTF-8 input, you must expect
your users to fling any and all Unicode characters at your scripts. Even
if the language is English or French, a Turkish or Polish employee may
enter his name with an accented character not included in your chosen
character set (I'm not even talking about somebody pasting Chinese text
into a form, although I agree with Alan that that will happen, too).
You have to handle this in some way: maybe you can just issue an error
message about an invalid character. Maybe you can substitute these
characters. But you have to think about it in advance.
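
For instance, a sketch of such an advance check done at the UTF-8 level,
before any down-conversion (it deliberately ignores the eight Latin-1
positions that ISO-8859-15 dropped):

  sub fits_latin9 {
      my ($utf8) = @_;
      my $copy = $utf8;
      $copy =~ s/\xE2\x82\xAC//g;   # the euro maps to 0xA4 in -15
      # Any remaining lead byte >= 0xC4 starts a character above U+00FF,
      # which ISO-8859-15 cannot represent.
      return $copy !~ /[\xC4-\xF4]/;
  }

  print fits_latin9("d\xC3\xA9j\xC3\xA0 10 \xE2\x82\xAC") ? "ok\n" : "reject\n";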

- One for Perl 5.00503, with a solution not yet found: it depends
greatly on your reply about the way (if any) to use the UTF8simple
converter above, or on your iso-8859-15 solution below.

- A more evolved one for more recent interpreters. So, just a question:
how simple is it in those Perl releases? Do I just have to write
"use utf8;" and that's all? It's not clear in my mind.

No, "use utf8" just indicates that the source code of the script itself
is in UTF-8. To get the script to read and write UTF-8, you have to get
perl to convert from "internal string representation" (which happens to
be UTF-8 for strings with characters > 255, but forget I said that), to
UTF-8 and back. You can do that with I/O-layers and/or explicit calls to
encode and decode. (This is also true for any 8-bit-character set, but
you can often get away without it)
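
A minimal sketch of that for the "more recent interpreter" case (perl
5.8+ with the core Encode module; the sample bytes stand in for real form
input):

  use strict;
  use warnings;
  use Encode qw(decode encode);

  binmode STDOUT, ':encoding(UTF-8)';     # output layer: characters -> UTF-8 bytes

  my $bytes = "caf\xC3\xA9";              # e.g. raw bytes from a form field
  my $text  = decode('UTF-8', $bytes);    # bytes -> perl characters

  print "Content-Type: text/html; charset=UTF-8\n\n";
  printf "<p>%d characters, uc: %s</p>\n", length($text), uc($text);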

Yes, we will only target English and French, and even if things could be
accessed by people from countries where these languages are not native,
they will input using these two languages (and will read in these two
languages too, of course). Well, indeed, the choice of a single-byte
charset could be something that would make me happy... if it really
works! Also, two questions:

1/ I found some people (in newsgroups and on the web) who said
iso-8859-15 was not a good choice, but I don't know exactly why. What
could be wrong with this charset?

It is relatively young (less than ten years, I think), and not as widely
supported as iso-8859-1. Today it is widely supported by browsers and
mailers, but I don't think it is widely supported by editors on non-Unix
platforms. If you want people to edit files on their windows machines,
they probably won't be very happy with iso-8859-15.

2/ Windows-1252 seems to be chosen rather rarely: why? Because of the
"Windows" in its name?

The character set is a microsoft-specific extension of iso-8859-1.
That's reason enough for some people not to use it :). Actually, it is
used quite often, but often falsely labelled as 'iso-8859-1' or even
'us-ascii'. That's another reason why many people don't like it. I
believe it is now widely supported by browsers and mailers even on
non-windows platforms (I certainly don't have a problem with it on
Linux), but you probably won't have adequate locale support.

I can't really recommend one or the other. I prefer vendor-independent
standards and I'm a Unix guy, so I would generally prefer iso-8859-15.
OTOH, you probably have more Windows than Unix users, and the Unix users
are probably more able to work around charset issues, so windows-1252
will probably be less trouble to support.
Both :-( These scripts will be edited under Win32 and Unix flavors, and
will run under Win32 and Unix flavors.

That almost guarantees that you can't make everyone happy.

I would try to remove all "user-serviceable" parts from the script and
put them into configuration files, then forbid anyone to edit the
configuration files directly on the server and demand that they only be
uploaded through your scripts. Then you check and convert them as
necessary. Otherwise there will always be someone who gets it wrong.

hp
 

Bart Lateur

Yohan said:
Is your work close to the Unicode::UTF8simple one at
<http://search.cpan.org/~gus/Unicode-UTF8simple-1.06/> ?

Uh... I didn't know that module. I've taken a look at the source, and
yes, conceptually they look a lot alike, though mine uses bytestrings
internally, using a plain regexp to substitute them, so I expect mine to
work a bit faster for long strings. OTOH, it's limited to ASCII-compatible
single byte character sets. (No Chinese)

The data comes from external files instead of internal hashes, so data
is loaded on demand and the module source is a lot smaller, and, best of
all: if there's a character set I missed, or one is updated, and you
desperately want it, you can just add new character sets with data that
comes from unicode.org. You can even create your own if you must.

The main purpose behind mine is to convert between character sets, and
it's mainly using the UTF8 representation as a common ground. But you
can use Unicode::UTF8simple for the same purpose.
As I said to Peter, I've thought for a while about using this module
(even if I have to copy/paste rather than use it externally; I agree
with you it's stupid, but no choice), but I don't really know what to do
about strings which come from the code.

Code? I'm lost. You want to write your source code in... what?
The objective, in this case, would be to convert to utf-8 before
outputting anything to the browser and to convert from utf-8 everything
which comes from html forms...

Yes, it can do that. That's what it's for, actually.
This to *support* UTF-8 without having to rewrite regexes and wrappers
around the built-in functions... Well, for this to work, I would need an
*internal* encoding which would be able to map anything from utf-8, and
that sounds impossible... I don't really see the way out at this time...

No, that's not possible, I think. I'm not really sure what you mean.
Surely I've forgotten something important: could you explain to me the
mechanism of using UTF8simple or [YourModule] to add UTF-8 support in a
script written to be Perl 5.00503 compatible?

Well, you can convert any string in any encoding to UTF8 to send to the
browser. You can convert UTF8 coming from the browser back to whatever
encoding you like, or you can keep it as UTF8 too. Just keep track of
which is which.
Maybe I'll choose ISO-8859-15 for now and will think about UTF-8 quietly

You can convert between ISO-8859-15 and UTF8, too.
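
And if no module at all can be used, the ISO-8859-15 to UTF-8 direction
can be sketched in plain perl 5.005 code: remap the eight positions where
-15 differs from Latin-1, then apply the ordinary UTF-8 encoding rules (a
sketch only, not a drop-in replacement for either module):

  my %latin9 = (                     # the eight ISO-8859-15 differences
      0xA4 => 0x20AC, 0xA6 => 0x0160, 0xA8 => 0x0161, 0xB4 => 0x017D,
      0xB8 => 0x017E, 0xBC => 0x0152, 0xBD => 0x0153, 0xBE => 0x0178,
  );

  sub latin9_to_utf8 {
      my ($s) = @_;
      my $out = '';
      for my $byte (map { ord } split //, $s) {
          my $cp = exists $latin9{$byte} ? $latin9{$byte} : $byte;
          if    ($cp < 0x80)  { $out .= chr($cp) }
          elsif ($cp < 0x800) { $out .= chr(0xC0 |  ($cp >> 6))
                                      . chr(0x80 |  ($cp & 0x3F)) }
          else                { $out .= chr(0xE0 |  ($cp >> 12))
                                      . chr(0x80 | (($cp >> 6) & 0x3F))
                                      . chr(0x80 |  ($cp & 0x3F)) }
      }
      return $out;
  }

  print latin9_to_utf8("100 \xA4"), "\n";   # "100 €" as UTF-8 bytes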
 

Bart Lateur

Yohan said:
I know, but it's a matter of the PHP developers' influence, as I briefly
explained to Peter.

It's still extremely silly. It's like forbidding PHP programmers to use
include files.
 

Yohan N. Leder

It's still extremely silly. It's like forbidding PHP programmers to use
include files.

Sure, I believe you... I don't know anything about PHP ;) Are you saying
that PHP can *include* a Perl script?
 
