Implementing a "pull" (?) interface in Perl


Arvin Portlock

I'm writing a module to parse an XML file of records. It
will be used by a variety of different applications, e.g.,
loading into a relational database, etc. I'll be using
a SAX-based approach, ExpatXS, as the XML files can be
very large.

In the past I've written such modules by assembling a huge
data structure in memory then returning it to the calling
application as, say, a reference to an array of hashes.
This was tremendously convenient yet very very slow. Some
applications would take hours to execute. This time around
I'd like to learn something new and approach it differently.

Is there some way to design this, module plus application,
so that as a record is read the application can process it
immediately? Is this what is known as a pull-based architecture?
How does the application "know" when a new record is available?
Does it listen for something that the module emits? I'm
thinking maybe it can be done with a callback. The callback
subroutine is written in the calling application and when
the end of the record is parsed, that subroutine is called.

I'm sure this is a basic question but it's new to me. Is my
callback idea worth exploring? Are there any design patterns
people can point me to? Example programs? Articles online?

Thanks!

Arvin
 

John Bokma

Arvin Portlock said:
I'm writing a module to parse an XML file of records. It
will be used by a variety of different applications, e.g.,
loading into a relational database, etc. I'll be using
a SAX-based approach, ExpatXS, as the XML files can be
very large.

In the past I've written such modules by assembling a huge
data structure in memory then returning it to the calling
application as, say, a reference to an array of hashes.
This was tremendously convenient yet very very slow. Some
applications would take hours to execute. This time around
I'd like to learn something new and approach it differently.

Is there some way to design this, module plus application,
so that as a record is read the application can process it
immediately? Is this what is known as a pull-based architecture?
How does the application "know" when a new record is available?
Does it listen for something that the module emits? I'm
thinking maybe it can be done with a callback. The callback
subroutine is written in the calling application and when
the end of the record is parsed, that subroutine is called.

I'm sure this is a basic question but it's new to me. Is my
callback idea worth exploring? Are there any design patterns
people can point me to? Example programs? Articles online?

You might want to have a look at XML::Twig, which is good at "record"
parsing. It can make a tree per record, and you get called when the record
is in.
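
For example, a rough sketch of that style, assuming the repeating element is literally named record and the file is xmlfile.xml (both placeholders): the handler fires once per complete record, and purge throws away what has already been handled, so memory stays flat no matter how big the file is.

use XML::Twig;

my $twig = XML::Twig->new(
    twig_handlers => {
        record => sub {
            my ($t, $record) = @_;
            my $author = $record->first_child_text('author');
            ## process this record here ...
            $t->purge;    # discard the parsed chunk to keep memory flat
        },
    },
);
$twig->parsefile('xmlfile.xml');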
 

Arvin Portlock

John said:
You might want to have a look at XML::Twig, which is good at "record"
parsing. It can make a tree per record, and you get called when the record is in.

I'm familiar with XML::Twig, but I didn't know it did that.
Thanks for the tip!

Arvin
 

xhoster

Arvin Portlock said:
I'm writing a module to parse an XML file of records. It
will be used by a variety of different applications, e.g.,
loading into a relational database, etc. I'll be using
a SAX-based approach, ExpatXS, as the XML files can be
very large.

In the past I've written such modules by assembling a huge
data structure in memory then returning it to the calling
application as, say, a reference to an array of hashes.
This was tremendously convenient yet very very slow.

Something like?:

$parser->init($foo);
my $alldata = $parser->get_all();
foreach my $i (@$alldata) {
    process($i);
}


Was it slow only because you exhausted memory and were swapping?
Or was it just slow in providing feedback/progress messages?
Some
applications would take hours to execute. This time around
I'd like to learn something new and approach it differently.

Is there some way to design this, module plus application,
so that as a record is read the application can process it
immediately?

There are several ways to do this, but it is quite likely that it will not
be any faster from beginning to end than the original way. If you can have
the parsing and the processing in separate threads or processes, or if you
are exhausting memory, then "parse process parse process" could be faster,
but if both operations happen synchronously anyway and memory is not an
issue, then "parse parse parse process process process" would probably not
be much, if any, slower.
Is this what is known as a pull-based architecture?
How does the application "know" when a new record is available?

The easiest way is to have the application block until a new record is
ready. That is just what readline aka <> does:

$parser->init($foo);
while (defined(my $i = $parser->get_one())) {
    process($i);
}

Does it listen for something that the module emits? I'm
thinking maybe it can be done with a callback. The callback
subroutine is written in the calling application and when
the end of the record is parsed, that subroutine is called.


my $sub = sub { process($_[0]) };
$parser->init($foo);
$parser->put_all_through_callback($sub);

I'm sure this is a basic question but it's new to me. Is my
callback idea worth exploring?

The callback method is more flexible, but it isn't clear to me that you
need that flexibility. If not, I'd go with the simpler (or at least more
familiar) readline-like method.

Xho
 

Arvin Portlock

Something like?:

$parser->init($foo);
my $alldata = $parser->get_all();
foreach my $i (@$alldata) {
    process($i);
}

Heh, yes. That's exactly what I'm used to doing.
Was it slow only because you exhausted memory and were swapping?
Or was it just slow in providing feedback/progress messages?

For some few very large files it was swapping.
The easiest way is to have the application block until a new record is
ready. That is just what readline aka <> does:

$parser->init($foo);
while (defined(my $i = $parser->get_one())) {
    process($i);
}

I don't understand. You are describing exactly how I want the
interface to act but not saying how you are doing it, i.e.,
how exactly get_one() works. But yes, that's what I'm aiming for.
Put another way:

my $parse = FetchRecords->new($xmlfile);
while (my $record = $parse->next_record) {
    ## Process record
}

Right now I'm doing this:

my $records = getRecords($xmlfile);
foreach my $record (@$records) {
    ## Process record
}

Where getRecords() is defined in my module.
my $sub = sub { process($_[0]) };
$parser->init($foo);
$parser->put_all_through_callback($sub);

Okay, I'm thinking that the module stores up the record
information in a global variable, then when a specific
tag is read it would call something defined in main::
(not a callback in this instance, because I haven't
figured out the mechanics of that):

package FetchRecords;
my $current_record = {};
....
sub end_element {
    my ($self, $e) = @_;
    ## Triggered when </record> is encountered
    if ($e->{LocalName} eq 'record') {
        &main::processRecord($current_record);
        undef $current_record;
    }
}

So the calling application would be forced to always
implement a processRecord() subroutine. Not exactly the
way I imagine doing it with my $parse->next_record example
but it's all I can think of.
The callback method is more flexible, but it isn't clear to me that you
need that flexibility. If not, I'd go with the simpler (or at least more
familiar) readline-like method.

I don't really know what the readline-like method means.
 

xhoster

Arvin Portlock said:
I don't understand. You are describing exactly how I want the
interface to act but not saying how you are doing it, i.e.,
how exactly get_one() works. But yes, that's what I'm aiming for.

OK, now I don't understand. If you give an example of how get_all() would
work, we could discuss how to make it work like get_one() (assuming I can).
But without seeing what you are currently doing, I don't know how to help you
change it!

Xho
 

Arvin Portlock

OK, now I don't understand. If you give an example of how get_all()
would work, we could discuss how to make it work like get_one()
(assuming I can). But without seeing what you are currently doing, I
don't know how to help you change it!

Xho

Right now I'm not object oriented but I want to move in that
direction. This is a simplified example of what I'm doing.
Assuming a file containing records like this:

<record>
  <author>Baum, L. Frank</author>
  <title>The Wizard of Oz</title>
  <date>1909</date>
</record>

The module works something like this:

package FetchRecords;
use Exporter;
@ISA = qw(Exporter);
@EXPORT = qw(getRecords);

my @records;
my $current_record = {};

sub getRecords {
    my $xmlfile = shift;

    ## Parser setup and initialization here ...

    return \@records;
}

sub end_element {
    my ($self, $e) = @_;
    if ($e->{LocalName} eq 'author') {
        $current_record->{author} = $e->{data};
    } elsif ($e->{LocalName} eq 'title') {
        $current_record->{title} = $e->{data};
    } elsif ($e->{LocalName} eq 'date') {
        $current_record->{date} = $e->{data};
    } elsif ($e->{LocalName} eq 'record') {
        push @records, $current_record;
        $current_record = {};
    }
}

And the calling program works like this:

use FetchRecords;

my $records = getRecords('xmlfile.xml');

foreach my $record (@$records) {
    my $author = $record->{author};

    ## etc...
}

The parser details, and even the fact that I'm using an XML
parser are completely hidden from the calling application
(other than the fact that it feeds an XML filename into it).
The module maintains a global @records array and pushes new
records into it each time a </record> tag is encountered.
These records are built up as the individual elements within
<record> are parsed. The module starts with the parsing
of the document and does not return to the calling program
until the entire document has been parsed. I'd prefer that
it return the record back to the calling program whenever
the </record> end tag is encountered rather than saving them
all up until the end. Again, I need to hide all of the parsing
details from the calling application. All it should know is
that it is being fed record objects or structs.
 

xhoster

Arvin Portlock said:
Right now I'm not object oriented but I want to move in that
direction. This is a simplified example of what I'm doing.
Assuming a file containing records like this:

OK, now I think I understand. I had thought you wanted to write your own
parser, but you want to write an adapter module that takes an *existing*
callback-based XML parser and encapsulates it into a blocking parser,
which returns control to the calling program directly rather than through
callbacks. I thought about this a while ago but I gave up. The only ways
I could think of involved either multiple threads or multiple processes
with IPC, and even if those parts worked perfectly it still wouldn't be all
that nice.

It was easier to just get comfortable with callbacks than to do that. I'm
afraid you are in the same boat. But it shouldn't be too hard to make
that transition.
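
Just to sketch the multiple-processes idea, for what it's worth: run the
callback parser in a child and block on a pipe in the parent. This is a rough
sketch, not a polished solution; parse_with_callback() is a hypothetical
stand-in for a callback-style entry point, and Storable frames each record
across the pipe so the parent can block in a readline-ish get_one().

use strict;
use warnings;
use IO::Handle;
use Storable qw(freeze thaw);

# read exactly $n bytes from $fh, or return undef at EOF
sub read_exact {
    my ($fh, $n) = @_;
    my $buf = '';
    while (length($buf) < $n) {
        my $got = read($fh, $buf, $n - length($buf), length($buf));
        return undef unless $got;    # EOF or error
    }
    return $buf;
}

sub spawn_record_reader {
    my ($xmlfile) = @_;
    pipe(my $rd, my $wr) or die "pipe: $!";
    binmode $_ for $rd, $wr;         # Storable output is binary
    my $pid = fork();
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {                 # child: parse, streaming records out
        close $rd;
        $wr->autoflush(1);
        parse_with_callback($xmlfile, sub {
            my $frozen = freeze($_[0]);
            print {$wr} pack('N', length $frozen), $frozen;
        });
        close $wr;
        exit 0;
    }
    close $wr;                       # parent: hand back a get_one() closure
    return sub {
        my $len = read_exact($rd, 4);
        return undef unless defined $len;
        my $frozen = read_exact($rd, unpack('N', $len));
        return defined $frozen ? thaw($frozen) : undef;
    };
}

my $get_one = spawn_record_reader('xmlfile.xml');
while (defined(my $record = $get_one->())) {
    ## process $record here
}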


<record>
  <author>Baum, L. Frank</author>
  <title>The Wizard of Oz</title>
  <date>1909</date>
</record>

The module works something like this:

package FetchRecords;
use Exporter;
@ISA = qw(Exporter);
@EXPORT = qw(getRecords);

my @records;

# change to:
my $callback;
my $current_record = {};

sub getRecords {
    my $xmlfile = shift;

    ## Parser setup and initialization here ...

    return \@records;
}

# change to:
sub getRecords {
    my $xmlfile = shift;
    $callback = shift;
    ## Parser setup and initialization here ...
}

sub end_element {
    my ($self, $e) = @_;
    if ($e->{LocalName} eq 'author') {
        $current_record->{author} = $e->{data};
    } elsif ($e->{LocalName} eq 'title') {
        $current_record->{title} = $e->{data};
    } elsif ($e->{LocalName} eq 'date') {
        $current_record->{date} = $e->{data};
    } elsif ($e->{LocalName} eq 'record') {
        push @records, $current_record;

        ## change previous line to:
        $callback->($current_record);
        $current_record = {};
    }
}

And the calling program works like this:

use FetchRecords;

my $records = getRecords('xmlfile.xml');

foreach my $record (@$records) {
    my $author = $record->{author};

    ## etc...
}

## change to:
my $used_to_be_foreach = sub {
    my $record = $_[0];
    my $author = $record->{author};
    ## etc...
};

getRecords('xmlfile.xml', $used_to_be_foreach);
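
(And since you mentioned wanting to move in the OO direction: the same
callback plumbing wraps naturally in a class. A rough sketch, with
illustrative names, not a real API:)

package FetchRecords;
use strict;

sub new {
    my ($class, %args) = @_;
    my $self = bless { callback => $args{callback} }, $class;
    ## Parser setup here, with handlers closing over $self ...
    return $self;
}

sub parse {
    my ($self, $xmlfile) = @_;
    ## run the underlying SAX parse; end_element invokes
    ## $self->{callback}->($current_record) once per </record>
}

package main;

my $fetcher = FetchRecords->new(callback => sub {
    my $record = shift;
    my $author = $record->{author};
    ## etc...
});
$fetcher->parse('xmlfile.xml');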


Xho
 

Arvin Portlock

but you want to write an adapter module that takes an *existing*
callback-based XML parser and encapsulates it into a blocking parser,
which returns control to the calling program directly rather than
through callbacks.

I couldn't have said it better myself. Seriously.
##change previous line to:
$callback->($current_record);

You've just basically written my entire program for me.
All I have to do now is fill in the bits. My ideas
regarding callbacks weren't quite as elegant. Yeah, threads
occurred to me as well but no way was I getting anywhere
near that. Callbacks aren't *exactly* the way I wanted to
handle this but they fulfill the design requirements.
It should work with arbitrarily large documents, and
the parsing details are completely hidden from the calling
application.

About performance. I have written these things directly in
the parsing program before, handling events as they were
encountered, and those applications have always been much
faster than storing up the events and processing them later
for large documents. I've never tried it with a DOM parser
though, which I imagine is much more efficient at storing up
events and iterating through them than I am. I've always
just assumed an 80 MB file would bring it to its knees.

Thanks for your help!
 

robic0

I'm writing a module to parse an XML file of records. It
will be used by a variety of different applications, e.g.,
loading into a relational database, etc. I'll be using
a SAX-based approach, ExpatXS, as the XML files can be
very large.

In the past I've written such modules by assembling a huge
data structure in memory then returning it to the calling
application as, say, a reference to an array of hashes.
This was tremendously convenient yet very very slow. Some
applications would take hours to execute. This time around
I'd like to learn something new and approach it differently.

Is there some way to design this, module plus application,
so that as a record is read the application can process it
immediately? Is this what is known as a pull-based architecture?
How does the application "know" when a new record is available?
Does it listen for something that the module emits? I'm
thinking maybe it can be done with a callback. The callback
subroutine is written in the calling application and when
the end of the record is parsed, that subroutine is called.

I'm sure this is a basic question but it's new to me. Is my
callback idea worth exploring? Are there any design patterns
people can point me to? Example programs? Articles online?

Thanks!

Arvin


I guess I'm coming late to this question but will give it a shot
for you. I see some code thrown around so I'll throw some too.

First and foremost, if you're processing an extra-large XML file
you want to use a "stream" processor, where you get start-tag, end-tag,
and content notifications. The stream processor, if passed a file handle,
should do something like this:

"p" is a parsing object instantiated from your program.

$p->parse(*DATA);

-------------
here's what happens:

"module"
============
sub parse {
    my ($self, $data) = @_;
    throwX('30') if $self->{'InParse'};
    throwX('31') unless defined $data;
    $self->{'InParse'} = 1;

    # call processor
    if (ref($data) eq 'SCALAR') {
        print "SCALAR ref\n" if $self->{'debug'};
        eval { Processor($self, 1, $data); };
        if ($@) {
            Cleanup($self); die $@;
        }
    }
    elsif (ref(\$data) eq 'SCALAR') {
        print "SCALAR string\n" if $self->{'debug'};
        eval { Processor($self, 1, \$data); };
        if ($@) {
            Cleanup($self); die $@;
        }
    } else {
        if (ref($data) ne 'GLOB' && ref(\$data) ne 'GLOB') {
            $self->{'InParse'} = 0;
            die "rp_error_parse, data source not a string or filehandle nor reference to one\n";
        }
        print "GLOB ref or filehandle\n" if $self->{'debug'};
        eval { Processor($self, 0, $data); };
        if ($@) {
            Cleanup($self); die $@;
        }
    }
    $self->{'InParse'} = 0;
}

sub Processor
{
    my ($obj, $BUFFERED, $rpl_mk) = @_;
    my ($markup_file);
    my $parse_ln = '';
    my $dyna_ln = '';
    my $ref_parse_ln = \$parse_ln;
    my $ref_dyna_ln = \$dyna_ln;
    if ($BUFFERED) {
        $ref_parse_ln = $rpl_mk;
        $ref_dyna_ln = \$dyna_ln;
    } else {
        # assume it's a ref to a glob, or a glob itself
        $markup_file = $rpl_mk;
        $ref_dyna_ln = $ref_parse_ln;
    }
    my $ln_cnt = 0;
    my $complete_comment = 0;
    my $complete_cdata = 0;
    my @Tags = ();
    my $havroot = 0;
    my $last_cpos = 0;
    my $done = 0;
    my $content = '';
    my $altcontent = undef;

    $obj->{'origcontent'} = \$content;

    while (!$done)
    {
        $ln_cnt++;

        # stream processing (if not buffered)
        if (!$BUFFERED) {
            if (!($_ = <$markup_file>)) {
                # just parse what we have
                $done = 1;
                # boundary check for runaway
                if (($complete_comment + $complete_cdata) > 0) {
                    $ln_cnt--;
                }
            } else {
                $$ref_parse_ln .= $_;

                ## buffer if needing comment/cdata closure
                next if ($complete_comment && !/-->/);
                next if ($complete_cdata && !/\]\]>/);

                ## reset comment/cdata flags
                $complete_comment = 0;
                $complete_cdata = 0;

                ## flag serialized comments/cdata buffering
                if (/(<!--)|(<!\[CDATA\[)/)
                {
                    if (defined $1) { # complete comment
                        if ($$ref_parse_ln !~ /<!--.*?-->/s) {
                            $complete_comment = 1;
                            next;
                        }
                    }
                    elsif (defined $2) { # complete cdata
                        if ($$ref_parse_ln !~ /<!\[CDATA\[.*?\]\]>/s) {
                            $complete_cdata = 1;
                            next;
                        }
                    }
                }
                ## buffer until '>' or eof
                next if (!/>/);
            }
        } else {
            $ln_cnt = 1;
            $done = 1;
        }

        ## REGEX parsing loop
        while ($$ref_parse_ln =~ /$RxParse/g)
        {
            ... the core ...
            does callbacks here and a lot of other stuff; this section totals about 3000 lines
        }
    }
}
=============

This just happens to handle EOL-oriented file I/O (also RAM-based I/O) and
a buffer passed by reference (from a slurped file? .. this is 20% faster).
The passed-file-handle technique above will read past as many EOLs as
necessary to get a target processing character. The EOL is *not* even a
factor. Beyond the *target*, this technique does not buffer file data,
so a very *large* file consumes almost no RAM in the transaction.

In order to guarantee the structural integrity of your records you have
to use a schema checker ahead of time. If you don't care and there's a problem,
well..... Shucks!

In a programming sense, you will want to validate that integrity;
otherwise all that is said below is invalid.

Use a schema checker (and have a schema file; this is mandatory) before you
parse. Then when you parse, you have safeguards.
If you want detailed instructions on how to install XML::Xerces, let me know.
XML::Xerces is an auto-generated interface to Apache's Xerces.
A note about schema: schema is not a guarantee of structural integrity.
That becomes apparent if you have complex possibilities, since schema is only
general, as in level-oriented. I could give examples and solutions if requested.

When you parse, it is you who is parsing the data into records (structures), not
the parser. You should be in control of the start and end of your records.

You have control in your program over *when* the end-of-record state is valid.
That validity will be reached by *you* in the context of *your* program.

Before you initiate parsing, you should validate the file (if you can), something
like this:

your program..

use XML::Xerces;

if (!ValidateSchema($Xmlfilename, $SchemaFilename)) { die "not valid\n" }

sub ValidateSchema {
    my ($xfile, $SchemaFile) = @_;

    # Docs: http://xml.apache.org/xerces-c/apiDocs/classAbstractDOMParser.html#z869_9
    my $Xparser = XML::Xerces::XercesDOMParser->new();
    $Xparser->setValidationScheme(1);
    $Xparser->setDoNamespaces(1);
    $Xparser->setDoSchema(1);
    #$Xparser->setValidationSchemaFullChecking(1); # full constraint (if enabled, may be time-consuming)

    $Xparser->setExternalNoNamespaceSchemaLocation($SchemaFile);

    my $ERROR_HANDLER = XLoggingErrorHandler->new(\&LogX_warn, \&LogX_error, \&LogX_ferror);
    $Xparser->setErrorHandler($ERROR_HANDLER);

    eval { $Xparser->parse(XML::Xerces::LocalFileInputSource->new($xfile)); };
    return $@ ? 0 : 1;    # succeed only if the parse raised no errors
}

OK, I got an important phone call, so I'll continue this a little later.
The important stuff comes next: how to easily
pick off your records and process them in-stream. No problem whatsoever.
Brb..
 

robic0

Ok, it's gonna have to wait until tomorrow, something has come up...
 
