Reading in variables from a setup file


caradomski

I want to have a setup file (setup.txt) so my Perl program can read in
variables to be used, like $install_dir or $output_dir, and use them
across multiple Perl programs. How do I read these variables in from
that file so they can be declared?
 

it_says_BALLS_on_your forehead

a common way that i do it is to have a plain text config file with two
columns. you can use a comma delimiter, or space chars, or whatever you
like really. pipes work well too.

the first column contains keys, the second column contains values.

read in the file, and make a hash out of it.

e.g.

# setup.txt
install_dir /usr/local/bin
output_dir /usr/local/notbin


....
then, in your script:

my %config;
my $filename = "setup.txt";
getConfigHash($filename, \%config);

....
this assumes you have a tools.pl file of some sort where you have a sub
like this:

# tools.pl
sub getConfigHash {
    my ($filename, $hash_ref) = @_;
    my @configs = getFileContents($filename);
    for (@configs) {
        my ($name, $value) = split;   # this assumes you are using spaces as a delimiter
        $hash_ref->{$name} = $value;
    }
}

sub getFileContents {
    my ($file) = @_;
    open (FILE, $file) || die "Can't open $file: $!\n";
    my $contents = do { local $/; <FILE> };
    close FILE;
    return wantarray ? (split /\n/, $contents) : $contents;
}
 

Paul Lalli

caradomski said:
I want to have a setup file (setup.txt) so my Perl program can read in
variables to be used, like $install_dir or $output_dir, and use them
across multiple Perl programs. How do I read these variables in from
that file so they can be declared?


There are a few ways of doing this. The way everyone starts off
assuming they want is to populate a file with statements like

my $file = 'input.txt';
my $dir = '/home/mritty';
#etc

This is the WRONG way to do it. These variables will only be visible
while in this file. Any other file that loads this file will not have
access to these variables.
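For example (a sketch, using a hypothetical settings.pl):

#In file: settings.pl
my $file = 'input.txt';
my $dir  = '/home/mritty';
1;   # a required file must return a true value

#In the main file:
#!/usr/bin/perl
use strict;
use warnings;
require "settings.pl";
print $dir;   # under strict this won't even compile ("Global symbol "$dir"
              # requires explicit package name"); without strict, $dir is
              # simply undefined here - the lexicals never escaped settings.pl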

The easiest/"best" way of doing this is to define a module which
exports the variables you want to have access to in the 'main' file,
and import those variables in the main file:

#In file: MyConfig.pm
package MyConfig;
use strict;
use warnings;
use base qw/Exporter/;
our @EXPORT_OK = qw/%config/;

our %config = (
    'file' => 'input.txt',
    'dir'  => '/home/mritty',
    # etc
);

1;

Here I define one hash that contains all the configuration values you
need, rather than separate scalar variables for each value. This is so
you don't need to continually modify the @EXPORT_OK array every time
you add or remove a configuration value.

Then, in the 'main' file:

#!/usr/bin/perl
use strict;
use warnings;
use MyConfig qw/%config/;

print "The full path is: $config{dir}/$config{file}\n";

__END__



For more information on this procedure, please read:

perldoc Exporter
perldoc perlmod
perldoc -f our
perldoc -f use

Hope this helps,
Paul Lalli
 

Paul Lalli

it_says_BALLS_on_your forehead said:
my %config;
my $filename = "setup.txt";
getConfigHash($filename, \%config);

sub getConfigHash {
    my ($filename, $hash_ref) = @_;
    my @configs = getFileContents($filename);
    for (@configs) {
        my ($name, $value) = split;   # this assumes you are using spaces as a delimiter
        $hash_ref->{$name} = $value;
    }
}

sub getFileContents {
    my ($file) = @_;
    open (FILE, $file) || die "Can't open $file: $!\n";
    my $contents = do { local $/; <FILE> };
    close FILE;
    return wantarray ? (split /\n/, $contents) : $contents;
}

Why would you want to read all the lines of the file in at once? This
is a very bad habit to get into. Much better is to read a file line
by line and process it line by line. If your file is large, the
slurping method is inviting out-of-memory errors. Even if it's not,
why would you want to consume so much more memory than you need to?
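
A line-by-line version of the same routine might look something like
this (a quick sketch, untested):

sub getConfigHash {
    my ($filename, $hash_ref) = @_;
    open my $fh, '<', $filename or die "Can't open $filename: $!\n";
    while (my $line = <$fh>) {                     # one line at a time - no slurping
        chomp $line;
        next unless $line =~ /\S/;                 # skip blank lines
        my ($name, $value) = split ' ', $line, 2;  # whitespace-delimited key/value
        $hash_ref->{$name} = $value;
    }
    close $fh;
}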

Paul Lalli
 

it_says_BALLS_on_your forehead

hey Paul, another way i do it is to create a simple perl script:

# config.pl

sub getConfig {
    my %hash = (
        name1 => "val1",
        name2 => "val2",
    );
    return %hash;
}

1;


....then, in the main script:

require "config.pl";

my %config = getConfig();

....
is there a drawback of doing it this way versus doing it your way? or
are they essentially the same? i know that "use" is generally better
than "require"--compile time versus run-time.

i like your way though.
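
fwiw, my understanding from the docs is that use is basically require
plus import, wrapped in a BEGIN block, i.e. roughly:

use MyConfig qw/%config/;

# ...is more or less equivalent to...

BEGIN {
    require MyConfig;
    MyConfig->import(qw/%config/);
}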
 

it_says_BALLS_on_your forehead

ha! i just looked it up in my "Perl Best Practices" book by Damian
Conway (I attended several lectures of his--VERY sharp guy), and you
are absolutely right. i need to amend my behavior to use line by line
processing.

question: is slurping good to get a CSV into a 2D array?
 

Paul Lalli

it_says_BALLS_on_your forehead said:
hey Paul, another way i do it is to create a simple perl script:

# config.pl

sub getConfig {
    my %hash = (
        name1 => "val1",
        name2 => "val2",
    );
    return %hash;
}

1;


...then, in the main script:

require "config.pl";

my %config = getConfig();

...
is there a drawback of doing it this way versus doing it your way? or
are they essentially the same? i know that "use" is generally better
than "require"--compile time versus run-time.

I can think of one disadvantage off the top of my head. In your
example, there is no package statement. Therefore, the first time this
file is "required", the code within it is loaded into the current
package. require() maintains a list of all files it has previously
loaded, and does not load them again.
Therefore, say you are working with a main file and a module, and each
of them "require"s this config.pl. The first time the file is required,
getConfig() becomes a member of whatever package required it first. The
second require is ignored. Now both the main file and the module attempt
to call getConfig(), but only one call will succeed, because getConfig()
exists in only one of the two packages.

Perhaps an illustration would be helpful:

main.pl:
#!/usr/bin/perl
use strict;
use warnings;
use Foo;

require "config.pl";
my %cfg = getConfig();

__END__

Foo.pm:
package Foo;
use strict;
use warnings;

require "config.pl";
my %cfg = getConfig();

1;

In this example, Foo.pm's require() is executed first (as it was loaded
at compile time via the use statement). Therefore, config.pl got
brought into the package Foo. Foo.pm's call to getConfig() will
therefore succeed. Now the remainder of main.pl is parsed. It too
calls require, but this call is ignored, as the file has already been
required. When main.pl attempts to call getConfig(), it will print the
error: Undefined subroutine &main::getConfig called at main.pl line 7.
That's because main::getConfig doesn't exist - Foo::getConfig does.
The main script - and any other modules that also want to use this
function - would have to know into which package the file was first
required.

The Exporter method I used allows every file and module to import the
shared subroutine or variable into its own namespace, thus avoiding
this issue entirely.
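
For completeness, here is roughly what those two files look like when
they both import the shared data from the MyConfig module in my earlier
post (a sketch):

main.pl:
#!/usr/bin/perl
use strict;
use warnings;
use Foo;
use MyConfig qw/%config/;   # main gets its own imported %config

print "main sees: $config{dir}/$config{file}\n";

__END__

Foo.pm:
package Foo;
use strict;
use warnings;
use MyConfig qw/%config/;   # Foo imports the very same variable

print "Foo sees: $config{dir}/$config{file}\n";

1;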

.... That turned out to be a longer post than I intended. Apologies.

Paul Lalli
 

Paul Lalli

it_says_BALLS_on_your forehead said:
ha! i just looked it up in my "Perl Best Practice" book by Damian
Conway (I attended several lectures of his--VERY sharp guy), and you
are absolutely right. i need to amend my behavior to use line by line
processing.

It would be appreciated if you could start quoting a small amount of
relevant context in your replies. Not everyone reads posts by threads,
and not everyone wants to read the whole thread to find out what you're
talking about in one message. Thank you.
question: is slurping good to get a CSV into a 2D array?

I'm not sure how or why slurping would help. When you slurp the file,
you generally still process all the lines of that file by looping
through your array of lines. Rather than looping through the array of
lines, it generally makes more sense to loop through repeated one-line
reads of the file. Perhaps you could give an example?

When dealing with CSV data, you probably want to consider the Text::CSV
module anyway.
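
If you do want the whole file in a two-dimensional structure, something
like this (a sketch, assuming a file named data.csv) still reads one
record at a time:

#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ binary => 1 })
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag();

open my $fh, '<', 'data.csv' or die "Can't open data.csv: $!";
my @rows;                                 # this becomes the "2D array"
while (my $row = $csv->getline($fh)) {    # one line read and parsed per iteration
    push @rows, $row;                     # each element is an array ref of fields
}
close $fh;

print "second field of first row: $rows[0][1]\n";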

The only exception to the general rule of no slurping I've found is
when you're processing lines that have related data on other lines.
That is, where a complete bit of processing cannot be done on a single
line. But even then, it's usually possible to set $/ to be something
other than undef so you can read in "records" at a time.
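
For example (a sketch), reading a file whose records are separated by
blank lines:

open my $fh, '<', 'records.txt' or die "Can't open records.txt: $!";
{
    local $/ = "";                # "paragraph mode": one blank-line-separated record per read
    while (my $record = <$fh>) {
        my @lines = split /\n/, $record;
        print scalar(@lines), " lines in this record\n";
    }
}
close $fh;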

Paul Lalli
 

Josef Moellers

caradomski said:
I want to have a setup file (setup.txt) so my Perl program can read in
variables to be used, like $install_dir or $output_dir, and use them
across multiple Perl programs. How do I read these variables in from
that file so they can be declared?

I use a file with the format
key = value
where I ignore any leading and trailing whitespace as well as whitespace
around the equals sign.
I read the file line-by-line, ignore any lines that have an initial #
(possibly preceded by white space), then extract the key and the value
and populate a hash.
Off the top of my head, untested:

my %config;
if (open(CONF, '<', $configfilename)) {
    while (<CONF>) {
        next if /^\s*#/;
        next unless /^\s*(\S.*\S)\s*=\s*(\S.*\S)\s*$/;
        $config{$1} = $2;
    }
    close CONF;
}

I know that I can't have "A = B" with this (a single-character key or
value won't match), so the two "\S.*\S" sub-patterns need to be worked on.

Josef
 
