Hi -

I am trying to read from a file and put its lines into a hash. Then I
put the hash values into an array with the sort command. The sort puts
the array into such order that I can see when duplicate lines occur and
add their numeric total fields together. I will add the combined data
as new hash entries and then remove the original lines that were
duplicates.

I don't seem to be putting any values in the hash - before you go off
on me, this code is very similar to code that is working. The twist is
that I have to identify the duplicate data, create a new entry for it
(renaming one of the elements so it is distinguishable from the
duplicates), and remove all the duplicate lines.

Thanks a bunch for any help -

-T-
open(my $in, '<', $ARGV[0]) or die "ERROR: cannot open $ARGV[0]: $!";
# load up vmi hash to sort and combine duplicate records
while (<$in>)
{
    chomp;
    $a_a = substr($_, 0, 4);
    $b_b = substr($_, 12, 11);
    $c_c = substr($_, 24, 2);
    $d_d = substr($_, 54, 4);
    $e_e = substr($_, 59, 2);
    $f_f = substr($_, 62, 2);
    $g_g = substr($_, 46, 7);
    # append this record to the entry for its key, "^"-separated
    $forcombo{"$b_b$d_d$e_e$f_f"} .= "^" . "$a_a$b_b$c_c$g_g$d_d$e_e$f_f";
}
close $in;
# how to delete from a hash ==> delete($HASH{$KEY});
# grep drops the empty element left by the leading "^"
@keys = grep { length } split(/\^/, $forcombo{"$b_b$d_d$e_e$f_f"});
foreach $key (sort @keys)
{
    #printf nodupes_file "$key\n";
    $lv_b_b = substr($key, 4, 11);
    $lv_a_a = substr($key, 0, 4);
    $lv_c_c = substr($key, 15, 2);
    $lv_d_d = substr($key, 24, 4);
    $lv_e_e = substr($key, 28, 2);
    $lv_f_f = substr($key, 30, 2);
    $lv_g_g = substr($key, 17, 7);
    $lv_g_g =~ s/ //g;
    # - - - more stuff
}
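For what it's worth, the combine-and-sum step the description calls for can be sketched more directly: keep one hash entry per key and accumulate the numeric total as each line is read, so there are no duplicate entries to delete afterwards. The 8-character key and 4-character total layout below is made up for the demo; a real run would use the substr offsets from the post.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One hash entry per key; a duplicate key just adds to the running total.
my %total;
while (my $line = <DATA>) {
    chomp $line;
    my $key    = substr($line, 0, 8);   # assumed key columns
    my $amount = substr($line, 8, 4);   # assumed numeric total columns
    $amount =~ s/ //g;                  # strip blank padding before adding
    $total{$key} += $amount;
}
for my $key (sort keys %total) {
    printf "%-8s %4d\n", $key, $total{$key};
}

__DATA__
AAAA0101  10
BBBB0202   5
AAAA0101   7
```

Here the two AAAA0101 lines collapse into a single entry with total 17, which is the "combined data as a new hash entry" without any second pass.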