Maximum List size (item number) limit?


Kriston-Vizi Janos

Dear Mr. Kern, and Members,

Thank you very much for the fast answer; my original question was
over-simplified.

My source code is appended below. It uses two text files (L.txt and
GC.txt) as input and merges them. Please find these two files here:
http://kristonvizi.hu/L.txt
http://kristonvizi.hu/GC.txt

Both L.txt and GC.txt contain 3000 rows. When running, the code stops
with the error message:

'The debugged program raised the exception IndexError "list index out of
range"
File: /home/kvjanos/file.py, Line: 91'

And I noticed that all the lists that should contain 3000 items contain
fewer, as follows:
NIR_mean_l = 1000 items
NIR_stdev_l = 1000 items
R_mean_l = 1000 items
R_stdev_l = 1000 items
G_mean_l = 999 items
G_stdev_l = 999 items
area_l = 999 items

NIR_mean_gc = 1000 items
NIR_stdev_gc = 1000 items
R_mean_gc = 1000 items
R_stdev_gc = 1000 items
G_mean_gc = 999 items
G_stdev_gc = 999 items
area_gc = 999 items

This is why I thought there was a limit on the number of items in a list.
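(For reference, a Python list is not capped anywhere near 3000 items; a quick check, using an arbitrary 3,000,000-element list:)

# A list holding 3,000,000 items (the figure is arbitrary), far beyond
# the 3000 rows involved here.
big = range(3000000)
print len(big)    # -> 3000000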

Code that's failing:
#*******************************************

import string, sys, os, sets

# Open the L and GC text files and create the merged output file
inp_file_l = open('/home/kvjanos/L/L.txt')
inp_file_gc = open('/home/kvjanos/GC/GC.txt')
out_file = open('/home/kvjanos/L_GC_merged/merged.txt', 'w')

# Define L lists
NIR_mean_l = []
NIR_stdev_l = []
R_mean_l = []
R_stdev_l = []
G_mean_l = []
G_stdev_l = []
area_l = []

# Define GC lists
NIR_mean_gc = []
NIR_stdev_gc = []
R_mean_gc = []
R_stdev_gc = []
G_mean_gc = []
G_stdev_gc = []
area_gc = []


# Processing the L file
line_no_l = 0  # Input L file line number
type_l = 1     # Input L file row type: 1 (row n), 2 (row n+1) or 3 (row n+2)

# Append L values to lists.
for line in inp_file_l.xreadlines():
    line_no_l = line_no_l + 1
    if line_no_l == 1:  # To skip the header row
        continue
    data_l = line.split()  # An L row

    if type_l == 1:
        NIR_mean_l.append(data_l[2])   # Append 3rd item of the row to the list
        NIR_stdev_l.append(data_l[3])  # Append 4th item of the row to the list
        type_l = 2  # Change to row n+1
    elif type_l == 2:
        R_mean_l.append(data_l[2])
        R_stdev_l.append(data_l[3])
        type_l = 3
    else:
        G_mean_l.append(data_l[2])
        G_stdev_l.append(data_l[3])
        area_l.append(data_l[1])
        type_l = 1
inp_file_l.close()


# Processing the GC file, the same way as the L file above
line_no_gc = 0
type_gc = 1

for line in inp_file_gc.xreadlines():
    line_no_gc = line_no_gc + 1
    if line_no_gc == 1:
        continue
    data_gc = line.split()

    if type_gc == 1:
        NIR_mean_gc.append(data_gc[2])
        NIR_stdev_gc.append(data_gc[3])
        type_gc = 2
    elif type_gc == 2:
        R_mean_gc.append(data_gc[2])
        R_stdev_gc.append(data_gc[3])
        type_gc = 3
    else:
        G_mean_gc.append(data_gc[2])
        G_stdev_gc.append(data_gc[3])
        area_gc.append(data_gc[1])
        type_gc = 1
inp_file_gc.close()

#############################

# Create output rows from lists
for i in range(len(NIR_mean_l)):  # Process all input rows

    # Filter L rows by 'area_l' values
    area_l_rossz = string.atof(area_l[i])
    if area_l_rossz < 10000:
        continue
    elif area_l_rossz > 100000:
        continue

    # Filter GC rows by 'area_gc' values
    area_gc_rossz = string.atof(area_gc[i])
    if area_gc_rossz < 10000:
        continue
    elif area_gc_rossz > 200000:
        continue

    # Create the output line and write it out
    newline = []
    newline.append(str(i + 1))
    # L
    newline.append(NIR_mean_l[i])
    newline.append(NIR_stdev_l[i])
    newline.append(R_mean_l[i])
    newline.append(R_stdev_l[i])
    newline.append(G_mean_l[i])
    newline.append(G_stdev_l[i])
    newline.append(area_l[i])
    # GC
    newline.append(NIR_mean_gc[i])
    newline.append(NIR_stdev_gc[i])
    newline.append(R_mean_gc[i])
    newline.append(R_stdev_gc[i])
    newline.append(G_mean_gc[i])
    newline.append(G_stdev_gc[i])
    newline.append(area_gc[i])
    outline = string.join(newline, '\t') + '\n'
    out_file.write(outline)

out_file.close()


#*******************************************

Thanks again,
Janos
 

Juho Schultz

Kriston-Vizi Janos said:
Dear Mr. Kern, and Members,

Both L.txt and GC.txt contain 3000 rows. When running, the code stops
with the error message:

'The debugged program raised the exception IndexError "list index out of
range"
File: /home/kvjanos/file.py, Line: 91'

And I noticed that all the lists that should contain 3000 items contain
fewer, as follows:
NIR_mean_l = 1000 items
[snip: quoted code, see above]

Looking at the data files, it seems there is no header row to skip.
Skipping the 1st row seems to cause the discrepancy in vector sizes,
which leads to the IndexError. Should NIR_mean_l[0] be 203 or 25?

As the comments in your code suggest, the code adds values to
NIR_mean_l only from lines 1, 4, 7, ...
R_mean_l only from lines 2, 5, 8, ...
G_mean_l only from lines 3, 6, 9, ...
Try it with 12 lines of input data and see how the vectors are 4
elements long before filtering/writing; a sketch of this follows.
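
A minimal sketch of this (the column values below are made up), also showing what happens when the first row is skipped as if it were a header:

def distribute(lines):
    # Round-robin the rows into the three list groups, as the original code does
    groups = ([], [], [])
    for i, line in enumerate(lines):
        groups[i % 3].append(line.split()[2])
    return groups

# 12 made-up rows; columns: label, area, mean, stdev (values are arbitrary)
rows = ["r%02d %d %d %d" % (n, n * 10, 200 + n, n) for n in range(1, 13)]

nir, r, g = distribute(rows)
print len(nir), len(r), len(g)    # -> 4 4 4

# Skipping a "header" row that is not there shifts everything by one:
nir, r, g = distribute(rows[1:])
print len(nir), len(r), len(g)    # -> 4 4 3, the same pattern as 1000/1000/999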
 

bearophileHUGS

Juho Schultz said:
NIR_mean_l only from lines 1, 4, 7, ...
R_mean_l only from lines 2, 5, 8, ...
G_mean_l only from lines 3, 6, 9, ...

This can be the problem, but it can also be intended.
The following code is shorter and, I hope, cleaner; with it, maybe
Kriston-Vizi Janos can fix his problem.

class ReadData:
    def __init__(self, filename):
        self.NIR_mean = []
        self.NIR_stdev = []
        self.R_mean = []
        self.R_stdev = []
        self.G_mean = []
        self.G_stdev = []
        self.area = []

        for line in file(filename):
            row = line.split()
            self.area.append(row[1])
            self.NIR_mean.append(row[2])
            self.NIR_stdev.append(row[3])
            self.R_mean.append(row[4])
            self.R_stdev.append(row[5])
            self.G_mean.append(row[6])
            self.G_stdev.append(row[7])

# -------------------------------
L = ReadData('L.txt')
GC = ReadData('GC.txt')
out_file = file('merged.txt', 'w')

# Create output rows from lists
for i in xrange(len(L.NIR_mean)):  # Process all input rows

    # Filter L and GC rows by area values (GC allows up to 200000,
    # as in the original code)
    if (10000 <= float(L.area[i]) <= 100000) and \
       (10000 <= float(GC.area[i]) <= 200000):

        # Create the output line and write it out
        newline = [str(i + 1)]
        for obj in L, GC:
            newline.extend([obj.NIR_mean[i], obj.NIR_stdev[i],
                            obj.R_mean[i], obj.R_stdev[i],
                            obj.G_mean[i], obj.G_stdev[i],
                            obj.area[i]])
        outline = '\t'.join(newline) + '\n'
        out_file.write(outline)

out_file.close()
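
For anyone adapting this: before the merge loop it may be worth checking that the parallel lists really are the same length, so a mismatch like the 999-vs-1000 one above fails loudly instead of raising IndexError mid-loop. A minimal sketch, using the names from the code above:

# Fail loudly if the parallel lists got out of step (e.g. the 999 vs 1000
# mismatch caused by skipping a header row that is not there).
lengths = [len(x) for x in (L.NIR_mean, L.R_mean, L.G_mean, L.area,
                            GC.NIR_mean, GC.R_mean, GC.G_mean, GC.area)]
assert min(lengths) == max(lengths), "parallel list lengths differ: %s" % lengths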
 

Juho Schultz

bearophileHUGS said:
This can be the problem, but it can be right too.

I guess he is expecting 3000 elements, not 1000, as he wrote:

"And I noticed that all the lists that should contain 3000 items,
contains less as follows:
NIR_mean_l = 1000 items"
 
