Sudoku solver: reduction + brute force


Pavol.Severa

ago said:
Inspired by some recent readings on LinuxJournal and an ASPN recipe, I
decided to revamp my old Python hack... The new code is a combination
of two reduction methods and brute force, and it is quite a bit faster
than the ASPN program. If anyone is interested, the code is at
http://agolb.blogspot.com/2006/01/sudoku-solver-in-python.html

I suggest trying

input="""
0,0,0,0,9,6,8,0,0
0,0,1,0,0,0,0,7,0
0,2,0,0,0,0,0,0,3
0,3,0,0,0,8,0,0,6
0,0,4,0,2,0,3,0,0
6,0,0,5,0,0,0,8,0
9,0,0,0,0,0,0,5,0
0,7,0,0,0,0,1,0,0
0,0,5,9,4,0,0,0,0"""

your program seems to take too long to solve it.

I think the hard part is not to solve, but rather to create *difficult*
sudoku grids.
But to inflate my ego beyond the known universe, here is my solver
(that solves the above-mentioned grid reasonably fast). I suppose the
only difference is that it uses 3, rather than 2, rules to simplify
before starting a tree-like search.



#########
# if a copyright is needed:
# this is public domain, do with it whatever you want
# i.e. most probably nothing
#########

class DeadEnd(Exception):
    pass

class sudoku(object):
    # each cell holds the list of values still possible for it;
    # a solved cell is a list of length 1

    def __init__(self, *args):
        self.changed = True
        self.possible = []
        if len(args) != 81:
            raise ValueError, "need 81 numbers"
        for i in args:
            if i == 0:
                self.possible.append(range(1, 10))
            else:
                self.possible.append([i])

    def __getitem__(self, (x, y)):
        return self.possible[9*x + y]

    def __setitem__(self, (x, y), what):
        self.possible[9*x + y] = what

    def copy(self):
        result = sudoku(*(81*[1]))
        for i in range(9):
            for j in range(9):
                result[i, j] = list(self[i, j])
        return result

    def solved(self):
        for i in range(9):
            for j in range(9):
                if len(self[i, j]) != 1:
                    return False
        return True

    def trials(self):
        # generate guesses, starting with the cells that have the
        # fewest remaining candidates
        for i, j in ((i, j) for ln in range(2, 10)
                     for i in range(9) for j in range(9)
                     if len(self[i, j]) == ln):
            for k in self[i, j]:
                new = self.copy()
                new[i, j] = [k]
                yield new

    def clean1(self, x, y):
        # rule 1: drop candidates already fixed elsewhere in the row,
        # column or box; rule 2: if a value fits nowhere else in one of
        # those regions, it must go here
        self.changed = False
        if len(self[x, y]) == 1:
            return
        remove = set()
        for places in self.regions(x, y):
            missing = set(range(1, 10))
            for xx, yy in places:
                if xx == x and yy == y:
                    continue
                if len(self[xx, yy]) == 1:
                    remove.add(self[xx, yy][0])
                missing -= set(self[xx, yy])
            if missing:
                a = missing.pop()
                self[x, y] = [a]
                self.changed = True
        for a in remove:
            try:
                self[x, y].remove(a)
                if not self[x, y]:
                    raise DeadEnd
                self.changed = True
            except ValueError:
                pass

    def clean3(self, out1, out2):
        # rule 3: a value confined to the intersection of a box and a
        # line is removed from the rest of the line, and vice versa
        for (o1, o2) in ((out1, out2), (out2, out1)):
            remove = set(range(1, 10))
            for x, y in o1:
                remove -= set(self[x, y])
            for x, y in o2:
                for n in remove:
                    try:
                        self[x, y].remove(n)
                        if not self[x, y]:
                            raise DeadEnd
                        self.changed = True
                    except ValueError:
                        pass

    @staticmethod
    def regions(x, y):
        # the row, the column and the 3x3 box containing (x, y)
        return (((xx, y) for xx in range(9)),
                ((x, yy) for yy in range(9)),
                ((xx, yy) for xx in range(3*(x//3), 3*(x//3)+3)
                          for yy in range(3*(y//3), 3*(y//3)+3)))

    @staticmethod
    def outs():
        # for every box/line intersection: the box minus the line,
        # and the line minus the box
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    out1 = [(a+3*i, b+3*j) for a in range(3)
                            if a != k for b in range(3)]
                    out2 = [(k+3*i, n) for n in range(9) if n//3 != j]
                    yield out1, out2
                for k in range(3):
                    out1 = [(a+3*i, b+3*j) for a in range(3)
                            for b in range(3) if b != k]
                    out2 = [(n, k+3*j) for n in range(9) if n//3 != i]
                    yield out1, out2

    def clean_all(self):
        # apply the reduction rules until nothing changes any more
        while self.changed:
            self.changed = False
            for x in range(9):
                for y in range(9):
                    self.clean1(x, y)
            for out1, out2 in self.outs():
                self.clean3(out1, out2)

    def __repr__(self):
        result = ""
        for x in range(9):
            for y in range(9):
                if len(self[x, y]) == 1:
                    haf = self[x, y][0]
                else:
                    haf = self[x, y]
                result += str(haf) + ' '
            result += '\n'
        return result




from collections import deque

class liter(object):
    # round-robin over a growing collection of iterators: repeatedly take
    # the next item from each live iterator in turn, so the search tree
    # is explored in an interleaved rather than purely depth-first order

    def __init__(self, *iterables):
        self.iters = deque(iter(x) for x in iterables)

    def __iter__(self):
        while self.iters:
            it = self.iters.popleft()
            try:
                result = it.next()
            except StopIteration:
                continue
            self.iters.append(it)
            yield result

    def append(self, what):
        self.iters.append(iter(what))


def solve(me):
    # reduce each position as far as possible; if that is not enough,
    # branch on the remaining candidates via trials()
    tree = liter([me])
    for you in tree:
        try:
            you.clean_all()
        except DeadEnd:
            continue
        if you.solved():
            return you
        tree.append(you.trials())

######

input=(
0,0,0,0,9,6,8,0,0,
0,0,1,0,0,0,0,7,0,
0,2,0,0,0,0,0,0,3,
0,3,0,0,0,8,0,0,6,
0,0,4,0,2,0,3,0,0,
6,0,0,5,0,0,0,8,0,
9,0,0,0,0,0,0,5,0,
0,7,0,0,0,0,1,0,0,
0,0,5,9,4,0,0,0,0)

result=solve(sudoku(*input))
print result
 

ago

Pavol.Severa said:
But to inflate my ego beyond the known universe, here is my solver
(that solves the above-mentioned grid reasonably fast). I suppose the
only difference is that it uses 3, rather than 2, rules to simplify
before starting a tree-like search.

Thanks for the nice problem and the nice post.

The issue with my code was not due to the reduction algorithms used.

In fact we used exactly the same set of rules: my Cell.solve was
equivalent to your clean1 method and my Cell.skim was equivalent to
your clean3 method (except that my algorithm was only doing "for (o1,
o2) in ((out1,out2),)", but it did not make any difference in most
cases).

The real problem was due to an external loop inside my
solveByBruteForce which was absolutely useless. I fixed that and now
everything seems ok. It can solve the mentioned grid in about half the
time.

You can see my amended code in the link above.
 

Anton Vredegoor

ago said:
You can see my amended code in the link above.

Thanks, I will look into it sometime. At the moment I'm at a library
computer, which severely limits my Python options. Meanwhile I have
been thinking about the sudoku problem; maybe it will prompt you, me or
someone else to make some other kind of implementation that resembles
what I am thinking about now.

Imagine a sudoku representation inside a 9x9x9 cube. The values in the
cube's cells are just 1 or 0. The height of a '1' is determined by the
value in the original (flat) sudoku grid. There are 81 filled cells in
the cube, just like in a sudoku solution. If one looked at the cube
from a side, a filled cell at some depth inside the cube would always
block the line of vision, whichever column one looked at. In a way a
sudoku is a special case of a magic square, and a magic square can be
transformed to this view, and this way it can be seen as the solution
to the problem of making a cube not transparent by filling the minimum
number of cells.

Now such a cube can be mirrored in 48 ways and it would still be the
same 'kind' of solution. Also it would be possible to swap horizontal
layers at will and still have some kind of solution that is the 'same'
in some way. One could also swap vertical layers, if (and only if) one
stays inside a 3-block group of layers. On the other hand it would be
possible to swap complete 3-block groups of layers (if I'm still
making sense), and maybe there are other symmetries that would leave
the original solution somewhat intact.

Suppose one were able to generate all these possible symmetries and
only use the 'smallest' representation of the original position; this
'hash code' would consist of just letting Python sort the possible
'same' cubes and selecting the smallest. It could be used to prevent us
from computing the same cube twice, since we could check whether we
already saw something with the same hash code.

Now to the optimization part. If we find empty cells in the cube where
there are only a few positions available in the same row, column, or
depth, we can limit the search space considerably, because cutting off
leaves early is most profitable. Maybe it would even pay off to
consider complete layers and to select the fillable cells whose layers
have the smallest number of fillable positions.
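
As a concrete illustration of this "few options first" rule on the flat
grid (it is essentially what the trials() method in the solver above
already does), here is a minimal sketch; candidates is assumed to be any
function returning the values still allowed in an empty cell:

def best_cell_to_try(grid, candidates):
    # pick the empty cell with the fewest remaining possibilities,
    # so that wrong guesses are cut off as early as possible
    empties = [(x, y) for x in range(9) for y in range(9) if grid[x][y] == 0]
    return min(empties, key=lambda cell: len(candidates(grid, cell[0], cell[1])))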

Sorry, I guess I'm getting a little bit pataphorical here; expect my
Python script any day now :). It will be implemented as a sparse
matrix based on a set of triplets (3-tuples) where, for example, the
tuple (0,0,0) means that the cell with x, y and z coordinates all equal
to 0 is filled (virtually has a '1' inside); the 'unfilled' cells in
the cube (the zeros) are not represented.
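
To make the triplet representation concrete, here is a minimal sketch
(grid_to_cube and cube_to_grid are names made up here; the promised
script may of course look different):

def grid_to_cube(grid):
    # grid is a 9x9 list of lists, 0 meaning an empty cell;
    # a filled cell with value v becomes the triplet (x, y, v - 1)
    return set((x, y, grid[x][y] - 1)
               for x in range(9) for y in range(9) if grid[x][y])

def cube_to_grid(cube):
    grid = [[0] * 9 for _ in range(9)]
    for x, y, z in cube:
        grid[x][y] = z + 1
    return grid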

I wonder if I still make sense; it's hard to keep programming Python
without a computer to correct my thinking. Can someone write an
implementation to check my ideas?

Anton
 

Carl Cerecke

ago said:
Thanks for the nice problem and the nice post.

While we're on the topic of sudoku solvers we've written...

I have a simple brute-force-only (DFS) solver that is reasonably fast
considering the algorithm used. It will also find and print all
solutions, and give an estimate of the difficulty of a board. The
estimate is simply the number of nodes in the search tree. I'm guessing
that the number is an approximate measure of the difficulty a human
would have solving the problem; I've never actually solved one of
these by hand.... Once I'd written the program I sort of lost interest.

The first three sample boards included are all quite difficult, and take
some time to solve (and verify no other solutions exist!) with a
depth-first search. Your reduction-first approach makes short work of
them, though. On the other hand, my version probably didn't take as long
to write!

Here it is:
#!/usr/bin/env python
# Copyright 2005 Carl Cerecke
# Permission is granted to use, copy, modify, and distribute the code
# and/or derived works of the code.
#import psyco
#psyco.full()

import copy

def compute_edge_cells():
    # for every cell, precompute the other cells sharing its row, column or box
    global edge_ls
    edge_ls = []
    for x in range(9):
        for y in range(9):
            ls = []
            for i in range(9):
                if i != x:
                    ls.append((i, y))
                if i != y:
                    ls.append((x, i))
            xblock = x/3
            yblock = y/3
            for bx in range(3):
                for by in range(3):
                    rx = xblock*3 + bx
                    ry = yblock*3 + by
                    if rx != x and ry != y:
                        ls.append((rx, ry))
            edge_ls.append(tuple(ls))


class Board(object):

    board_count = 0
    solutions = 0

    def __init__(self, board, empties=None, init=1):
        self.board = board
        self.empties = empties
        Board.board_count += 1
        if init:
            self.empties = []
            for x in range(9):
                for y in range(9):
                    if self.board[y][x] == 0:
                        self.empties.append((x, y))
                    else:
                        if self.board[y][x] not in self.valids(x, y):
                            print "bad board", x, y

    def valids(self, x, y):
        # values not yet used in the row, column or box of (x, y)
        ls = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
        for ex, ey in edge_ls[x*9 + y]:
            ls[self.board[ey][ex]] = 0
        #return [x for x in ls if x != 0]
        return filter(None, ls)

    def __repr__(self):
        return '\n'.join([''.join(`x`) for x in self.board])

    def solve(self):
        # plain depth-first search over the empty cells
        if self.empties == []:
            print "found solution:"
            print self
            Board.solutions += 1
            return
        x, y = self.empties[0]
        for n in self.valids(x, y):
            new_board = list(self.board)
            new_board[y] = list(new_board[y])
            new_board[y][x] = n
            new_board[y] = tuple(new_board[y])
            new_board = tuple(new_board)
            board = Board(new_board, self.empties[1:], 0)
            board.solve()


compute_edge_cells()

def solve(b):
    Board.solutions = 0
    Board.board_count = 0
    b.solve()
    print b.board_count
    print "solutions:", b.solutions

a = Board((
(0,0,0,1,0,9,0,2,0),
(0,0,8,0,0,5,6,0,0),
(2,0,0,0,0,0,0,0,1),
(0,0,0,4,0,7,0,0,6),
(0,0,6,0,0,0,3,0,0),
(7,0,0,9,0,1,0,0,0),
(5,0,0,0,0,0,0,0,2),
(0,0,7,2,0,0,9,0,0),
(0,4,0,5,0,8,0,7,0)))
b = Board((
(0, 2, 0, 0, 0, 0, 0, 0, 0),
(0, 0, 0, 6, 0, 0, 0, 0, 3),
(0, 7, 4, 0, 8, 0, 0, 0, 0),
(0, 0, 0, 0, 0, 3, 0, 0, 2),
(0, 8, 0, 0, 4, 0, 0, 1, 0),
(6, 0, 0, 5, 0, 0, 0, 0, 0),
(0, 0, 0, 0, 1, 0, 7, 8, 0),
(5, 0, 0, 0, 0, 9, 0, 0, 0),
(0, 0, 0, 0, 0, 0, 0, 4, 0)))

c = Board((
(0, 0, 0, 0, 0, 9, 0, 0, 0),
(0, 0, 0, 0, 1, 4, 7, 0, 0),
(0, 0, 2, 0, 0, 0, 0, 0, 0),
(7, 0, 0, 0, 0, 0, 0, 8, 6),
(5, 0, 0, 0, 3, 0, 0, 0, 2),
(9, 4, 0, 0, 0, 0, 0, 0, 1),
(0, 0, 0, 0, 0, 0, 4, 0, 0),
(0, 0, 6, 2, 5, 0, 0, 0, 0),
(0, 0, 0, 8, 0, 0, 0, 0, 0)))

d = Board((
(0,0,0,0,9,6,8,0,0),
(0,0,1,0,0,0,0,7,0),
(0,2,0,0,0,0,0,0,3),
(0,3,0,0,0,8,0,0,6),
(0,0,4,0,2,0,3,0,0),
(6,0,0,5,0,0,0,8,0),
(9,0,0,0,0,0,0,5,0),
(0,7,0,0,0,0,1,0,0),
(0,0,5,9,4,0,0,0,0)))

solve(a)
solve(b)
solve(c)
solve(d)
 

ago

Anton,

Do you think it is possible to reduce the set of all possible solutions
to a small enough set? I personally doubt it, but IF that was the case
an efficient solver could be easily created.

In reducing the set of all solutions, for instance, you could always
swap the numbers (the 3rd axis) so that the first submatrix reads
[[1,2,3],[4,5,6],[7,8,9]]. By this we reduce the set of solutions by a
factor of 362880 (9!). You can then always move blocks, columns and
rows to fix the following elements: (1,4)=4, (4,1)=2, (9,9)=9. Further
reductions are still possible, but I do not know how far this can go
and whether the end result is a small enough set.
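
A rough sketch of that first relabelling step (normalise_labels is a
name invented here; grid is a solved 9x9 grid as a list of lists):

def normalise_labels(grid):
    # relabel the digits so that the top-left 3x3 box reads 1..9 in order
    mapping = {}
    new = 1
    for x in range(3):
        for y in range(3):
            mapping[grid[x][y]] = new
            new += 1
    return [[mapping[v] for v in row] for row in grid]

After this step two solutions that differ only by a permutation of the
digits collapse onto the same grid, which is where the factor
362880 = 9! comes from.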

Just my 2c.
 

ago

Carl Cerecke said:
Your reduction-first approach makes short work of them, though. On the
other hand, my version probably didn't take as long to write!

Well, I started from the reduction-only algorithm, so by the time I
implemented the brute-force solver I already had the code. Anyway the
full code is just over 100 lines; it could be shorter (and it was in
its first incarnation), but I strove to make it clean and more
object-oriented, following the LinuxJournal article, and to avoid index
operations (they only appear in the __init__ methods).

I like the idea of estimating difficulty... But for a subjective
measure from the point of view of the solver you probably need to give
weights to the reduction techniques required to eliminate cells, since
those are the ones used by human beings. Some puzzles might be easy to
solve by reduction but difficult to solve by brute force only. In this
context, for instance, a weight of 1 could be used every time one or
more cells are eliminated thanks to Cell.solve (the easiest approach),
a weight of 2 when Cell.skim is used to eliminate cells (more complex),
and a weight of 3 every time BruteForce needs to be invoked
(i.e. solutions must be guessed).
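
As a sketch of the weighting (the event names are invented here, and
Cell.solve/Cell.skim refer to ago's code, which is not reproduced in
this thread):

RULE_WEIGHTS = {'solve': 1,   # one or more cells fixed by the easiest rule
                'skim': 2,    # cells eliminated by the more complex rule
                'guess': 3}   # brute force had to guess a value

def difficulty(events):
    # events is a list like ['solve', 'solve', 'skim', 'guess', ...]
    return sum(RULE_WEIGHTS[e] for e in events)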

One thing that my solver lacks is the ability to recognize multiple
solutions. It will simply stop at the first admissible one, whether it
is unique or not. I am not sure what an efficient way to detect that
would be.
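
One common trick (a sketch, not ago's code) is to keep searching after
the first solution and stop as soon as a second one turns up; solutions
is assumed to be any iterator over complete solutions, e.g. a generator
version of the brute-force search:

from itertools import islice

def count_up_to_two(solutions):
    # 0 -> no solution, 1 -> unique, 2 -> more than one solution
    return len(list(islice(solutions, 2)))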
 

Anton Vredegoor

ago said:
Do you think it is possible to reduce the set of all possible solutions
to a small enough set? I personally doubt it, but IF that was the case
an efficient solver could be easily created.

No I don't think so, but it's a great idea :) . If we had some
ultimate symmetry detector we could reduce all positions to variations
of a few base types and start *generating* solutions from there
instead of checking possible mistakes.
ago said:
In reducing the set of all solutions, for instance, you could always
swap the numbers (the 3rd axis) so that the first submatrix reads
[[1,2,3],[4,5,6],[7,8,9]]. By this we reduce the set of solutions by a
factor of 362880 (9!). You can then always move blocks, columns and
rows to fix the following elements: (1,4)=4, (4,1)=2, (9,9)=9. Further
reductions are still possible, but I do not know how far this can go
and whether the end result is a small enough set.

I think one could reduce by more than just a factor of 9!. A 3-dim
cube has 48 symmetric mirror images and we could multiply 9! by this.
Then there are the horizontal slice swaps and the whole 3-slice swaps.
Anyway I wasn't thinking about a grand unified theory but more about
limiting the search space (in a depth-first tree) early.

If for example some field would allow only 2 values it would pay off to
check that field first in the search (before fields that can have say 9
values) because that would be the next best thing to having that value
as a fixed starting value.

Similarly, if we checked each subtree position only once (by using the
hash) it could save some computations, but I have no idea how effective
it would be; all this mirroring could be expensive too. On the other
hand this is done at the single-leaf level, perhaps cutting off whole
branches, so it might indeed pay off very much. Remember that some
subtrees can be identical even though the routes to get there were
different.

Here's the idea to make all the mirrors (I have the code at home and
can't reach it now, but it should be easy to code):

Say one has dimension x with values [0,1,...,8]

Now rescale this to [-4,-3,...,+4]

Then do this for all x,y and z coordinates.

Now, to generate all mirrors, make all 6 permutations of the
coordinates and all +- sign variations, and apply each combination to
every coordinate point x,y,z.

So x,y,z gives 6 permutations and doing +-x,+-y,+-z for each of these
makes for 48 (6*2**3) mirror images of each point.

for example a coordinate [-3,-2,-1] mirrored through mirror [z,-x,y]
would give coordinate point [-1,3,-2].

Do this for all points.

Repeat for each mirror.

Now convert back to [0,1,..8] coordinates and select the smallest
mirrored cube.
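
A sketch of that recipe (a reconstruction, since the original code is
not in the post); a position is taken to be a set of (x, y, z) triplets
with coordinates in 0..8:

from itertools import permutations, product

def mirrors(cube):
    # all 48 = 6 * 2**3 mirror images of a set of (x, y, z) triplets
    centred = [(x - 4, y - 4, z - 4) for x, y, z in cube]
    images = []
    for perm in permutations(range(3)):            # 6 axis orders
        for signs in product((1, -1), repeat=3):   # 8 sign patterns
            images.append(frozenset((p[perm[0]] * signs[0] + 4,
                                     p[perm[1]] * signs[1] + 4,
                                     p[perm[2]] * signs[2] + 4)
                                    for p in centred))
    return images

def canonical(cube):
    # the 'hash code': the lexicographically smallest of the 48 images
    return min(sorted(img) for img in mirrors(cube))

Keeping a set of canonical(cube) values for positions already examined
would then give the 'have we seen this subtree before' check mentioned
earlier.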

Eh, maybe math notation wouldn't be such a bad idea after all, only
then I wouldn't be able to read what I wrote here. I hope you can :)

Anton
 

ago

Do you think it is possible to reduce the set of all possible solutions
to a small enough set? I personally doubt it, but IF that was the case
an efficient solver could be easily created.

To expand on the concept, assume for argument's sake that the
universe of possible solutions can be reduced to a single grid (it is
most likely an unrealistic assumption); an efficient solver (of
linear/polynomial complexity) could then be created as follows:

1) Transform the starting puzzle grid to match the unique solution for
the available cells
2) Apply inverse transformations to the unique solution to get the
solution for the starting puzzle.

So we shift the focus from finding "the unique value of cells" to
finding "equivalent transformations", which should be an easier problem
to tackle.

Note that the same process also applies if the universe of possible
solutions can be reduced to a "small" set.

For instance, in a 4x4 grid with 2x2 submatrices it can be proven that
all possible solutions are equivalent transformations of the following
matrix:

1 2 3 4
3 4 1 2
4 1 2 3
2 3 4 1

If we now start with a given grid, what we want is to transform it so
that the available cells match the grid above. Assume for instance that
the cell (0,0)=3. The first transformation is to swap all the 3s and
1s... Take note of the transformations, apply them in reverse to the
above grid, and you get the solution.
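
A tiny sketch of the digit-swap step on the 4x4 example (BASE and
swap_digits are names invented here; matching a real puzzle to the base
grid is the part that is not shown):

BASE = ((1, 2, 3, 4),
        (3, 4, 1, 2),
        (4, 1, 2, 3),
        (2, 3, 4, 1))

def swap_digits(grid, a, b):
    # exchange every a with b and every b with a; validity is preserved
    trans = {a: b, b: a}
    return tuple(tuple(trans.get(v, v) for v in row) for row in grid)

# a puzzle with cell (0,0)=3 is matched to BASE by recording the swap
# 3<->1; applying the same swap to BASE gives a solution with (0,0)=3
print(swap_digits(BASE, 1, 3))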

According to Anton the number of possible solutions can be reduced
using 1) number swapping, 2) mirroring, 3) blocks/rows/columns
swapping. All those operations create equivalent matrices. For a 9X9
grid, this should give a reduction factor = (9!)*(48)*(6^12) minus the
number of duplicated combinations given by the methods above. I am not
sure how to calculate the number of duplicated combinations and
therefore do not know if the result is "good enough". As mentioned, I
doubt that it is a viable approach, but I find it an intriguing
approach nevertheless.
 

Anton Vredegoor

ago wrote:

[Something I mostly agree with]
According to Anton the number of possible solutions can be reduced
using 1) number swapping, 2) mirroring, 3) blocks/rows/columns
swapping. All those operations create equivalent matrices. For a 9X9
grid, this should give a reduction factor = (9!)*(48)*(6^12) minus the
number of duplicated combinations given by the methods above. I am not
sure how to calculate the number of duplicated combinations and
therefore do not know if the result is "good enough". As mentioned, I
doubt that it is a viable approach, but I find it an intriguing
approach nevertheless.

We could start hunting down net sites giving sudoku problems and claim
they are trying to sell us the same problem twice :) Or produce
counterfeit problems ourselves and get rich quick.

But I wonder about that 6^12 term. Within each 3-row block there are 6
permutations. There are 3 3-row blocks and 3 3-column blocks. Then
between blocks (swapping complete 3-row blocks) permutations also give
a factor 6.

So in my count (but I suck at math) this term should be: 6**8 (also
switching to Python exponentiation notation)
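
Just to spell the arithmetic out:

# 6 row orders inside each of the 3 row bands, 6 column orders inside
# each of the 3 column stacks, 6 orders of the bands themselves and 6
# of the stacks:
print(6**3 * 6**3 * 6 * 6)   # 1679616
print(6**8)                  # 1679616, the same number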

Anton
 
