BIG successes of Lisp (was ...)


Alex Martelli

Isaac said:
Alex> Chess playing machines such as Deep Blue, bridge playing programs
Alex> such as GIB and Jack ..., dictation-taking programs such as those
Alex> made by IBM and Dragon Systems in the '80s (I don't know if the
Alex> technology has changed drastically since I left the field then,
Alex> though I doubt it), are based on brute-force techniques, and their
Alex> excellent performance comes strictly from processing power.

Nearly no program would rely only on non-brute-force techniques.

Yes, the temptation to be clever and cut corners IS always there...;-).

On the other hand, all the machines that you have named use some
non-brute-force techniques to improve performance. How you can say that
they are using "only" brute-force techniques is something I don't quite
understand.

In the case of Deep Blue, just hearsay. In the case of the IBM speech
recognition (dictation-taking) system, I was in the team and knew the
code quite well; that Dragon was doing essentially the same (on those
of their systems that _worked_, that is:) was the opinion of people
who had worked in both teams, had friends on the competitors' teams,
etc (a lot of unanimous opinions overall). GIB's approach has been
described by Ginsberg, its author, in detail; Jack's hasn't, but the
behavior of the two programs, seen as players, is so similar that it
seems doubtful to me that their implementation techniques may be
drastically different.

Maybe we're having some terminology issue...? For example, I consider
statistical techniques "brute force"; it's not meant as pejorative --
they're techniques that WORK, as long as you can feed enough good data
to the system for the statistics to bite. A non-brute-force model of
natural language might try to "understand" some part of the real world
that an utterance is about -- build a semantic model, that is -- and
use the model to draw some part of the hypotheses for further prediction
or processing; a purely statistical model just sees sequences of
symbols (words) in (e.g.) a Hidden Markov Model from which it takes
all predictions -- no understanding, no semantic modeling. A non-bf
bridge playing program might have concepts (abstractions) such as "a
finesse", "a squeeze", "drawing trumps", "cross-ruffing", etc, and
match those general patterns to the specific distribution of cards to
guide play; GIB just has a deterministic double-dummy engine and guides
play by Monte Carlo samples of possible distributions for the two
unseen hands -- no concepts, no abstractions.
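To make that contrast concrete, here is a minimal Python sketch of the
"brute-force" style just described -- not GIB's actual code, just an
illustration under assumed names: deal the unseen cards at random, score
each candidate move with a deterministic solver, and pick the best
average. The deck representation and double_dummy_score are hypothetical
stand-ins so the sketch runs.

import random
from collections import defaultdict

FULL_DECK = list(range(52))   # toy stand-in: cards as the integers 0..51

def double_dummy_score(move, west, east):
    # Hypothetical stand-in for a deterministic double-dummy solver: given
    # a candidate move and one fully specified layout of the unseen cards,
    # return a score.  A dummy formula here, just so the sketch runs.
    return (move + sum(west)) % 13

def sample_unseen_layouts(known_cards, num_samples):
    # Deal the unseen cards at random into the two hidden hands, i.e.
    # layouts consistent with everything observed so far.
    unseen = [c for c in FULL_DECK if c not in known_cards]
    for _ in range(num_samples):
        cards = unseen[:]
        random.shuffle(cards)
        half = len(cards) // 2
        yield cards[:half], cards[half:]

def choose_card(legal_moves, known_cards, num_samples=200):
    # No bridge concepts anywhere: average the solver's verdict on each
    # candidate move over many sampled layouts and take the best.
    totals = defaultdict(float)
    for west, east in sample_unseen_layouts(known_cards, num_samples):
        for move in legal_moves:
            totals[move] += double_dummy_score(move, west, east)
    return max(legal_moves, key=lambda m: totals[m] / num_samples)

print(choose_card(legal_moves=[3, 7, 11], known_cards=set(range(26))))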
But even then, I can't see why this has anything to do with whether the
machines are intelligent or not. We cannot judge whether a machine is
intelligent or not by just looking at the method used to solve it.

That's not how I read the AAAI's page's attempt to describe AI.

A computer is best at number crunching, and it is simply natural for any
program to put a lot more weight than most human beings on number
crunching. You can't say a machine is unintelligent just because much of
its power comes from there. Of course, you might say that the problem
does not require a lot of intelligence.

Again, that's not AAAI's claim, as I read their page. If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it -- so, AI is not about what
problem is being solved, but (according to the AAAI as I read them)
also involves considerations about the approach.

Whether a system is intelligent must be determined by the result. When
you feed the Deep Blue computer a chess configuration in which any
average chess player would make a move that guarantees checkmate, but
Deep Blue gives you a move that leads to stalemate, you know that it is
not very intelligent (it did happen).

That may be your definition. It seems to me that the definition given
by a body comprising a vast number of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.


Alex
 

Marc Battyani

Paul Rubin said:
I missed the earlier messages in this thread but Latex wasn't written
in Lisp. There were some half-baked attempts to lispify TeX, but
afaik none went anywhere.

I'm currently working on a Common Lisp typesetting system based on cl-pdf.
One example of what it can already do is here:
http://www.fractalconcept.com/ex.pdf

For now I'm working on the layout and low level stuff. But it's a little bit
soon to think that it will go nowhere, as I hope a lot of people will add TeX
compatibility packages for it ;-)

BTW note that I'm not rewriting TeX. It's another design with other
priorities.

Marc
 

Raymond Wiker

John Thingstad said:
You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)

I'm pretty sure that this is absolutely incorrect.

--
Raymond Wiker Mail: (e-mail address removed)
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
 

Francis Avila

Christian Lynbech said:
It is still a question of heated debate what actually killed the lisp
machine industry.

I have so far not seen anybody disputing that they were a marvel of
technical excellence, sporting stuff like colour displays, graphical
user interfaces and laser printers way ahead of anybody else.

I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endeavor) than
functional programming. As such, lisp machines were an oddity and too
different for many to bother, and it was easy for them to come up with
excuses not to bother (so that the 'barrier of interest,' so to speak, was
higher.) Lisp, the language family (or whatever you want to call it), still
has this stigma: lambda calculus is not a natural way of thinking.

This isn't to make a value judgment, but I think it's an important thing
that the whole "functional/declarative v. procedural/OO" debate overlooks.
The same reason why programmers call lisp "mind-expanding" and "the latin of
programming languages" is the very same reason why they are reluctant to
learn it--it's different, and for many also hard to get used to. Likewise,
Americans seem to have some repulsive hatred of learning latin--for people
who are used to english, it's just plain different and harder, even if it's
better. (Ok, that last bit was a value judgement. :)

Python doesn't try (too) hard to change the ordinary manner of thinking,
just to be as transparent as possible. I guess in that sense it encourages a
degree of mental sloth, but the objective is executable pseudocode. Lisp
counters that thinking the lisp way may be harder, but the power it grants
is out of all proportion to the relatively meager investment of mental
energy required--naturally, it's hard to convince someone of that if they
don't actually _use_ it first, and in the end some will probably still think
it isn't worth the trouble. It will take very significant and deep cultural
and intellectual changes before lisp is ever an overwhelmingly dominant
language paradigm. That is, when it becomes more natural to think of
cake-making as

UD: things
Gxyz: x is baked at y degrees for z minutes.
Hx: x is a cake.
Ix: x is batter.

For all x, ( (Ix & Gx(350)(45)) > Hx )

(i.e. "Everything that's a batter and put into a 350 degree oven for 45
minutes is a cake")

....instead of...

1. Heat the oven to 350 degrees.
2. Place batter in oven.
3. Bake 45 minutes
4. Remove cake from oven.

(i.e. "To make a cake, bake batter in a 350 degree oven for 45 minutes")

....then lisp will take over the universe. Never mind that the symbolic
logic version has more precision.
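For what it's worth, here is a toy Python rendering of the two styles
above (all names are hypothetical, chosen only for this illustration):
the declarative version states the rule once and merely checks it, while
the imperative version walks through the steps and mutates state.

# Declarative: "everything that is batter and is baked at 350 degrees
# for 45 minutes is a cake" -- stated once, then simply checked.
def is_cake(thing):
    return (thing.get("is_batter", False)
            and thing.get("baked_at_degrees") == 350
            and thing.get("baked_for_minutes") == 45)

# Imperative: spell out the steps in order.
def make_cake(batter):
    oven = {"temperature": None, "contents": None}
    oven["temperature"] = 350              # 1. Heat the oven to 350 degrees.
    oven["contents"] = batter              # 2. Place batter in oven.
    batter["baked_at_degrees"] = 350       # 3. Bake 45 minutes.
    batter["baked_for_minutes"] = 45
    oven["contents"] = None                # 4. Remove cake from oven.
    return batter

batter = {"is_batter": True}
print(is_cake(make_cake(batter)))          # True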
 

Andreas Hinze

John said:
You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)

That's new. But it may explain Peter Norvig's role there.
From where do you know that?

Best
AHz
 

John J. Lee

Alex Martelli said:
That may be your definition. It seems to me that the definition given
by a body comprising a vast numbers of experts in the field, such as
the AAAI, might be considered to carry more authority than yours.

Argument from authority has rarely been less useful or interesting
than in the case of AI, where both our abilities and understanding are
so poor, and where the embarrassing mystery of consciousness lurks
nearby...


John
 

Terry Reedy

Francis Avila said:
I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to.

My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...

If these impressions are wrong, then either the publicity effort was
inadequate or the computer press misleading. I also never heard of
any 'killer app' like Visicalc was for the Apple II. Even if there had
been, I presume that it could have been ported to other workstations
and to PCs -- or imitated, just as spreadsheets were (which removed
Apple's temporary selling point).

Terry J. Reedy
 

Wade Humeniuk

Terry said:
My contemporaneous impression, correct or not, as formed from
miscellaneous mentions in the computer press and computer shows, was
that they were expensive, slow, and limited -- limited in the sense of
being specialized to running Lisp, rather than any language I might
want to use. I can understand that a dedicated Lisper would not
consider Lisp-only to be a real limitation, but for the rest of us...

Well it's not true. Symbolics for one supported additional languages,
and I am sure others have pointed out that there are C compilers for
the Lisp Machines.

See

http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics-tech-summary.html

Section: Other Languages

It says that Prolog, Fortran and Pascal were available.

Wade
 

Kaz Kylheku

b. AFAIK, Yahoo Store was eventually rewritten in a non-Lisp.
Why? I'd tell you, but then I'd have to kill you :)

From a new footnote in Paul Graham's ``Beating The Averages'' article:

In January 2003, Yahoo released a new version of the editor written
in C++ and Perl. It's hard to say whether the program is no longer
written in Lisp, though, because to translate this program into C++
they literally had to write a Lisp interpreter: the source files of
all the page-generating templates are still, as far as I know, Lisp
code. (See Greenspun's Tenth Rule.)
 

Rainer Joswig

Wade Humeniuk said:
Well it's not true. Symbolics for one supported additional languages,
and I am sure others have pointed out that there are C compilers for
the Lisp Machines.

See

http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics-tech-summary.html

Section: Other Languages

It says that Prolog, Fortran and Pascal were available.

Wade

Ada also.

Actually, using an incremental C compiler and running C on type- and
bounds-checking hardware - like on the Lisp Machine - is not such a bad idea.
A whole set of problems disappears.
 

Alex Martelli

John said:
Argument from authority has rarely been less useful or interesting
than in the case of AI, where both our abilities and understanding are
so poor, and where the embarrassing mystery of conciousness lurks
nearby...

It's a problem of terminology, no more, no less. We are dissenting on
what it _means_ to say that a system is about AI, or not. This is no
different from discussing what it means to say that a pasta sauce is
carbonara, or not. Arguments from authority and popular usage are
perfectly valid, and indeed among the most valid, when discussing what given
phrases or words "MEAN" -- not "mean to you" or "mean to Joe Blow"
(which I could care less about), but "mean in reasonable discourse".

And until previously-incorrect popular usage has totally blown away by
sheer force of numbers a previously-correct, eventually-pedantic
meaning (as, alas, inevitably happens sometimes, given the nature of
human language), I think it's indeed worth striving a bit to preserve the
"erudite" (correct by authority and history of usage) meaning against
the onslaught of the "popular" (correct by sheer force of numbers) one.
Sometimes (e.g., the jury's still out on "hacker") the "originally correct"
meaning may be preserved or recovered.

One of the best ways to define what term X "means", when available, is
to see what X means _to people who self-identify as [...] X_ (where the
[...] may be changed to "being", or "practising", etc, depending on X's
grammatical and pragmatical role). For example, I think that when X
is "Christian", "Muslim", "Buddhist", "Atheist", etc, it is best to
ascertain the meaning of X with reference to the meaning ascribed to X by
people who practice the respective doctrines: in my view of the world, it
better defines e.g. the term "Buddhist" to see how it's used, received,
meant, by people who _identify as BEING Buddhist, as PRACTISING
Buddhism_, rather than to hear the opinions of others who don't. And,
here, too, Authority counts: the opinion of the Pope on the meaning of
"Catholicism" matter a lot, and so do those of the Dalai Lama on the
meaning of "Buddhism", etc. If _I_ think that a key part of the meaning
of "Catholicism" is eating a lot of radishes, that's a relatively harmless
eccentricity of mine with little influence on the discourse of others; if
_the Pope_ should ever hold such a strange opinion, it will have
inordinately vaster effect.

Not an effect limited to religions, by any means. What Guido van Rossum
thinks about the meaning of the word "Pythonic", for example, matters a lot
more, to ascertain the meaning of that word, than the opinion on the same
subject, if any, of Mr Joe Average of Squedunk Rd, Berkshire, NY. And
similarly for terms related to scientific, professional, and technological
disciplines, such as "econometrics", "surveyor", AND "AI".

The poverty of our abilities and understanding, and assorted lurking
mysteries, have as little to do with the best approach to ascertain the
definition of "AI", as the poverty of our tastebuds, and the risk that the
pasta is overcooked, have to do with the best approach to ascertain the
definition of "sugo alla carbonara". Definitions and meanings are about
*human language*, in either case.


Alex
 

Pascal Costanza

Francis said:
I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endeavor) than
functional programming.

Wrong in two ways:

1) Lisp is not a functional programming language.

2) Imperative programming does not match "ordinary" thinking. When you
visit a dentist, do you explain to her each single step she should do,
or do you just say "I would like to have my teeth checked"?


Pascal
 

JanC

Alex Martelli said:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
computation (or, for that matter, memorizing information and fetching it
back again, also simplified versions of "mechanisms underlying thought"),
this had better be a TAD more nuanced. I think a key quote on the AAAI
pages is from Grosz and Davis: "computer systems must have more than
processing power -- they must have intelligence". This should rule out
from their definition of "intelligence" any "brute-force" mechanism
that IS just processing power.

Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)

[*] Is there anything else considered "intelligent"?
 

Paul Foley

Francis Avila said:
I think what helped kill the lisp machine was probably lisp: many people
just don't like lisp, because it is a very different way of thinking that
most are rather unaccustomed to. Procedural, imperative programming is
simply a paradigm that more closely matches the ordinary way of thinking
(ordinary = in non-programming, non-computing spheres of human endeavor) than
functional programming.

Rubbish. Lisp is *not* a functional programming language; and Lisp
Machines also had C and Ada and Pascal compilers (and maybe others).

As such, lisp machines were an oddity and too different for many to
bother, and it was easy for them to come up with excuses not to bother
(so that the 'barrier of interest,' so to speak, was higher.) Lisp, the
language family (or whatever you want to call it), still has this
stigma: lambda calculus is not a natural way of thinking.

Lisp has almost nothing at all to do with the lambda calculus.
[McCarthy said he looked at Church's work, got the name "lambda" from
there, and couldn't understand anything else :) So that's about all
the relationship there is -- one letter! Python has twice as much in
common with INTERCAL! :)]
This isn't to make a value judgment, but I think it's an important thing
that the whole "functional/declarative v. procedural/OO" debate overlooks.

How does that have anything to do with Lisp, even if true? Lisp is,
if anything, more in the "procedural/OO" camp than the
"functional/declarative" camp. [Multiple inheritance was invented in
Lisp (for the Lisp Machine window system); ANSI CL was the first
language to have an OO standard; ...]

There's nothing wrong with functional languages, anyway (Lisp just
isn't one; try Haskell!)


[much clueless ranting elided]
 

Fred Gilham

John Thingstad said:
You all seem to forget www.google.com
One of the most used distributed applications in the world.
Written in Common Lisp (xanalysis)

Nah, google runs on 10K (yes, ten thousand) computers and is written
in C++. Norvig in his talk at the last Lisp conference explained all
this. He said this gave them the ability to patch and upgrade without
taking the whole system down --- they'd just do it
machine-by-machine.
 

Alex Martelli

JanC said:
Alex Martelli said:
But just about any mathematical theory is an abstraction of "mechanisms
underlying thought": unless we want to yell "AI" about any program doing
...
Maybe intelligence IS just processing power, but we don't know (yet) how
all these processes in the human brain[*] work... ;-)

[*] Is there anything else considered "intelligent"?

Depends on who you ask -- e.g., each of dolphins, whales, dogs, cats,
chimpanzees, etc, is "considered intelligent" by some but not all people.


Alex
 

Stephen Horne

Alex Martelli said:
Yes, the temptation to be clever and cut corners IS always there...;-).

It's an essential part of an intelligent (rather than perfect)
solution.

The only one of the programs above where I know the approach taken is
Deep Blue. A perfect search solution is still not viable for chess,
and unlikely to be for some time. Deep Blue therefore made extensive
use of heuristics to make the search viable - much the same as any
chess program.

A heuristic is by definition fallible, but without heuristics the
search cannot be viable at all.

So far as I can see, this is essentially the approach that human chess
players take - a mixture of learned heuristics and figuring out the
consequences of their actions several moves ahead (ie searching). One
difference is that the balance in humans is much more towards
heuristics. Another is that human heuristics tend to be much more
sophisticated (and typically impossible to state precisely in words)
because of the way the brain works. Finally, there is intuition - but
as I believe that intuition is basically the product of unconscious
learned (and in some contexts innate) heuristics and procedural memory
applied to information processing, it really boils down to the same
thing - a heuristic search.
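A minimal sketch of that mixture, in Python, assuming a toy game
interface (legal_moves, apply_move and heuristic_value are hypothetical
parameters, not any real engine's API): exact search to a fixed depth,
with a fallible heuristic evaluation standing in for everything beyond
the horizon. A real chess engine adds pruning, move ordering, quiescence
search and so on, but the shape is the same.

def search(state, depth, maximizing, legal_moves, apply_move, heuristic_value):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        # Horizon reached: fall back on the heuristic, which may be wrong.
        return heuristic_value(state), None
    best_move = None
    best_value = float("-inf") if maximizing else float("inf")
    for move in moves:
        value, _ = search(apply_move(state, move), depth - 1, not maximizing,
                          legal_moves, apply_move, heuristic_value)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

# Toy example: a "game" where a move adds to a counter and the heuristic
# simply likes large counters.
value, move = search(
    state=0, depth=3, maximizing=True,
    legal_moves=lambda s: [+1, -1, +2] if abs(s) < 5 else [],
    apply_move=lambda s, m: s + m,
    heuristic_value=lambda s: s,
)
print(value, move)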

The human and computer approaches (basically their sets of heuristics)
differ enough that the computer may do things that a human would
consider stupid, yet the computer still beat the human grandmaster
overall.

Incidentally, I learned more about heuristics from psychology
(particularly social psychology) than I ever did from AI. You see,
like Deep Blue, we have quite a collection of innate, hard-wired
heuristics. Or at least you lot do - I lost a lot of mine by being
born with Asperger syndrome.

No, it wasn't general ability that I lost by the neural damage that
causes Asperger syndrome. I still have the ability to learn. My IQ is
over 100. But as PET and MRI studies of people with Asperger syndrome
reveal, my social reasoning has to be handled by the general
intelligence area of my brain instead of the specialist social
intelligence area that neurotypical people use. What makes the social
intelligence area distinct in neurotypicals? To me, the only answer
that makes sense is basically that innate social heuristics are held
in the social intelligence area.

If the general intelligence region can still take over the job of
social intelligence when needed, the basic 'algorithms' can't be
different. But domain-specific heuristics aren't so fundamental that
you can't make do without them (as long as you can learn some new
ones, presumably).

What does a neural network do under training? Basically it searches
for an approximation of the needed function - a heuristic. Even the
search process itself uses heuristics - "a better approximation may be
found by combining two existing good solutions" for recombination in
genetic algorithms, for instance.

Now consider the "Baldwin Effect", described in Steven Pinker's "How
the Mind Works"...

"""
So evolution can guide learning in neural networks. Surprisingly,
learning can guide evolution as well. Remember Darwin's discussion of
"the incipient stages of useful structures" - the
what-good-is-half-an-eye problem. The neural-network theorists
Geoffrey Hinton and Steven Nowlan invented a fiendish example. Imagine
an animal controlled by a neural network with twenty connections, each
either excitatory (on) or neutral (off). But the network is utterly
useless unless all twenty connections are correctly set. Not only is
it no good to have half a network; it is no good to have ninety-five
percent of one. In a population of animals whose connections are
determined by random mutation, a fitter mutant, with all the right
connections, arises only about once every million (2^20) genetically
distinct organisms. Worse, the advantage is immediately lost if the
animal reproduces sexually, because after having finally found the
magic combination of weights, it swaps half of them away. In
simulations of this scenario, no adapted network ever evolved.

But now consider a population of animals whose connections can come in
three forms: innately on, innately off, or settable to on or off by
learning. Mutations determine which of the three possibilities (on,
off, learnable) a given connection has at the animal's birth. In an
average animal in these simulations, about half the connections are
learnable, the other half on or off. Learning works like this. Each
animal, as it lives its life, tries out settings for the learnable
connections at random until it hits upon the magic combination. In
real life this might be figuring out how to catch prey or crack a nut;
whatever it is, the animal senses its good fortune and retains those
settings, ceasing the trial and error. From then on it enjoys a higher
rate of reproduction. The earlier in life the animal acquires the
right settings, the longer it will have to reproduce at the higher
rate.

Now with these evolving learners, or learning evolvers, there is an
advantage to having less than one hundred percent of the correct
network. Take all the animals with ten innate connections. About one
in a thousand (2^10) will have all ten correct. (Remember that only one
in a million nonlearning animals had all twenty of its innate
connections correct.) That well-endowed animal will have some
probability of attaining the completely correct network by learning
the other ten connections; if it has a thousand occasions to learn,
success is fairly likely. The successful animal will reproduce
earlier, hence more often. And among its descendants, there are
advantages to mutations that make more and more of the connections
innately correct, because with more good connections to begin with, it
takes less time to learn the rest, and the chances of going through
life without having learned them get smaller. In Hinton and Nowlan's
simulations, the networks thus evolved more and more innate
connections. The connections never became completely innate, however.
As more and more of the connections were fixed, the selection pressure
to fix the remaining ones tapered off, because with only a few
connections to learn, every organism was guaranteed to learn them
quickly. Learning leads to the evolution of innateness, but not
complete innateness.
"""

A human brain is not so simple, but what this says to me is (1) that
anything that allows a person to learn important stuff (without
damaging flexibility) earlier in life should become innate, at least
to a degree, and (2) that learning should work together with
innateness - there is no hard divide (some aspects of neurological
development suggest this too). So I would expect some fairly fixed
heuristics (or partial heuristics) to be hard wired, and I figure
autism and Asperger syndrome are fairly big clues as to what is
innate. Stuff related to nonverbal communication such as body
language, for instance, and a tendency to play activities that teach
social stuff in childhood.

And given that these connectionist heuristics cannot be stated in
rational terms, they must basically form subconscious processes which
generate intuitions. Meaning that the subconscious provides the most
likely solutions to a problem, with conscious rational thought only
handling the last 5% ish of figuring it out. Which again fits my
experience - when I'm sitting there looking stupid and people say 'but
obviously it's z' and yes, it obviously is - but how did they rule out
a, b, c, d, e, f and so on and so forth so damned quick? Maybe they
didn't. Maybe their subconscious heuristics suggested a few 'obvious'
answers to consider, so they never had to think about a, b, c...


Human intelligence is IMO not so hugely different from Deep Blue, at
least in principle.

no understanding, no semantic modeling.
no concepts, no abstractions.

Sounds a bit like intuition to me. Of course it would be nice if
computers could invent a rationalisation, the way that human brains
do.

What? I hear you say...

When people have had their corpus callosum cut, so the left and right
cerebral cortex cannot communicate directly, you can present an
instruction to one eye ('go get a coffee' for instance) and the
instruction will be obeyed. But ask why to the other eye, and you get
an excuse (such as 'I'm thirsty') with no indication of any awareness
that the request was made.

You can even watch action potentials in the brain and predict people's
actions from them. In fact, if you make a decision in less than a
couple of seconds, you probably never thought about it at all -
whatever you may remember. The rationalisation was made up after the
fact, and 'backdated' in your memory.

Hmmm - how long does it take to reply to someone in a typical
conversation...

Actually, it isn't quite so bad - the upper end of this is about 3
seconds IIRC, but the absolute lower end is something like half a
second. You may still have put some thought into it in advance down to
that timescale (though not much, obviously).

Anyway, why?

Well, the textbooks don't tend to explain why, but IMO there is a
fairly obvious explanation. We need excuses to give to other people to
explain our actions. That's easiest if we believe the excuses
ourselves. But if the actions are suggested by subconscious
'intuition' processes, odds are there simply isn't a rational line of
reasoning that can be put into words. Oh dear - better make one up
then!

Alex Martelli said:
Again, that's not AAAI's claim, as I read their page. If a problem
can be solved by brute force, it may still be interesting to "model
the thought processes" of humans solving it and implement _those_ in
a computer program -- _that_ program would be doing AI, even for a
problem not intrinsically "requiring" it -- so, AI is not about what
problem is being solved, but (according to the AAAI as I read them)
also involves considerations about the approach.

1. This suggests that the only intelligence is human intelligence. A
very anthropocentric viewpoint.

2. Read some cognitive neuroscience, some social psychology,
basically whatever you can get your hands on that has cognitive
leanings (decent textbooks - not just pop psychology) - and
then tell me that the human mind doesn't use what you call brute
force methods.
 

Scott McKay

In the context of LaTeX, some Pythonista asked what the big
successes of Lisp were. I think there were at least three *big*
successes.
a. AFAIK Orbitz frequently has to be shut down for maintenance
(read "full garbage collection" - I'm just guessing: with
generational garbage collection, you still have to do full
garbage collection once in a while, and on a system like that
it can take a while)

Not true. The servers do get restarted a couple of times
a day to get new data loads, which are huge.
 
