BIG successes of Lisp (was ...)


John J. Lee

I prefer not to comment on the "jobbing physicist" part.

Hmm, just in case of any misunderstanding: I wasn't trying to belittle
your work as a Physicist by using that phrase, Michele. It was meant
light-heartedly, and wasn't even meant as a description of you in
particular (I've never read any of your Physics work, after all).


John
 

Michele Simionato

Stephen Horne said:
Perhaps cats simply don't have a particle/wave duality issue to worry
about.

I have got the impression (please correct me if I misread your posts) that
you are invoking the argument "cats are macroscopic objects, so their
undulatory nature does not matter at all, whereas electrons are
microscopic, so their undulatory nature does matter a lot."
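The scale gap that argument turns on is easy to put numbers on. A quick sketch (the masses and speeds below are illustrative assumptions, not measured values):

```python
# Rough comparison of de Broglie wavelengths, lambda = h / (m * v).
# The masses and speeds are illustrative round numbers, not measured values.

h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Return the de Broglie wavelength in metres."""
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1.0e6)   # a fairly slow electron
cat      = de_broglie_wavelength(4.0, 1.0)          # a 4 kg cat at walking pace

print(f"electron: {electron:.2e} m")  # around 7e-10 m: atomic scale
print(f"cat:      {cat:.2e} m")       # around 1.7e-34 m: utterly unobservable
```

The electron's wavelength is comparable to an atom; the cat's is some 24 orders of magnitude below anything measurable, which is the whole content of the "macroscopic objects don't diffract" argument.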

This kind of argument is based on the de Broglie wavelength concept and
is perfectly fine. Nevertheless, I would like to make clear (probably
it is already clear to you) that quantum effects are by no means
confined to the microscopic realm. We cannot say "okay, quantum is
bizarre, but it does not affect me, it affects only a little world
that I will never see". That's not true. We see macroscopic effects of
the quantum nature of reality all the time. Take for instance
conduction theory. When you turn on your computer, electrons flow
through a copper cable from the electric power plant to your house.
Any modern theory of conduction is formulated as a
(non-relativistic) quantum field theory of an electron gas
interacting with a lattice of copper atoms. From the microscopic
theory you get macroscopic concepts; for instance, you may determine
the resistivity as a function of the temperature. The classical
Drude model has long since ceased to be a good-enough explanation of
conductivity. Think also of superconductivity and superfluidity:
these are spectacular examples of microscopic quantum effects
affecting macroscopic quantities.
Finally, the most extreme example: quantum fluctuations
during the inflationary era, when the entire observable universe
had a microscopic size, are ultimately responsible for the density
fluctuations at the origin of galaxy formation. Moreover, we
observe effects of these fluctuations in the cosmic background
radiation, i.e. in photons coming from the most extreme
distances in the Universe, photons that have travelled for billions
and billions of light years. Now, that's macroscopic!
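The resistivity-from-microscopics point can be sketched numerically. The standard phonon-scattering result is the Bloch-Grüneisen form, which is linear in T at high temperature and falls off like T^5 at low temperature; the Debye temperature and overall prefactor below are arbitrary placeholders, not copper's actual values:

```python
import math

# Sketch of how a microscopic (electron-phonon scattering) theory yields a
# macroscopic rho(T) curve: the Bloch-Grueneisen form. The Debye temperature
# and the overall prefactor are placeholders, not copper's real values.

def bloch_gruneisen(T, theta=300.0, n_steps=20000):
    """Phonon-limited resistivity, up to an arbitrary constant factor."""
    upper = theta / T
    dx = upper / n_steps
    total = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * dx            # midpoint rule avoids the 0/0 at x = 0
        ex = math.exp(x)
        total += x**5 * ex / (ex - 1.0)**2 * dx
    return (T / theta)**5 * total

# High T: resistivity grows linearly; low T: it falls off like T^5.
print(bloch_gruneisen(3000) / bloch_gruneisen(1500))  # close to 2
print(bloch_gruneisen(30) / bloch_gruneisen(15))      # close to 2**5 = 32
```

The same microscopic integral reproduces both regimes, which is exactly the "macroscopic concepts from microscopic theory" point.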


Michele
 

Stephen Horne

I have got the impression (please correct me if I misread your posts) that
you are invoking the argument "cats are macroscopic objects, so their
undulatory nature does not matter at all, whereas electrons are
microscopic, so their undulatory nature does matter a lot."

That is *far* from what I am saying.

I find some explanations of superposition and decoherence difficult to
believe *because* they seem to differentiate between the microscopic
and macroscopic scales. MWI is one - the appearance is that
macroscopic objects in superposition get a different universe for each
superposed state (because there is no visible artifact of the
superposition - the observer is only in one universe) whereas for
microscopic objects in superposition there is no different universe
(as there are clear artifacts of the superposition, showing that the
superposed states interacted and thus existed in the same universe at
the same time).

I equally find the 'conscious mind has special role as observer'
hypothesis hard to accept as we have ample evidence that the universe
existed for billions of years before there were any conscious minds
that we know of. The evidence suggests that conscious minds exist
within the universe as an arrangement of matter subject to the same
laws as any other arrangement of matter.

In both cases, there is no issue of proof or logic involved. It's more
a matter of credibility - and with the conscious mind concept in
particular, of explanatory value. As far as science has studied the
mind so far all the findings show it to be an arrangement of matter
following the same laws of physics and chemistry that any other
arrangement of matter follows. There is no sign of an outside agency
creating unexplainable artifacts in the brain's functioning. And if
there is no role for a thing outside of the brain to be generating
consciousness - if the consciousness we experience is a product of the
brain - then what role does this other consciousness have?

While I have a tendency to confuse his name (I think I called him
Penfold earlier, though what Dangermouse's sidekick has to do with this
I really can't say!), I prefer Penrose's theory, where the microscopic
and macroscopic really are different - not because they follow
different rules, but because the time that a superposition can survive
is inversely related to the uncertainty it creates in space-time. Have
a lot of mass in substantially different states (e.g. a cat both alive
and dead, or for that matter a vial of poison both broken and intact)
and the superposition can only survive for a tiny fraction of a second.

I'm not sure if this is the same Penrose who speculates that
superposition of brain states is important to creating consciousness.
It would be odd if it is, of course, as a brain is clearly
macroscopic. But then he could mean something else - many
superpositions of particles within the brain. As long as each
superposition created only a small local uncertainty in space-time
(i.e. no substantial 'hotspots' of superposition), this accumulation
of microscopic superpositions could be consistent - though to be
honest I seriously doubt it.

As should be clear, my understanding of the specifics of quantum
theory is extremely limited - but my understanding of general
scientific principles isn't too bad. That is why I earlier pointed out
that maybe the MWI wouldn't cause me such a problem if it was
expressed in some other way - after all, most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.
This kind of argument is based on the de Broglie wavelength concept and
is perfectly fine. Nevertheless, I would like to make clear (probably
it is already clear to you) that quantum effects are by no means
confined to the microscopic realm. We cannot say "okay, quantum is
bizarre, but it does not affect me, it affects only a little world
that I will never see". That's not true. We see macroscopic effects of
the quantum nature of reality all the time.

No problem with that, but we are seeing microscopic effects en masse
rather than macroscopic effects - something rather different, in my
mind, to a cat being both alive and dead at the same time. For
example...
Take for instance
conduction theory. When you turn on your computer, electrons flow
through a copper cable from the electric power plant to your house.
Any modern theory of conduction is formulated as a
(non-relativistic) quantum field theory of an electron gas
interacting with a lattice of copper atoms. From the microscopic
theory you get macroscopic concepts; for instance, you may determine
the resistivity as a function of the temperature. The classical
Drude model has long since ceased to be a good-enough explanation of
conductivity. Think also of superconductivity and superfluidity:
these are spectacular examples of microscopic quantum effects
affecting macroscopic quantities.

Of course. But none of these requires a macroscopic object to be
superposed. It may require many microscopic objects to have been
superposed, over and over again (I really don't know how, or even if,
superposition is really involved in these effects - but let me argue
the principle anyway) - but that isn't the same thing. Taking Penrose's
theory again, each individual superposition only creates a small local
uncertainty in spacetime. As long as the many separate superpositions
are spread out in space and time, there will be no particular
'hotspots' where superposition would break down. In fact, any
coincidental hotspots of uncertainty would accelerate decoherence of
superpositions in that region and thus act as a stabilising or
limiting factor in setting the amount of superposition that can occur
in any region.

I would suggest that this limiting effect would be a useful artifact to
look for - or at least some useful artifact might be suggested by the
idea - that could be tested for to prove or disprove the theory.
This limiting effect wasn't mentioned in the article, BTW - maybe
Penrose hasn't thought of it (he proposed to look for artifacts of a
single superposition which cannot be measured using current
technology).

Maybe superfluids would be the place to look for that artifact. Maybe
there is something in the shells of electrons around a nucleus - I'd
certainly expect quantum weirdness there.

But of course I wouldn't have a clue what kind of artifact to look
for, as my understanding is strictly limited to the 'I've read New
Scientist on occasion' level.
Finally, the most extreme example: quantum fluctuations
during the inflationary era, when the entire observable universe
had a microscopic size, are ultimately responsible for the density
fluctuations at the origin of galaxy formation. Moreover, we
observe effects of these fluctuations in the cosmic background
radiation, i.e. in photons coming from the most extreme
distances in the Universe, photons that have travelled for billions
and billions of light years. Now, that's macroscopic!

Of course, but the quantum effects are not particularly interesting in
that case. Or rather they are to cosmology, but not as far as I can
see to understanding quantum theory. It's a bit like looking at an
image from an electron microscope and claiming that an atom is several
mm wide - the artifact that you are observing has simply been scaled
up relative to the process that created that artifact. The effects
when they occurred were on the microscopic scale - only the artifacts
are macroscopic.
 

Anton Vredegoor

Stephen Horne said:
particular, of explanatory value. As far as science has studied the
mind so far all the findings show it to be an arrangement of matter
following the same laws of physics and chemistry that any other
arrangement of matter follows. There is no sign of an outside agency
creating unexplainable artifacts in the brain's functioning. And if
there is no role for a thing outside of the brain to be generating
consciousness - if the consciousness we experience is a product of the
brain - then what role does this other consciousness have?

The more territory modern science covers, the more it becomes clear
that the known parts of the universe are only a small part of what is
"out there". So "objectively" science gains more knowledge, but
relatively speaking (seen as a percentage of what is currently known
to be unknown, but knowable in principle) science is
losing ground fast. Also, an even greater area of the universe is
supposed to exist that we will not even have a chance *ever* to know
anything about.

Throughout the centuries there has been evidence placing humanity
firmly *not* in the center of the universe. First the earth was proven
to rotate around the sun and not the other way around, next our sun
was not in the center of the galaxy and so on.

Maybe now it is time to accept the fact that all the things we know
taken together are only a small fraction of what can be known, and
even more, that that fraction is not even in a central position in the
larger universe of the knowable, and that the knowable is just
vanishingly small compared to the great abyss of the unknowable
in which everything is embedded.

How come then that the sciences have been so uncannily effective
given that they are such an arbitrary choice within the knowable? The
answer is of course that there are a lot of other possible sciences,
completely unrelated to our own that would have been just as effective
as -or even more effective than- our current sciences, had they been
pursued with equal persistence during the same amount of time over a
lot of generations.

The effectiveness of the current natural sciences is a perceptual
artifact caused by our biased history. From a lot of different
directions messages are coming in now, all saying more or less the
same: "If one asks certain questions, one gets answers to those
questions in a certain way, but if one asks different questions one gets
different answers, sometimes even contradicting the answers to the
other questions".

This might seem mystic or strange but one can see these things
everywhere, if one asks that kind of question :)

One example would be the role the observer plays in quantum mechanics,
but something closer to a programmer would be the way static or
dynamic typing influences the way one thinks about designing a computer
program. A static typist is like someone removing superfluous material
in order to expose the statue hidden inside the marble, while a
dynamic typist would be comparable to someone taking some clay,
forming it into a shape and baking it into a fixed form only at the
last possible moment. These ways of designing things are both valid
(and there are infinitely more other ways to do it) but they lead to
completely different expectations about the design of computer
programs.
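The marble-versus-clay analogy can be put in (toy) Python terms; both function names here are invented for illustration:

```python
from fractions import Fraction

# A loose illustration of the analogy (both function names are invented here).

# "Marble" style: the shape is declared up front, and anything outside the
# declaration is meant to be chipped away before the program ever runs.
def perimeter_typed(sides: list[float]) -> float:
    return sum(sides)

# "Clay" style: the types stay soft until the last possible moment --
# anything summable is acceptable: ints, floats, Fractions...
def perimeter_duck(sides):
    return sum(sides)

print(perimeter_typed([3.0, 4.0, 5.0]))                  # 12.0
print(perimeter_duck([Fraction(1, 3), Fraction(2, 3)]))  # 1
```

The two bodies are identical; what differs is where the commitment to a shape is made, which is exactly the design-expectation difference being described.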

In a certain sense all science reduces the world to a view of it that
leaves out more than it describes, but that doesn't preclude it
being effective. For a last example, what about the mathematics of
curved surfaces? The sciences have had most of their successes using
computations based on straight lines, and only recently has the power of
curves been discovered to be as predictive as, or more predictive than,
linear approaches.

[..]
As should be clear, my understanding of the specifics of quantum
theory is extremely limited - but my understanding of general
scientific principles isn't too bad. That is why I earlier pointed out
that maybe the MWI wouldn't cause me such a problem if it was
expressed in some other way - after all, most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.

Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around. It's a many-to-one mapping, as in plotting a
sine function on an xy-plane and not being able to find a unique
x-coordinate for a given y-coordinate, while at the same time being
perfectly able to predict a y-coordinate given an x-coordinate.
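The sine example in concrete terms:

```python
import math

# Forward direction: every x maps to exactly one y.
x = 2.5
y = math.sin(x)

# Reverse direction: that y has infinitely many preimages (x, pi - x,
# x + 2*pi, ...), so asin() can only return one representative of them.
x_back = math.asin(y)            # lands in [-pi/2, pi/2], not back at 2.5

print(abs(x_back - x) > 0.1)             # True: we did not recover x
print(math.isclose(math.sin(x_back), y)) # True: yet it yields the same y
```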

Anton
 

Andrew Dalke

Anton Vredegoor said:
The more territory modern science covers, the more it becomes clear
that the known parts of the universe are only a small part of what is
"out there".

But as Stephen pointed out, the new things we find are new
in terms of arrangement but still following the same laws of physics
we see here on Earth. (There *may* be some slight changes in
cosmological constants over the last dozen billion years, but I
haven't followed that subject.) These arrangements may be
beyond our ability to model well, but there's little to suggest
that in principle they couldn't be. (Eg, QCD could be used to
model the weather on Jupiter, if only we had a currently almost
inconceivably powerful computer. Running Python. ;)
So "objectively" science gains more knowledge, but
relatively speaking (seen as a percentage of what is currently known
to be unknown, but knowable in principle) science is
losing ground fast. Also, an even greater area of the universe is
supposed to exist that we will not even have a chance *ever* to know
anything about.

That's a strange measure: what we know vs. what we know we
don't know. Consider physics back in the late 1800s. They could
write equations for many aspects of electricity and magnetism, but
there were problems, like the 'ultraviolet catastrophe'. Did they
consider those only minor gaps in knowledge or huge, gaping chasms
best stuck in a corner and ignored?

Is the gap between QCD and general relativity as big? Hmmmm...
Throughout the centuries there has been evidence placing humanity
firmly *not* in the center of the universe. First the earth was proven
to rotate around the sun and not the other way around, next our sun
was not in the center of the galaxy and so on.

You use this as an argument for insignificance. I use it to show
that the idea of "center of" is a meaningless term. If I want, I can
consider my house as the center of the universe. I can still make
predictions about the motion of the planets, and they will be
exactly as accurate as using a sun-centered model. The only
difference is that my equations will be a bit more complicated.
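The house-centered point can be made in toy code: with idealised circular orbits, an Earth-centered description of Mars (a deferent plus an epicycle) predicts exactly the same positions as the Sun-centered one; the radii and periods below are rough round numbers, and the circular orbits are a deliberate simplification:

```python
import math

# Toy circular orbits (radii in AU, periods in years -- rough round numbers).
def sun_centered(radius, period, t):
    angle = 2 * math.pi * t / period
    return (radius * math.cos(angle), radius * math.sin(angle))

def mars_seen_from_earth(t):
    """Heliocentric bookkeeping: subtract Earth's position at the end."""
    ex, ey = sun_centered(1.00, 1.00, t)
    mx, my = sun_centered(1.52, 1.88, t)
    return (mx - ex, my - ey)

def mars_seen_from_earth_geocentric(t):
    """Earth-centered bookkeeping: the Sun circles Earth, and Mars rides a
    circle around that moving point -- a deferent plus epicycle."""
    ex, ey = sun_centered(1.00, 1.00, t)
    sx, sy = -ex, -ey                     # Sun as seen from Earth
    mx, my = sun_centered(1.52, 1.88, t)  # Mars's circle about the Sun
    return (sx + mx, sy + my)

# Same prediction either way -- only which point sits "at the center" differs.
for t in (0.0, 0.3, 1.7):
    a = mars_seen_from_earth(t)
    b = mars_seen_from_earth_geocentric(t)
    assert math.isclose(a[0], b[0]) and math.isclose(a[1], b[1])
```

The geocentric version is just the heliocentric one with the terms regrouped, which is the sense in which it is "exactly as accurate, only more complicated".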
Maybe now it is time to accept the fact that all the things we know
taken together are only a small fraction of what can be known, and
even more, that that fraction is not even in a central position in the
larger universe of the knowable, and that the knowable is just
vanishingly small compared to the great abyss of the unknowable
in which everything is embedded.

Why? I mean, it's true, but it seems that some knowledge is
more useful than others. It's true because even if there were
a quadrillion people, each one would be different, with a unique
arrangement of thoughts and perceptions and a unique bit of
knowledge unknowable to you.
How come then that the sciences have been so uncannily effective
given that they are such an arbitrary choice within the knowable? The
answer is of course that there are a lot of other possible sciences,
completely unrelated to our own that would have been just as effective
as -or even more effective than- our current sciences, had they been
pursued with equal persistence during the same amount of time over a
lot of generations.

I don't follow your argument that this occurs "of course."

It's not for a dearth of ideas. Humans did try other possible
sciences over the last few millennia. Despite centuries of effort,
alchemy never became more than a descriptive science, and
despite millennia of attempts, animal sacrifices never improved
crop yields, and reading goat entrails didn't yield any better
weather predictions.

On the other hand, there are different but equivalent ways to
express known physics. For example, Hamiltonian and Newtonian
mechanics, or matrix vs. wave forms of classical quantum mechanics.
These are alternative ways to express the same physics, and some
are easier to use for a given problem than another. Just like a
sun-centered system is easier for some calculations than a "my house"
centered one.

On the third hand, there are new theoretical models, like string
theory, which are different than the models we use. But they are
not "completely unrelated" and yield our standard models given
suitable approximations.

On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.
The effectiveness of the current natural sciences is a perceptual
artifact caused by our biased history. From a lot of different
directions messages are coming in now, all saying more or less the
same: "If one asks certain questions, one gets answers to those
questions in a certain way, but if one asks different questions one gets
different answers, sometimes even contradicting the answers to the
other questions".

This might seem mystic or strange but one can see these things
everywhere, if one asks that kind of question :)

Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.
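A toy analogue of "a fixed approximation predicts fine, but pushing the expansion to higher orders diverges" is Euler's classic divergent series (a standard textbook example, not actual QED): the series sum (-1)^n n! x^n formally expands f(x) = integral of exp(-t)/(1+xt) from 0 to infinity, and truncating near order 1/x gives excellent answers even though the full series diverges.

```python
import math

# Toy analogue: the divergent series sum (-1)^n n! x^n versus the function
# it formally expands, f(x) = integral_0^inf exp(-t)/(1 + x*t) dt.

def f_exact(x, t_max=60.0, n_steps=60000):
    """Numerically integrate f(x); the tail beyond t_max is negligible."""
    dt = t_max / n_steps
    total = 0.0
    for i in range(n_steps):
        t = (i + 0.5) * dt  # midpoint rule
        total += math.exp(-t) / (1.0 + x * t) * dt
    return total

def partial_sum(x, order):
    """Truncate the formal series at the given order."""
    return sum((-1)**n * math.factorial(n) * x**n for n in range(order + 1))

x = 0.1
exact = f_exact(x)
print(abs(partial_sum(x, 10) - exact))  # small: truncating near order 1/x works
print(abs(partial_sum(x, 40) - exact))  # enormous: the series itself diverges
```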

For a simpler case .. what is the center of the universe? All locations
are equally correct. Is it mystic then that there can be multiple
different answers, or is it simply that the question isn't well defined?
One example would be the role the observer plays in quantum mechanics,
but something closer to a programmer would be the way static or
dynamic typing influences the way one thinks about designing a computer
program.

The latter argument was an analogy that the tools (formalisms) affect
the shape of science. With that I have no disagreement. The science
we do now is affected by the existence of computers. But that's
because no one without computers would work on, say, fluid dynamics
simulations requiring trillions of calculations. It's not because the
science is fundamentally different.

And I don't see how the reference to QM affects the argument. Then
again, I've no problems living in a collapsing wave function.
In a certain sense all science reduces the world to a view of it that
leaves out more than it describes, but that doesn't preclude it
being effective. For a last example, what about the mathematics of
curved surfaces? The sciences have had most of their successes using
computations based on straight lines, and only recently has the power of
curves been discovered to be as predictive as, or more predictive than,
linear approaches.

Yes, a model is a reduced representation. The orbit of Mars can be
predicted pretty well without knowing the location of every rock on
its surface. The argument is that knowing more of the details (and
having the ability to do the additional calculations) only improves the
accuracy. And much of the training in science is in learning how to
make those approximations and recognize what is interesting in the
morass of details.

As for "straight lines", I don't follow your meaning. Orbits have
been treated as curves since, well, since before Ptolemy (who used
circles) or since Kepler (using ellipses), and Newton was using
parabolas for trajectories in the 1600s, and Einstein described
curved space-time a century ago.
Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around.

You have a strange definition of "effectiveness." I think a science
is effective when it helps understand the world.

The other solution is to know everything about everything, and, well,
I don't know about you but my brain is finite. While I can remember
a few abstractions, I am not omniscient.

Andrew
(e-mail address removed)
 

GrayGeek

Andrew said:
But as Stephen pointed out, the new things we find are new
in terms of arrangement but still following the same laws of physics
we see here on Earth. (There *may* be some slight changes in
cosmological constants over the last dozen billion years, but I
haven't followed that subject.) These arrangements may be
beyond our ability to model well, but there's little to suggest
that in principle they couldn't be. (Eg, QCD could be used to
model the weather on Jupiter, if only we had a currently almost
inconceivably powerful computer. Running Python. ;)

Probably not even with Python! :-(
Weather (3D fluid dynamics) is chaotic both here on Earth and on Jupiter.
As Dr. Lorenz established when he tried to model Earth's weather, prediction
of future events based on past behavior (deterministic modeling) is not
possible with chaotic events. Current weather models predicting global or
regional temperatures 50 years from now obtain those results by careful
choices of initial conditions and assumptions. In a chaotic system,
changing the inputs by even a small fractional amount causes wild swings
in the output, whereas in a non-chaotic model small changes in the input
produce correspondingly small changes in the output.
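Lorenz's sensitivity point in miniature, using the logistic map rather than a real weather model:

```python
# Lorenz's point in miniature: the logistic map x -> 4x(1-x) is chaotic,
# so two runs that differ by one part in ten billion soon disagree entirely.

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4, 60)
b = trajectory(0.4 + 1e-10, 60)   # a tiny nudge to the initial condition

gap = [abs(x - y) for x, y in zip(a, b)]
print(f"gap at step 0: {gap[0]:.1e}")
print(f"largest gap:   {max(gap):.2f}")   # order 1: all predictivity lost
```

The gap roughly doubles each step, so a 1e-10 error saturates the whole range [0, 1] within a few dozen iterations; no amount of extra arithmetic precision postpones this for long.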

Exactly. Even worse, the various peripheral areas of Physics and Math are
getting so esoteric that scholars in those areas are losing their ability
to communicate with each other. It is almost like casting chicken entrails.
That's a strange measure: what we know vs. what we know we
don't know. Consider physics back in the late 1800s. They could
write equations for many aspects of electricity and magnetism, but
there were problems, like the 'ultraviolet catastrophe'. Did they
consider those only minor gaps in knowledge or huge, gaping chasms
best stuck in a corner and ignored?

Is the gap between QCD and general relativity as big? Hmmmm...


You use this as an argument for insignificance. I use it to show
that the idea of "center of" is a meaningless term. If I want, I can
consider my house as the center of the universe. I can still make
predictions about the motion of the planets, and they will be
exactly as accurate as using a sun-centered model. The only
difference is that my equations will be a bit more complicated.


Why? I mean, it's true, but it seems that some knowledge is
more useful than others. It's true because even if there were
a quadrillion people, each one would be different, with a unique
arrangement of thoughts and perceptions and a unique bit of
knowledge unknowable to you.


I don't follow your argument that this occurs "of course."

It's not for a dearth of ideas. Humans did try other possible
sciences over the last few millennia. Despite centuries of effort,
alchemy never became more than a descriptive science, and
despite millennia of attempts, animal sacrifices never improved
crop yields, and reading goat entrails didn't yield any better
weather predictions.

On the other hand, there are different but equivalent ways to
express known physics. For example, Hamiltonian and Newtonian
mechanics, or matrix vs. wave forms of classical quantum mechanics.
These are alternative ways to express the same physics, and some
are easier to use for a given problem than another. Just like a
sun-centered system is easier for some calculations than a "my house"
centered one.

On the third hand, there are new theoretical models, like string
theory, which are different than the models we use. But they are
not "completely unrelated" and yield our standard models given
suitable approximations.

On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.


Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.

The charge of an electron is a case in point. Occam's Razor is the
justification for adopting unitary charges and disregarding fractional
charges. But who justifies Occam's Razor?

For a simpler case .. what is the center of the universe? All locations
are equally correct. Is it mystic then that there can be multiple
different answers, or is it simply that the question isn't well defined?

"All locations are equally correct" depends on your base assumptions about
the Cosmological Constant, and a few other constants. Event Stephen
Hawkings, in "A Brief History of Time" mentions the admixture of philsophy
in determining the value of A in Einstein's Metric.
 

Stephen Horne

Abstractness doesn't preclude effectiveness, but to try to use
abstractions to understand the world is foolish since it doesn't work
the other way around. It's a many-to-one mapping, as in plotting a
sine function on an xy-plane and not being able to find a unique
x-coordinate for a given y-coordinate, while at the same time being
perfectly able to predict a y-coordinate given an x-coordinate.

Oh dear, here we go again...

The human brain simply doesn't have the 'vocabulary' to handle
concepts which are outside of our direct experience. 'Gravity' we can
deal with as it is in our everyday direct experience. Space-time
curvature, OTOH, is not - it is an abstract concept (relative to what
we perceive directly) and we can only understand it by relating it to
things that we do understand - metaphor being a very common way of
doing so.

So in the case of space-time curvature, for instance, 'curvature'
itself is a metaphor. It relates to the geometry of a non-Euclidean
space (or rather space-time, in this case). The intuitive meaning of
the word 'curve' relates to shape - in mathematical terms, a locus of
points in space. But the concept of non-Euclidean curvature is not
intended to suggest that the non-Euclidean space exists within some
higher-order space. Sometimes it does (e.g. the non-Euclidean space
defined by the surface of the Earth exists within the 3D space of,
well, space ;-) ) but equally, often it does not.

So far as we know, space-time is 'curved' but does not exist in a
higher-order space - it just happens to have a non-Euclidean geometry.
And as that geometry is entirely defined (so far as we currently know)
by the content of space-time (gravity), there is no need to
hypothesise a higher-order space. In fact it may not be possible to
create appropriate space-time 'shapes' within any higher-order space -
it is certainly possible to define non-Euclidean 2D surfaces that
cannot be implemented using a real surface shape in flat 3D space.
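The "no higher-order space needed" point has a concrete counterpart: curvature is detectable purely from within a surface, e.g. from the angle sum of a geodesic triangle. A sketch on the unit sphere (an embedded surface, used here only because it is easy to compute with):

```python
import math

# Curvature is detectable from inside a surface: on a sphere the angles of a
# geodesic triangle sum to more than 180 degrees. Take the triangle whose
# corners are where the x, y and z axes pierce the unit sphere.

def angle_at(a, b, c):
    """Angle of the geodesic triangle abc at vertex a (unit vectors)."""
    def tangent(frm, to):
        # Component of `to` perpendicular to `frm`: the direction in which
        # the geodesic leaves `frm` heading for `to`.
        dot = sum(f * t for f, t in zip(frm, to))
        v = [t - dot * f for f, t in zip(frm, to)]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    t1, t2 = tangent(a, b), tangent(a, c)
    return math.acos(sum(x * y for x, y in zip(t1, t2)))

A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(math.degrees(total))   # 270, not 180: three right angles
```

The measurement uses only distances and directions available on the surface itself; no reference to a surrounding 3D space is needed, which is the sense in which 'curvature' can be intrinsic.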

You can point out that there is a specialist definition of the word
'curve' which relates specifically to non-Euclidean geometry, and deny
any connection to earlier meanings of the word 'curve' or to people's
intuitive sense of what a curve is, but that would be pretty abstruse.
The simple fact is that the word has considerable explanatory power -
it presents an abstract concept in terms that make sense to people,
and which allows them to visualise the concept. That is why the term
'curve' was selected when non-Euclidean geometry was first defined and
studied and, despite the later specialist definition of the word, it
remains relevant - it is still how people understand the geometry of
non-Euclidean spaces and it is still how people understand the concept
of space-time curvature. The real phenomena described by the term
'space-time curvature' simply don't exist in people's direct
experience.


I have just been through the same point (ad nauseam) in an e-mail
discussion. Yet I can't see what is controversial about my words.
Current theory is abstract (relative to what we can perceive) and we
simply don't have the right vocabulary (both in language, and within
the mind) to represent the concepts involved. We can invent words to
solve the language problem, of course, but at some point we have to
explain what the new words mean.

Thus, as I said, "most current theory is so abstract that the
explanations should be taken as metaphors rather than reality anyway."

The point being that different metaphors may equally have been chosen
to explain the same models - presumably emphasising different aspects
of them - and such explanations may work better for people who can't
connect with the existing explanations. The model would still be the
same, though, just as it remains the same even if you describe it in a
language other than English. The terminology changes, but not the
model.


I get the feeling that there must be some 'ism' relating to metaphor
in physics, and people are jumping to the conclusion that I'm selling
that 'ism'. But seriously, whatever a 'metaphorianist' may be, I am
not one of them.

Read very carefully, and you will note that I said the EXPLANATIONS
should be taken as metaphors - NOT the models themselves.

And yes, a model is distinct from the real thing that it models, but
that is a rather uninteresting point and not one that I would normally
bother commenting on.

As I know from the e-mail discussion that this point can be serially
misunderstood no matter how I state it, I will make one final attempt
to clarify my point as far as I am able and that is it - I am bored to
death with the issue, and have no interest in continuing it. So here
it is...

I don't deny that, for instance, quarks are real. They are just as
real as protons, atoms, molecules, bricks and galaxy clusters. The
fact that the concept of a quark is more than a little abstract
(relative to what humans can experience directly) does not stop it
being real.

The word "curvature" in "space-time curvature" is a metaphor, however.
Space-time does not exist as a locus of points in some higher-order
space. But space-time really does have a non-Euclidean geometry, which
is easier to understand if you visualise it in terms of curvature.
Hence the common ball-on-a-rubber-sheet analogy (the rubber sheet
being an easily understood 2D curved 'space' within a 3D space).

At no time did I say that relativity is a metaphor. The model
described by relativity is real (within limits, as with any model), as
is clear from the fact that it has been confirmed experimentally.

But if you believe you can describe relativity (or equally, quantum
mechanics) in entirely literal terms - at all, let alone in a way that
people can understand it - I'll be very *very* impressed.


I hope that is sufficient to lay that issue to rest, but in any case I
cannot be bothered with it any more. I *HAVE* asked several people IRL
what they thought I meant by that half sentence in case I was going
nuts, and it was clear that they all understood what I meant. This is
not, in other words, an Asperger syndrome based misunderstanding.

I stand by what I said 100%, but I can't write a book explaining every
half-sentence I write. Life is too short.


With apologies, BTW, to those who have written rather large books dealing
with my misunderstandings and thus helped me to understand how they
arise. But I really don't think my AS is the problem here.
 
M

Michele Simionato

Stephen Horne said:
That is *far* from what I am saying.
Oops, sorry! I was not sure about your point: sometimes I have difficulties
in understanding what you are saying, but when I understand it, I usually agree
with you ;)
The evidence suggests that conscious minds exist
within the universe as an arrangement of matter subject to the same
laws as any other arrangement of matter.

I think that mind is an arrangement of matter, too; nevertheless,
I strongly doubt that we will ever be able to understand it. Incidentally,
I am also quite skeptical about AI claims.
I prefer Penrose' theory

I read a book by Penrose years ago: if I remember correctly, he
was skeptical about AI (that was ok). However, at some point there was
an argument of this kind: we don't understand mind, we don't understand
quantum gravity, therefore they must be related. (?)
most current theory is so
abstract that the explanations should be taken as metaphors rather
than reality anyway.
True.

No problem with that, but we are seeing microscopic effects en masse
rather than macroscopic effects - something rather different, in my
mind, to a cat being both alive and dead at the same time.
The effects
when they occurred were on the microscopic scale - only the artifacts
are macroscopic.

I have nothing against what you say in the rest of your post, but let me
make a comment on these points, which I think is important and may be of
interest to the other readers of this wonderfully off-topic thread.

According to the old school of Physics, there is a sharp distinction
between fundamental (roughly, microscopic) Physics and
non-fundamental (roughly, macroscopic) Physics. The idea
is that once you know the fundamental Physics, you may in principle
derive all the rest (not only Physics, but also Chemistry, Biology,
Medicine, and in principle every science). This point of view,
reductionism, has never been popular among chemists or biologists, of
course, but it was quite popular among fundamental physicists with
a large dose of hubris.

Now, things are changing. Nowadays most people agree with the effective
field theory point of view. According to the effective field theory approach,
the fundamental (microscopic) theory is not so important. Actually, for
the description of most phenomena it is mostly irrelevant. The point is
that macroscopic phenomena (here I have in mind (super)conductivity or
superfluidity) are NOT simply microscopic effects en masse: in
certain circumstances they do NOT depend at all on the microscopic theory.

These ideas come from the study of critical phenomena (mostly in condensed
matter physics), where the understanding of the macroscopic is (fortunately)
completely decoupled from the understanding of the microscopic: we don't need
to know the "true" theory or a detailed description of the material we
are studying if we are near a critical point. In this situation it is enough to
know an effective field theory which can explain all the phenomena we can see
given a finite experimental precision, even if it is not microscopically
correct. From critical phenomena the concept of universality emerged: completely
different microscopic theories can give the *same* universal macroscopic
field theory. Actually, the only things that matter are the dimensionality
of the space-time and the symmetry group; all other details are irrelevant.

This point of view has become relevant at the fundamental Physics level too,
since nowadays most people regard the Standard Model of Particle Physics
(once regarded as "the" fundamental theory) as a low energy effective
theory of the "true" theory.

This means that even if it is not the full story, it explains
99.99% of the phenomena we can measure; moreover, it is extremely
difficult to see clear signatures of the "true" underlying theory.
The real theory can be string theory, can be loop quantum gravity,
can be a completely new theory, but for 99.99% of our experiments
only the effective theory matters. So, even if we knew perfectly
quantum gravity, this would not help at all in describing 99.99%
of elementary particle physics, since we would still need to
solve the quantum field theory.

And, for a similar reason, even if we knew everything about QCD,
we could not use it to describe the weather of Jupiter (which is described
by a completely different effective theory), even if we had an ultra-powerful
Python-powered quantum computer ...

That's life, but it is more interesting this way ;)


Michele
 
M

Michele Simionato

Andrew Dalke said:
On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.

Me too ;)
Or it means that asking those questions is meaningless. What's
the charge of an electron? The bare point charge is surrounded
by a swarm of virtual particles, each with its own swarm. If you
work it out using higher and higher levels of approximation you'll
end up with different, non-converging answers, and if you continue
it onwards you'll get infinite energies. But given a fixed
approximation you'll find you can make predictions just fine, and
using mathematical tricks like renormalization, the infinities cancel.

I would qualify myself as an expert on renormalization theory and I would
like to make an observation on how the approach to renormalization has
changed in recent years, since you raise the point.

At the beginning, quantum field theory was - more or less universally -
regarded as a fundamental theory. A fundamental theory means that, asking
the right questions, one must get the right answers.
Nowadays people are no longer so optimistic.

Quantum field theory is hard: even if the perturbative renormalizability
properties you are referring to can be proved (BTW, now there are easy
proofs based on the effective field theory approach; I did my Ph.D. on
the subject) very very little can be said at the non-perturbative level.
Also, there are worrying indications. It may very well be possible that
QFT does not exist as a fundamental theory: i.e. it is not mathematically
consistent. For instance, perturbation theory in quantum electrodynamics
is probably not summable, so the sum of the renormalized series (even if
any single term is finite) is not finite. In practice, this is not bad,
since we can resum even non-summable series, but the price to pay to make
an infinite sum finite is to add an arbitrariness (technically, this
is completely unrelated to the infinities in renormalization, they
only seem similar). Now, one can prove that the arbitrariness is
extremely small and has no effect at all at our energy scales: but
in principle it seems that we cannot completely determine an observable,
even in quantum electrodynamics, due to an internal inconsistency of the
mathematical model.

Notice that what I am saying is by no means a definitive statement:
there are no conclusive proofs that the standard model is
mathematically inconsistent. But it could be. And it would not be
surprising at all, given the experience we have with simpler models.
The latter argument was an analogy that the tools (formalisms) affect
the shape of science. With that I have no disagreement. The science
we do now is affected by the existence of computers. But that's
because no one without computers would work on, say, fluid dynamics
simulations requiring trillions of calculations. It's not because the
science is fundamentally different.

Yes, and still a lot of science is done without computers. I never
used a computer for my scientific work, except for writing my papers
in latex ;)
The other solution is to know everything about everything, and, well,
I don't know about you but my brain is finite. While I can remember
a few abstractions, I am not omnipotent.

we are not omnipotent nor omniscient, but still we may learn something ;)
 
S

Stephen Horne

I don't follow your argument that this occurs "of course."

It's not for a dearth of ideas. Humans did try other possible
sciences over the last few millennia. Despite centuries of effort,
alchemy never became more than a descriptive science, and
despite millennia of attempts, animal sacrifices never improved
crop yields, and reading goat entrails didn't yield any better
weather predictions.

;-)

Actually I missed this point in Anton's post, being already primed to
be bugged by his last paragraph or two, so I will reply to it here.

The choice was not arbitrary by any stretch of the imagination. We
could not construct the models described by quantum mechanics or
relativity until we had a good understanding of classical mechanics.
We cannot perceive either quantum or relativistic effects directly, so
they could not be the earliest models. We needed sufficient scientific
understanding and practical technology to be able to observe these
effects at all.

I doubt anyone could form a sensible theory of electricity, for
instance, if the only experience of electricity that they could
perceive was of phenomena such as lightning and flames. No wonder it
was all blamed on angry gods!

And yes, even classical mechanics could not have been our first model
for simple commonsense reasons. How often, for instance, did ancient
Greeks get to observe objects moving through a frictionless
environment?
On the other hand, there are different but equivalent ways to
express known physics. For example, Hamiltonian and Newtonian
mechanics, or matrix vs. wave forms of classical quantum mechanics.
These are alternative ways to express the same physics, and some
are easier to use for a given problem than another. Just like a
sun-centered system is easier for some calculations than a "my house"
centered one.

Rather similar to the idea of using different metaphors to explain the
same model, though you are looking at maths rather than language.
On the third hand, there are new theoretical models, like string
theory, which are different than the models we use. But they are
not "completely unrelated" and yield our standard models given
suitable approximations.

Agreed. Just as quantum mechanics and relativity both yield a close
approximation of classical mechanics within certain limits, and just
as classical mechanics yields something close to 'intuitive physics'
within the limits of most people's everyday lives.
On the fourth hand, Wolfram argues that cellular automata
provide such a new way of doing science as you argue. But
my intuition (brainw^Wtrained as it is by the current scientific
viewpoint) doesn't agree.

I just love the way that a guy who got rich selling software to do
fiddly maths jobs such as working with systems of differential
equations has suddenly decided that all that fiddly maths is
completely the wrong way to go ;-)

But even if, at some level, the universe is a cellular automata, I
don't see that meaning that the fiddly maths can be abandoned. The
fiddly maths is generally an artifact of removing detail in a sense,
after all - we use the formula for the entire path, for instance,
rather than listing all the points that make up the path. And the list
of points, like the list of states of the cells, lacks explanatory
power.
Or it means that asking those questions is meaningless.

I wouldn't go so far. No model (at least none we have yet) is perfect,
so different models are bound to contradict each other - particularly
when you push them beyond their limits. Extrapolation is always less
reliable than interpolation, so it is best not to use a model to
extrapolate beyond the range where experiment has shown it to apply.

But there is clearly a baseline reality which these models are seeking
to approximate.

As I mentioned earlier, when a primitive person tries to understand
how your car works, the engine does not turn into a demon. The
technology based on our current scientific understanding works,
whatever you personally happen to believe.
For a simpler case .. what is the center of the universe? All locations
are equally correct. Is it mystic then that there can be multiple
different answers or is simply that the question isn't well defined?

Hmmm - I suppose this depends what you mean by center. If you mean
'origin' in the graph-plotting sense, then you are right, of course.

But my understanding is that the universe, so far as anyone can tell,
is either an infinite space or finite without bounds. In either case,
there is no such thing as a center.

I find the 'infinite' theory dubious - if the expansion rate has
remained finite since the big bang, then how can space have grown to
become infinite? The only way I can understand it is if space was
always infinite. That wouldn't necessarily mean it can't 'expand',
just as it isn't necessarily impossible to multiply infinity by two.

I guess 'expansion' relating to the universe is a metaphor too, really
- after all, the universe isn't an object within some other space. The
'expansion' is really just rewriting of the scale factors on the
dimension axes of the universe, I suppose. That being why the speed of
light isn't a problem in inflation - nothing is actually moving faster
than the speed of light, even though the distances between things are
expanding faster than the speed of light.

Hmmm - I wonder if 'expansion' or 'scale' is a continuous value in
space-time, like curvature? Well, I guess it must be really - just
write the model in those terms and hey presto - but what I mean, I
guess, is "is there a function that can define that 'scale' in terms
of local physics to explain things we don't currently have an
explanation for?".

And I don't see how the reference to QM affects the argument. Then
again, I've no problems living in a collapsing wave function.

I suspect this is the 'conscious mind has special role as observer'
thing again. And as has already been stated, there are other explanations
of waveform collapse that don't require consciousness to take a
special role. Explanations that make more sense, as the observer never
had any control over how the waveform collapses - it is a mechanical
process that follows clearly defined non-mystical rules.
 
A

Andrew Dalke

Me:
GrayGeek:
Weather (3D fluid dynamics) is chaotic both here on Earth and on Jupiter.
As Dr. Lorenz established when he tried to model Earth's weather, prediction
of future events based on past behavior (deterministic modeling) is not
possible with chaotic events.

Weather is chaotic, but you misstate the conclusion. Short term predictions
are possible. After all, we do make weather predictions based on
simulations, and the "shot in the dark" horizon is getting more distant.
We're even getting reasonable models for hurricane track predictions.
Orbital mechanics for the major planets are also chaotic, it's just that the
time frame for problems well exceeds the life of the sun. (As I recall;
don't have a reference handy.)

Also, knowledge of history does help. Chaotic systems are still
subject to energy conservation and other physical laws, so
observations help predict which parts of phase space are accessible.
And if the system is only mildly chaotic (eg, Lyapunov exponent is
small enough) then an observation which is "close enough" to the
current state does help predict some of the future.
In a chaotic system changing the inputs by even a small fractional
amount causes wild swings in the output, but for deterministic
models fractional changes on the input produce predictable outputs.

To be nit-picky, that should be "... amount eventually causes arbitrary
differences in the output .. " (up to the constraints of phase space).
The two values could swing wildly but still track each other for some
time.
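Andrew's two caveats - that a "close enough" observation helps only for a while, and that a chaotic trajectory still respects the constraints of phase space - are easy to demonstrate in a toy system. A minimal sketch using the logistic map at r = 4 (my choice of standard chaotic example, not anything from the thread):

```python
# Sensitive dependence in the logistic map x -> r*x*(1-x) at r = 4,
# a standard fully chaotic system. Two nearly identical initial
# conditions track each other for a while, then decorrelate - but
# both remain confined to the accessible phase space [0, 1].

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-10, 60)  # perturbed by one part in 10^10

# Early on, the two runs still agree closely ...
assert abs(a[5] - b[5]) < 1e-6

# ... but the difference grows roughly like exp(lambda * n)
# (Lyapunov exponent lambda = ln 2 for r = 4), so after a few
# dozen steps the trajectories differ by order one.
assert max(abs(a[n] - b[n]) for n in range(40, 61)) > 0.1

# Conservation-law-style constraints survive, though: every state
# lies in the invariant interval [0, 1].
assert all(0.0 <= x <= 1.0 for x in a + b)
```

With a perturbation of 1e-10 and a doubling of the error per step on average, the "prediction horizon" here is roughly 33 steps - a small-Lyapunov-exponent system would buy you proportionally more time, which is Andrew's point.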
The charge of an Electron is a case in point. Okkam's Razor is the
justification for adopting unitary charges and disregarding fractional
charges. But, who justifies Okkam's Razor?

Quarks have partial charges, and solid state uses partial charges
for things like the fractional Hall effect.

The justification is that without Occam (or Ockham)'s razor
then there is no way to choose between theories with the same
ability to describe observed data.

In a simple case, consider
x y
-----
1 1
2 2
3 3
4 4
5 5

This can be modeled with y = x or with the osculating

y = 1*(x-2)*(x-3)*(x-4)*(x-5)/( (1-2)*(1-3)*(1-4)*(1-5) ) +
2*(x-1)*(x-3)*(x-4)*(x-5)/( (2-1)*(2-3)*(2-4)*(2-5) ) +
3*(x-1)*(x-2)*(x-4)*(x-5)/( (3-1)*(3-2)*(3-4)*(3-5) ) +
4*(x-1)*(x-2)*(x-3)*(x-5)/( (4-1)*(4-2)*(4-3)*(4-5) ) +
5*(x-1)*(x-2)*(x-3)*(x-4)/( (5-1)*(5-2)*(5-3)*(5-4) )

(Hope I got that all correct. BTW, I remember this as
an osculating function, because it wobbles back and forth
so much it 'kisses' the actual function. However, the term
'osculating curve' appears to be something different and the
term 'osculating function' is almost never used. Pointers? )

Both describe the finite amount of data seen. Which
do you prefer, and why?
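One wrinkle worth noting: since the five points are collinear and the Lagrange interpolant is the *unique* polynomial of degree <= 4 through five points, the big expression above actually simplifies to y = x exactly - it doesn't wobble. A model that genuinely wobbles between the data points needs an extra term that vanishes at each observation, e.g. y = x + (x-1)(x-2)(x-3)(x-4)(x-5). A quick Python check (my own sketch, not from the post):

```python
# The Lagrange interpolant through (1,1)..(5,5), compared with the
# simple model y = x and a "wobbly" alternative that also fits the
# data exactly.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 2.0, 3.0, 4.0, 5.0]

def lagrange(x):
    """Evaluate the Lagrange interpolant of the data at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

def wobbly(x):
    """y = x plus a degree-5 term that vanishes at every data point."""
    bump = 1.0
    for xi in xs:
        bump *= (x - xi)
    return x + bump

# All the models agree on the observed data ...
for xi in xs:
    assert abs(lagrange(xi) - xi) < 1e-9
    assert abs(wobbly(xi) - xi) < 1e-9

# ... but between the points, the interpolant IS the line (collinear
# data pin down the unique degree<=4 polynomial, namely y = x), while
# the wobbly model actually deviates:
assert abs(lagrange(2.5) - 2.5) < 1e-9
assert abs(wobbly(2.5) - 1.09375) < 1e-9  # 2.5 - 1.5*0.5*0.5*1.5*2.5
```

Occam's point survives the fix: both y = x and the wobbly model describe the five observations perfectly, yet only one of them should be trusted at x = 2.5.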


Me:
"All locations are equally correct" depends on your base assumptions about
the Cosmological Constant, and a few other constants. Even Stephen
Hawking, in "A Brief History of Time", mentions the admixture of philosophy
in determining the value of A in Einstein's Metric.

I was referring to my earlier statement that I could designate my house
as the center of the universe and still have all my calculations come
out correct. I somewhat overstepped that when I made the above
statement.

I looked in my copy of ABHoT but didn't see mention of "A".
It's been about a decade since I last looked at Wheeler, and I
never took a GR course, so I don't recognize the term. Web
searches don't find anything relevant. (Do you really mean
"a Lorentzian manifold whose Ricci tensor R_(ab) in the
coordinate basis is a constant scalar multiple of the metric
tensor g_(ab)."? Perhaps you mean the Robertson-Walker
metric, which appears to meet that definition. But there
doesn't appear to be an A term in the formulations I found.
Perhaps it's the a in the cosmic scale factor of the Friedmann
equation?)

How does the cosmological constant affect things? I don't
recall that having an implication on isotropy and homogeneity.

In any case, you are referring to the observed large-scale
isotropy and homogeneity of the universe. There is a bit in
ABHoT on that, but it's pre-COBE, and definitely pre-brane and
the statements of Hawking are more of a "this may explain things
but it's untested." Then again, that's about the current state of
the art too. ;)

So it's still pretty safe to say that my house is the center
of the universe.

Andrew
(e-mail address removed)
P.S.
When you quote someone else's post, please take care to
trim the paragraphs you are not responding to. That makes
it easier to find the text you added.
 
S

Stephen Horne


Thank god - someone actually understood that bit!!!

Except you could have agreed with your own misunderstanding of what I
meant, I suppose - but lets agree to ignore that option ;-)
According to the old school of Physics, there is a sharp distinction
between fundamental (roughly, microscopic) Physics and
non-fundamental (roughly, macroscopic) Physics. The idea
is that once you know the fundamental Physics, you may in principle
derive all the rest (not only Physics, but also Chemistry, Biology,
Medicine, and in principle every science). This point of view,
reductionism, has never been popular among chemists or biologists, of
course, but it was quite popular among fundamental physicists with
a large dose of hubris.

I think I understand what you mean.

I am aware of the idea, though I haven't really considered what it
implies. Certainly we haven't discovered any fundamental layer of
physics yet (AFAIK) and we may never do. And even if we do discover a
baseline level, it may be that we can never express the higher levels
deterministically in baseline level terms (as Goedel showed, there are
statements that can never be proven or disproven - the incompleteness
theorem).

If forced to take a position, I would say that the key requirement is
that each model be consistent with all other models at all layers of
abstraction over the range where all are applicable. A higher level
layer may have features that cannot be derived from the lower level
layers, but they cannot contradict each other unless you go outside
the scope where one or more of the models is applicable.
Now, things are changing. Nowadays most people agree with the effective
field theory point of view. According to the effective field theory approach,
the fundamental (microscopic) theory is not so important. Actually, for
the description of most phenomena it is mostly irrelevant. The point is
that macroscopic phenomena (here I have in mind (super)conductivity or
superfluidity) are NOT simply microscopic effects en masse: in
certain circumstances they do NOT depend at all on the microscopic theory.

OK - but if you are describing superfluidity as a single macroscopic
effect then you must describe it within a macroscopic framework. At
which point it has nothing to do with quantum effects because it isn't
within a quantum framework - it is just that the macroscopic
phenomenon called electricity (distinct from electrons moving en
masse) is not subject to the macroscopic phenomenon called resistance
(distinct from energy loss through the electromagnetic interactions
between electrons and atoms en masse) when the macroscopic phenomenon
called temperature (distinct from the kinetic energy of atoms en
masse) is sufficiently low.

There is nothing wrong with this per se - it is the limit of most
people's (mine included) understanding of superconductivity - but it
has nothing to do with the framework of quantum mechanics.

The quantum framework may give an explanation, of sorts, for why
superconductivity occurs (or perhaps Goedel has put his veto on this)
but I do understand why explaining something less abstract to our
perceptions in terms of something even more abstract might seem
counterproductive ;-)
That's life, but it is more interesting this way ;)

Agreed ;-)
 
A

Andrew Dalke

Stephen Horne:
We cannot perceive either quantum or relativistic effects directly, so
they could not be the earliest models.

[In general I agree with your post. Just some comments.]

What about superfluid helium?
And yes, even classical mechanics could not have been our first model
for simple commonsense reasons. How often, for instance, did ancient
Greeks get to observe objects moving through a frictionless
environment?

Every clear night.
I just love the way that a guy who got rich selling software to do
fiddly maths jobs such as working with systems of differential
equations has suddenly decided that all that fiddly maths is
completely the wrong way to go ;-)

Excepting that he spent 10 years on that book :)
But even if, at some level, the universe is a cellular automata, I
don't see that meaning that the fiddly maths can be abandoned.

I liked the scene in one of Brin's novels (from the trilogy beginning
with Brightness Reef). Alien civilizations are the result of a lineage
several billion years old. Nearly all knowledge is found in the Library.
Computer simulations are based on automata theory. But Earth
isn't part of the culture ("wolflings") and developed this bizarre
math using infinitesimals which was sophisticated enough to make
pen&paper(&abacus) predictions of certain events which were
hard to simulate.
I wouldn't go so far. No model (at least none we have yet) is perfect,
so different models are bound to contradict each other - particularly
when you push them beyond their limits.

True. But some questions are meaningless. "Wave or particle?"
"Where is the center of a black hole?" "What would happen if you
were driving at the speed of light and turned the headlights on?"
Hmmm - I suppose this depends what you mean by center. If you mean
'origin' in the graph-plotting sense, then you are right, of course.

But my understanding is that the universe, so far as anyone can tell,
is either an infinite space or finite without bounds. In either case,
there is no such thing as a center.

Michele is a better one for this topic. My point was just that many
different answers don't necessarily imply a mystic explanation.
I find the 'infinite' theory dubious - if the expansion rate has
remained finite since the big bang, then how can space have grown to
become infinite?

There's also Olbers' paradox, but that also requires infinite time.

I read a popular account of "branes", membrane theory, which
was interesting. I don't know enough to describe it, other than
that the universe was created from high-dimensional membranes
hitting each other.

Andrew
(e-mail address removed)
 
S

Stephen Horne

Weather (3D fluid dynamics) is chaotic both here on Earth and on Jupiter.
As Dr. Lorenz established when he tried to model Earth's weather, prediction
of future events based on past behavior (deterministic modeling) is not
possible with chaotic events. Current weather models predicting global or
regional temperatures 50 years from now obtain those results by careful
choices of initial conditions and assumptions. In a chaotic system
changing the inputs by even a small fractional amount causes wild swings in
the output, but for deterministic models fractional changes on the input
produce predictable outputs.

Very true for predicting weather, but the 50 years hence models are
predicting climate. That is a different layer of abstraction, and not
necessarily chaotic (at least on the same timescales) as shown by the
fact that the real world climate only changes relatively slowly -
despite some quite random inputs such as sunspot activity which have
nothing to do with chaos in the climate model.

Whether these models are actually accurate (or rather which, if any)
is, of course, a whole other question. I guess we'll find out in 50
years time ;-)
Exactly. Even worse, the various peripherals of Physics and Math are
getting so esoteric that scholars in those areas are losing their ability
to communicate with each other. It is almost like casting chicken entrails.

There are just too many too abstract fields to be studied, I guess -
at some point, we'll need more specialists than the entire human
population!

Better start working on them AI systems ;-)
 
A

Andrew Dalke

Michele Simionato:
I would qualify myself as an expert on renormalization theory and I would
like to make an observation on how the approach to renormalization has
changed in recent years, since you raise the point.

Feel free. I started a field theory course in '93 but didn't finish it
as I decided to do computational biophysics instead. So not only is
my knowledge dated but it wasn't strong to begin with.

(Plus, I was getting sick of SHOs ;)
only seem similar). Now, one can prove that the arbitrariness is
extremely small and has no effect at all at our energy scales: but
in principle it seems that we cannot completely determine an observable,
even in quantum electrodynamics, due to an internal inconsistency of the
mathematical model.

How small? Planck scale small?
Yes, and still a lot of science is done without computers. I never
used a computer for my scientific work, except for writing my papers
in latex ;)

Whereas I went into computer simulations. Then again, I
wrote my first simulation program in ... 9th grade? .. for simulating
orbits, and tested it out by hand. Too bad I didn't know that I
should decrease the timestep, as my planets jumped all over the
place, and I didn't know about using a symplectic integrator,
nor about atan2, nor ...
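For the curious, the difference a symplectic integrator makes is easy to show. Here is a minimal toy sketch (my own example, with GM = 1 and a circular orbit as initial condition; none of it is from the original post): explicit Euler lets the orbital energy drift steadily, which is why the planets "jump all over the place", while semi-implicit (symplectic) Euler keeps the energy error bounded at the same timestep.

```python
import math

# Toy 2D Kepler orbit (GM = 1), comparing explicit Euler with
# semi-implicit ("symplectic") Euler at the same timestep.

def accel(x, y):
    """Inverse-square acceleration toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    """Specific orbital energy: kinetic + potential."""
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

def final_energy(symplectic, steps=5000, dt=0.01):
    # Circular orbit: r = 1, v = 1, so the exact energy is -0.5.
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    for _ in range(steps):
        ax, ay = accel(x, y)
        if symplectic:
            # Semi-implicit Euler: update velocity first, then
            # move with the NEW velocity.
            vx += dt * ax; vy += dt * ay
            x += dt * vx;  y += dt * vy
        else:
            # Explicit Euler: move with the OLD velocity.
            x += dt * vx;  y += dt * vy
            vx += dt * ax; vy += dt * ay
    return energy(x, y, vx, vy)

euler_err = abs(final_energy(False) + 0.5)      # drifts steadily
symplectic_err = abs(final_energy(True) + 0.5)  # stays bounded
assert symplectic_err < euler_err
```

The only change between the two schemes is the order of the position and velocity updates - yet one spirals outward over many orbits and the other doesn't, because the semi-implicit update (nearly) preserves the phase-space structure of the flow.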

Andrew
(e-mail address removed)
 
S

Stephen Horne

Stephen Horne:
We cannot perceive either quantum or relativistic effects directly, so
they could not be the earliest models.

[In general I agree with your post. Just some comments.]

What about superfluid helium?

Superfluid helium is a macroscopic phenomenon - it may be explained in
terms of QM effects, but that doesn't make it a quantum effect in
itself any more than more everyday macroscopic effects (which can also
be described in QM terms). If superfluid helium is your only clue, it
will tell you no more about quantum effects than e.g. lightning tells
you about the properties of an electron.

Besides, you need the science and technology to achieve very low
temperatures before you can observe superfluid helium. It may be
pretty cold in the winter, but even so we don't often see superfluid
helium laying around ;-)
Every clear night.

Yes, but they mostly thought the planets obeyed different laws to the
things they could see up close. Besides, with an Earth-centric model,
it is pretty hard to see the simple patterns of motion - and of course
even if they did, gravity is still confusing the issue. Don't forget
that it was actually quite a big leap of understanding when Newton
realised that the planets followed elliptical orbits because of the
same force that made apples fall from trees - it is only from the
perspective of having been told this since the age of 12 that it seems
obvious.

It was quite a revelation to discover that the physics of the cosmos
were actually the same physics we experience on the ground.
True. But some questions are meaningless. "Wave or particle?"
"Where is the center of a black hole?" "What would happen if you
were driving at the speed of light and turned the headlights on?"

Absolutely.

"Wave" and "particle" should be seen as metaphors, each describing a
subset of the properties of subatomic particles. The 'duality' is an
artifact of the metaphors.

The centre of a black hole exists, in a sense, but we can never
observe it because it is inside the event horizon, and as time itself
stops at the event horizon (from the perspective of any outside
observer) there is even good reason for claiming that the space inside
the event horizon doesn't exist.
Michele is a better one for this topic. My point was just that many
different answers don't necessarily imply a mystic explanation.

Yes, sorry - I was just following a random tangent.
I read a popular account of "branes", membrane theory, which
was interesting. I don't know enough to describe it, other than
that the universe was created from high-dimensional membranes
hitting each other.

I read a book about string and brane theory some time ago - I guess
possibly the same one, though it has vanished into book-borrowing
space as all the best books do so I can't tell you the title.

Lots of theory about possible geometries and topologies of many
dimensional space-time and how they could change from one another.
They didn't address the issue of how they could change at all, given
that time existed within the geometry rather than outside of it, and
for that among other reasons my impression was that it was a
fascinating read that nevertheless left me with no more clue than I
had to start with.

I would at least have appreciated a definition of supersymmetry,
rather than the usual 'it's too abstract for your puny mind' cop-out.
 
S

Stephen Horne

Orbital mechanics for the major planets are also chaotic, it's just that the
time frame for problems well exceeds the life of the sun. (As I recall;
don't have a reference handy.)

Are you sure?

I know that multi-object gravitational systems can be chaotic in
principle (and I believe that the orbits of some of Jupiter's moons are
a case in point) but I thought the orbit of the planets around the sun
had been proven stable. Which implies that you needn't worry about
chaos unless you are worried about the minor deviations from the
idealised orbits - the idealised bit can be treated as constant, and
forms a very close approximation of reality no matter what timescale
you are working in.

It was one of the big successes of Laplace's perturbation theory of
planetary motion, IIRC. But I could be mistaken.
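As an aside, the sensitive dependence being debated here is easy to demonstrate numerically. Below is a minimal sketch (not anyone's actual calculation): a planar Sun-Jupiter-test-particle model, integrated twice from starting points that differ by about 15 cm. All masses, distances and the near-resonant placement of the test body are illustrative toy numbers, not a real ephemeris.

```python
import math

# Toy illustration of sensitive dependence in a multi-body gravitational
# system: integrate a planar Sun-Jupiter-asteroid model twice from starting
# points differing by 1e-9 AU and compare the final positions.
# Units: G*M_sun = 1; distances in AU. Illustrative numbers only.

def accel(bodies):
    """Newtonian accelerations; bodies is a list of (m, x, y, vx, vy)."""
    out = []
    for i, (_, xi, yi, _, _) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, _, _) in enumerate(bodies):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy) ** 1.5
                ax += mj * dx / r3
                ay += mj * dy / r3
        out.append((ax, ay))
    return out

def step(bodies, dt):
    """One kick-drift-kick leapfrog step (fine for a qualitative demo)."""
    acc = accel(bodies)
    bodies = [(m, x, y, vx + ax * dt / 2, vy + ay * dt / 2)
              for (m, x, y, vx, vy), (ax, ay) in zip(bodies, acc)]
    bodies = [(m, x + vx * dt, y + vy * dt, vx, vy)
              for (m, x, y, vx, vy) in bodies]
    acc = accel(bodies)
    return [(m, x, y, vx + ax * dt / 2, vy + ay * dt / 2)
            for (m, x, y, vx, vy), (ax, ay) in zip(bodies, acc)]

def final_position(perturb, steps=100_000, dt=0.05):
    bodies = [
        (1.0, 0.0, 0.0, 0.0, 0.0),                      # Sun
        (1e-3, 5.2, 0.0, 0.0, 5.2 ** -0.5),             # Jupiter, circular orbit
        (0.0, 3.28 + perturb, 0.0, 0.0, 3.28 ** -0.5),  # test body near the 2:1 resonance
    ]
    for _ in range(steps):
        bodies = step(bodies, dt)
    return bodies[2][1], bodies[2][2]

x1, y1 = final_position(0.0)
x2, y2 = final_position(1e-9)              # initial offset: roughly 15 cm
sep = math.hypot(x1 - x2, y1 - y2)
print(f"final separation: {sep:.3e} AU")   # many orders of magnitude above 1e-9
```

The separation grows far beyond the initial 1e-9 AU offset, which is the qualitative point: determinism does not buy long-term predictability. Whether the real major planets diverge on a timescale that matters before the sun dies is exactly the quantitative question the thread is arguing about.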
 
M

Michele Simionato

Stephen Horne said:
I have just been through the same point (ad nauseam) in an e-mail
discussion. Yet I can't see what is controversial about my words.
Current theory is abstract (relative to what we can perceive) and we
simply don't have the right vocabulary (both in language, and within
the mind) to represent the concepts involved. We can invent words to
solve the language problem, of course, but at some point we have to
explain what the new words mean.

Thus, as I said, "most current theory is so abstract that the
explanations should be taken as metaphors rather than reality anyway."

The point being that different metaphors may equally have been chosen
to explain the same models - presumably emphasising different aspects
of them - and such explanations may work better for people who can't
connect with the existing explanations. The model would still be the
same, though, just as it remains the same even if you describe it in a
language other than English. The terminology changes, but not the
model.

Read very carefully, and you will note that I said the EXPLANATIONS
should be taken as metaphors - NOT the models themselves.

Dunno who was "attacking" you via e-mail, but FWIW, I fully support
your point of view and I am sure a lot of other people in science
would agree. The time when one could have a fully intuitive or
visual understanding of physical models is long past.

Nice discussion, BTW.

Michele
 
A

Anton Vredegoor

Stephen Horne said:
Oh dear, here we go again...

No, we don't :)

[..]
I have just been through the same point (ad nauseam) in an e-mail
discussion. Yet I can't see what is controversial about my words.
Current theory is abstract (relative to what we can perceive) and we
simply don't have the right vocabulary (both in language, and within
the mind) to represent the concepts involved. We can invent words to
solve the language problem, of course, but at some point we have to
explain what the new words mean.

Probably this e-mail discussion -which I didn't have any part in- has
caused a lot of irritation, some of which has ended up on my plate,
but I just want to make clear it's not my piece of cake :)

Anton
 
M

Michele Simionato

Andrew Dalke said:
Michele Simionato wrote:

How small? Planck scale small?

Actually it is much *smaller* than that: this is the reason why it is
not significant at all from a physical perspective. I was adopting there
a purely mathematical POV. In practice, the effect becomes relevant only
at absurdly high energy scales, where QED certainly does not apply:
so physicists don't need to worry at all. The number I find in my Ph.D. thesis
(http://www.phyast.pitt.edu/~micheles/dott.ps; unfortunately it is in
Italian, since they granted the freedom to write the dissertation in English
only the year after my graduation :-() is 10^227 GeV (!). BTW, it seems
too large now, I don't remember how I got it, but anyway I am sure the
number is much much larger than the Planck scale (10^19 GeV).

In my post I was simply saying that there are issues of principle:
in practice quantum electrodynamics is the most successful physical
theory we ever had, with incredibly accurate predictions. No doubt
about that ;)
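For a sense of how such absurd energy scales arise, here is the textbook one-loop estimate of where the QED Landau pole sits, for a single electron flavour: 1/alpha(mu) = 1/alpha(m_e) - (2/3pi) ln(mu/m_e), and the pole is where the right-hand side vanishes. This is a standard back-of-the-envelope sketch, not necessarily how the 10^227 GeV figure in the thesis was obtained.

```python
import math

# One-loop running of the QED fine-structure constant (one fermion flavour):
#   1/alpha(mu) = 1/alpha(m_e) - (2 / (3*pi)) * ln(mu / m_e)
# The Landau pole is the scale Lambda where 1/alpha(Lambda) hits zero,
# i.e. ln(Lambda / m_e) = 3*pi / (2 * alpha).

alpha = 1.0 / 137.036    # alpha at the electron mass scale
m_e_gev = 0.511e-3       # electron mass in GeV

ln_ratio = 3.0 * math.pi / (2.0 * alpha)                    # ln(Lambda / m_e)
log10_lambda = ln_ratio / math.log(10) + math.log10(m_e_gev)

print(f"one-loop Landau pole ~ 10^{log10_lambda:.0f} GeV")  # ~ 10^277 GeV
print("Planck scale         ~ 10^19 GeV")
```

With these inputs the pole lands around 10^277 GeV, i.e. hundreds of orders of magnitude beyond the Planck scale of 10^19 GeV, which is the point Michele is making: the pathology exists in principle but is utterly irrelevant in practice. (The exact exponent shifts with the number of charged fermion species included in the running.)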
 
