Design patterns

Joshua Maurice

Enough. ;)
What an impressive argument!

Technically, that was my point too. Thus far, we haven't made any
reasoned arguments. tanix, Branimir Maksimovic, and I have just
been spitting out definitions, or axioms, without any sort of coherent
argument. We've just made the claim "Intelligence is defined as
X, and I am right". However, at least I mentioned that I was doing
that.

This is going to quickly devolve into an argument over "correct
definitions" or an argument of religion, neither of which I'm keen to
discuss, so I will leave it at that. (I do wish that instead we could
try to distill down the core of what most people mean when they say
"intelligence", but I fear that such a discussion is also off topic,
and none of the people in the discussion are well enough educated to
have such a discussion, myself included.)
 
MiB

Well, I am not against design patterns in principle.

But what I DO see all over the place is a literal obsession.
That web page used two design patterns for a single thing.
I do not argue whether it IS the way to go or not.

But that looked like extremism to me, just from glancing at it.

It is not uncommon practice to mix several design patterns.
Note that using a design pattern does not imply you are talking about
every aspect of an application.

For example, assume you need to do the architecture for an application
that shall have a GUI and needs to access a central database in a
distributed system. You may end up with a design that separates the
internal representation of data from how it is represented in the GUI,
plus some (again separate) mechanism to react to user input from the
GUI and propagate business-logic activity to state changes in the GUI.
If you talk to some other expert about this approach, it may be easier
to just say you are planning for a Model-View-Controller design
pattern, and usually he will understand what you mean without the need
to explain the details, many of which may not yet be fixed anyway.
For the access to the central database, you may find it a good idea not
to access the database directly, but to layer an application server in
between, and already you are using a second pattern, "Three-Tier".
The connection to the application server may be encapsulated in its
own class, and for efficiency you want to make sure that a) it is only
established if actually needed, b) it is shared by different parts of
your application, and c) at most one object is created; you'll
probably end up with the "Singleton" pattern.
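
A minimal Java sketch of that last piece, assuming a hypothetical
AppServerConnection class (the name and the send() method are only
illustrative, not taken from any real framework):

public final class AppServerConnection {

    // Created lazily, so the connection is only established if actually needed (a).
    private static AppServerConnection instance;

    private AppServerConnection() {
        // open the (expensive) connection to the application server here
    }

    // Single shared access point for all parts of the application (b);
    // synchronized so that at most one instance is ever created (c).
    public static synchronized AppServerConnection getInstance() {
        if (instance == null) {
            instance = new AppServerConnection();
        }
        return instance;
    }

    public void send(String request) {
        // forward the request to the application server
    }
}

Callers just write AppServerConnection.getInstance().send(...), and the
class itself guarantees the three properties a), b) and c) above.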

At no time are you limited to using an existing design pattern; if you
find something unique that is better suited to the problem at hand,
by all means use it. However, if it proves useful, you should make a
note of why you chose this special design and how the parts fit
together; maybe it can be reused for a different problem later and
become a design pattern for you.

I would like to recommend two books; there are a number of other
good books on the topic, but these are the ones I liked most:

E. Gamma, R. Helm, R. Johnson, J. Vlissides: "Design Patterns -
Elements of Reusable Object-Oriented Software", Addison-Wesley, 1995.
M. Fowler: "Patterns of Enterprise Application Architecture",
Addison-Wesley, 2003.

The Fowler book in particular changed my view on software development;
I have rarely encountered similar eye-openers. Stroustrup's "The C++
Programming Language" was one (not the language-description part, but
the later chapters about programming paradigms); "Gödel, Escher, Bach"
by D. Hofstadter was another one for me.

best,

MiB
 
Kaz Kylheku

> I think that this does not have to do anything with soul.
> Fact is that algorithm cannot think and that's it.

This is not a fact, but an open question in artificial intelligence
research.

> Human consciousness and intelligence does not works
> on algorithm.

Assertion without evidence.

Quantum physicists believe in a finite-state universe. If consciousness
is embedded in a finite-state universe, then it means it's part of a
finite state machine, ergo ...

But that is not so interesting; what's more provoking is the possibility
that consciousness could be encoded in a lot fewer states.

> Plain fact. We can invent algorithm,

Not all of us, just a small minority.

> but algorithm itself can't produce previously
> unknown algorithm.

Obviously, an algorithm whose purpose isn't algorithm invention doesn't
invent algorithms.

Genetic programming is a concrete example of algorithms inventing
algorithms.

> This is mathematical fact....

ROFL.
 
Branimir Maksimovic

Kaz said:
> This is not a fact, but an open question in artificial intelligence
> research.

If you say so...

> Assertion without evidence.

Evidence is that in mathematics there is no algorithm to prove valid
logic formulas.

> Quantum physicists believe in a finite-state universe. If consciousness
> is embedded in a finite-state universe, then it means it's part of a
> finite state machine, ergo ...

This is also an assertion without proof. The fact is that the set of all
valid second-order logic formulas is not even recursively enumerable.

> But that is not so interesting; what's more provoking is the possibility
> that consciousness could be encoded in a lot fewer states.

No one knows what consciousness is yet...

> Not all of us, just a small minority.

Everybody invents algorithms; they do not
have to be algorithms for computers...
For example, a simple algorithm to shop for something...

> Obviously, an algorithm whose purpose isn't algorithm invention doesn't
> invent algorithms.
>
> Genetic programming is a concrete example of algorithms inventing
> algorithms.

Genetic programming? Could you provide an example of an algorithm
creating an algorithm that is in no way encoded in the original algorithm?

Greets
 
tanix

> If you say so...
>
> Evidence is that in mathematics there is no algorithm to prove valid
> logic formulas.
>
> This is also an assertion without proof. The fact is that the set of all
> valid second-order logic formulas is not even recursively enumerable.
>
> No one knows what consciousness is yet...

Sorry to interfere here, but I can tell you that those
that experience it "know" what it is.

Not sure if you have heard of such a thing as awareness.

Sure, as far as science goes, they do not know what consciousness is,
and it is not even in the cards. It is like knowing God or Truth,
which is simply WAY out of scope of this domain.

Yes, you CAN have some taste of it.
But to know it, you have to be it, nothing less.

Hope you don't mind THAT much.

> Everybody invents algorithms; they do not
> have to be algorithms for computers...
> For example, a simple algorithm to shop for something...
>
> Genetic programming? Could you provide an example of an algorithm
> creating an algorithm that is in no way encoded in the original algorithm?
>
> Greets

--
Programmer's Goldmine collections:

http://preciseinfo.org

Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
 
Daniel Pitts

Jayson said:
> Bean it! Everything... bean-able. Enterprise Java Bean, Netbeans...
> Kicked in the beans.

The Java folks had a great idea with JavaBeans, but they failed at
implementing it in a consistent and useful way. There is no language
level support for property-change listeners, so it becomes extremely
burdensome trying to create a functional bean which is useful as a GUI
model.
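
As a rough illustration of that burden, here is roughly what a single
bound property costs in boilerplate with the standard java.beans
classes (PersonBean is a made-up example, not anything from this thread):

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class PersonBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name;

    public String getName() { return name; }

    public void setName(String newName) {
        String oldName = this.name;
        this.name = newName;
        // Notify any GUI model listening to this bean that the property changed.
        pcs.firePropertyChange("name", oldName, newName);
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public void removePropertyChangeListener(PropertyChangeListener l) {
        pcs.removePropertyChangeListener(l);
    }
}

Multiply that by every property on every bean and the burden becomes
obvious; with language-level support it would be a single declaration.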

Don't get me wrong, I'm primarily a Java programmer. I just think that
the Bean hype has caused many people to forget basic abstraction
concepts, and the benefits of Beans have yet to be fully realized.
 
Jacqueline Townsend

Yes, robust design patterns kind of get untidy; therefore, most programmers avoid them.
 
tanix

>> Kicked in the beans.
>
> The Java folks had a great idea with JavaBeans, but they failed at
> implementing it in a consistent and useful way. There is no language
> level support for property-change listeners, so it becomes extremely
> burdensome trying to create a functional bean which is useful as a GUI
> model.
>
> Don't get me wrong, I'm primarily a Java programmer. I just think that
> the Bean hype has caused many people to forget basic abstraction
> concepts, and the benefits of Beans have yet to be fully realized.

Well, I think Sun stretched itself too much creating ALL sorts of
gadgets, "toolkits", "subsystems" and you name it.

As a result, they spread themselves too thin, worked in too many
different directions and finally, boom, the biggest tragedy in the
software business, a war with Microsoft, which simply killed Java.

As of this moment, the traffic on MFC sites is twice as much as on
C++ sites and at least 5 times as much as on Java sites.

And Java has contributed significantly by simplifying the language,
getting away from all these pointers, "by value", references and
all sorts of other complications in the language that end up creating
nothing but nightmares.

People often forget that developers have ALL sorts of things on their
minds. They don't need to remember another universe of things when
incorporating some functionality from some toolkit or some other
voluminous language complication.

Just switching from the IDE worldview, with its thousands of things
to remember, to the debugger worldview, with its piles of things,
to the language worldview, and on and on and on, down to switching
your mind to your email program's worldview, then your editor's
worldview, which also has thousands of things to keep in mind
and its own concepts, symbols, keystroke sequences, codes,
maps, tables, languages, fonts, colors and on and on and on.

So, what happens is that you have to remember millions of
different things, and hundreds of ways to switch your mind into a
totally different universe and its perspective.

So, when someone develops something, they think someone has
either the time or the interest to study another Bible-sized book
and INSTANTLY be able to switch to another Bible-sized worldview.

That is what I mean when I say:
"We totally do not understand what information is."
Just a stone-age view.

Meanwhile, the load on the mind is simply immense.

Just look at things like Java generics or C++ templates,
or all sorts of .NET things.

Does anybody think that someone is going to sit there for half
an hour and stare at all those hieroglyphs when they look at
some method or class? How long is it going to take you to
develop anything worth even mentioning if you have to switch
your mind at this rate?

And now we have these design patterns with ALL sorts of side
effects. People look at design patterns as some kind of
revolutionary progress and try to stick them onto anything
they can.

Just as I described before, I worked on JSpider.
Very nice conceptually. It used the Visitor pattern and the
whole program was totally async. When you try to spider the
web using this thing, the number of things you have to do is
quite something to consider. There are ALL sorts of complications,
structures, objects, trees and you name it to be dynamically
constructed, updated, etc.
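
For readers who haven't met it, a bare-bones Visitor looks something
like the sketch below (hypothetical types; JSpider's real interfaces
differ). Even this toy version bounces control between accept() and
visit(), which is part of what makes stepping through a large async
spider so disorienting:

// The visitor declares one method per resource type it knows how to handle.
interface ResourceVisitor {
    void visit(Page page);
    void visit(Image image);
}

// Each resource type hands itself back to the visitor ("double dispatch").
abstract class Resource {
    abstract void accept(ResourceVisitor v);
}

class Page extends Resource {
    @Override void accept(ResourceVisitor v) { v.visit(this); }
}

class Image extends Resource {
    @Override void accept(ResourceVisitor v) { v.visit(this); }
}

// One concrete visitor; a spider would have others for parsing, storing, etc.
class LoggingVisitor implements ResourceVisitor {
    @Override public void visit(Page page)   { System.out.println("visited a page"); }
    @Override public void visit(Image image) { System.out.println("visited an image"); }
}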

Try to debug this thing?
Well, I spent at least a week working with that code and trying
to extend it so it has much more powerful functionality and
much more flexible everything.

You cannot even DEBUG that thing in async mode, because you
may get ANY async event coming from ANY of the threads,
simultaneously accessing either some page or an entire site.

As a result, you cannot single-step through this thing
and actually relate anything to anything else.

It was the worst nightmare I recall.

No wonder the guy who originally wrote JSpider just gave
up and has not maintained or developed it for at least the last
5 years, despite the fact that it is the most powerful
spider I have seen, at least in the Java world, which is the ONLY
thing I am interested in.

Just being able to run my main app on Linux after my Windows
box was rooted and was unusable for a month beats all the
arguments I have heard against Java hands down.

There was not even a need to recompile it.
Sure, the GUI did look quite different and less pleasing than
on Windows, but the functionality was perfectly there.

And I see ALL sorts of design patterns that create such a
nightmare from a debugging standpoint, or even from the standpoint
of being able to quickly analyze your code within a couple
of seconds in order to implement something or fix something,
that the overall net benefit is zero, if not negative.

But yes, on paper it DOES look like something "revolutionary",
something really "impressive".

As a mind trip, that is.

 
tanix

> Yes, robust design patterns kind of get untidy; therefore, most programmers
> avoid them.

Well, if you could put that design pattern in some comfortable box
so it sits there, and once you have gone through the pain of implementing
it you don't have to worry about it, then it would be a different
story.

But plenty of design patterns end up creating async code, and
they are such a pain in the neck to debug in a more or less complex
program that I bet you can hardly find more than a handful of them that
actually help things and fit like a hand in a glove in the end.

 
Nick Keighley

I'm dubious.

One of the more abused results in mathematics. Comparable with
"Einstein showed everything was relative" and "modern physics is only
just discovering what eastern philosophy has long known".


And CTT would then argue that nothing else can be either. Disproofs
of AI seem to assume their conclusion.


You think!

(That is, are humans "simple" chemical machines,



How about this one:

"There are no closed systems. So the issue of entropy does not apply."


> AI is just a myth.
> How can you possibly create an ARTIFICIAL intelligence
> if you don't even know how natural, and that is biological,
> intelligence "works"?

Surely the point is to find out? With your attitude we wouldn't have
steam engines or aeroplanes, let alone computers.

> AI is simply trying to copycat that which already exists
> in the biological world.

Assuming that is what they're doing, you say that like it's a bad
thing.
 
tanix

Sorry, I'd like to stay away from this, but cannot.
Intelligence is NOT, and never EVER will be,
"a capability to find an algorithm to solve some problem".
This is the HIGHEST-order insult to Intelligence.
That is ALL I am interested in saying or even seeing
in THIS grade of crap.

So, everybody has had a chance to talk their brains out on "design patterns".

And here is the scoop from my end of the wire:

Design patterns are not some recipes from God telling you "how it is"
in Reality.

They are ideas from some hopefully creative people who discovered
certain things, and those things are:

1) Optimization

Optimization is a broad issue.
You can optimize for performance.
You can optimize for code size.
You can optimize for the minimum number of methods,
thus assuring the maximum generality of the code.

2) System structure

System structure is a highly complex issue.
Basically, the MAJOR principle, the #1 criterion in a system,
is STABILITY.

If your program, which is a system, ever breaks,
then you may end up with a nuclear disaster,
just for the sake of argument.

Performance IS important, but only to the extent that it
does not affect the #1 criterion, stability.

A system needs to be structured to minimize the complexity
of interactions. The more components and subcomponents
the system has, the more complexity results, and complexity
of ALL sorts, such as the number of interactions and the path
length to interact (the more steps you have to perform,
the longer your path is, and the higher the probability of
increased complexity).

This is basically a beauty domain.
Simplicity.

A well-designed system has the minimal number of components
and the minimal amount of interaction.

3) Reusability

Reusability is another way to describe the generality or
universality of some component or subsystem.
It indirectly translates into portability as a side effect.

In a well-designed system, some component or subsystem
could be used for multiple purposes by adding a thin
layer above it.

So, the fundamental premise behind "design patterns"
is to maximize the system's "correctness".

What is a "correct" system?
Well, it is a mathematical issue.
In formal terms, it requires proof that your program logic
will perform as advertised under ANY conditions whatsoever.

To prove that a system is "correct" in terms of mathematics
is FAR from being a trivial task.

It is interesting to note that beyond the idea of a semaphore,
which was proven to be formally correct by Dijkstra,
there exists no proof for more complex
systems, to the best of my knowledge.

The complexity of the problem is just immense.

Why are we leaning toward this train of thought?

Well, because when you evaluate the applicability of some
design "pattern", you have to watch really carefully what
it buys you.

There are ALL sorts of issues in a system design, and,
especially in these modern and hectic times,
you need to consider the information overload factor.

What is the information overload factor?

It is a very interesting issue, and highly complex.

One of the aspects of it is the fact that we are subjected
to tremendous amounts of information jamming our brains
almost anywhere we look.

In the software business, most of the time, people designing
something do not even bother to consider how consistent their
system is, in terms of being conceptually similar to that
which is well known and established at the moment.

So, they keep hacking up some "new" ways of looking at things.
Take, for example, the user interface issue.

Have you noticed that most of the programs out there do not
present you with a consistent user interface? They simply
invent all sorts of buttons, functionality, tabs, etc. as
THEY see fit, without even considering the fact that the user
may have to deal with hundreds of different programs, each
having thousands of different parameters, notions, functions,
buttons and you name it.

Each program simply exists in its own artificially created
world without even considering that the user's mind cannot
possibly switch from one worldview to another with a flip
of the finger. His mind has to literally switch to another
universe with thousands if not millions of ALL sorts of
parameters, ways of looking at essentially the same things, etc.

One very interesting thing was Microsoft's idea of
consistent GUI design, which they started advocating nearly a
generation ago.

The idea was essentially this:

Since the most general level of the GUI in any program
is the menu bar, you need to design it in a consistent manner.
Each program, no matter what it does, should have the following
menu items:

1) File
2) Edit
3) Help

I do not remember the details exactly, but the idea still holds.

Basically, what it means is that your mind can easily switch
while using different programs because your basic menu categories
are logically similar, so the mind does not have to RADICALLY
switch from one worldview to another one so different that it
basically has nothing in common with the first.
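
A minimal Swing sketch of that convention, just to make it concrete
(the menus here are empty placeholders, not a real application):

import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.SwingUtilities;

public class ConsistentMenuDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Any application");
            JMenuBar bar = new JMenuBar();
            bar.add(new JMenu("File"));   // open, save, exit live here
            bar.add(new JMenu("Edit"));   // cut, copy, paste live here
            bar.add(new JMenu("Help"));   // about, documentation live here
            frame.setJMenuBar(bar);
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}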

But what does it have to do with design patterns?

Well, it happens to apply to the idea of design patterns
just as well. How?

First of all, why do you need some "design pattern"?
Don't you have your own brain to structure the system
in the most optimal way?

What IS a design pattern?
Is it some kind of pill against stupidity?
Is it some kind of recipe for "enlightenment" or a way to heaven?

Nope.

Take, for example, the idea of an interface as such.
What is an interface?

Well, simple enough.
It is simply a way to communicate between systems that
minimizes the number of different ways of looking at different
things by providing the most common parameters.
Secondly, it shields you from knowing the internals
of the other system you are trying to communicate with.

So, if you perform some operation, there is a required
minimum of parameters to pass regardless of what kind of
operation you are going to perform.

That is basically an optimization issue.

Beyond that, you can achieve the "code reuse" concept.

If a system is architected in a general enough way,
then each component of that system may perform the necessary
operations in multiple situations and produce the correct
result REGARDLESS of who the client or consumer of that
operation is. That is basically all there is to it.
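
A minimal sketch of that point, with made-up names: the Storage
interface exposes only the operations a caller needs, so the same
component is reused unchanged by any client, and the client never
sees the internals.

import java.util.HashMap;
import java.util.Map;

// The interface: the required minimum of operations, nothing more.
interface Storage {
    void save(String key, byte[] data);
    byte[] load(String key);
}

// One concrete implementation; callers never see the Map behind it.
class InMemoryStorage implements Storage {
    private final Map<String, byte[]> store = new HashMap<>();
    @Override public void save(String key, byte[] data) { store.put(key, data); }
    @Override public byte[] load(String key) { return store.get(key); }
}

// A consumer written purely against the interface works with any implementation,
// regardless of who else is using that same component.
class ReportJob {
    private final Storage storage;
    ReportJob(Storage storage) { this.storage = storage; }
    void run() { storage.save("report", "totals...".getBytes()); }
}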

A poorly designed system has a separate piece of code to
perform a similar operation in a slightly different
context. As a result, you are basically dealing with
"special cases" no matter where you look in your system.

Another aspect of "design patterns" is the one that
ties in directly with the ability to very easily comprehend
some operation or a set of parameters and to be able to
understand a different piece of code. The information
overload issue.

In a well-designed system, you should be able to look at
any piece of code and understand what it does
within seconds. Nowadays, THAT is the time frame.

We can no longer afford switching our minds from one worldview
to another, with its thousands or even millions of
different parameters and points of view.

The switch has to be smooth and easy.
That means things need to be consistent.
There is a limited logical number of things that may
happen in ANY system, no matter what it does.

So, when you look at a totally different part of your system,
you don't want to have to switch your mind from the Bible to
the Koran, to Yoga, or to Existentialism.

You'll simply go crazy one day if you do these things
hundreds if not thousands of times a day.

You have to have some solid ground to stand on.

No wonder Jesus said:

"A house that was built on the sand is bound to fall."
Correct.

Add the debugging aspect.

Since it is not possible to write "correct" programs
on the very first attempt, no matter what some smart guy tells you,
you will have to debug your code.

So, when you write that code, you need to be aware of
two fundamentally different approaches:

1) Synchronous mode
2) Asynchronous mode

In synchronous mode everything is simple,
because you can follow each step of what your program does.
It is all sequential. One thing logically follows the other.

In async mode, it is a totally different world.
You can hardly single-step it.
You'll come to some point where one component of your
system will perform a write or send operation and that's it.

You won't be able to see the result of that write in
a synchronous manner.

Some event will occur and will trigger some other method
to handle the input or the result of your write.

Considering the fact that modern programs are mostly
multithreaded, the task of debugging becomes immense
if not monumental.
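
A small sketch of the contrast (fetchPage is a stand-in for real I/O,
not a call from any actual library):

import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {
    // Synchronous: each line completes before the next, so a debugger can
    // single-step straight through and inspect the result where it is used.
    static void synchronousStyle() {
        String page = fetchPage("http://example.com");   // blocks until done
        System.out.println("got " + page.length() + " chars");
    }

    // Asynchronous: the call returns immediately and the result arrives later
    // in a callback, usually on another thread, so there is no single place
    // to put a breakpoint and watch cause and effect in order.
    static void asynchronousStyle() {
        CompletableFuture
            .supplyAsync(() -> fetchPage("http://example.com"))
            .thenAccept(page -> System.out.println("got " + page.length() + " chars"));
        // control is already back here; the println above fires whenever the fetch finishes
    }

    // Stand-in for real network I/O.
    static String fetchPage(String url) {
        return "<html>...</html>";
    }

    public static void main(String[] args) throws Exception {
        synchronousStyle();
        asynchronousStyle();
        Thread.sleep(200); // give the async callback time to run before the JVM exits
    }
}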

But what does it have to do with design patterns?

Well, that means that when you have some issue to resolve,
yes, you CAN recall or review some "design pattern"
that helps you solve it.

But be very careful to consider ALL sorts of issues,
such as:

1) Information overload.
2) Ability to read and understand the code fast, within seconds.
3) Ability to debug this thing.
4) Ability to log this thing so it could be easily fixed.
If your design pattern exhibits async behavior that is
not logically necessary, you may have a hell of a time
debugging or reviewing the log information.
5) What does that design pattern buy you in the end?
What issues does it solve?
What impact on system stability and performance does it have?

Get the drift?

:--}





 
Joshua Maurice

> I think that this does not have to do anything with soul.
> Fact is that algorithm cannot think and that's it.
> Human consciousness and intelligence does not works
> on algorithm. Plain fact. We can invent algorithm,
> but algorithm itself can't produce previously
> unknown algorithm. But human brain can.
> This is mathematical fact....

I believe your valid argument goes something like this:

definition 1- Creativity is the ability to create an algorithm to
solve any solvable problem.
premise 2- Various correct proofs demonstrate that no algorithm is
"creative".
premise 3- Humans are creative.
--
conclusion 4- Therefore, human consciousness does not operate on an
algorithm.

Your argument is valid; that is, its conclusion follows logically from
its premises. However, it is not sound: some of your premises are
false, and thus we cannot conclude that the conclusion is true.

Definition 1. I might argue over the definition of creative, but let's
just go with your definition.

Premise 2 is true (under your definition of creative).

Premise 3 is not true (under your definition of creative). At least,
you have yet to convince me that it is true. Put another way, yes, we
have proofs that a general problem solver is impossible, and that the
halting problem cannot be solved by a Turing machine. However, we
still have Maple and other pseudo-general (math) problem solvers. My
calculator can still solve most / all calculus equations I encountered
in high school, and my human brain can still solve most problems put
to it (given enough time). However, I do not believe that I could
solve every solvable problem, nor do I believe that I could determine
whether some Turing machine would halt, for every possible Turing
machine (ignoring time constraints).
 
tanix

> I believe your valid argument goes something like this:
>
> definition 1- Creativity is the ability to create an algorithm to
> solve any solvable problem.

"There are no problems to be solved.
There are only mysteries in life."

> premise 2- Various correct proofs demonstrate that no algorithm is
> "creative".
> premise 3- Humans are creative.

 
Nick Keighley

> I believe your valid argument goes something like this:
>
> definition 1- Creativity is the ability to create an algorithm to
> solve any solvable problem.
> premise 2- Various correct proofs demonstrate that no algorithm is
> "creative".
> premise 3- Humans are creative.
> --
> conclusion 4- Therefore, human consciousness does not operate on an
> algorithm.
>
> Your argument is valid; that is, its conclusion follows logically from
> its premises. However, it is not sound: some of your premises are
> false, and thus we cannot conclude that the conclusion is true.
>
> Definition 1. I might argue over the definition of creative, but let's
> just go with your definition.

I was wondering how we sorted problems into "solvable" and
"unsolvable"...

> Premise 2 is true (under your definition of creative).
>
> Premise 3 is not true (under your definition of creative). At least,
> you have yet to convince me that it is true.

I was trying to remember where I first came across the Gödel argument
"disproving" AI (Weinburg?). It sounded like BS then and it sounds like BS now.

p1) machines must operate by a fixed algorithm
p1a) and hence are bound by Gödel's result.
p2) people do not have to operate by a fixed algorithm and hence are not
bound by Gödel's result.

conclusion: people can do things machines can't do

Well, whoop-de-doo. I didn't accept p1 and p2 originally. Now I'm not
convinced p1a is even applicable.

It's like winning a race by disqualifying the other contestants.
 
Alf P. Steinbach

* Nick Keighley:
> I was trying to remember where I first came across the Gödel argument
> "disproving" AI (Weinburg?). It sounded like BS then and it sounds like BS now.
>
> p1) machines must operate by a fixed algorithm
> p1a) and hence are bound by Gödel's result.
> p2) people do not have to operate by a fixed algorithm and hence are not
> bound by Gödel's result.
>
> conclusion: people can do things machines can't do
>
> Well, whoop-de-doo. I didn't accept p1 and p2 originally. Now I'm not
> convinced p1a is even applicable.
>
> It's like winning a race by disqualifying the other contestants.

Hm, this is very OFF TOPIC, but p1 is false, and p1a is meaningless (it doesn't
follow even if p1 were true; it's a category error). p2 is meaningless.

Roger Penrose, the inventor of the above, is a genius (e.g. Penrose tiles, his
work with Hawking, etc.), but he is also utterly mad -- like (at least) 89%
of the US population, 10% of US scientists, and about 65% of Middle East
scientists. Blaise Pascal was, I think, another example of the kind. Just
different religious issues.


Cheers & hth.,

- Alf
 
tanix

> * Nick Keighley:
>
> Hm, this is very OFF TOPIC, but p1 is false, and p1a is meaningless (it doesn't
> follow even if p1 were true; it's a category error). p2 is meaningless.
>
> Roger Penrose, the inventor of the above, is a genius (e.g. Penrose tiles, his
> work with Hawking, etc.), but he is also utterly mad

:--}

> -- like (at least) 89%
> of the US population, 10% of US scientists, and about 65% of Middle East
> scientists. Blaise Pascal was, I think, another example of the kind. Just
> different religious issues.
>
> Cheers & hth.,
>
> - Alf

 
tanix


Ok, fine. I'll speak, no matter how hopeless it is.

It is not HIM who is "mad".
It is YOU.

Why?

Well because YOU do not claim your own being,
and YOU do not allow the expression of it,
being forever afraid to go against the herd
because great fear arises in you.

The fear of being condemned by others,
just as you condemn him in this very post.

But you know what?
As "mad" as he is,
his life is a life of a diamond
compared to your utterly gray existence.

MAD?

WHO?

You MUST be mad to waste your life like this
and not claim all the grandeur of it!

:--}

Enough?

Or you want more?

:--}

The mothership is FULLY loaded...

 
Branimir Maksimovic

Alf said:
> * Nick Keighley:
>
> Hm, this is very OFF TOPIC, but p1 is false, and p1a is meaningless (it
> doesn't follow even if p1 were true; it's a category error). p2 is
> meaningless.

p1 is false? Machines have to operate on an algorithm, and
since there is no algorithm for creativity...
it follows that
p2) people's creativity is not based on an algorithm...

> Roger Penrose, the inventor of the above, is a genius (e.g. Penrose
> tiles, his work with Hawking, etc.), but he is also utterly mad --

No, Penrose just found out what all teachers of mathematical logic
knew before him since the '30s...

Greets!
 
Miles Bader

Alf P. Steinbach said:
> Roger Penrose, the inventor of the above, is a genius (e.g. Penrose
> tiles, his work with Hawking, etc.), but he is also utterly mad --

I'm not sure "mad" is quite the word to use, but for someone so smart to
screw up his argument in such an obvious way certainly suggests he has
Issues (typically this ends up being religious or some other existential
insecurity).

-Miles
 
