Analytical Skill for Java Development


Patricia Shanahan

moejo said:
So how can one learn better problem solving skills? In addition to
problem solving, where can one find better algorithms to use?

Get an algorithms textbook, and read it. Don't just look at the
algorithms in isolation. Think about how they are put together. There
are design patterns they follow. For example, binary search is an
example of divide-and-conquer.
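For example, the divide-and-conquer shape of binary search is easy to see in a few lines of Java (a minimal sketch): each comparison discards half of the remaining range.

```java
public class Search {
    /** Classic divide-and-conquer: halve the sorted range each step. */
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;              // unsigned shift avoids overflow
            if (sorted[mid] < key)       lo = mid + 1;  // discard lower half
            else if (sorted[mid] > key)  hi = mid - 1;  // discard upper half
            else                         return mid;    // found
        }
        return -1; // not present
    }
}
```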

I'm not going to recommend a book, because the last time I bought an
algorithms book it was for a course, so I had to buy the one the
professor had chosen. I didn't look at any others.

I'm also convinced that problem solving improves with practice. The more
you think about problems and puzzles, the better you will be at it.

Patricia
 

Oliver Wong

steve said:
well the first one is the same as a simple binary sort, took just under 20
seconds to solve that one.

it's the same as having a sorted list, then finding a "break floor" directly
above a "non break floor"

the second one can be done by just taking a few rocks (same number from each
pile), adding them together, then looking at the total weight of each group
of rocks; whichever group is heavier is the winner.

the infinite part is just shoved in there to mess you up.

that one took longer, the "infinite" threw me.

Amazing.

Forget programming, you're obviously a much better fit for management!

- Oliver
 

Chris Uppal

I said:
Sadly, I suspect that I got the arithmetic wrong.

Even more sadly, I got some of the analysis wrong too. Anyway, I got sick of
trying to work this stuff out on bits of paper and threw the thing onto the
computer. Unless my code is buggy, the figures for testing floors
(zero-based):
0 10 20 30 40 50 60 70 80 90
are
mean throws: 11.61
worst case throws: 19

and for the floors:
9 19 29 39 49 59 69 79 89 99
I get:
mean throws: 10.9
worst case throws: 19

Neither of which is optimal. More discussion in another post in this thread.

-- chris
 

Stefan Ram

Adam Maass said:
1) You have two identical glass balls; when dropped from a certain height,

I have failed when trying to find a good solution for this
question and also when trying to find the solution for the
earth-walk-problem (walk 1 mile south ...).

I now have analyzed the cause of my failure and found a
procedure to avoid such failures in the future.

Analysis: In both cases, I immediately had an idea about the
correct solution. In the first case: bisecting the
interval recursively, which led me to the bad solution
of beginning at floor 50. In the second case: starting at the
north pole, which kept me from finding other solutions.

Interpretation: I call this phenomenon "stuckness".
(Although I just made up this word, Google finds some
interesting hits.) One is stuck with some solution and thus
kept from finding a better one. One sometimes experiences
this when searching for one's keys in the flat: one finds them
only after giving up the deliberate search. That is
because, when intentionally searching for the keys, one is stuck
with one's idea of where they should be, and thereby excludes
the place where they actually are from the search. Another example
of stuckness is [1] (given below).

Solution: One should try to forget what one knows and
evenly and coarsely partition the solution space. Then one should
evaluate every candidate without prejudice. In the case of the
balls this might lead to 11 candidates: start with the first
ball at floor 0, 10, 20, ..., 99. Then, when analyzing floor
10, ask: What would be the next step if it breaks? What would
be the next step if it does not break? This should lead to the
insight that this might be better than starting at floor 50. In
the case of the earth-walk, the candidates could be 11 circles of
latitude: the north pole, the south pole, a point at the
equator, and some circles in between. Analyzing a starting point
on the circle near the south pole might lead one to find
the other solution(s).

Comment: In optimizing generate-and-filter programs, one
"moves the filter into the generator" in order to avoid
generating nonsense only to filter it out later. The exercises
show the result of this happening in the human brain: better
solutions are missed in the case of artificially constructed
exercises. Still, the individual might be more efficient on
average.
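For illustration (a sketch of mine, not from the exercises): both pipelines below produce the even numbers up to 100, but the second has the filter moved into the generator, so the odd candidates are never generated at all.

```java
import java.util.List;
import java.util.stream.IntStream;

public class GeneratorFilter {
    // Generate every candidate, then discard half of them afterwards.
    static List<Integer> generateThenFilter() {
        return IntStream.rangeClosed(0, 100)
                .filter(n -> n % 2 == 0)
                .boxed().toList();
    }

    // Filter moved into the generator: odd candidates are never produced.
    static List<Integer> filterInGenerator() {
        return IntStream.iterate(0, n -> n <= 100, n -> n + 2)
                .boxed().toList();
    }
}
```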

Comment: Thus, these questions might not measure "skill" but
"stuckness", or the »distance between the generator and the filter«
for an individual. The problems might favor persons who can
increase the distance of the filter from the generator. This
might be an advantage for solving such exercises, but it is not
clear whether it is an advantage on average for real
work problems. The tests might also test some sort
of patience: whether one has a tendency to jump to conclusions
or takes time and care for a deeper investigation. In
real life, both approaches are sometimes beneficial. One
cannot always investigate every problem in depth.

[1]
Here are quotes from a related older article of mine:

QUOTE

Sometimes one believes that it is easy for children to
understand technology.

The German researcher Martina Ziefle did an experiment with
children and adults and found that this is not true. But there
was a difference: The children wanted to learn and thus had
much more patience than the adults.

A small report in German about this experiment is here:

http://www.heise.de/newsticker/meldung/print/37449

UNQUOTE

Comment: This shows the relevance of patience.

QUOTE

Alan Kay was teaching five-year-old children how to
program a circle: They were asked to walk in a circle
and to report what they did. The children would answer
"walk a little and turn a little." After that insight
they could write a program to draw a circle.

Ten-year-old children already knew what a circle is:
"the set of all points having the same distance from a
center." So they started to program individual points
radiating from a center, which was more complicated; and
the result was not a connected circle but only single
dots.

Fifteen-year-old children already knew the formula »r² = x² + y²«.
They tried to draw a circle using that formula, but
failed. (This formula is not a good starting point for such a
program.) Just because of their additional knowledge, it was
actually more difficult or impossible for them to write such a
program. At least that is what Alan Kay said in a video.

UNQUOTE

Comment: This illustrates my concept of »stuckness«:
the fifteen-year-old children were stuck with the
formula and thus were not able to find the solution
of the younger children.
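Comment: The five-year-olds' recipe translates directly into a program. A sketch of mine (approximating the circle by an n-sided polygon): after "walking a little and turning a little" all the way around, the walker is back at the start.

```java
public class CircleWalk {
    /** Walk n short unit steps, turning 360/n degrees after each;
     *  returns the distance from the starting point at the end. */
    static double walkError(int n) {
        double x = 0, y = 0, heading = 0;
        for (int step = 0; step < n; step++) {
            x += Math.cos(heading);          // walk a little
            y += Math.sin(heading);
            heading += 2 * Math.PI / n;      // turn a little
        }
        return Math.hypot(x, y);             // ~0: back at the start
    }
}
```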
 

Dimitri Maziuk

Patricia Shanahan sez:
.... The interesting part of this problem is
working out the strategy for using the first ball, given that limitation.

Not really: 100 data points are simply too few to warrant any
special strategy. What you do is identify the slowest part of
the process, and it's getting down from floor n to pick up the
ball and getting back up to floor n+1 to throw it again.
It becomes obvious, then, that the right solution is to Google
for ball manufacturer and borrow 98 more balls from them. Then
you do a plain linear search going up. Once a ball breaks, you
return 99 balls (98 + interest) to the manufacturer.

See also "elevator algorithm".

Dima
 

Chris Uppal

I can do it in a worst case of 16 right now, maybe even less.

The basic flaw I saw in the original, increase by 10 then drop, is that
your search area increases.

Good point. But I don't think you've reached the optimum.

As I said elsewhere, I got sick of trying to work this out by hand, and threw
it on the computer. I also wrapped a hill-climbing algorithm around that
calculation to see if I could get any closer to optimal solutions. That isn't
guaranteed to find global optima (though it seems plausible since the search
space is smooth except for the jaggedness caused by integer rounding[*]), but
it does produce moderately interesting results. I've run it with different
numbers of ball1-probes in the range 9 through 16.

The algorithm is trying to optimise the average number of throws, but I did
modify it to look for best worst-case instead (which is /much/ slower) and --
for the cases I checked (9 through 13) -- it does seem that the strategies with
the best average also have the best worst case.

The first thing is that (according to the algorithm) sqrt(N) is not the
optimum number of probes with ball 1 after all. It seems that the best we can
do with 10 probes is:
{ 14 27 39 50 60 69 77 84 90 95 } (NB: 1-based floor numbers!)
mean: 10.34 (4.9 & 5.44)
worst case: 14

That particular case is interesting, because the "optimum" solution is unique.
And it also follows exactly the (+14, +13, +12, ....) pattern. In the other
cases there is no unique best solution, and also there are "glitches" in the
interval sequences. In each case, though, there is a solution with only 1
glitch (or only a few when we are looking at longer probe sequences where we
need more glitches just to fit the entire sequence in). It seems as if the
optimum "wants" to be a sequence like that, but it cannot always quite manage
it (if you see what I mean ;-)

Anyway, some more data. The discovered "optimum" sequences of various lengths
are:

9: { 5 29 42 54 64 73 81 88 94 }
mean: 10.44 (4.54 & 5.9)
worst case: 15

10: { 14 27 39 50 60 69 77 84 90 95 }
mean: 10.34 (4.9 & 5.44)
worst case: 14

11: { 14 27 39 50 60 69 77 84 90 94 97 }
mean: 10.31 (4.96 & 5.35)
worst case: 14

12: { 14 27 39 50 60 69 77 84 89 93 96 98 }
mean: 10.3 (5.02 & 5.28)
worst case: 14

13: { 14 27 39 50 60 69 77 83 88 92 95 97 98 }
mean: 10.3 (5.09 & 5.21)
worst case: 14

14: { 13 25 36 46 55 63 71 78 84 89 93 96 98 99 }
mean: 10.3 (5.53 & 4.77)
worst case: 14

15: { 13 25 36 46 55 63 71 78 84 89 93 96 98 99 100 }
mean: 10.31 (5.54 & 4.77)
worst case: 15

16: { 13 25 36 46 55 63 70 76 82 87 91 94 96 97 98 99 }
mean: 10.33 (5.71 & 4.62)
worst case: 16

So 12 through 14 probes can all be used to create a strategy that requires an
average of 10.3 throws with a worst case of 14. (BTW, there's no rounding in
the floating-point numbers).
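The calculation itself is easy to reproduce. Here is a minimal Java sketch (not my actual program; it assumes the target floor is uniform on 1..100, 1-based probe floors, and a plain linear search with ball 2 above the last probe):

```java
public class EggDrop {
    /** Throws used when the lowest breaking floor is t (1-based),
     *  probing with ball 1 at the given ascending floors, then
     *  scanning the remaining gap linearly with ball 2. */
    static int throwsFor(int[] probes, int t, int top) {
        int prev = 0;
        for (int i = 0; i < probes.length; i++) {
            if (t <= probes[i]) {
                // ball 1 broke on throw i+1; ball 2 scans prev+1 .. probes[i]-1
                int ball2 = (t == probes[i]) ? probes[i] - prev - 1 : t - prev;
                return (i + 1) + ball2;
            }
            prev = probes[i];
        }
        // ball 1 survived every probe; ball 2 scans prev+1 upward
        return probes.length + ((t == top) ? top - prev - 1 : t - prev);
    }

    public static void main(String[] args) {
        int[] probes = {14, 27, 39, 50, 60, 69, 77, 84, 90, 95};
        int worst = 0, total = 0;
        for (int t = 1; t <= 100; t++) {
            int n = throwsFor(probes, t, 100);
            worst = Math.max(worst, n);
            total += n;
        }
        System.out.printf("mean: %.2f, worst case: %d%n", total / 100.0, worst);
        // prints: mean: 10.34, worst case: 14
    }
}
```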

-- chris

[*] FWIW, it does seem to converge on the same solution when started from
rather distant points in the search space -- but I only tried that once or
twice so it may be a coincidence.
 

Chris Uppal

Patricia said:
Yes, if I were going to prove this I would assume a different strategy
and show that it can leave a lower floor unevaluated when the ball breaks.

I think the reason is the same for both phases. Start with phase 2, where we
are using ball 2 to find the target floor in a contiguous block of floors. It
is clear that any testing order other than starting at the bottom and working
up risks breaking the ball before discovering the target floor. Alternatively:
in order to ensure a correct diagnosis of floor n, you /must/ have tested
floor n-1 sometime earlier, so we can prove by induction that you must test
the floors in order going upwards.

But the above doesn't use the fact that the floors form a contiguous block. So
the same result applies to /any/ subset of floors that you want to test -- if
we don't want to risk breaking the ball before it has given us as much
information as it can, then we have to take them in order going upwards. Also,
since the floors are all equally likely (by hypothesis[*]) to be the target,
there can be nothing to gain by taking the risk of an early break. So,
whenever we identify a subset of floors we want to test (with either ball) we
take them in order.

For completeness, note that the balls are interchangeable, but cannot be used
after they have broken. So -- whatever we do with them -- the solution will
automatically fall into 2 phases: use one ball (not necessarily the same ball
each time) until it breaks. Then use the other. Once the first ball has
broken, we have only one strategy available: linear search (as shown earlier);
we can optimise that by choosing our end-points based on the data obtained with
ball1, but that's all we can do. That means that we have no room for
conditional strategies with ball1 -- in effect any strategy we choose will be
to nominate a fixed set of floors that we wish to probe. As shown above, the
optimal order to take them in is bottom up too.

It's worth mentioning that if we run off the end of our set of probes with
ball1 still unbroken, then we could do two things. We can either pocket that
ball and try to get a refund later, or we could restart the 2-ball probing
algorithm. But the second case is identical to just having started with an
extended set of probes in the first place. I now suspect that this is the
reason why (it appears[**]) the optimum strategy uses rather more than sqrt(N)
probes. You might say that the algorithm "wants" to be optimal with sqrt(N)
but the fact that we can restart with two balls in the topmost partition "drags
in" more probes.

-- chris

[*] if this hypothesis is untrue, and especially if the probabilities are
heavily skewed, then it may be worth taking a risk with ball1. E.g. if an
overwhelmingly large proportion of balls will only break when dropped from the
top floor, then starting with ball1 at the next floor down would be a good
strategy.

[**] I've been doing some experiments, and it appears that the optimum is
around 13 -- more details in another post in this thread.
 

Raymond DeCampo

Andrea said:
well, if you know how heavy the stones are it's not a problem, is it?
And it is in the requirements, as far as I understand.
To state it clearly, all rocks from all piles have the same weight w,
known. Only one pile has rocks with the same weight w+d, also known.
This is how I understand it.

Even if the weights are not known, it can be done if you have an upper
and lower bound on how much more the heavier rocks weigh. The problem
clearly implies that the rocks weigh a little more, not say twice as
much. Also implied by the problem is that the heavier rocks weigh at
least 1.01 kg.

So now take b^k rocks from pile k (piles numbered starting at zero; b will
be determined below). If all the weights were equal, the sample would weigh
x = 1 + b + b^2 + ... + b^n. Suppose the kth pile has the heavier rocks, and
let d be the difference between the actual weight and x. Then b^k * 0.01 =
b^k / 100 <= d <= b^k. For the observed d to identify k uniquely, these
intervals must not overlap, so we need to select b such that

b^k < b^(k+1) / 100, or
1 < b / 100, or
b > 100

Ray
 

Raymond DeCampo

Eric said:
Adam Maass wrote On 02/08/06 03:42,:



If we accept Tony Morris' thesis that one should prefer
skill over experience, the ability to answer these questions
might well DISqualify the answerer. Both are old chestnuts,
which an experienced person has almost certainly encountered
before. (Besides: The "usual" answer for questions like #2
will not work for the question as stated! The person to hire
is not the guy who "solves" #2, but the guy who says it can't
be done.)

#2 can be solved as stated. While the exact difference in mass is not
given, the order of magnitude is. Using this, one can find the pile
using a modified version of the traditional solution. See my other post
for details.

Ray
 

wolfgang zeidler

Hi Ingo,
Hi wolfgang,



That is correct - although you get different points of course - but the
number is the same.


Eh... I do not understand what you are trying to say.

So, what is your answer to the question(s)?:

(1) How many points are there?
(2) Where is/are this/these point/s?
I do *not* wonder why you didn't understand answers 2 and 3:
it was because these answers were wrong.

So I'll try it again:
For every number i = 1, 2, 3, ...
let B be the border of a circle around the South Pole
with radius ~ ( 1 + 1 / i / PI ) km.
From all points of all these circles you may go 1 km south,
1 km west ( or east ) and 1 km north
- and you will be back at your starting point.

Nevertheless we should still distinguish between answer 2 and answer 3:

answer 2 = "mathematical view":
There is an infinite number of possibilities,
because the number of circles is infinite
and every circle has an infinite number of points.

answer 3 = "physical view":
There is only a huge but not infinite number,
because the world is / may be discrete.
Regards from Bielefeld (which *does* exist)
Ingo

Regards ( to Bielefeld, you convinced me that it exists ),
Wolfgang
 

wolfgang zeidler

wolfgang said:
For every number i = 1, 2, 3, ...
let B be the border of a circle around the South Pole
with radius ~ ( 1 + 1 / i / PI ) km.
From all points of all these circles you may go 1 km south,
1 km west ( or east ) and 1 km north
- and you will be back at your starting point.


better:
For every number i = 1, 2, 3, ...
let B be the border of a circle around the South Pole
with radius ~ ( 1 + 1 / i / 2 / PI ) km.
From all points of all these circles you may go 1 km south,
1 km west ( or east ) and 1 km north
- and you will be back at your starting point.
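In Java terms (my own sketch of the arithmetic, treating the region near the pole as flat): the i-th starting circle lies 1 km north of a circle whose circumference is exactly 1/i km, so the 1 km westward walk wraps around the pole exactly i times.

```java
public class PoleWalk {
    /** Distance from the South Pole (km) of the i-th starting circle. */
    static double startRadiusKm(int i) {
        // walking 1 km south must land on a circle of circumference 1/i km,
        // i.e. of radius 1 / (2 * PI * i) km
        return 1.0 + 1.0 / (2 * Math.PI * i);
    }

    /** How many times the 1 km westward walk wraps around the pole. */
    static double wraps(int i) {
        double circumference = 2 * Math.PI * (startRadiusKm(i) - 1.0);
        return 1.0 / circumference;      // = i
    }
}
```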
 

Oliver Wong

Chris Uppal said:

As I said elsewhere, I got sick of trying to work this out by hand, and threw
it on the computer. I also wrapped a hill-climbing algorithm around that
calculation to see if I could get any closer to optimal solutions. That isn't
guaranteed to find global optima (though it seems plausible since the search
space is smooth except for the jaggedness caused by integer rounding[*]), but
it does produce moderately interesting results. I've run it with different
numbers of ball1-probes in the range 9 through 16.

[snipped some interesting results]

Nice work. But since you seem to be calculating average-performance (and
not just worst-case performance), what assumptions are you making about the
distribution of glass-shattering heights? Uniform?

- Oliver
 

Oliver Wong

Chris Uppal said:
[**] I've been doing some experiments, and it appears that the optimum is
around 13 -- more details in another post in this thread.

If we're both thinking of the same post, it would seem that your post
says the optimum is 14. My best is currently also 14. If you can find a
solution for 13, I'd be really interested in seeing it.

- Oliver
 

Chris Uppal

Oliver Wong wrote:

[me:]
[**] I've been doing some experiments, and it appears that the optimum
is around 13 -- more details in another post in this thread.

If we're both thinking of the same post, it would seem that your post
says the optimum is 14. My best is currently also 14. If you can find a
solution for 13, I'd be really interested in seeing it.

I may have misled you. What I meant was that it seemed that 13 (or so) was
the optimum number of probes to use in phase 1. I /think/ you are talking
about the worst case throws with a 10-probe strategy in phase 1 (or, perhaps,
any strategy). In any case, the post I'm talking about is the one with all the
numbers (to which you have already replied ;-) I can't beat a worst-case of 14
with any number of probes.

BTW, the only solution you've described (as far as I remember) was (one-based):
{ 14 27 39 50 60 69 76 92 96 }
which has a worst-case of 23 ;-) Not surprising since there's no probe in the
big gap between 76 and 92. That's obviously a typo, but I don't think there's
anywhere you can put a probe in that gap which brings the worst-case below 16.
For instance, putting it in at 85 gives a worst-case of 16 and an average of
10.38.

-- chris
 

Chris Uppal

Oliver said:
Nice work. But since you seem to be calculating average-performance
(and not just worst-case performance), what assumptions are you making
about the distribution of glass-shattering heights? Uniform?

I'm assuming that each floor has an equal chance of turning out to be the
target, and that one of the floors /is/ the target.

BTW, I've realised that my code doesn't take proper advantage of the chance to
skip a throw if the strategy calls for chucking ball1 off the top two floors.
I don't think it affects any of my figures except perhaps the 15 case. Even in
that case, the effect would be small -- but maybe[*] large enough to permit a
strategy with an average of 10.3 and worst-case 14 like the other "good" ones.

-- chris

[*] pure speculation...
 

Oliver Wong

BTW, the only solution you've described (as far as I remember) was (one-based):
{ 14 27 39 50 60 69 76 92 96 }
which has a worst-case of 23 ;-) Not surprising since there's no probe in the
big gap between 76 and 92. That's obviously a typo, but I don't think there's
anywhere you can put a probe in that gap which brings the worst-case below 16.
For instance, putting it in at 85 gives a worst-case of 16 and an average of
10.38.

Damn!

And here I was, thinking I had a 14-drop solution the whole time. Oh
well, back to the drawing board.

- Oliver
 

Roedy Green

That seems intuitively obvious, but how do you prove it? I think you
need some sort of reductio ad absurdum approach.

Perhaps an inductive proof is what is needed: prove that if the
theorem holds for N it must also hold for N+1.
 

Patricia Shanahan

Oliver said:
Damn!

And here I was, thinking I had a 14-drop solution the whole time. Oh
well, back to the drawing board.

- Oliver

I think your solution concept is correct, even if the details need to be
reworked. You broke the pattern you seemed to be following when you used
76 rather than 77 (69+8, where the previous increment was 9).

How about 14, 27, 39, 50, 60, 69, 77, 84, 90, 95, 99, 100?

Patricia
 

Ingo R. Homann

Hi SR,

Stefan said:

In short: People want mobile phones which are easy to use. Who would
have suspected that? ;-)
Alan Kay was teaching five-year-old children how to
program a circle: They were asked to walk in a circle
and to report what they did. The children would answer
"walk a little and turn a little." After that insight
they could write a program to draw a circle.

Ten-year-old children already knew what a circle is:
"the set of all points having the same distance from a
center." So they started to program individual points
radiating from a center, which was more complicated; and
the result was not a connected circle but only single
dots.

Fifteen-year-old children already knew the formula »r² = x² + y²«.
They tried to draw a circle using that formula, but
failed. (This formula is not a good starting point for such a
program.) Just because of their additional knowledge, it was
actually more difficult or impossible for them to write such a
program. At least that is what Alan Kay said in a video.

UNQUOTE

Comment: This illustrates my concept of »stuckness«:
the fifteen-year-old children were stuck with the
formula and thus were not able to find the solution
of the younger children.

Yes, but what did the program and the circle of the five-year-old
children look like? (E.g., did they really end where they started?)

I think the problem here is a different one: Writing a program (even if it
only has to draw a circle) is much more complicated than everybody thinks.
So, if I understand you correctly, *none* of the children succeeded.

By the way: I do not understand how the ten-year-olds could program a
circle without knowing Pythagoras' formula. Did they use sin and cos?

Ciao,
Ingo
 
