[OT] IBM in talks to buy Sun


Arne Vajhøj

Mike said:
"Can" isn't "will". And a big complex codebase is awfully difficult to
support once the expertise goes away. Especially one that (again if things
haven't changed much in the past eight years) is barely maintainable even by
those experts.

But if you need to take over a big complex codebase, then the users
are about as good as they can get.

There are probably 300000-500000 NetBeans users out there.

And every one of them is a Java programmer.

Arne
 

John B. Matthews

"Larry K. Wollensham said:
That's because I wasn't responding to point 2.

Point 2 insightfully explains how the value of a maintenance contract
may grow, even as the value of the maintained hardware falls.
 

Lew

Are you pricing high-reliability, 10K+ RPM SAS or SATA drives with large RAM
buffers in a rack-mount format?

<https://www-01.ibm.com/products/har...bility=true&lenovo=false&display_leasing=true>
A 14 TB SATA system is roughly $21.8K plus maintenance. Call it $1,500 per
TB, not including further costs.

<http://shop.sun.com/is-bin/INTERSHO...Name=Sun_NorthAmerica-Sun_Store_US-SunCatalog>
Roughly twelve grand (plus maintenance) for 12 TB, and that's only 7200 RPM
SATA. Call it $1,000 per TB, not including further costs.
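
Rough arithmetic, if anyone wants to check the per-TB figures (prices and
capacities are the rough list figures quoted above; maintenance and other
costs left out):

// Back-of-the-envelope cost per TB for the two systems quoted above.
// Prices are rough list prices; maintenance and other costs excluded.
public class CostPerTerabyte {
    static double perTb(double systemPriceUsd, double capacityTb) {
        return systemPriceUsd / capacityTb;
    }

    public static void main(String[] args) {
        System.out.printf("IBM 14 TB SATA at ~$21.8K: ~$%.0f/TB%n", perTb(21800, 14)); // ~$1557/TB
        System.out.printf("Sun 12 TB SATA at ~$12K:   ~$%.0f/TB%n", perTb(12000, 12)); // $1000/TB
    }
}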

One thing big-iron shops avoid is consumer-grade hardware.

If I were a betting man, I'd bet that anyone who's buying these systems is
buying extra hard drives, more controllers, faster cables, and then of course
you have the racks themselves, electricity, and finally personnel to maintain
and manage all that.
 

Lew

Arne said:
I assume that must have been a joke.

You cannot in general assume that 26 has any magic meaning in this regard.

And what we do know is that besides there being a number of suppliers, the
products must also be fully interchangeable.

That is not the case for most IBM and Sun products.

I've been racking my brain to think of a single market segment in my
experience that has 26 competitors.

Maybe restaurants, but even there you have to segment between fast food,
casual dining, and serious dining. Yes, in New York (properly known as "the
city"), you can maybe find 26 pizza parlors within a few blocks' radius. (In
the states of Maryland and Virginia or the District of Columbia, not one a
New Yorker would agree actually sells pizza, but that's beside the point.)
Many of those probably have common ownership, though.

Outside of big cities, the numbers drop sharply.

This doesn't contradict Roedy's claim, necessarily. Perhaps it's the case
that there is no true competition among lawn-mower dealers in my area, but I
need a stronger argument than presented here to convince me.
 

Mike Schilling

Arne said:
But if you need to take over a big complex codebase, then the users
are about as good as they can get.

There are probably 300000-500000 NetBeans users out there.

And every one of them is a Java programmer.

I gotta admit, I'd love to see a product developed by hundreds of thousands
of people. Not use it, mind you, just see it.
 

Larry K. Wollensham

John said:
Point 2 insightfully explains how the value of a maintenance contract
may grow, even as the value of the maintained hardware falls.

It's "how badly does the owner of that 14TB disk farm want to keep it
working" that I was wondering. With hardware prices falling, the amount
you can charge them to keep it working will also eventually have to
fall. In the limit, if 14TB disk farms cost pennies and were easy to
install and use, people who needed them would just buy them, get them
running, and replace them whenever they went kaput, and the heck with
maintenance contracts. Much as they do with PCs now even though at one
time *nobody* treated a computer that way.
 

Larry K. Wollensham

Lew said:
Are you pricing high-reliability, 10K+ RPM SAS or SATA drives with large
RAM buffers in a rack-mount format?

A 14 TB SATA system is roughly $21.8K plus maintenance. Call it $1,500
per TB, not including further costs.

Roughly twelve grand (plus maintenance) for 12 TB, and that's only 7200
RPM SATA. Call it $1,000 per TB, not including further costs.

One thing big-iron shops avoid is consumer-grade hardware.

That might change when they do the math.

To compensate for slower speeds, double the number of file servers
behind a load-balancer, and ultimately the number of disks. This works
as long as you don't need to serve *single* files *really* fast.

To compensate for lower reliability (but consumer grade hardware is
getting better), assume a doubled disk replacement rate in the RAIDs.

Overall, that means four times the disks. If they're $200/TB each the
above doublings produce $800 in place of the $1000-1500 you cite for the
non-consumer-grade hardware.
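
In code form, for anyone who wants to poke at the numbers (the 2x factors
and the $200/TB consumer price are just the assumptions stated above, not
measurements):

// Consumer-grade vs. enterprise-grade cost per TB under the assumptions above:
// double the servers (and disks) for speed, double the disk replacement rate
// for reliability, consumer disks assumed at $200/TB.
public class CommodityDiskMath {
    public static void main(String[] args) {
        double consumerPerTb = 200.0;    // assumed consumer-disk price per TB
        double speedFactor = 2.0;        // twice the file servers behind the load-balancer
        double reliabilityFactor = 2.0;  // twice the replacement rate in the RAIDs

        double effective = consumerPerTb * speedFactor * reliabilityFactor;
        System.out.printf("Effective consumer-grade cost: ~$%.0f/TB%n", effective); // $800/TB
        System.out.println("Quoted enterprise-grade cost:  ~$1000-1500/TB");
    }
}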

Parallelism in various forms (multiprocessors, load-balanced clusters,
RAID, and so forth) makes consumer-grade hardware able to "add up" to be
equivalent to higher-grade hardware. Sometimes still with lower price tags.

Of course, how efficiently these clusters can be powered or cooled is an
issue; which is why you're starting to see makers of PC parts producing
energy-efficient parts that can be rack-mounted, water-cooled, and the
like. This lets them expand their (low-margin, so always
expansion-hungry) businesses into parts of the server/mainframe market,
at the expense of companies like IBM and Sun. It's this pressure from
below, combined with the present world economic situation, that is
probably driving this contemplation of a merger.
If I were a betting man, I'd bet that anyone who's buying these systems
is buying extra hard drives, more controllers, faster cables, and then
of course you have the racks themselves, electricity, and finally
personnel to maintain and manage all that.

A plug-it-in-and-away-you-go (or, I think they sometimes say,
"turn-key") solution has an opportunity to get in there and start
out-competing the existing ones then.

Actually, I recall hearing of Sun Microsystems working on something like
that fairly recently. A plug-and-play "data center in a box" the size of
a standard shipping container. I think they planned to even rent them out.
 

blue indigo

Mike Schilling said:
I gotta admit, I'd love to see a product developed by hundreds of thousands
of people. Not use it, mind you, just see it.

Then go right ahead and click this link: http://www.linux.org/

(Nearly 4000 contributors just in the past four years, and it's been
around a lot longer.)
 

Mike Schilling

blue said:
Then go right ahead and click this link: http://www.linux.org/

(Nearly 4000 contributors just in the past four years, and it's been
around a lot longer.)

And it's tightly controlled by a small group of dedicated experts; the
vast, vast majority of the contributors make nothing more than simple
bugfixes; even those are extensively vetted.
 

blue indigo

Mike Schilling said:
And it's tightly controlled by a small group of dedicated experts; the
vast, vast majority of the contributors make nothing more than simple
bugfixes; even those are extensively vetted.

Nonetheless, it has certainly had at least tens of thousands of
developers, if not actually hundreds of thousands. And it has discovered
and demonstrated processes that enable those kinds of numbers without
making a mess of things, a model other projects can in principle copy.
 

Mike Schilling

blue said:
Nonetheless, it has certainly had at least tens of thousands of
developers, if not actually hundreds of thousands. And it has discovered
and demonstrated processes that enable those kinds of numbers without
making a mess of things, a model other projects can in principle copy.

OK. I agree that if IBM drops NetBeans and it's adopted by someone as
talented and dedicated as Torvalds, it'll be fine. But Linux has had
hundreds of developers only in the sense that _The Yiddish
Policeman's Union_ had dozens of authors when you count all the
editors, proofreaders, and typesetters.
 

blue indigo

Mike Schilling said:
OK. I agree that if IBM drops NetBeans and it's adopted by someone as
talented and dedicated as Torvalds, it'll be fine. But Linux has had
hundreds of developers only in the sense that _The Yiddish
Policeman's Union_ had dozens of authors when you count all the
editors, proofreaders, and typesetters.

According to http://en.wikipedia.org/wiki/History_of_Linux and nearby
pages, "the largest part of the work on Linux is performed by the
community", and Linus's own role is diminishing. About two percent of the
code in the 2.6 kernel is Linus's own.
 

John B. Matthews

"Larry K. Wollensham said:
It's "how badly does the owner of that 14TB disk farm want to keep it
working" that I was wondering. With hardware prices falling, the
amount you can charge them to keep it working will also eventually
have to fall. In the limit, if 14TB disk farms cost pennies and were
easy to install and use, people who needed them would just buy them,
get them running, and replace them whenever they went kaput, and the
heck with maintenance contracts. Much as they do with PCs now even
though at one time *nobody* treated a computer that way.

The owner who wants to keep something working may consider the higher
cost a hedge[*] against loss of service or data, both of which are
typically more valuable than the hardware itself.

[*] <http://en.wikipedia.org/wiki/Hedge_(finance)>
 

Nigel Wade

John said:
Larry K. Wollensham said:
It's "how badly does the owner of that 14TB disk farm want to keep it
working" that I was wondering. With hardware prices falling, the
amount you can charge them to keep it working will also eventually
have to fall. In the limit, if 14TB disk farms cost pennies and were
easy to install and use, people who needed them would just buy them,
get them running, and replace them whenever they went kaput, and the
heck with maintenance contracts. Much as they do with PCs now even
though at one time *nobody* treated a computer that way.

The owner who wants to keep something working may consider the higher
cost a hedge[*] against loss of service or data, both of which are
typically more valuable than the hardware itself.

[*] <http://en.wikipedia.org/wiki/Hedge_(finance)>

Quite.

You can't treat a RAID as a throw-away object, to simply be replaced if it
fails, unless you regard your data with the same cavalier attitude.
 

Lew

John B. Matthews said:
The owner who wants to keep something working may consider the higher
cost a hedge[*] against loss of service or data, both of which are
typically more valuable than the hardware itself.

Nigel said:
Quite.

You can't treat a RAID as a throw-away object, to simply be replaced if it
fails, unless you regard your data with the same cavalier attitude.

The notion that you can simply hot-swap disks if they fail in a
high-volume production environment is wacky.

It's not just data, it's service time that's valuable. Most such
installations cannot afford downtime, at least not much. Disk
failures are a major fubar. The motivation for RAID itself proves
that - it increases the cost per storage amount in favor of higher
reliability.
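
To make that trade concrete, here's a toy calculation (the disk size,
price, and array width are made-up round numbers, not anyone's catalog):

// Illustration: RAID trades raw capacity, and therefore cost per *usable* TB,
// for redundancy. Figures are illustrative assumptions only.
public class RaidOverhead {
    public static void main(String[] args) {
        double diskTb = 1.0;      // assumed 1 TB per disk
        double diskCost = 200.0;  // assumed $200 per disk
        int disks = 8;
        double arrayCost = disks * diskCost;

        double raid0Usable  = disks * diskTb;        // striping, no redundancy
        double raid5Usable  = (disks - 1) * diskTb;  // one disk's worth of parity
        double raid10Usable = disks * diskTb / 2;    // mirrored pairs

        System.out.printf("RAID 0:  ~$%.0f per usable TB%n", arrayCost / raid0Usable);  // $200
        System.out.printf("RAID 5:  ~$%.0f per usable TB%n", arrayCost / raid5Usable);  // ~$229
        System.out.printf("RAID 10: ~$%.0f per usable TB%n", arrayCost / raid10Usable); // $400
    }
}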

The suggestion that one can buy inferior hardware in quantity and just
swap in new pieces when something breaks betrays an utter lack of
understanding of the problem. If that idea worked, the people
responsible for those data centers would do that, but it doesn't and
they don't.
 

Lew

Eric said:
     The notion of hot-swapping failed disks is certainly not wacky.
Perhaps it's the "simply" that you consider wacky?  That might
make sense.

Yes, it was the "simply".

Eric said:
     Right.  And a RAID unit that survives a disk failure but still
must be taken out of service for repair is less reliable than one
in which you can replace the failed drive without interruption.

True.

However, I have yet to meet an operations person who appreciates a
hard-drive failure, much less repeated hard-drive failures, even when
you can hot-swap failed drives. Nor do facilities managers generally
consider only the price of a hard drive without at least thinking
about reliability. The reason the more expensive drives and drive
arrays continue to sell is that customers see value in them.

Eric said:
     Tradeoffs certainly exist.  It is not a given that the same trade
makes sense for all circumstances.

I never, ever make sweeping generalizations. :)

Seriously, it is more common than not for data centers to purchase
more reliable, faster hard drives rather than simply opting for the
cheapest. The odds that they'll spend more go up with the volume and
value of their data, with the volume of access to those data, and with
the importance they attach to uptime.

Over the years I've dealt with many businesses who've tried to go with
less expensive hardware, be it disk drives, network cards, PCs or
whatnot, only to regret that and switch to more expensive but more
reliable choices. Every large-scale data center I've worked with, and
most medium- and even small-scale ones, tended to favor reliability
and performance over price in their evaluations.
 

Tom Anderson

Larry K. Wollensham said:
That might change when they do the math.

To compensate for slower speeds, double the number of file servers behind a
load-balancer, and ultimately the number of disks. This works as long as you
don't need to serve *single* files *really* fast.

To compensate for lower reliability (but consumer grade hardware is getting
better), assume a doubled disk replacement rate in the RAIDs.

Overall, that means four times the disks. If they're $200/TB each the above
doublings produce $800 in place of the $1000-1500 you cite for the
non-consumer-grade hardware.

Parallelism in various forms (multiprocessors, load-balanced clusters, RAID,
and so forth) makes consumer-grade hardware able to "add up" to be equivalent
to higher-grade hardware. Sometimes still with lower price tags.

If anyone still isn't persuaded by Larry's argument, they should consider
Google. And I don't mean go and do a search. Google run their data centres
in exactly this way, using vast amounts of cheap commodity hardware. It's
an approach that doesn't work at the small scale, where a single better
machine plus operating costs may well work out cheaper than two or more
cheaper, less reliable ones, but at the large scale, it works out very
well indeed. They've even written a paper about it:

http://labs.google.com/papers/disk_failures.pdf

It'd be interesting to compare that hard data to comparable data for
high-end drives - except oh wait, there isn't any, because nobody's ever
used them in that kind of volume.

The one large-scale study that does make an attempt, amongst other things,
to compare cheap and expensive disks finds that there isn't a significant
difference (assuming that you accept 10K FC disks as representatives of
expensive disks, and 7200 RPM SATA disks as cheap):

http://www.cs.cmu.edu/~bianca/fast07.pdf

tom
 

Arne Vajhøj

Christian said:
Why should a better AWT kill Swing?

Because Swing is built on top of AWT (a quick class-hierarchy check is sketched below).
It may be that AWT and SWT will get a more common interface. And I see no
problem with running Swing on top of SWT instead of AWT if you want some
heavyweight widgets?

I don't think it would be easy to give them a common interface.

And would it still be SWT if it were done?
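
(Getting back to the Swing-on-AWT point, a minimal check of the public
class hierarchy in the JDK:)

// Every Swing component is, by inheritance, an AWT component:
// JButton -> AbstractButton -> JComponent -> java.awt.Container -> java.awt.Component
import java.awt.Component;
import java.awt.Container;
import javax.swing.JButton;

public class SwingOnAwt {
    public static void main(String[] args) {
        JButton button = new JButton("OK");
        System.out.println(button instanceof Container);  // true
        System.out.println(button instanceof Component);  // true
    }
}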

Arne
 

Arne Vajhøj

Mike said:
I gotta admit, I'd love to see a product developed by hundreds of thousands
of people. Not use it, mind you, just see it.

It would create some project management problems.

It would also allow the IDE to implement an app server, a word
processor, a spreadsheet, an ERP system, a CRM system and
practically everything else under the sun in no time.

The point is that just 0.01%-0.1% of the user population (30 to 500 of
those 300000-500000 users) is sufficient to keep the project alive.

That is not unrealistic.

Arne
 
