The Industry choice


Donn Cave

Quoth Paul Rubin (e-mail address removed):
|> Yes, it would be really weird if Python went that way, and the
|> sort of idle speculations we were reading recently from Guido
|> sure sounded like he knows better. But it's not like there aren't
|> some interesting issues farther on downstream there, in the compare
|> function. cmp(), and str() and so forth, play a really big role in
|> Python's dynamically typed polymorphism. It seems to me they are
|> kind of at odds with static type analysis
|
| I don't understand that. If I see "str x = str(3)", then I know that
| x is a string.

Sure, but the dynamically typed polymorphism in that function is
about its parameters, not its result. If you see str(x), you can't
infer the type of x. Of course you don't need to; in Python-style
programming that's the whole point, and even in, say, Haskell there
will be a similar effect where most everything derives the Show
typeclass. But this kind of polymorphism is pervasive enough in
Python's primitive functions that it's an issue for static type
analysis, it seems to me, especially of the type inference kind.
cmp() is more of a real issue than str(), outside of the type
inference question. Is (None < 0) a valid expression, for example?
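Donn's closing question can be checked directly. Python 2 (current at the time of this thread) answered "yes": None < 0 evaluated to True under an arbitrary but consistent cross-type ordering. Python 3 later settled the matter the other way:

```python
# On Python 2, None < 0 was a valid expression (it was True).
# On Python 3, comparing unrelated types raises TypeError.
try:
    None < 0
    verdict = "valid"
except TypeError:
    verdict = "TypeError"
print(verdict)  # on Python 3: TypeError
```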

Donn Cave, (e-mail address removed)
 

Terry Reedy

Peter Dembinski said:
Besides, shouldn't str be a reserved word or something?

It is a name in the builtins module which is automatically searched after
globals. Many experienced Pythoneers strongly advise against rebinding
builtin names *unless* one is intentionally wrapping or overriding the
builtin object -- which is sometimes a valid expert use of masking
builtins. Newbies are regularly warned on this list against making a
habit of casually rebinding list, dict, int, str, etc.

None has been reserved because there is no known good use for overriding
it. True and False will be reserved someday. There have been proposals to
turn on reserved status for all builtins on a per-module basis.
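A minimal illustration of the masking Terry describes (the module is named `builtins` in Python 3; it was `__builtin__` in Python 2):

```python
import builtins

str = "oops"            # rebinds the name, masking the builtin in this module
print(callable(str))    # False: 'str' here is now a plain string
print(builtins.str(3))  # '3' -- the real builtin is still reachable explicitly
del str                 # unmask: name lookup falls through to builtins again
print(str(3))           # '3'
```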

Terry J. Reedy
 

Alex Martelli

Roy Smith said:
You can use __slots__ to get the effect you're after. Well, sort of; it
only works for instance variables, not locals. And the gurus will argue
that __slots__ wasn't intended for that, so you shouldn't do it.

There's a simple, excellent recipe by Michele Simionato, in both the
online Cookbook and the forthcoming 2nd printed edition, showing how to
do that the right way -- with __setattr__ -- rather than with __slots__.
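The core idea of the __setattr__ approach (a sketch of the technique, not Simionato's exact recipe) is to reject attribute names outside a fixed set:

```python
class Restricted:
    """Allow only a fixed set of attribute names (sketch of the idea)."""
    _allowed = frozenset({"x", "y"})

    def __setattr__(self, name, value):
        if name not in self._allowed:
            raise AttributeError("cannot set attribute %r" % name)
        object.__setattr__(self, name, value)

p = Restricted()
p.x = 1            # fine: a known attribute
try:
    p.z = 2        # a typo-like name is rejected
except AttributeError as e:
    print(e)       # cannot set attribute 'z'
```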


Alex
 

Terry Reedy

2005-W: "xundef.f", line 4: 'x' is used but never set.
2153-W: "xundef.f", line 5, column 1: Subscript out of range.

None of these are syntax errors. The first two of these would be caught by
lint or pychecker (I am presuming).
One reason interpreted languages like Python are recommended to
beginners is to avoid the edit/compile/debug cycle. But I think it is
faster and less frustrating to have many errors caught in one shot.

True syntax errors often result in such a cascade of bogus errors that it
may often be best to fix the first reported error and then recompile. Of
course, compilers vary in their recovery efforts.
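A rough Python analogue of the Fortran snippet under discussion: CPython compiles it without complaint, and only a checker like PyChecker (or an actual run) exposes the first problem; the bad subscript, like Fortran's subscript diagnostic, needs bounds analysis or a run to surface.

```python
def f():
    print("dog")
    print(x)        # 'x' is used but never set: NameError at runtime
    z = [0.0]       # a one-element list, like z(n) with n = 1
    z[1] = 1.0      # subscript out of range: would be IndexError at runtime

# The module compiles cleanly; the errors surface only when f() runs.
try:
    f()
except NameError as e:
    print("caught:", e)
```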

Terry J. Reedy
 

Terry Reedy

Bulba! said:
There is a good _chance_ here: money. Somebody has poured a lot
of money into this thing. It's not going to get dropped because of that.
From what I have read, the amount of proprietary code which *did* get
effectively shredded after the dot-com bust is enough to make one cry.
There were a few companies that would buy code at bankruptcy sales for
maybe 1% of its development cost, but even then, with the original
programmers long gone, it could be hard to make anything from it.

Terry J. Reedy
 

Roy Smith

Terry Reedy said:
None has been reserved because there is no known good use for overriding
it.

Should I infer from the above that there's a known bad use?
True and False will be reserved someday.

I remember a Lisp I used many years ago. If you tried to rebind nil,
you got an error message -- in Latin!
 

Terry Reedy

Steve Holden said:
Well clearly there's a spectrum. However, I have previously written that
the number of open source projects that appear to get stuck somewhere
between release 0.1 and release 0.9 is amazingly large, and does imply
some dissipation of effort.

And how do the failure and effort dissipation rates of open source code
compare to those of closed source code? Of course, we have only anecdotal
evidence that the latter is also 'amazingly large'. And, to be fair, the
latter should include the one-programmer proprietary projects that
correspond to the one-programmer open projects.

Also, what is 'amazing' to one depends on one's expectations ;-). It is
known, for instance, that some large fraction of visible retail businesses
fail within a year. And natural selection is premised on the fact that
failure is normal.

Terry J. Reedy
 

Alex Martelli

Bulba! said:
True. I have a bit of interest in economics, so I've seen e.g.
this example - why is it that foreign branches of companies
tend to cluster themselves in one city or country (e.g.

It's not just _foreign_ companies -- regional clustering of all kinds of
business activities is a much more widespread phenomenon. Although I'm
not sure he was the first to research the subject, Tjalling Koopmans, as
part of his lifework on normative economics for which he won the Nobel
Prize 30 years ago, published a crucial essay on the subject about 50
years ago (sorry, can't recall the exact date!) focusing on
_indivisibilities_, leading for example to transportation costs, and to
increasing returns with increasing scale. Today, Paul Krugman is
probably the best-known name in this specific field (he's also a
well-known popularizer and polemicist, but his specifically-scientific
work in economics has mostly remained in this field).
China right now)? According to standard economics it should
not happen - what's the point of getting into this overpriced
city if elsewhere in this country you can find just as good
conditions for business.

Because you can't. "Standard" economics, in the sense of what you might
have studied in college 25 years ago if that was your major, is quite
able to account for that if you treat spatial variables as exogenous to
the model; Krugman's breakthroughs (and most following work, from what I
can tell -- but economics is just a hobby for me, so I hardly have time
to keep up with the literature, sigh!) have to do with making them
endogenous.

Exogenous is fine if you're looking at the decision a single firm, the
(N+1)-th to set up shop in (say) a city, faces, given decisions already
taken by other N firms in the same sector.

The firm's production processes have inputs and outputs, coming from
other firms and (generally, with the exception of the last "layer" of
retailers etc) going to other firms. Say that the main potential buyers
for your firm's products are firms X, Y and Z, whose locations all
"happen to be" (that's the "exogenous" part) in the Q quarter of town.
So, all your competitors have their locations in or near Q, too. Where
are you going to set up your location? Rents are higher in Q than
somewhere out in the boondocks -- but being in Q has obvious advantages:
your salespeople will be very well-placed to shuttle between X, Y, Z and
your offices, often with your designers along so they can impress the
buyers or get their specs for competitive bidding, etc, etc. At some
point, the competition for rents in quarter Q will start driving some
experimenters elsewhere, but they may not necessarily thrive in those
other locations. If, whatever industry you're in, you can strongly
benefit from working closely with customers, then quarter Q will be
where many firms making the same products end up (supply-side
clustering).

Now consider a new company W set up to compete with X, Y and Z. Where
will THEY set up shop? Quarter Q has the strong advantage of offering
many experienced suppliers nearby -- and in many industries there are
benefits in working closely with suppliers, too (even just to easily
have them compete hard for your business...). So, there are easily
appreciated exogenous models to explain demand-side clustering, too.

That's how you end up with a Hollywood, a Silicon Valley, a Milan (for
high-quality fashion and industrial design), even, say, on a lesser
scale, a Valenza Po or an Arezzo for jewelry. Ancient European cities
offer a zillion examples, with streets and quarters named after the
trades or professions that were most clustered there -- of course, there
are many other auxiliary factors related to the fact that people often
_like_ to associate with others of the same trade (according to Adam
Smith, generally to plot some damage to the general public;-), but
supply-side and demand-side, at least for a simpler exogenous model, are
plenty.

Say that it's the 18th century (after the corporations' power to stop
"foreign" competition from nearby towns had basically waned), you're a
hat-maker from Firenze, and for whatever reason you need to move
yourself and your business to Bologna. If all the best hat-makers'
workshops and shops are clustered around Piazza dell'Orologio, where are
YOU going to set up shop? Rents in that piazza are high, BUT - that's
where people who want to buy new hats will come strolling to look at the
displays, compare prices, and generally shop. That's close to where
felt-makers are, since they sell to other hat-makers. Should your
business soon flourish, so you'll need to hire a worker, that's where
you can soon meet all the local workers, relaxing with a glass of wine
at the local osteria after work, and start getting acquainted with
everybody, etc, etc...

Risk avoidance is quite a secondary issue here (except if you introduce
in your model an aspect of imperfect-information, in which case,
following on the decisions made by locals who may be presumed to have
better information than you is an excellent strategy). Nor is there any
"agency problem" (managers acting for their interests and against the
interest of owners), not a _hint_ of it, in fact -- the hatmaker acting
on his own behalf is perfectly rational and obviously has no agency
problem!

So, I believe that introducing agency problems to explain clustering is
quite redundant and distracting from what is an interesting sub-field of
(quite-standard, by now) economics.

There are quite a few other sub-fields of economics where agency
problems, and specifically the ones connected with risk avoidance, have
far stronger explanatory power. So, I disagree with your choice of
example.


Alex
 

Mike Meyer

Bulba! said:
This "free software" (not so much OSS) notion "but you can
hire programmers to fix it" doesn't really happen in practice,
at least not frequently: because this company/guy remains
ALONE with this technology, the costs are unacceptable.

Yes, but fixing Python software - even sloppily written Python
software - is pretty easy. I regularly see contracts to add a feature
to or fix a bug in some bit of OSS Python.

<mike
 

Paul Rubin

Roy Smith said:
Around here, AOL/Moviephone has been trolling for years for Tcl people;
I guess that counts as a big company.

The AOL web server also uses Tcl as a built-in dynamic content
generation language (i.e. sort of like mod_python), or at least it
used to.
 

Peter Hansen

Roy said:
Should I infer from the above that there's a known bad use?

Yes: making None equal to the integer 3. That's one of
six known bad uses.... it's possible there are more. ;-)

-Peter
 

Eric Pederson

Alex Martelli commented:

It's not just _foreign_ companies -- regional clustering of all kinds of
business activities is a much more widespread phenomenon. Although I'm
not sure he was the first to research the subject, Tjalling Koopmans, as
part of his lifework on normative economics for which he won the Nobel
Prize 30 years ago, published a crucial essay on the subject about 50
years ago (sorry, can't recall the exact date!) focusing on
_indivisibilities_, leading for example to transportation costs, and to
increasing returns with increasing scale.

[snip]


Right, and distribution in general is "clumpy"; i.e. one doesn't find the spatial distribution of people to be uniform (unless at saturation!)

I'm decades behind on economics research, but I remember modeling clustering based on mass and distance (the gravity model). On a decision making basis there seems to be an aspect of it that is binary: (0) either give in to gravity and gain shared advantage as part of a massive object, or (1) choose an alternate "location" far enough away not to be much affected by the force of the massive objects, and try to build "mass" there. I suspect Python is a (1) in that regard, but I may be wrong.
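The gravity model Eric mentions predicts interaction between two centers in proportion to the product of their "masses" divided by distance raised to some power. A toy sketch of the binary choice he describes, with all numbers invented:

```python
def gravity_flow(m_i, m_j, distance, g=1.0, beta=2.0):
    """Predicted interaction between two centers of mass m_i and m_j."""
    return g * m_i * m_j / distance ** beta

# Option (0): join the massive cluster; option (1): build mass far away.
join_cluster = gravity_flow(100, 5, distance=1)   # 500.0
go_remote = gravity_flow(10, 5, distance=20)      # 0.125
print(join_cluster > go_remote)                   # True: gravity favors the cluster
```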


Gravity as a model of technology adoption appeals to me as I've been thinking about cosmology a fair bit, and I have grave suspicions that much of the universe's dark (and green) matter is in Redmond.




Eric Pederson
http://www.songzilla.blogspot.com
:::::::::::::::::::::::::::::::::::
domainNot="@something.com"
domainIs=domainNot.replace("s","z")
ePrefix="".join([chr(ord(x)+1) for x in "do"])
mailMeAt=ePrefix+domainIs
:::::::::::::::::::::::::::::::::::
 

JanC

Paul Rubin schreef:
The AOL web server also uses tcl as a built-in dynamic content
generation language (i.e. sort of like mod_python), or at least it
used to.

It still does:
"""
AOLserver is America Online's Open-Source web server. AOLserver is the
backbone of the largest and busiest production environments in the world.
AOLserver is a multithreaded, Tcl-enabled web server used for large scale,
dynamic web sites.
"""

<http://www.aolserver.com/>
 

Peter Dembinski

Peter Hansen said:
Yes: making None equal to the integer 3. That's one of
six known bad uses.... it's possible there are more. ;-)

Binding user variables to these names should raise an exception
(e.g. AreYouInsaneException or WhatAreYouDoingException) :>
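Peter more or less got his wish: Python 3 later made None, True and False reserved words, so rebinding them fails at compile time (with a SyntaxError rather than a WhatAreYouDoingException):

```python
# Each of these assignments is rejected before the code ever runs.
for stmt in ("None = 3", "True = 0", "False = 1"):
    try:
        compile(stmt, "<test>", "exec")
        print(stmt, "-> compiled")
    except SyntaxError as e:
        print(stmt, "->", e.msg)  # e.g. "cannot assign to None"
```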
 

Cameron Laird

It's not just _foreign_ companies -- regional clustering of all kinds of
business activities is a much more widespread phenomenon. Although I'm
not sure he was the first to research the subject, Tjalling Koopmans, as
part of his lifework on normative economics for which he won the Nobel
Prize 30 years ago, published a crucial essay on the subject about 50
years ago (sorry, can't recall the exact date!) focusing on
_indivisibilities_, leading for example to transportation costs, and to
increasing returns with increasing scale. Today, Paul Krugman is
probably the best-known name in this specific field (he's also a
well-known popularizer and polemist, but his specifically-scientific
work in economics has mostly remained in this field).
.
.
.
clp actually dropped related names back in April <URL:
http://groups-beta.google.com/group/comp.lang.python/index/browse_frm/thread/ce749b848d1c33da/ >,
but I think that was during one of your sabbaticals from the
group.

The work of Vernon Smith, the unconventionally conventional
Nobel co-recipient of 2002, can be viewed as a commentary on
clustering and other non-homogeneities. Many US readers
have encountered Jane Jacobs, who has made a career (and
spawned a following) exploring the significance of cities as
economic clusters.
 

Cameron Laird

And how do the failure and effort dissipation rates of open source code
compare to those of closed source code? Of course, we have only anecdotal
evidence that the latter is also 'amazingly large'. And, to be fair, the
latter should include the one-programmer proprietary projects that
correspond to the one-programmer open projects.

Also, what is 'amazing' to one depends on one's expectations ;-). It is
known, for instance, that some large fraction of visible retail business
fail within a year. And that natural selection is based on that fact that
.
.
.
The usual measurements and estimates are generally between 15% and
30%. "Desirable" businesses--restaurants, for example, or computing
consultancies--are even more likely to fail.
 

Cameron Laird

You may recall correctly, but Fortran compilers have improved. The
following Fortran 90 program

integer, parameter :: n = 1
real :: x,y=2.0,z(n)
print*,"dog"
print*,x
z(n+1) = 1.0
print*,z
end

has 3 errors, all detected at compile time by the Lahey/Fujitsu Fortran
95 compiler, with the proper options:

2004-I: "xundef.f", line 2: 'y' is set but never used.
.
.
.
I wonder how many Lahey/Fujitsu users ignore the 'I' diagnostics:
"It's not really an error--the program still runs."

I'm a bit grouchy today on the subject of engineering standards.

I think your point was that the checking present in modern Fortran
compilers, or in PyChecker, but absent from core Python, is a net
benefit. That I grant. I'm reluctant to argue for a change in
Python; I personally prefer to urge PyChecker on developers.
 
