Does Ruby generate WINDOWS and dialog boxes?


Richard

Hi Michael,

Though I've bought and skimmed a few books on compiler design (because
I am a mathematician who likes that kind of stuff), I don't know
enough to dispute any of the internals that you discuss here concerning
computer-language design. I'll just respond with my perceptions born
out of my experience, mainly as an independent computer consultant in
business & gov't info systems.
Point me to a single book that calls C++ "elegant" written by someone
who, you know, designs languages. C++ is a hack built on a hack.

Stroustrup said: My initial aim for C++ was a language where I could
write programs that were as elegant as Simula programs, yet as
efficient as C programs [Java Report, July 2000]. Incidentally, he
held a doctorate in Math, I believe. And this may be a self-serving
statement; besides, he missed his goal in your eyes.

Herb Schildt (a prolific author on computer technology) in "STL
Programming From the Ground Up", 1999, wrote on p.5: "The combination
of Stroustrup's insight into OOP and Stepanov's vision for 'plug
compatible' software components yielded a powerful new software
paradigm." Maybe not "elegant", but "powerful". I'd say maybe its
just a semantic difference.

The authors of the Wikipedia article on "Template Metaprogramming"
extol C++ virtues in its ability to unroll recursive expressions, e.g.
transform a factorial definition to a number at compile time. They
didn't say "elegant", but I think they might agree if pressed on the
matter.
Problem #1 with C++ (of several billion): it requires literally infinite
lookahead to fully parse. This is not something I would ever consider
even remotely elegant.

No doubt every compiler of a general purpose programming language is
susceptible to failure on some well-formed programs. But I never had
a C++ compiler fail on anything I've written. Maybe I just haven't
been working very hard :)
C was not great engineering.

My main reason for lauding C's engineering is the insight that creating
ever more powerful languages with compilers that translate to machine
code is a poor strategy, e.g. Fortran, then Cobol, then PL/1. The
alternative K&R devised is a *small* language plus the creation of a
lot of tools to allow application programming at a higher level. If
nothing else, this allowed economical porting of the whole thing hither
and yon.
It was a high level assembler

so what?
full of quirks based on its first implementation platform (PDP-11) --
quirks that kill (sometimes literally) to this very day.

That flies in the face of the fact that it's probably the most widely
ported programming language ever.
It is
acceptable (barely) as a systems programming tool provided it is used
under intensely-inspected environments.

What's wrong with that? I'd expect anything built for use (directly or
indirectly) by a mass of people to be monitored for extreme quality
control.
Its use in applications verges
on the criminal culpability side of things.

Well, I've certainly got war stories to testify to the problems of app
development with C. But who does that anymore? C is very big for
programming in the embedded world. The many organizations that take
that route can't all be run by idiots.
The list of useful (and
some necessary) features that C lacks for application programming
includes ...

I concede that (though I don't know much about those details). But
they can be handled at a higher level written in C, just as K&R
conceived it. Case in point: the original C++ was built on C and
certainly has been widely adopted and no doubt addresses some of the
things that are not in C natively.
C++ is a hack layered on this hack. Despite being designed for projects
"in the large" it still has no support for automated memory management
which, given its unnatural appetite for memory (in typical C++
programming), is really funny.

Check out "Accelerated C++", Koenig & Moo, , 2000, pp. 182, 203, 223,
262; "Effective STL", Scott Meyers, 2001, pp. 173-175
Further, although an "object-oriented"
language, things like iterators are an afterthought (and it shows!)
added later on in the library interface instead of being a core part of
the language.

So what? Lots of features get added to languages as they
evolve. IMHO, C++ is no different from Ruby in this respect.
And it still sucks--despite the addition of namespaces,
classes, etc.--at actually helping in modular programming. You have to
recompile, for example, if the implementation of a class you're using
changes. (I still shake my head at this.) It's not enough just to
relink to a new object file/library/whatever. You have to recompile
your source. (This is, of course, again because of that filthy #include
thing.)

Well, I can't see why recompilation is an important issue. I've
always treated recompilation time as coffee-break time.
Then we have templates.... I'm not even going to begin that rant.

I loved their introduction. I mentioned them above as a virtue. If
you decry them in C++, you've also got to carp about 'parameterized
types', known as 'generics', in Ada, Eiffel, Java, and C# (according
to Wikipedia's article on "Generic Programming").
...Haskell and dead ones: Dylan, Modula-3

I just took a fast peek at Haskell. I noticed Guards, and that
brought back memories of trying to achieve provably correct programs.
As I recall, there's a proof that no system can be built to prove
the logical correctness of every program. (Just a side note.)

But I've never heard of it. I can't imagine any client allowing me to
develop an information system in a language neither of us has heard of. Have
you been able to use it inside any large organization?
I would not call Ruby "elegant" but I will call it a joy to program in.

I'm with you on that!
If it didn't have Perl's magic variables, et
al. I'd be more inclined to call it elegant even.

Hear, hear! I refuse to use any Perlisms. I think there's a Ruby Way
for all of them.
I lack sufficient experience in Rails to call it elegant or not.

Rails is what won me over. I think it's brilliant, though I'm still
struggling to master a bunch of major issues in it and Ruby.
There are some things which at first glance
appear very impressive to me, but C++ templates did once too until I had
to use them extensively. (My first run-in with ">>" vs. "> >" pretty
much ended my love affair with templates in a huge, bloody crash.)

Ouch! Did you consider some abstraction that offered a more palatable way
to employ templates? Heck, Stroustrup's early compilers were "merely"
preprocessors to the C compiler. I wonder if something like that would
have ameliorated the risky syntax.
Yes. Bad taste and my taste. ;)
Well, you've got me there! :)

Best wishes,
Richard
 

Richard

David said:
Suraj Kurapati wrote:
No ;P The database is shared between the users, only the webserver runs
locally.

Right on, David. But Suraj's comments make me wonder: if I create a
new migration on the MySQL server, wouldn't I almost surely make some
changes in the Rails app? Otherwise, what would be the point?

That leads me to think that the machine housing the MySQL server should
be my development machine. There I make my changes targeting the test
db, run all my tests (including new ones specifically to cover the
changes), put all the Rails-app changes into version control.
Finally, I broadcast to all users to update their local copy of the
Rails app.

Make sense?

Regards,
Richard
 

Richard

Hi Suraj,

Thanks for responding.
And lots of baggage if we use a system for multi-user applications...

You think MySQL 5.0 server version is not good for multi-user
applications?
And security concerns about data leaking from the user's machine...

Might I suggest:

- use WEBrick on a firewalled port that is accessible only from the
local machine 0.0.0.0

I'm planning to use WEBrick server on each user's machine, as well as
on the database server/development machine/version-control repository.
Everybody will access their own copy of the web-server at
localhost:3000, where localhost = 127.0.0.1. Is 0.0.0.0 superior to
that, and why?
- use the SQLite3 (http://wiki.rubyonrails.org/rails/pages/SQLite)
because it stores data in normal (binary) files on the machine it's
running on. For example, if your Rails app is located in /foo/bar, then
the DB files will be in /foo/bar/db/

Right now, I have MySQL 5.0 running on my development machine, and it
stores the data in the data sub-directory of its installation directory.
I imagine it's easy to configure it to store data wherever I wish to
target it.
- for concerns about upgrading (i.e. it's easier to update one central
server than updating many individual machines), you could have each user
check-out your whole rails app from a version control system. Then, when
you do maintenance updates to your DB schema and so on, the user would
simply update their check-out and run "rails migrate". Done!

(I noted your "rake migrate" correction)

I'm confused about one thing here. Since the databases and development
environment are on one machine, shouldn't I:
- run "rake migrate" only on this machine
- update the Rails app to reflect the changes in the database
- update my test suite to test those changes (or write the tests first
to satisfy some purists)
- run my entire test suite until all tests pass
- submit my Rails app to the version control system
- broadcast a message to all users that they should update their version
of the app by getting the latest and greatest version?
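
For concreteness, a migration of the kind I'd run only on this machine
might look something like this (the table and column names are just
made-up examples):

  class AddStatusToOrders < ActiveRecord::Migration
    def self.up
      add_column :orders, :status, :string, :default => 'new'
    end

    def self.down
      remove_column :orders, :status  # lets "rake migrate VERSION=n" back out
    end
  end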

Regards,
Richard
 

Richard

Hi David,

I think we're converging to a solution.
Yes. However, there's still the HTTP request divisor in the middle, with
no / hacked push in the WEBrick -> Firefox direction. ...Of course,
it might be possible your application only needs to pull data.

OK, we'll have to wait and see on that, 'cause I don't know.
... However,
for me that is offset by having fine-grained data binding - for complex
UIs, it rubs me in a better way than bending my code to be essentially
request / (partial) response.

OK, another thing to wait on.
I didn't mean work in the first place. I meant keeping an up-to-date
version on all the users' machines. Last time there was a thread on the
subject of distributing whole applications with rubygems, I think there
were quite a few gotchas mentioned. But you might want to search the
archives for that to see if the approach is viable / what you'd have to
work around.

Gotcha. I just posed the question about handling upgrades to the app.
I didn't think about Ruby gems. I thought I'd make the database server
machine serve as the developer machine as well as the config. mgmt
machine. So when a database schema change is required, I thought I'd
shutdown access to users and:
- run "rake migrate" only on this machine
- update the Rails app to reflect the changes in the database
- update my test suite to test those changes (or write the tests first
to satisfy some purists)
- run my entire test suite until all tests pass
- submit my Rails app to the version control system
- broadcast a message to all users that they should update their version
of the app by getting the latest and greatest version?

What do you think of that?
... weak / coarse-grained push from the model to the view would be a
showstopper for me for the sort of problems where I'd use a rich GUI.

I don't understand. I think this process works this way: The
controller tells the model to cook up some data, suitably filtered and
ordered, and keep it available for access by the appropriate view.
Then the controller tells the appropriate view to do its thing with the
data, namely produce HTML with the data suitably tagged plus embedded
Ruby and send that amalgam to the web-server. The web-server invokes
ERb (Embedded Ruby) to process the "enriched" HTML and send the resulting
vanilla HTML to the browser.

Is my description of the process OK? Does "push" refer to the
"web-server to browser" step?
Nothing that's really a showstopper from this point of view, just the
request / response / sessions clutter that's not really essential to the
way I would model a rich GUI application.

Gotcha. But all the request/response stuff is on a single box, so the
processing time should be a few milliseconds, don't you think?
For me, it would involve
working around those, maybe you have an architecture / model in mind
where the mapping is straightforward.

I'm anticipating a dozen, maybe two dozen, tables with lots of
one-to-many and many-to-many relations, somewhat complex updates which
would be unpleasant to program were it not for transaction service for
the DBMS. However, the amount of data in any transaction will be
small. And the data transfers for display will be small, too, because
all tables will be paged.
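
For example, here's a sketch of the kind of multi-table update I have
in mind, leaning on AR's transaction support (Order and Part are
invented model names):

  Order.transaction do
    order = Order.find(order_id)  # order_id assumed from the request
    order.update_attribute(:status, 'shipped')
    part = Part.find(order.part_id)
    part.update_attribute(:on_hand, part.on_hand - order.quantity)
  end  # an exception anywhere in the block rolls back both updates
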
Not on a quantitative level. Easily accessible metaprogramming
facilities are a potential complexity explosion, and need competence and
/ or restraint to avoid abstraction leak between modules. (For example
introducing a method to a class at a scope where it clutters in modules
that probably won't use the functionality "because it's convenient that
way".)

I don't plan to introduce that kind of complexity myself. I'm sure
Rails itself employs some metaprogramming, but I'm confident that
works well, requiring no meddling by me.
If creating a database schema from scratch, unless you hit some of the
more obscure deficiencies of AR (2PC and the like), in your scenario
it's probably fine.
Cool!

While migrations are indeed sexy, using them means you pretty much
eschew the magic of AR, at which point it becomes only incrementally
more convenient as an ORM solution rather than revolutionary.

I plan to use mostly generated screens until I've got concept approval
from clients. So I can use AR to painlessly regenerate any of them
affected by DB changes. However, I also anticipate a number of
screens with one or more tables with column-by-column table-sorting and
filtering functionality. But most of that involves passing parameters
in links that invoke a package that produces the required effect. I've
got a demo of that working -- I just need to figure out how to package
it.
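
The demo boils down to column-header links that feed the sort
parameters back into the same list action, roughly like this (the
controller and parameter names are invented):

  <%= link_to 'Name',
        :controller => 'parts', :action => 'list',
        :sort => 'name',
        :dir  => (@dir == 'asc' ? 'desc' : 'asc') %>
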
This I consider unforgivable brain-damage, AR should have supported
database metadata investigation and looking at foreign key constraints
out of the box.

Yeah, but it's only a few lines here and there. To me, it's really
painless, especially considering what I've had to do with Oracle and
SQL Server using VC++.
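
For instance, the "few lines here and there" are mostly association
declarations like these (invented model names):

  class Order < ActiveRecord::Base
    belongs_to :customer            # orders.customer_id
    has_and_belongs_to_many :parts  # join table orders_parts
  end
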
And only decades after all the competition. Currently, I don't see much
of a reason why not to use MySQL in a general scenario (before
concurrency handling considerations and load tests come into play),
however I also don't see much of a reason to do so - I'll stay with
religious opinions and stick to the featurefulness of Postgres.

I'm planning to look into the final choice of DBMS in the final stage
of my current project. Concurrency handling and response times are
certainly issues I'll want to address.

Thanks for your ideas.

Best wishes,
Richard
 

Suraj Kurapati

I wrote some text above the quoted text in your message, saying:

So, whatever I wrote below, it was under the assumption that only one
user would access the database (i.e. each user has their own DB).
You think MySQL 5.0 server version is not good for multi-user
applications?

It is fine. However, my software engr. professor said PostgreSQL is
better because MySQL doesn't handle concurrency issues very well.
I'm planning to use WEBrick server on each user's machine, as well as
on the database server/development machine/version-control repository.
Everybody will access their own copy of the web-server at
localhost:3000, where localhost = 127.0.0.1.

Is 0.0.0.0 superior to that, and why?

I used to think 0.0.0.0 was better (somebody told me so, long ago), but
I've just read up on it again and there really doesn't seem to be any
difference. Thus, I would just use "localhost" because it is more human
readable.
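
One caveat: as a *bind* address they do differ -- 0.0.0.0 makes the
server listen on all network interfaces, while 127.0.0.1 listens on
the loopback only, which is what keeps the app reachable solely from
the local machine. A minimal bare-WEBrick sketch (not Rails'
script/server) of the loopback-only setup:

  require 'webrick'

  server = WEBrick::HTTPServer.new(
    :BindAddress => '127.0.0.1',  # loopback only; '0.0.0.0' = all interfaces
    :Port        => 3000
  )
  trap('INT') { server.shutdown }
  server.start
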
Right now, I have MySQL 5.0 running on my development machine, and it
stores the data in the data sub-directory of its installation directory.
I imagine it's easy to configure it to store data wherever I wish to
target it.

Yes, that is fine.
(I noted your "rake migrate" correction)

I'm confused about one thing here. Since the databases and development
environment are on one machine, shouldn't I:
- run "rake migrate" only on this machine
- update the Rails app to reflect the changes in the database
- update my test suite to test those changes (or write the tests first
to satisfy some purists)
- run my entire test suite until all tests pass
- submit my Rails app to the version control system
- broadcast a message to all users that they should update their version
of the app by getting the latest and greatest version?

Yes, that is correct.

However, this brings up another issue. Since each user's local copy of
your Rails app has write access to your central DB, you could lose all
your data on the central DB if a user runs

rake migrate VERSION=0

It's kinda silly, but possible.

Good luck.
 

David Vallner

- broadcast a message to all users that they should update their version
of the app by getting the latest and greatest version?

Mind the "should". It's a bit too late at night for me to try to go over
such scenarios in my head, but there's always users that can't be
bothered (me, on any occasion I can get away with it, for instance), and
you run the risk of someone with a version that's not up-to-date
thrashing the data. Somehow. Just a point to watch out for. Either keep
more stringent checks on the DB-side, or make sure that updates to users
happen automatically, transparently, and preferably fast - depending on
how controlled your user base is, you might or might not get away with
the app "calling home".

Of course, odds are that's just me being paranoid and that wouldn't
really happen with a proper data model design in the first place.
I think this process works this way: The
controller tells the model to cook up some data, suitably filtered and
ordered, and keep it available for access by the appropriate view.
Then the controller tells the appropriate view to do its thing with the
data, namely produce HTML with the data suitably tagged plus embedded
Ruby and send that amalgam to the web-server. The web-server invokes
ERb (Embedded Ruby) to process the "enriched" HTML and send the resulting
vanilla HTML to the browser.

Is my description of the process OK? Does "push" refer to the
"web-server to browser" step?

No. With a webapp, the process is always initiated by the web browser,
and it's a pull. The produced HTML is the current state of the view for
a user, and if the user doesn't poll for changes, it will go stale if a
relevant part of the model changes. The "hack" around this is having the
browser regularly poll for updates (in a JS loop), and more or less cook
your own event loop using Ajax, as-known-from-GUIs. (Which you do if, in
fact, you expect asynchronous changes rendering someone's view state
obsolete and that is an undesirable thing.) The one problem there is
lag; HTTP has bad performance characteristics for this - otherwise it's
semantically the same as in GUIs, except you have to implement it
yourself (or look into an Ajax support library for help). So I'd take
frequency of necessary updates and the connection quality into account
too; if you need either more "realtime" behaviour, or you want to
service sluggish connections, you might want to do something about it.
Comet comes to mind if that's the case.
[http://alex.dojotoolkit.org/?p=545]. I'm not sure if Rails /
Prototype.js support that though, so you might end up having to use
Dojo, which Rails didn't have special support for last time I checked.
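
If plain polling is enough, though, the Prototype helpers bundled with
Rails already cover it. An ERb sketch (the controller, action, and
element names are invented):

  <%= periodically_call_remote(
        :url       => { :controller => 'status', :action => 'refresh' },
        :update    => 'status_panel',
        :frequency => 15 ) %>

That re-renders the 'status_panel' element every 15 seconds with
whatever the refresh action returns.
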
Gotcha. But all the request/response stuff is on a single box, so the
processing time should be a few milliseconds, don't you think?

Yes. It's just clutter from the elegance point of view, not a
performance one.

[Stuff I didn't have anything to add to snipped.]

David Vallner


 

Richard

Hi Suraj,
It is fine. However, my software engr. professor said PostgreSQL is
better because MySQL doesn't handle concurrency issues very well.

Great to know. I'll check that out before I go to production.
... Thus, I would just use "localhost" because it is more human
readable.
Cool!


Yes, that is fine.
Cool.


Yes, that is correct.
Excellent!

However, this brings up another issue. Since each user's local copy of
your Rails app has write access to your central DB, you could lose all
your data on the central DB if a user runs

rake migrate VERSION=0

It's kinda silly, but possible.

IMHO, not silly at all. It says to me that when I port the application
to client machines, I should arrange that Rake can only be run under an
Administrator account. Also, I should make sure that any dangerous
commands to the DBMS can only be run under the root ID. That should at
least plug up some of the security holes.

Thank you very much for all your observations and comments. I feel a
lot more confident that my development effort will be well received.

Best wishes,
Richard
 

John Wilger

The app structure I outlined, Firefox+Ruby+Rails+WEBrick+MySQL, all
runs on the client's machine (except for the database server). The
user machine(s) need not be connected to the Internet, though they
need a LAN connection to a database server. So everything except the
data-store is "client-side".

OK, I've been skimming this thread the last few days, and I'm surprised
no one else has asked the obvious question here (if someone did and I
missed it, I apologize)...

/Why/ in the world would you write a web app with RoR to be run locally
on several users' machines that connects to one central database? I
hate to be blunt, but that's just kinda dumb.

Considering you already need a central server to run the database on,
just run the web app from a central server as well. Then all these
questions about upgrading the app on the users' machines, who should
run `rake db:migrate`, etc. kinda take care of themselves.
 

Richard

Hi David,
Mind the "should". ... you run the risk of someone with a version that's not up-to-date
thrashing the data.

Good point. I'll be able to force all users to shut down their
connection to the DB. How about I put a version number in the DB
(inaccessible to users directly) and have the login screen check that
the user's version matches it; a mismatch would inhibit login, just
like a bad userid or password? (See the sketch below.)
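
A sketch of what I mean -- APP_VERSION and the Setting model are
invented for illustration:

  class ApplicationController < ActionController::Base
    APP_VERSION = '1.0.3'  # baked into each client's checkout

    before_filter :check_client_version

    private

    def check_client_version
      required = Setting.find_by_name('required_version')
      unless required && required.value == APP_VERSION
        render :text => 'Your copy of the app is out of date.', :status => 403
      end
    end
  end
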
I think this process works this way: The
controller tells the model to cook up some data, suitably filtered and
ordered, and keep it available for access by the appropriate view. [snip]

Is my description of the process OK? Does "push" refer to the
"web-server to browser" step?

No. With a webapp, the process is always initiated by the web browser,
and it's a pull.

Agreed. I omitted the steps where the user:
- starts a local copy of WEBrick
- directs his/her browser to localhost:3000, which defaults to the
logon screen
- provides his/her credentials (and the user's app's version is checked
against the DB)
The produced HTML is the current state of the view for
a user, and if the user doesn't poll for changes, it will go stale if a
relevant part of the model changes.

There will be no change to the model while any user is connected to the
DB, at least not on my watch.
The "hack" around this is ...

So I'm saving the workaround suggestions, but not thinking about them
for a while.

Thanks for all your ideas. Your insights, along with Suraj's, have
boosted my confidence that I'll get my first Rails production system
running smoothly.

Best wishes,
Richard
 

Richard

Hi John,
/Why/ in the world would you write a web app with RoR to be run locally
on several users' machines that connects to one central database? I
hate to be blunt, but that's just kinda dumb.

Considering you already need a central server to run the database on,
just run the web app from a central server as well. Then all these
questions about upgrading the app on the users' machines, who should
run `rake db:migrate`, etc. kinda take care of themselves.

[As I slap my forehead in astonishment, I say:] because I never
thought of that. It's my first Rails application and this thread has
helped me see my way through a bunch of development issues. You just
added one more (much needed) clarification.

Thank you very much for weighing in with that observation.

Yours truly,
Richard
 

David Vallner


John said:
/Why/ in the world would you write a web app with RoR to be run locally
on several users' machines that connects to one central database? I
hate to be blunt, but that's just kinda dumb.

Considering you already need a central server to run the database on,
just run the web app from a central server as well. Then all these
questions about upgrading the app on the users' machines, who should
run `rake db:migrate`, etc. kinda take care of themselves.

Load distribution. The only thing you need to ever make scalable is the
database and the access to it, and you can run it on a relatively
underpowered machine - and if you play it right, with less bandwidth
requirements too (you only ever transfer resources like images etc.
during a client update, which can be yet optimised by using something
like Jigsaw, or SVN as mentioned for text resources). It's not a common
architecture, but not one without any sense to it whatsoever.

David Vallner


 

Suraj Kurapati

Richard said:
IMHO, not silly at all. It says to me that when I port the application
to client machines, I should arrange that Rake can only be run under an
Administrator account.

Maybe that is too restrictive -- rake is used for other Ruby stuff as
well. An alternative approach is to override the migrate task in the
main Rakefile:

  task :migrate do
    exit # silently!
  end

This way, you can still use rake for other projects.
Also, I should make sure that any dangerous
commands to the DBMS can only be run under the root ID. That should at
least plug up some of the security holes.

Precisely.

I don't know many details about DB permissions, but I'm sure there is a
different set of permissions for dropping and creating tables. Those should
not be given to users. Instead, users should only have INSERT, UPDATE,
and DELETE permissions.
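
For MySQL, the grant would look something like this -- run once while
connected as the MySQL root user (or just type the GRANT at the mysql
prompt); the database, account, and password names are invented:

  ActiveRecord::Base.connection.execute(<<-SQL)
    GRANT SELECT, INSERT, UPDATE, DELETE
      ON myapp_production.*
      TO 'app_user'@'%' IDENTIFIED BY 'secret'
  SQL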

Other than that, making regular backups of your DB should cover any
remaining troubles -- like a user deleting all rows/records from a DB
table or inserting lots of spam into a DB table.
 

Suraj Kurapati

Richard said:
Hi John,
/Why/ in the world would you write a web app with RoR to be run locally
on several users' machines that connects to one central database? I
hate to be blunt, but that's just kinda dumb.

Considering you already need a central server to run the database on,
just run the web app from a central server as well. Then all these
questions about upgrading the app on the users' machines, who should
run `rake db:migrate`, etc. kinda take care of themselves.

[As I slap my forehead in astonishment, I say:] because I never
thought of that.

Shocking indeed! :)

I too was only thinking within the scenario Richard had initially
posted. It never occurred to me to step back and look at the big
picture.

Good observation John. You are 100% correct.
 

John Wilger

Load distribution. The only thing you need to ever make scalable is the
database and the access to it, and you can run it on a relatively
underpowered machine - and if you play it right, with less bandwidth
requirements too (you only ever transfer resources like images etc.
during a client update, which can be yet optimised by using something
like Jigsaw, or SVN as mentioned for text resources). It's not a common
architecture, but not one without any sense to it whatsoever.

We may have to agree to disagree here, but I'm going to say that there
is absolutely no benefit to this architecture that can outweigh the
detriments of both the increased maintenance and the decreased
security.

If you really need to distribute load, a better route to take would be
a client/server architecture where you have an intelligent app server
-- not just a database connection -- that exposes services (perhaps
REST-style web services). You can do a lot of the heavy lifting on the
clients, but you can still manage security and client updates
intelligently from the server.

While Ruby/Rails may be a great choice for the server, I would probably
not use Ruby for the client in most situations. Not that I don't love
Ruby, of course -- I just wouldn't want to have to deal with keeping
Ruby, library dependencies, and the client application up to
date on every machine that was using the program. Most likely, I would
go with something like Flex[1] for the client, since it's relatively
painless to distribute upgrades that way. I'm currently using Flex for
the client end of a large workflow automation tool that we're building
at work, and I have to say that I would never want to go back to using
HTML interfaces to create web-based /applications/ again.

[1] http://www.adobe.com/products/flex/ (note: Flex Builder costs
money, but all you really /need/ is the SDK which is free as in beer)
 

Martin DeMello

On Sun, 2006-12-17 at 10:09 +0900, Suraj Kurapati wrote:
Tk is wonderful! It's really simple to manipulate widgets and graphics
in the way you'd expect. IMHO, Tk is the Ruby of GUI toolkits.


I'm with David here. Tk isn't the Ruby of GUI toolkits. But I diverge
sharply from him after that. He says it's the PHP of GUI toolkits. I
say it's the GWBASIC of them.

Tk-based GUIs are typically so fscking useless that I'd rather use the CLI and ed over an
app coded in Tk.

Apart from the TkCanvas, which is utterly brilliant. How many other
toolkits give you a vector-oriented canvas, in which you can draw
graphical objects that respond to click events, for free?

martin
 

Richard

Hi Suraj,
Maybe that is too restrictive -- rake is used for other Ruby stuff as
well. An alternative approach is to override the migrate task in the
main Rakefile:

  task :migrate do
    exit # silently!
  end

This way, you can still use rake for other projects.

Understood!

Thanks again for your insightful comments. I rated this post to be
"excellent" as a reflection of all your contributions to this thread.

I'm going to sign off this thread. I've got to spend some time
actually *doing* something :) I'm making a copy of all the essential
conclusions presented here, which I'll review as my development efforts
proceed.

Best wishes,
Richard
 

Richard

Hi David,
Load distribution. The only thing you need to ever make scalable is the
database and the access to it, and you can run it on a relatively
underpowered machine - and if you play it right, with less bandwidth
requirements too (you only ever transfer resources like images etc.
during a client update, which can be yet optimised by using something
like Jigsaw, or SVN as mentioned for text resources). It's not a common
architecture, but not one without any sense to it whatsoever.

Good additional ideas.

As I said to Suraj, thanks again for your insightful comments. I
rated this post to be "excellent" as a reflection of all your
contributions to this thread.

I'm going to sign off this thread. I've got to spend some time
actually *doing* something :) I'm making a copy of all the essential
conclusions presented here, which I'll review as my development efforts
proceed.

Best wishes,
Richard
 

Richard

Hi John,
[snip] there
is absolutely no benefit to this architecture that can outweigh the
detriments of both the increased maintenance and the decreased
security.

If you really need to distribute load, a better route to take would be
a client/server architecture where you have an intelligent app server
-- not just a database connection -- that exposes services (perhaps
REST-style web services). You can do a lot of the heavy lifting on the
clients, but you can still manage security and client updates
intelligently from the server.

While Ruby/Rails may be a great choice for the server, I would probably
not use Ruby for the client in most situations. Not that I don't love
Ruby, of course -- I just wouldn't want to have to deal with keeping
Ruby, library dependencies, and the client application up to
date on every machine that was using the program. Most likely, I would
go with something like Flex[1] for the client, since it's relatively
painless to distribute upgrades that way. I'm currently using Flex for
the client end of a large workflow automation tool that we're building
at work, and I have to say that I would never want to go back to using
HTML interfaces to create web-based /applications/ again.

[1] http://www.adobe.com/products/flex/ (note: Flex Builder costs
money, but all you really /need/ is the SDK which is free as in beer)

Good additional ideas!

As I said to Suraj and David, thanks again for your insightful
comments. I rated this post to be "excellent" as a reflection of all
your contributions to this thread.

I'm going to sign off this thread. I've got to spend some time
actually *doing* something :) I'm making a copy of all the essential
conclusions presented here, which I'll review as my development efforts
proceed.

Best wishes,
Richard
 

David Vallner


John said:
Most likely, I would
go with something like Flex[1] for the client, since it's relatively
painless to distribute upgrades that way.

Mind you, this is what I said in my first reply to Richard in the first
place and then went along with the idea. I didn't say the architecture
is an ideal one or one I would pick, just that it's not without
redeeming value of -some sort-.

David Vallner


 

Joel VanderWerf

Martin said:
Apart from the TkCanvas, which is utterly brilliant. How many other
toolkits give you a vector-oriented canvas, in which you can draw
graphical objects that respond to click events, for free?

I agree. TkCanvas doesn't do rotation or scaling as nicely as OpenGL,
but it's very nice for schematic diagrams, user interaction, and simple
animation. It has saved me a lot of work lately.
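
For anyone curious, the flavor of it in Ruby/Tk -- a from-memory
sketch, untested:

  require 'tk'

  root   = TkRoot.new { title 'canvas demo' }
  canvas = TkCanvas.new(root, 'width' => 200, 'height' => 200)
  canvas.pack
  rect = TkcRectangle.new(canvas, 40, 40, 160, 160, 'fill' => 'steelblue')
  rect.bind('Button-1') { puts 'rectangle clicked' }  # per-item click binding
  Tk.mainloop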
 
