Future standard GUI library



Chris Angelico

> I didn't mean "trackpoints" or similar devices, but full keyboard
> "navigation" of the entire GUI through shortcuts etc.

> A "touch-type" GUI is a "must have" for any application that's supposed
> to be used productively. The mouse is nice to "explore" a GUI or for
> occasional/leisurely use, but once you use an application daily to earn
> your living, it's a hopeless roadblock for productivity.

You have seriously underestimated the power of the combined
keyboard+mouse interface. I absolutely agree that keyboard-only will
(almost) always beat mouse-only, but keyboard AND mouse together can
beat either alone, if the UI is designed correctly.

Case in point: Partial staging of a file in git. I can use 'git add
-p' or 'git gui'. With the former, it's all keyboard; I can step
through the hunks, choose what to stage, move on. With the latter,
it's more visual; I right-click a hunk and choose "Stage this hunk"
(or "Stage this line", which is actually quite fiddly with 'git add
-p').

I am a self-confessed keyboard junkie. I will use the keyboard for
pretty much everything. Yet I use git gui and almost never git add -p,
the one exception being when I can't use git gui (e.g. it's not
installed on some remote headless system and installing it would
require fetching gobs of GUI libraries). It uses the mouse to good
effect.

> As is the "response time" behaviour of "web applications".

On a LAN, with a proper back-end, I can get instant response from a
web app. Obviously over the internet there's latency, but that's
nothing to do with the use of a web browser as a UI; you'll see that
with ssh just as much.

> "No cursor animation ever" is an absolute "must have" requirement for
> productivity applications.

Not really. There are times when the human will be legitimately
waiting for the computer. http://xkcd.com/303/ for one. But this still
has little to do with the use of a web browser UI; I can achieve
exactly that with the Yosemite Project, which can actually be a
three-computer system: the content is stored on one, the HTTP server
is on another, and the web browser is separate again. And this is only
a 100Mbit LAN. If you need moar speeeeeeed, you can always demand
gigabit or better.

> And by "screenworkers" I didn't refer to programmers. Those people
> rarely have to use the stuff that they implement.

Of course not, programmers never use software they've themselves
written. Never. Not in a million... oh wait, what's this I have? Hmm,
gcc used to compile gcc, RosMud being used by Rosuav, Neil Hodgson
using SciTE... naw, they're all statistical anomalies, carry on!

You really have a very low opinion of programmers for someone on a
programming mailing list :)

ChrisA
 

Terry Jan Reedy

> Of course not, programmers never use software they've themselves
> written. Never. Not in a million... oh wait, what's this I have? Hmm,
> gcc used to compile gcc, RosMud being used by Rosuav, Neil Hodgson
> using SciTE... naw, they're all statistical anomalies, carry on!

And I use Idle to improve Idle.

I use the HgWorkbench front-end to hg because point and click is often
*faster* for me than remembering (or looking up) the command and args
and typing them (without error, or with correction after error).

Now back to ignoring the troll.

Terry
 

Wolfgang Keller

> > A "touch-type" GUI is a "must have" for any application that's
> You have seriously underestimated the power of the combined
> keyboard+mouse interface.

As someone who started using computers with the Atari ST and later
the Mac, and who has never become proficient himself with any of the
various Unix shells (besides which, I had to learn to *hate* every
version of MS DOS and MS (Not Responding)), I certainly don't
underestimate the value of the mouse.

But could it be that you have never seen an actually proficient user of
a typical "enterprise" application (ERP, MRP, whatever) "zipping"
through the GUI of his/her bread-and-butter application so fast that
you cannot even read the titles of windows or dialog boxes?

Obviously, this won't work if the client runs on this pathological
non-operating system MS (Not Responding), much less with "web
applications".

Oh, and btw: SAP is a particularly illustrative example of the horrible
response-time behaviour of applications with centralised application
logic. They call it an "hour-glass display program" over here,
"SanduhrAnzeigeProgramm" in German.

> On a LAN, with a proper back-end, I can get instant response from a
> web app.

I have been involved as "domain specialist" (and my input has always
been consistently conveniently ignored) with projects for web
applications and the results never turned out to be even remotely usable
for actually productive work.

Sincerely,

Wolfgang
 

Frank Millman

Wolfgang Keller said:
> But could it be that you have never seen an actually proficient user of
> a typical "enterprise" application (ERP, MRP, whatever) "zipping"
> through the GUI of his/her bread-and-butter application so fast that
> you can not even read the titles of windows or dialog boxes.
>
> Obviously, this won't work if the client runs on this pathological
> non-operating system MS (Not Responding), much less with "web
> applications".
> [...]
> > On a LAN, with a proper back-end, I can get instant response from a
> > web app.
>
> I have been involved as "domain specialist" (and my input has always
> been consistently conveniently ignored) with projects for web
> applications and the results never turned out to be even remotely usable
> for actually productive work.

Hi Wolfgang

I share your passion for empowering a human operator to complete and submit
a form as quickly as possible. I therefore agree that one should be able to
complete a form using the keyboard only.

There is an aspect I am unsure of, and would appreciate any feedback based
on your experience.

I am talking about what I call 'field-by-field validation'. Each field could
have one or more checks to ensure that the input is valid. Some can be done
on the client (e.g. value must be numeric), others require a round-trip to
the server (e.g. account number must exist on file). Some applications defer
the server-side checks until the entire form is submitted, others perform
the checks in-line. My preference is for the latter.
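To make the two tiers concrete, here is a rough Python sketch of the kind of validation I mean. The account set stands in for the server's database, and all names are invented for the example; in a real system the second check would be a round-trip to the server.

```python
# Illustrative only: the "server's" account data is simulated in memory.
EXISTING_ACCOUNTS = {"1001", "1002", "2500"}

def client_side_check(value):
    """Cheap syntactic check that can run locally: value must be numeric."""
    return value.isdigit()

def server_side_check(value):
    """Check needing the authoritative data: account must exist on file."""
    return value in EXISTING_ACCOUNTS  # really a server round-trip

def validate_account_field(value):
    if not client_side_check(value):
        return "error: not numeric"       # caught instantly, no round-trip
    if not server_side_check(value):
        return "error: no such account"   # needs the server
    return "ok"

print(validate_account_field("12x"))   # fails the local check
print(validate_account_field("9999"))  # passes locally, fails on the server
print(validate_account_field("1001"))  # passes both
```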

I agree with Chris that on a LAN, it makes little or no difference whether
the client side is running a web browser or a traditional gui interface. On
a WAN, there could be a latency problem. Ideally an application should be
capable of servicing a local client or a remote client, so it is not easy to
find the right balance.

Do you have strong views on which is the preferred approach?

Thanks for any input.

Frank Millman
 
C

Chris Angelico

> I am talking about what I call 'field-by-field validation'. Each field could
> have one or more checks to ensure that the input is valid. Some can be done
> on the client (e.g. value must be numeric), others require a round-trip to
> the server (e.g. account number must exist on file). Some applications defer
> the server-side checks until the entire form is submitted, others perform
> the checks in-line. My preference is for the latter.

It's not either-or. The server *MUST* perform the checks at the time
of form submission; the question is whether or not to perform
duplicate checks earlier. This is an absolute rule of anything where
the client is capable of being tampered with. Technically, you could
violate it on a closed system; but it's too easy for a closed system
to become a diverse system without anyone adding the appropriate
checks, so just have the checks from the beginning.
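The rule boils down to something like the following sketch (field checks and form layout invented for illustration): the submission handler ignores anything the client claims about validity and re-runs every check itself.

```python
# Hypothetical server-side field checks; nothing here is from a real app.
FIELD_CHECKS = {
    "quantity": lambda v: v.isdigit() and int(v) > 0,
    "email":    lambda v: "@" in v,
}

def handle_submission(form):
    """Server-side entry point: never trust flags sent by the client."""
    # Even if the client sends something like form["validated"] == "yes",
    # it may have been tampered with, so we ignore it and check everything.
    errors = {name: "invalid"
              for name, check in FIELD_CHECKS.items()
              if not check(form.get(name, ""))}
    return ("rejected", errors) if errors else ("accepted", {})

# A tampered client could bypass its own pre-checks entirely:
print(handle_submission({"quantity": "-5", "email": "nope", "validated": "yes"}))
print(handle_submission({"quantity": "3", "email": "a@b.com"}))
```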

In terms of software usability, either is acceptable, but do make sure
the user can continue working with the form even if there's latency
talking to the server - don't force him/her to wait while you check if
the previous field was valid. I know that seems obvious, but
apparently not to everyone, as there are forms out there that violate
this...

ChrisA
 

Frank Millman

Chris Angelico said:
> It's not either-or. The server *MUST* perform the checks at the time
> of form submission; the question is whether or not to perform
> duplicate checks earlier. This is an absolute rule of anything where
> the client is capable of being tampered with, and technically, you
> could violate it on a closed system; but it's so easy to migrate from
> closed system to diverse system without adding all the appropriate
> checks, so just have the checks from the beginning.

In my case, it is either-or. I do not just do field-by-field validation, I
do field-by-field submission. The server builds up a record of the data
entered while it is being entered. When the user selects 'Save', it does not
resend the entire form, it simply sends a message to the server telling it
to process the data it has already stored.
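In outline, the scheme looks something like this (class and method names are mine, not from my actual code): each field is sent to the server as it is entered, and 'Save' carries no data of its own.

```python
# Rough sketch of field-by-field submission: the server accumulates
# fields as they arrive, so 'Save' only says "process what you have".
class Session:
    def __init__(self):
        self.pending = {}          # data entered so far, held server-side

    def receive_field(self, name, value):
        # Called once per field, as the user tabs out of it.
        self.pending[name] = value

    def save(self):
        # The client sends no data here, just the instruction to commit.
        record = dict(self.pending)
        self.pending.clear()
        return record              # in a real app: write to the database

session = Session()
session.receive_field("account", "1001")
session.receive_field("amount", "250.00")
print(session.save())   # {'account': '1001', 'amount': '250.00'}
```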

> In terms of software usability, either is acceptable, but do make sure
> the user can continue working with the form even if there's latency
> talking to the server - don't force him/her to wait while you check if
> the previous field was valid. I know that seems obvious, but
> apparently not to everyone, as there are forms out there that violate
> this...

I plead guilty to this, but I am not happy about it, hence my original post.
I will take on board your comments, and see if I can figure out a way to
have the best of both worlds.

Frank
 

Chris Angelico

> In my case, it is either-or. I do not just do field-by-field validation, I
> do field-by-field submission. The server builds up a record of the data
> entered while it is being entered. When the user selects 'Save', it does not
> resend the entire form, it simply sends a message to the server telling it
> to process the data it has already stored.

Ah, I see what you mean. What I was actually saying was that it's
mandatory to check on the server, at time of form submission, and
optional to pre-check (either on the client itself, for simple
syntactic issues, or via AJAX or equivalent) for faster response.

As a general rule, I would be inclined to go with a more classic
approach for reasons of atomicity. What happens if the user never gets
around to selecting Save? Does the server have a whole pile of data
that it can't do anything with? Do you garbage-collect that
eventually? The classic model allows you to hold off inserting
anything into the database until it's fully confirmed, and then do the
whole job in a single transaction.
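A self-contained illustration of that classic model, with sqlite3 standing in for PostgreSQL and an invented table layout: nothing is inserted until the form is confirmed, and the whole job either commits or rolls back as one transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (account TEXT, amount REAL)")

def save_form(form):
    """Insert the fully-confirmed form in a single transaction."""
    try:
        with conn:                      # commits on success, rolls back on error
            conn.execute("INSERT INTO orders VALUES (?, ?)",
                         (form["account"], form["amount"]))
            if form["amount"] <= 0:     # a final server-side check
                raise ValueError("amount must be positive")
        return "saved"
    except ValueError:
        return "rolled back"

print(save_form({"account": "1001", "amount": 250.0}))  # saved
print(save_form({"account": "1002", "amount": -1.0}))   # rolled back
# Only the confirmed form ever reached the database:
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```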

But if you want to use a "wizard" approach, where the user enters one
thing and then moves on to the next, that can work too. It gets clunky
quickly, but it can be useful if the early responses make the
subsequent questions drastically different.

ChrisA
 

Frank Millman

Chris Angelico said:
> Ah, I see what you mean. What I was actually saying was that it's
> mandatory to check on the server, at time of form submission, and
> optional to pre-check (either on the client itself, for simple
> syntactic issues, or via AJAX or equivalent) for faster response.
>
> As a general rule, I would be inclined to go with a more classic
> approach for reasons of atomicity. What happens if the user never gets
> around to selecting Save? Does the server have a whole pile of data
> that it can't do anything with? Do you garbage-collect that
> eventually? The classic model allows you to hold off inserting
> anything into the database until it's fully confirmed, and then do the
> whole job in a single transaction.

The data is just stored in memory in a 'Session' object. I have a
'keep-alive' feature that checks if the client is alive, and removes the
session with all its data if it detects that the client has gone away.
Timeout is configurable, but I have it set to 30 seconds at the moment.

The session is removed immediately if the user logs off. He is warned if
there is unsaved data.
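The keep-alive/timeout part might look roughly like this sketch (names invented; the 30-second timeout matches my setting above): each session records when the client was last heard from, and a sweep discards anything silent for too long.

```python
import time

TIMEOUT = 30.0   # seconds; configurable in the real system

sessions = {}    # session_id -> last-seen timestamp

def keep_alive(session_id, now=None):
    """Record that this client is still alive."""
    sessions[session_id] = time.monotonic() if now is None else now

def sweep(now=None):
    """Remove sessions whose client has gone away (and their unsaved data)."""
    now = time.monotonic() if now is None else now
    for sid, last_seen in list(sessions.items()):
        if now - last_seen > TIMEOUT:
            del sessions[sid]

# Timestamps passed explicitly so the example is deterministic:
keep_alive("alice", now=0.0)
keep_alive("bob", now=20.0)
sweep(now=35.0)          # alice (35s silent) is removed, bob (15s) survives
print(sorted(sessions))  # ['bob']
```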

Frank
 

Wolfgang Keller

> I share your passion for empowering a human operator to complete and
> submit a form as quickly as possible. I therefore agree that one
> should be able to complete a form using the keyboard only.

This is not just about "forms", it's about using the entire application
without having to use the mouse, ever.

> Do you have strong views on which is the preferred approach.

Use a decent database RAD desktop (non-web) GUI application framework
which uses client-side application logic. "Validation" of input will
then be essentially instantaneous. Unless you run the client on that
pathological non-operating system MS (Not Responding), obviously. I've
posted a corresponding list of frameworks available for Python multiple
times already on this group:

using PyQt (& Sqlalchemy):
Qtalchemy: www.qtalchemy.org
Camelot: www.python-camelot.com
Pypapi: www.pypapi.org

using PyGTK:
Sqlkit: sqlkit.argolinux.org (also uses Sqlalchemy)
Kiwi: www.async.com.br/projects/kiwi

using wxPython:
Gui2Py: code.google.com/p/gui2py/
Dabo: www.dabodev.com
Defis: sourceforge.net/projects/defis (Russian only)
GNUe: www.gnuenterprise.org

Server round-trips required for simple user interaction are an absolute
non-starter for productivity applications, no matter whether on a LAN
or a WAN. If you want a responsive application you have to decentralise
as much as possible. The perfect solution would be if Bettina Kemme's
Postgres-R were available for production use; then even the persistence
could run locally on the client, with asynchronous replication across
all clients ("peer to peer").

Sincerely,

Wolfgang
 

Chris Angelico

> Server-roundtrips required for simple user interaction are an absolute
> non-starter for productivity applications. No matter whether in a LAN
> or WAN. If you want a responsive application you have to de-centralise
> as much as possible.

Okay... how much does a round-trip cost? Considering that usability
guidelines generally permit ~100ms for direct interaction, and
considering that computers on a LAN can easily have sub-millisecond
ping times, it seems to me you're allowing a ridiculous amount of time
for code to execute on the server. Now, granted, there are systems
that are suboptimal. (Magento, a PHP-based online shopping cart system,
took the better part of a second - something in the order of 700-800ms
- to add a single item. And that was on reasonable hardware; not a
dedicated server, but my test box was certainly not trash.)

For a real-world example of a LAN system that uses a web browser as
its UI, I'm using the Yosemite Project here. It consists of a
single-threaded Python script, no scaling assistance from Apache, just
the simplest it can possibly be. It is running on three computers:
yosemite [the one after whom the project was named], huix, and
sikorsky. I used 'time wget http://hostname:3003/airshow' for my
testing, which involves:

* A DNS lookup from the local DNS server (on the LAN)
* An HTTP query to the specified host
* A directory listing, usually remote
* Building a response (in Python)
* Returning that via HTTP
* Saving the resulting page to disk

Since I'm using the bash 'time' builtin, all of this is counted (I'm
using the 'real' figure here; the 'user' and 'sys' figures are of
course zero, or as close as makes no odds - takes no CPU to do this).
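If anyone wants to reproduce the shape of this test without my LAN, here's a self-contained sketch: a trivial single-threaded HTTP server (invented for this post, merely in the spirit of the Yosemite script) and a timed request against it. Everything runs on localhost, so the figures will come out even lower than my LAN numbers.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A canned page stands in for building a real directory listing.
        body = b"<html><body>directory listing goes here</body></html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/airshow" % server.server_port

# Time the full round trip: connect, request, response, read.
start = time.perf_counter()
page = urllib.request.urlopen(url).read()
elapsed_ms = (time.perf_counter() - start) * 1000
server.shutdown()

print("%d bytes in %.1f ms" % (len(page), elapsed_ms))
```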

The files in question are actually stored on Huix. Queries to that
server therefore require a local directory listing; queries to
sikorsky involve an sshfs directory listing, and those to yosemite use
NetBIOS. (Yosemite runs Windows XP, Huix and Sikorsky are running
Linux.)

My figures do have one peculiar outlier. Queries from Sikorsky to
Yosemite were taking 4-5 seconds, consistently; identical queries from
Huix to Yosemite were more consistent with other data. I have no idea
why this is.

So, the figures! Every figure I could get for talking to a Linux
server (either Huix or Sikorsky) was between 7ms and 16ms. (Any
particular combination of client and server is fairly stable, e.g.
sikorsky -> sikorsky is consistently 8ms.) And talking to the Windows
server, aside from the crazy outlier, varied from 22ms to 29ms.
Considering that the Windows lookups involve NetBIOS, I'm not
particularly surprised; there's a bit of cost there.

That's the entire round-trip cost. The queries from Sikorsky to
Yosemite involve three computers (the client, the server, and the file
server), and are completed in under 30 milliseconds. That still gives
you 70 milliseconds to render the page to the user and still be
within the estimated response time for an immediate action. In the
case of localhost, as mentioned above, that figure comes down to just
8ms - just *eight milliseconds* to do a query involving two servers -
so I have to conclude that HTTP is plenty fast enough for a UI. I have
seen a number of desktop applications that can't beat that kind of
response time.

There, thou hast it all, Master Wilfred. Make the most of it. :)

ChrisA
 

Wolfgang Keller

> Okay... how long does a round-trip cost?

With a protocol that wasn't made for the purpose (such as HTTP) and all
that HTML to "render" (not to mention javascript that's required for
even the most trivial issues) - way too long.

> Considering that usability guidelines generally permit ~100ms for
> direct interaction,

That's "generous".

A proficient user with a responsive application can easily outpace that.

100ms is definitely a noticeable lag. Even I feel that and I don't use
touch-typing to use the GUI. 50ms might not be noticeable, but I don't
have the skills myself to test that.

> (Magento, a PHP-based online shopping cart system, took the better
> part of a second - something in the order of 700-800ms
> - to add a single item. And that on reasonable hardware, not a
> dedicated server but my test box was certainly not trash.)

That's not a question of hardware. Just like with MS (Not Responding).

> That's the entire round-trip cost. The queries from Sikorsky to
> Yosemite involve three computers (the client, the server, and the file
> server), and is completed in under 30 milliseconds.

I am talking about applications that actually do something. In my case,
database applications. A PostgreSQL transaction is supposed to take at
most 25ms to complete (anything above is generally considered an issue
that needs to be solved, such as bad SQL), *server-side*. That leaves
you another 25ms for the entire network protocol (the pgsql protocol,
whatever it is, was designed for the purpose, unlike HTTP) *and* the
client-side application logic, including the GUI "rendering".

Qt is already quite sluggish sometimes, I don't know why. GTK and
wxPython "feel" swifter, at least on an actual *operating* system. MS
(Not Responding) is definitely incapable of allowing applications
anything remotely close to "responsiveness". Minute-long lockups with a
frozen cursor are "normal".

> That still gives you 70 milliseconds to render the page to the user,

Forget that.

25ms for client-server (pgsql) network protocol, client-side
application logic *and* GUI.

With a "web" application that would have to include "application
server"-side application logic, *and* generation of HTML (and
javascript), *and* HTTP protocol *and* HTML "rendering" *and*
client-side javascript.

Won't work.

Sincerely,

Wolfgang
 

Chris Angelico

> With a protocol that wasn't made for the purpose (such as HTTP) and all
> that HTML to "render" (not to mention javascript that's required for
> even the most trivial issues) - way too long.

You keep saying this. I have yet to see actual timings from you.

> 100ms is definitely a noticeable lag. Even I feel that and I don't use
> touch-typing to use the GUI. 50ms might not be noticeable, but I don't
> have the skills myself to test that.

Okay, so let's talk 50ms then. We can handle this.

> I am talking about applications that actually do something. In my case,
> database applications. A PostgreSQL transaction is supposed to take at
> most 25ms to complete (anything above is generally considered an issue
> that needs to be solved, such as bad SQL), *server-side*. That leaves
> you another 25ms for the entire network protocol (the pgsql protocol,
> whatever it is, was designed for the purpose, unlike HTTP) *and* the
> client-side application logic, including the GUI "rendering".

No problem. Again taking *actual real figures*, I got roughly 35-40 tps
in PostgreSQL across a LAN. That's around about the 25ms figure you're
working with, so let's use that as a baseline. My benchmark was
actually a durability test from work, which was done on two laptops on
a gigabit LAN, with the database server brutally powered down in the
middle of the test. Each transaction updates a minimum of two rows in
a minimum of one table (transaction content is somewhat randomized). So
that's 25ms for the database, leaving us 25ms for the rest.

> 25ms for client-server (pgsql) network protocol, client-side
> application logic *and* GUI.
>
> With a "web" application that would have to include "application
> server"-side application logic, *and* generation of HTML (and
> javascript), *and* HTTP protocol *and* HTML "rendering" *and*
> client-side javascript.
>
> Won't work.

I've demonstrated already that with basic hardware and a simple Python
HTTP server, the network, application logic, and generation of HTML all
take a total of 8ms. That leaves 17ms for rendering the HTML. Now,
getting figures for the rendering of HTML is not easy; I've just spent
fifteen minutes researching on Google and playing with the Firebug-like
feature of Chrome, and haven't come up with an answer; so it may well
be that 17ms isn't enough for a full page load. However, I would say
that the matter is sufficiently borderline (bearing in mind that you
can always use a tiny bit of JS and a DOM change) that it cannot be
called "Won't work"; it's what Mythbusters would dub "Plausible".

Of course, if you're trying to support MS IE, there's no way you'll
guarantee <50ms response time. This is all predicated on having at
least reasonably decent hardware and software. But using either the
100ms figure from common usability guidelines [1] [2] [3] or your more
stringent 50ms requirement [4], it's certainly entirely possible to
achieve immediate response using AJAX.

I've worked with real figures, hard numbers. You keep coming back with
vague FUD that it "won't work". Where are your numbers? And like Sir
Ruthven Murgatroyd, I fail to see how you can call this impossible in
the face of the fact that I have done it.

ChrisA

[1] http://www.nngroup.com/articles/response-times-3-important-limits/
[2] http://usabilitygeek.com/14-guidelines-for-web-site-tabs-usability/#attachment_719
[3] http://calendar.perfplanet.com/2011/how-response-times-impact-business/
[4] Can you provide a link to any study anywhere that recommends 50ms?
 