The Revenge of the Geeks


Arne Vajhøj

I don't really like SOAP...

IMHO, it seems wasteful, and a dedicated protocol could probably be more
efficient in most cases.

SOAP is designed by committee.
not really familiar with this.

looking it up.

actually, in a general sense, this sounds a lot like how my stuff works
internally.

The principle can certainly also be used in SE context.

Arne
 

Arne Vajhøj

dunno about an app working in a browser, I haven't personally really
looked much into this. the one thing I had noted which I felt might make
this worthwhile was "Google Native Client", but given it is Chrome-only
at this point, this is a drawback (better if Firefox supported it, but
the FF people apparently oppose it).

Adobe Flash sometimes seemed like a possible option, but isn't
particularly compelling, and the development environment apparently
costs money.

Java applet, Flash, SilverLight, Google Native, JavaScript - there
are plenty of options.

The Adobe GUI development tools cost money.

But you can actually develop MXML and AS with any
editor/IDE and you can build with ant and the Flex SDK.
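
A minimal Ant build along those lines could look like this (a sketch only:
the FLEX_HOME path is a placeholder, and the flexTasks.jar location may
differ between Flex SDK versions):

```xml
<!-- hedged sketch of an Ant build using the Flex SDK's ant tasks -->
<project name="flexapp" default="compile">
    <property name="FLEX_HOME" value="/opt/flex_sdk"/>
    <taskdef resource="flexTasks.tasks"
             classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>
    <target name="compile">
        <mxmlc file="src/Main.mxml" output="bin/Main.swf">
            <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml"/>
        </mxmlc>
    </target>
</project>
```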

Arne
 

BGB

BGB said:
I had generally imagined web-servers mostly like they were file-servers,
but with the plus-side of having a defined browser-level interface, and
the ability to use scripts or binaries to generate contents.

Terminology varies, but by widely accepted convention a web server specifically
handles browser-style interactions using (mostly) HTML over HTTP. An application
server, as others explained upthread, provides all kinds of services for
applications designed to run in that environment. It might, and often does, incorporate
a web server as part of the panoply of services provided. But the concept you must
grasp is that application servers, such as those that implement Java EE, are kitchen-sink
propositions, giving all kinds of help to applications from an "enterprise" (read "project",
"organization", or a professionally cognate term) perspective.

You can play reductionist mind games all you want; I mean, really, isn't it all "just" machine
code in the end? Playing that type of game utterly misses the point, and to reply in such
terms is inappropriate. The point of any framework, or computer language, or toolkit, is
to educe a closer mapping between the ontology of the enterprise (in its literal English meaning)
and the ontology of the model you're building. To sit disingenuously in the wrong ontology
neither serves your education nor contributes to the common weal.


my view of things is typically built from seeing how the lower layers
work, and how all the parts then fit together on top of this.

this "reductionism" is mostly trying to figure out how the tower is
built and how the parts generally fit together (and which parts comprise
various mechanisms, concepts, ...).

like, it doesn't make much sense to see the high-level apart from the
structures which support its operation.


like, what web-servers do makes little sense, apart from considering
what HTTP itself does.

not that there can't be abstractions, but normally at least these
abstractions need some sort of basis or foundation for their behavior
(even if incomplete or weak).

But you've been around this newsgroup a long, long time and by now you really should have
found out some of this for yourself. Java EE is well documented and the tools are free and open
source. So if you really had any genuine desire to understand the concepts and goals of the
specifications, you'd've done so already.

I never really went anywhere near Java EE though...

I had always thought the relation between Java SE and EE was more
like that between, say, "Windows 7 Home" and "Windows 7 Ultimate", IOW:
there are differences, but they largely do the same things in the same way.

then just suddenly realizing that this is not the case, but that they
are in-fact rather different things.

Java EE is like a high-level language, but for deployment and connection of services. It's one of
those things that separates mere programmers from people who can solve problems with software
systems. Its goals are deployability, scalability, ops-friendliness, orchestration-ability (sorry :)),
stability, and pragmatic leverage for useful software systems. It encompasses a broad range of tools,
such as message queues, persistent storage, server clustering, resource management, orchestration,
troubleshooting, and more.

well, but it is also apparently intended for network/internet stuff.


in contrast to say, writing standalone apps for a desktop PC or an
Android phone or similar.

there is not a single type of programmer, or software, and there are
many types of software which may have little to do with either business
or the internet.


like, say, if a person is developing a game on their PC or cell-phone,
is a lot of this stuff involved? probably not.

if a person is running a big website, it probably matters, but not for a
single-player game, nor necessarily for traditional multiplayer (which
usually has a cap of around 8 or 16 players on a single server, and
where servers typically only exist briefly, and disappear when the
person hosting the server exits the game).

there are requirements, but they are typically different requirements
(for example, if using a network, bandwidth and latency may be much
bigger concerns than scalability, since it all has to be real-time and
typically all goes through a single person's home internet connection, ...).


likewise, even for a small-scale website, it may not matter all that
much, if all it ever does is mostly serve up static content and files,
and is typically low-traffic (and, likewise, is served via a home
internet connection), ...
 

BGB

Java applet, Flash, SilverLight, Google Native, JavaScript - there
are plenty of options.

yep, not saying that there aren't a lot of options here.


the main advantage of Native Client would be that it would be easier for
me to target it, mostly because it wouldn't require largely rewriting a
bunch of stuff (all the C parts of the project could be kept intact, and
compiled fairly directly, ...).

but, many of the other options could likely require writing code
specifically for them (in contrast to directly porting preexisting code).


Silverlight could work, but AFAIK C++/CLI doesn't work with it (sadly,
not that it really works great with .NET in general either). the main thing
here is basically using C++/CLI to compile C code into CIL bytecode, but
using C++/CLI tends to reveal some fairly ugly issues...

but, even as such, it is a little less work IME to port code between C
and C# than it is to port code between C and Java.


Flash at least theoretically has a C compiler for it (people have shown
some of the Quake-series games, namely Quake and Quake 3 Arena, running
on Flash before).


HTML+JS was fairly limited in the past, but should be more capable now
(like with WebGL and similar, ...).

I have actually written some small Java applets in the past (very long
ago), but nothing more recent.

both would (fairly likely) require a fair bit of porting effort (and/or
a trans-language compiler).


a further limitation in the JS case though is that, given code is sent
and recompiled from text form, this puts effective size limits on it
(trying to give it a giant mass of trans-compiled code probably won't
work very well, and some other areas of JS give a lot of room for doubt).

between them, trans-compiling to JVM bytecode would probably work
better, but performance is less certain (since pointers and structures
need to be simulated, ...).

probably going through an intermediate stage could be easier:
C -> BSVM bytecode -> JVM bytecode.

mostly as, at both stages, things would fit nicer (and the C->BSVM
conversion would lift out the vast majority of the pointers, namely by
converting most of the pointers to object references, ...).
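
as a rough illustration of what "lifting pointers to object references"
could mean on the JVM side (purely hypothetical names, not the output of
any real trans-compiler):

```java
// hypothetical mapping: a C struct pointer becomes a plain JVM
// object reference after trans-compilation.
//
// C side (conceptually):
//   struct vec3 { float x, y, z; };
//   struct vec3 *v = malloc(sizeof(struct vec3));
//   v->x = 1.0f;

class Vec3 {                   // the struct becomes a plain class
    float x, y, z;
}

public class LiftDemo {
    public static void main(String[] args) {
        Vec3 v = new Vec3();   // malloc + pointer -> new + reference
        v.x = 1.0f;            // v->x -> field access
        System.out.println(v.x);
    }
}
```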


still a lot of hassle though in either scenario.

The Adobe GUI development tools cost money.

But you can actually develop MXML and AS with any
editor/IDE and you can build with ant and the Flex SDK.

yes, ok.

may look into this.
 

Arved Sandstrom

On 1/26/2013 5:25 AM, Arved Sandstrom wrote:
[ SNIP ]
if it is on a different machine, and is providing something for being
accessed over a network, wouldn't that machine be by definition a server?
The terms "client" and "server" are common in discussions of CORBA, yes.
Strictly speaking what you've really got for any given local/remote
method invocation in a CORBA distributed system is an object that
receives the request, and a caller (client) that invokes the method on
the receiving implementation entity - the real server is an Object
Request Broker (ORB). Myself when doing CORBA I prefer to refer to
things as client, servant, ORB etc, and not use the term "server". YMMV.

AHS
 

Arved Sandstrom

On 1/26/2013 12:31 AM, BGB wrote:
[ SNIP ]
XML-RPC never really took off. Instead we got SOAP.

I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality etc., and I'm really doing RPC, I'm using SOAP.

AHS
 

Arne Vajhøj

I never really went anywhere near Java EE though...

Most Java developers work full time or part time in an EE
environment, but some do not.

We may have a tendency to forget about that. Please forgive us
for that.
I had always thought the relation between Java SE and EE was more
like that between, say, "Windows 7 Home" and "Windows 7 Ultimate", IOW:
there are differences, but they largely do the same things in the same way.

then just suddenly realizing that this is not the case, but that they
are in-fact rather different things.

The terms SE and EE do give the wrong impression. For almost all
other products, SE and EE mean the same type of product with different
feature sets and different price points (SE supports up to 4 CPUs, does
not support clustering, and costs 10 K$; EE supports up to 16 CPUs, does
support clustering, and costs 100 K$).

For Java, EE is a general-purpose, server-centric framework on top
of SE.
well, but it is also apparently intended for network/internet stuff.

Web apps can utilize much of this. Clustering is standard for
web apps, and web apps require some support for HTTP, thread pools,
transactions, etc.
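
To make the thread pool + HTTP part concrete, here is a JDK-only sketch
(com.sun.net.httpserver, not EE itself) of the kind of plumbing a
container provides out of the box:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.Executors;

public class MiniContainer {
    public static void main(String[] args) throws Exception {
        // port 0 = pick any free port
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // a real container gives you this thread pool (and much more)
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.createContext("/", ex -> {
            byte[] body = "hello from the pool".getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();

        // hit our own server once, just to show the round trip
        URL url = new URL("http://localhost:"
                + server.getAddress().getPort() + "/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            System.out.println(in.readLine());
        }
        server.stop(0);
    }
}
```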
in contrast to say, writing standalone apps for a desktop PC or an
Android phone or similar.

They may need some of those features, but typically not many.
there is not a single type of programmer, or software, and there are
many types of software which may have little to do with either business
or the internet.

The internet is quite common. But the client side can use the internet without EE.
like, say, if a person is developing a game on their PC or cell-phone,
is a lot of this stuff involved? probably not.
Yep.

likewise, even for a small-scale website, it may not matter all that
much, if all it ever does is mostly serve up static content and files,
and is typically low-traffic (and, likewise, is served via a home
internet connection), ...

Even for a small scale web site the convenience of Tomcat over a
CGI-BIN could justify EE (or PHP or ASP.NET if Java is not a
prerequisite).

Arne
 

Arne Vajhøj

On 1/26/2013 5:25 AM, Arved Sandstrom wrote: [ SNIP ]
Both CORBA and DCOM are meant for distributed applications. Like Arne
said, both have to do with software components on numerous different
machines, possibly different languages, and having defined interfaces
for RPC. Myself I wouldn't even use the term "server" to explain what
DCOM and CORBA do, not at a high level.

if it is on a different machine, and is providing something for being
accessed over a network, wouldn't that machine be by definition a server?
The terms "client" and "server" are common in discussions of CORBA, yes.
Strictly speaking what you've really got for any given local/remote
method invocation in a CORBA distributed system is an object that
receives the request, and a caller (client) that invokes the method on
the receiving implementation entity - the real server is an Object
Request Broker (ORB). Myself when doing CORBA I prefer to refer to
things as client, servant, ORB etc, and not use the term "server". YMMV.

But then we are down at the EJB, JNDI, EJB client level.

I would call VisiBroker a server.

Arne
 

Arne Vajhøj

On 1/26/2013 12:31 AM, BGB wrote:
[ SNIP ]
FWIW: I once messed briefly with XML-RPC, but never really did much with
it since then, although long ago, parts of its design were scavenged and
repurposed for other things (compiler ASTs).

XML-RPC never really took off. Instead we got SOAP.

I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality etc., and I'm really doing RPC, I'm using SOAP.

Standards are rarely optimal.

people are not too happy about HTTP and SMTP either.

But a standard is a standard.

SOAP got the tools support and all the standards that
build on top of it.

We can either accept it and live happy with it or invent
a time machine and go back to around 1998 and tell a few
people from IBM and MS how it should be done.

Arne
 

Arne Vajhøj

yep, not saying that there aren't a lot of options here.


the main advantage of Native Client would be that it would be easier for
me to target it, mostly because it wouldn't require largely rewriting a
bunch of stuff (all the C parts of the project could be kept intact, and
compiled fairly directly, ...).

Sure about that?

I would expect Native Client to block a lot of code for
security reasons.
a further limitation in the JS case though is that, given code is sent
and recompiled from text form, this puts effective size limits on it
(trying to give it a giant mass of trans-compiled code probably won't
work very well, and some other areas of JS give a lot of room for doubt).

It is common today to develop JS source code with comments, long
names, indentation etc. and then strip it before deploying to
reduce size.

Arne
 

BGB

On 1/26/2013 8:12 AM, Arne Vajhøj wrote:
On 1/26/2013 12:31 AM, BGB wrote: [ SNIP ]

FWIW: I once messed briefly with XML-RPC, but never really did much with
it since then, although long ago, parts of its design were scavenged and
repurposed for other things (compiler ASTs).

XML-RPC never really took off. Instead we got SOAP.


I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and
disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality etc., and I'm really doing RPC, I'm using SOAP.

Standards are rarely optimal.

people are not too happy about HTTP and SMTP either.

well, luckily there is HTTP 2.0 in development, which "should" be a
little better, at least as far as it will Deflate the messages...

http://en.wikipedia.org/wiki/Http_2.0
http://en.wikipedia.org/wiki/SPDY

(in contrast to HTTP 1.1, it will multiplex the requests and responses
over a single socket, and also compress the data).
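
as a quick sketch of why the compression part matters (SPDY compresses
headers with zlib deflate; plain java.util.zip is used here just to show
the effect on a typical repetitive header block):

```java
import java.util.zip.Deflater;

// sketch: how much a typical repetitive HTTP request-header block
// shrinks under plain deflate
public class HeaderSqueeze {
    public static void main(String[] args) throws Exception {
        String headers =
            "GET /index.html HTTP/1.1\r\n" +
            "Host: www.example.com\r\n" +
            "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:18.0) " +
                "Gecko/20100101 Firefox/18.0\r\n" +
            "Accept: text/html,application/xhtml+xml\r\n" +
            "Accept-Encoding: gzip, deflate\r\n" +
            "Connection: keep-alive\r\n\r\n";
        byte[] input = headers.getBytes("UTF-8");

        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[1024];
        int compressedLen = deflater.deflate(buf);
        deflater.end();

        System.out.println(input.length + " bytes -> "
                + compressedLen + " bytes");
        System.out.println(compressedLen < input.length);
    }
}
```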

But a standard is a standard.

SOAP got the tools support and all the standards that
build on top of it.

We can either accept it and live happy with it or invent
a time machine and go back to around 1998 and tell a few
people from IBM and MS how it should be done.

or just blow it off and do whatever...


like, standards are useful so long as they are useful, but otherwise,
unless there is some greater reason (mandatory inter-op or orders from
above), why bother?

like, unless it is better for the project overall (or otherwise benefits
the developers in some way), why not just go and do something different?

granted, yes, usually standards are a good thing, but usually these are
*good* standards. luckily at least, some of the worse offenders here
have gained the fate they deserve.
 

BGB

Most Java developers work full time or part time in an EE
environment, but some do not.

We may have a tendency to forget about that. Please forgive us
for that.

yeah.

granted, I am not particularly much of a serious Java developer either.
I am more often here occasionally for "interesting" topics, but the
majority of the code I write is in other languages.

The terms SE and EE do give the wrong impression. For almost all
other products, SE and EE mean the same type of product with different
feature sets and different price points (SE supports up to 4 CPUs, does
not support clustering, and costs 10 K$; EE supports up to 16 CPUs, does
support clustering, and costs 100 K$).

For Java, EE is a general-purpose, server-centric framework on top
of SE.

yep, fair enough...

Web apps can utilize much of this. Clustering is standard for
web apps, and web apps require some support for HTTP, thread pools,
transactions, etc.

yep.



They may need some of those features, but typically not many.

yeah.

many don't even necessarily need sockets.

The internet is quite common. But the client side can use the internet without EE.

it depends some on the app.

some categories of apps don't use the internet at all, and many others
in only rudimentary ways (like supporting updates), or complete a
specific task using application-specific protocols.


web-browsers and email clients are a few of the actually
internet-requiring apps (as their functionality is tied to the
internet), and web-apps basically *are* internet.


a question could then become how much of the various types of apps
people use:
most I use are plain (non-network) desktop apps;
then of course, email, usenet, and a browser;
main "web-apps" I end up using mostly at this point are Wikipedia,
YouTube, Google, and periodically Facebook and online-dating (not that
this ever amounts to much...).

counting task-bar icons, there are 11 groups of offline apps running,
and 3 internet-based apps: Firefox, Winamp, Thunderbird.


granted, I often end up with a lot of tabs open in FF, but by no means
does life revolve around it.

quick survey: most are Wikipedia articles, and some number of other
static pages, ...

Even for a small scale web site the convenience of Tomcat over a
CGI-BIN could justify EE (or PHP or ASP.NET if Java is not a
prerequisite).

could be.

I think it is general tradeoffs.

at least between C and PHP:
PHP is nice so far as it is fairly good at generating web-pages.

a C based CGI binary has a slight advantage when it comes to non-HTML
content, as it can do a few more advanced tricks, and allows sending
pretty much any kind of content (I suspect it also handles streams, but
I would need to confirm this with more tests, basically to confirm whether
it is actually streaming the content, as opposed to buffering it).

yes, I control my own server, so getting the code compiled for it isn't
an issue.


ASP.NET exists, but personally I have never used it, so can't say much
about it beyond this.
 

BGB

Sure about that?

I would expect Native Client to block a lot of code for
security reasons.

from what I have read, it basically gives a POSIX-like API with OpenGL
and a few other things, all running inside of its own sandboxed address
space and filesystem.

if it supports this much (along with other basic things, like ability to
build apps as multiple libraries, ...), most of my stuff should work
without too much issue.

hopefully at least it is less of a hassle than dealing with the Android
NDK, but who knows sometimes?...


(what partly killed my efforts on Android was not so much getting
things built, but mostly me not having any good idea how to make my
stuff particularly usable via a touch-screen UI, vs a mouse+keyboard UI,
and not having many good ideas for UIs which would work well absent a
mouse+keyboard interface). (a secondary issue mostly had to do with
concern over the often low hardware specs of typical Android devices as
compared with a typical modern desktop PC, ...).

I did at least get as far as confirming that a lot of my stuff built and
worked on ARM-based targets though.


one can probably assume it supports most basic things, but I have yet to
find a good list of what parts of POSIX it supports. the lists I have
found seem to mention including most parts I typically make use of
(libdl, pthreads, calls like "mmap()", ...).

it was mentioned on the site that it does apparently lack BSD sockets
though (annoying, but not a critical loss...).


granted, yes, not like code will probably be usable unmodified, but this
is sort of to be expected in C land (you usually end up needing a bunch
of #ifdef's and globs of target-specific wrapper code anyways).


thus far, nothing looks particularly unusual though...

It is common today to develop JS source code with comments, long
names, indentation etc. and then strip it before deploying to
reduce size.

yes, but I mean, say you have a big C codebase, and try to trans-compile
to JS. (say, you trans-compile an Mloc-range application to JS and try
to get it loaded in the browser). even stripped, it would still be big.

the worry is that it could put a strain on the browser getting a large
app downloaded and compiled.

then again, I guess a proof of concept would be if anyone can get
something like Doom 3 trans-compiled to JS and running in a browser.
 

Arved Sandstrom

On 01/26/2013 04:47 PM, BGB wrote:
On 1/26/2013 8:12 AM, Arne Vajhøj wrote:
On 1/26/2013 12:31 AM, BGB wrote:
[ SNIP ]

FWIW: I once messed briefly with XML-RPC, but never really did much with
it since then, although long ago, parts of its design were scavenged and
repurposed for other things (compiler ASTs).

XML-RPC never really took off. Instead we got SOAP.


I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and
disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality etc., and I'm really doing RPC, I'm using SOAP.

Standards are rarely optimal.

people are not too happy about HTTP and SMTP either.

well, luckily there is HTTP 2.0 in development, which "should" be a
little better, at least as far as it will Deflate the messages...

http://en.wikipedia.org/wiki/Http_2.0
http://en.wikipedia.org/wiki/SPDY

(in contrast to HTTP 1.1, it will multiplex the requests and responses
over a single socket, and also compress the data).

But a standard is a standard.

SOAP got the tools support and all the standards that
build on top of it.

We can either accept it and live happy with it or invent
a time machine and go back to around 1998 and tell a few
people from IBM and MS how it should be done.

or just blow it off and do whatever...


like, standards are useful so long as they are useful, but otherwise,
unless there is some greater reason (mandatory inter-op or orders from
above), why bother?

like, unless it is better for the project overall (or otherwise benefits
the developers in some way), why not just go and do something different?

granted, yes, usually standards are a good thing, but usually these are
*good* standards. luckily at least, some of the worse offenders here
have gained the fate they deserve.
Another note on SOAP: many (I'd say most) of the pain points encountered
by a developer are not problems of SOAP itself. You can use a tool like
SoapUI to hit WSDLs, and inspect the raw XML being passed back and forth
- if the WSDLs and XSDs are well-crafted then the XML requests and
responses are quite readable and not particularly verbose.

You'd be better off using SOAP most of the time for RPC-type work than
rolling your own.

WS-Security is a different matter. That's complicated for many use
cases; you don't even want to look at the typical raw XML for a request.
:) OTOH there is really no other game in town for this aspect.

What really complicates things is the tooling. For Java, you'd probably
use Axis or CXF. I no longer like Axis at all, for various reasons, so
I've moved to CXF. But even CXF, if you generate your client classes off
a WSDL, the verbosity and complexity of the classes is offputting. You
have to acquire a fair bit of experience with a language-specific WS
framework in order to make generated code half-reasonable to work with.
.NET, say with C# as the language, is no better - lots of little gotchas
that you just have to be aware of.

So it's not all the fault of SOAP - programming language implementations
for developing client and server code complicate matters quite a lot.

Usually in the enterprise world you have little or no leeway as to how
systems talk to each other. You may have a few options to choose from,
but rolling your own is looked upon askance.

AHS
 

BGB

On 1/26/2013 8:47 PM, Arved Sandstrom wrote:
On 01/26/2013 04:47 PM, BGB wrote:
On 1/26/2013 8:12 AM, Arne Vajhøj wrote:
On 1/26/2013 12:31 AM, BGB wrote:
[ SNIP ]

FWIW: I once messed briefly with XML-RPC, but never really did much with
it since then, although long ago, parts of its design were scavenged and
repurposed for other things (compiler ASTs).

XML-RPC never really took off. Instead we got SOAP.


I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and
disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality etc., and I'm really doing RPC, I'm using SOAP.

Standards are rarely optimal.

people are not too happy about HTTP and SMTP either.

well, luckily there is HTTP 2.0 in development, which "should" be a
little better, at least as far as it will Deflate the messages...

http://en.wikipedia.org/wiki/Http_2.0
http://en.wikipedia.org/wiki/SPDY

(in contrast to HTTP 1.1, it will multiplex the requests and responses
over a single socket, and also compress the data).

But a standard is a standard.

SOAP got the tools support and all the standards that
build on top of it.

We can either accept it and live happy with it or invent
a time machine and go back to around 1998 and tell a few
people from IBM and MS how it should be done.

or just blow it off and do whatever...


like, standards are useful so long as they are useful, but otherwise,
unless there is some greater reason (mandatory inter-op or orders from
above), why bother?

like, unless it is better for the project overall (or otherwise benefits
the developers in some way), why not just go and do something different?

granted, yes, usually standards are a good thing, but usually these are
*good* standards. luckily at least, some of the worse offenders here
have gained the fate they deserve.
Another note on SOAP: many (I'd say most) of the pain points encountered
by a developer are not problems of SOAP itself. You can use a tool like
SoapUI to hit WSDLs, and inspect the raw XML being passed back and forth
- if the WSDLs and XSDs are well-crafted then the XML requests and
responses are quite readable and not particularly verbose.

You'd be better off using SOAP most of the time for RPC-type work than
rolling your own.

IMHO, better is probably not using RPC, if possible, but either way.

WS-Security is a different matter. That's complicated for many use
cases; you don't even want to look at the typical raw XML for a request.
:) OTOH there is really no other game in town for this aspect.

What really complicates things is the tooling. For Java, you'd probably
use Axis or CXF. I no longer like Axis at all, for various reasons, so
I've moved to CXF. But even CXF, if you generate your client classes off
a WSDL, the verbosity and complexity of the classes is offputting. You
have to acquire a fair bit of experience with a language-specific WS
framework in order to make generated code half-reasonable to work with.
.NET, say with C# as the language, is no better - lots of little gotchas
that you just have to be aware of.

So it's not all the fault of SOAP - programming language implementations
for developing client and server code complicate matters quite a lot.

well, maybe.

it is easier for a language which has built-in lists (in the Lisp
sense), but these are an uncommon concept in mainstream languages, and
if built from library features they aren't quite as nice.


for example, in C, I can compose messages like:
t=dylist4s("message", dyint(1), dyint(2), dyint(3));
btSendMessage(tgt, t);

and, in the receiver:
t=btReceiveMessage(src);
if(t && dyFormIs(t, "message"))
{ ... }

which *could* always be a bit worse.


in my own language, it is like this:
t=#{#message, 1, 2, 3};
btSendMessage(tgt, t);
....
t=btReceiveMessage(src);
if(t && t[0]==#message) { ... }


for Java, maybe it could be something like:
t=Cons.list("message", 1, 2, 3); //lots of overloads
tgt.sendMessage(t);
....
t=src.receiveMessage();
if((t!=null) && t.formIs("message"))
{ ... }

probably with special Cons and MailBox classes.
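
a minimal sketch of what such a Cons class might look like (entirely made
up, just to make the idea concrete; the MailBox side is omitted):

```java
import java.util.ArrayList;
import java.util.List;

// hypothetical Cons class: a tagged list whose first element
// names the "form" of the message
class Cons {
    final List<Object> items = new ArrayList<>();

    // "lots of overloads" collapsed into one varargs factory
    static Cons list(Object... xs) {
        Cons c = new Cons();
        for (Object x : xs) c.items.add(x);
        return c;
    }

    boolean formIs(String tag) {
        return !items.isEmpty() && tag.equals(items.get(0));
    }

    public static void main(String[] args) {
        Cons t = Cons.list("message", 1, 2, 3);
        System.out.println(t.formIs("message"));
        System.out.println(t.items.get(2));
    }
}
```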

Usually in the enterprise world you have little or no leeway as to how
systems talk to each other. You may have a few options to choose from,
but rolling your own is looked upon askance.

well, this is where the whole "mandatory interop or orders from above"
comes in. in such a case, people say what to do, and the programmer is
expected to do so.

but, I more meant for cases where a person has free say in the matter.


and, also, a person still may choose an existing option, even if bad,
because it is the least effort, or because it is still locally the best
solution.

like, rolling one's own is not required, nor necessarily always the best
option, but can't necessarily be summarily excluded simply for the sake
of "standards", as doing so may ultimately just make things worse overall.


historically there have been cases where standards organizations have
made some fairly bad standards, and most everyone else ended up ignoring
them (and they largely became forgotten).
 

BGB

On 01/26/2013 04:47 PM, BGB wrote:
On 1/26/2013 8:12 AM, Arne Vajhøj wrote:
On 1/26/2013 12:31 AM, BGB wrote:
[snip]


people are not too happy about HTTP and SMTP either.

huh ... what's wrong with HTTP ???

Nobody could possibly have imagined that HTTP would
become such an important protocol.

If people are unhappy about HTTP, it's because they are trying to force
the protocol to do things it was never designed to do. The fact that it
*is* so flexible and extensible is surely testament to one man's vision
and humanity's ingenuity in general.

I know we don't see eye to eye about many things Arne but that has to be
one of your more absurd statements.

I wonder how many contributors to this list would be making a (good)
living out of 'IT' if HTTP and the World Wide Web had never been invented.

probably the main issue is that it is built around a single
request/response pair per exchange; HTTP/1.1 keep-alive lets the socket
be reused for sequential requests, but there is no real multiplexing of
concurrent requests (as-in, not without ugly hacks).

to some extent, this has led to the development of technologies like
HTTP 2.0 / SPDY, and WebSockets, partly to better address some of these
use-cases.
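As a concrete illustration of the reuse rules being discussed, here is a small sketch of the HTTP connection-persistence logic from RFC 7230: an HTTP/1.1 connection persists unless `Connection: close` is sent, while an HTTP/1.0 connection persists only with an explicit `Connection: keep-alive`. The class and method names are made up for the example:

```java
import java.util.Locale;
import java.util.Map;

// Sketch of the connection-persistence rules from RFC 7230, section 6.3.
// canReuse() returns true if another request may be sent on the same socket.
class HttpPersistence {
    static boolean canReuse(String httpVersion, Map<String, String> headers) {
        String connection = headers.getOrDefault("Connection", "")
                                   .toLowerCase(Locale.ROOT);
        if (connection.contains("close")) {
            return false; // an explicit close always wins
        }
        if ("HTTP/1.1".equals(httpVersion)) {
            return true;  // HTTP/1.1 is persistent by default
        }
        // HTTP/1.0 persists only with an explicit keep-alive token
        return connection.contains("keep-alive");
    }
}
```

So plain sequential reuse has been in the protocol since HTTP/1.1; what the "ugly hacks" complaint really points at is concurrent multiplexing, which is what SPDY/HTTP 2.0 and WebSockets address.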


but, for many apps, there may not be much need for HTTP inter-op, so
they may develop their own protocols.
 

BGB

I did at least get as far as confirming that a lot of my stuff built and
worked on ARM-based targets though.
Slightly OT comment, but:

I can confirm that C ports very easily to ARM within the Linux/gcc
environment. My standard CLI editor is MicroEMACS and has been been for
years in UNIX, Linux, OS/9 68K and DOS/Windows environments, so when I
got a Debian-based RaspberryPi and found its editors were nano and a less-
capable vi than I'm used to, of course I ported Microemacs. The only
issue I has was that it is termcap-based but the RPi Debian port doesn't
support termcap. However, a quick grab for the GNU termcap and Microemacs
was up and running. No code changes needed at all[1].

I'm running my RPi in headless mode: if any of you are thinking of doing
the same and would like to manage the first boot without a directly
attached keyboard and screen, contact me offline for the gory details.

[1] apart from the termcap file: surprisingly, GNU termcap doesn't
support the 'tc' attribute but the RedHat termcap does, so I had to edit
/etc/termcap to create a monolithic xterm definition for the RPi.

yeah.

Linux on ARM isn't a big deal to port to.

it was initially more scary-seeming, and I am not exactly a huge fan of
the Thumb ISA (I prefer x86 IMHO), but in-all, building for ARM wasn't a
big deal. (granted, the form I was targeting was running in QEMU, rather
than being an RPi).


Android is a little more funky, mostly as Google handled their build
tools in a very weird way for the NDK (at least at the time, dunno about
now, not looked into it more recently).

basically, stuff isn't handled like on a more normal Linux, but rather
there is lots of APK funkiness, ...


NaCl looks interesting, at least so far as it seems to work (still
slightly strange in a few ways, but no huge surprise there).

less clear is supporting JIT on NaCl. for x86, it would require some
tweaking, and for PNaCl, I have little idea. probably not a huge issue
though.

I still haven't really looked into how NaCl handles access to things
like file-resources (still to-be-researched).


granted, whether or not all this stuff makes enough sense as an
in-browser app to be worthwhile, is yet to be seen.
 

Arne Vajhøj

yeah.

granted, I am not particularly much of a serious Java developer either.
I am mostly here for the occasional "interesting" topic, but the
majority of the code I write is in other languages.

As long as you don't try to shoehorn the different languages
into the same paradigm then that should not be a problem.

Arne
 

Arne Vajhøj

On 01/26/2013 04:47 PM, BGB wrote:
On 1/26/2013 8:12 AM, Arne Vajhøj wrote:
On 1/26/2013 12:31 AM, BGB wrote:
[ SNIP ]

FWIW: I once messed briefly with XML-RPC, but never really did much
with
it since then, although long ago, parts of its design were scavenged
and
repurposed for other things (compiler ASTs).

XML-RPC never really took off. Instead we got SOAP.


I don't really like SOAP...
[ SNIP ]

I don't know anyone who does, I know I don't. Still, it's what we've
got. For well-designed operations and schemas it's not that verbose, not
appreciably worse than JSON. Having WSDLs and the ability to validate is
useful, although over the years I've come to believe that WSDL-first is
an abomination unless the project is extremely structured and
disciplined.

SOAP is also - still - the only game in town for various security and
transactional tasks, even if aspects of WS-Security are atrocious. For
true web services I'd use REST almost always, because SOAP actually
isn't much to do with the Web at all. But if I need application
security, encryption of portions of a message, non-repudiation,
transactionality, etc., and I'm really doing RPC, I'm using SOAP.

Standards are rarely optimal.
But a standard is a standard.

SOAP got the tools support and all the standards that
build on top of it.

We can either accept it and live happy with it or invent
a time machine and go back to around 1998 and tell a few
people from IBM and MS how it should be done.

or just blow it off and do whatever...

like, standards are useful so long as they are useful, but otherwise,
unless there is some greater reason (mandatory inter-op or orders from
above), why bother?

like, unless it is better for the project overall (or otherwise benefits
the developers in some way), why not just go and do something different?

Because in the big picture compatibility is important and
the potential improvements are not.

Arne
 
