Android—Why Dalvik?

  • Thread starter Lawrence D'Oliveiro
  • Start date

BGB

The funny thing is that Java, when it came out, was supposed to
solve all these differences by putting a virtual OS between the
application and the OS; this way one writes to this one common
virtual OS (the VM) and doesn't have to worry about the different
OS's below it.

But now it seems we have different virtual OS's also coming out.

So, I have a brilliant solution I'd like to suggest:

what we need is a SUPER VM

A super virtual OS is a virtual OS which runs on top of another
virtual OS.

i.e. the super VM, hides which VM it is running under, so it
runs on top of all the other VM's:

SUPER VM
Java VM, Google VM, Windows NET VM, etc..
WINDOWS, LINUX, Mac, VMS, etc..

This of course works until someone comes up with a different
version of the SUPER VM; then we go and make a SUPER SUPER VM. So
we need to make sure, this time, that we have provisions in place
to prevent someone from making a different SUPER VM.

I would like to go patent this now.


well, IMO, trying to homogenize the environment is itself a problem...

this is actually part of what I think is a weak-point of the JVM strategy:
they try to gloss over the real OS/... by basically creating a new layer
of abstractions, and wrapping everything in the new API.

too many OS's? make a "one virtual OS to rule them all" (JVM).
too many languages? make a "one language to rule them all" (Java).

sadly, this strategy is prone to eventually showing its limitations, as
now the VM/framework implementer has taken on the responsibility of
providing pretty much any major feature the OS's might provide and
that apps might want to make use of, plus the issue of which features
may and may not exist on various targets, ...


my idea was less drastic:
rather than creating an entirely new set of abstractions, one creates
a VM which is itself better suited to heterogeneous environments.

in C, we called this mechanism "#ifdef".
a new language can likewise devise newer, ifdef-like mechanisms.


for example, in my present language, this would be done something like:

$[ifdef(SOMEFEATURE)] public void someMethod()
{
    ...
}

and:

$[ifdef(SOMEFEATURE)]
{
    ...
}

where: $[...] is the present syntax for attributes (they started out
just as MS-style "[...]" attributes, but were more recently changed due
to the prior syntax creating syntactic ambiguity in some cases).


the advantage, then, is that one doesn't have to provide as much: for
any features not provided directly by the framework, one can go back
to the OS.

for example, I can make Win32 API calls from my scripting HLL, not
because I explicitly implemented support for Win32 API functionality,
but more because the VM can see all of the Win32 API functionality...

granted, yes, this same functionality will be absent if running on a
different OS, say, Linux, hence the need for an ifdef-like mechanism...


I was recently, though, thinking of the issue like "maybe, you know, my
language might need some sort of standard library...", since, as-is,
nearly everything I have been doing API-wise has basically been via
making calls into C.


if by some chance a JVM port were made of the VM, then pretty much
everything would have to redirect to the Java class library instead.

lacking a defined API of some sort, this could be a little ugly.

but, then I have to come up with what the API should look like.
I could do nested packages and classes, or I could do it more like a
pseudo C style (maybe with packages and a lot of package-level
functions). it is... a decision...

my present personal leaning is mostly to do things C style, with
classes/... for some things, but not as the main style.

but, also possible would be to just partly rip off the design of the
Java class library... Packages: "bs.lang", "bs.io", "bs.util", ...
albeit I would probably diverge somewhat WRT the "io" package,
personally, as I can't see why there need to be so many classes in
there, and would much rather do file IO in a more C-like manner...

for example:

import bs.io;

void loadSomething()
{
    File fd;
    string[] sarr;

    fd=File.open("foo.txt", "rt");
    while(!fd.eof)
    {
        sarr=fd.gets().split();
        if(!sarr[0])continue;
        if(*sarr[0]==';')continue;
        switch(sarr[0])
        {
        case "foo": ... break;
        case "bar": ... break;
        default: ... break;
        }
    }
}

past ideas here have also included putting all standard exceptions into
their own package, but there is not as much reason to do so with my
language, partly because I can put multiple classes into a single file
and so they are less liable to clutter up the package.

or such...
 

Nasser M. Abbasi

my idea was less drastic:
rather than creating an entirely new set of abstractions, one creates
a VM which is itself better suited to heterogeneous environments.

in C, we called this mechanism "#ifdef".
a new language can likewise devise newer, ifdef-like mechanisms.

I know all about #ifdef. On a project a long time ago,
I worked on porting the Netscape web server source code; it would
build for, I think, 18 different platforms. Most of these were
flavors of Unix, a few flavors of Windows, and OS/2 and such.

The same source code, 18 or so different build targets.

Just understanding the makefiles, never mind the 2 million
or so lines of source code with the #ifdefs in them, was a
nightmare :)

The same was for the Netscape browser code.

Java is supposed to solve all this #ifdef stuff.

--Nasser
 

Lawrence D'Oliveiro

sadly, Linux has not entirely caught up to Windows WRT making an OS
which is solidly good either...

Except Android is kicking Microsoft’s arse.
 

Lawrence D'Oliveiro

Heck, even porting 32-bit code to a 64-bit target of an otherwise
identical system is often nontrivial for any decently sized project.

“Decently sized” like the Linux kernel? Which is portable across something
like two dozen different architectures, both 32-bit and 64-bit?
 

Joshua Cranmer

“Decently sized” like the Linux kernel? Which is portable across something
like two dozen different architectures, both 32-bit and 64-bit?

I would call the porting work of Linux nontrivial. After all, supporting
a new architecture (to my knowledge) requires cloning a directory and
probably making several changes to it.

Remember that I said "nontrivial", not "impossible".
 

BGB

Except Android is kicking Microsoft’s arse.

on cellphones...


not seen any laptops or desktop PC's with Android though...

it is more of a mixed bag with tablets though...
I had remotely considered getting a tablet which ran an Intel Atom and
Windows 7, but I am not certain about blowing the $300 on it...

also, there is the question of what exactly I would do with a tablet,
besides maybe use it like a Wacom tablet, that can't be accomplished with a
laptop or netbook (which I already have).

or such...
 

BGB

I know all about #ifdef. On a project a long time ago,
I worked on porting the Netscape web server source code; it would
build for, I think, 18 different platforms. Most of these were
flavors of Unix, a few flavors of Windows, and OS/2 and such.

The same source code, 18 or so different build targets.

Just understanding the makefiles, never mind the 2 million
or so lines of source code with the #ifdefs in them, was a
nightmare :)

The same was for the Netscape browser code.

yeah.

but, the #ifdef's and makefiles work fairly well IME...

millions of lines of code, and it all still goes strong.

#ifdef's and parallel sets of makefiles (and/or alternative build
systems) are just part of the game.

Java is supposed to solve all this #ifdef stuff.


but, at what cost?...

it is worth noting that the Java class library is... rather large... and
that also it tends to behave more like a "second class citizen" on many
OS's.

it is also a bit painful trying to work with OS-level functionality, and
neither JNI nor JNA is particularly friendly in this regard.

also, mixed-language apps (mixed C / C++ and Java) are often a bit
painful as well.


granted, yes, a VM probably can and should gloss over what it reasonably
can from the OS (providing common APIs for many things, possibly
providing a simpler build system, ...), just I don't think it is
worthwhile to try to be an all-encompassing platform either.


also, it makes sense to allow that a common VM, if being used in
different situations, may provide additional functionality, or not
provide certain functionality; being able to handle this more
cleanly/easily is a good thing IMO.

also, the class libraries themselves may have to deal a fair amount with
OS-specific quirks.


hence, an ifdef-like system is a powerful tool, IMO.
well, along with a reasonably powerful native FFI...


or such...
 

Lawrence D'Oliveiro

on cellphones...

On a whole range of ultramobile devices.

In case you didn’t know, unit shipments of smartphones are now level-pegging
with PCs, and will probably surpass them in the next quarter or two.

Overall, ARM chips outsell x86 4:1, and the disparity is growing.

it is more of a mixed bag with tablets though...

Windows tablets are not selling.
 

Lawrence D'Oliveiro

I would call the porting work of Linux nontrivial.

OK, how about userland code then? Like, for example, Python, which is
available on a range of different platforms, including Android?
 

Lawrence D'Oliveiro

I'd bet, these days, that the root cause of that situation is the fact
that the three operating systems have *completely* different GUI's.

So, has anybody come up with a worthwhile “universal” GUI that fits every
form factor and platform?
And actually, Linux alone has *two* (more than two, technically, but
only two popular ones)

This is why, you’ll notice, developers of Free Software like to decouple the
GUI from the underlying functionality. The main functionality is often made
available through command-line tools, while the GUI is just a front-end to
these.

This also gives you the instant advantage of very powerful workflow
automation, which tends to be clumsy with a GUI.
 

Lawrence D'Oliveiro

The funny thing is that Java, when it came out, was supposed to
solve all these differences by putting a virtual OS between the
application and the OS; this way one writes to this one common
virtual OS (the VM) and doesn't have to worry about the different
OS's below it.

Those who knew the history of previous attempts to do this sort of thing
could already predict why it wasn’t going to succeed.
 

Lawrence D'Oliveiro

Just understanding the makefiles, never mind the 2 million
or so lines of source code with the #ifdefs in them, was a
nightmare :)

Didn’t you have GNU autoconf to deal with all of that for you?
 

BGB

CPU architectures aren't really a major issue anymore. Linux, many
BSDish operating systems, OS X and Windows all run primarily on Intel
CPUs.

yes, but at the moment there are 2 partly incompatible operating modes:
32 and 64 bit mode...

also, I prefer to say x86 CPUs, rather than Intel CPUs, since Intel is a
particular company and not the sole supplier of x86 chips... hence,
calling them Intel CPUs sort of discriminates against everyone who uses
AMD and VIA chips.

However, you are right about executable formats. It would be less of an
issue if OS X supported the Linux ELF format. FreeBSD is able to use ELF
executables, but ELF is not that OS's native binary format.

or, everyone could use PE/COFF...

well, other relevant issues:
ABI differences;
different system libraries;
....

But to me, that eliminates *the biggest* benefit to using C (and dealing
with its hassles).

in which way in particular?...

much application code wouldn't likely notice much difference, and in
many cases, early type-handling and doing preprocessor magic could still
be done early (at compile time).

if the post-JIT ABI were the same as the native C ABI, then there
wouldn't even (particularly) be problems with native/VM code linking, or
with using globs of assembler...


IMO, "bytecoded" need not necessarily mean "terrible native-compiled C
interface"...


technically, my VMs mostly use the native C ABIs, differing mostly in
terms of name mangling (I use a custom name-mangling convention for HLL
functions) and simplifying some overly complex edge cases (such as the
AMD64 struct-passing rules, ...).

my own name-mangling rules and ABI were partly derived from the IA64 C++
ABI (used by GCC), but also somewhat from JNI and the JVM rules.

....

or, is it more a worry that delaying some things until JIT-time could
hurt the ability to produce well-optimized machine code?...


FWIW, I wrote a C compiler before, just it was much closer to being
standard C (rather than the somewhat tweaked version I was imagining).
not sure if/when I would get around to the new ideas though... (mostly,
I am working more on other things, and using my own custom HLL for a lot
of this stuff...).


or such...
 

BGB

Java's Swing, Nokia's Qt, wxWindows and gtk are about as close as it
gets.

which is in a way sort of sad really...


mostly in my case, I just sort of take the lazy route, and do most GUIs
via custom UI widget code and OpenGL.

granted, this doesn't allow native widgets (although one can do a fairly
"generic" style along the lines of GTK meets Windows Classic), and
requires the app to drag around its own fonts, but, good enough...

in any case, since I often use OpenGL already, it makes a lot more sense
to use GL for the GUI as well, rather than have to deal with multiple
system-specific GUI backends (yes, the GTK people will claim lots that
their stuff is portable and so on, but generally its functionality on
non-Linux systems has been, not very impressive...).

granted, yes, there are possible drawbacks with the OpenGL route as well
(screen resolution issues, needing a GPU, ...).

but, no ideal solutions exist AFAICT...


yeah...

generally it makes a lot more sense IMO to build the apps' core logic
and machinery and build the UI on top than it does to try to build the
app starting with the UI.
 

Lawrence D'Oliveiro

Java's Swing ...
HAHAHAHAHA!

... Nokia's Qt ...

Possible, though Nokia’s embrace of Windows Phone 7 leaves it with a
question mark in the mobile space, unless someone else shows enthusiasm for
it.
... wxWindows ...

Which is just a compatibility layer over platform-specific GUIs, not a GUI
in itself.
and gtk ...

Never heard of that being used on mobile platforms, unlike Qt.
 

Andreas Leitgeb

Lawrence D'Oliveiro said:
Those who knew the history of previous attempts to do this sort of thing
could already predict why it wasn’t going to succeed.

I wouldn't exactly call "one size fits *almost* all" a failure.
 

David Segall

BGB said:
on cellphones...


not seen any laptops or desktop PC's with Android though...

Maybe not, but how many people need a laptop or a desktop? Perhaps the
cell phone "is the computer" and it will plug into something else if
people really want that. Motorola's Android
<http://tinyurl.com/64ok49t> could be the beginning of the personal
computer's future.

I should add that my only knowledge of the Motorola Atrix is via the
web site. I would appreciate comments from someone who has actually
used one despite the fact that it would be way off-topic for this
group.
 

Joshua Cranmer

Didn’t you have GNU autoconf to deal with all of that for you?

Understanding GNU autoconf is half the battle. It's written in a macro
language no one really understands, so half the code is copy-pasted.

Besides, all autoconf gets you is setting up the hundreds of #defines.
It does nothing else with respect to the #ifdef mess.
 

BGB

Lots of people, but more to the point, why would you want to run Android
on a full-blown PC?

yep... much like the XBox360, a cell phone is not really usable for
coding, ...

I did try using my netbook before, but given my projects' codebase used
up nearly the entire HD (actually, a flash-ROM / SSD), and its very slow
performance, this wasn't really workable either.

thus, a full-sized laptop works much better, even though still sucking
vs a full PC.


to replace a laptop or desktop PC, would require a cell-phone with
capabilities on par to a laptop or desktop.

so, the task would partly be to have a cell phone which can run Doom3
and Quake4 with no lag, has multiple TB of storage space, ... right now
(several years later, the requirements will be higher).

a cell-phone with performance on-par with a computer from 10 years ago
is not nearly as impressive from an "oh hell, I am going to replace my
desktop" sense... (much like "oh yeah... time to break out the Win98
install CD...").


meanwhile, desktops are really good at sitting on one's desk, and
being relatively powerful, so the desktop is like one's "home base" of
computing, and the laptop is like one's "mobile camp".

a cell-phone can be more mobile, but is unlikely to be able to replace
the above if it is weaker than either of the above.


hence, technology forms a gradient...
 

BGB

Understanding GNU autoconf is half the battle. It's written in a macro
language no one really understands, so half the code is copy-pasted.

Besides, all autoconf gets you is setting up the hundreds of #defines.
It does nothing else with respect to the #ifdef mess.

yeah, and it doesn't exactly tend to work well for non-Linux operating
systems (such as Windows...). apparently its main point is mostly to
deal with a lot of the internal inconsistency between the various Linux
distros.


hence, my personal preference for plain makefiles (with none of the
autoconf mess). with care, one can get Windows+MSVC and Linux+GCC to
work with pretty much the same core makefiles, reducing the pain of
doing the whole parallel-makefile-tree thing.
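A minimal sketch of the kind of compiler-neutral makefile core being described (the variable names and file layout are invented for the example): the shared core stays the same, and only a thin per-toolchain wrapper defines the toolchain-specific bits before including it.

```makefile
# common.mak -- shared core; the wrapper defines CC, OBJ_EXT, OUT_FLAG.
OBJS=main$(OBJ_EXT) util$(OBJ_EXT)

all: app

app: $(OBJS)
	$(CC) $(OUT_FLAG)app $(OBJS)

# Makefile.gcc  might set:  CC=gcc  OBJ_EXT=.o    OUT_FLAG=-o@
# Makefile.msvc might set:  CC=cl   OBJ_EXT=.obj  OUT_FLAG=/Fe
```

In practice nmake and GNU make still differ in pattern-rule and conditional syntax, so this only reduces, rather than eliminates, the per-toolchain duplication.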

I guess there is also CMake and similar, but personally I have been a
bit too lazy to bother with it (although not perfect, my makefiles work
well enough, so it is hard to justify replacing them).


personally, I have found it to be less effort to not bother excluding
platform-specific source-files from the build, and instead make the
files essentially no-op if not being compiled on the correct target.

#include <mylibheader.h>
#ifdef MY_OS_OF_INTEREST
.... stuff ...
#endif
//EOF


also, having headers which examine information generally already
provided by the compiler (things like the OS, CPU architecture, ... are
generally already provided by various defines, although typically they
are compiler-specific, so it makes sense in the headers to "normalize
them"), and thus setting up lots of relevant #defines.

for example:
#ifdef _MSC_VER
#ifdef _M_IX86
#define X86
#define LITTLEENDIAN
....
#endif
#ifdef _M_X64
#define X86_64
#define LITTLEENDIAN
....
#endif
....
#endif
#ifdef __GNUC__
#ifdef __i386__
....
#endif
....
#endif
....
 
