The typical size of a machine code executable "Hello world",
generated by a high level language compiler, has increased steadily
over the years.
I don't think it's steady -- it happens in steps. Virtually every new
OS has a new executable format that increases overhead compared to
its predecessors. Just for example, a "Hello world" for MS-DOS as a
.COM file (written in assembly language) could be around 20 bytes --
and that had essentially no overhead: if you added up the size of the
string and the size of the code, you got the size of the file, to
the byte.
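To make that concrete, here's a sketch (in Python, just to do the byte counting) of the classic DOS hello world: eight bytes of machine code calling INT 21h function 09h, followed by the dollar-terminated string. The exact instruction encoding is the standard one for this program; the point is that the file is nothing but code plus data.

```python
code = bytes([
    0xB4, 0x09,        # mov ah, 9      ; DOS "print string" function
    0xBA, 0x08, 0x01,  # mov dx, 0x0108 ; address of msg (load base 0x100 + 8 bytes of code)
    0xCD, 0x21,        # int 21h        ; call DOS
    0xC3,              # ret            ; return to DOS
])
msg = b"Hello world$"  # '$' terminates a DOS function-09h string
image = code + msg     # a .COM file is exactly this, nothing more
print(len(code), len(msg), len(image))
```

Eight bytes of code plus twelve bytes of string gives the 20-byte file: there is literally no header to account for.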
If you did the same in a .exe file, you got overhead. Still using
assembly language, the file came to something like 200 bytes or so.
In 16-bit Windows, the bare minimum file size grew again -- to 512
bytes, and a hello world program ended up something like 5120 bytes.
In 32-bit Windows the bare minimum went up to 720 bytes.
In 64-bit Windows, it's risen again -- for a 32-bit program, it's now
2560 bytes, and for a 64-bit program it's 3072 bytes.
That's a bit misleading though -- even in the 64-bit executable, the
actual machine code inside that executable is a mere 33 bytes. That's
larger than (for example) the 16-bit Windows version, but most of the
change is simply because addresses are larger -- 64 bits apiece
instead of 16. At least IMO, the expanded addressing capability makes
that minuscule increase in size *entirely* worthwhile.
The rest of the difference is entirely in the overhead of the
executable file format itself, not in the machine code in that
executable. The .COM file had zero overhead, but carried quite a few
limitations with it. In a multitasking system, it would be
essentially impossible to share images loaded from such files between
processes, so the savings in disk space would come at the expense of
consuming substantially more memory in operation. That doesn't strike
me as a good tradeoff.