STL bitset class slow..


Andre Kaufmann

On Mon, 2011-03-07, Andre Kaufmann wrote:

And you almost never run the compiler yourself -- you write a Makefile
(or equivalent) which contains the details of how to build.

Yes, but I don't get what the advantage is of:

a) Add files in a text editor to a makefile, set the compiler
switches etc (or optionally use a graphical tool)

b) Let the IDE handle the makefile -> add files with graphical tools,
set the compiler switches with graphical tools and then get a
project file which can be optionally built from a command line

So where is the advantage of a) over b) ?

Andre
 

Andre Kaufmann

On 06.03.2011 13:36, James Kanze wrote:

[...]
I like the mouse when I'm just browsing, without any specific
goal in mind. Or when I'm dealing with a graphical
representation, like UML. Otherwise, however, I don't like
having to move my hands from the base position on the keyboard.
It costs too much in productivity.

Agreed, but I don't see the difference between controlling vi / Vim by
keyboard and VStudio by keyboard. [O.k., Vim should be faster and
perhaps more responsive ;-) - but besides that point?]
[...]
In the case of the debugger, I'm often in the same case, because
I don't use it that often. Still, typing in "help" takes less
time than searching through untold menus.

Yes, I know what you mean, though I think the browser is commonly the
fastest tool to get any information quickly ;-) - at least under Windows.
[...]
I used to think that was true. And since I've been working
under Windows, some things have surprised me. But although I'm
beginning to know (and understand) the Windows world, I'm still
far more productive under Unix. And that's true for the vast
majority of programmers I've worked with.

Well I think both "worlds" have evolved much and both have good tool
sets. But I think both "worlds" and tools are quite different.
[...]
Sometimes. Sometimes not. I've not been able to make it show
them systematically, every time I step out of a function. And
since I use a fairly functional style of programming a lot,
that's a killer in my book.

That depends. If you debug applications built with the release
configuration then yes, you commonly won't see the return values, since
the functions don't exist anymore - they have been removed by the
global optimizer. But I don't think that is different under Unix - is it?

For debug builds I didn't have that many problems - rather with the
automatic STL expansion in combination with complex template classes.
[...]
I'm glad to hear it. But where does it put the core dump? And

There are two types of dumps (same format, but the handling is different):

Process dumps:

Either you write it yourself: you can register a callback function in
your application, and when it is called you can write the dump with a
utility function.
If you don't do anything, Windows Error Reporting will start, the dump
files will be written to a temp directory, and the dialog will list the
information which is ready to be sent to a central server.
If you have a look at that list, there should be a dump file in it too.
That is the dump file.

These dumps can be loaded and debugged directly under Visual Studio or
Windows Debugger.
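
For the "write it yourself" variant, a minimal sketch might look like
the following (Windows-only, using the DbgHelp API; the file name and
dump type are illustrative choices, and error handling is omitted):

```cpp
// Sketch: register a callback and write a minidump on an unhandled
// exception. Windows-only; link with dbghelp.lib. The file name and
// dump flags here are illustrative, not the only possibility.
#include <windows.h>
#include <dbghelp.h>

static LONG WINAPI WriteDumpFilter(EXCEPTION_POINTERS* info)
{
    HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION mdei;
        mdei.ThreadId          = GetCurrentThreadId();
        mdei.ExceptionPointers = info;
        mdei.ClientPointers    = FALSE;

        // The utility function mentioned above: writes the dump file,
        // which can then be opened in Visual Studio or Windows Debugger.
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &mdei, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_CONTINUE_SEARCH;  // let Windows Error Reporting run too
}

int main()
{
    SetUnhandledExceptionFilter(WriteDumpFilter);  // register the callback
    // ... application code; on a crash, crash.dmp appears next to the exe
}
```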


Core / System dumps: Windows BSOD - Blue Screen Of Death

Settings depend on the system. IIRC for Windows Vista and higher they
are enabled by default, under Windows XP you have to enable them.

It can be activated, and the default path it is written to can be
configured, under: Right Click - My Computer - Properties - Advanced -
Startup and Recovery. Or googling "configure minidumps windows" will
give you a more detailed explanation.

These dumps can be analyzed only with Windows Debugger.
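
Opening such a dump in Windows Debugger typically starts like this (the
dump file name is illustrative):

```
rem Open the dump in Windows Debugger:
windbg -z crash.dmp
```

Inside the debugger, `!analyze -v` then summarizes the crash and the
faulting call stack.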

why don't any of the Windows specialists where I work know about
it.

I don't know. Sadly, only a few Windows developers know about dump
files and how much they can help to track down errors quickly. Also,
there are other compiler vendors (a famous Windows Pascal compiler, for
example) which don't support the Microsoft debug format, and therefore
analyzing dump files of those applications is tedious.

Most Windows developers try to track down errors by reproducing the
problems in the IDE / debugger, but don't realize that there's a much
better and much faster way to do it.

And only a few know what the context menu entry "Write dump file" in
the Task Manager is for - it appears when you right-click an
application's name in the Task Manager of Windows Vista and above.


Also, I have the odd feeling that there isn't much interest either.
We had a problem with a compiler vendor (>not< Microsoft/Intel, and not
open source) where the C++ linker crashed and we couldn't help
ourselves. We contacted the compiler vendor, who asked us to send a
sample project (unfortunately the crash occurred only in our biggest
application, and we weren't allowed to send any information).
I started a discussion in the vendor's newsgroup and tried to attract
the attention of other developers who had a similar problem, to vote
for dump file support (either in Microsoft format or in the compiler
vendor's format).

But although it would have been a huge advantage for the compiler
vendor and the developers, there was zero interest in dump files and
their benefits.

:-/ :-(
You mean I have to go through Microsoft to get a core dump?

Optionally, yes. You can register your applications, and if they are
registered and a dump file is received at Microsoft, it will be stored
on a server where only the company which developed the application has
access (you may be able to register a server address so the dump files
are stored on your own server - I don't know for sure).

But as I wrote - you can do it yourself.
I prefer this method, though I must admit that it's not that simple
anymore under Windows Vista / Windows 7 - you have to hook a function
to be called before "Windows Error Reporting" runs.
Not a problem for me, but for "dump file starters" it is, for sure.
[...]
This is quite handy to debug deadlocks (where no crash dump is
written).

Unless you send the program a signal to tell it to crash:).

O.k., agreed - one additional possibility to get the dump ;-).
But I wouldn't want to see a client's face if you told him "Please
crash the application" ;-)
And the crash happens on your machine, not the user's (where no
Visual Studio is installed).

I just enter an HTTP address in Visual Studio, e.g.

http://server:port/dump.dmp

and our application (a service) will return a snapshot dump, without
crashing the application. The application keeps running and I'm able to
analyze the dump in Visual Studio.

Definitely. Even under Solaris, I'd use Sun's debugger with Sun
OS, and gdb with g++. Which is a pain.


I'm rarely interested in a variable; I want an expression. (As
I said, my style is often functional, and there just aren't many
variables.)

Expressions are evaluated too. Complex function calls (calling C++
functions from the debugger) either have a complex syntax or aren't
possible (it depends).
But due to the side effects, I wouldn't add any function call watches
anyway ;-).
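
The side-effect point is easy to demonstrate with a hypothetical
one-liner: a watch that re-evaluated a call like `next()` on every
debugger step would silently change the program state.

```cpp
// Hypothetical example: evaluating next() is not harmless - the call
// itself mutates state. A debugger watch that re-evaluated next() on
// every step would advance the counter behind your back.
static int counter = 0;

int next()
{
    return ++counter;   // side effect: each evaluation changes counter
}
```

Two "evaluations" - exactly what a watch window might do on two
consecutive steps - leave `counter` at 2, even though the program
itself never called the function.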

That's the one I didn't know. How do you activate it, and can I

There are 2 different command windows:

1) View - Other Windows - Command Window

To be able to control the IDE from the command line (compiling etc.)

2) Immediate Window

To evaluate variables and expressions.

Either enter "immed" in the Command Window, or if you are debugging an
application use the menu "Debug - Windows - Immediate", or press
CTRL-D, I (depends on your configuration).

enter an arbitrary expression?

Yes, as I wrote, calling functions of your application is not that
simple (name mangling :-/ and namespaces) if the function is not
visible in the current context / callstack.
But any other expression can be evaluated without any problems (at
least the simple ones I entered - I don't use that feature
extensively ;-)).

[...]

If it really works, it sounds like you'd get the best of both
worlds. Use an IDE when it's convenient, but have full access
to all of the power of the system when it's not.

Yes, IMHO each developer should and can choose his own best development
style. Some prefer command line tools, some IDEs, and some use both.
I think all of these styles are equally productive - in some cases an
IDE is better, in other cases command line tools are better. It depends
on the developer and the type of application - IMHO.

Andre
 

red floyd

Yes, but I don't get what the advantage is of:

a) Add files in a text editor to a makefile, set the compiler
    switches etc (or optionally use a graphical tool)

b) Let the IDE handle the makefile -> add files with graphical tools,
    set the compiler switches with graphical tools and then get a
    project file which can be optionally built from a command line

So where is the advantage of a) over b) ?

In an IDE, you can only do what the IDE lets you do
(*cough*VisualStudio*cough*).
In a Makefile, you can do whatever you want.
 

Andre Kaufmann

[...]
So where is the advantage of a) over b) ?

In an IDE, you can only do what the IDE lets you do
(*cough*VisualStudio*cough*).

LOL - I don't care about such arguments - and I don't waste time and
energy on that.

I'm open to other tools and operating systems and want to learn and
know each of them; unfortunately there isn't enough time to do that at
an expert level.
In a Makefile, you can do whatever you want.

Could you give me an illustrative example ?

My argument is that the IDE (there are several, even under Unix) is
just a tool to handle the makefiles. You can choose whatever you want.
The standard makefile format for Windows is MSBuild.

You can edit the makefiles from VIM >> or << from the IDE.

So I can do whatever I want from the command line + the IDE.
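
For the command-line half of that claim, the project file the IDE
maintains can be built directly with MSBuild (the project name and
settings here are illustrative):

```
rem Build the same project file VStudio uses, without starting the IDE:
msbuild MyProject.vcxproj /p:Configuration=Release /p:Platform=Win32
```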

And besides that, the IDE can be configured individually and supports a
plugin system, where you can do >different< things than from the command
line.

Additionally, editing a GUI, a profiler, UML tools -
what-you-see-is-what-you-get style tools - is IMHO impossible from Vim.
If a developer uses non-visual GUI programming, then it's OK to use Vim
- it's a different development style and architecture.
Different doesn't mean better or worse, IMHO.

Just like comparing TeX <-> OpenOffice :-)

Andre
 

red floyd

[...]
So where is the advantage of a) over b) ?
In an IDE, you can only do what the IDE lets you do
(*cough*VisualStudio*cough*).

LOL - I don't care about such arguments - and I don't waste time and
energy on that.

I'm open to other tools and operating systems and want to learn and
know each of them; unfortunately there isn't enough time to do that at
an expert level.
In a Makefile, you can do whatever you want.

Could you give me an illustrative example ?

Anything that requires a "custom build" step. It's horribly broken
in VS.

Anything that isn't stock compile/link

Or something that builds two targets. In VS, you need to have
multiple projects to do that.


-- Makefile

exefile : main.o table.o
	$(CC) -o exefile main.o table.o

main.o : main.c

table.o : table.c

table.c : tabledata.txt
	preprocess human readable tabledata.txt into table.c
 

Jorgen Grahn

Yes, but I don't get what the advantage is of:

a) Add files in a text editor to a makefile, set the compiler
switches etc (or optionally use a graphical tool)

b) Let the IDE handle the makefile -> add files with graphical tools,
set the compiler switches with graphical tools and then get a
project file which can be optionally built from a command line

So where is the advantage of a) over b) ?

Just to mention a few:

It's standard, and it's open-ended. It's not tied to any specific IDE,
or to any specific programming language or even to any specific task.

/Jorgen
 

James Kanze

On 07.03.2011 11:29, Jorgen Grahn wrote:
Yes, but I don't get what the advantage is of:
a) Add files in a text editor to a makefile, set the compiler
switches etc (or optionally use a graphical tool)
b) Let the IDE handle the makefile -> add files with graphical tools,
set the compiler switches with graphical tools and then get a
project file which can be optionally built from a command line
So where is the advantage of a) over b) ?

You get what you want, in a format which can easily be
copy/pasted to other projects.

In practice, in large companies, your makefile will just define
a couple of macros (e.g. sources = ...), then include some
global makefile, so you automatically get the standard compiler
options for your shop. And don't have to go through all of your
project files changing them if they change.
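
The scheme described above can be sketched like this (the file and
variable names are invented for illustration):

```make
# Project-level makefile: only what is specific to this project.
sources = main.cc table.cc

# Everything else (compiler choice, standard warning options, build
# rules, dependency handling) comes from the shop-wide makefile.
include ../common/global.mk
```

Changing a shop-wide compiler option then means editing one file, not
every project.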

There's also the fact that make (or at least the ones I've used)
handles dependencies correctly. VS 2005 doesn't, and if you just
do a build, you often end up not recompiling files that need it.
 

James Kanze

On 07.03.2011 23:11, James Kanze wrote:
On 06.03.2011 13:36, James Kanze wrote:
[...]
I like the mouse when I'm just browsing, without any specific
goal in mind. Or when I'm dealing with a graphical
representation, like UML. Otherwise, however, I don't like
having to move my hands from the base position on the keyboard.
It costs too much in productivity.
Agreed, but I don't see the difference between controlling vi / Vim by
keyboard and VStudio by keyboard. [O.k., Vim should be faster and
perhaps more responsive ;-) - but besides that point?]

If only you could control Visual Studio from the standard keyboard.
None of the experts where I work seem to be able to control it
without using function keys, or otherwise moving their hands from
the standard position.

(Of course, there's also the issue that Visual Studio, at least
2005, is broken. It seems like I spend half my time doing full
rebuilds, because it won't compile all the object files whose
sources have changed. And I've had to create a couple of
separate, one file projects, because I need to build a small,
local program to generate part of my code.)
[...]
In the case of the debugger, I'm often in the same case, because
I don't use it that often. Still, typing in "help" takes less
time than searching through untold menus.
Yes, I know what you mean, though I think the browser is commonly the
fastest tool to get any information quickly ;-) - at least under
Windows.

In general, I find a GUI preferable for the tools you rarely
use, and a command line interface preferable for the ones you
regularly use: if you don't know the command, it's probably
faster searching through a well thought out menu hierarchy than
searching through the manual. So logically, I'd favor a GUI for
the debugger. But the VS debugger seems to go out of its way to
make it difficult to do one or two of the more common tasks.
(Not all of them, of course.)
[...]
Sometimes. Sometimes not. I've not been able to make it show
them systematically, every time I step out of a function. And
since I use a fairly functional style of programming a lot,
that's a killer in my book.
That depends. If you debug applications built with the release
configuration then yes, you commonly won't see the return
values, since the functions don't exist anymore - they have
been removed by the global optimizer. But I don't think that is
different under Unix - is it?

None of the debuggers are very good with optimized code. They
could all be better.
For debug builds I didn't have that many problems - rather with
the automatic STL expansion in combination with complex template
classes.

I've yet to figure out what I'm doing differently, but sometimes I do
see the return values. Most of the time not, however.

But I often get the feeling that the debugger is not
deterministic. Some of my work is on Excel plug-ins. For over
a year, I've been starting Excel, attaching the debugger to the
process, and debugging from there. That's the way my colleagues
showed me to do it. Sometime recently, however, that stopped
working. Now I have to specify Excel as my binary, and start it
from the debugger. (On the other hand, I can still debug the
Java plug-ins by attaching to the running Java process. Go
figure.)

I'll have to try your other suggestions at work.
 

red floyd

(Of course, there's also the issue that Visual Studio, at least
2005, is broken.  It seems like I spend half my time doing full
rebuilds, because it won't compile all the object files whose
sources have changed.  

I have the reverse problem. It insists that certain stuff which
hasn't changed is out of date.
 

Öö Tiib

I have the reverse problem.  It insists that certain stuff which
hasn't changed is out of date.

That is usually the case when the Visual Studio build system is unable
to find source files or dependencies (included files), while the
integrated compilers manage to find them somehow. It seems the
compilers are written by the smartest people at Microsoft, but the IDE
and build system are outsourced to some really dumb guys. The problems
go away when you organize the source files (and #include directives)
so that the dumb IDE does not lose track.
 

Andre Kaufmann

If you could control Visual Studio from the standard keyboard.
None of the experts where I work seem to be able to control it
without having to use function keys, or otherwise move their
hands from the standard position.

Hm, but you can assign a new keystroke, or complex macros, to each
function, and there are also Emacs (free) and Vim (free?) emulations
available.
(Of course, there's also the issue that Visual Studio, at least
2005, is broken. It seems like I spend half my time doing full

Agreed. There are situations where this happens with this rather old
version, and even with the new version.
I had to activate debug outputs to track down the problem (non-existing
sources). Additionally, the old versions sometimes simply locked some
object files, which also led to rebuilds.
[...]
I've yet to figure out what I'm doing differently, but
sometimes, I do see the return values. Most of the times not,
however.

Hm, don't know for sure, since I don't use that feature that often.
[...]
I'll have to try your other suggestions at work.

Hope they make your development experience under (this) IDE better
(even if not as good as you are used to with a true command line
developer experience).

I agree that there were some bugs and problems in the older versions,
which could drive you crazy. But since, in the latest version of
VStudio, each language and even other tool vendors now use the MSBuild
format (comparable to makefiles), when something goes wrong the command
line tools wouldn't be better (they use MSBuild too) - MSBuild would be
broken, and not the IDE.

Andre
 

Andre Kaufmann

On 09.03.2011 00:54, red floyd wrote: [...]
Could you give me an illustrative example ?

Anything that requires a "custom build" step. It's horribly broken
in VS.

Thank you for the example.
I used it quite frequently and it has changed a lot in VStudio 2010.
Anything that isn't stock compile/link

Or something that builds two targets. In VS, you need to have
multiple projects to do that.

Do you mean multiple executables, or multiple code targets generated by
one "custom build step"? If the latter, then yes, I miss that too. You
can generate multiple outputs, but AFAIK only one output file can be
tracked to decide whether a rebuild is necessary.
-- Makefile

exefile : main.o table.o
	$(CC) -o exefile main.o table.o

main.o : main.c

table.o : table.c

table.c : tabledata.txt
	preprocess human readable tabledata.txt into table.c


The MSBuild equivalent would be (project conditions removed):
[ please don't start a discussion about whether flat makefiles are more
readable than XML makefiles - they are ;-) ]

<ItemGroup>
  <ClCompile Include="Main.cpp"/>
  <ClCompile Include="Table.cpp"/>
</ItemGroup>
<ItemGroup>
  <CustomBuild Include="TableData.txt">
    <Command>createSource.exe %(FullPath) $(ProjectDir)\Table.cpp</Command>
    <Message>Generating $(ProjectDir)\Table.cpp</Message>
    <Outputs>$(ProjectDir)\Table.cpp</Outputs>
  </CustomBuild>
</ItemGroup>

Andre
 

Andre Kaufmann

You get what you want, in a format which can easily be
copy/pasted to other projects.

Agreed. For such tasks and complex builds I don't use VStudio either.
For complex builds I'm using FinalBuilder (since we use other IDEs and
tools which aren't supported by VStudio).
There's also the fact that make (or at least the ones I've used)
handles dependencies correctly. VS 2005 doesn't, and if you just
do a build, you often end up not recompiling files that need it.

Yes, agreed. But then I would call this a "bug", or "broken".
VS 2010 now uses MSBuild makefiles for C++ too, same as other IDEs like
C++Builder.

If the project is permanently rebuilt, or some targets are not rebuilt,
in this version of the IDE, then the MSBuild system would be broken and
it would happen from the command line too.

Andre
 

Gerhard Fiedler

Andre said:
Hm, don't know for sure, since I don't use that feature that often.

When I step through code, I always (AFAIR) see return values in the Auto
window. Is that where you are looking?

Gerhard
 

Andre Kaufmann

When I step through code, I always (AFAIR) see return values in the Auto
window. Is that where you are looking?

Gerhard

Yes, that's the feature. But I never noticed it fail.
Perhaps when multiple return values have to be displayed - but I don't
know for sure.

Andre
 

James Kanze

When I step through code, I always (AFAIR) see return values in the Auto
window. Is that where you are looking?

When I see it, that's where I see it. But most of the time,
it's not there.
 

Dilip

Thanks.  I'll forward these to my work site (where Google Groups
is blocked).

I actually sent you a private email to your gmail account (the one you
are posting to Google Groups with) with those links (mostly because I
was replying to a thread that was at least a couple of weeks old). Let
me know if you received them.
 
