Version Control Software


Fábio Santos

[...]
It's
possible to get git for Windows, including gitk and 'git gui' (not
sure about any other graphical tools, they're the only two I use), but
the most convenient way to use them is from a ported bash.

I must disagree. I used git a lot on Windows this past year, on a Console
shell (which is basically a CMD.EXE shell with tabs and appropriate
select/copy/paste) and it was quite useful.

I should say, though, that I wasn't doing any merges and such. I was
just committing, pushing and diffing to check what I'd done.

I used gitk and the git commands. You can't use "git diff" or "git show"
or "git log" because the paging is terrible. But gitk was a nice
substitute for all of that.

YMMV
 

Chris Angelico

I must disagree. I used git a lot on Windows this past year, on a Console
shell (which is basically a CMD.EXE shell with tabs and appropriate
select/copy/paste) and it was quite useful.

Maybe that's changed since the last time I installed it, then. Though
bash is still preferable to me, since that's what I use on Linux.

ChrisA
 

Benjamin Kaplan

I agree that branch/merge handling in svn is primitive compared to git
(haven't used hg enough to comment).

The last time we made the choice (4-5 years ago), Windows support for
git, bzr, and hg was definitely lacking compared to svn. The lack of
something like TortoiseSVN for hg/git/bzr was a killer. It looks like
the situation has improved since then, but I'd be curious to hear from
people who do their development on Windows.

There's a TortoiseHg now that works well. http://tortoisehg.bitbucket.org

I haven't used it very much, but github has released a git client for
Windows. The underlying library is the same one Microsoft uses for the
Visual Studio git integration, so I assume it's fairly robust at this point.
http://windows.github.com
 

Anssi Saari

cutems93 said:
Thank you everyone for such helpful responses! Actually, I have one more
question. Does anybody have experience with closed-source version control
software? If so, why did you buy it instead of downloading open-source
software? Does closed-source VCS have benefits over open source in some
areas?

I have some experience with ClearCase. I don't know why anyone would buy
it since it's bloated and slow and hard to use and likes to take over
your computer. I was very happy to dump it when my team was allowed to
use whatever we wanted, though by then we weren't doing software either.

ClearCase is also admin heavy for the above reasons. I guess big
businesses buy things like that because other big businesses buy things
like that. Presumably they keep it because it's cheaper to pay
maintenance than move all source to some other system.

Now granted, Linux development went to the commercial BitKeeper for a
while, since Linus Torvalds found it superior to CVS sometime over a
decade ago. When that agreement ended, Torvalds developed Git himself to
be what he needed. Other projects sprang up around the same time to do
the same job; that means at least Mercurial, if Wikipedia is to be
believed.

Oh, and as far as I know, commercial software vendors routinely bar
their customers from publishing any kind of benchmark or comparison, so
it's unlikely you'll find anything concrete for your commercial vs. free
choice.
 

Roy Smith

Anssi Saari <[email protected]> said:
I have some experience with ClearCase. I don't know why anyone would buy
it since it's bloated and slow and hard to use and likes to take over
your computer.

ClearCase was the right solution to certain specific problems which
existed 20 years ago. It does have a couple of cool features.

1) Every revision of every file exists simultaneously in the file system
namespace (CC exports its repo as a quasi-NFS file system). That means
you can look at every revision with all your normal command-line tools
(diff, grep, whatever); there's an example below.

2) It ships with an integrated build tool which can automatically learn
your dependency graph. This is paired with a feature called "winking
in". Let's say I'm building a humungous C++ project which takes hours
to compile. And I'm part of a team of 50 developers, all working on the
same code.

If I need foo.o, and some other developer has already compiled a foo.o
with exactly the same dependency graph (including the same toolchain
versions and option flags), I just instantly and transparently get a
copy of their file instead of having to build it myself. This can
potentially save a huge amount of build time.
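
(Two asides. The version-extended pathname syntax is what makes (1) so
handy: with a dynamic view mounted you can do things like

    diff foo.c@@/main/41 foo.c@@/main/42

with ordinary tools; the element@@/branch/version form is standard
ClearCase, though the particular versions here are invented. And while
nothing free does winking in as such, ccache pointed at a shared cache
directory gives a loosely similar effect for C/C++ builds:

    export CCACHE_DIR=/net/shared/ccache    # made-up shared path
    ccache g++ -c foo.cpp -o foo.o

If anyone has already compiled that file with the same inputs, the .o
comes out of the cache instead of the compiler.)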

All that being said, it is, as Anssi points out, a horrible, bloated,
overpriced, complicated mess which requires teams of specially trained
ClearCase admins to run. In other words, it's exactly the sort of thing
big, stupid, Fortune-500 companies buy because the IBM salesperson plays
golf with the CIO.
 

Grant Edwards

All that being said, it is, as Anssi points out, a horrible, bloated,
overpriced, complicated mess which requires teams of specially
trained ClearCase admins to run. In other words, it's exactly the
sort of thing big, stupid, Fortune-500 companies buy because the IBM
salesperson plays golf with the CIO.

Years ago, I worked at one largish company where a couple of the
embedded development projects used ClearCase. The rest of us used CVS
or RCS or some other cheap commercial systems. Judging by those
results, ClearCase requires a full-time administrator for every 10 or
so users. The other systems seemed to require almost no regular
administration, and what was required was handled by the developers
themselves (maybe a couple of hours per month). The cost of ClearCase
was also sky-high.
 

Dave Angel

Years ago, I worked at one largish company where a couple of the
embedded development projects used ClearCase. The rest of us used CVS
or RCS or some other cheap commercial systems. Judging by those
results, ClearCase requires a full-time administrator for every 10 or
so users. The other systems seemed to require almost no regular
administration, and what was required was handled by the developers
themselves (maybe a couple of hours per month). The cost of ClearCase
was also sky-high.

If I remember rightly, it was about two thousand dollars per seat. And
the people I saw using it were using XCOPY to copy the stuff they needed
onto their local drives, then disabling the ClearCase service so they
could get some real work done. Compiles were about 10x slower with the
service active.

Now that was on Windows NT, when ClearCase was first being ported from
Unix. So perhaps things have improved.
 

Dennis Lee Bieber

If I remember rightly, it was about two thousand dollars per seat. And
the people I saw using it were using XCOPY to copy the stuff they needed
onto their local drives, then disabling the ClearCase service so they
could get some real work done. Compiles were about 10x slower with the
service active.
My previous employer had standardized on ClearCase... probably
because the cost could be billed to the customer as part of the contract
(and the customer probably felt comfortable with a big commercial
version control and /reporting/ tool).

RCS, Update [really old -- as I recall, it required columns 72-80 to
store its versioning data; and how many people worry about a 72-column
limit even in FORTRAN?], and the like tend, in my experience, to fall
apart if given binary files -- not just source (text) files. I believe
that program used ClearCase to also archive each /build/, rather than
just the sources and makefiles needed to recreate the build.

Now that was on Windows NT, when ClearCase was first being ported from
Unix. So perhaps things have improved.

Well... I actually had an unofficial install (the "free" version) of
GNAT/GPS on my system at work (nice of GNAT to have an
"install for current user" option that didn't need admin). [If I really
pushed, I could maybe have gotten the $$$$ support version -- we were
paying for the support on Solaris & SunOS.] I'd found that GPS ran faster, accessing
ClearCase internally, for editing files than running GPS on the Sun
boxes... I did have to do the build on the Sun, but that was done by
just opening an X session on the Sun next to my WinXP box and invoking
clearmake.
 

Tim Delaney

If I remember rightly, it was about two thousand dollars per seat. And
the people I saw using it were using XCOPY to copy the stuff they needed
onto their local drives, then disabling the ClearCase service so they could
get some real work done. Compiles were about 10x slower with the service
active.

I can absolutely confirm how much ClearCase slows things down. I completely
refused to use dynamic views for several reasons - #1 being that if you
lost your network connection you couldn't work at all, and #2 being how
slow they were. Static views were slightly better as you could at least
hijack files in that situation and keep working (and then be very very
careful when you were back online).
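
(From memory, finding the hijacks afterwards in a snapshot view was
something like

    cleartool ls -recurse | grep -i hijacked

so treat the exact flags as approximate; the point is that the view
keeps working offline and you reconcile later.)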

And then of course there was ClearCase Remote Client. I was working from
home much of the time, so I got to use CCRC. It worked well enough,
and in that situation was much better than the native client. Don't ever
ever try to use ClearCase native over a non-LAN connection. I can't stress
this enough. The ClearCase protocol is unbelievably noisy, even if using
static views.

CCRC did have one major advantage over the native client though. I had the
fun task when I moved my local team from CC to Mercurial of keeping the
Mercurial and CC clients in sync. It turns out that CCRC was the best
option, as I was able to parse its local state files, work out what
timestamps ClearCase thought its files should have, set them accordingly
from a Mercurial extension, and convince CCRC that really, only these
files had changed, not the thousand or so that merely had their
timestamps touched...
CCRC at least made that possible, even if it was a complete accident by the
CCRC developers.
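
(The Mercurial half of that is mundane; a minimal sketch of the idea,
with hypothetical names:

    # files that genuinely differ between the two revisions
    hg status -m -n --rev OLD --rev NEW > really-changed.txt
    # put the recorded mtimes back on everything else; restore-mtimes is
    # a stand-in for the part that reads the parsed CCRC state
    ./restore-mtimes --skip-list really-changed.txt

The real work was parsing the CCRC state files.)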

Tim Delaney
 

Chris Angelico

I can absolutely confirm how much ClearCase slows things down. I completely
refused to use dynamic views for several reasons - #1 being that if you lost
your network connection you couldn't work at all...

And that right there is why modern source control systems are
distributed, not centralized. It's so much easier with git; we lost
our central hub at one point, and another dev and I simply pulled from
each other for a bit until we got a new Scaphio online. With
centralized version control, that would have basically meant a
complete outage until the new box was up.
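
(Mechanically, that's just:

    git remote add mate ssh://dev-box/home/mate/project
    git pull mate master

with made-up names; any reachable clone can serve as a remote.)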

ChrisA
 

Roy Smith

Chris Angelico said:
And that right there is why modern source control systems are
distributed, not centralized. It's so much easier with git; we lost
our central hub at one point, and another dev and I simply pulled from
each other for a bit until we got a new Scaphio online. With
centralized version control, that would have basically meant a
complete outage until the new box was up.

ChrisA

The advantage of a DVCS is that everybody has a full copy of the repo.
The disadvantage of a DVCS is that everyone MUST have a full copy of the
repo. When a repo gets big, you may not want to pull all of that data
just to get the subtree you need.
 

Giorgos Tzampanakis

The advantage of a DVCS is that everybody has a full copy of the repo.
The disadvantage of a DVCS is that everyone MUST have a full copy of the
repo. When a repo gets big, you may not want to pull all of that data
just to get the subtree you need.

Also, is working without a connection to the server such a big issue?
One would expect that losing access to the central server would indicate
significant problems that would impact development anyway.
 

Tim Delaney

On 2013-06-15, Giorgos Tzampanakis wrote:

Also, is working without connection to the server such big an issue? One
would expect that losing access to the central server would indicate
significant problems that would impact development anyway.

I work almost 100% remotely (I chose to move back to a country town). Most
of the time I have a good internet connection. But sometimes my clients are
in other countries (I'm in Australia, my current client is in the US) and
the VPN is slow or doesn't work (heatwaves have taken down their systems a
few times). Sometimes I'm on a train going to Sydney and mobile internet is
pretty patchy much of the way. Sometimes my internet connection dies - we
had a case where someone put a backhoe through the backhaul and my backup
mobile internet was also useless.

But so long as at some point I can sync the repositories, I can work away
(on things that are not dependent on something new from upstream).

Tim Delaney
 

Chris Angelico

Everyone and every device is connected to the internet all the time, or
else the universe comes to an end.

Get off my lawn! ;-)

So some of us think that version control is a single-player game, but
CVS-Box One thinks always-on gaming is a reasonable thing?

*ducks*

ChrisA
 

Chris Angelico

The advantage of a DVCS is that everybody has a full copy of the repo.
The disadvantage of a DVCS is that everyone MUST have a full copy of the
repo. When a repo gets big, you may not want to pull all of that data
just to get the subtree you need.

Yeah, and depending on size, that can be a major problem. While git
_will_ let you make a shallow clone, it won't let you push from that,
so it's good only for read-only repositories (we use git to manage
software deployments at work - shallow clones are perfect) or for
working with patch files.
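
(For that deployment case it's simply:

    git clone --depth 1 git://example.com/project.git

with a placeholder URL; --depth 1 gives you the working tree plus a
single commit of history, which is all a deploy target needs.)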

Hmm. ~/cpython/.hg is 200MB+, but ~/pike/.git is only 86MB. Does
Mercurial compress its content? A tar.gz of each comes down, but only
to ~170MB and ~75MB respectively, so I'm guessing the bulk of it is
already compressed. But 200MB for cpython seems like a lot.
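
(To see where the space actually goes:

    git count-objects -vH
    du -sh cpython/.hg pike/.git

Both git and Mercurial zlib-compress what they store, which is why
gzipping the repo barely shrinks it further.)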

Anyway, this problem is a good reason for dividing a repository up
into logically-separate parts. If you'll often want only one subtree,
maybe that shouldn't be a subtree of a monolithic repository.
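
(In git that usually means submodules; a sketch with a placeholder URL:

    git submodule add git://example.com/docs.git docs

Each part then remains its own repository with its own history.)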

ChrisA
 

rusi

Yeah, and depending on size, that can be a major problem. While git
_will_ let you make a shallow clone, it won't let you push from that,
so it's good only for read-only repositories (we use git to manage
software deployments at work - shallow clones are perfect) or for
working with patch files.

Hmm. ~/cpython/.hg is 200MB+, but ~/pike/.git is only 86MB. Does
Mercurial compress its content? A tar.gz of each comes down, but only
to ~170MB and ~75MB respectively, so I'm guessing the bulk of it is
already compressed. But 200MB for cpython seems like a lot.

[I am assuming that you have run "git gc --aggressive" before giving
those figures]

Your data would tell me that Python is about twice as large a project
as Pike in terms of number of commits. Isn't that a natural conclusion?
 

Chris Angelico

Yeah, and depending on size, that can be a major problem. While git
_will_ let you make a shallow clone, it won't let you push from that,
so it's good only for read-only repositories (we use git to manage
software deployments at work - shallow clones are perfect) or for
working with patch files.

Hmm. ~/cpython/.hg is 200MB+, but ~/pike/.git is only 86MB. Does
Mercurial compress its content? A tar.gz of each comes down, but only
to ~170MB and ~75MB respectively, so I'm guessing the bulk of it is
already compressed. But 200MB for cpython seems like a lot.

[I am assuming that you have run "git gc --aggressive" before giving
those figures]

They're both clones done for the purpose of building, so I hadn't run
any sort of garbage collect.

Your data would tell me that Python is about twice as large a project
as Pike in terms of number of commits. Isn't that a natural conclusion?

I didn't think there would be that much difference, tbh. Mainly, I just
wasn't seeing cpython as 200MB of history. Pike has ~30K commits (based
on 'git log --oneline|wc -l'); CPython has roughly 80K (based on 'hg
log|grep changeset|wc -l' - there's likely an easier way but I don't
know Mercurial). So yeah, okay, it's been doing more. But I still don't
see 200MB in that. Seems like a lot of content.
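
(For the record, there are shorter counts:

    git rev-list --count HEAD
    hg log -r tip --template '{rev}\n'

The hg one prints tip's local revision number, which is the changeset
count minus one.)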

ChrisA
 

Steven D'Aprano

I didn't think there would be that much difference, tbh. Mainly, I just
wasn't seeing cpython as 200MB of history. Pike has ~30K commits (based
on 'git log --oneline|wc -l'); CPython has roughly 80K (based on 'hg
log|grep changeset|wc -l' - there's likely an easier way but I don't
know Mercurial). So yeah, okay, it's been doing more. But I still don't
see 200MB in that. Seems like a lot of content.

If you're bringing in the *entire* CPython code base, as shown here:

http://hg.python.org/

keep in mind that it includes the equivalent of four independent
implementations:

- CPython 2.x
- CPython 3.x
- Stackless
- Jython

plus various other bits and pieces.

Plus, no offence intended at Pike, which I'm sure is an awesome language,
but it may not see quite as much active development as Python... as you
point out yourself, there are nearly three times as many commits to
CPython as to Pike, which coincidentally (or not) corresponds to the
CPython repo being nearly three times as large as the Pike repo.
 
