|
OK, so now I've watched the video, and, as I expected, it seems to cover only that one simple use case which mimics Tags (though maybe it doesn't). But it says nothing about the full power of Labels. The presenter isn't wrong, but the video may leave viewers thinking that that's all Labels are good for.
As I have said several times in this discussion, Labels are much more flexible than Tags. Tags are stupid; I see no real-world reason to use one. Git should add the same flexibility to Tags; then they would be usable.
I will also state that I would never create a Label the way the presenter demonstrates it. It can be very dangerous to do that in a high-velocity team environment.
In that regard, as I mentioned in my original response, I need an API with which I can implement a utility to standardize how Labels are created and maintained for my team. The TFS GUI in VS was not good enough for the job. Using TFS's .NET API, I was able to write a utility that used our tickets and changesets to create the Labels we needed.
|
|
|
|
|
Btw, I have used labels in the past, but that was well over a decade ago. Seems like forever, and I totally accept my memory is fuzzy in that regard, so I could be going senile.
Jeremy Falcon
|
|
|
|
|
Jeremy Falcon wrote: Name one thing TFS does better... I'm waiting.
Shelve sets, which reside in the cloud, vs. stashes, which reside on a local machine.
Disclaimer: I'm not a Git expert and don't claim to be one.
/ravi
|
|
|
|
|
Ravi Bhavnani wrote: Disclaimer: I'm not a Git expert and don't claim to be one.
Thanks for being the first person to actually make a legit point though. You're right about that, btw, so I can totally see how it would appear that way on the surface. It's a difference of philosophy though.
Git is all about branching. Way more than most SCMs. Stashes are meant to be about uncommitted changes. It's not always stuff you want to share. If there's a change set you want to keep on a remote server because you want to share the code or access it on another machine, just put your committed changes in a branch. You can have temporary branches all day long.
It's just a difference of philosophy: branches are used to the extreme in Git, but it's doable. Git really does excel at merging, so branches are used a lot.
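For instance, sharing work-in-progress via a throwaway branch is only a few commands. A rough sketch (the branch name here is just an example):

    # commit the work-in-progress on a temporary branch
    git checkout -b wip/share-with-team
    git add -A
    git commit -m "WIP: stuff to share or grab from another machine"
    git push -u origin wip/share-with-team

    # once it's served its purpose, delete it locally and remotely
    git branch -D wip/share-with-team
    git push origin --delete wip/share-with-team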
Jeremy Falcon
|
|
|
|
|
Perhaps I'm not using stashes the way they were intended to be used. Let me explain my scenario and maybe you could suggest a better approach.
I work on a large enterprise app that consists of a dozen web API apps. During development, the apps are deployed to my local IIS at the end of a successful build. Each app's web.config needs to be tweaked to reference the developer's machine, its development port numbers, and databases that reside on the dev's SQL Server. So what I stash is essentially my set of machine-specific web.config files. My daily work routine is as follows (roughly the commands sketched after this list):
- Stash web configs so I have no local changes.
- Pull latest code.
- Unstash web configs.
- Build locally.
- Work on my tasks.
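In plain git commands, steps 1-3 look roughly like this (a minimal sketch; the stash message is just illustrative):

    # 1. stash the machine-specific web.configs so the working tree is clean
    git stash push -m "my local web.config tweaks"

    # 2. pull the latest code
    git pull

    # 3. re-apply the stashed configs (pop also drops the stash entry; 'apply' would keep it)
    git stash pop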
Before I create a pull request, I create a branch with my changes and create the PR from that branch. When the PR completes, I delete that branch.
Maybe there's a better way of doing steps 1 and 3?
/ravi
|
|
|
|
|
You're doing it exactly the way you should, including deleting temporary work branches. And I can 100% see why you'd want to have local configs shared somewhere. The question is where? Should that be in the repo? From a security perspective, some will say never to put connection info in a repo, or anywhere it's not needed, and I personally fall into that camp, especially because Git never forgets changes and booboos happen. But then, life happens too.
So, given that, you've got a couple of options:
- You can still push a stash remotely, but it's a pain compared to other SCMs. If you want ideas, scroll down until you see Scott Weldon's answer. You can totally work with detached heads and commits in Git, as long as you remember the short SHA, where Git puts stuff, etc. (there's a rough sketch at the end of this post). Of course, this is a pain to do. But if you're just looking to back up or share something every now and again, it may be worth it.
- If the structure of that file is the most important thing (so you can remember what settings you need), you can always create a template settings file that contains everything but the actual connection values and put that into the repo permanently. This is what I do, using something like dotenv (I've been in the web world lately). There will be a template file, committed to the repo, that shows everything that needs to go into the actual config, while the real values are saved elsewhere. This way you can share the how but not the what.
I guess at the end of the day, the difference is that, for connection info, I'll put it in Confluence or something if it needs to be shared. Both ideas, keeping it in the shared repo (all in one place) and keeping it out of the shared repo (security), have merit, ya know.
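To make the first option concrete: a stash entry is really just a commit, so you can push it to a throwaway remote branch and pull it down elsewhere. A rough sketch, assuming a remote named origin (the branch name is just an example, and note that Git will only stash-apply a commit that has the shape of a stash commit):

    # a stash entry resolves to a real commit...
    git rev-parse stash@{0}

    # ...so you can push that commit to a temporary branch on the remote
    git push origin stash@{0}:refs/heads/stash-backup

    # on another machine, fetch it and apply it like a stash
    git fetch origin stash-backup
    git stash apply FETCH_HEAD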
Jeremy Falcon
|
|
|
|
|
Thanks for your reply.
Unfortunately it's not just connection strings, but also IIS port numbers. These are specific to the developer's machine. I've decided that the fastest way forward is to write an automated script that updates a dev's config files. Devs will use this script at the start of their workday. But IMHO this is a workaround for something (remote stashes) an SCM should provide out of the box.
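The script itself can be dead simple, just stamping per-dev values into each config. A hypothetical shell sketch (every name here, dev.env, the placeholder tokens, and the template files, is made up for illustration; on Windows a PowerShell equivalent would be the natural fit):

    #!/usr/bin/env bash
    # read this developer's machine-specific values (not tracked by git),
    # e.g. DEV_HOST=ravibox and DEV_PORT=8081
    source dev.env

    # stamp the values into each app's web.config from a committed template
    for template in */web.config.template; do
        target="${template%.template}"
        sed -e "s/{{DEV_HOST}}/${DEV_HOST}/g" \
            -e "s/{{DEV_PORT}}/${DEV_PORT}/g" \
            "$template" > "$target"
    done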
/ravi
|
|
|
|
|
Ravi Bhavnani wrote: But IMHO this (remote stashes) is a workaround for something an SCM should provide out of the box.
Fair enough. Thanks for being the first person to say something legit though. There's another dude going on for days who clearly doesn't know what he's talking about. So, your chat was totally refreshing.
Jeremy Falcon
|
|
|
|
|
Maximilien wrote: Maybe there's just something I don't get from the system.
It's not you.
|
|
|
|
|
Stick with git. It'll take you much further and make you more flexible.
Jeremy Falcon
|
|
|
|
|
I haven't run into any issues with using Git. I use Git via Visual Studio and Azure DevOps Repos, and to me, it is seamless. I get the idea of a local repo and a remote repo, with branching, that Git uses. I have used Bitbucket and Subversion, liked both, and found I like Git more, especially for team use.
|
|
|
|
|
Maximilien wrote: but going from git to TFVC feels like a step back.
Because it is a step back...
TFVC is a centralized version control system, like Subversion.
You basically went from Git, which is decentralized, back to a different flavor of Subversion.
Microsoft has been ditching TFVC in favor of Git for years now.
|
|
|
|
|
Exactly. I dunno about you, but a good dev does frequent (ok, not too frequent) commits. I don't miss slow commits at all.
A commit is a check-in for the non-git folk among us.
Jeremy Falcon
|
|
|
|
|
When I was managing a dev team of 6 on a pretty large codebase, we used TFVC and later Git. TFVC worked very well for us, and was relatively easy to manage. The obvious difference was centralised vs. decentralised/distributed; we didn't need that complexity. Torvalds designed Git because he was managing many developers spread around the world in different time zones, and the distributed nature of Git matched that. You pick the tool suited to the job in hand.
Moving from any tool you are familiar and competent with to another is often going to feel awkward, if not a backward step, until you are experienced with the new one. Persevere and you will eventually find they both have their strengths and weaknesses.
With regard to "must use command line" (or GUI), I find the professional elitism or evangelism that pervades so many software developers irksome and irritating, and the use of the command line as some sort of implied superiority especially so. I use the command line myself when it's easier or has options I need that are not supported by the available GUI tools, and GUI tools when they are more convenient. The command line is useful for automating processes with scripts; most other times it's easier to use a GUI. Just pick the approach that suits the circumstances.
|
|
|
|
|
|
"Use the right tool for the right job." Git is not the right tool for any job I've had, but TFS is.
I have never had a job in which a distributed version control system would make a positive difference.
I found TFS to be much more usable than Git -- TFS just worked. Git was a step back. Maybe the Visual Studio Git integration will some day be up to what we have with TFS.
For myself, my experience is more like:
CMS (OpenVMS) -> look at VSS, reject it -> look at Subversion, reject it, get forced to use it anyway -> rolled my own* -> Subversion again, but with Tortoise -> TFS -> get forced to use Git -> roll another of my own
What makes CMS and TFS superior to VSS, Subversion, and Git is Classes (in CMS) and Labels (in TFS). The others do not have an equivalent feature.
TFS also has integrated ticketing and a .net API, which makes it the best system I currently know of.
A feature CMS (OpenVMS only) has that the others don't is the ability to target multiple libraries (repositories) at the same time. I wish the others would add that, but they won't. CMS also supports Groups, which the others don't.
HP has again killed the OpenVMS Hobbyist program, so I can't give any demos of how great CMS is.
* I may have mentioned this before. I began it in 2009 (as I recall) and I got it to a vaguely usable state before reaching a major decision point and stalled. The major features are pretty much that of CMS. Occasionally, I think about getting back on it.
|
|
|
|
|
I wish I could say that my experience was that TFS "just worked", because I found the opposite to be true. I've used (and administrated) all of these source control systems:
* Perforce
* ClearCase
* TFS
* git (in CLI and also via BitBucket, GitLab, GitHub)
* svn
* SourceSafe (which is by far the worst of all of these)
* MKS Source Integrity
* PVCS
* CVS
* sccs
* CMS (VAX)
Of these, TFS is a bottom-quartile experience. If you're heavily into the Windows dev train, then it bumps up in usefulness just a hair, but it still sucks. There are 3 items on this list that I'd quit my job over rather than use again, and TFS is one of them.
|
|
|
|
|
Br.Bill wrote: If you're heavily into the Windows dev train
Oh, yes, I agree with that. I should have added a qualifier. I doubt anyone would use TFS if they are not using Visual Studio.
I was mainly using SSIS, which necessitated the use of Visual Studio anyway.
Trying to use Git with SSIS was very problematic (for me anyway; there were other issues involved [e.g. corporate politics] which didn't help), and I had to create a nasty workaround to get it to work.
I have never had a choice of version control system; they are always dictated by management.
As to writing code (C/C++, C#, etc.), I prefer not to use Visual Studio at all.
Similarly for SQL code: I develop that in SSMS, not Visual Studio, then I have to generate scripts via the SMO API in .NET to get them right.
|
|
|
|
|
I totally and publicly retract all my previous cussing, blaming, excoriations, and bad-mouthing of AOMEI.
It isn't quite fair to judge a piece of software when my USB interface is delivering about fourteen or fifteen megabytes per second.
And...
That figure itself is being very generous to the USB hardware. I highly suspect that the real number is about ten or twelve. With a stopwatch in my hand, I watched two gigabytes of files go across in a Microsoft Windows Explorer file copy. It was five individual files.
They (Microsoft) now have a little speedometer that shows you the instantaneous transfer rate as the files are being moved from one place to the other. The highest number I ever saw, I think, was 32 megabytes per second, and that was only occasionally.
At this moment, the hardware of my PC appears to be the problem.
Resolved: Yes, I really should put my own PC together with my own two hands.
|
|
|
|
|
C-P-User-3 wrote: The highest number that I ever saw, I think, was 32 Megabytes Per Second
That sounds about right for a USB 2.0 port maxing out.
From some random search:
Quote: USB 2.0 clock speed is 480 megabits per second. That's 60 megabytes per second. Given the protocol overhead and the fact that USB 2.0 is half-duplex, the maximum data rate will be 30-40 megabytes per second. The 480 megabits per second limit applies to the USB controller and is shared between the ports attached to it.
|
|
|
|
|
So, on my (Win'10) machine, under "Device Manager", clicking on "Universal Serial Bus controllers", I read...
...I clearly lack understanding.
Brain assistance is always welcome.
|
|
|
|
|
I suppose it's entirely possible to have a USB3 controller with some USB2 ports connected to it, or a USB3 controller that is (incorrectly) configured to work in 2.0 mode. I have to think this would be a BIOS setting.
Someone with more knowledge on this topic than I have should be able to provide better-informed answers. Ultimately, just because a controller is reported as 3.0 doesn't mean that's what it will always run at...
|
|
|
|
|
Is the thing you have plugged into this USB controller USB3 compatible?
|
|
|
|
|
Peter Adam wrote: Is the thing you have plugged into this USB controller USB3 compatible?
You is a mind reader.
That has been my exact question for days now.
|
|
|
|
|
I realize this is from five days ago, but I was looking for a sensible comment on "intelligent" and this is what I wandered into; yes, I read the other backup-software post adjacent to this one about the same slowness issue.
If I'm reading this correctly, with the XHCI driver installed, this computer is now OK and everything runs as expected?
Didn't the slow mouse movement come before the USB drive slowness?
Anyway, about your use of the word "clone" in reference to building your own computer: as I recall, once AOMEI is installed, there's a HELP icon on its main menu. In the document a click on it spawns, you'll find two words it's to a user's advantage to treat as contrasting idioms: "CLONE" and "IMAGE". For what it's worth, a CLONE is made to a target drive partition and is used in reference to a drive and its partition(s). An IMAGE is technically just a backup of whatever the software treats as an origin, made to some storage space available to the system (possibly an external USB drive, DVD, CD, etc.), and it is a file with an extension.
So, your mouse really didn't drive you crazy before your drives came into general use? I'm amazed and astonished.
|
|
|
|
|