jschell wrote: trønderen wrote: There is no physical contact between the arm/head and the platter, and no physical wear from long use.
Nor did I claim that. Nor was it my intention to repeat what you said, but to add information.
I have never before heard of a disk arm "wearing out". If this was a problem, disks that have been operating for many, many years should have broken down a long time ago due to arm failure. You don't hear much of that! (*)
Also, the failure of a disk arm is not necessarily a result of mechanical wear. It could be e.g. an electronics failure, dirt entering into the mechanical parts, deformation due to mechanical shock (dropping the disk to the floor etc.) or a number of other reasons.
The person reporting this problem says the disk arm moves back and forth about 1/4 of an inch. You cannot see that without opening the disk. That makes me somewhat suspicious: I would never trust a disk that has been opened up by an amateur.
trønderen wrote:a cell in an SSD is worn out with repeated writes.
Again not something I said. Again, I didn't intend to repeat what you said, but to add information.
(*) A disk arm is operated very much like a loudspeaker cone. A story from a while back: one of my study mates had ordered a really powerful audio amplifier and a set of large speakers. The amp arrived before the speakers, so he tried out the amp with his old, small speakers. This guy was into classical music, and he reported that when playing the "1812 Overture", when the cannons were fired, he got smoke effects from his speakers.
If you could directly control the power applied to your disk arm, you could possibly have it provide similar smoke effects. But you cannot, unless you really set out with a determined wish to destroy your disk unit.
Yes I am aware.
It's not an arm moving that causes the wear. It's the natural wear process that every bit of flash memory is susceptible to, and lots of fragmentation can cause more of it.
If you google hard enough I'm sure you'll find I'm not fibbing to you.
Other posts provided links but I still have not seen anything authoritative.
The following link, while still not authoritative, agrees with what other, even less authoritative sources say. And at least the post date is more recent.
https://www.pcmag.com/how-to/how-to-defrag-your-hard-drive-in-windows-10
That link, and others, state that an SSD should not be 'defragged' but rather 'trimmed', and that is what that process does.
As noted in my other post, my Windows 10 computer does NOT have the stated process enabled. I didn't turn it off. And I think I remember installing Windows 10 directly (I remember because I was annoyed that it didn't come installed out of the box).
But I can't find anything that suggests whether the default is turned on or off.
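For what it's worth, you can query the TRIM setting yourself from an elevated command prompt. The exact output wording varies by Windows version, but `DisableDeleteNotify = 0` means the disable flag is off, i.e. TRIM is enabled (and as far as I know it has been on by default since Windows 7 whenever the drive is detected as an SSD):

```text
C:\> fsutil behavior query DisableDeleteNotify
NTFS DisableDeleteNotify = 0  (Disabled)
```

Note the double negative: "Disabled" there refers to the disable flag itself, so 0 is the good value.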
No, it's utter nonsense. For at least two reasons.
For one, assuming you are running a recent Windows, NTFS isn't prone to fragmentation anymore in the way FAT used to be in the days of old.
And second, you are just wasting write cycles on that SSD (which are still limited below the lifetime of a "spinning rust" drive) for little gain, if any at all.
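To put rough numbers on the "wasting write cycles" point — all figures here are made up for illustration (a hypothetical drive rated for 600 TBW of endurance and a weekly defrag run that shuffles 50 GB), not measurements of any real drive:

```python
# Back-of-the-envelope arithmetic, with hypothetical numbers:
# how much of an SSD's rated endurance a weekly defrag would burn.
TBW_RATING_TB = 600      # assumed drive endurance rating, terabytes written
DEFRAG_REWRITE_GB = 50   # assumed data rewritten per defrag run
RUNS_PER_YEAR = 52       # weekly scheduled defrag

yearly_writes_tb = DEFRAG_REWRITE_GB * RUNS_PER_YEAR / 1000
share_of_endurance = yearly_writes_tb / TBW_RATING_TB
print(f"{yearly_writes_tb:.1f} TB/year, {share_of_endurance:.2%} of rated endurance")
```

With these particular assumptions the cost is small in absolute terms, but it is still endurance spent on an operation that buys essentially nothing on an SSD.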
Doesn't Windows itself refuse to defrag an SSD? Try running Windows defrag on an SSD and it will simply do some 'trimming', and won't show any fragmentation status. That should be a clear answer; the maker of the OS should know best.
Try this: Run defrag c: from an elevated command prompt.
Be patient, it takes quite a while, and you get:
Pre-Optimization Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 20%
Largest free space size = 863.72 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
The operation completed successfully.
Post Defragmentation Report:
Volume Information:
Volume size = 930.65 GB
Free space = 868.64 GB
Total fragmented space = 0%
Largest free space size = 863.75 GB
Note: File fragments larger than 64MB are not included in the fragmentation statistics.
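Side note, worth double-checking against your own Windows build: defrag.exe has switches that let you look before touching anything. Something like this (annotations mine):

```text
C:\> defrag C: /A     (analyze only; report fragmentation, change nothing)
C:\> defrag C: /L     (retrim the volume - what "Optimize" does for SSDs)
C:\> defrag C: /O     (perform the proper optimization for the media type)
```

Running with /A first shows whether the plain `defrag c:` above actually moved data on the SSD or just retrimmed it.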
Ok, I have had my coffee, so you can all come out now!
modified 6-Dec-23 7:50am.
Short answer: No.
Long answer:
Unlike on mechanical drives, data blocks aren't stored physically in the same order as they are logically. In most cases blocks end up physically fragmented internally regardless of the order the OS believes them to be in. But this doesn't matter, since all blocks are accessed at the same speed (**), eliminating any speed advantage of sequential access (++).
** Hypothetically, a high end drive could read/write multiple physical flash chips simultaneously, allowing a block to be accessed without waiting for a prior one to finish, if stored on a different chip.
++ Some Flash Translation Layer (FTL) structures may have a slight speed advantage from accessing co-located logical blocks (such as unfragmented reads). But the speed improvement would be trivial compared to the speed increase of sequential access of mechanical drives.
While a given manufacturer's implementation may vary, the mapping of blocks generally works something like described on one of these pages:
Overview of SSD Structure and Basic Working Principle(2)
Coding for SSDs – Part 3: Pages, Blocks, and the Flash Translation Layer | Code Capsule
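As a toy illustration of that mapping idea — this is a deliberately simplified sketch, not any vendor's actual FTL: flash writes go out of place, so rewriting one block of a logically contiguous file moves it to a fresh physical page, and logical order stops saying anything about physical layout.

```python
# Toy flash translation layer sketch (hypothetical, heavily simplified):
# logical block addresses are remapped to whatever physical page is free,
# so logical "fragmentation" says nothing about physical placement.
class ToyFTL:
    def __init__(self, num_pages):
        self.mapping = {}                       # logical block -> physical page
        self.free_pages = list(range(num_pages))

    def write(self, lba):
        # Out-of-place write: always take a fresh physical page; the old
        # page (if any) is left stale for garbage collection to reclaim.
        page = self.free_pages.pop(0)
        self.mapping[lba] = page
        return page

ftl = ToyFTL(num_pages=8)
# A logically contiguous file, written in order...
for lba in [0, 1, 2, 3]:
    ftl.write(lba)
# ...then block 1 is rewritten: it moves to a new physical page.
ftl.write(1)
print(ftl.mapping)  # logical block 1 no longer sits between 0 and 2 physically
```

After the rewrite, an OS-level defragmenter would see a perfectly contiguous file, yet the physical pages are already out of order — and the drive doesn't care.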
...downhill!
VS consuming huge amounts of memory isn't new (even MS decided to ignore it totally)...
But now I have something new... and it has been confirmed several times...
I have a solution with around 80 projects in it, only several loaded at any given time... If I reload a project to change something, it will not compile until VS is closed and re-opened...
Until then it will report that compilation failed, without any actual error, but also without the option to run...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
After the last update of VS2022 my colleague reported that debugging with step over and step into didn't work anymore. It was not clear to me whether he was talking about C++ or C# debugging; he also uses other debugging tools that might interfere with VS debugging.
RickZeeland wrote: debugging with step over and step into didn't work anymore. It was not clear to me if he was talking about C++ or C# debugging
Interesting you'd mention that. I installed the latest update last week, and on Thursday/Friday, on multiple occasions, single-stepping (F10) seemed to continue execution, or couldn't recover, or something like that. I attributed it to me fat-fingering it, but it happened enough times that, now that I see your post, I'm wondering if there's something to it.
In my case that would be C#.
Wow. Not testing much, are you, Microsoft?
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
One has to remember that multi-project solutions don't (always) compile if you haven't checked the proper project(s) in the "Build | Configuration Manager" unless you specifically ask to "Build / Rebuild" that project. (Been there)
On the other hand, when VS is "sleeping", it "seems" to release (more) excess memory. I think they're doing a lot of tinkering.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
I have a very precise dependency tree, so compiling the main project will compile everything that is outdated - I also mostly do build-solution...
But the main issue is that there is no error behind the failure, and re-opening VS solves the problem - which indicates that VS does not know how to reload an unloaded project correctly... anymore... (which is fixed by re-opening VS and the solution)...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
Kornfeld Eliyahu Peter wrote: VS consuming huge amount of memory isn't new
Compared to which IDE that uses very little?
Kornfeld Eliyahu Peter wrote: I have a solution with around 80 projects in it
To me that would be an organization problem. I would break it into different solutions, and if that was not possible, then it would suggest a different sort of problem.
Wordle 897 3/6*
🟩🟨⬛⬛⬛
🟩🟨🟩⬛🟩
🟩🟩🟩🟩🟩
Wordle 897 3/6
🟩🟩⬜⬜⬜
🟩🟩⬜🟩🟩
🟩🟩🟩🟩🟩
All green 💚.
Wordle 897 3/6
⬛⬛🟩⬛⬛
⬛🟨🟩⬛🟨
🟩🟩🟩🟩🟩
⬜⬜🟩⬜⬜
🟨⬜⬜⬜⬜
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
Wordle 897 4/6
⬜⬜⬜⬜⬜
⬜⬜⬜⬜⬜
🟨🟨⬜🟩🟨
🟩🟩🟩🟩🟩
Wordle 897 3/6
🟩⬜🟨⬜⬜
🟩⬜⬜⬜🟨
🟩🟩🟩🟩🟩
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
Wordle 897 4/6
🟩🟩⬛⬛⬛
🟩🟩⬛⬛🟩
🟩🟩⬛⬛🟩
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
Wordle 897 4/6
⬜⬜🟩🟨⬜
⬜⬜🟩⬜🟩
⬜⬜🟩⬜🟩
🟩🟩🟩🟩🟩
I recently bought a new bluetooth device and it only uses Bluetooth LE. The Bluetooth adapter on my desktop is too old to connect.
So $16 later, I have an upgraded adapter that supports BT 5.0. My new device connects and works!
But now my older bluetooth devices don't connect...
clarification: They won't pair with the new adapter.
The difficult we do right away...
...the impossible takes slightly longer.
modified 2-Dec-23 14:25pm.
That's disturbing to hear.
So, all of your devices happen to connect to either BT or BT LE exclusively...?
Can they not co-exist? Maybe installing the LE adapter disabled the older one... check Device Manager and such.
Beyond that, I'm just guessing. I've never purchased a BT device, and those that came with one, have had it disabled.