Maybe "minimal api" means "most of the methods are commented out because they're not working yet".
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
Quote: .NET 6 marks the completion of Microsoft’s plan to unify the different .NET implementations.
As usual, MS is declaring victory rather than achieving it. The latter would mean I could update my solution from .NET 4.x to .NET 6 and, at most, have to click a "fix all the breaking changes" button and be good to go.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
I'm not sure that's true. The minimal API thing is just a change to the templates. Personally, I think the minimal API thing is a pile of crap; it essentially changed the way this stuff has been done since the beginning of time. In a bad way.
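For reference, the entire Program.cs produced by the new .NET 6 web template is roughly this (no Startup class, no explicit Main):

    // The .NET 6 "minimal API" template style: top-level statements,
    // implicit usings, and routing declared directly on the app.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/", () => "Hello World!");

    app.Run();

The old Program/Startup pairing still compiles fine; the change is in what the templates generate.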
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
I wasn't commenting on the minimal APIs, but on MS declaring victory in unifying .NET. As long as there are enough breaking changes that porting from Framework to Core requires significant work, it is not unified.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
And they're still enhancing LINQ while failing to fix obvious omissions, such as support for the "?." operator. That means you never know until compilation whether you can use certain valid C# expressions in LINQ.
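To make the omission concrete, here's a minimal sketch (Person and Address are hypothetical types): the same lambda is legal as a delegate but rejected as an expression tree, which is what LINQ providers consume.

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    class Address { public string City; }
    class Person { public Address Address; }

    class Demo
    {
        static void Main()
        {
            var people = new[] { new Person() };

            // LINQ to Objects (delegate lambda): "?." compiles fine.
            var cities = people.Select(p => p.Address?.City);

            // Expression-tree lambda (what IQueryable providers receive):
            // uncommenting this fails with CS8072, "An expression tree lambda
            // may not contain a null propagating operator."
            // Expression<Func<Person, string>> expr = p => p.Address?.City;

            Console.WriteLine(cities.Count());
        }
    }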
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
Everyone likes the idea of squeezing as much life out of their laptop as possible, and every new iteration of Windows has made great promises about prolonging battery life. And here I thought I had a perpetual calculation machine.
I guess I'm alone in thinking that value should be clamped to never appear as more than 100%?
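The fix would be a one-liner; a minimal sketch (rawPercent standing in for whatever the firmware reports):

    using System;

    class Battery
    {
        // Clamp the reported value before it ever reaches the UI.
        static int DisplayPercent(int rawPercent) => Math.Clamp(rawPercent, 0, 100);

        static void Main() => Console.WriteLine(DisplayPercent(104)); // prints 100
    }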
So does Android 12 on my phone, if you use it while it's being charged.
Kent Sharkey wrote: I guess I'm alone in thinking that value should be clamped to never appear as more than 100%?
Even the infamous Windows progress bar never went above 100, as far as I've seen. Must be a new batch of interns...
Did they take notes from Samsung?
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
I too run into the issue of underestimating the complexity of projects at first glance. I do it a lot, actually. Software estimation is the root of most evil.
Unless you go with the correct answer - "it depends"
Kent Sharkey wrote: Unless you go with the correct answer - "it depends"
Kind of related[^]
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
It will enable enterprise admins to choose the drivers to deliver via Windows Update in their environment out of an assortment of matching options and schedule them for deployment. Pick one from column A, and two from column B
These clowns - I'm going to make a prediction. Clearly this initial offering is targeting enterprise-level shops. Where I work, the IT group standardizes on well-known common platforms - you can have any laptop you want as long as it's a Lenovo xxx or a Dell abc. Doing so, they skip the entire driver mess to begin with.
My prediction: under the covers, the clown group, some she-her or a he-him, will decide to fold it into Windows 11. Hell, they might even try to do it with Windows 10. And it will be an unmitigated disaster, as automatic driver updates already are.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
This is so obviously going to end badly...
I am going to start cooking popcorn for the new wave of news about breaking things or things breaking.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
The landmark 64-bit Visual Studio 2022 is now generally available, for the first time offering developers much more memory to work with, along with other innovations like IntelliCode and Hot Reload. 64 bits, no waiting*
*Just kidding, there will still be waiting
A newly published patent shows designs for a Microsoft Arc-like mouse that bends. They're all bendable if you really try hard enough
If you're migrating an app to .NET 6, the breaking changes listed here might affect you. Because you can't make an omelet without breaking some APIs
Today’s release is the result of just over a year’s worth of effort by the .NET Team and community. The sixiest .NET ever!
Bet v1 is faster on comparable hardware.
I definitely would not take that bet.
TTFN - Kent
We asked our community to share about a time they sat down and wrote code that truly made them proud. Bath tub not required
Posted to see if we can get a few of our own stories here
Two stories (the two I was thinking about when I replied to honey the codewitch[^]):
1a: Detecting hydrogen fires in real time after the Challenger accident - We had a multispectral camera, basically a camera with a spinning wheel holding 6 narrowband filters in front of the CCD, with the spin rate sync'd to the camera's 60 Hz scan rate, and I managed to do two things (quite impressive given this was 30 years ago): flip the image capture board into capture mode for the desired filter's frame, then flip back to display that captured frame for the next 5 frames, rinse and repeat. (By removing the IR filter in front of the CCD, it was just barely able to detect the emissions around 950 nm from burning hydrogen.) All of this happened during the vertical refresh interval, so it had to be assembly code, and the code let you move to the previous / next filter on the wheel. The point being, you could see just the filter you wanted to see.
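Schematically, the loop looked something like this (the real thing was assembly against the frame grabber; WaitForVerticalBlank and SetBoardMode are hypothetical stand-ins for hardware register pokes):

    class FilterWheelLoop
    {
        const int Filters = 6;            // narrowband filters on the wheel
        static int selectedFilter;        // changed by the prev / next keys

        enum Mode { Capture, Display }

        static void Run()
        {
            while (true)
            {
                for (int frame = 0; frame < Filters; frame++)
                {
                    WaitForVerticalBlank();   // all flips happen in the blanking interval
                    // Capture only the frame matching the chosen filter;
                    // redisplay that captured frame for the other five.
                    SetBoardMode(frame == selectedFilter ? Mode.Capture : Mode.Display);
                }
            }
        }

        static void WaitForVerticalBlank() { /* hypothetical hardware wait */ }
        static void SetBoardMode(Mode m) { /* hypothetical board register write */ }
    }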
1b: The PhD people had created a complicated FFT to analyze all six frames of a captured set of images to determine whether a hydrogen fire existed. It took something like 30 minutes to run (remember, this was 30 years ago), produced a questionable image result, and then you had to tweak the parameters and try again. I realized that the entire process was just a lookup table of intensity for each filter band, so I wrote a near-real-time translation to produce a single video frame from the six filter frames.
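The shape of that idea, as a rough sketch (the quantization and table contents here are invented; the real table came from the spectral signature of burning hydrogen):

    class FireCompositor
    {
        const int Filters = 6;

        // Hypothetical table: indexed by a coarsely quantized six-band
        // intensity signature, returns the output pixel intensity.
        static readonly byte[] Lut = new byte[1 << 12]; // 6 bands x 2 bits

        // Fold the six per-filter frames into one output frame: one table
        // lookup per pixel instead of a 30-minute FFT.
        static byte[] Composite(byte[][] frames /* [Filters][pixels] */)
        {
            int pixels = frames[0].Length;
            var output = new byte[pixels];
            for (int p = 0; p < pixels; p++)
            {
                int index = 0;
                for (int f = 0; f < Filters; f++)
                    index = (index << 2) | (frames[f][p] >> 6); // 2 bits per band
                output[p] = Lut[index];
            }
            return output;
        }
    }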
2: The PhDs had been working on analyzing the failure modes of switch rings in satellites (this[^] is a simple but good example). The idea being: analyze the ring the engineers dreamed up for handling failed TWTAs (Travelling Wave Tube Amplifiers), which would be switched over to spare TWTAs on the satellite, and determine which failure modes couldn't be handled even if there were available spares. This is not as simple as one might think, as the output of one switch can be the input of another switch as an alternate input.
The point being, the PhDs were using the tools in their PhD toolbox: complicated network-analysis algorithms that they couldn't figure out, and that nobody could figure out how to turn into code even if they had. I ended up looking at the problem from the opposite direction - given an output combination, what were the valid inputs? That produced a list of inputs that couldn't be handled, based on the simple rule that a T-switch or C-switch (another pic here[^]) could only have a maximum of two inputs and two outputs. This greatly simplified the complexity of the analysis because the rule was simple: if there are 3 or 4 inputs going into a switch, that's a failure case in the ring topology.
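As a minimal sketch of that rule (the ring model here is far simpler than a real switch-ring topology):

    using System.Collections.Generic;
    using System.Linq;

    class SwitchNode
    {
        public string Name = "";
        public List<string> ActiveInputs = new(); // signals converging on this switch
    }

    static class RingRule
    {
        // A T- or C-switch can route at most two inputs, so any configuration
        // driving three or four inputs into one switch is a failure case.
        public static IEnumerable<SwitchNode> Failures(IEnumerable<SwitchNode> ring) =>
            ring.Where(s => s.ActiveInputs.Count > 2);
    }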
The result was that the code could analyze fairly complex rings in less than an hour. Once multicore processors came out, I refactored the code to split the analysis across the # of cores, so it could handle more complex topologies. To my knowledge, the satellite manufacturer is still using my code to this day - originally written almost 30 years ago in C++, then rewritten in C#, without performance degradation, mind you.
Sorry for the long post!
Fascinating stories - thanks for sharing them!
originally written almost 30 years ago in C++, then rewritten in C#, without performance degradation mind you.
I'd be interested in hearing an expansion on this. C++ folks tend to be rather religious in their belief that there's just no way C# could be as performant as C++ for anything other than contrived scenarios, and a real-world example with an explanation would be an interesting tidbit to add to that ageless argument. (Possibly in the Lounge, or an article, if not appropriate here.)
I say this with love as I used to be one of those C++ folks.
Gjeltema wrote: C++ folks tend to be rather religious that there's just no way that C# could be as performant as C++
The main issue was my use of the STL and all the memory allocations / deallocations that were sucking up a ton of time. I improved on that in the C# code (one could argue that if I did the same thing in C++, it would still be even faster than the C# code).
However, C#'s p-code (IL), JIT-compiled to native CPU code, is really, in my experience, just as fast as C++ unless you spend a lot of time optimizing the C/C++ code.
Conversely, in the C# code, I discovered this issue[^] with memory allocations in threads and wrote that article about it.
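The flavor of the allocation fix was something like this (a from-memory sketch, not the actual code):

    using System;
    using System.Collections.Generic;

    class Analyzer
    {
        // Allocated once and reused across calls, instead of newing a
        // container (STL-style) on every invocation.
        private readonly List<int> _scratch = new(capacity: 4096);

        public int CountCandidates(ReadOnlySpan<int> inputs)
        {
            _scratch.Clear();             // resets the count, keeps the backing array
            foreach (var value in inputs)
                if ((value & 1) == 0)     // stand-in for the real filtering rule
                    _scratch.Add(value);
            return _scratch.Count;
        }
    }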
The situation was where a Xerox Data Systems (XDS) Sigma 5 was being used to collect and process telemetry data transmitted by a satellite.
This was in 1974; the network was probably designed in the 1960s and was state-of-the-art equipment for its time. The highest-speed line was 220 kbps, and the communications controller occupied two full cabinets.
The satellite in question would transmit three tape reels' worth of data every day. If the data came through low-speed lines (because that was what certain ground stations could support), the computer could process about two tapes' worth of data, saving the last reel as unprocessed data. If the high-speed line was in use, then it was all the computer could do just to write the data to tape reels. Over one year, about 300 unprocessed reels of data were sitting in the tape library.
The contract called for the system to support 3 satellites in orbit simultaneously. The computer was choking on the data from just one satellite. A second satellite was to go up in about six months, with a third one scheduled for a year later.
At the time, I was working in an obscure field known as computer performance evaluation. This called for probes to be connected to certain pins available on the motherboard. For the IBM 360 series, these pins were known. Data obtained from these probes would tell you which parts of the CPU were being used frequently, and there was software written to analyze this data.
Unfortunately, the pin output information was not available for other computers. In fact, it was not in the interest of the computer vendor to optimize performance as they could sell faster and bigger processors to the customer. The hardware monitoring equipment was in fact sold by two vendors independent of IBM.
Thus, I had to figure out how to simulate a hardware monitor in software.
The program, conceptually, was trivially simple. Every 100 milliseconds or so, my program would interrupt the computer, look at what instruction was being executed and in which part of the memory that instruction resided: was it in the operating system or in the application program?
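In modern terms, the idea was something like this (the original interrupted a Sigma 5; the region boundaries here are hypothetical):

    using System;
    using System.Collections.Generic;

    class RegionSampler
    {
        private readonly Dictionary<string, int> _hits = new();

        // Called every ~100 ms with the address of the interrupted instruction.
        public void Sample(uint instructionAddress)
        {
            string region =
                instructionAddress < 0x4000 ? "operating system" :
                instructionAddress < 0xC000 ? "overlay area"     :
                                              "application";
            _hits[region] = _hits.GetValueOrDefault(region) + 1;
        }

        public void Report()
        {
            foreach (var (region, count) in _hits)
                Console.WriteLine($"{region}: {count} samples");
        }
    }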
It turned out that a vast majority of the time was being spent in the area reserved for program overlaying. Ah, yes, this particular OS didn't have virtual memory (hardly any of the OSes on various computers had virtual memory at that time, and certainly not the XDS Sigma 5, which would hardly qualify as a minicomputer), so we programmatically swapped overlays of the application program into main memory as needed.
The measurements suggested that we needed to add more main memory to the computer so that the overlays could be more optimal. The request for 16 KB of additional memory was approved (for a 64 KB computer) at a cost of $10,000.
After the additional memory was installed, I changed the overlay pattern to minimize page swapping.
The results were spectacular.
In the next 30 days, not only did we process the daily load of 3 tapes, but we also processed the backlog of 300 tapes - a total of about 13 tapes a day. With 3 satellites in orbit, we would only need to process 9 tapes a day!
I was mightily pleased that I could do this so early in my career and the very first time I ventured into OS territory.
Later, I would go on to use hardware monitors on IBM computers to measure performance. It was awesome to see a selector channel on an IBM 360 run full-bore at 100% utilization; you knew at once that the IBM laser line printer running at 20,000 lines per minute was going full blast printing utility bills for the local electric company, or that the check sorter for the bank was reading and sorting checks. Not even disk drives could achieve that level of utilization on an I/O channel; in fact, exceeding 35% on a channel used by disks was an indication that it was the choke point for the computer.