|
These clowns - I'm going to make a prediction. Clearly this initial offering is targeting enterprise-level shops. Where I work, the IT group standardizes on well-known, common platforms - you can have any laptop you want as long as it's a Lenovo xxx or a Dell abc. Doing so, they skip the entire driver mess to begin with.
My prediction - under the covers, that group will decide to fold it into Windows 11. Hell, they might even try to do it with Windows 10. And it will be an unmitigated disaster, just as the automatic updating of drivers already is.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
This is so obviously going to end badly...
I am going to start popping popcorn for the new wave of news about breaking things, or things breaking.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
The landmark 64-bit Visual Studio 2022 is now generally available, for the first time offering developers much more memory to work with, along with other innovations like IntelliCode and Hot Reload. 64 bits, no waiting*
*Just kidding, there will still be waiting
|
|
|
|
|
A newly published patent shows designs for a Microsoft Arc-like mouse that bends. They're all bendable if you really try hard enough
|
|
|
|
|
If you're migrating an app to .NET 6, the breaking changes listed here might affect you. Because you can't make an omelet without breaking some APIs
|
|
|
|
|
Today’s release is the result of just over a year’s worth of effort by the .NET Team and community. The sixiest .NET ever!
|
|
|
|
|
Bet v1 is faster on comparable hardware.
|
|
|
|
|
I definitely would not take that bet.
TTFN - Kent
|
|
|
|
|
We asked our community to share about a time they sat down and wrote code that truly made them proud. Bath tub not required
Posted to see if we can get a few of our own stories here
|
|
|
|
|
Two stories (the two I was thinking about when I replied to honey the codewitch[^]):
1a: Detecting hydrogen fires in real time after the Challenger accident - We had a multispectral camera, basically a camera with a spinning wheel holding 6 narrowband filters in front of the CCD, with the spin rate sync'd to the camera's 60 Hz scan rate. I managed to do two things (quite impressive given this was 30 years ago): flip the image-capture board into capture mode for the frame from the desired filter, then flip it back to display that captured frame for the next 5 frames, rinse and repeat. (By removing the IR filter in front of the CCD, it was just barely able to detect the emissions around 950nm from burning hydrogen.) All of this happened during the vertical refresh interval, so it had to be assembly code, and the code let you move to the previous/next filter on the wheel, so you could see just the filter you wanted to see.
1b: The PhD people had created a complicated FFT to analyze all six frames of a captured set of images to determine if a hydrogen fire existed. It took something like 30 minutes to run (remember, this was 30 years ago), produced a questionable image result, and then you had to tweak the parameters and try again. I realized that the entire process was just a lookup table of intensity for each filter band, so I wrote a near-real-time translation to produce a single video frame from the six filter frames.
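To make the lookup-table idea concrete, here is a minimal sketch (not the original code; the thresholding scheme and table values are invented for illustration): quantize each pixel's six filter-band intensities into a small signature and let a table map that signature straight to an output intensity.

```csharp
static class HydrogenFireLut
{
    // Hypothetical threshold: each band contributes one bit to a 6-bit
    // signature, which indexes a 64-entry lookup table.
    const byte Threshold = 128;

    public static byte[] Combine(byte[][] bands, byte[] lut64)
    {
        int n = bands[0].Length;
        var output = new byte[n];
        for (int i = 0; i < n; i++)
        {
            int signature = 0;
            for (int b = 0; b < 6; b++)
                if (bands[b][i] >= Threshold)
                    signature |= 1 << b;     // set bit b if band b is "hot"
            output[i] = lut64[signature];    // table decides the output intensity
        }
        return output;
    }
}
```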
2: The PhDs had been working on analyzing the failure modes of switch rings in satellites (this[^] is a simple but good example). The idea being: analyze the ring the engineers dreamed up for handling failed TWTAs (Travelling Wave Tube Amplifiers), which would be switched to spare TWTAs on the satellite, and determine which failure modes couldn't be handled even if spares were available. This is not as simple as one might think, as the output of one switch can be the input of another switch as an alternate input.
The point being, the PhDs were using the tools in their PhD toolbox: complicated network-analysis algorithms that they couldn't figure out, and even when they did, nobody could figure out how to turn them into code. I ended up looking at the problem from the opposite direction - given an output combination, what were the valid inputs? That produced a list of inputs that couldn't be handled, based on the simple rule that a T-switch or C-switch (another pic here[^]) can only have a maximum of two inputs and two outputs. This greatly simplified the complexity of the analysis, because the rule was simple: if there are 3 or 4 inputs going to a switch, this is a failure case in the ring topology.
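For illustration, a rough sketch of that rule check (the types and names are hypothetical; the one rule taken from the story is that a T- or C-switch can route at most two inputs and two outputs):

```csharp
using System.Collections.Generic;
using System.Linq;

// One switch in the ring, with the input/output connections that a given
// failure scenario would route through it.
record RingSwitch(string Name, List<string> Inputs, List<string> Outputs);

static class RingAnalyzer
{
    // Any switch asked to carry more than two inputs or two outputs is a
    // failure case the ring topology cannot handle.
    public static IEnumerable<RingSwitch> FindFailureCases(IEnumerable<RingSwitch> ring) =>
        ring.Where(s => s.Inputs.Count > 2 || s.Outputs.Count > 2);
}
```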
The result was that the code could analyze fairly complex rings in less than an hour. Once multicore processors came out, I refactored the code to parallelize the analysis across the number of cores, so it could handle more complex topologies. To my knowledge, the satellite manufacturer is still using my code to this day - originally written almost 30 years ago in C++, then rewritten in C#, without performance degradation, mind you.
Sorry for the long post!
|
|
|
|
|
Fascinating stories - thanks for sharing them!
originally written almost 30 years ago in C++, then rewritten in C#, without performance degradation, mind you.
I'd be interested in hearing an expansion on this. C++ folks tend to be rather religious in their belief that there's just no way C# could be as performant as C++ for anything other than contrived scenarios, and a real-world example with an explanation would be an interesting tidbit to add to that ageless argument. (Possibly in the Lounge, or an article, if not appropriate here.)
I say this with love as I used to be one of those C++ folks.
|
|
|
|
|
Gjeltema wrote: C++ folks tend to be rather religious in their belief that there's just no way C# could be as performant as C++
The main issue was my use of the STL and all the memory allocations/deallocations that were sucking up a ton of time. I improved on that in the C# code (one could argue that if I did the same thing in C++, it would still be faster than the C# code).
However, C#'s IL, JIT-compiled to native CPU code, is really, in my experience, just as fast as C++ unless you spend a lot of time optimizing the C/C++ code.
Conversely, in the C# code, I discovered this issue[^] with memory allocations in threads and wrote that article about it.
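For anyone curious what that kind of fix looks like, here is a minimal C# sketch of the general allocation-avoidance pattern being described (not the author's actual code): rent reusable buffers from a pool instead of allocating a fresh array on every pass through a hot loop.

```csharp
using System.Buffers;

static class PooledWork
{
    public static long SumChunks(int iterations, int chunkSize)
    {
        var pool = ArrayPool<int>.Shared;
        long total = 0;
        for (int i = 0; i < iterations; i++)
        {
            int[] buffer = pool.Rent(chunkSize); // reuse, don't new[] each pass
            try
            {
                for (int j = 0; j < chunkSize; j++) buffer[j] = j;
                for (int j = 0; j < chunkSize; j++) total += buffer[j];
            }
            finally
            {
                pool.Return(buffer);             // hand it back for the next pass
            }
        }
        return total;
    }
}
```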
|
|
|
|
|
The situation: a Xerox Data Systems (XDS) Sigma 5 was being used to collect and process telemetry data transmitted by a satellite.
This was in 1974 and the network was probably designed in the 1960s and was state-of-the-art equipment for its time. The highest speed line was 220kbps and the communications controller occupied two full cabinets.
The satellite in question would transmit three tape reels' worth of data every day. If the data came through low-speed lines (because that was all certain ground stations could support), the computer could process about two tapes' worth of data, saving the last reel as unprocessed data. If the high-speed line was in use, then it was all the computer could do just to write the data to tape reels. Over one year, about 300 unprocessed reels of data were sitting in the tape library.
The contract called for the system to support 3 satellites in orbit simultaneously. The computer was choking on the data from just one satellite. A second satellite was to go up in about six months, with a third one scheduled for a year later.
At the time, I was working in an obscure field known as computer performance evaluation. This called for probes to be connected to certain pins available on the motherboard. For the IBM 360 series, these pins were known. Data obtained from these probes would tell you which parts of the CPU were being used frequently, and there was software written to analyze this data.
Unfortunately, the pin output information was not available for other computers. In fact, it was not in the interest of the computer vendor to optimize performance as they could sell faster and bigger processors to the customer. The hardware monitoring equipment was in fact sold by two vendors independent of IBM.
Thus, I had to figure out how to simulate a hardware monitor in software.
The program, conceptually, was trivially simple. Every 100 milliseconds or so, my program would interrupt the computer, look at what instruction was being executed and in which part of the memory that instruction resided: was it in the operating system or in the application program?
It turned out that the vast majority of time was being spent in the area reserved for program overlaying. Ah, yes, this particular OS didn’t have virtual memory (hardly any OS on any computer had virtual memory at that time, and certainly not the XDS Sigma 5, which would hardly qualify as a minicomputer), so we programmatically swapped overlays of the application program into main memory as needed.
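The sampling technique itself is simple enough to sketch. Here is an illustrative C# version of the idea (not the original Sigma 5 code; the region boundaries are invented placeholders): bucket each sampled program counter by the memory region it falls in and build a histogram.

```csharp
using System.Collections.Generic;

static class SamplingMonitor
{
    // Hypothetical memory map: [0, osEnd) is the OS, [osEnd, overlayEnd)
    // is the overlay area, everything above is the resident application.
    // A real implementation would read these boundaries from the load map.
    public static Dictionary<string, int> Profile(IEnumerable<uint> sampledPcs,
                                                  uint osEnd, uint overlayEnd)
    {
        var histogram = new Dictionary<string, int>
            { ["os"] = 0, ["overlay"] = 0, ["application"] = 0 };
        foreach (uint pc in sampledPcs)          // one entry per timer interrupt
        {
            string region = pc < osEnd ? "os"
                          : pc < overlayEnd ? "overlay"
                          : "application";
            histogram[region]++;
        }
        return histogram;
    }
}
```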
The measurements suggested that we needed to add more main memory to the computer so that the overlay layout could be more optimal. The request for 16 KB of additional memory (for a 64 KB computer) was approved, at a cost of $10,000.
After the additional memory was installed, I changed the overlay pattern to minimize page swapping.
The results were spectacular.
In the next 30 days, not only did we process the daily load of 3 tapes, but we also processed the backlog of 300 tapes - a total of 13 tapes a day. With 3 satellites in orbit, we would only need to process 9 tapes a day!
I was mightily pleased that I could do this so early in my career and the very first time I ventured into OS territory.
Later, I would go on to use hardware monitors on IBM computers and measure performance. It was awesome to see a selector channel on an IBM 360 run full-bore at 100% utilization; one knew at once that the IBM laser line printer running at 20,000 lines per minute was going full blast printing utility bills for the local electric company, or that the check sorter for the bank was reading and sorting checks. Not even disk drives could achieve that level of utilization on an I/O channel; in fact, exceeding 35% on a channel used by disks was an indication that it was the choke point for the computer.
|
|
|
|
|
|
Singapore in 1980 had a hundred-plus banks trading in foreign currencies. Some banks didn’t have regular retail or commercial banking operations but had a trading floor for trading US dollars against British pounds, Japanese yen, Italian lira, Deutsche Marks, French francs, etc.
The profits were really minuscule as exchange rates normally varied within a very short band. One could trade a million dollars against the British pound and show just a few thousand dollars in profit when lucky.
This particular bank had a young man from London as their trader. About 9 months into the job, he started drinking heavily during lunch and showing other erratic behavior. The alarmed manager called in the audit firm where I was employed to look into the books. The auditors discovered that the trader had been booking false profits by entering incorrect exchange rates (favorable to his trades) into the computer system. The profits he had booked were about $6 million. The trader was fired, and a very experienced and much older trader was brought in to fix the mess.
The new trader closed out all the trades and said that this should stop the losses. The next day, the books showed a new loss of $450,000. The trader said the computer software was screwed up and there was no way there could be additional losses.
The auditors refused to certify the books unless and until the trades were re-run on the computer for every single day, with the correct exchange rate for that day for each currency. Presumably one could get these from the daily newspapers, but where does one go for six-month-old newspapers? Each day’s run would take five hours or so, and six months of daily processing would be in excess of 1,000 hours.
This was around December 1, and the books had to be closed on December 31, with the audited results submitted to the relevant authorities shortly thereafter. There were hardly 700 hours left in December even running the system 24 hours a day, so this was an impossible task.
I came to know of the situation when an auditor ran up to me in the office and asked if I had heard about the major disaster and related the story to me.
I went to the audit partner and offered to look into the computer system to determine what could be done. She said the decision was to re-run all the processing with the correct exchange rates, and that I could do nothing to alter the situation. I told her politely that perhaps the client should make the decision about involving an IT consultant. She called the bank manager who, happy to grasp at straws, accepted my offer.
The new trader met me and told me I was wasting my time, as he had done what one was supposed to do in these situations to limit the loss, and yet the system continued to report losses of hundreds of thousands of dollars every day. As far as he was concerned, the software was erroneous.
I got the pile of documentation on the software and started reading through it. I also got the reports generated by the computer and was looking to see if the reports could be correct.
After two days of this, I came to the conclusion that the system was in fact correct.
When you book a trade, let us say with Citibank, to trade US$5 million against Japanese yen at an exchange rate of 98.34 yen to the dollar, that contract may be due in a week.
You could book a false exchange rate of 101 yen to the dollar in your computer and show more yen, and thus a bigger profit.
But when the time came to settle a week later, Citibank was not going to pay you 101 yen to the dollar! They would pay only 98.34 yen! On the day the settlement was due, the trader would have to use the mutually agreed exchange rate.
This meant that the system was in fact self-correcting. The previous trader was merely postponing the inevitable, not averting it.
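A toy calculation shows why (the numbers come from the example above; the P&L convention is simplified):

```csharp
using System;

// Booked profit at the false rate versus what the counterparty actually
// pays at the contractually agreed rate.
const double Notional   = 5_000_000;  // USD traded against yen
const double AgreedRate = 98.34;      // yen per dollar, fixed in the contract
const double BookedRate = 101.0;      // the false rate entered in the system

double bookedYen  = Notional * BookedRate;         // what the books claimed
double settledYen = Notional * AgreedRate;         // what Citibank actually pays
double phantomYen = bookedYen - settledYen;        // ~13.3 million yen that never existed
Console.WriteLine($"Phantom profit: {phantomYen:N0} yen");
```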
The elderly trader refused to accept my reasoning. He said there was no way for the bank to sustain more losses, since he had closed out all the trades.
I went back to the daily numbers and looked at the reports.
The losses were accumulating for a simple reason: the total amount of money traded had been closed out, but not in each pair of currencies! This meant that if you had bought yen against US$10 million and sold French francs to get back US$10 million, those two currencies could move against you and you could sustain losses. To truly close out all trades, there should be no outstanding balance in any currency!
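In code terms, the correct closeout check is per currency, not on the total. A minimal sketch (hypothetical types; not the bank's software):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One leg of a trade: a signed amount in a single currency.
record Leg(string Currency, double Amount);

static class CloseoutCheck
{
    // The book is truly flat only if every currency nets to (roughly) zero,
    // not just the overall dollar total.
    public static IEnumerable<string> OpenExposures(IEnumerable<Leg> legs) =>
        legs.GroupBy(l => l.Currency)
            .Where(g => Math.Abs(g.Sum(l => l.Amount)) > 1e-6)
            .Select(g => g.Key);
}
```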
The trader was shocked when I told him this. I had discovered something that he, in all his years of trading experience, had not!
He went back to the trading floor and closed out trading in each pair of currencies.
For the next few days, I showed him that there were no further losses and that the total loss had stabilized.
I continued to monitor the books for a few more days.
Then I saw a discrepancy of about Sing$20.
I racked my brain over this problem for an hour, then reluctantly left the building at 6 pm to tackle it the next day.
As I stepped on the sidewalk, with the locked building doors closing behind me, I knew the answer.
The USD-Sing$ exchange rate had held steady for three days but the rate change that day resulted in the small difference I had observed.
I was elated when my calculations the next morning confirmed my hunch.
Around December 25, I wrote up my findings and told the audit partner that we had no reason to refuse to certify the books.
PS. A decade or more later, Barings Bank in Singapore suffered more than a billion dollars in trading losses in a manner identical to what this bank had experienced.
I just couldn’t believe that bankers hadn’t understood separation of responsibilities: traders should not be allowed to input wildly inflated exchange rates; a second person should be responsible for verifying the reasonableness of the exchange rate in every trade.
|
|
|
|
|
Vivi Chellappa wrote: I just couldn’t believe that bankers hadn’t understood... Bankers and politicians are reeeeeaaaaaalllllyyyyyy slow learners when it comes to what responsibility means.
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
I’ve seen this happen to newer developers starting their first job. The human ego gets in the way of learning. You’re afraid to ask questions. "There are no stupid questions, only stupid people"
Sorry, feeling mean this morning.
|
|
|
|
|
There's also a skill to asking the right stupid question.
"Exactly why do I need this Javascript framework???"
Fear has a lot to do with it too. To ask the question above, one has to overcome fear both of asking and of the answer.
|
|
|
|
|
Marc Clifton wrote: "Exactly why do I need this Javascript framework???"
That applies to a lot of tech, in the sense that when you're learning something new it's rare that you initially see the motivation/rationale for the new thing. If you're lucky, you might eventually come across some content that does explain it, and then you either appreciate (or otherwise) why the tech was created.
Kevin
|
|
|
|
|
Kent Sharkey wrote: "There are no stupid questions, only stupid people" I always said a variation of this:
There are no stupid questions, but there are some answers that...
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
The best way to learn a new programming language, just like a human language, is from example. To learn how to write code you first need to read someone else’s code. Sad news for those wanting to learn Assembly
|
|
|
|
|
I've always disagreed with the notion that assembly is a language. Assembly is just machine code represented by mnemonics. A language would be any abstraction of machine code, whereas assembly is not an abstraction; it's the actual thing.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
/shrug
Everything else I could think of has a standard library, so the article makes more sense than my attempted joke.
TTFN - Kent
|
|
|
|
|
Oh, in no way did I mean to disparage your joke. I just wanted to put in my 2 cents.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
I disagree; assembly languages are simply the first type of language developed, with a closer relationship to the underlying machine code - mostly one-to-one, but facilities such as macros and names for addresses introduced the first higher-level abstractions.
And of course, macro assemblers introduced the first tool for more general abstraction.
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
|
|
|
|
|