Let me try a (speculative) answer along different lines:
1. If it's a QuickFormat, then virus data can still exist in sectors that are merely marked as free rather than wiped. Of course, such a virus is not active; I'm just pointing out that a virus could use this to stash its payload or stolen data for later use, if it were able to reactivate itself somehow.
2. Another vector would be a false format. If you format a disk (not the OS boot disk) from a computer that has malware, it could run a fake format that leaves the disk apparently blank, but in reality booby-trapped so the virus can reactivate itself. It would be quite tricky to pull this off, survive an OS reinstallation, etc.
This has been discussed as to WHY you SHOULD ONLY USE a charge-only USB cable!
The USB controller can be flashed from an infected public charging stand. And it is basically impossible to detect, because the virus LIES about being installed (imagine my shock!), and it adds itself back on all future updates.
I remember back in my early career (DOS, pre-Windows 3.1) when we had to do manufacturer-specific low-level formats to remove certain infections. Getting the utilities from the manufacturers was like pulling teeth. I spent 3 days straight at one client's office rebuilding ALL of their computers, then another full day scanning all of their floppies. Virus coders have become even craftier since then. Nowadays, you can infect so many different parts of a computer to survive formatting. Just about every component has its own flashable memory that can be infected.
Money makes the world go round ... but documentation moves the money.
If the firmware of your ethernet network adapter or WLAN adapter gets infected, your machine is lost.
An attacker can send you secret data packets over the network and gain direct access to your RAM.
Your machine could also be disconnected temporarily or permanently from the internet (an 'internet kill switch').
Have a look at this article - the Intel Management Engine (IME) is basically a very small computer running inside your PC that has pretty much unrestricted access to every part of your PC and is completely unmonitored by your human-facing operating system. The IME is so low-level that it's said to operate at 'Ring -3', i.e. it has more privileged access than your main operating system in kernel mode. And it has its own space for firmware, which could hold malware that would survive a disk being reformatted (or even taking out the old disk and putting in a new one).
And of course, vulnerabilities exist inside the IME - it's running software, so it's pretty much guaranteed it has bugs, and bugs lead to vulnerabilities - and those have been demonstrated several times... The 'Ring -3 rootkit' is particularly scary - something that can monitor everything your PC does, lives outside of your ability to see it, and is very difficult to remove...
Java, Basic, who cares - it's all a bunch of tree-hugging hippy cr*p
We were running ProDOS on our machines. The boss came in one day with a 5 MB HDD and we were awed. He would take my coworkers' code and mine into his office, load it onto the HDD, and compile and link it. The HDD sounded like a jet plane taking off when it spun up.
No, never even heard of Beagle Bros. We were a tiny shop writing giveaway software that went with new modem boards.
I'm not sure how many cookies it takes to be happy, but so far it's not 27.
Assembly languages vary. I worked with one where you, e.g., loaded float register 4 from memory location xyzzy with "F4 := xyzzy" and then multiplied it by 4.5 with an "F4 * 4.5" instruction. A conditional jump was written like "IF = GO Label" (if the flag bits were not set in other ways, you would have to precede the IF with a "COMP x, y"). Call this syntactic sugar - it definitely is! - but it makes the code a lot easier to read than the traditional assembler acronym letter soup.
This was definitely a CISC machine: e.g. it had a loop instruction "W LOOPI i, imax, TopLoop" for incrementing i, comparing it to imax, and, until imax was reached, jumping to TopLoop - a for-loop control in a single instruction. There were call instructions transferring a list of arguments onto the stack, moving the stack pointer and checking for stack overflow. There were heap allocate/free instructions (and the call instruction might allocate stack frames from the heap, for coroutine use). There was a comprehensive set of string instructions, e.g. for translating a string to another (single-byte) encoding / case / ... - strings were addressed through descriptors giving the location and length. Math functions such as square root, X**Y, log and trig functions were single instructions.
The distance in abstraction level between K&R C and this assembler/instruction set was so moderate that at the time, I didn't really see any advantage of C other than that it could also be compiled for more primitive instruction sets. In fact, we used to refer to C as "machine-independent assembler". I preferred the machine-dependent one... (But for the most part, we were programming in higher-level languages than C.)
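The LOOPI description above folds increment, compare, and conditional branch into one instruction. Here's a hedged sketch in Python of those assumed semantics (the exact comparison, strict vs. inclusive, is my guess, not something stated in the post):

```python
# Sketch of the work a single "W LOOPI i, imax, TopLoop" instruction is
# described as doing: increment the counter, compare it to the limit, and
# branch back to the top of the loop until the limit is reached.
def loopi(i, imax):
    """One execution of the assumed LOOPI semantics."""
    i += 1                 # increment the loop counter
    branch = i < imax      # compare against imax (strictness assumed)
    return i, branch       # branch back to TopLoop while True

# Rough equivalent of: for (i = 0; i < 10; i++) body();
i, body_runs = 0, 0
while True:
    body_runs += 1         # TopLoop: the loop body
    i, again = loopi(i, 10)
    if not again:
        break
```

With these assumed semantics, the body runs 10 times and the counter ends at 10, matching the usual for-loop shape the instruction replaces.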
I started assembly language programming in the late '70s, but in the last 20 years of my career as an Embedded Systems Engineer (now retired), I saw no need to use assembly language in any of the products I worked on. C/C++ compiler tech for anything from an 8051 to a TI DSP generates excellent code that's relatively hard to improve upon.
Years ago I wrote a front-end program in dBASE III that generated AutoLisp scripts to manipulate AutoCAD drawings. The engineers would provide a few dozen input parameters, the AutoLisp script was built, then AutoCAD was fired up to produce the drawing for them.
I remember spending lots of time counting parentheses and certainly agree with another comment - LISP = Lost In Stupid Parenthesis.
The end result worked really well, and I learned how to count really well.
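The pattern described above, a front end emitting AutoLisp text from a handful of parameters, can be sketched like this (in Python rather than dBASE III; the parameter names and the AutoLisp commands are illustrative, not from the original program):

```python
# Illustrative sketch of the generator pattern: turn a few input parameters
# into an AutoLisp script that AutoCAD could then run to produce a drawing.
# The commands and parameter names here are hypothetical examples.
def build_autolisp(width, height, layer):
    """Return an AutoLisp script (as text) that draws a simple rectangle."""
    return "\n".join([
        '(command "LAYER" "M" "{}" "")'.format(layer),             # set layer
        '(command "RECTANG" "0,0" "{},{}")'.format(width, height), # draw it
    ])

script = build_autolisp(100, 50, "OUTLINE")
```

The generator is just string assembly; the real program would have taken a few dozen such parameters and emitted a much longer script for AutoCAD to execute.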
No reflection on you, Mike, but the article's a load of crap. I'm of the opinion that if you "dread" working in a programming language, then that's a sign that you don't understand it. That's a problem you can, and should, correct before you try working in it.
Yeah, there are languages and environments you like working in more than others, but sometimes you don't have a choice. In that case stop whining, suck it up, and dig in.