The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
No, I wouldn't say so. It is like saying that C# coding and assembler coding on the 8086 work along the same principles.
The event driven model pushed by early Windows was never embraced by application developers: Some tiny little event triggers an (ideally) atomic state transition. Win16 didn't have preemptive scheduling - there was no need for it. Or wouldn't have been, if developers had adopted the event driven philosophy.
Software developers - with the exception of those who come from the digital telephone exchange world - are trained in sequential top-to-bottom programming. Even managing a handful of (more or less persistent) threads is "advanced material" in the training, and protection of data structures and synchronization are, for the most part, poorly mastered. But that is the only way programmers can handle e.g. peripherals: by setting up a thread that, like a 1970-style Fortran program, runs from top to bottom (although with some loops and conditional statements).
Event oriented programming is reduced to exceptional cases, where you set up a callback and attach it to some OnSomethingHappened case. The main body of the application code does not reflect the fundamental event driven paradigm of Win16, where you might say that everything, the entire application logic, was written as a large number of OnSomethingHappened handlers, all of them tiny and near-atomic.
With Windows95 came a collection of "helper" functions for supporting event handling in a more sequential-code-looking way. I saw it as (and believe it was intended as) an outstretched hand to old sequential programmers to ease the transition to "real" event driven programming. Instead, it started the snowball rolling down the hill, back to Fortran-style coding. Even in the 1970s, interrupt handlers were required to handle external events (and on most machines you could trigger an interrupt from software as well) - they were called interrupt handlers, not event handlers, but the difference between the two is minimal.
After Windows9x, I haven't seen any application code following the event driven paradigm; we are back to the sequential way of doing things. The core of Win10 is still event driven, as OSes always were with peripherals and timing, maybe more so than some other OSes thanks to its historical background. But no application is programmed by the event driven paradigm.
You can't (in any reasonable way) make a GUI application without handlers for input events, like you can't make an OS without interrupt handlers for physical units.
But even if you register callbacks for user input events, that doesn't mean that your basic programming paradigm is event oriented coding, as in FSM: One event leads to one atomic transition.
Certainly, there may be software designers who base the entire code design on an FSM-like model, but those are few and far between today (with the exception of communication protocols).
I recall from the Win 3.x/9x days an article about how a compiler was rewritten to fit the event model: e.g. the scanner delivering a new token from the input stream was modeled as an event (read: Windows message). Adding a node to the DAG was an event, etc. In software built by such a model, the mainloop (a concept the modern programmer never sees) is essentially an array indexed by current state and event code, each table element indicating the next state and usually some atomic action. Lots of the program logic is reflected in this state table, not in a sequence of if-else or while-loop statements. Flow control statements appear only inside the atomic state transition actions.
In those days, I made (for educational purposes only, to teach students event driven programming) an implementation of X.215, the OSI Session Layer protocol, which is "rather messy" if you have to code it in sequential code. Implemented as an FSM model, it is almost trivial. Of course: creating that table is a huge task (but for X.215, ITU had done that task for us).
Event-driven development focuses on the state table. Obviously, you can implement a state table solution in any language that can handle function pointers (a.k.a. delegates, in C# lore). But you don't see it done that way very often.
I'm not saying that some reference documentation needs to be in book format; only pointing out that authors spend a lot of time gathering knowledge and putting it in an accessible format. Not just as paper books (which I prefer, because they swat bugs better than a tablet), but also as ebooks.
Very true. And I do agree to an extent that paper books (and some ebooks) are scrutinized before publishing depending on the publisher. However, as you alluded to, the majority of the quality is placed upon the author regardless of the type of media. It may be easier to spread misinformation online but at the end of the day the onus is still upon the reader to determine validity. That's why I generally hold no bias towards different media types.
I have no idea how these things are handled elsewhere though. In the US you can publish pretty much whatever you want as long as there's profit to be made. Truth and accuracy be damned. Looking at you, revisionist history books.
Eddy Vluggen wrote:
The syntax, yes
Those links are more than syntax. They list built-in objects and available functionality, cover this and prototype-based inheritance, show available WebAPIs and their compatibilities, explain how HTML/CSS/JS interact, discuss the DOM, and more.
Eddy Vluggen wrote:
Technology is changing? Where?
Win10 is largely still working according to the same principles as Win95.
Oh, Windows. I should have been more specific; technology was too broad a term.
All hail logic gates! The one, true computing technology, haha.
I totally agree. I read books when I want to understand a new subject. When it comes to coding, books are useless in my opinion. Most books only skim the basics and present only one view. I want to know multiple ways of accomplishing the same job and pick the best for the situation. I would say 90% of the time I find the answer on StackOverflow.com. MSDN is great on documentation, but not so great on how to get a specific job done.
Because in my current job, starting in 2013, I've been gradually downgraded from a cube with 5'6" walls, drawers, and lockable shelves, to a triangular desk area with 8" dividers, to a rolling table 5'6" by 3'. My books are currently stacked almost 3' high on one corner of the table to create some semblance of a visual blind. Big over-the-ear headphones help with the auditory distractions.
It's "collaborative" space.
I bet the idiots who designed this format don't have to use it.
I managed to get moved to the hardware lab. It took extensive testing of a very noisy electric motor (it had to be controlled by the software, and I'm the hardware communication man in the company, since apparently I'm the only one who does it reliably) which had seemingly random errors after thousands of runs.
*GROOOOOAN* *CHUNK!* *GROOOAAAN* *CLANG!* for weeks, 8 hours/day, and suddenly my request to be moved got approved. After 5 years of asking.
It had programmable firmware (via logical blocks on the programming interface) running in parallel with commands from the outside. The OS that ran the firmware had race conditions between the received commands and the input, so after thousands of tests the only solution was to scrap the firmware entirely and control the unit completely over RS232 (9600 baud, eternal), including safety stops and level photocells. [Note: safety not towards humans but towards other components of the machine; safety towards humans was managed the hard electrical way]
Also, it had an encoder... on paper. In reality it counted the clock cycles while the enable signal was "1", so if the STOP_PHOTOCELL was ignored (which happened because of the aforementioned race conditions and the fact that the photocell emitted a pulse when active and then returned to idle) and the motor hit the hard stop (a 10 kg steel block), the so-called "encoder" kept counting in the direction of the movement forever and kept pushing or pulling the load against the physical block.
Funny, I always start with a Google search, but almost 99% of the time end up on Stack Overflow. Maybe I should just go there directly. Perhaps it's because Chrome lets users search by typing the query directly into the URL bar.
I certainly grew up BG. Furthermore, friends label me as a squirrel, stuffing away nuts everywhere.
Then: I am surprised myself at how often I go down to the bookshelves in the basement to dig up some information - quite often because I have mentioned some mechanism, algorithm, architecture, or tool to a younger colleague, one that was developed and used long BG, and little information is available on the Internet.
Today, I am using Google heavily, but it certainly is best suited for reference information, stuff that you more or less know in advance but need the API details etc. You don't learn the philosophy of a paradigm by googling, not even architectural concepts. You find no wisdom in Google, only facts.
So I still buy books, to more easily "see the big picture". But, just like others are dissatisfied with online tutorials, the art of writing good textbooks is also deteriorating. I frequently wish I had an electronic, editable version of the book I am reading so I could remove all that chitchat that expands 200 pages of useful info into a 700-page monster. Remove all the references to how it was done in this or that old system (which I never used), comparing it to how it is in this system. And add a few explanations about why you would use this or that mechanism, and for what.
I am really dissatisfied with most modern textbooks: they are extremely wordy, poorly organized, and not very good at giving you "the big picture". Yet, for subjects I do not know at all but need to learn thoroughly, they are still a lot better than a Google hit count of 2.3 million, where you still miss out on a lot of good references because you are so new to the subject that you do not know the good search terms.
One specific problem: those who have recently learned a new technology well enough to write a book too often write as if the reader has lots of experience with older technologies. E.g. you buy a book to learn WPF, and the author makes hundreds of references back to Windows Forms, essentially describing the differences, not giving an independent description of WPF as it appears to a reader who never worked with Windows Forms.
Or, authors who have been deep down in the inner workings of the lower layers, assuming that every reader has a comparable background. Like explaining the semantics of C# mechanisms by referring to the CIL constructs generated by the compiler. I did not know CIL details until I "had to", to understand this author's explanations, but I have seen several similar intermediate languages, so to me it was OK. But for a reader who has never been inside a compiler, I guess this would be a barrier.
Hmm. I may be a crotchety old fart here, but I still keep my books. My active library here at work includes the following titles:
The C Programming Language by Kernighan and Ritchie
The C++ Programming Language, 3rd edition, by Stroustrup
Pro C# 2008 and the .NET 3.5 Platform by Troelsen
Pro WPF in C# 2008 by MacDonald
Encyclopedia of Graphics File Formats by Murray and VanRyper
Internetworking with TCP/IP, volumes I, II, and III, by Comer and Stevens
The Bar Code Book, 3rd edition, by Palmer
plus a collection of the O'Reilly pocket guides/references. I keep these books because some of them are out of print or current editions have omitted information I still need. In all cases I still use them. For example, I recently spent a couple of months with Internetworking with TCP/IP volume I open on my desk while I was debugging the TCP/IP 'stack' in a piece of embedded software.
Besides, the monasteries will need paper books after the Singularity and the monoAI has taken all programming information into itself.
When I was in tech support I learned from a guy who was about to retire. I used to ask him how they did the job before Google, but he'd always shut me down with an angry "Hey!" like he didn't want to talk about it.