|
ZurdoDev wrote: It IS amazing how much stuff we can do with a bunch of changes in voltage. True enough.
Voltage and polarity; that's all there is to it, really.
(And that's essentially a complete computer manual in under 25 words.)
(And it's about as useful as some of the cr@ppy manuals I've had to wade through.)
I wanna be a eunuchs developer! Pass me a bread knife!
|
You know what they say: knowledge is dangerous!
Chris Maunder wrote: Which implies some kind of state management going on.
What I found fascinating when simulating TCP/IP over satellite and dealing with rain fade (packet loss due to rain) is that the hardware (or, more precisely, the software in the hardware) handles all the protocols for requesting retransmission of packets. Something we take for granted is that packets will be received in the correct order, which is achieved by the hardware layer, packet numbering, and ACKs.
By default, TCP/IP over satellite is darn slow, as every ~1400 byte packet requires an ACK, which is a 500ms round trip (250ms from ground to satellite to ground, 250ms back for the ACK.) We were simulating buffering packets on the ground and simulating the ACK so we could get a continuous stream of packets, and using a separate "Error ACK" to resend only the garbled packets. It worked quite well in the simulations, achieving near hardwired transmittal rates.
That was the closest I got to diving into the bowels of how things talk to each other.
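The scheme is easy to sketch in a few lines of purely illustrative Python (all names invented): ACK locally so the stream never stalls, and let a selective "Error ACK" drive retransmission of only the garbled packets:

```python
# Toy model of satellite link acceleration: the ground unit ACKs every
# packet immediately (so the sender never stalls on the 500 ms round trip),
# and the far end later reports only the sequence numbers that arrived
# garbled, which are then re-sent. All names are illustrative.
import random

def transmit(packets, loss_rate=0.1, rng=random.Random(42)):
    """Send numbered packets; return the set of sequence numbers lost to rain fade."""
    return {seq for seq, _ in packets if rng.random() < loss_rate}

def send_stream(data_packets):
    packets = list(enumerate(data_packets))   # sequence-number each packet
    received = {}
    pending = packets
    rounds = 0
    while pending:
        lost = transmit(pending)              # rain fade garbles some packets
        for seq, payload in pending:
            if seq not in lost:
                received[seq] = payload
        # the "Error ACK" names only the garbled packets; resend just those
        pending = [(s, p) for s, p in pending if s in lost]
        rounds += 1
    # sequence numbers restore the original ordering on reassembly
    return [received[s] for s in sorted(received)], rounds

data, rounds = send_stream([f"chunk{i}" for i in range(20)])
print(len(data), rounds)
```

The point of the sketch is that the sender keeps streaming at full rate; only the lossy tail is ever retried.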
Latest Article - A Concise Overview of Threads
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
Now add a torrent client on top of that, add meatballs, and serve.
I wanna be a eunuchs developer! Pass me a bread knife!
|
Computer Networks 5th (International Economy Edition) Paperback – January 9, 2010 by Andrew S. Tanenbaum (Author), David J. Wetherall (Author)
Internetworking with TCP/IP Vol.1: Principles, Protocols, and Architecture - Douglas E. Comer
Internetworking with TCP/IP Vol.2: Internals and Implementation
Internetworking With TCP/IP Vol.3: Client-Server Programming And Applications Versions
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
abmv wrote: Computer Networks 5th (International Economy Edition) Paperback – January 9, 2010 by Andrew S. Tanenbaum (Author), David J. Wetherall (Author) I got my basic introduction to networking from the 1st ed of this book - the original source of the quote "The nice thing about standards is that you have so many to choose from."
The remark comes after a discussion of the tiny little details in which HDLC and SDLC differ, making them incompatible. Tanenbaum continues with something like (quoted from memory): "And if you don't like this year's crop, just wait until next year, and you will have a few more to choose from."
|
Chris Maunder wrote: would anyone be interested in taking a stab at an article that walks through the life cycle of a bit of data The trouble is that it's one of those things that are very simple to use precisely because of the almost ridiculously complicated and convoluted processes going on in the background.
And where to stop.
I wanna be a eunuchs developer! Pass me a bread knife!
|
That should be a good start - a thousand pages for TCP/IP alone ...
(Well, take it with a grain of salt - the TOC seems to suggest that there is some coverage of e.g. X.25, FR and ATM, ... in the part labeled "Core TCP/IP protocols" )
Note that the book is (C)2006, twelve years old. Now, IPv4 is 37 years old, so for the basic protocols it doesn't matter. But if you today made the statement opening the chapter on ATM, "ATM-based networks are of increasing interest for both local and wide area applications", half of your audience would dive straight into their smartphones to google what the h "ATM" is, and the rest would try to make sense of the role of banking machines in networking. I guess the IPv6 parts may have other statements about the state of deployment and completeness of the protocol suite that are not quite up to date.
Thanks for the reference to the book - but honestly, I think that for a completely blank beginner, it will serve more as a sledgehammer to knock the reader to the ground than as a simple-to-read introduction to networking!
|
I was teaching such stuff (and even wrote parts of the text for the course, though in Norwegian), in an age when we were still hoping for networks to be orderly and well designed. (That is, in the 1990s.) I see other commenters mentioning the OSI model, which was an essential part of that orderliness. Well, those were the days...
Today, networking is more or less a total mess. Lots of stuff isn't what it appears to be: maybe the name is the same, but under the hood there is a hodge-podge of different, and largely incompatible, technologies that must be explicitly bridged. Take "Ethernet" as an example: a multitude of cable types, a dozen different physical connectors, different topologies, different allocation strategies, ... But to the user, it is presented as the eternal, single Ethernet technology (just with a more convenient RJ45 plug).
ooo oo00oo ooo
Along another axis: one thing you ask explicitly about is "discontinuous communication" (often called "asynchronous" communication). First: in a protocol stack, it may vary from one layer to another. The cable may carry a continuous stream of bits at a well-defined steady rate. The user of this cable may insert some packets now, some then, at a varying rate. If these packets carry real-time data, such as streaming sound or video, the packets may be received "now and then" at irregular speed, but are buffered so that they can be forwarded to the sound/video units as a constant-speed, continuous bit stream.
Second: off the top of my head, I can list a whole series of alternatives for handling asynchronous traffic in the physical layer alone. Much depends on whether you have the line to yourself or share it with others:
POTS, i.e. analog modem. When you have nothing to say, the line is quiet. When you want to send a byte, you sound a buzzer ("start bits") to wake up the receiver, followed by your eight data bits, and a final buzz ("stop bit") to call it off. In later modem protocols, more than eight bits were sent at a time, to reduce the percentage-wise red tape in start and stop bits. Bits were encoded as a beeping tone that in the oldest versions switched between two frequencies ("FSK"), one for binary 0, the other for binary 1. Later, fairly complex schemes were introduced: Each wave peak could vary both in its amplitude and delay (i.e. phase), peak hitting (or being close to) one of, say, 32 defined points, conveying 5 bits per full wave (i.e. per Hz analog channel bandwidth).
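The start/stop framing is simple enough to sketch (illustrative Python, data bits LSB-first as on a classic UART; the line idles at 1):

```python
# Sketch of asynchronous start/stop framing as on a POTS modem line: the
# line idles at 1; a start bit (0) wakes the receiver, eight data bits
# follow LSB-first, and a stop bit (1) closes the frame off again.
def frame_byte(byte):
    data = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data + [1]                      # start bit, data, stop bit

def deframe(bits):
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    return sum(b << i for i, b in enumerate(bits[1:9]))

print(frame_byte(0x41))   # ten line bits for one byte: 20% "red tape"
```

Ten bits on the wire per eight bits of data is exactly the percentage-wise overhead the later modem protocols tried to shave down.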
HDLC is somewhat similar (binary, on/off, no analog), but the line is always "busy", sending bits at a specific speed. With nothing to send, a stream of "flags", binary 01111110, is transmitted. Once this pattern is broken, data follows: An arbitrary length bit stream up until the 01111110 pattern reappears. (If the data contains 01111110, it is broken up by inserting a 0 after the fifth 1; the receiver will remove it before forwarding the data.)
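The bit-stuffing rule is a nice little exercise; a minimal sketch in Python (illustrative only, flag generation omitted):

```python
# Minimal sketch of HDLC bit stuffing: after five consecutive 1 bits in the
# payload, the sender inserts a 0, so the flag pattern 01111110 can never
# appear inside the data; the receiver strips those inserted zeros out.
def stuff(bits):
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            out.append(0)   # break up any potential run of six 1s
            ones = 0
    return out

def unstuff(bits):
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            i += 1          # skip the 0 the sender inserted
            ones = 0
        i += 1
    return out

payload = [0, 1, 1, 1, 1, 1, 1, 0]   # would look like a flag if sent raw
print(stuff(payload))
```

After stuffing, the only place six 1s in a row can occur on the line is in a real flag.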
Multiple users complicate matters. A classical Ethernet bus cable (or any other CSMA/CD medium) is quiet until one station wants to send something. The sender listens to the cable until it is quiet, then starts transmitting - but keeps listening at the same time: if some other station also started transmitting, they both hear a garbled signal, different from what they tried to send. Both stop transmitting and wait a while before making a new attempt.
Newer descendants of CSMA/CD, such as LAP-D for the ISDN D channel, differ in one respect: all senders are synchronized, so that bits overlap exactly. The electronics makes a 1 trump a 0, so a station transmitting a 1 won't notice any collision. One transmitting a 0, and hearing a 1, must cease transmitting before the next bit and try again later. The one with the 1 bit succeeds and need not retransmit. So there is less waste of capacity.
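The dominant-bit trick is easy to simulate. A toy sketch in Python (station names and frame bits invented; the same idea drives arbitration on the CAN bus):

```python
# Sketch of dominant-bit collision resolution: all synchronized senders put
# their bits on the wire at once; the medium behaves like a logical OR, so
# a 1 "wins". A station that sends 0 but hears 1 drops out before the next
# bit; the surviving station never even notices there was a collision.
def arbitrate(frames):
    """frames: dict station -> list of bits (equal lengths).
    Returns the station whose frame survives arbitration."""
    contenders = dict(frames)
    for pos in range(len(next(iter(frames.values())))):
        wire = max(f[pos] for f in contenders.values())   # wired-OR of all senders
        contenders = {s: f for s, f in contenders.items() if f[pos] == wire}
        if len(contenders) == 1:
            break
    return next(iter(contenders))

winner = arbitrate({"A": [1, 0, 1, 1], "B": [1, 1, 0, 0], "C": [0, 1, 1, 1]})
print(winner)   # "B": C loses at bit 0, A loses at bit 1
```

Note that no bit time is wasted: the winning frame goes out in full on the first attempt.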
Other schemes do not rely on senders listening and retracting in case of collision. E.g. in Bluetooth Smart (old name: BT Low Energy), a central master allocates time slots to each slave: If you have more to say, come back in 84 ms! Usually, a slave gets a time slot at regular intervals, every x milliseconds, depending on need. (The "low energy" is much due to this: The slave can turn off its radio up until its next time slot, to save power.) Several other protocols have masters doing central allocation of capacity, in various ways.
Then there are reservation schemes: DQDB (which really deserved more success than it got - today it is dead) operated two bus lines, one in each direction, carrying a continuous stream of fairly small "cells", with a header flag indicating "busy" or "free". If you have something to send, you wait for a "free" cell, set the flag to "busy" and fill in your data bits as the cell flies by. But this requires that you have requested the cell by setting a request flag in one of the cells running in the opposite direction. This ensures that all stations have an equal right to reserve capacity.
Various reservation schemes are used by other protocols, often using a circulating "token" that a sender must hold to be allowed to transmit.
Now, which other schemes for allocating capacity to varying-bitrate users deserve to be mentioned? I'd like to mention one ATM facility, even if I don't know if anyone ever made serious use of it:
Many sound/video coding schemes have a "layered" design: you can drop parts of the data, reducing quality somewhat. GSM is a prime example: certain elements, such as phoneme codes, have very good error correction and will "always" get through. The "sound energy", i.e. volume, goes without wasting capacity on error-correction bits. So if that protocol element is unrecoverably garbled, the volume is simply kept steady. At low S/N, speech may sound flat, but the phonemes get through, giving comprehensible speech.
An ATM switch may have, say, eight incoming lines all wanting to send cells to the same outgoing line, exceeding its capacity. The switch inspects incoming cells for header flags indicating "discardable" and drops those cells, while letting non-discardable ones through. Presumably, discardable cells carry information similar to "sound energy"; it could be improved image detail, high-frequency sound content, etc.
You request ATM connection (yes, ATM recognizes connections at the bottom layer, just like analog POTS!) of a guaranteed bit rate - say, 64 kbps for a phone. Every switch along the route must agree to this guarantee. If you don't use all your 64 kbps, the switches may take what you don't use for discardable cells. As soon as you open your mouth, those discardable ones must yield. You can send non-discardable cells only up to 64 kbps (or whatever you reserved), but you may send lots of discardable ones to improve sound quality when traffic is low and network capacity is available.
This provides an "overbooking" facility where no one is rejected (until the network is saturated with non-discardable cells), yet the network runs without "empty seats" when the load is lower: all available capacity can be used to provide improved quality.
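A toy Python sketch of the dropping policy at one output port (names, flag handling and numbers are all invented for illustration; real ATM marks discardable cells with the CLP header bit):

```python
# Toy ATM-style output port: cells carry a "discardable" flag. When more
# cells arrive in a slot than the line can carry, discardable cells are
# dropped first; guaranteed (non-discardable) traffic always goes through
# as long as it alone fits within the reserved capacity.
def schedule(cells, capacity):
    """cells: list of (payload, discardable). Keep at most `capacity`
    cells, preferring non-discardable ones, preserving arrival order."""
    guaranteed = sum(1 for _, d in cells if not d)
    spare = capacity - guaranteed          # slots left over for extras
    out = []
    for payload, discardable in cells:
        if not discardable:
            out.append(payload)            # reserved traffic always passes
        elif spare > 0:
            out.append(payload)            # extra quality rides for free
            spare -= 1
    return out

cells = [("v1", False), ("hifi1", True), ("v2", False), ("hifi2", True)]
print(schedule(cells, 3))   # one discardable cell is sacrificed
```

When the guaranteed traffic is light, the same code lets every discardable cell through, which is exactly the "no empty seats" effect described above.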
... Maybe that is enough to illustrate the number of alternatives and options. Yet lots of people would yell: why didn't you include this method? And that method? ... which strengthens my point! All that I have written here covers one tiny little speck of what you want to learn, and I have only mentioned a few alternatives, not provided a tutorial. Covering all the issues, with thorough explanations, would fill volumes.
ooo oo00oo ooo
The bad thing about the mess we have today is that you cannot pick one solution to a problem (say, how to multiplex several users onto the same medium) and combine it with a freely chosen solution to another problem (say, how to identify the other end): if you want TCP services, you are more or less bound to use IP addresses. If you use SMTP email, you are more or less forced to use TCP connections. In some cases, you can in principle make unusual combinations: say, if you replaced TCP/IP with TCP/ATM, mapping a TCP connection directly onto an ATM connection, that TCP would be relieved of a whole lot of work; ATM would do it far more efficiently. But everyone you want to talk to would say: TCP/ATM? That is TCP/IP, mate! We know nothing of this "ATM". ... Other protocol suites are largely the same. They expect one given technology (or maybe one of two or three selected ones, such as IPv4 and IPv6) at the lower layers.
OSI was an attempt to define common interfaces to different groups of network functionality, i.e. layers. You could have different ways of, say, multiplexing several connections over one channel, but the implementation would be local to that layer, with all of the alternative implementations available to any higher layer. All the multiplexing implementations would use the common interface to the lower layer, so that every kind of physical transport could carry multiplexed traffic. Similarly for all other kinds of network functions: OSI defined abstract functionalities, allowing different implementations. At each layer, isolated from the others, the communicating "layer entities" would open a communication with something like: "I can do multiplexing according to standards X, Y and Z - do you know any of those?" "I know X and Z." "Fine, we'll use X, then." - but no one outside that layer would care (or even know).
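That per-layer handshake is trivial to sketch (illustrative Python; standard names are placeholders):

```python
# Sketch of the per-layer negotiation described above: each peer entity
# advertises the standards it implements for its layer's function, and the
# two agree on one - invisibly to the layers above and below.
def negotiate(mine, theirs):
    """Return the first of my preferred standards that the peer also knows."""
    for standard in mine:
        if standard in theirs:
            return standard
    raise RuntimeError("no common standard for this layer")

# "I can do multiplexing according to X, Y and Z" / "I know X and Z"
print(negotiate(["X", "Y", "Z"], ["X", "Z"]))   # agrees on "X"
```

The whole point is that only the two layer entities run this exchange; everything above them just sees "multiplexing works".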
That didn't occur. The network wars of the 1980s-90s were won by the mess. Too bad.
Now, I certainly will not argue in favor of every specific OSI protocol, and especially not of the way the functions were grouped into the seven layers. If OSI had succeeded, I think we would today have an "OSI II" with a similar layering, but a quite different allocation of functions to layers (and not the TCP/IP way!). In the years of war, there was no room for such discussions. And after the mess won the war, it was essential to the victors to make it as explicit as possible that any concept drawn from the defeated enemy would be bluntly rejected. And so it was. Nothing was learned from the OSI discipline.
|
Without reading the entire thread, try this: Internetworking with TCP/IP[^] by Douglas Comer, et al. I have the 2nd edition. Volume 1 is a great introduction to low-level TCP/IP and other base network protocols.
Sadly, I have had to learn this stuff in the course of debugging the TCP/IP 'stack' in some embedded software of ours.
Software Zen: delete this;
|
Try reading TCP/IP Illustrated (Vol 1 and 2), by Stevens. That will cover most of the low level stuff.
|
Buy a pair of $2 microcontrollers off eBay. Buy a pair of radio ICs (doesn't really matter much, as long as it's not BLE or WiFi).
Configure radio IC. Send a packet. Be amazed.
Spend next 6 months working on a low level network for micros.
Worked for me
Sincerely, an EE who skipped all tele-communications classes because they sucked.
|
2 men are about to be hanged, 1 man to the other: "First time?"
|
So it's a line from a show, not a quote.
|
As soon as he clicked "Post Message", it became a quote.
Software Zen: delete this;
|
A few weeks ago, I replaced my file server motherboard/cpu/ram, and soon realized that one of my shared drives was no longer available to other machines on the network. When I tried mounting/accessing the drive share on a remote machine, I got the error message, "Failed to mount Windows share: file exists". I tried to get to it on my main desktop, and got the same message, so I figured there was a problem on the server itself.
Today, I tried to start a movie that was stored on the share, and of course, Kodi couldn't find it. Since it was a Christmas movie, I decided that I should fix the problem before SWMBO finds out, and managed to fix it without even googling. It turns out that the share in question was originally connected to the add-on SATA card, and its mount point was mnt/media/media3. All the rest of the drives were "/mount/...UUID...", so I deleted the existing SAMBA share and created a new one. Job done!
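For the record, the recreated share might look something like this in /etc/samba/smb.conf (the share name and path here are guesses based on the mount point mentioned above):

```ini
[media3]
   comment = Media drive on the add-on SATA card
   path = /mnt/media/media3
   browseable = yes
   read only = no
   guest ok = no
```

If the underlying mount point changes (as it did here when the drive moved off the SATA card), the share definition has to be updated to match, or clients get exactly that kind of cryptic mount error.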
SWMBO is happily watching the movie as we speak.
BTW, it looks as if Linux has the same problem with cryptic error messages (and maybe even a worse problem).
BTW #2, the new hardware boots Lubuntu to the UI in less than 10 seconds on that box. nVME drives freakin' rule!
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
John Simmons / outlaw programmer wrote: BTW #2, the new hardware boots Lubuntu to the UI in less than 10 seconds on that box. nVME drives freakin' rule!
I get very similar performance (lubuntu, sata 6 ssd).
Miss the days when you could take your time making coffee (and, on Win 3.1, bake fresh cookies) while the machine booted. (Though I've seen Win10 is getting back up there.)
|
Over engineering has been going on for a long time.
Now there are just more ways to do it and more ways to rationalize it. Not to mention of course more ways to introduce yet another new technology into the mix as well.
|
Interfacitis: that's what I'm going to tell my colleague, who is so fond of creating interfaces for even the simplest of applications!
|
Love them while building, hate them while debugging.
Guess how often I use them?
|