The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling, (political, climate, religious or whatever) will result in your account being removed.
would anyone be interested in taking a stab at an article
Rick above has the right idea. That's not an article; it's a book.
We went over the OSI layers back when I was in college. While it might make for an interesting read all these years later, I actually don't want to start thinking about it at this level. Somehow I'm thinking I'd fall into the trap of knowing just enough to be dangerous, and then start trying to "optimize for the wrong things".
None of this is important to a developer, and yet it kind of is, in the same way garbage collection, disk access patterns and thread scheduling are. You don't need to know them, but knowing them gives you the knowledge to make informed decisions.
Actually I'm not so sure that is true. If it really matters (say, for genuinely real-time apps), then mechanisms will be developed to work with and around it.
Consider database design: there was a time when the way hard disks worked mattered to both the database server developers and the app developers, so people optimised with the hardware in mind. Now databases can be distributed, on different media, even on unknown media (cloud).
Even the way new hardware works: for instance, if you read up on SSDs vs spinning disks, even though the OS treats them the same, the old rules don't apply. Yes, you may still play around with caching/buffering, but optimising, say, block sizes or transfer speeds for the hardware is no longer a factor that matters.
Network low-level magic: as for knowing how it works, well, there's one common way of doing it for a LAN, about three common methods for wireless, and about five for WAN... there's nothing to optimise, because one network session may use any one or more of these at the same time.
Knowing how your network connection at low level is working right now will not give any clue as to what will be happening later, and "later" could be hours, minutes or even milliseconds.
At the high app level, nothing about the current connection at the lowest level is useful, because it may change before the app can even apply whatever optimisations or rules cater for what it found. Speed is about the only useful factor, and even that doesn't dictate low-level details such as packet sizes or negotiation/error-correction overheads...
yawn, got called in to monitor some app, waste of half a day that I could have used for sleeping, but whatever, charging for a full day even though it's only 10 AM and I'm done
Very valid point(s). It helps to know something about subnets and addresses when you organize a network, but that is a rather high level. Over the years, the only lower-level networking issue I have had to explicitly deal with at any level was the limited size of a single Ethernet packet. Once I wrote a function to continue reading until all the data was received, that became a non-issue. I still have to deal with it every time I come across another "protocol", but having that base available makes it very easy. The interesting thing is that I don't deal with that size exactly, only with whether I have all the data yet, as it arrives in sections. I just have to figure out how to determine the amount of data to receive.
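The "read until you have it all" approach above can be sketched roughly like this - a minimal Python sketch, assuming a hypothetical protocol that prefixes each message with a 4-byte big-endian length (TCP itself gives you only a byte stream, so some framing convention like this is up to the application):

```python
import socket
import struct

def recv_exact(sock, n: int) -> bytes:
    """Keep calling recv() until exactly n bytes have arrived.
    A single recv() may return less than asked for, so we loop."""
    chunks = []
    remaining = n
    while remaining > 0:
        chunk = sock.recv(remaining)
        if not chunk:
            raise ConnectionError("peer closed before message was complete")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def recv_message(sock) -> bytes:
    """Hypothetical framing: a 4-byte big-endian length, then the payload."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

The same loop works for any stream protocol; only the "how much to read" rule (length prefix, delimiter, fixed size) changes per protocol, which matches the observation above.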
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Actually, this is the very reason I got a Computer Engineering degree instead of a Computer Science degree. I had enough experience with the software side of things on my own as a hobbyist I wanted to understand how the hardware level worked.
I can remember in one class early on we couldn't get our circuit to work just right. It turned out the wires we were using were too long, so the signals weren't getting to the right places at the right times.
It IS amazing how much stuff we can do with a bunch of changes in voltage.
Everyone is born right handed. Only the strongest overcome it.
Which implies some kind of state management going on.
What I found fascinating when simulating TCP/IP over satellite and dealing with rain fade (packet loss due to rain) is that the hardware (or more precisely, the software in the hardware) handles all the protocols for requesting retransmission of packets. Something we take for granted is that packets will be received in the correct order, which is achieved by the hardware layer, packet numbering, and ACKs.
By default, TCP/IP over satellite is darn slow, as every ~1400 byte packet requires an ACK, which is a 500ms round trip (250ms from ground to satellite to ground, 250ms back for the ACK.) We were simulating buffering packets on the ground and simulating the ACK so we could get a continuous stream of packets, and using a separate "Error ACK" to resend only the garbled packets. It worked quite well in the simulations, achieving near hardwired transmittal rates.
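Some back-of-the-envelope arithmetic illustrates why plain stop-and-wait over a geostationary link is so slow (the packet size and round trip are the figures from the post; the 10 Mbit/s link capacity is just an assumed number for illustration):

```python
PACKET_BYTES = 1400   # payload per packet, per the post
RTT_S = 0.5           # ~250 ms ground-satellite-ground each way

# Stop-and-wait: one packet, then idle for a full round trip awaiting the ACK.
stop_and_wait_bps = PACKET_BYTES * 8 / RTT_S
print(f"stop-and-wait: {stop_and_wait_bps / 1000:.1f} kbit/s")  # ~22.4 kbit/s

# To keep an (assumed) 10 Mbit/s link busy, the bandwidth-delay product
# must be in flight - which is what the simulated-ACK buffering achieved.
LINK_BPS = 10_000_000
window_packets = LINK_BPS * RTT_S / (PACKET_BYTES * 8)
print(f"packets in flight needed: {window_packets:.0f}")
```

The gap between ~22 kbit/s and the raw link rate is why spoofing the ACKs on the ground made such a dramatic difference.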
That was the closest I got to diving into the bowels of how things talk to each other.
Computer Networks, 5th ed. (2010), by Andrew S. Tanenbaum and David J. Wetherall
Internetworking with TCP/IP Vol.1: Principles, Protocols, and Architecture - Douglas E. Comer
Internetworking with TCP/IP Vol.2: Internals and Implementation
Internetworking with TCP/IP Vol.3: Client-Server Programming and Applications
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
Computer Networks, 5th ed. (2010), by Andrew S. Tanenbaum and David J. Wetherall
I got my basic introduction to networking from the 1st ed of this book - the original source of the quote "The nice thing about standards is that you have so many to choose from."
The remark comes after a discussion of the tiny little details in which HDLC and SDLC differs, making them incompatible. Tanenbaum continues something like (this is quoted from memory): "And if you don't like this year's crop, just wait until next year, and you will have a few more to choose from."
That should be a good start - a thousand pages for TCP/IP alone ...
(Well, take it with a grain of salt - the TOC seems to suggest that there is some coverage of e.g. X.25, FR and ATM, ... in the part labeled "Core TCP/IP protocols" )
Note that the book is (C)2006, twelve years old. Now, IPv4 is 37 years old, so for the basic protocols it doesn't matter. But if you today made the statement opening the chapter on ATM, "ATM-based networks are of increasing interest for both local and wide area applications", half of your audience would dive straight into their smartphones to google what the h "ATM" is, and the rest would try to make sense of the role of banking machines in networking. I guess the IPv6 parts may have other statements about the state of deployment and completeness of the protocol suite that may not be quite up to date.
Thanks for the reference to the book - but honestly, I think that for a completely blank beginner, it will serve more as a sledgehammer to knock the reader to the ground than as a simple-to-read introduction to networking!
I was teaching such stuff (and even wrote parts of the text for the course, but in Norwegian) in an age when we were still hoping for networks to be orderly and well designed. (That is, in the 1990s.) I see other commenters mentioning the OSI model, which was an essential part of that orderliness. Well, those were the days...
Today, networking is more or less a total mess. Lots of stuff isn't what it appears to be: Maybe the name is the same, but under the hood there is a hodge-podge of different, and largely incompatible, technologies that must be explicitly bridged between. Take "Ethernet" as an example: A multitude of cable types, a dozen different physical connectors, different topologies, different allocation strategies, ... But to the user, it is presented as the eternal, single Ethernet technology (just with a more convenient RJ45 plug).
ooo oo00oo ooo
Along another axis: One thing you ask explicitly about is "discontinuous communication" (often called "asynchronous" communication). First: In a protocol stack, it may vary from one layer to another. The cable may carry a continuous stream of bits at a well-defined, steady rate. The user of this cable may insert some packets now, some then, at a varying rate. If these packets carry real-time data, such as streaming sound or video, the packets may be received "now and then" at irregular speed, but are buffered so that they can be forwarded to the sound/video units as a constant-speed, continuous bit stream.
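The buffering idea in that last sentence - irregular arrivals smoothed into a steady playout - can be sketched as a toy jitter buffer. Everything here is made up for illustration: packet i is assumed to have been sent at time i, each is scheduled to play at a fixed delay after sending, and anything arriving later than its playout slot is lost:

```python
def playout_times(arrivals, delay):
    """arrivals[i] = arrival time of packet i, which was sent at time i
    (one packet per tick). Each packet plays at i + delay, restoring a
    steady rate; a packet arriving after its slot is lost (None)."""
    return [i + delay if arrivals[i] <= i + delay else None
            for i in range(len(arrivals))]

# Jittery arrivals, steady playout one time unit later; packet 1 is too late:
print(playout_times([0.1, 2.4, 2.3, 3.9], 1.0))  # [1.0, None, 3.0, 4.0]
```

Picking the delay is the classic trade-off: a bigger buffer loses fewer packets but adds latency, which is why interactive voice and buffered video choose very differently.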
Second: Off the top of my head, I can list a whole series of alternatives for handling asynchronous traffic in the physical layer alone. Much depends on whether you have the line to yourself or share it with others:
POTS, i.e. analog modem. When you have nothing to say, the line is quiet. When you want to send a byte, you sound a buzzer ("start bit") to wake up the receiver, followed by your eight data bits, and a final buzz ("stop bit") to call it off. In later modem protocols, more than eight bits were sent at a time, to reduce the percentage-wise red tape of start and stop bits. Bits were encoded as a beeping tone that in the oldest versions switched between two frequencies ("FSK"), one for binary 0, the other for binary 1. Later, fairly complex schemes were introduced: Each wave peak could vary both in amplitude and in delay (i.e. phase), the peak hitting (or coming close to) one of, say, 32 defined points, conveying 5 bits per full wave (i.e. per Hz of analog channel bandwidth).
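The start-bit/data/stop-bit framing described above is essentially what an async serial line (UART) still does today. A tiny sketch of the classic 8N1 layout, for illustration only:

```python
def frame_byte_8n1(byte: int) -> list[int]:
    """One 0 start bit, eight data bits LSB first, one 1 stop bit.
    The line idles at 1, so the 0 start bit is the 'buzzer' that
    wakes up the receiver."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first, as on a real UART
    return [0] + data_bits + [1]

print(frame_byte_8n1(0x41))  # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Ten line symbols for eight data bits: 20% framing overhead, which is exactly the "red tape" that later multi-byte modem protocols tried to amortise.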
HDLC is somewhat similar (binary, on/off, no analog), but the line is always "busy", sending bits at a specific speed. With nothing to send, a stream of "flags", binary 01111110, is transmitted. Once this pattern is broken, data follows: An arbitrary length bit stream up until the 01111110 pattern reappears. (If the data contains 01111110, it is broken up by inserting a 0 after the fifth 1; the receiver will remove it before forwarding the data.)
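The bit-stuffing rule is simple enough to show directly - a small sketch of HDLC-style stuffing and unstuffing (simplified: this receiver assumes every 0 following five 1s is a stuffed bit, and leaves flag detection out entirely):

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]  # the HDLC flag pattern, 01111110

def stuff(bits):
    """Insert a 0 after every run of five consecutive 1s, so the
    payload can never imitate the 01111110 flag."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            out.append(0)
            ones = 0
    return out

def unstuff(bits):
    """Receiver side: drop the 0 that follows any five consecutive 1s."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False
            ones = 0
            continue  # this was the stuffed 0 - discard it
        out.append(b)
        ones = ones + 1 if b == 1 else 0
        if ones == 5:
            skip = True
    return out
```

After stuffing, no payload can contain six 1s in a row, so the flag pattern on the wire unambiguously marks frame boundaries.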
Multiple users complicate matters. A classical Ethernet bus cable (or any other CSMA/CD medium) is quiet until one station wants to send something. The sender listens to the cable until it is quiet, then starts transmitting - but keeps listening at the same time: If some other station also started transmitting, they both hear a garbled signal, different from what they tried to send themselves. Both stop transmitting and wait a while before making a new attempt.
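The "wait a while" in classic Ethernet is truncated binary exponential backoff. A sketch, using the classic 10 Mbit/s slot time of 51.2 microseconds:

```python
import random

def backoff_delay(attempt: int, slot_time_s: float = 51.2e-6) -> float:
    """Truncated binary exponential backoff, roughly as in classic Ethernet:
    after the n-th collision, wait a random number of slot times in
    [0, 2^min(n, 10) - 1]. (Real Ethernet gives up after 16 attempts.)"""
    k = min(attempt, 10)
    return random.randrange(2 ** k) * slot_time_s
```

The randomness is the whole point: two colliding stations that waited a deterministic time would simply collide again, forever.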
Newer descendants of CSMA/CD, such as LAP-D for the ISDN D channel, differ in one respect: All senders are synchronized, so that bits overlap exactly. The electronics makes a 1 trump a 0, so a station transmitting a 1 won't notice any collision. One transmitting a 0 and hearing a 1 must cease transmitting before the next bit and try again later. The one with the 1 bit succeeds and need not retransmit. So there is less waste of capacity.
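The "1 trumps 0" arbitration can be modeled in a few lines (a toy model, not any particular protocol's exact rules; CAN bus, which uses the same idea, happens to make 0 the dominant bit instead):

```python
def arbitrate(transmissions):
    """Model a shared medium where a 1 'trumps' a 0 (wired-OR).
    All stations start in the same bit slot; each compares what it
    hears with what it sent and backs off at the first mismatch.
    Returns the indices of the station(s) still transmitting."""
    n_bits = len(transmissions[0])
    active = list(range(len(transmissions)))
    for i in range(n_bits):
        bus = max(transmissions[s][i] for s in active)  # 1 dominates 0
        active = [s for s in active if transmissions[s][i] == bus]
    return active

# Three stations collide; the one whose bits always 'win' keeps the medium.
print(arbitrate([[1, 0, 1], [1, 1, 0], [1, 1, 1]]))  # [2]
```

Because the loser retracts mid-frame and the winner never notices the collision, no transmission time is destroyed, which is the capacity saving the post describes.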
Other schemes do not rely on senders listening and retracting in case of collision. E.g. in Bluetooth Smart (old name: BT Low Energy), a central master allocates time slots to each slave: If you have more to say, come back in 84 ms! Usually, a slave gets a time slot at regular intervals, every x milliseconds, depending on need. (The "low energy" is much due to this: The slave can turn off its radio up until its next time slot, to save power.) Several other protocols have masters doing central allocation of capacity, in various ways.
Then there are reservation schemes: DQDB (which really deserved more success than it got - today it is dead) operated two bus lines, one running each way, carrying a continuous stream of fairly small "cells", with a header flag indicating "busy" or "free". If you have something to send, you wait for a "free" cell, set the flag to "busy" and fill in your data bits as the cell flies by. But this requires that you have first requested the cell by setting a request flag in one of the cells running in the opposite direction. This ensures that all stations have equal right to reserve capacity.
Various reservation schemes are used by other protocols, often using a circulating "token" that a sender must hold to be allowed to transmit.
Now, which other schemes for allocating capacity to varying-bitrate users deserve to be mentioned? I'd like to mention one ATM facility, even if I don't know if anyone ever made serious use of it:
Many sound/video coding schemes have a "layered" design: You can drop parts of the data, reducing quality somewhat. GSM is a prime example: Certain elements, such as phoneme codes, have very good error correction and will "always" get through. The "sound energy", i.e. volume, goes without wasting capacity on error-correction bits. So if that protocol element is unrecoverably garbled, the volume is simply kept steady. At low S/N, speech may sound flat, but the phonemes get through, keeping the speech comprehensible.
An ATM switch may have, say, eight incoming lines all wanting to send cells to the same outgoing line, exceeding its capacity. The switch inspects incoming cells for header flags indicating "discardable", and drop those cells, while letting non-discardable ones through. Presumably, discardable cells carry information similar to "sound energy"; it could be improved image details, high-frequency sound content etc.
You request ATM connection (yes, ATM recognizes connections at the bottom layer, just like analog POTS!) of a guaranteed bit rate - say, 64 kbps for a phone. Every switch along the route must agree to this guarantee. If you don't use all your 64 kbps, the switches may take what you don't use for discardable cells. As soon as you open your mouth, those discardable ones must yield. You can send non-discardable cells only up to 64 kbps (or whatever you reserved), but you may send lots of discardable ones to improve sound quality when traffic is low and network capacity is available.
This provides an "overbooking" facility where no one is rejected (until the network is saturated with non-discardable cells), yet the network runs without "empty seats" when the load is lower: All available capacity can be used to provide improved quality.
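The discardable-cell scheduling described above can be caricatured in a few lines - a toy model with made-up names, in which guaranteed cells always pass (they reserved their capacity) and discardable ones soak up whatever is left:

```python
def schedule_cells(cells, capacity):
    """Toy model of the overbooked outgoing line: a cell is a
    (payload, discardable) pair. Non-discardable cells always pass;
    discardable ones fill the leftover capacity and are dropped
    first under congestion. All names here are made up."""
    guaranteed = [c for c in cells if not c[1]]
    best_effort = [c for c in cells if c[1]]
    return guaranteed + best_effort[:max(0, capacity - len(guaranteed))]

# Two reserved voice cells plus two quality-improving extras, capacity 3:
sent = schedule_cells([("voice", False), ("detail", True),
                       ("voice", False), ("extra", True)], capacity=3)
print([payload for payload, _ in sent])  # ['voice', 'voice', 'detail']
```

When the link is idle the extras all get through (better quality); under load they silently vanish while the guaranteed stream is untouched - the "no empty seats, no one bumped" property.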
... Maybe that is enough to illustrate the number of alternatives and options. Yet lots of people would yell: Why didn't you include this method? And that method? ... which strengthens my point! All that I have written here covers one tiny little speck of what you want to learn, and I have just mentioned a few alternatives, not provided a tutorial! Covering all the issues, with thorough explanations, would fill volumes.
ooo oo00oo ooo
The bad thing about the mess we have today is that you cannot pick one solution to a problem (say, how to multiplex several users onto the same medium) and combine it with a freely chosen solution to another problem (say, how to identify the other end): If you want TCP services, you are more or less bound to use IP addresses. If you use SMTP email, you are more or less forced to use TCP connections. In some cases, you can in principle make unusual combinations: Say, if you replaced TCP/IP with TCP/ATM, mapping a TCP connection directly onto an ATM connection, that TCP would be relieved of a whole lot of work; ATM would do it far more efficiently. But everyone you want to talk to would say: TCP/ATM? That is TCP/IP, mate! We know nothing of this "ATM". ... Other protocol suites are largely the same. They expect one given technology (or maybe one of two or three selected ones, such as IPv4 and IPv6) at the lower layers.
OSI was an attempt to define common interfaces to different groups of network functionality, i.e. layers. You could have different ways of, say, multiplexing several connections over one channel, but the implementation would be local to that layer, with all of the alternative implementations available to any higher layer. All the multiplexing implementations would use the common interface to the lower layer, so that every kind of physical transport could carry multiplexed traffic. Similarly for all other kinds of network functions: OSI defined abstract functionalities, allowing different implementations. At each layer, isolated from the others, the communicating "layer entities" would open a communication with something like: "I can do multiplexing according to standards X, Y and Z - do you know any of those?" "I know X and Z." "Fine, we'll use X, then." - but no one outside that layer would care (or even know).
That didn't occur. The network wars of the 1980s-90s were won by the mess. Too bad.
Now, I certainly will not argue in favor of every specific OSI protocol, and especially not the way the functions were grouped into the seven layers. If OSI had succeeded, I think we would today have an "OSI II" with a similar layering but quite a different allocation of functions to layers (and not the TCP/IP way!). In the years of war, there was no room for such discussions. And after the mess won the war, it was essential to the victors to make as explicit as possible that any concept drawn from the defeated enemy would be bluntly rejected. And so it was. Nothing was learned from the OSI discipline.
Without reading the entire thread, try this: Internetworking with TCP/IP[^] by Douglas Comer, et al. I have the 2nd edition. Volume 1 is a great introduction to low-level TCP/IP and other base network protocols.
Sadly, I have had to learn this stuff in the course of debugging the TCP/IP 'stack' in some embedded software of ours.