Allocating most memory during initialization, based on configuration parameters, is good practice in servers.
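For illustration, a minimal C++ sketch of that practice, with a hypothetical Config read at startup (all names invented):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical configuration, e.g. parsed from a file at startup.
struct Config {
    std::size_t max_sessions;   // operator-configured upper bound
    std::size_t buffer_bytes;   // per-session I/O buffer size
};

struct Session {
    std::vector<char> buffer;
};

class Server {
public:
    explicit Server(const Config& cfg) {
        // Allocate everything up front, sized by configuration;
        // the steady-state path below never touches the allocator.
        sessions_.reserve(cfg.max_sessions);
        for (std::size_t i = 0; i < cfg.max_sessions; ++i)
            sessions_.push_back(Session{std::vector<char>(cfg.buffer_bytes)});
    }

    // Steady state: hand out a preallocated session, no new/malloc here.
    Session* session(std::size_t i) {
        return i < sessions_.size() ? &sessions_[i] : nullptr;
    }

private:
    std::vector<Session> sessions_;
};

int main() {
    Server server(Config{1024, 4096});  // all memory claimed right here
    return server.session(0) ? 0 : 1;
}
```

After the constructor returns, the request path only hands out what was already allocated, so fragmentation and out-of-memory surprises are confined to startup.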
I worked on telephone switches and never wrote anything recursive. But I did write some interesting code in which function #1 invoked function #2, which in turn invoked function #1.
A former boss wrote the code for call waiting. When one of the calls needed something done on the other call, it sent a message. After several releases the code stumbled into an obscure path where the two calls just kept exchanging messages, creating an overload situation. Although there was code to guard against one call getting into an infinite loop, this hadn't been anticipated. When the code was fixed, defensive code was also added to guard against any one call using an unbecoming percentage of the CPU time.
But I did write some interesting code in which function #1 invoked function #2, which in turn invoked function #1.
That counts as (indirect) recursion!
I understand that it was unintentional. If that had happened in a no-stack implementation (such as classical Fortran), your program would likely crash when control returned to the first function #1, or at least upon return from it - possibly a long time after the second call of #1.
You did not say whether this indirect recursion caused problems or not! Did it happen in a stack-based environment, saving the situation, or do you recall it because it failed?
(recursion, n.: When you first curse because of the crash, then recurse when you understand why it crashed.)
It certainly is recursion. Mutual recursion, perhaps?
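For illustration, the textbook shape of mutual recursion in C++ (a hypothetical even/odd pair, not the switch code):

```cpp
#include <iostream>

// Forward declaration so function #1 can call function #2
// before #2 has been defined.
bool is_odd(unsigned n);

bool is_even(unsigned n) { return n == 0 ? true  : is_odd(n - 1); }
bool is_odd (unsigned n) { return n == 0 ? false : is_even(n - 1); }

int main() {
    // Each call consumes one stack frame, which is why a
    // stack-based environment matters here.
    std::cout << std::boolalpha << is_even(10) << '\n';  // prints "true"
}
```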
It was deliberate, and in a stack-based environment, but it would rarely occur. The code is still running after more than 30 years, and I doubt it has ever recursed beyond one level, although it would be possible to set up a test where it did.
Even garbage collection may leave the heap with external fragmentation. If you want GC to leave the heap with no external fragmentation, either the GC must trace every pointer update in all software, which is rather intrusive. In languages that allow casting between pointer and non-pointer types, tracing e.g. the use of an integer that has received an (int)pointer value is non-trivial.
Or you must add another level of indirection for all heap access: the code "pointer" is really an index (or "handle") into a pointer table, the only place where the actual heap pointer is found. This strategy is used in some systems, such as for some Windows structures, and, I believe, in the JVM. It adds a (small) execution overhead, but the bigger problem is that it is a poor fit for systems manipulating addresses directly, such as C/C++ pointer arithmetic.
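A minimal C++ sketch of the handle-table idea, with invented names (a real system would add free lists, locking, and the access-control flags mentioned below):

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <vector>

// The only place real heap addresses live; user code holds indices.
static std::vector<void*> g_table;

using Handle = std::size_t;   // the code "pointer"

Handle heap_alloc(std::size_t bytes) {
    g_table.push_back(std::calloc(1, bytes));
    return g_table.size() - 1;
}

// Every access pays one extra indirection through the table.
void* deref(Handle h) { return g_table[h]; }

// A compacting collector can move a block and fix exactly one
// table entry; handles held by user code stay valid.
void relocate(Handle h, std::size_t bytes) {
    void* moved = std::malloc(bytes);
    std::memcpy(moved, g_table[h], bytes);
    std::free(g_table[h]);
    g_table[h] = moved;
}

int main() {
    Handle h = heap_alloc(64);   // user code keeps only the handle
    return deref(h) ? 0 : 1;
}
```

Note that pointer arithmetic on the address returned by deref() is exactly what breaks the scheme: the collector may relocate the block at any time, so only the handle is stable.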
In some old architectures, now more or less completely forgotten, indirect memory addressing was directly supported by hardware. One of the few that enjoyed a (short, very short!) commercial life was the Intel 432 CPU. In the 1980s, there was a whole crowd of experimental, one-of-a-kind "capability based" machines like it. Typically, the pointer table also had a lot of access control flags etc., and could support virtual memory. (Sometimes, I wish Intel would brush up that 432 architecture so that it could be used in modern systems - it did have a number of interesting features!)
The primary task of GC is to detect inaccessible memory blocks so they can be freed, and to combine neighboring free blocks into a single larger one. Compacting memory blocks comes as an extra.
Most routers are based on Linux. The Linux weenies claim Linux gets great uptime.
Very good point. I run Ubuntu on my home rig and haven't booted into Windows for a couple of weeks now either -- since WFH. But, you are right...it may be Linux's fault down at the bottom.
I'll have to see if I can tell what my generic cable modem (from Spectrum) runs.
The closest approximation to what you're claiming I said is that there are routers that have to be rebooted more frequently than some of my Windows systems (or words to that effect--they're still there for you to go back to and re-read).
I sometimes get nostalgic, remembering that "Made for Windows 95"(*) sticker: To be granted permission to put that on your product (ads, package etc.), there was a set of requirements you had to satisfy. One of them was that installation/uninstallation should not require any restart. This was considered a major step forward - from the DOS days onward, we were used to most software installation requiring a reboot.
I believe this was upheld for Win98, but for XP, the no-reboot requirement was gone, and it has been gone since (although the facilities for making no-reboot installers are a lot better today than they were for Win95). I can - sort of - excuse MS for requiring reboot after updates of OS kernel modules (but talk to those making e.g. telephone switch software: Every module, kernel or not, must be replaceable without rebooting the switch).
A few years ago, when Windows updates were downloaded and run one by one, most of them did not require a reboot. Nowadays, with everything wrapped up into a composite package, chances are high that at least one of the components sits so deep in the core of the OS that a reboot is required. A small update may happen to comprise only no-reboot components; there is no rule requiring a reboot after every Windows update.
(*) Maybe I do not remember the wording "Made for Windows95" correctly; it may have been slightly different.
Are you sure that it is "real" Linux, or just "Linux-like"?
I have never worked with routers specifically, so you may be right. Lots of monitors / OSes / executives / kernels (whatever you call them) present themselves as "Linux-like", and it doesn't take very much to claim that. Often such a system has a tiny little fraction of the API - "what is needed" - with identical function declarations, but the implementation is completely independent and not based on Linux source code.
I started working with embedded systems using the 8-bit 8051 architecture. Even for the 8051, there were people claiming to have Linux-like kernels. Chips of today are far more powerful, and many of them could run "true" Linux, but you will usually try to keep RAM size down to reduce both cost and power consumption. There generally is no need for a significant part of the Linux functionality. If you look up "List of embedded operating systems" in Wikipedia, there is a long list not in the "Embedded Linux" category (but the majority of them would claim to be "Linux-like" - or at least many of their users would say so).
On the other hand: the task of IP routing requires so much processing power that I guess the extra burden of running a "full" Linux may not make that much difference.
(Nostalgia: 25 years ago, I was supervisor for a student project setting up an 8-to-8 switch: a single AT-bus board with eight 155 Mbps lines in, eight out. Ideally, if none of the outputs was fed more than 155 Mbps, this AT-board could reach a throughput of 1.24 Gbps, which was quite a feat in 1995. But that was ATM routing, not IP routing.)
Are you sure that it is "real" Linux, or just "Linux-like"?
That's really the key, isn't it?
In Linux's defense, how hard do you have to work at it to take some Linux source, make a change, and as a result destabilize it so badly that you now have a version that has to be rebooted every few days?
Even if you allocate all memory in the initialization phase, and even if you do not use any dynamic tricks - such as recursion, dynamic function calls or the like - there could still be a problem. If the programmer is a bit sloppy, s/he could forget to initialize a variable (for example a pointer variable). If that variable is used only in certain situations, it will cause a problem only then.
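A minimal C++ sketch of how such a bug stays hidden, with invented names:

```cpp
#include <cstdio>

// 'reply' is set on every path the programmer thought about,
// but not on the "impossible" one.
void handle(int msg_type) {
    const char* reply;                    // oops: no initializer
    if (msg_type == 0)      reply = "OK";
    else if (msg_type == 1) reply = "BUSY";
    // msg_type == 2 "cannot happen"... until the day it does,
    // and 'reply' is whatever garbage was left on the stack.
    std::printf("%s\n", reply);
}

int main() {
    handle(0);   // works fine for years
    handle(2);   // undefined behaviour on the day of doom
}
```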
Another common situation is failing to handle all exceptions. A given exception may be very rare and thus forgotten about, until that day of doom when a situation occurs that throws it. If it is not handled, or is handled in the wrong way, we also have a problem, one that sends the state of the modem into unknown land. Suddenly a variable may hold a value that stops the parsing, or whatever.
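A minimal C++ sketch of that situation, with invented names; std::stoi throws std::invalid_argument on malformed input, but only the exception someone thought of is handled:

```cpp
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

std::map<std::string, int> g_state;       // hypothetical device state

void apply_setting(const std::string& key, const std::string& raw) {
    g_state[key] = -1;                    // mark "update in progress"
    g_state[key] = std::stoi(raw);        // throws std::invalid_argument
                                          // on junk input - rare!
}

int main() {
    try {
        apply_setting("channel", "not-a-number");
    } catch (const std::out_of_range&) {  // the only case foreseen
        std::cerr << "value out of range\n";
    }
    // std::invalid_argument was never handled, so the program aborts;
    // even a catch-all higher up would find g_state stuck at -1.
}
```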
Communication devices must be designed to deal with any kind of interrupt at all times. It is complicated to test whether the software is stable under all circumstances, and the cheaper the device is, the less time the developer is given for testing. So lack of testing could also be a problem.
It all leads up to the situation:
1 In the beginning - all is fine
2 In due course something happens
3 That something changes the state of the modem into uncertainty
4 Now the modem is behaving "strange"
5 So: Reboot it and go back to 1.
That is my take on why you should reboot your hardware from time to time...
I was told that the best reason to regularly reboot your router is that when glitches happen, the modem negotiates to find a slower speed that would work, but it never tries to negotiate for a faster speed. So as time goes by and glitches occur, it's going slower and slower until you reset it by rebooting it.
Either way, I'm finding weekly reboots are becoming necessary for my router. I should test it to see if a simple software reboot fixes it, or if it truly needs the power off/wait 30 seconds/power on process we've been doing, since I could do the software reboot from anywhere on the wifi, without a trip to the basement.
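As a toy model of that downward-only ratchet (made-up numbers; real DSL/DOCSIS rate adaptation is far more involved):

```cpp
#include <iostream>

int main() {
    double rate_mbps = 100.0;                  // negotiated at boot
    bool glitch[] = {false, true, false, true, true, false};
    for (bool g : glitch)
        if (g) rate_mbps *= 0.8;               // renegotiate downward only;
                                               // there is no upward path
    std::cout << rate_mbps << " Mbps\n";       // 51.2 Mbps until next reboot
}
```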
I was told that the best reason to regularly reboot your router is that when glitches happen, the modem negotiates to find a slower speed that would work, but it never tries to negotiate for a faster speed. So as time goes by and glitches occur, it's going slower and slower until you reset it by rebooting it.
That is a very interesting point and seems to match the experience.
Not a single living person knows how everything in your five-year-old MacBook actually works. Why do we tell you to turn it off and on again? Because we don’t have the slightest clue what’s wrong with it, and it’s really easy to induce coma in computers and have their built-in team of automatic doctors try to figure it out for us. The only reason coders’ computers work better than non-coders’ computers is coders know computers are schizophrenic little children with auto-immune diseases and we don’t beat them when they’re bad.
Ok, first thing you have to remember is that all modems (ADSL, VDSL, whatever) [static or dynamically addressed] are allocated their slot in the switching system using DHCP and similar technologies.
If you're on a cheap-as-chips consumer connection then it's a dead cert that you're dynamically addressed, and... on dynamically addressed connections your ISP simply does not ever allocate enough IP leases in their DHCP pool for every individual customer to have one. None of them do...
Because IP leases that sit unused are expensive.
What they do instead is allocate just enough to meet average peak demand on a daily basis, and that usually works well, simply because not everyone is trying to connect all the time.
Many cable modems, for example, connect on demand, so if everyone in the household is sleeping, all the computers and phones are off, and nothing is using the connection, the modem will typically release its lease for someone else to use, and re-request a new one as soon as the first bit of data for the day goes through it.
As I say, this generally works well, as lots of devices go offline as others come online and the lease pool has enough allocations.
Some ISPs, however, get it very wrong, or flat out don't care as long as they get to spend only what they want to spend to provide the service. Couple that with times of network stress, when more than the normal average are using the system and more people are staying online longer, and the lease pool dries up very quickly.
Some ISPs can and do dynamically expand the lease pools to meet demand; many don't, because it's all about the $$$ at the end of the day.
Many modems and other devices also have "forced lease" renewals. So, e.g.: you connect in the morning and get a 12-hour lease, the world goes crazy, everyone logs on, and 12 hours later, whether you like it or not, your modem is forced to re-lease its connection and boom... no free slots in the pool.
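As a toy model of that lease-pool arithmetic (a sketch with made-up numbers, not any real DHCP implementation):

```cpp
#include <iostream>
#include <optional>

// Toy model: a pool sized for "average peak demand", not for
// every customer the ISP has signed up.
class LeasePool {
public:
    explicit LeasePool(int leases) : free_(leases) {}
    std::optional<int> request() {
        if (free_ == 0) return std::nullopt;  // pool has dried up
        return --free_;                       // pretend this is an address
    }
    void release() { ++free_; }
private:
    int free_;
};

int main() {
    LeasePool pool(50);                                  // 50 leases...
    int denied = 0;
    for (int customer = 0; customer < 100; ++customer)   // ...100 customers
        if (!pool.request()) ++denied;   // forced renewal, no slot: boom
    std::cout << denied << " customers got no lease\n";  // prints 50
}
```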
A lot of consumer modems are made in China as cheaply as possible, so even though they might run Linux, things like buffer chips and octal drivers are chosen at as low a chip cost as possible: you might only have a couple of KB of buffer memory, or the octal line drivers might have a 3 ms latch time on them instead of a 1 ms latch time.
What you end up with is a device whose hardware performs very, very poorly when its auto-connect starts hammering the ISP line to try and get a new lease, and dare I say it... yes, the bits do kind of get stuck.
Buffers fill up quickly, overflows happen, the OS starts to get shirty because there are so many hardware failures...
And then you reboot....
Reboots generally work a) because all the buffers etc. get cleared out, and b) because most routers, when they start up, have to do a RIP negotiation with the ISP, so they actually stand a better chance of getting a new lease in the pool.
Static IPs are not too bad; they still have to do DHCP and whatnot, but they are ALWAYS returned the same lease with the same IP, gateway, subnet, masks, AAA+ identifier etc. If the passageways are busy/clogged, however, then that might cause blockages too, so to speak.
The last thing you have to watch for is your contention ratios.
Many ISPs will happily try to connect 100 customers to a traffic circuit that's only rated for 50; again, this is going to lead to line-speed problems, pool leasing issues etc.
Again, the thinking is simple: "All 100 customers are never going to be online at exactly the same time, and we can easily manage the overlaps..."
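Purely illustrative numbers: sell 100 customers 50 Mbps plans on a circuit rated for 50 such connections (2.5 Gbps) and nothing looks wrong while at most half of them are active at once. The evening 60 of them stream simultaneously, demand is 3 Gbps against 2.5 Gbps of capacity, and everyone's effective speed sags to roughly 42 Mbps - that's the 2:1 contention ratio doing exactly what it was priced to do.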
One thing that happens to cable modems is that they lose sync with the incoming signal and cannot ever re-acquire sync. When you reboot, it gets to start the sync process from scratch.
Another thing may be memory fragmentation. Your router is running a small Linux OS and a piece of software written in C that is about 30 years old, with some company-specific patches slapped in to make it look different from the generic software. With various network handshakes that can be broken off in the middle, it's not too hard to believe that cruft builds up in memory until the router chokes.
I have never seen a protocol specification where "they lose sync with the incoming signal and cannot ever re-acquire sync". Re-synching has been an integral part of link protocols for something like 40-45 years (SDLC, HDLC).
Of course there may be implementations that are incapable of doing a proper resync, but this is not a general and unavoidable problem.
(A slightly funny, historical note - I believe that this was in the late 1970s: A newspaper transmitted its data to the print shop using a byte oriented protocol over an analog phone line. For some time, they experienced regular "Framing error"s, which is another way to say "Unable to synchronize properly". It was soon discovered that this occurred when a single byte was transmitted - with two or more bytes, the receiver was able to sync properly. Single-byte messages were used in one specific situation: When a positive ACKnowledge was returned. So for quite a while, until the receiver was tuned up, they interpreted a "Framing error" as a positive ACK.)