Apologies for the shouting but this is important.
When answering a question please:
- Read the question carefully
- Understand that English isn't everyone's first language, so be lenient about poor spelling and grammar
- If a question is poorly phrased then either ask for clarification, ignore it, or mark it down. Insults are not welcome
- If the question is inappropriate then click the 'vote to remove message' button
Insults, slap-downs and sarcasm aren't welcome. Let's work to help developers, not make them feel stupid.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
For those new to message boards, please try to follow a few simple rules when posting your question.
- Choose the correct forum for your message. Posting a VB.NET question in the C++ forum will end in tears.
- Be specific! Don't ask "can someone send me the code to create an application that does 'X'". Pinpoint exactly what it is you need help with.
- Keep the subject line brief, but descriptive. e.g. "File Serialization problem"
- Keep the question as brief as possible. If you have to include code, include the smallest snippet of code you can.
- Be careful when including code that you haven't made a typo. Typing mistakes can become the focal point instead of the actual question you asked.
- Do not remove or empty a message if others have replied. Keep the thread intact and available for others to search and read. If your problem was answered, edit your message and add "[Solved]" to the subject line of the original post, and cast an approval vote for the answer (or answers) that really helped you.
- If you are posting source code with your question, place it inside <pre></pre> tags. We advise you to also check the "Encode "<" (and other HTML) characters when pasting" checkbox before pasting anything inside the PRE block, and make sure the "Use HTML in this post" checkbox is checked.
- Be courteous and DON'T SHOUT. Everyone here helps because they enjoy helping others, not because it's their job.
- Please do not post links to your question into an unrelated forum such as the lounge. It will be deleted. Likewise, do not post the same question in more than one forum.
- Do not be abusive, offensive, inappropriate or harass anyone on the boards. Doing so will get you kicked off and banned. Play nice.
- If you have a school or university assignment, assume that your teacher or lecturer is also reading these forums.
- No advertising or soliciting.
- We reserve the right to move your posts to a more appropriate forum or to delete anything deemed inappropriate or illegal.
cheers,
Chris Maunder
The Code Project Co-founder
Microsoft C++ MVP
What happens if a driver developer sends a command to a sound board (just a random pick) which the board doesn’t recognize or doesn’t know how to handle? Could that cause a crash of the sound board and require a restart?
If the data on the sound board gets corrupted could that make the entire OS unstable?
modified 2 days ago.
As always, it depends on the hardware. The response is going to be dictated by the chip the command was sent to, the code running on the chip, any error handling, or any command/response logic, ...
It could throw an invalid command message back to the driver, it could just ignore the command entirely, it could put the chip in a bad state, ...
If you're the one developing the hardware and driver, everything is up to you.
If you're NOT the one who developed the hardware, there's just too many factors you have no control over.
I seriously doubt there's going to be documentation on the hardware sufficient to tell you what will happen.
But the only way to tell is to try it!
I find that very interesting
modified 2 days ago.
That really goes for any software, doesn't it - driver or whatever?
You may consider a driver to be a process, like any other process in the system. All processes should be prepared for arbitrary commands and parameters, and reject invalid ones in a 'soft' way - by ignoring them silently or rejecting them explicitly. 'Invalid' includes 'invalid in the current state'.
I am certainly not saying that all processes (including drivers) do handle all sorts of illegal commands and parameters, just that they should, driver or otherwise, regardless of their abstraction level.
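As a toy illustration of that 'soft' rejection (the device, states and commands below are all made up for the example), in C++:
<pre>
#include <cstdio>

// Hypothetical device states and commands - invented for the example.
enum class State   { Idle, Playing };
enum class Command { Play, Stop, Reset };

class FakeDevice {
    State state = State::Idle;
public:
    // Returns false instead of misbehaving when a command is unknown
    // or not valid in the current state ("soft" rejection).
    bool handle(Command cmd) {
        switch (cmd) {
        case Command::Play:
            if (state != State::Idle) return false;   // invalid in this state
            state = State::Playing;
            return true;
        case Command::Stop:
            if (state != State::Playing) return false;
            state = State::Idle;
            return true;
        case Command::Reset:
            state = State::Idle;                      // always legal
            return true;
        default:
            return false;                             // unknown command: ignore it
        }
    }
};

int main() {
    FakeDevice dev;
    std::printf("%d\n", dev.handle(Command::Stop));   // 0: rejected - Stop while Idle
    std::printf("%d\n", dev.handle(Command::Play));   // 1: accepted
}
</pre>
A real driver would of course report the rejection back through whatever error mechanism its environment defines; the point is merely that an unknown or out-of-state command changes nothing and crashes nothing.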
Religious freedom is the freedom to say that two plus two make five.
Off topic electronics question: does 'digital circuit' mean that it is clock based, i.e. that the circuit board functions at a certain clock frequency? A washing machine's digital circuit board functions at a certain clock rate just like a PC motherboard. Is that true?
Digital circuits are not by definition clocked, if by that you mean that there is a central clock setting the speed of "everything". Circuits may be asynchronous, going into a new state whenever there is a change in the inputs to the circuit. Think of a simple adder: you change the value on one of its inputs, and as soon as the voltages have stabilized, the sum of the two inputs is available at the output. This happens as quickly as the transistors are able to do all the necessary switching on and off - the adder doesn't sit down waiting for some 'Now Thou Shalt Add' clock pulse.
You can put together smaller circuits into larger ones, with the smaller circuits interchanging signals at arbitrary times. Think of character based RS-232 ("COM port"): the line is completely idle between transmissions. When the sender wants to transfer a byte, it alerts the receiver with a 'start bit' (not carrying any data), at any time, independent of any clock ticks. This gives the receiver some time to activate its circuits. After the start bit follow 8 data bits and a 'stop bit', which gives the receiver time to handle the received byte before the next one comes in with another start bit. The bits have a width (i.e. duration) given by the line speed, but they are not aligned with any external clock ticks.
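A minimal bit-banged transmitter (Arduino-style C++; the pin number and baud rate are arbitrary assumptions, and real RS-232 voltage levels would also need a level shifter) shows that framing: an idle-high line, a start bit, 8 data bits least significant first, and a stop bit, each one bit-time wide:
<pre>
// Bit-banged transmit of one byte, Arduino-style C++.
// TX_PIN and BAUD are arbitrary; a logic-level UART idles high.
const int TX_PIN = 3;
const long BAUD = 9600;
const unsigned int BIT_US = 1000000UL / BAUD;   // width of one bit in microseconds

void sendByte(uint8_t b) {
    digitalWrite(TX_PIN, LOW);                  // start bit: line pulled low
    delayMicroseconds(BIT_US);
    for (int i = 0; i < 8; ++i) {               // 8 data bits, least significant first
        digitalWrite(TX_PIN, ((b >> i) & 1) ? HIGH : LOW);
        delayMicroseconds(BIT_US);
    }
    digitalWrite(TX_PIN, HIGH);                 // stop bit: back to the idle level
    delayMicroseconds(BIT_US);
}

void setup() {
    pinMode(TX_PIN, OUTPUT);
    digitalWrite(TX_PIN, HIGH);                 // idle state between transmissions
    sendByte('A');
}

void loop() {}
</pre>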
Modules within a larger circuit, such as a complete CPU, may communicate partly or fully by similar asynchronous signals. In a modern CPU with caches, pipelining, lookahead of various kinds, ... not everything starts immediately at the tick. Some circuits may have to wait e.g. until the value is delivered to them from cache or from a register: that will happen somewhat later within the clock cycle. For a store, the address calculation must report 'Ready for data!' before the register value can be put out. Sometimes, you may encounter circuits where the main clock is subdivided into smaller time units (PCs usually have a 'clock multiplier' that creates the main clock by multiplying up the pulses from a lower frequency crystal; the process can be repeated for even smaller units), but if you look inside a CPU, you should be prepared for a lot of signal lines not going by the central clock.
The great advantage of un-clocked logic is that it can work as fast as the transistors are able to switch: no circuit makes a halt waiting for a clock tick telling it to go on - it goes on immediately.
The disadvantage is that unless you keep a very close eye on the switching speed of the transistors, you may run into synchronization problems: one circuit waits for a signal that 'never' arrives, or doesn't arrive on time - or maybe it arrived far too early, when the receiver was not yet ready to handle it. So asynchronous, non-clocked logic is mostly used in very small realms of the complete circuit (but possibly in lots of realms).
For special purposes, you may build a circuit to do a complete task, all in asynchronous logic. If you are building something that is to interact with other electronics, you will almost always depend on clocking in order to synchronize the interchange of signals. So all standard CPUs use clock signals for synchronizing both the major internal components and the exchange with the outer world. Asynchronous operation is mostly limited to the between-ticks interchanges between the lower layer components.
I think you can assume that you are right about the washing machine circuit board: it almost certainly has a clock circuit (probably a simple RC oscillator - it doesn't need the precision and speed of a clock crystal). The major reason is for communicating with the surroundings in a timely (sic!) manner. Today, chances are very near 100% that the machine uses an embedded processor for control. This will require a clock for its interface to the world, and most likely for keeping its internal modules in strict lockstep as well.
I would guess (without knowing specific examples) that in application areas such as process control, there is more asynchronous logic, both because the outer world (being controlled) goes at its own pace regardless of any clock, and you have to adapt to that - an external interrupt signal is by definition asynchronous. Also, in some environments, immediate reaction to special events is essential. The speed of asynchronous logic may provide a feedback signal noticeably faster than a clocked circuit that "all the time" must wait for ticks before it will go on.
A historical remark:
You explicitly said digital circuits. In the old days, we had analog computers, handling continuous values, discrete neither in level nor in time. You couldn't do text editing on such a computer, but if you had developed a set of differential equations for, say, controlling a chemical process, you could program this equation system by plugging together Lego-brick-like modules for integration, derivation, summation, amplifying or damping etc. in a pattern directly reflecting the elements of the differential equations. This setup was completely un-clocked; each "brick" reacted immediately to changes on its inputs. Basic calculus functions were a direct physical consequence of how the brick was composed of capacitors, resistors, maybe transistors and other analog components. One of the best known usage examples at my Alma Mater was a professor running a simulator for cod farming in a fjord: he had a number of potentiometers to adjust the amount of fodder given to the fish, the water temperature, the amount of fresh water running down to the fjord when snow melted in spring, and so on. He could read the effect on the cod population (almost) immediately on the analog meters connected to the outputs.
I never got to try to program an analog computer (I was maybe 3-5 years late, no more), but I had a relative (retired years ago) whose special expertise was in analog computers. He shook his head when digital computers pushed out the last analog ones, around 1980: it would take ages before digital computers could do the job of analog ones. They were not nearly fast enough. Besides, if you need to integrate a signal, plugging in an integrator is straightforward; writing 100 lines of code to do it digitally is not - it is error prone and far abstracted from the real world. Even though changes happen simultaneously in twelve different "bricks", with immediate interactions, you have to do them sequentially, one by one, on a digital computer, and the mutual interactions never come naturally; you have to devise separate communication channels to exchange them ...
He was forced to switch to digital computers for the second half of his professional life, but he never made friends with them.
Religious freedom is the freedom to say that two plus two make five.
Does a clocked digital circuit board have “cells” resembling CPU registers - something that acts as persistent memory between flashes/waves of current? I’m just trying to visualize how a board you plug into a motherboard works. In the short blackout between cycles, the information has to be kept somewhere.
Register-like mechanisms are present in almost all kinds of circuit boards, especially when dealing with interfaces to external units. I don't think 'cells' is a common term; they are more commonly called registers, occasionally 'latches' (for registers catching input values).
Registers on peripheral interfaces are used for preserving internal values as well, not just values being input or output. Quite often, they are addressed in similar ways. With memory mapped IO (which is quite common in modern machines, except for the X86/X64), one address is a straight variable, another address sets or returns control bits or data values in the physical interface circuitry, and a third address is a reference to the internal status word of the driver. So which is a 'register', which is a 'variable', and which one is actually latching an input or output value? The borders are blurry. Whether you call it a register, a variable, a latch or something else: it is there to preserve a value, usually for an extended period of time (when seen in relation to clock cycles).
When some interface card is assigned 8 word locations (i.e. memory addresses) in the I/O space of a memory mapped machine, don't be surprised to see them referred to as 'registers', although you see them as if they were variable locations. When you address the variables / registers in your program, the address (or the lower part of it) is sent to the interface logic, which may interpret it in any way it finds useful. Maybe writing to it will save the data in a buffer on the interface. Maybe it will go to line control circuitry to set physical control lines coming out of the interface. There is no absolute standard; it is all up to the interface.
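As a sketch of what that looks like from the program's side (the register layout, offsets and bit meanings below are invented for the example):
<pre>
#include <cstdint>
#include <cstdio>

// Hypothetical register layout of a memory-mapped interface card.
// Offsets and bit meanings are invented for the example.
struct CardRegs {
    volatile std::uint32_t data;      // word 0: value latched to/from the device
    volatile std::uint32_t control;   // word 1: bits driving physical control lines
    volatile std::uint32_t status;    // word 2: device status word
};

void writeWord(CardRegs* card, std::uint32_t value) {
    while ((card->status & 0x1) == 0) { /* wait for a hypothetical 'ready' bit */ }
    card->data    = value;                    // the store goes out on the bus
    card->control = card->control | 0x2;      // hypothetical 'start transfer' bit
}

int main() {
    // On real hardware, 'card' would point at the card's assigned bus address;
    // here an ordinary struct stands in so the sketch compiles and runs.
    CardRegs fake{0, 0, 0x1};
    writeWord(&fake, 42);
    std::printf("data=%u control=%u\n", (unsigned)fake.data, (unsigned)fake.control);
}
</pre>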
Religious freedom is the freedom to say that two plus two make five.
Does the BIOS function like an operating system? It captures keyboard input, has the ability to display characters on screen; on my laptop it even displays animations. All this is done without using drivers. What does the battery on the motherboard do? Does it help keep the BIOS always loaded in memory, or is the BIOS booted from permanent memory when you turn on the PC?
The BIOS uses the processor to function, hence it must be written in assembly - is that how it works?
modified 28-Sep-24 17:15pm.
You can consider it a very simple OS, lacking a lot of the functions you expect today. Some primitive drivers are part of this "OS". These are mainly for getting the 'real' OS loaded into memory and giving it control - often referred to as 'bootstrapping'. In principle, you could have the BIOS load your program rather than the real OS, but most likely, your program would request services that the BIOS doesn't provide.
In the old days (DOS), applications relied on the drivers that are part of the BIOS to handle the few devices that the BIOS has drivers for, such as the keyboard and character based output to a screen or printer. Executing driver code out of the BIOS ROM was slow, so PCs started to copy the BIOS code to RAM at startup to run it faster. Often, the BIOS code was limited and couldn't utilize all the functions of the peripheral, so OSes started providing their own code to replace the BIOS drivers.
Today, the OS has its own drivers for 'everything', so those provided by the BIOS are used only for the bootstrapping process. Even though execution out of the BIOS ROM is slow, those drivers are used for such a brief time that it doesn't really matter much. I doubt that any modern motherboard would care to copy the BIOS drivers from ROM to RAM for speedup, the way they did in the old days. Note that in the old days, those drivers were used all the time; the OS didn't have a better replacement driver. So then it made much more sense to copy to RAM than it does today.
When you boot up, the OS isn't there yet, so you need something to read the disk, floppy, USB stick or whatever medium you keep your OS on. If your OS is on a medium for which your BIOS doesn't have a driver (say, a tape cassette), you may be lost - unless your BIOS has a driver for, say, a floppy drive, and you can load the tape station driver from the floppy, and use that driver (loaded to RAM) to load the real OS from the tape. (This is not very common, though.) We had USB for years before we got BIOSes with drivers for the USB interface. During those years, you could not boot your OS from a USB stick the way you can today. Even before that, we had the same issue with CD/DVD drives: the BIOS didn't have CD drivers, so the CD/DVD drive was useless until the OS had been loaded with its CD drivers.
The mainboard battery: Flash is a more recent invention than the PC. In the old days, the data area used by the BIOS, holding e.g. the order in which to try the various boot devices, was held in what is called CMOS, an extremely low-power, but not very fast, memory technology. Functionally, it was used the way flash is used today, but even though it drew almost no current, it depended on a certain voltage to keep its state intact. (The C in CMOS is for 'Complementary', indicating two transistors blocking each other, neither of them carrying any current to speak of. But if one of them lets go of its blocking, the house of cards falls down.) I would think that recent motherboards have replaced CMOS with flash, so they will not lose information when the battery is replaced.
The battery has a second function: The motherboard has a digital clock, running even when the power is turned off, the mains cable unplugged, and for a portable, the main battery is empty. This cannot be replaced by any battery-less function. If you have to replace the mainboard battery, expect the clock to be reset to zero. Even if the BIOS makes use of a flash for storing setup parameters, the battery is needed for the clock.
Sure, the BIOS uses the CPU. Or, I'd rather say it the other way around: the CPU uses the BIOS as its first program to run at startup. All CPUs, from the dawn of computers, fetch their first instruction from a fixed address (00000...000 is a viable candidate, but some CPUs differ). That is where you put the BIOS. The BIOS provides the first instructions executed by the CPU. You could say that it is much like any other program. In principle, it could be written in any language, but its tasks are so close to the physical hardware that it very often is written in assembly - at least the initial part of it, setting up the registers, the interrupt system and memory management. When that is done, it may call routines written e.g. in C for things like handling the user dialog to set up the boot sequence, reporting the speed of the fans and all the other stuff that modern BIOSes do today. (Mainboards of today call their initialization code UEFI rather than BIOS, but its primary functions are the same.)
A computer doesn't have to have a BIOS. One of the first machines I programmed did not. When powered on, the PC register was set to 0 and the CPU halted. The front panel had 16 switches; the instructions were 16 bits wide. So I flipped the switches to the value of the first instruction and pressed 'Deposit'. This stored the switch positions at address 0 and advanced the PC register to 1. I flipped the switches for the next instruction; Deposit stored it at address 1 and advanced to address 2. The mini-driver for the paper tape reader was 15-20 instructions long. Consider that my "BIOS"! After flipping and depositing it, I placed a paper tape in the reader, containing the disk driver. Then I pressed the 'Reset' button, the PC register was reset to 0 and the CPU taken out of halt. The CPU ran the paper tape driver, which loaded the paper tape and at the end of the tape ran right into the code just loaded, to run the disk driver to load the OS bootup code that loaded the rest of the OS.
Also, the computer doesn't have to have a built-in clock running when the power is off, so it needs no battery for that purpose. Most computers have a clock, but without a battery you have to set it after power on; until the advent of PCs, most computers did not have a battery-backed clock. E.g. after a fatal crash, the operator would have to restart and then set the time explicitly from his own watch.
There is a story about that from the University of Copenhagen - it must have been in the early 1970s: After a crash, the operator set the time and date, but didn't notice that he had typed the year wrong, ten years into the future. This wasn't noticed until after they had run the maintenance program deleting all files that hadn't been referenced for three months. (I guess that is when they noticed it!)
Religious freedom is the freedom to say that two plus two make five.
Thank you for taking the time to reply, tronderen - that’s an interesting post.
Quote: In the old days (DOS), applications relied on the drivers that are part of the BIOS
How did that function? I don't understand much about hardware or driver programming and I'm looking to broaden my horizons. Without a driver, the CPU doesn't know the 'love language' of the equipment that sits in a slot, but only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.?
Back in the old days (and even today, I think - I'm not sure, I've only had laptops in recent years) a slot like PCI accepted hardware from different categories. How did that work?
modified 29-Sep-24 15:28pm.
Back in the DOS days, video cards sat at a certain address. The BIOS didn't need any special drivers. It just wrote directly to the addresses the video RAM was at.
Back then, the bus and cards couldn't assign addresses, ports, DMA channels, and IRQs automatically. You had to manage the separation of the hardware manually yourself, and then tell the drivers where the hardware sat in memory and/or what resources it was configured to listen on.
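To make that concrete: in DOS text mode the screen buffer sat at segment B800h, two bytes per character cell, so putting a character on screen was just a store to that address. A toy example, assuming a real-mode compiler with far pointers such as Turbo/Borland C++ (a modern protected-mode OS will refuse this):
<pre>
// DOS text mode: video RAM at B800:0000, two bytes per character cell.
// Only works in a real-mode DOS environment (e.g. Turbo/Borland C++);
// a protected-mode OS will not let a program touch that address directly.
int main() {
    volatile unsigned char far *vram =
        (volatile unsigned char far *)0xB8000000L;   // segment B800, offset 0000
    vram[0] = 'A';     // character code of the top-left screen cell
    vram[1] = 0x07;    // attribute byte: light grey on black
    return 0;
}
</pre>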
Calin Negru wrote: How did that function? The BIOS is really nothing but a function library. In the DOS days, you could in principle call, say, the driver for outputting a character on the serial line by calling the function directly, by its BIOS address. Well, not quite - the return mechanism wouldn't match your call, but we are close. Rather than a direct function call, you used the interrupt system to call the driver function.
You may think of physical interrupts, coming in on the INT or NMI (Non-Maskable Interrupt) line, as a hardware mechanism for calling a driver function when something, like an input, arrives from a device. Hardware will put the input value into a temporary register in the interface electronics (not a CPU register) and the driver function will move the value from that register into memory. Each interrupt source (device), or group of devices, provides an ID to the interrupt system so that a different function is called for each device (group), each function knowing that specific device type and how to transfer the value from the interface electronics to the CPU. The interrupt system has a function address table with as many entries as there are possible interrupt IDs, so the ID is used to index the table. This table is commonly called an 'interrupt vector'.
All computers have at least one extra location in the interrupt vector that is not selected by a hardware device ID; instead, your software can use a specific instruction to make a function call to the BIOS, OS, supervisor, monitor, ... whatever you call it. Intel calls it an INT(errupt) instruction; on other machines it may be called an SVC (SuperVisor Call), MON(itor) call, or similar. On some machines, the instruction may indicate the interrupt number (i.e. the index to be used in the vector), so that different service functions have different interrupt numbers. Others have a single 'software interrupt' number and vector entry, with a single handler that reads the desired service number from a register. Many machines started out giving each service a separate ID, but the number of services outgrew the interrupt vector, so they had to switch to the second method. DOS is a mix: a number of services have their own interrupt ID, but the majority of DOS service functions use INT 21h, with a service selector in the AH register. (Other multipurpose software interrupts are INT 10h for video functions, INT 13h for low-level disk functions, INT 16h for keyboard functions and INT 17h for printer functions.)
The primary function of a software interrupt call is that of an ordinary function call. But there is something extra: privileged instructions are enabled, memory management registers are updated to bring OS code into the address space, etc. This you cannot do by an ordinary function call. So an interrupt function is not ended with a plain return, but by a specific 'Return from interrupt' instruction that restores non-privileged mode, MMS registers etc. to 'user mode'.
DOS didn't have anything as fancy as 'privileged instructions' and MMS. So the main purpose of software interrupts was to make the application independent of, say, the location of the serial line handler. Regardless of BIOS or DOS version or vendor, to call the function for outputting a character on the console, you executed an INT 21h instruction with 2 in the AH register and the character code in the DL register. You may consider the BIOS specification similar to a high level language interface specification: it provided detailed parameter and functional information, and the BIOS vendor provided the implementation of this interface.
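From C or C++ under DOS, that same call could be wrapped like this (assuming a 16-bit compiler that ships <dos.h> with int86(), e.g. Turbo/Borland C++; the register names match the description above):
<pre>
#include <dos.h>   // Turbo/Borland: union REGS and int86()

int main() {
    union REGS regs;
    regs.h.ah = 0x02;            // DOS service 2: write character to standard output
    regs.h.dl = 'A';             // the character to print
    int86(0x21, &regs, &regs);   // software interrupt 21h dispatches to the handler
    return 0;
}
</pre>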
Back in those days, interrupts were fast! I worked on machines designed in the mid 1970s: the first instruction of the handler routine was executing 900 ns (0.9 microseconds) after the signal occurred on the interrupt line. (For the 1970s, that was quite impressive!) Later, memory protection systems became magnitudes more complex, and getting all the flags and registers set up for an OS service has become a lot more expensive. Processors have long pipelines, and you have to (or at least should) empty them before going on to a service call. Software interrupts of today take a lot more time in terms of simple instruction execution times, compared to 50 years ago. When the 386 arrived with a really fancy call mechanism, with all sorts of protections enforced, MS refused to use it in Windows - it was too slow. (They instead requested a speedup of Illegal Instruction interrupt handling, the fastest way they had discovered to enter privileged mode.) That is why Win32 programs never had access to a 4 GiB address space: with the 386 call mechanism, user code and the OS could have had separate 4 GiB spaces, but MS decided that '2 GiB should be enough for everybody', so that they could use a faster interrupt mechanism that made no MMS updates.
Quote: But only the equipment producer knows how to address the piece of hardware it has produced. Is there a universal language that works for all video cards, sound boards etc.?
There are lots of hardware standards for each class of hardware. The video card makers, or USB makers, or disk makers, sit down to agree on a common way to interface to the PC: they will all use this and that set of physical lines, signal levels, control codes etc. Then the driver on the PC side may be able to handle all sorts of VGA terminals, say - or every video card vendor's interface on the PC bus, because they all use the same interface.
Over the years, such industry standards have grown from specifying the plug and voltages, and little else, to increasingly higher levels. USB and Bluetooth are primary examples where this is prominent: Very general 'abstract devices', such as a mass storage device, are defined by the interface, and the manufacturer on the device side must make his device appear as that abstract device, no matter its physical properties.
Furthermore: in the old days, we often had, for a few years, a multitude of alternatives with highly device-specific drivers before the vendors got together to clean up the mess. Nowadays, new technology (such as USB3 or Bluetooth 5.0) tends to come with standards for its use from the very beginning. Today's standards also tend to be far more forward-looking than the old ones: e.g. they have open-ended sets of function codes, and exchange lots of configuration values for bitrates, resolutions, voltages, ... so that the standard can live long and prosper. If the other party cannot handle the most recent extensions, such as a higher resolution, it reports that, and the extension isn't used on that connection.
Almost all general peripherals of today present themselves as one of those abstract devices defined for the physical interface. You still need a driver for each of those, but there aren't that many different ones. For special purpose equipment you still may have to provide a special driver, because it provides functions not covered by any of the standard abstract devices. If it uses a standard physical device, say USB, it hopefully uses that in a standard way so that you can use a standard USB driver and only have to write the upper levels of the driver yourself.
Religious freedom is the freedom to say that two plus two make five.
Another excellent response. Do you think it is worth consolidating all this into an article?
Hello,
I use an Arduino Uno to read the voltage change across a thermistor's terminals.
To read the temperature, I would use the Steinhart–Hart equation:
1/T = A + B·ln(R) + C·(ln R)^3 to convert the reading to temperature. I can write this equation using C++ via the Arduino IDE, then I'll get the temperature.
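For reference, such a sketch might look like the following (the divider resistor value and the A/B/C coefficients are placeholders - take the real ones from the thermistor's datasheet or from a fit):
<pre>
// Read a thermistor on A0 through a voltage divider and convert to kelvin
// with the Steinhart-Hart equation. R_FIXED and the coefficients are
// placeholders - substitute the values for your own thermistor.
#include <math.h>

const float R_FIXED = 10000.0;         // fixed divider resistor (ohms), assumption
const float coefA = 1.009249522e-03;   // example A/B/C values for a common 10k NTC
const float coefB = 2.378405444e-04;
const float coefC = 2.019202697e-07;

void setup() { Serial.begin(9600); }

void loop() {
    int adc = analogRead(A0);                                   // 0..1023
    // Thermistor on the low side of the divider: R = R_FIXED * adc / (1023 - adc)
    float r = R_FIXED * adc / (1023.0 - adc);
    float lnR = log(r);
    float invT = coefA + coefB * lnR + coefC * lnR * lnR * lnR; // 1/T = A + B ln R + C (ln R)^3
    Serial.println(1.0 / invT);                                 // temperature in kelvin
    delay(1000);
}
</pre>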
My question is: how do I do it without using the Arduino, I mean using only electronic components? What circuit design can give me a ln or a cubic power?
Thank you
The short answer is to start with Log amplifier - Wikipedia[^]. You can assemble a bunch of them to do the trick, but that is really doing things the hard way. For limited temperature spans, there are simpler approximations for linearizing to a reasonable accuracy. Feed any search engine with "linearize thermistor".
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
I’m trying to get a better understanding of how RAM memory works. I brought up this question before. This time I’m trying to find out a little bit more.
There is no optical fiber on the motherboard, hence in the scenario where you want to place 32 bits in memory, if you want to send them all at one time you need 32 copper lines connecting the CPU socket to the memory slots. What happens if you want to send more information? Let’s say you want to send four 32 bit integers. Before sending the actual data, does the CPU (the operating system) use the same 32 lines to instruct the memory slots where the integers about to be sent should be placed?
How does the memory know the address range in which it should place the four integers?
CPU sockets of today have a tremendous number of 'pins' (they aren't really pins nowadays, but the name sticks), typically 1200-1500. Usually, far more than 32 of these carry data to/from RAM. More typical is 128 or 256, the length of a cache line. If you want to modify anything less (such as a single byte or a 32 bit word), the CPU must fetch the entire cache line from RAM, modify the part of it that you want to modify, and write the entire cache line back to memory.
The CPU uses another set of pins to indicate the RAM address to be read from or written to. Since the arrival of 32 bit CPUs, the CPU has rarely been built to handle as much RAM as the logical address space: the 386 did not have 32 address lines; you could not build a 386 PC with 4 GB of memory. Nor do today's 64 bit CPUs have 64 address lines. The memory management system will map ("condense", if you like) the used memory pages, spread over the entire 64 bit address space - even multiple 64 bit spaces, one for each process - down to the number of address pins required to cover the amount of physical RAM that you have got.
Note that when transfers between RAM and the CPU cache (inside the CPU chip) go in units of an entire cache line of, say, 128 bits or 16 bytes, there is no need to tell the RAM which of the 16 bytes are actually used - they are transferred anyway. So there is no need to spend address lines on the lowermost 4 bits of the byte address. The number of external pins is a significant cost factor in the production of a chip, so saving 4 bits gives an economic advantage.
In the old days, pins were even costlier, and you could see CPUs that first sent the memory address out on a set of pins during the first clock cycle. The memory circuits latched this address for use in the next clock cycle, when the CPU transferred the data value to be written on the same pins as those used for the address. Or, when reading: in the first cycle, the CPU presents the address it wants to read; in the next cycle, the RAM returns the data on the combined address/data lines. There were even designs where the address was too long to be transferred in a single piece: in cycle 1 the high part of the address was transferred, in cycle 2 the low part, and in cycle 3 the data. (And in those days, you fetched/wrote a single byte at a time, and caches were rarely seen.)
This obviously put a cap on the machine speed, when you could retrieve/save another data byte no faster than one every two or three clock cycles. To win the speed race, general processors today have separate, wide address and data buses. I guess that you still can see multiplexed address/data buses in embedded processors (ask Honey about that!).
Your scenario with four 32 bit words to be saved: if they are placed at consecutive logical addresses, as if they were a 16 byte array, they might happen to fit into the same cache line. When the cache logic determines that it is necessary to write it back to RAM, one address is set up on the address lines, and a single transfer is made on the data lines. If the 16 bytes are not aligned with the cache line borders, but span two cache lines, each of the two parts is written to memory at a different time, in two distinct operations. If the four words are located at distinct, non-contiguous virtual addresses, they are written back to RAM in 4 distinct operations: 4 addresses on the address bus, each with a different cache line's contents on the data bus. Note that the entire cache line is written in each of the write operations, and could include updates to other values in the same line that hadn't yet made it to RAM.
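A small C++ illustration of the 'same cache line or not' point (assuming 64-byte lines, which is typical for current x86/x64 parts; the program only inspects addresses, it doesn't measure anything):
<pre>
#include <cstdint>
#include <cstdio>

constexpr std::uintptr_t LINE = 64;   // assumed cache-line size in bytes

// Index of the cache line (by address) that a given object starts in.
std::uintptr_t lineOf(const void* p) {
    return reinterpret_cast<std::uintptr_t>(p) / LINE;
}

int main() {
    alignas(64) std::uint32_t aligned[4];   // 16 bytes, forced onto a line boundary
    std::uint32_t plain[4];                 // may or may not straddle a boundary

    // The aligned array is guaranteed to sit in one line; the plain one usually
    // does too (it is only 16 bytes), but nothing prevents it from spanning two.
    std::printf("aligned in one line: %d\n",
                lineOf(&aligned[0]) == lineOf(&aligned[3]));
    std::printf("plain   in one line: %d\n",
                lineOf(&plain[0]) == lineOf(&plain[3]));
}
</pre>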
Religious freedom is the freedom to say that two plus two make five.
I get the picture, thank you
I have been graciously given a PCB with LEDs to monitor SOME serial data. It has LEDs for DRX and DTX.
My current serial data code only sends, and I have no connection to any "remote serial device", but I can see both DRX and DTX flashing. Good.
BUT
why is DRX flashing?
Is it because my "serial data communication" is set for "local loop back"?
How do I verify my "modem" settings, AKA "AT" commands?
Thanks
jana_hus wrote: How do I verify my "modem" settings" AKA "AT" commands ?
A specific question gets a list of sites that can help you:
modem at commands - Google Search[^].
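As an illustration, a minimal Arduino-style passthrough sketch (assuming a board with a second hardware UART, e.g. a Mega, and that the modem is wired to Serial1; which commands the module accepts - AT, ATI, AT&V, AT+CEDRXS?, ... - depends on its manual) lets you type AT commands and inspect the replies by hand:
<pre>
// Forward characters between the USB serial monitor and a modem on Serial1,
// so AT commands can be typed and the responses inspected by hand.
void setup() {
    Serial.begin(115200);    // USB serial monitor
    Serial1.begin(115200);   // modem UART; the baud rate is an assumption
    Serial.println("Type AT commands, e.g. AT or AT&V");
}

void loop() {
    if (Serial.available())  Serial1.write(Serial.read());   // PC -> modem
    if (Serial1.available()) Serial.write(Serial1.read());   // modem -> PC
}
</pre>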
modified 10-Sep-24 16:14pm.
FROM https://e-junction.co.jp/share/Cat-1_AT_Commands_Manual_rev1.1.pdf:
Quote: 2.29. Controls the setting of eDRX parameters +CEDRXS
Syntax:
Command: +CEDRXS=[<mode>[,<act-type>[,<requested_edrx_value>]]]
Possible response(s): +CME ERROR: <err>
Command: +CEDRXS?
Possible response(s): +CEDRXS: <act-type>,<requested_edrx_value>[<cr><lf>+CEDRXS: <act-type>,<requested_edrx_value>[...]]
Command: +CEDRXS=?
Possible response(s): +CEDRXS: (list of supported <mode>s),(list of supported <act-type>s),(list of supported <requested_edrx_value>s)
Description
The set command controls the setting of the UE's eDRX parameters. The command controls whether