Explained: NB, FSB, SB, Voltage and more -> Motherboard settings

April 24th, 2009, 02:35
What the heck do all of these settings do…who knows….
Sound familiar??? Well hopefully this helps.

CPU Ratio Control:
This is your CPU multiplier, which, used in conjunction with your FSB frequency, controls the overall frequency your CPU runs at. Increase the CPU multiplier and the front side bus frequency and you overclock your CPU higher… wow, it's magic ;)
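To make that concrete, here's a minimal sketch of the math (the function name is mine, purely for illustration):

```python
def cpu_clock_mhz(multiplier: float, fsb_mhz: float) -> float:
    """CPU core clock = CPU ratio (multiplier) x FSB frequency."""
    return multiplier * fsb_mhz

# e.g. a 9x multiplier on a 333 MHz FSB:
print(cpu_clock_mhz(9, 333))  # 2997 MHz, i.e. roughly 3.0 GHz
```

Raising either the multiplier or the FSB raises the product; most overclocking is just juggling these two numbers against stability.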

FSB Frequency:
Wikipedia definition: “In personal computers, the front side bus (FSB) or system bus is the physical bi-directional bus that carries all electronic signal information between the central processing unit (CPU) and the northbridge.
Some computers also have a back side bus which connects the CPU to a memory cache. This bus and the cache memory connected to it are faster than accessing the system RAM via the front side bus.
The maximum theoretical bandwidth of the front side bus is determined by the product of its width, its clock frequency and the number of data transfers it performs per clock tick. For example, a 32-bit (4-byte) wide FSB with a frequency of 100 MHz that performs 4 transfers/tick has a maximum bandwidth of 1600 MB/second. The number of transfers per tick is dependent on the technology used, with (for example) GTL+ offering 2 transfers/tick, EV6 4 transfers/tick, and AGTL+ 8 transfers/tick.”

What you need to know…increasing the FSB with a high CPU ratio will increase the speed of your computer.
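The quoted bandwidth formula is easy to sanity-check in a few lines of Python (the names are mine, just for illustration):

```python
def fsb_bandwidth_mb_s(width_bytes: int, clock_mhz: float,
                       transfers_per_tick: int) -> float:
    """Peak FSB bandwidth = bus width (bytes) x clock (MHz) x transfers/tick."""
    return width_bytes * clock_mhz * transfers_per_tick

# Wikipedia's example: a 4-byte-wide FSB at 100 MHz doing 4 transfers/tick
print(fsb_bandwidth_mb_s(4, 100, 4))  # 1600 MB/s
```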

FSB Strap:
The FSB strap has been well defined and described by others, so I am providing a link:
http://www.thetechrepository.com/showthread.php?t=30
What you need to know: Adjusting your FSB strap up and down will provide more memory speed options on the newer Intel chipset boards and will also provide you with a faster speed at a given strap. The higher the strap, the faster your computer will run once your FSB reaches that strap. So a 400 FSB strap kicks in at 400 FSB and will not balance out until much higher, say 480 or so. PM me if you need help with this ;)

PCIe Frequency:
Wikipedia: PCI Express, officially abbreviated as PCI-E or PCIe, is a computer expansion card interface format introduced by Intel in 2004. It was designed to replace the general-purpose PCI expansion bus, the high-end PCI-X bus and the AGP graphics card interface. Unlike previous PC expansion interfaces, rather than being a bus it is structured around point-to-point full-duplex serial links called lanes. In PCIe 1.1 (the most common version as of 2007) each lane carries 250 MB/s in each direction. PCIe 2.0 doubles this and PCIe 3.0 doubles it again.
Each slot carries one, two, four, eight, sixteen or thirty-two lanes of data between the motherboard and the card. Lane counts are written with an x prefix e.g. x1 for a single lane card and x16 for a sixteen lane card. Thirty-two lanes of 250MB/s gives a maximum transfer rate of 8 GB/s (250 MB/s x 32) in each direction for PCIe 1.1. However the largest size in common use is x16 giving a transfer rate of 4 GB/s (250 MB/s x 16) in each direction. Putting this into perspective, a single lane has nearly twice the data rate of normal PCI, a four lane slot has a comparable data rate to the fastest version of PCI-X 1.0, and an eight lane slot has a data rate comparable to the fastest version of AGP.
PCIe slots come in a variety of sizes referred to by the maximum lane count they support. A larger card will not fit in a smaller slot but a smaller card can be used in a larger slot. The number of lanes actually connected may be smaller than the number supported by the slot size. Therefore, a 16 lane card cannot be used in an 8 lane slot, though a card may only use 8 lanes out of 16. The number of lanes are "negotiated" during power-up or explicitly during operation. By making the lane count flexible a single standard can provide for the needs of high bandwidth cards (e.g. graphics cards, 10 gigabit ethernet cards and multiport gigabit ethernet cards) while also being economical for less demanding cards.

What you need to know: You can normally increase your PCIe frequency to around 110 to 115 MHz and achieve faster frames per second on your video card, because it will communicate with your bus and CPU faster at a higher frequency, so don't be afraid to experiment.
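The per-lane numbers from the quote above multiply out like this (a sketch; the function name is mine):

```python
def pcie_bandwidth_mb_s(lanes: int, per_lane_mb_s: int = 250) -> int:
    """Per-direction PCIe 1.1 bandwidth: 250 MB/s per lane, scaling with lanes."""
    return lanes * per_lane_mb_s

print(pcie_bandwidth_mb_s(16))  # 4000 MB/s -> the 4 GB/s x16 figure
print(pcie_bandwidth_mb_s(32))  # 8000 MB/s -> the 8 GB/s x32 maximum
```

For PCIe 2.0 you would pass `per_lane_mb_s=500`, since each generation doubles the per-lane rate.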

DRAM Frequency:
Ok, I'll fill in more later, but this is the speed your RAM runs at… essentially, the faster the frequency and the lower the divider, the faster your RAM will read/write memory.
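As a rough sketch of how the memory clock falls out of the FSB and a divider (the function name and ratio values are illustrative assumptions, not BIOS labels):

```python
def dram_mhz(fsb_mhz: float, ratio_num: int, ratio_den: int) -> float:
    """Memory clock derived from the FSB clock through a chipset divider ratio."""
    return fsb_mhz * ratio_num / ratio_den

print(dram_mhz(333, 1, 1))  # 333.0 MHz memory clock at a 1:1 divider
print(dram_mhz(400, 5, 4))  # 500.0 MHz memory clock at 5:4
```

Note the effective DDR data rate is double the memory clock, so a 333 MHz memory clock corresponds to DDR2-667.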

DRAM Timing:
DRAM timings are like the clocks on your CPU but work in reverse: the higher the timing values on your DRAM, the slower your RAM will run. When you hear the term "loosen your RAM timings", we are talking about making the timing numbers bigger and slowing down certain operations performed by your RAM. The tighter (smaller) the values of your RAM timings, the faster your RAM will run.

Each of the timing operations takes a finite amount of time to complete. Each of these operations also has a "timing symbol" associated with it. These symbols are always written like "tCAC", which in this case would mean the Column Access time, and specifies the minimum number of nanoseconds necessary for the operation to complete.

Ok, now for the nuts and bolts... Note: I personally lower and test many different combinations of timings, because each timing has an impact on the other timings. So lower one and test it; if it works, lower the next and test it; if it doesn't work, set it back and lower the next, etc...

The first important timing symbol to consider is tCLK, which is the system clock speed. If your CPU is running at 233 MHz (3.5 x 66 MHz), then your system clock is running at 66 million cycles per second. This equates to about 15 ns for tCLK. (The clock cycle length in nanoseconds is calculated simply by taking the reciprocal of the clock speed; 1 divided by 66.6 million cycles per second = 15 x 10^-9 seconds per cycle, or 15 nanoseconds per clock cycle.) In other words, each clock cycle takes 15 ns to complete. The term "Synchronous" in SDRAM means that every operation in the chip happens in sync with the system clock; therefore any operation that takes 15 ns or less to complete can occur every clock cycle (at 66 MHz), but any operation that takes between 16 ns and 30 ns requires two clock cycles. Note that a 100 MHz system clock speed, such as that found on the latest systems running at 350+ MHz, is equivalent to a 10 ns clock cycle. This of course means that in order for the SDRAM to complete its activities within one clock cycle, at 100 MHz, everything must happen much faster than it does at 66 MHz.
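The tCLK arithmetic in that excerpt can be sketched in a few lines (function names are mine):

```python
import math

def tclk_ns(clock_mhz: float) -> float:
    """Clock cycle length in ns: the reciprocal of the clock frequency."""
    return 1000.0 / clock_mhz

def cycles_needed(op_ns: float, clock_mhz: float) -> int:
    """Whole clock cycles a synchronous operation of op_ns must occupy."""
    return math.ceil(op_ns / tclk_ns(clock_mhz))

print(tclk_ns(100))            # 10.0 ns per cycle at 100 MHz
print(cycles_needed(25, 100))  # a 25 ns operation needs 3 cycles at 100 MHz
```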

Now let's look at the timings of the memory itself. For SDRAM, there are 5 important timings:

1. The time required to switch internal banks (tRP);
2. The time required between /RAS and /CAS access (tRCD);
3. The amount of time necessary to "prepare" for the next output in burst mode (tAC);
4. The column access time (tCAC); and
5. The time required to make data ready by the next clock cycle in burst mode (read cycle time).

Each timing factor will play a role in determining the overall performance in any system. Of the five, two are most commonly referenced in marketing and sales literature: read cycle time and tCAC, though you will rarely, if ever, see them called that. Another important timing is the "access time" or tAC.

It's important to note that when you see an SDRAM chip referred to as either "10 ns" or "8 ns", what is really being measured is the "read cycle time". Note that this is not measuring the same timing that EDO or FPM was when they were specified as 60 ns or 70 ns. For the older (asynchronous) DRAM, the timings given were the total amount of time required for a complete memory access (row access, column access and output). In the case of SDRAM, it is the amount of time required to perform a read operation after the initial read (burst mode) that is being given. See this page for more.

The reason this issue of the speed rating is important, is that the PC100 SDRAM spec requires a maximum of 8ns burst cycle time. This does not mean that a chip marked as 10ns will not actually operate at 8 ns (just as a 60 ns EDO chip may actually operate at 50 ns or faster); it just means that there is no guarantee it will operate faster than 10ns in burst mode, which may not be sufficient for use in a 100 MHz system.

Access time (tAC) is the amount of time it takes to "open" the output line from the prior clock "tick". A control line triggers action by a change in state, which is called a "rising edge" (transition from "0" to "1") or "falling edge" (transition from "1" to "0"). When the line "drops", an operation is signaled to begin; however, there is a period of time that must pass before the signal stabilizes. In order to be able to send data out every 10 ns, this time between the last system clock "tick" (rising edge) and the beginning of the output signal must be fast enough to allow the signal to stabilize before beginning the actual output operation. For the PC100 spec, this time is specified as 6 ns.

Another common marketing term attached to SDRAM modules is either "CAS2" or "CAS3". Unfortunately, this is a misnomer; these should be called CL2 or CL3, since they refer to CAS Latency timings (2 clocks vs. 3 clocks). The CAS Latency of a chip is determined by the column access time (tCAC). This is the time it takes to transfer the data to the output buffers from the time the /CAS line is activated.

The "rule" for determining CAS Latency timing is based on this equation:

CL * tCLK >= tCAC

In English: "CAS Latency times the system clock cycle length must be greater than or equal to the column access time". In other words, if tCLK is 10ns (100 MHz system clock) and tCAC is 20ns, the CL can be 2. But if tCAC is 25ns, then CL must be 3. The SDRAM spec only allows for CAS Latency values of 1, 2 or 3.
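That rule is easy to encode (a sketch; the function name is mine):

```python
import math

def min_cas_latency(tcac_ns: float, clock_mhz: float) -> int:
    """Smallest CL satisfying CL * tCLK >= tCAC, with tCLK = 1000/clock_mhz ns."""
    tclk = 1000.0 / clock_mhz
    return math.ceil(tcac_ns / tclk)

# the article's examples at a 100 MHz system clock (tCLK = 10 ns):
print(min_cas_latency(20, 100))  # 2
print(min_cas_latency(25, 100))  # 3
```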

OK, now let's put this all together: first the CPU activates the row and bank via the /RAS line. After a period of time (tRCD), the /CAS line is activated. When the amount of time required for column access (tCAC) has passed, the data appears on the output line and can be transferred on the next clock cycle. The time that has passed is approximately 50 ns for the first piece of data to become available. Subsequent transfers may be performed via burst mode (every clock cycle), or by cycling /CAS if necessary (which requires an amount of time dictated by tCAC, also called the CAS Latency period). For burst mode operation, the access time (tAC) must be 6ns, so that the signal can stabilize and an output operation can begin by 8 ns after the last one. The transfer of the data takes 2 ns or less, which means that the data is available every 10 ns on a burst transfer--just in time for the next 100 MHz clock signal!

---> source: http://www.pcguide.com/art/sdramTiming-c.html
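The "putting it all together" paragraph can be approximated with a toy sum (illustrative values and function name are mine; real chips add row-activation overhead, which is why the article's figure is closer to 50 ns):

```python
def first_access_ns(trcd_ns: float, tcac_ns: float,
                    transfer_ns: float = 2.0) -> float:
    """Rough latency until the first word appears:
    /RAS-to-/CAS delay + column access + data transfer."""
    return trcd_ns + tcac_ns + transfer_ns

print(first_access_ns(25, 20))  # 47.0 ns with these example values
```

After that first access, burst mode delivers subsequent words every clock cycle, which is where SDRAM earns its speed.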

CPU Voltage (vCore):
In order to increase the speed of a CPU past a certain level, the CPU vCore will need to be increased. This directly controls how much voltage is running through your CPU and will also make your CPU hotter. See my thread on the thermal characteristics of CPUs as they relate to cooling and voltage migration.

CPU PLL Voltage:
As FSB and other frequency adjustments are made, and termination settings are adjusted to compensate for noise in the voltage lines, the CPU PLL may become unbalanced. To compensate for higher FSB overclocks, the PLL voltage must be adjusted upward accordingly. This is a hit-or-miss adjustment which will require fine tuning. For more on PLLs, see the excerpt below:

Frequency synthesizers use PLLs to generate frequencies from reference sources. Although some PLL-based systems use more than one reference, a system can generate multiple frequencies from a single reference. For example, one Cypress CY2291 clock generator replaces many traditional metal-can oscillators on a PC motherboard. This replacement results in a significant reduction in board space and cost. Rarely does an external oscillator generate the reference frequency; instead, a crystal connected to the synthesizer usually supplies the reference frequency.
Figure 1 shows a block diagram of a PLL. Note that the PLL offers two types of correction. The first type is a frequency correction for large differences between the reference and feedback inputs. Applying power to the frequency synthesizer or significantly changing the feedback frequency activates frequency correction. The second correction is a type of fine tuning based on phase corrections.
VCO stands for voltage-controlled oscillator, P is a multiplier in the feedback path, and Q is a divider in the reference path. The phase/frequency detector detects differences in phase and frequency between the reference and feedback inputs. The device generates compensating up and down (increase- and decrease-frequency) signals. If the feedback-input frequency is less than the reference frequency, the pulse width of the up signal is greater than that of the down signal; if the feedback frequency is higher than the reference frequency, the pulse width of the down signal is greater than that of the up signal. These control signals pass through a charge pump and a loop filter to generate a control voltage that feeds into a VCO. The frequency of this oscillator depends on the VCTRL input. At steady state, the VCO frequency is:
FVCO = FREF x P / Q

The output frequency of the PLL is:

FOUT = FVCO / N = FREF x P / (Q x N)

where FVCO = VCO frequency, FREF = reference frequency, P = multiplier (in the feedback path), Q = divider (in the reference path), and N = post divider.

NB Voltage:

Increasing the northbridge voltage allows for higher FSB frequencies while keeping the system stable. See below for a full NB definition:

The northbridge typically handles communications between the CPU, RAM, or PCI Express, and the southbridge. Some northbridges also contain integrated video controllers, which are also known as a Graphics and Memory Controller Hub (GMCH). Because different processors and RAM require different signalling, a northbridge will typically work with only one or two classes of CPUs and generally only one type of RAM. There are a few chipsets that support two types of RAM (generally these are available when there is a shift to a new standard). For example, the northbridge from the NVIDIA nForce2 chipset will only work with Socket A processors combined with DDR SDRAM, the Intel i875 chipset will only work with systems using Pentium 4 processors or Celeron processors that have a clock speed greater than 1.3 GHz and utilize DDR SDRAM, and the Intel i915g chipset only works with the Intel Pentium 4 and the Celeron, but it can use DDR or DDR2 memory.

The name is derived from drawing the architecture in the fashion of a map. The CPU would be at the top of the map at due north. The CPU would be connected to the chipset via a fast bridge (the northbridge) located north of other system devices as drawn. The northbridge would then be connected to the rest of the chipset via a slow bridge (the southbridge) located south of other system devices as drawn.

The northbridge on a particular system's motherboard is the most prominent factor in dictating the number, speed, and type of CPU(s) and the amount, speed, and type of RAM that can be used. Other factors such as voltage regulation and available number of connectors also play a role. Virtually all consumer-level chipsets support only one processor series, with the maximum amount of RAM varying by processor type and motherboard design. Pentium-era machines often had a limitation of 128 MB, while most Pentium 4 machines have a limit of 4 GB. Since the Pentium Pro, the Intel architecture can accommodate physical addresses larger than 32 bits, typically 36 bits, which gives up to 64 GB of addressing, though motherboards that can support that much RAM are rare because of other factors (operating system limitations and expense of RAM).
A north bridge typically will only work with one or two different southbridge ASICs; in this respect, it affects some of the other features that a given system can have by limiting which technologies are available on its southbridge partner.

FSB Termination Voltage:
Source -> Line -> Component -> Short line -> Termination

The "lines" in this case are signal traces. High-speed switching operations can cause reflections on the line (in essence, this is resonant noise caused by high-speed switching on the lines as the memory controller gates on and off to place data on the bus; nothing more than rising and falling voltages). Clearly enough, the faster the signal switches (higher frequency memory), the more noise.
So, to compensate for noise and reflection as the NB voltage and FSB frequency are raised, the FSB termination voltage must also be raised to mirror the NB voltage increase and the frequency increase. This will in effect smooth the ripple caused by increasing one end and not the other ;) No exact formula here, folks; trial and error to find the best fit… (parts of this description were borrowed from FCG; thanks for the contribution, Chris)

South Bridge Voltage:

Increasing the southbridge voltage is usually only needed if you are running a high overclock and are having problems with a PCI device, hard drive, or high-definition onboard audio. Increasing the SB voltage may help under these conditions. In short, the southbridge, also known as the I/O Controller Hub (ICH), is a chip that implements the "slower" capabilities of the motherboard in a northbridge/southbridge chipset computer architecture. The southbridge can usually be distinguished from the northbridge by not being directly connected to the CPU. Rather, the northbridge ties the southbridge to the CPU.


Overview
Because the southbridge is further removed from the CPU, it is given responsibility for the slower devices on a typical microcomputer. A particular southbridge will usually work with several different northbridges, but these two chips must be designed to work together; there is no industry-wide standard for interoperability between different core logic chipset designs. Traditionally this interface between northbridge and southbridge was simply the PCI bus, but since this created a performance bottleneck, most current chipsets use a different (often proprietary) interface with higher performance.

The name is derived from drawing the architecture in the fashion of a map. The CPU would be at the top of the map at due north. The CPU would be connected to the chipset via a fast bridge (the northbridge) located north of other system devices as drawn. The northbridge would then be connected to the rest of the chipset via a slow bridge (the southbridge) located south of other system devices as drawn.

The functionality found on a contemporary southbridge includes:
• PCI bus. The PCI bus support includes the traditional PCI specification, but may also include support for PCI-X and PCI Express.
• ISA bus or LPC Bridge. Though the ISA support is rarely utilized, it has interestingly managed to remain an integrated part of the modern southbridge. The LPC Bridge provides a data and control path to the Super I/O (the normal attachment for the keyboard, mouse, parallel port, serial port, IR port, and floppy controller) and FWH (firmware hub which provides access to BIOS flash storage).
• SPI bus. The SPI bus is a simple serial bus mostly used for firmware (e.g., BIOS) flash storage access.
• SMBus. The SMBus is used to communicate with other devices on the motherboard (e.g. system fans).
• DMA controller. The DMA controller allows ISA or LPC devices direct access to main memory without needing help from the CPU.
• Interrupt controller. The interrupt controller provides a mechanism for attached devices to get attention from the CPU.
• IDE (SATA or PATA) controller. The IDE interface allows direct attachment of system hard drives.
• Real Time Clock. The real time clock provides a persistent time account.
• Power management (APM and ACPI). The APM or ACPI functions provide methods and signaling to allow the computer to sleep or shut down to save power.
• Nonvolatile BIOS memory. The system CMOS, assisted by battery supplemental power, creates a limited non-volatile storage area for system configuration data.
• AC97 or Intel High Definition Audio sound interface
Optionally, the southbridge will also include support for Ethernet, RAID, USB, audio codec, and FireWire. Rarely, the southbridge may also include support for the keyboard, mouse, and serial ports, but normally these devices are attached through another device referred to as the Super I/O.

Some of the newer settings:

Load Line Calibration/CPU Voltage Damper:
Load Line Calibration/Voltage Damper is simply a tool that eliminates voltage droop. Most motherboards today exhibit an effect called voltage droop, where the voltage set for the CPU drops or lowers as load is applied to the CPU. Setting Load Line Calibration or Voltage Damper to Enabled eliminates this.

Voltage References:
What it is:
A voltage reference is an electronic device (circuit or component) that produces a fixed (constant) voltage irrespective of the loading on the device, power supply variation and temperature. It is also known as a voltage source but in the strict sense of the term, a voltage reference often sits at the heart of a voltage source.
The distinction between a voltage reference and a voltage source is, however, rather blurred especially as electronic devices continue to improve in terms of tolerance and stability.

Why we can set it:
All voltages across the motherboard can potentially droop, so setting the reference high helps eliminate load droop or noise. This will likely only help with extreme overclocking stability and should normally be set to Auto.

April 24th, 2009, 02:40
Do me a favor... if you don't know what you are doing, please don't overclock your MB. If you want to tweak your pooter a bit, this thread should give you a better understanding of what each part of your MB does. Leaving all your MB settings in Auto mode is a total waste of a computer. Most of the time the voltage is set low and the CPU bus settings are fubared... Mike

April 24th, 2009, 09:13
Will you buy me a new one if I mess this one up :wavey:
Thanks, at least I have some idea now

April 24th, 2009, 10:52
Sure tang buddy..... how you want your pooter.... rare or well done... lol

[quote=Henry;162738]Will you buy me a new one if i mess this one up :wavey:
thanks at least i have some idea now[/quote]