Shared Video Memory and Memory Bandwidth Issues? 37
klystron2 asks: "Does shared video memory consume a huge amount of memory bandwidth? We all seem to know that a notebook computer with shared video/main memory will have performance drawbacks.... But what exactly are they? It's easy to see that the amount of main memory decreases a little bit, but that shouldn't make a big difference if you have 1GB of RAM. Does the video card trace through memory every time the screen is refreshed? Therefore consuming a ton of memory bandwidth? If this is the case then the higher the resolution and the higher the refresh rate, the lower the performance of the system, right? I have searched the Internet for an explanation on shared memory and have come up empty. Can anyone explain this?"
Pro/Con (Score:3, Interesting)
Wrt. performance, the benefits of a separate frame buffer outweigh those of shared memory, in my experience. I'm not sure whether this also holds wrt. the performance/power-consumption ratio (use a suitable definition), however. Especially when the (DVI LCD/TFT) screen already has a frame buffer and VSync is only 20-40Hz. (Ditch the GPU altogether?)
Anyone with ideas, data?
--
As far as we know, our computer has never had an undetected error -- Weisert
I'd assume so, too (Score:5, Interesting)
I presume today's bus speeds, processor caches and other buffers are sufficiently fast and large to share the memory without too noticeable an effect...
'T'ain't nuthin' compared to a Sinclair ZX-81... (Score:5, Informative)
Since the greeblie had no interrupts and they were too lazy to quantise the BASIC interpreter so that they could run it in the interframe and still generate reasonably consistent sync pulses, the screen went away completely while programs ran. A modern monitor would go postal, faced with a constantly appearing/vanishing sync pulse train but TVs are kind of used to dealing with cruddy signals.
I think the Sinclair was branded a Timex in the UK.
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:3, Informative)
No, Timex sold the Sinclair ZX81 in North America. Sinclair Research Ltd. sold the Sinclair ZX81 in the UK. The US variant was named the Timex Sinclair 1000.
The Timex 1000 was practically identical to the ZX81, except for a few changes on the circuit board and a whopping 2K of RAM instead of the 1K that the ZX81 had.
Steve
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:2)
It's sort of a fond memory, in the same sense that you might "fondly" remember the first time you got sick from drinking too much...
I stand corrected, ta! (Score:2)
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:2)
The ZX-80 suffered from that, but the ZX-81 could display and execute.
It also had a fast mode, so you could ignore the display and use the whole 3.5MHz for your app.
As described here [old-computers.com]
I'm learning a lot today (-: (Score:2)
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:2)
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:1)
Later models came with so-called FastRAM, which wasn't affected by the slowdown, but caused all sorts of trouble, as programs couldn't deal with the fact that this RAM wasn't accessible by the grafic c
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:1)
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:2)
normally don't have FastRAM. Even the 512K RAM expansion wasn't real FastRAM. It was called SlowRAM and had the performance of ChipRAM, but wasn't accessible to the chipset. (The worst of both worlds.)
Martin Tilsted
Re:'T'ain't nuthin' compared to a Sinclair ZX-81.. (Score:1)
Pixels are read from RAM every time (Score:5, Informative)
Yes. The pixels on the screen are read out every single frame time (i.e., 60 to 75 times each second). The DAC (Digital-to-Analog Converter) must be fed the pixel data every time; with video in main RAM, there is no other place to store this image data, because main memory is the buffer. The product of the frame rate, resolution, and color depth tells you how much bandwidth is consumed.
The exact performance impact is not easy to predict though. Where it gets tricky is with CPUs that have large L1, L2, and L3 caches. It is possible for the CPU to be running at 100% while the video is being read if the CPU is finding all the data and instructions in the cache. But if the CPU must access main RAM, then there will be competition.
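To make the parent's product concrete, here's a quick back-of-the-envelope sketch (the display mode and the PC2100 peak figure are illustrative assumptions, not measurements from any particular machine):

```python
# Hypothetical figures for illustration: a 1024x768 panel at 60 Hz, 32-bit
# color, against a single-channel DDR-266 (PC2100) peak of ~2.1 GB/s.
def refresh_bandwidth(width, height, bits_per_pixel, refresh_hz):
    """Bytes per second the display engine must read just to scan out."""
    return width * height * (bits_per_pixel // 8) * refresh_hz

bw = refresh_bandwidth(1024, 768, 32, 60)
peak = 2_100_000_000  # PC2100 peak, bytes/s (illustrative)
print(f"scan-out: {bw / 1e6:.0f} MB/s ({100 * bw / peak:.1f}% of PC2100 peak)")
```

Turning the resolution or refresh rate up scales the scan-out traffic linearly, which is exactly the questioner's intuition.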
Re:Pixels are read from RAM every time (Score:1)
I would imagine that the shared memory part comes into play when you have a lot of clipping. But then again, if you're buying a shared memory system, I guess the minor advantage of 3 Meg RAM isn't real
Cost & space efficiency vs. performance (Score:3, Informative)
No, these systems have no separate frame buffer - main RAM is the buffer. Even when nothing is changing on the screen, the video subsystem is reading data at the full frame rate from RAM.
Althou
Band*I*Width? (Score:5, Informative)
But seriously, you may want to take a look at this [tomshardware.com] Tom's Hardware article detailing the weaknesses of an integrated chip.
For those looking for the quick answer, I'll do my best to summarize. First off, since integrated graphics tend to be low-cost solutions, transistor counts are nowhere near those of current add-in boards. From the article, Nvidia's FX5200 has 47 million transistors (FX5600=80 million and FX5900=130 million), while their onboard solution (equivalent to a GeForce4 MX440) has only 27 million.
Then, there's the question of memory bandwidth. Dual channel DDR 400 has a peak of 6.4GB/s, which is shared, while an equivalent GeForce4 MX440 would have a dedicated 8GB/s.
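For the curious, the 6.4GB/s figure falls straight out of the bus math: two channels, each 64 bits (8 bytes) wide, at 400 million transfers per second. A quick sketch:

```python
# Peak theoretical memory bandwidth from channel count, bus width, and
# transfer rate. Real sustained bandwidth is of course lower.
def peak_bandwidth(channels, bus_bits, transfers_per_sec):
    """Peak bandwidth in bytes per second."""
    return channels * (bus_bits // 8) * transfers_per_sec

# Dual-channel DDR400: 2 channels x 64-bit x 400 million transfers/s
print(peak_bandwidth(2, 64, 400_000_000))  # 6.4 GB/s
```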
Now, to your question. Does this consume a ton of bandwidth and affect performance? Well, that would all depend on what you're doing with it.
If you're running 3D games and the like, then both performance and bandwidth will be an issue and limit your framerates. Comparing the previous review with this [tomshardware.com] review of add-in boards shows about a 25% reduction in framerate (at 1024x768) between an add-in GeForce4 MX440 and an nForce2 integrated chipset in UT2003, and an almost 40% reduction in 3DMark 2001. Since the machines were not identical, don't take the numbers as gospel, but they were similar enough to make a meaningful comparison IMHO.
That being said, for normal 2D work, bandwidth utilization is negligible and shouldn't seriously impact performance as shown by this [tomshardware.com] SysMark 2002 test. AFAIK, this doesn't take into account extremely intensive RAM->CPU loads, but I wouldn't expect results to vary significantly, since memory requirements for 2D work are relatively low.
Be warned, though, that Tom's Hardware did note image-quality issues with most of the integrated chips, which they theorized was the result of low-cost manufacturing, not a limit of the technology itself. This theory is bolstered by the fact that their low-cost add-in card (Radeon 9200) suffered the same problems.
Re:Band*I*Width? (Score:2)
I don't think he is concerned with 3D rendering performance. He is concerned with the impact on main memory bandwidth, since a part of main memory is being used as a frame buffer.
For a constant image to appear on screen, the frame buffer must be read for each frame displayed: 70 times per second at 70Hz. This can add up to hundreds of megabytes per second, taking that bandwidth away from the CPU.
These shared main memory/frame
Re:Band*I*Width? (Score:1)
luckily for you all, my desktop is a 1.5GHz Athlon XP, 712 megs of DDR 266 RAM, and a GeForce 3 (top of the line when I bought it) video card, and I run it at 1024x768 32bpp,
I ran SiSoft Sandra on both of these computers, run 3D Studio Max animation on both of them, and use them both daily.
the stats: my
Re:Band*I*Width? (Score:2)
Did you measure the main memory speed? I would have thought the laptop would be about 10% slower than the desktop, considering the resolution/colour depth and the usage of DDR main memory.
I think i'm (hopefully) sitting pretty thanks to DDR.
Definitely. The quicker main memory becomes, the more easily they can get away with profit-maximizing techniques like this. Dedicated frame buffer memory of equal speed to main memory (all other things being equal) will always be faster. But if i
Let's do some sums (Score:4, Informative)
1280 x 1024 x 32 x 75 = 3145728000 bits/second just to display
That's 375 MB/s.
If you've got DDR 2700 memory, that's a peak rate of around 2540 MB/s.
Therefore, the screen refresh alone is taking up 15% of your memory bandwidth.
You've also got to be drawing the screen every frame; let's say it's doing this 25 times a second, that the game you're playing has an average overdraw per pixel of 1.5, and that it hits the z-buffer on average twice per pixel.
You've got 125MB/s used up by colour writes and 125MB/s used up by z-buffer accesses (assuming a 16-bit buffer); that uses up another 10% of your maximum data rate.
Overall, then, a quarter of the maximum available bandwidth is being used by the video card.
Re:Let's do some sums (Score:2)
Re:Let's do some sums (Score:2)
For textures, the remaining eight bits are often used as the alpha (transparency) channel.
Re:Let's do some sums (Score:2)
I think I may have written Mb rather than MB, I always forget which one is bits and which one is bytes. My bad.
Re:Let's do some sums (Score:2)
Tom's Hardware have an article about that. (Score:3, Insightful)
Funny, I was just reading an article [tomshardware.com] over on Tom's Hardware Guide [tomshardware.com] about that.
The article benchmarks three different boards with integrated graphics solutions (Intel i865G , nForce2, & SIS 651) using both the integrated graphics hardware, and a $50 graphics card.
Unsurprisingly, in 3D applications all have quite poor performance [tomshardware.com]; only the nForce2 system has acceptable performance, and even then only with older games at low resolution.
More important to your question, they also ran comparative benchmarks using Windows office applications [tomshardware.com] with both the integrated graphics and the $50 card. The graphs clearly show that there is no effective difference in performance, and that the benchmark results are largely CPU-bound.
In concussion, I would not expect integrated graphics to hut general computing performance. Though I would of course check that the graphics performance is adequate, as it may not be possible to update in the future.
Re:Tom's Hardware have an article about that. (Score:4, Funny)
Lol... stop banging your head on the desk - then you'll stop getting concussions, and integrated graphics will cease and desist making a hut over computer performance.
Re:Tom's Hardware have an article about that. (Score:2)
I wouldn't put too much emphasis on what you read at THG. Once upon a time, an article at Tom's hardware tried to claim that AGP provided no gains over PCI, by comparing current (at the time) PCI and AGP 3D cards. A stupid stupid way to prove the point.
The AGP cards were new and working in glorified PCI mode. Not using advanced AGP features. What's more, the software being used to benchmark "PCI vs AGP" exploited fill rate limits
One point about memory (Score:4, Informative)
When you read a location of RAM, the RAM chips have to read the entire row that location lives in. For a memory of 256 million locations (where a location could be a bit, a byte, or even a dword, depending upon the memory's layout), reading a single location means loading all 16 thousand locations of its row into the sense amps of the chip.
Now, once you've fetched the data into the sense amps, reading the rest of the row out can happen much faster than that initial access.
CPUs tend to access things more or less sequentially when it comes to code (modulo jumps, calls, interrupts, and context switches), but data isn't quite as nice.
Video, on the other hand, is great from the DRAM controller's point of view - it can grab an entire row of data and shove it into the display controller's shift register. And wonder of wonders, the next request from the video refresh system is going to be the very next row!
So while video refresh does take bandwidth, in many ways driving the video controller is "cheaper" than feeding the CPU.
(the details in this post GREATLY simplified for brevity)
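In the same simplified spirit, the open-row point can be illustrated with a toy cost model. The row size and cycle counts below are made up; only the ratio between the two access patterns matters:

```python
# Toy DRAM cost model (hypothetical numbers): opening a new row is expensive,
# streaming from the already-open row buffer is cheap.
ROW_WORDS, MISS_COST, HIT_COST = 1024, 30, 2   # words/row, cycles

def access_cost(addresses):
    """Total cycles for a sequence of word addresses under the toy model."""
    cycles, open_row = 0, None
    for a in addresses:
        row = a // ROW_WORDS
        cycles += HIT_COST if row == open_row else MISS_COST
        open_row = row
    return cycles

sequential = list(range(4096))                         # scan-out-like pattern
scattered = [(a * 2053) % 4096 for a in range(4096)]   # row-hopping pattern
print(access_cost(sequential), access_cost(scattered))
```

The sequential pattern pays the row-open cost only once per row, which is why video refresh is comparatively kind to the memory controller.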
Depends on the implementation... (Score:5, Informative)
However, in today's systems it's FAR more complicated than this.
First, some older implementations, particularly the Intel 810, used a 4MB display cache. The net effect was that display refresh was generally served from a secondary memory and didn't interfere with main memory bandwidth. As well, Intel used some technology Chips & Tech developed that basically did run-length-encoded compression on the display refresh data (look right at your screen now: there's a LOT of white space, and RLE will shrink that substantially.)
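As a rough sketch of why run-length encoding helps with display refresh, here's a toy RLE pass over a mostly-white scanline (the pixel values and run format are just illustrative, not any chip's actual scheme):

```python
# Run-length encode a scanline: collapse each run of identical pixels into a
# (value, count) pair. Mostly-uniform scanlines compress dramatically.
def rle(pixels):
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

WHITE, BLACK = 0xFFFFFF, 0x000000
scanline = [WHITE] * 600 + [BLACK] * 24 + [WHITE] * 400  # mostly white space
encoded = rle(scanline)
print(len(scanline), "pixels ->", len(encoded), "runs")
```

A 1024-pixel line of mostly white space shrinks to a handful of runs, so the refresh engine moves far fewer bytes over the shared bus.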
Today most chip sets incorporate a small buffer for the graphics data and compression techniques to minimize the impact of display refresh on bandwidth.
But wait -- it gets even MORE complicated. With integrated graphics on the north bridge of the chip set, the memory controller knows what both the CPU and the graphics core want to access. So the chip set actually does creative scheduling of the memory accesses so that the CPU doesn't get blocked unless absolutely necessary. Most of the time the CPU is either getting its memory needs serviced by its own cache, or it's getting (apparently) un-blocked access to memory. So the impact of graphics is much less than the simple equation above would suggest.
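As a sketch of the kind of arbitration described (the watermark scheme and numbers here are purely hypothetical, not any real chip set's policy):

```python
# Toy arbiter: the display FIFO wins the memory cycle only when it is close
# to underrunning; otherwise pending CPU requests go first.
def arbitrate(cpu_reqs, display_fifo_level, low_watermark=4):
    """Return which requester gets this memory cycle."""
    if display_fifo_level < low_watermark:
        return "display"   # must refill the FIFO or the screen glitches
    return "cpu" if cpu_reqs else "display"

print(arbitrate(cpu_reqs=3, display_fifo_level=8))  # CPU goes first
print(arbitrate(cpu_reqs=3, display_fifo_level=2))  # display must refill
```

Because the display side buffers ahead, its deadline is soft on a cycle-by-cycle basis, and the controller can slot refresh reads into gaps in the CPU's traffic.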
Finally... we now have dual-channel memory systems. Even more tricks to keep the graphics and CPU memory accesses separate come into play here.
So, the short answer is yes, there's an impact, but it used to be much worse. Innovative design techniques have greatly reduced it, so that in non-degenerate cases it doesn't affect the system much. In the degenerate case of an app that never hits the cache and does nothing but pound on the memory system, however, you'll see an impact in line with the bandwidth equation above.