I'm creating plots of various technology trends sourced from the NVIDIA and AMD list of GPUs: https://owensgroup.github.io/gpustats/ I hope this is useful for the community. The plots are automatically generated from direct parsing of the NVIDIA/AMD pages. I welcome suggestions of improvements and other plots that could be useful.
Three notes I wanted to make for folks who edit this page:

1. Consistency across tables (and with the tables on the AMD page) is very helpful. It is much easier when a column in one table is labeled the same way as the identical column in another table. The source code shows there are lots of special cases I had to handle.
2. It is helpful when columns describe what they do (and don't have to resort to a footnote). If a column is labeled X but its contents are actually X (Y) or X Y, where Y is in italics (for instance), that's troublesome.
3. There's some discussion here about providing too much information on this page. From my point of view, this is the best single place to put information, and I am happy to see more information rather than less.
Also posting on AMD talk page for feedback there. --Jowens (talk) 17:44, 31 August 2017 (UTC)[reply]
List for the mobility cards (xxxm) needs to also list what type of MXM interface it uses as this has changed several types over the course of the mobility GPUs (173.224.162.96 (talk) 16:23, 4 May 2016 (UTC))[reply]
Why would nvidia create 600 cards with old 40nm technology? This looks like 3rd party manufacturers are trying to rip off the public. — Preceding unsigned comment added by 101.171.213.83 (talk) 00:40, 4 June 2012 (UTC)[reply]
I don't like this paragraph at all. I'm working with GLSL for my degree project, and this could mislead the reader. For each version, "GLSL x.x" should be replaced by "at least GLSL x.x", because OpenGL does not always limit the GLSL version. In addition, OpenGL 1.5 supports at least GLSL 1.0, which is stated in the spec. I have even tried GLSL 1.1 with OpenGL 1.5 and it works properly. In fact, it depends more on the graphics card than on the OpenGL version. I will change it if nobody says anything. —Preceding unsigned comment added by Capagris (talk • contribs) 16:23, 1 October 2009 (UTC)[reply]
I think it's important to have a separate section, since integrated GPUs are a class in their own right. At the very least, desktop and mobile GPUs that are actually IGPs should be clearly marked as such.
p.s. Who put the "first, second, third generation" marketing BS in? —Preceding unsigned comment added by 207.38.162.22 (talk) 15:23, 18 April 2009 (UTC)[reply]
Why is this article considered 'too technical' and yet the ATI equivalent article Comparison of ATI Graphics Processing Units is not? Also, the 7900GX2 is of course 2 GPUs on one board; in this light, shouldn't the TOTAL MT/s and pipes x TMUs x VPUs be stated, rather than the specs of half the card?
Why is this version of the 6800 not listed here? My card, as listed in nVidia's nTune utility is a standard GeForce 6800 chip with 512MB of memory, with clock speeds of 370MHz Core and 650MHz Memory. These were the factory clock speeds I received the card with; it was purchased from ASUS. --IndigoAK200 07:34, 27 November 2006 (UTC)[reply]
This seems like a comparison of graphics cards not of GPU chips ... and in that vein, why is there no mention of nVidia's workstation products (Quadros)?--RageX 09:00, 22 March 2006 (UTC)[reply]
This article especially needs an explanation of the table headers (eg. what is Fillrate? What is MT/s?) ··gracefool |☺ 23:56, 1 January 2006 (UTC)[reply]
NVIDIA's website indicates that the 7300GS has a 400MHz RAMDAC gpu. Is there a reason that everyone is changing that to 550MHz? Where did you acquire that information? --bourgeoisdude
The process fabrication (gate length) should be listed in nm instead of μm; the fractional values are quite cumbersome. Besides, the industry now more commonly uses nm than μm, now that we see processing units being announced on a 45 nm process.
The bus column does not list PCI for many of the cards in the FX family and the Geforce 6200. I suspect there are other mistakes of excluding the PCI bus from the MX family. I will add PCI as one of the bus options for the 6200 and 5500, as I am sure these two cards support PCI. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:45, 10 May 2006 (UTC)
I have made the 6200 PCI a separate row because of its differences from the other 6200 versions (it boasts an NV44, not an NV44a core, yet doesn't support TurboCache). I have named this section the 6200 PCI. Please correct me if you think this isn't suitable. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:52, 10 May 2006 (UTC)
Wouldn't it be apropos to include a column for the highest version of OpenGL supported? Not all of us use Windows. :) OmnipotentEntity 21:53, 22 June 2006 (UTC)[reply]
Bandwidth is calculated incorrectly. I've changed it to use GB/s, where 1 GB/s = 10^9 bytes/second. To properly calculate bandwidth in GiB/s it's (bus width × effective memory clock) / 8 (bits/byte) / 1073741824 (bytes/GiB).
these also work for calculating memory bandwidth. — Preceding unsigned comment added by 220.235.101.12 (talk) 08:36, 7 February 2012 (UTC)[reply]
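The arithmetic described above can be sketched in a few lines. This is a minimal illustration only (the function names are mine, not from the discussion): bus width in bits times effective memory clock gives bits per second, which is then divided down to decimal GB/s or binary GiB/s.

```python
# Sketch (not from the discussion): memory bandwidth from bus width and
# effective memory clock, in decimal GB/s and binary GiB/s.

def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Decimal gigabytes/second: bits/s -> bytes/s -> divide by 10^9."""
    bits_per_second = bus_width_bits * effective_clock_mhz * 1e6
    return bits_per_second / 8 / 1e9

def bandwidth_gib_s(bus_width_bits, effective_clock_mhz):
    """Binary gibibytes/second: divide by 2^30 = 1073741824 instead of 10^9."""
    bits_per_second = bus_width_bits * effective_clock_mhz * 1e6
    return bits_per_second / 8 / 1073741824

# Example: a 256-bit bus with a 1400 MHz effective memory clock
print(bandwidth_gb_s(256, 1400))   # 44.8 (decimal GB/s)
print(bandwidth_gib_s(256, 1400))  # ~41.72 (binary GiB/s)
```

A 256-bit bus at 1400 MHz effective clock gives 44.8 GB/s, matching the figure used in the tables on this page; the GiB/s number is about 7% smaller, which is exactly the discrepancy being debated here.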
The GeForce4 MX does not have a VPU of any kind. Nvidia's drivers allow certain vertex programs to use the NSR that's been around since the NV11 days, but only if the (very simple) vertex program can be run on the GPU; otherwise it's done by the CPU. http://www.beyond3d.com/forum/showthread.php?t=142
The GeForce FX 5200 is a 4 pixel unit / 1 texture unit design, as stated here http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=11&orderby=release_date&order=Order&cname= and here http://www.techreport.com/etc/2003q1/nv31-34pre/index.x?pg=2
Updated note to reflect that NV31, NV34 and NV36 all only have 2 FPU32 units as described here http://www.beyond3d.com/forum/showthread.php?p=512287#post512287
DirectX 8.0 introduced PS 1.1 and VS 1.1. DirectX 8.1 introduces PS 1.2, 1.3 and 1.4.
source: shaderx,
http://www.beyond3d.com/forum/showthread.php?t=5351
http://www.beyond3d.com/forum/showthread.php?t=12079
http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXG81.ppt
Thus NV20 was DirectX 8.0, but NV25 and NV28 supported the added ability of PS 1.2 and 1.3 as introduced in 8.1.
I've listed any card with a T&L unit as having 0.5 VPUs since it can do vertex processing, but it is not programmable. This also allows better compatibility with Radeon comparisons.
The sheets are too tall to see the explanation columns and card specs at the same time, if I want to compare, I need to scroll back and forth. Could someone edit the tables to have the column explanations at both the top and the bottom?
The fillrate listed for each graphics card on both the Comparison of ATI and Comparison of NVIDIA GPU pages is based off of: "core speed * number of pixel shaders" for discrete shaders or "core speed * number of unified shaders / 2" for unified shaders.
The fillrate listed would be correct only if the 8800GTS had 128 unified shaders (500 * 128/2 = 32,000) instead of 96. The correct fillrate should be 24,000 (500 * 96/2 = 24,000).
Should this be changed, or do we need a source explicitly stating 24,000 MT/s as the fillrate?
Nafhan 20:44, 24 January 2007 (UTC)[reply]
Found page on NVIDIA homepage listing 24000 MT/s as fillrate for 8800GTS, and made update.
Nafhan 21:21, 26 January 2007 (UTC)[reply]
It's all wrong. Fillrate is the number of pixels that can be written to memory, so core speed × number of ROPs; the 8800GTS will then have 500 × 20 = 10000 MT/s. To confirm, I ran a benchmark and got "Color Fill : 9716.525 M-Pixel/s".
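The competing conventions argued over in this thread can be captured in a few lines. This is a sketch of the arithmetic only (function names are mine): the page's shader-based texture fillrate (discrete and unified variants) versus the ROP-based pixel fillrate, using the 8800 GTS figures quoted above.

```python
# Sketch of the fillrate conventions discussed in this thread (names mine).

def texture_fillrate_discrete(core_mhz, pixel_shaders):
    # Page convention for discrete-shader cards: core speed * pixel shaders
    return core_mhz * pixel_shaders

def texture_fillrate_unified(core_mhz, unified_shaders):
    # Page convention for unified-shader cards: core speed * shaders / 2
    return core_mhz * unified_shaders / 2

def pixel_fillrate(core_mhz, rops):
    # ROP-based figure: pixels written per clock * core speed
    return core_mhz * rops

# 8800 GTS as quoted in the thread: 500 MHz core, 96 unified shaders, 20 ROPs
print(texture_fillrate_unified(500, 96))  # 24000 MT/s, the figure NVIDIA lists
print(pixel_fillrate(500, 20))            # 10000 MP/s, close to the 9716 benchmark
```

Both numbers from the thread are "correct" under their respective definitions; the disagreement is over which definition the column should use.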
The graphics library version for this card is given as 9 in this entry, which is not true. It is not even a complete 8.1; proof = http://translate.google.com/translate?hl=en&sl=zh-TW&u=http://zh.wikipedia.org/wiki/GeForce4&sa=X&oi=translate&resnum=3&ct=result&prev=/search%3Fq%3Dnvidia%2BNV18b%2Bengine%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DG —The preceding unsigned comment was added by Acetylcholine (talk • contribs) 18:22, 24 February 2007 (UTC).[reply]
The PCX 4300, PCX5300, PCX5750, PCX5900, and PCX5950 need to be added
Reply: I just added the PCX 4300. Dominar_Rygel_XVI (talk) 15:41, 26 February 2010 (UTC)[reply]
Hi,
There are at least two very important values missing: the vertex throughput and the power consumption. The fillrate does not say much today; the overwhelming fillrate is mostly used for anti-aliasing, and in my opinion it is no criterion for buying a new GPU.
As for me, I want to compare my current hardware to what I might buy. Take this for example:
| Model | Year | Code name | Fab (nm) | Bus interface | Memory max (MiB) | Core clock max (MHz) | Memory clock max (MHz) | Config core1 | Fillrate max (MT/s) | Vertices max (MV/s) | Power consumption est. (W) | Bandwidth max (GB/s) | Bus type | Bus width (bit) | DirectX | OpenGL | Features |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce FX 5900 XT | Dec 2003 | NV35 | 130 | AGP 8x | 256 | 400 | 700 | 3:4:8:8 | 3200 | less than 356, more than 68, maybe 316 | | 22.4 | DDR | 256 | 9.0b | 1.5/2.0** | |
| GeForce 7600 GT | Mar 2006 | G73 | 90 | PCIe x16, AGP 8x | 256 | 560 | 1400 | 5:12:12:8 | 6720 | 700 | | 22.4 | GDDR3 | 128 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, Dual Link DVI |
| GeForce 7900 GS | May 2006 (OEM only), Sept 2006 (Retail) | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 7:20:20:16 | 9000 | 822.5 | | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI |
| GeForce 7900 GT | Mar 2006 | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 8:24:24:16 | 10800 | 940 | | 42.2 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI |
| GeForce 7950 GT | Sept 2006 | G71 | 90 | PCIe x16 | 256, 512 | 550 | 1400 | 8:24:24:16 | 13200 | 1100 | | 44.8 | GDDR3 | 256 | 9.0c | 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, HDCP, 2x Dual Link DVI |
You can find the est. power consumption at http://geizhals.at/deutschland/?cat=gra16_256 but I believe it is not allowed to take it from there...
Does anyone know where to get real tech specs from nvidia?
I'd like to see power, too, but estimated power is problematic. The only dependable number found in the specs is TDP and with a suitable note containing the words "less than" it's useful. —Preceding unsigned comment added by 68.183.61.32 (talk) 16:37, 3 December 2010 (UTC)[reply]
JPT 10:02, 2 March 2007 (UTC)
It would be very helpful to add columns for connection types, both motherboard (PCI, PCI Express, PCI Express 2.0) and video (DVI, HDMI, S-Video, VGA). The information must exist somewhere, but is nearly impossible to find. I recently bought a PC from one of the 'big 3,' and the graphics card does not have the promised S-video output; I don't think the sales staff lied to me, it's just that the personnel who interact with customers are limited to marketing blandishments like "this one is for games and that one is for word processing." I think many people choose cards based on what will connect to the equipment they already have, especially where some formats are difficult to convert to others, so making that information accessible would help a lot.TVC 15 (talk) 18:13, 16 July 2008 (UTC)[reply]
There are models that have additional suffixes (e.g. 7600 GS KO); should we add entries for these cards, or explain what the suffixes mean on this page? Otherwise this is a fantastic reference page. Thanks everyone!
66.194.187.140 18:53, 1 April 2007 (UTC)Scott[reply]
I've changed the layout back to how it was a week or so ago, keeping the desktop graphics cards together and the laptop cards together - it is far easier to compare cards this way, as the Go series is not really comparable to the desktop range anyway. Also - what is the difference between the 7950GX2 and the 7900GX2? They use the same core running at the same clock speeds; in fact, the only difference apparent from this article is the date of release, and since the earlier one was OEM, it implies that they are the same card! Yazza 18:26, 21 May 2007 (UTC)[reply]
DirectX 8.1 introduced features supported by NV25/NV28 in the form of Pixel Shader 1.3 (and VS 1.1 from DX 8.0). DirectX 9.0 contained support for the extended shader model 2 supported by NV3x (HLSL targets PS2_a and VS2_a). The DirectX section and the relevant GPU sections have been modified.
I would like to inquire about the latest video card. Why is the GeForce 8800 not listed yet? If I am not wrong, this card is already available in the USA. I got the information from the latest edition of PC Gamer, September 2007. --Siva1979Talk to me 08:45, 20 July 2007 (UTC)[reply]
Recent NVIDIA Quadro FX datasheets boast a 12-pixel-per-clock rendering engine across all product ranges, even though many of these products do not have 12 pixel/vertex shaders, or even 12 raster operation engines, or even generate 12 pixels per clock. Does anyone know what the statement really means? Jacob Poon 23:08, 20 September 2007 (UTC)
The Tesla table lists a "Pixel (MP/s)" in the Memory column. I think this is supposed to be "Bandwidth reference". Can anyone confirm and fix if necessary? Anibalmorales 20:24, 11 October 2007 (UTC)[reply]
I think it would be good to add the TDP when that's known.-- Roc VallèsTalk|Hist - 17:11, 25 October 2007 (UTC)[reply]
First off, I'm glad you added the TDP.
Secondly, I think the numbers need to be checked. This site has a pretty comprehensive break-down of the power requirements of different ATI (is that a swear word here?) and NVidia GPUs.
http://www.atomicmpc.com.au/forums.asp?s=2&c=7&t=9354&p=0
The 9600 GT on the wiki states a TDP of 92 watts, while the other site claims 61 watts. I wouldn't be surprised if the wattage is lower, as the 9600 has:
- a smaller die
- fewer transistors
...yes, I know the 9600 GT has a slightly higher core and shader frequency, but it has about half the number of shaders.
Also, the TDP varies with memory. Only one TDP value is listed, while a card often comes in 256, 512 and 1024 MB variants that draw different amounts of power. —Preceding unsigned comment added by 206.191.62.18 (talk) 13:12, 22 July 2008 (UTC)[reply]
Where is the GeForce 8300 GS? —Preceding unsigned comment added by 201.66.31.220 (talk) 07:05, 21 November 2007 (UTC)[reply]
On the subject of which version of DirectX this video card will use: people keep changing my edit of "10" to "10.1". From http://en.wikipedia.org/wiki/GeForce_9_Series , if you check source #1 of that page, it is an old article from DailyTech stating which version of DirectX the card will use, but if you check source #4, you'll see that the source DailyTech quoted actually stated that the card will use DirectX 10.0, not 10.1. Obviously DailyTech made a typo. To reinforce that the chip only supports DirectX 10, please check source #5 of http://en.wikipedia.org/wiki/GeForce_9_Series , which contains a full review of the card. I will change it back to "10" to reflect my findings. If there is any new information regarding the card, please change it to reflect this and please cite a source. Baboo (talk) 06:35, 27 January 2008 (UTC)[reply]
Isn't there supposed to be a 9800 GTS? —Preceding unsigned comment added by 71.104.60.85 (talk) 19:11, 11 February 2008 (UTC)[reply]
The 9600 GT is already launched. The 9800 GX2 will be launched in March followed by the 9800 GTX and the 9800 GT around the end of March and the beginning of April. The 9600 GS will come out in May. The 9500 GT will be launched in June while the 9500 GS will launch in July. I can't confirm the 9800 GTS... (Slyr)Bleach (talk) 01:56, 24 February 2008 (UTC)[reply]
[2] Yeah, I'll modify the TMUs to 128, because it is a single chip with "dual G92b-like" cores.
Can someone tell me why they removed my edits on the 9-series? The 9600GSO has been out for a few days now (check the Nvidia site for specs), but when I added it, as well as details for the 9800GT (early specs are out on this card), they were edited out. I've put them back up now. Sure enough the specifications on the 9900GTX/GTS are a little speculative, but the specs for the 9600GSO are rock solid; just need to verify that it has 12 ROP's like the 8800GS. Put the 9800GT specs (early) up too; I don't know why no-one's added this card sooner. There's been discussion about and specs on the 9800GT for a while, though I've yet to see anything concrete about the 9800GTS. —Preceding unsigned comment added by 78.148.132.151 (talk) 09:51, 6 May 2008 (UTC)[reply]
All ROP numbers where ROPs > pixel units are wrong. A card should not have more ROPs than pixel pipelines, because a card can't render more pixels than it is processing. Further, IIRC the FX 5800 and FX 5900 can issue 8 pixel ops if no z-test is done. Finally, there needs to be consistency in differentiating cards with no vertex units at all from those that have a fixed-function vertex unit. Both are 0 right now, but it's a rather significant difference.
Does anyone think that the GeForce GTX series should be split into its own section? Nvidia doesn't seem to be using the GeForce 9 series name for these chips and they are based on a different design than the GeForce 9/8 series(es?) are. (I hate trying to figure out the plural of series! :)) -- Imperator3733 (talk) 14:19, 23 May 2008 (UTC)[reply]
In the section on the FX Nvidia cards it lists their opengl support as 1.5/2.0**, but there is no explanation on what this syntax means. Asterisks with no annotations are a frequent sight on wikipedia that needs to be dealt with. I have no idea what is meant by this instance. Dwr12 (talk) 21:20, 2 July 2008 (UTC)[reply]
The whole GeForce 8x00 integrated GPU line (8100, 8200, 8300) is missing from the tables. The GeForce_8200_Chipset page contains a bit more information, but it's subject to merging into the GeForce_8_Series page.195.38.110.188 (talk) 23:38, 31 July 2008 (UTC)[reply]
I've notice the entire current Quadro Mobile line is missing from this page.
High End
NVIDIA Quadro FX 3700M
NVIDIA Quadro FX 3600M
NVIDIA Quadro FX 2700M
Mid-Range
NVIDIA Quadro FX 1700M
NVIDIA Quadro FX 1600M
NVIDIA Quadro FX 770M
NVIDIA Quadro FX 570M
Entry Level
NVIDIA Quadro FX 370M
NVIDIA Quadro FX 360M
They are listed on Nvidia's homepage here: http://www.nvidia.com/page/quadrofx_go.html Evil genius (talk) 07:48, 8 September 2008 (UTC)[reply]
There are two columns both labeled "PureVideo w/WMV9 Decode" but with different content! --Xerces8 (talk) 11:38, 5 October 2008 (UTC)[reply]
Removed the GF 300 series plus some speculation cards from the GF 200 series. Unless the specs of a new card are officially announced, they should not be here. —Preceding unsigned comment added by 213.35.167.28 (talk) 19:06, 25 October 2008 (UTC)[reply]
See http://developer.nvidia.com/object/opengl_3_driver.html. Some of these now do OpenGL 3.0 with the correct driver. Jesse Viviano (talk) 07:46, 6 February 2009 (UTC)[reply]
...is needed, otherwise the remarks made are pretty much common gossip which shouldn't be on here. Anon —Preceding unsigned comment added by 85.102.53.150 (talk) 22:41, 8 April 2009 (UTC)[reply]
This section is completely unnecessary; nothing has been announced or even revealed about this series, and whoever added it obviously had no quantifiable evidence or citation to back it up. Adding DirectX 12 to the section was a rookie troll mistake. —Preceding unsigned comment added by 59.167.36.93 (talk) 02:09, 20 April 2009 (UTC)[reply]
How come cards without pixel shaders have their core config listed as if they do? For example, the GF2 Ti has a core config of 0:4:8:4. However, the footnote for the core config syntax is: Vertex shader : Pixel shader : Texture mapping unit : Render Output unit. This suggests the GF2 Ti has no vertex shaders but 4 pixel shaders. It's pretty common knowledge the Geforce 3 was nVidia's first consumer card to incorporate pixel shaders. I noticed this a while back and it's never been changed, so I'm thinking it's not an error. Can someone explain why the core config syntax is the way it is?24.68.36.117 (talk) 19:42, 16 June 2009 (UTC)[reply]
So I checked the "GeForce 200 Series" page, and it says that all of the Nvidia GeForce 200 series cards support OpenGL 3.0, yet this page reads that they all support OpenGL 2.1. Also, this page holds that the GeForce 130M supports OpenGL 2.1, but the "GeForce 200 Series" page says that it is a modified 9600GSO, which this page says does in fact support 3.0. Can anyone make sense of this? RCIWesner (talk) 17:03, 14 July 2009 (UTC)[reply]
Now the article reports support for OpenGL 3.3, but the GeForce_100_Series page says 3.2. --Efa (talk) 01:28, 26 January 2012 (UTC)[reply]
Any references to support the alleged postponing of GTX 380 (and other GT 300 cards) from Q4/09 to Q3/10 and the changes in specifications? To me the changes made by 70.131.80.5 seem like vandalism. See also edits to GeForce 200 Series. —Preceding unsigned comment added by Anttonij (talk • contribs) 8:32, 23 September 2009 (UTC)
"October 3, 2009 3:01 AM EST @ person who keeps making all these messy speculations regarding the GF100 cards- Do you really believe that Nvidia will release 18 graphics cards for the GT300? Your basis is ludicrous and seems like the figures were just materialized from nowhere. At least the other speculations are more logical and consistent. No video card will provide such a horrible performance for such a high price neither will the enthusiast end cost ridiculous prices. Please stop taking any medications or abusing alcohol. Take a walk and let the oxygen in your blood flow to your head. "
^--- this doesn't belong in the article, try to keep dialogue like this private. —Preceding unsigned comment added by 78.96.215.71 (talk) 08:39, 3 October 2009 (UTC)[reply]
I believe that the table for the GT300 series should be scrapped until the release of the actual graphics cards later this year or early next year. This is the best way to prevent any unwanted changes or speculative, fraudulent rumors on the specifications so that the factual integrity in Wikipedia remains steadfast. —Preceding unsigned comment added by 71.189.49.39 (talk) 16:22, 4 October 2009 (UTC)[reply]
Today 75.56.50.233 tried to vandalise the Geforce 300 section and the Geforce 200 section. Can something be done about this? —Preceding unsigned comment added by 60.50.150.249 (talk) 22:44, 12 October 2009 (UTC)[reply]
I believe 70.131.87.247 may also be a vandal of the GeForce 300 section, as the extremely high specifications (resulting in 6264 gigaflops, without the GFLOPs column updated -- not to mention the other columns which clock speed changes affect) were edited over the values sourced from Tech Arp. I've undone this user's edits and corrected my own edits as best I can. I'll continue to undo future edits as vandalism, unless the user responds to comments on their talk page. Ltwizard (talk) 04:18, 20 November 2009 (UTC)[reply]
Nvidia cards have unlinked shader and core clocks. AMD have linked ones. —Preceding unsigned comment added by 112.201.119.209 (talk) 21:57, 19 November 2009 (UTC)[reply]
75.57.69.93 vandalised the GeForce 200 and GeForce 300 sections, I believe it's the same person as the "Vandalism by 75.56.50.233" section. —Preceding unsigned comment added by 60.51.99.254 (talk) 10:10, 2 December 2009 (UTC)[reply]
Again, some kiddies are trying to vandalise the GeForce 300 sections. Request the page be protected for 1 month. —Preceding unsigned comment added by 60.50.148.130 (talk) 23:58, 4 December 2009 (UTC)[reply]
84.86.163.122 reverted the FLOPS performance numbers of the GeForce 300 series back to the old values before my edit on December 10th, without providing any reason. He didn't even change the number of shader cores, rops, etc. or the clock rates back to the old values.
I was using the latest numbers derived by Tech Report: http://techreport.com/articles.x/17815/4
And discussed here: http://www.brightsideofnews.com/news/2009/12/8/nvidia-gf100fermi-sli-powered-maingear-pc-pictured.aspx
Is there any reason why we should trust the older numbers more than this? —Preceding unsigned comment added by 70.77.41.210 (talk) 22:13, 12 December 2009 (UTC)[reply]
According to the specifications of certain OEM-branded PCs, this graphics card exists. According to some websites, it is a rebrand of the GT 130 or 9500 GT, and is not available on the retail market. I have filled in some specifications, although unconfirmed. Can anyone help fill in this information? —Preceding unsigned comment added by 121.7.182.72 (talk) 07:41, 22 December 2009 (UTC)[reply]
According to the official Nvidia website, some specifications, such as OpenCL support and memory clock, especially for the 9xxxM GT, GT 1xxM and GT 2xxM, are different from what is in the table. Can anyone rectify this issue? —Preceding unsigned comment added by 121.7.182.72 (talk) 08:13, 22 December 2009 (UTC)[reply]
As technology is advancing, we need to update our measurements. 3000 million transistors ("three thousand million") is convoluted and confusing. We could just use billions of transistors as the measurement and say 3 next to the 300 series cards and use a decimal for those that are less than 1 teraFLOPS like 0.350 for 350 gigaFLOPS. I would like permission to edit. --KittenKiller (talk) 03:02, 23 December 2009 (UTC)[reply]
The G92 core of the gts250 only supports Compute Capability 1.1 —Preceding unsigned comment added by 216.93.210.226 (talk) 01:19, 30 December 2009 (UTC)[reply]
Alienware's new 11" notebook, the M11X, was demonstrated using a GT335M at CES.
http://www.engadget.com/2010/01/07/alienware-m11x-netbook-gets-official/ —Preceding unsigned comment added by Alexander Royle (talk • contribs) 22:30, 9 January 2010 (UTC)[reply]
Since this topic says 300M series, I figured I would put this here. Nvidia has a lot of newer 300M-series GPUs in their tables, though no enthusiast chips yet. I figured someone would want to add them to the table:
http://www.nvidia.com/object/geforce_m_series.html
Hugenhold (talk) 02:43, 5 February 2010 (UTC)[reply]
As of April 7, 2010, the only cards from the GeForce 400 series that have been announced are the GTX 480 and GTX 470. NVidia has not offered any information about possible GTX 485 or GTX 495 cards at this point - their future existence has not even been confirmed. The entry for the rest of the GTX 400 line, especially the entries for the GTX 485 and 495, contains nothing but wild speculation as to technical specifications and release dates. The bottom of this page provides, "Encyclopedic content must be verifiable." Most of what is contained in the GeForce 400 Series table is absolutely not verifiable at this time. It should be removed. - TJShultz (talk) 20:55, 7 April 2010 (UTC)[reply]
I think a standalone section for compute capability support is needed. All the information should be in one place, in a standalone section. It is possible to search and find it in the tables, but it is too difficult. That table (the compute capability table) is good, but somebody looking for a card's compute capability searches for the card (GeForce GT 220), not for the identification of the chip (G84, G96, G96b, GT218, GT200b). A standalone section for OpenCL support on cards would also be useful. Sokorotor (talk) 14:12, 28 April 2010 (UTC)[reply]
I was over at the Nvidia Quadro page about to add some information to the table, when I decided to follow a link here to find a duplicate table (equally lacking information). This is a big problem. Nobody wants to add the same information in two places. Is there a way we can make ONE set of tables, and then reference them from the various articles? Krushia (talk) 16:18, 8 May 2010 (UTC)[reply]
I'd like to know why these GFLOPS estimates are so low compared to Nvidia's previous GT200 series, and why a new formula has been used to calculate the GFLOPS for exactly this series. If we calculate by the same formula used for the previous series (with shader count [n] and shader frequency [f, GHz]: FLOPSsp ≈ f × n × 3), the results seem to make a lot more sense, taking into consideration the new number of stream processors and the higher clocks. How does it make sense that a GTX 295 with 240 × 2 stream processors and a 1242 MHz shader clock is estimated at 1788.48 GFLOPS, while a GTX 480 with 480 stream processors but a shader clock of 1401 MHz is only estimated at a meager 1344.96 GFLOPS? Shouldn't it be more like 2017.44 GFLOPS? Especially when taking into consideration how much the GTX 480 outperforms the GTX 295 in several benchmarks and tests. —Preceding unsigned comment added by 212.27.19.216 (talk) 18:55, 20 May 2010 (UTC)[reply]
I'm kind of new, so please forgive errors in layout. OK... pretty much from the GeForce 6 series through GT200, the scalar parts of the chip were arranged in a MADD-plus-MUL arrangement where each one could perform two multiplies and one add per clock cycle (save the 7 series, which instead of a MADD and a MUL had two MADDs, but could only use both if no texture operations were being performed at the time). Also, prior to DX10 the scalar units were arranged in a vec4 fashion, so having 8, 12, 16, 20, or 24 vec4s meant having 32, 48, 64, 80, or 96 (64, 96, 128, 160 or 192 for GeForce 7) scalar units (not including vertex shaders). However, the additional MUL or MADD on top of the first MADD was rarely if ever used in actual games. At least one major tech site in summer 2005 quoted Nvidia saying the 7800 GTX had a 2.4x performance advantage vs. the 6800 Ultra, out of a possible approximately 3x, in an early build of Unreal Engine 3, but this generally was not borne out in then-modern games during that GPU's effective lifetime. The switch to all-scalar brought increases in computational efficiency and very much in clock speed, but at the expense of adding on-chip elements for each scalar versus vectorized unit, which, in addition to providing DX10 functionality, ballooned the transistor count and die size at 90 nm considerably for G80. But even the 8800 GTS 640 could perform as well in many cases as the dual-GPU 7950 GX2 without separate vertex shaders, mainly due to the jump in shader speed from 500 MHz to 1.2 GHz.
The current GF1xx-based GPUs from Nvidia differ from the older silicon in that the shaders now have "only" one new and improved FMA (fused multiply-add) instead of the traditional MADD-plus-MUL arrangement, and so perform 2 FLOPs per unit per clock versus three. However, at least one if not several major tech sites tried to isolate and utilize the performance of the additional MUL using custom code when G80 was new, but to no avail. Regardless, the efficiency and raw clock speed born out of Nvidia's unified scalar shader architecture allow top DX11 GeForce cards to perform as well as or better than top Radeons with their vec4-plus-one architecture, which run at lower clocks but have almost double the theoretical GFLOPS, when averaging performance across a spectrum of recent and new gaming titles.
To sort of address the original question: if you want to compare the latest cards to previous generations, you can artificially add 50 percent to the theoretical GFLOPS performance of the newer cards versus those already on record for GeForce 8, 9, GT(X, S, or blank) 1xx, 2xx or 3xx cards. Those cards' GFLOPS numbers are considered by many to be inflated by 50 percent anyway by the mostly unused MUL, but it is easier for comparison's sake to increase the GFLOPS numbers of the GF1xx-based cards than to go back and amend all the older cards. Plus, as mentioned, theoretical GFLOPS performance is certainly not the be-all and end-all of performance measuring. Different games take advantage of the different aspects of a card's architecture in different ways. While there does seem to be a trend of moving away from texture limitation toward being shader-bound, various games take advantage of a card's frame buffer size, memory bandwidth, and/or raw pixel fillrate via ROPs/render back ends as well. Seriously, when was the last time you couldn't crank up a high level of high-quality AF on textures on even a bottom-end card? And some could argue that lately rival AMD has had more raw shader power than it knew what to do with, as far as the balance of different units on the silicon goes. Reorganization for better performance in current games can be evidenced both in the likes of the Nvidia GF104 and the Barts-based Radeons, which sacrifice shader count for a smaller die capable of lower power consumption and greater clock speeds applied to the remaining parts of the chip, whose performance does not trail far behind their larger, more power-hungry older siblings. Taking into account the differences in the shaders, the number of various functional units, and their clock speeds, it's not too hard to see how, when the GTX 480 first came out, most reviews saw a significant performance increase versus the GTX 295 but slight losses to the likes of a GTX 275 SLI setup.
Sorry I don't have a bunch of links for references, but if you google reviews of relevant cards on major sites you will see gobs of confirmation. I hope this was helpful. Jtenorj (talk) 06:22, 14 December 2010 (UTC)[reply]
There are three GT 330 cards: PCI ID 0x0CA0, 0x0CA7 (both GT215) and 0x0410 (G8x/G9x, possibly G92), see Nvidia's VDPAU readme. --Regression Tester (talk) 15:35, 9 September 2010 (UTC)[reply]
Not a single confirmed source about Nvidia GTX 500 generation that is based on Fermi architecture. There's a rumor about GTX 580 floating around internet, but no confirmed info on that, not to mention specifications, release date, price point, power consumption etc. Also, there's no such thing as DX 11.1.
Suggest deleting GTX 500 section before any confirmed information appears. —Preceding unsigned comment added by Poimal (talk • contribs) 06:59, 16 October 2010 (UTC)[reply]
The information below needs clarifying:
Example card:
Model | Year | Code name | Fab (nm) | Transistors (million) | Die size (mm²) | Number of dies | Bus interface | Memory (MiB) | SM count | Config core 1,3 | Core clock (MHz) | Shader clock (MHz) | Memory clock (MHz) | Pixel fillrate (GP/s) | Texture fillrate (GT/s) | Bandwidth (GB/s) | DRAM type | Bus width (bit) | DirectX | OpenGL | OpenCL | GFLOPs (FMA) 2 | TDP (watts) 4 | Release price (USD)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
GeForce GTX 460 | July 12, 2010 | GF104 | 40 | 1950 | 368 | 1 | PCIe 2.0 x16 | 768 | 7 | 336:56:24 | 675 | 1350 | 3600 | 16.2 | 37.8 | 86.4 | GDDR5 | 192 | 11 | 4.1 | 1.1 | 907.2 | 150 | $199
GeForce GTX 460 | July 12, 2010 | GF104 | 40 | 1950 | 368 | 1 | PCIe 2.0 x16 | 1024 / 2048 | 7 | 336:56:32 | 675 | 1350 | 3600 | 21.6 | 37.8 | 115.2 | GDDR5 | 256 | 11 | 4.1 | 1.1 | 907.2 | 160 | $229
2 Each Streaming Multiprocessor (SM) in a GPU of the GF100 architecture contains 32 SPs and 4 SFUs; each SM in a GPU of the GF104/106/108 architecture contains 48 SPs and 8 SFUs. Each SP can perform up to two single-precision FMA operations per clock. Each SFU can perform up to four SF operations per clock. The approximate ratio of FMA operations to SF operations is 4:1 for GF100 and 3:1 for GF104/106/108. The theoretical shader performance in single-precision floating-point operations (FMA) [FLOPSsp, GFLOPS] of a graphics card with shader count [n] and shader frequency [f, GHz] is estimated by the following: FLOPSsp ≈ f × n × 2. Alternative formula: for GF100, FLOPSsp ≈ f × m × (32 SPs × 2 (FMA)); for GF104/106/108, FLOPSsp ≈ f × m × (48 SPs × 2 (FMA)), where [m] is the SM count. Total processing power: for GF100, FLOPSsp ≈ f × m × (32 SPs × 2 (FMA) + 4 × 4 SFUs); for GF104/106/108, FLOPSsp ≈ f × m × (48 SPs × 2 (FMA) + 4 × 8 SFUs); or equivalently, for GF100 FLOPSsp ≈ f × n × 2.5 and for GF104/106/108 FLOPSsp ≈ f × n × 8 / 3.[15] Here SP = Shader Processor (CUDA core), SFU = Special Function Unit, SM = Streaming Multiprocessor, FMA = fused multiply-add (MAD).
Using this formula, FLOPSsp ≈ f × n × 2:
FLOPSsp ≈ 1.350 × 336 × 2, gives us 907.2, which is what is listed in the specifications table.
However, using the formula FLOPSsp ≈ f × m × (48 SPs × 2 (FMA) + 4 × 8 SFUs):
FLOPSsp ≈ 1.350 × 7 × (48 × 2 + 4 × 8), we get 1209.6.
Then, going on to use the third formula, FLOPSsp ≈ f × n × 8 / 3:
FLOPSsp ≈ 1.350 × 336 × 8 / 3, we again get 1209.6.
Which of these formulas is the correct one? —Preceding unsigned comment added by Murpha 91 (talk • contribs) 11:19, 28 November 2010 (UTC)[reply]
When you are talking about graphics cards versus professional or GPGPU-specific cards, you are talking single-precision floating-point performance, not double precision or some hybrid of the two. The special function units mentioned are not extra units separate from the stated shader counts, but a subset of those units capable of the additional functionality. So to take the example of the GF104-based GTX 460: you have a 1350 MHz shader clock times 336 shader processors times 2 floating-point ops per shader processor per clock. The given value of 907.2 single-precision GFLOPS in the chart is correct. Jtenorj (talk) 04:38, 14 December 2010 (UTC)[reply]
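To make the arithmetic in this thread easy to check, here is a short sketch of the two formulas applied to the GTX 460 row above. All values come from the table and footnote in this thread; treat this as an illustration of the disputed formulas, not official Nvidia math:

```python
# Theoretical single-precision GFLOPS for a GF104-based GTX 460,
# using the two formulas discussed in the footnote above.
f = 1.350   # shader clock in GHz
n = 336     # shader processors (CUDA cores)
m = 7       # streaming multiprocessors, each with 48 SPs and 8 SFUs

fma_only = round(f * n * 2, 1)                 # 2 flops (one FMA) per SP per clock
with_sfu = round(f * m * (48 * 2 + 4 * 8), 1)  # "total processing power" incl. SFUs

print(fma_only)   # 907.2  -- the figure the spec table lists
print(with_sfu)   # 1209.6 -- the higher figure from the SFU-inclusive formula
```

The table uses the FMA-only figure, consistent with the reply above that the SFUs are not counted toward peak single-precision throughput.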
According to this article on a Apple G4 Dual 867Mhz (Mirrored Door) computer:
http://www.everymac.com/systems/apple/powermac_g4/stats/powermac_g4_867_dp_mdd.html
"By default, this model has a NVIDIA GeForce4 MX graphics card with 32 MB of DDR SDRAM."
This article states this computer was introduced on August 13, 2002 and discontinued on January 28, 2003. There appears to be no mention of this particular version of the MX processor in this article as none are listed with 32MB of memory. —Preceding unsigned comment added by 216.51.155.157 (talk) 01:42, 12 January 2011 (UTC)[reply]
The Apple Power Macintosh G4 1.0 (FW 800) shipped with the 64 MB version.
Part numbers:
Apple Nvidia GeForce4 MX 64MB ADC VGA AGP Video Card 180-10074-0000-A01
Apple PowerMac G4 nVidia GeForce 4 MX Video Card 64MB AGP VGA 630-3845 603-0133 — Preceding unsigned comment added by 2602:306:BC82:B600:65E6:1E42:9BD4:A535 (talk) 02:55, 28 October 2016 (UTC)[reply]
Tables in this article are missing the most important information: how well a card performs in benchmarks. Are there any reputable benchmarks that compare the majority of graphics cards on the market, other than http://www.videocardbenchmark.net/ ? A real benchmark is most important; often there is a 2x or 5x difference in speed between seemingly very similar cards. Pathbuiltnow (talk) 12:56, 16 January 2011 (UTC)[reply]
I would really appreciate it if the CUDA version were listed alongside each graphics card, as some cards (like the GT 430) only support much earlier CUDA versions than cards of a previous generation (such as the GT 240). For the budding HPC developers out there it would really help. — Preceding unsigned comment added by 124.149.71.250 (talk) 01:40, 5 June 2011 (UTC)[reply]
Mars cards are not in nVidia codename line, they are custom branded cards, so they do not belong here. Spam removed. — Preceding unsigned comment added by 93.115.248.39 (talk) 07:31, 29 June 2011 (UTC)[reply]
I don't have a good source for the data for missing hardware, but the GT435M is missing. — Preceding unsigned comment added by 68.183.63.74 (talk) 18:29, 6 September 2011 (UTC)[reply]
I'd like to propose that we split this page into multiple NVIDIA Comparison pages, specifically:
1.) Desktop - GeForce/Vanta/Riva/NV1
2.) Mobile - GeForce M/GeForce Go/Mobility Quadro
3.) Workstation - Quadro/Tesla (which already has its own page)
This page is ridiculously long as it stands right now and will only get worse with the release of NVIDIA's Kepler (600 series) in Q1 2012 and AMD's 7000 series in 2012. Comparing GPUs for the average technical user coming here is a challenge because they need to first find out where desktop GPUs end and where laptop/mobile GPUs begin. It involves a silly amount of scrolling and scanning for titles if they don't follow the contents box. Honestly, how many users of Wikipedia use the contents box for every page visit? Furthermore, it is very rare that users will want to compare different platforms to each other when comparing GPUs available for their specific platform.
From personal experience, I visit this page frequently to compare updates specs and performance information for one platform at a time. Knowing the comparison between Quadro GPU 'X' and a GeForce GPU 'Y' isn't something I ever do because one is designed for CAD/data processing workplace applications while the other is tweaked for consumer and game applications.
If the community agreed to a page split, we could certainly link all three of the Nvidia comparison charts at the top of each chart page for easy navigation.
cipher_nemo (talk) 19:19, 23 September 2011 (UTC)[reply]
Hi cipher... the article is a useful resource as it has all models on it for comparison; splitting it wouldn't make it a useful reference anymore. Sure, there are lots of models; maybe the only split that might be useful is putting models older than three years into another article. Hope that helps. Cheers. 203.219.135.147 (talk) 04:38, 18 November 2011 (UTC)[reply]
Okay, two people think this should be done, with no comments from anyone else. If I take the huge time to split it, if someone reverts it, can I be assured that the reversion will be reverted? HelviticaBold 21:52, 25 July 2012 (UTC)[reply]
A friend mentioned that he owned an 8900 card, and I was curious, so I looked it up. A Google search provided some information about it, but it is absent from the list in this article. Was it never an official card? If it did indeed exist, shouldn't it be noted in the article? Perhaps the 8900 moniker was erroneously applied to other 8000-series cards through misinformation.
Nvidia's own website does not indicate the card's existence either: http://www.nvidia.com/page/geforce8.html
Two examples of articles encountered with some information on 8900 cards:
http://www.tweaktown.com/news/7055/geforce_8900gtx_and_8950gx2_pricing_and_information/index.html
http://www.theinquirer.net/inquirer/news/1028324/geforce-8900gtx-8950gx2-details-listed — Preceding unsigned comment added by 216.222.172.58 (talk) 21:45, 14 October 2011 (UTC)[reply]
This Nvidia page seems to incorrectly list the memory clock rate as the effective clock rate (x4 for cards with GDDR5 memory). AMD page has the same format for memory but lists the base clock rate. BroderickAU (talk) 01:56, 26 October 2011 (UTC)[reply]
I am going to re-arrange the page to present the information in a more clean manner and correct any missing and incorrect info to the best of my ability. — Preceding unsigned comment added by Blound (talk • contribs) 15:28, 4 December 2011 (UTC)[reply]
I'm strongly inclined to blank the new section on the 600 series. Not only is it completely unreferenced, not only do some numbers appear incredible to me (7680 single-precision gflops for a 680, really? That would require two FMAs per shader, per clock! And at an incredible 3 GHz shader clock rate!), but top-tier release dates are given as "Q1 2012", which is in contradiction with recent leaks showing top revisions of the Kepler design only arriving in Q4 2012. And the claim of XDR2 memory in all cards, including the lowest 650, is extremely bold. There was some talk about the possibility of seeing XDR2 in Southern Islands, but, as far as I know, the possibility of seeing XDR2 in this generation of NVIDIA cards never even crossed anyone's mind. --Itinerant1 (talk) 08:31, 27 December 2011 (UTC)[reply]
Delete it. Rumours aren't for encyclopaedias. And wikipedia does not lead, it follows. We have to wait for reliable information. Rlinfinity (talk) 13:28, 4 January 2012 (UTC)[reply]
600 series is currently an OEM series just like 300, and it's based on Fermi. It's not yet on this page but the specs of laptop 6xxM GPU's are already on geforce.com, they look like GF11x. I think Kepler (GKxxx) will rather be the 700 series. Albert Pool (talk) 12:04, 12 January 2012 (UTC)[reply]
Albert is right, any additions to the current spec tables are coming from rumour sites which don't even have accurate technical information. 220.235.101.12 (talk) 08:47, 7 February 2012 (UTC)[reply]
I don't know how to add a reference or even if it's an acceptable source but the release notes for the latest beta of AIDA64 show the 670M and 675M as using the GF114M core. http://www.aida64.com/downloads/aida64extremebuild1812y4qdz2gtxvzip There's some cores listed for low-end cards here: http://www.aida64.com/downloads/aida64extremebuild1807m7bnd8glcszip — Preceding unsigned comment added by 71.82.143.25 (talk) 01:27, 10 February 2012 (UTC)[reply]
I added two references to confirm some of the known GTX 680 card's specifications. The NDA was lifted yesterday and NVIDIA is going to be showcasing the card very soon. We need the listing there. If any of the confirmed specs are changed at the showcasing, we can update it then. Until that point, there's no sense removing the GTX 680 listing as was done by one random person. I've only included the KNOWN specs, and did not guess at anything. Two references saying the same thing should secure that piece of information. cipher_nemo (talk) 13:56, 13 March 2012 (UTC)[reply]
Shader speed on the GTX 680 is double the clock speed, so 1006 becomes 2012 according to manufacturers. An anonymous user keeps trying to switch this back based upon 3rd party reviews which are not as reliable as the manufacturer themselves. cipher_nemo (talk) 17:13, 22 March 2012 (UTC)[reply]
The anonymous user is me (Alexander Smetkin), and if you think that the GeForce 680 uses a doubled clock speed, then go and change the GFLOPS to 6 TFLOPS (2000*1500*2). Manufacturer sites are not reliable; they are just sites for users, not specialized hardware sites. But numerous professional reviews that you can find on the internet use diagrams provided by Nvidia itself, and they are reliable! 21:28, 22 March 2012 (UTC)
Anon user (Alexander Smetkin) found the GTX 680 whitepaper, which lists the shader clock speed as "n/a". Good job, Alexander! :-) cipher_nemo (talk) 21:07, 22 March 2012 (UTC)[reply]
Adjusted the 600 series table to more correctly display the clock rate differences in kepler vs other parts. — Preceding unsigned comment added by 124.149.172.68 (talk) 20:47, 7 April 2012 (UTC)[reply]
The Kepler boost clock consists of 9 steps, the first of which is the quoted base clock. The clock increments another 8 times in multiples of 13 MHz, up to a total of 1100 on the GTX 680 and 1019 on the GTX 690. The average boost clocks of these cards are 1058 and 967 MHz respectively. Articles displaying anything higher are reflecting deficiencies in the monitoring modules. 124.169.11.0 (talk) 10:08, 30 April 2012 (UTC)[reply]
can something be done about this 99.142.36.30. — Preceding unsigned comment added by 220.235.102.144 (talk) 08:45, 15 February 2012 (UTC)[reply]
I believe the correct way to calculate the pixel fillrate is:
S * 2 * C
where S = Streaming Multiprocessor Count, C = Core Clockrate, and the 2 is there because each SM does two instructions per cycle.
As of now, I would say all pixel fillrates for the GeForce 400 and 500 series cards are incorrect, since they are based on the old ROP * C formula.
Louis Waweru Talk 16:58, 19 March 2012 (UTC)[reply]
Not sure I'm using the proper post format, but the pixel fill rate is the number of fully rendered pixels that are sent to the frame buffer per second, so the calculation of core clock times ROPs is correct. The figure you get when you multiply the core clock times the number of shaders times 2 (one fused multiply-add, or FMA if you prefer) per shader gives you the theoretical GFLOPS/TFLOPS performance, which is already calculated in a separate column towards the far right of the chart. — Preceding unsigned comment added by Jtenorj (talk • contribs) 05:08, 5 April 2012 (UTC)[reply]
I guess I did that wrong because it didn't show my username, date and time of edit/comment, so here goes...
Jtenorj 00:10, 05 April 2012 (US central standard or daylight or whatever it is now) — Preceding unsigned comment added by Jtenorj (talk • contribs)
Fermi only processes 2 pixels per streaming multiprocessor per clock. If you run a fill rate test, you'd see it's fairly consistent. Kepler does 4 pixels per SM per clock, which makes the listed fill rate for the 680 correct (4x8=32) but will cause inconsistencies when disabling SMs while keeping the same ROP count. Hardware.fr is the only site I know that tests pixel fill rate. http://www.hardware.fr/articles/866-6/performances-theoriques-pixels.html Keep in mind boost clocks when looking at Kepler fill rate. — Preceding unsigned comment added by 71.82.143.25 (talk) 12:28, 9 September 2012 (UTC)[reply]
My German is a little rusty, but I'm pretty sure that's wrong. They have theoretical GFLOPS/TFLOPS for those calculations, and pixel fill rate should be ROPs times base clock (48 ROPs x 772 MHz = 37.056 GP/s).
GF100/GF110 have 32 shaders per cluster, while the likes of GF104/GF114 and GF106/GF116 have 48 shaders per cluster. Ideally, each shader works on one 32-bit sub-pixel (red, green, blue or alpha) per clock, so a cluster of 32 shaders would do 8 pixels per clock (ppc) and a cluster of 48 would do 12 ppc. However, it's not that simple. As newer versions of DirectX come out, the pipeline becomes longer and can handle more instructions in one pass. Shader programs in games vary in length, with simple ones making it through the pipe in one pass, while longer, more complex shader code requires the data to be looped through the pipe one or more additional times.
Separately, the amount of time data gets bounced around in the ROPs depends on the user's in-game settings as well, since the ROPs work on multisample AA, HDR lighting, shadow data and more. If settings in a game are dialed down (no AA, no shadows, for example), then things might get done in one pass. If complex shadows and high levels of old-school AA need to be calculated (not counting FXAA, which is done in the shaders), more loops through the ROPs may be required. The time a game spends in the shaders and the time spent in the ROPs may not match up nice and neat (likely this is the norm, since different games make use of the resources on a chip differently). Jtenorj (talk) 22:06, 17 December 2012 (UTC)[reply]
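The positions in this thread can be summarized as one rule: effective pixel fill rate is bounded by both the ROP rate and the SM rasterization rate, whichever is lower. A sketch using the GTX 460's table numbers and the 2-pixels-per-SM-per-clock figure claimed earlier in this thread (the per-SM rate is this thread's claim, not an official spec):

```python
def pixel_fill_gps(core_mhz, rops, sm_count, px_per_sm_clk):
    """Effective pixel fill rate (GP/s), bounded by both ROPs and SMs."""
    rop_rate = core_mhz * rops / 1000.0                     # classic ROPs x clock
    sm_rate = core_mhz * sm_count * px_per_sm_clk / 1000.0  # what the SMs can emit
    return min(rop_rate, sm_rate)

# GTX 460 (GF104): 675 MHz core, 24 ROPs, 7 SMs; Fermi said to do 2 px/SM/clock
print(pixel_fill_gps(675, 24, 7, 2))   # 9.45 -- SM-limited, below the table's 16.2
```

On this model, a chip with plenty of SMs relative to ROPs is ROP-limited (the classic formula holds), while a heavily cut-down chip becomes SM-limited, which is the GTX 970-style inconsistency raised later on this page.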
The article should refer to zh:NVIDIA顯示核心列表 — Preceding unsigned comment added by Hyins (talk • contribs) 12:27, 24 March 2012 (UTC)[reply]
what is that? — Preceding unsigned comment added by 174.58.252.142 (talk) 09:06, 1 September 2012 (UTC)[reply]
A column listing the maximum resolution each GPU supports, as well as if it supports HDTV 1920x1080 (1080i or 1080p) and for the newer chips, UHDTV 7680 × 4320 (4320p). Of course there should be a note that just because a specific GPU supports those resolutions, any given implementation may not have the BIOS/firmware/driver support and/or the digital output connections for those resolutions. 66.232.94.33 (talk) 02:49, 12 May 2012 (UTC)[reply]
Quadro Plex 7000, Quadro K5000 are missing. Ianteraf (talk) 12:35, 9 August 2012 (UTC)[reply]
VGX K1 and K2 also missing. Ianteraf (talk) 07:21, 20 October 2012 (UTC)[reply]
Matthew Anthony Smith recently inserted a large amount of links to the http://www.techpowerup.com/ site into the table headers of many of the GPUs. I removed them as part of a quality assurance / cleanup effort which unfortunately became necessary after many controversial edits (in various articles) by this user.
Personally, I think we don't need any of these links at all, but if you think a link is useful, I suggest adding a single link to the database under "External links" again. Also, links to reliable references are acceptable inside the table as well if we use proper syntax.
Finally a note on the various "facts" templates I added to some of the table values. I did not want to blindly revert all the potentially problematic edits in one go, but found various table values or their semantics changed by Matthew Anthony Smith without any edit summary. Some values were simply changed, in some cases, footnotes were removed and in many cases lists of values and ranges were converted to look the same. These values were flagged by me in order to make readers aware of the change. They need to be carefully checked by someone using a reliable reference and can be removed afterwards, ideally by providing the reference at the same time as well. Thanks. --Matthiaspaul (talk) 17:15, 9 September 2012 (UTC)[reply]
Maybe the wiki page should follow Nvidia's own specs. 24.6.187.56 (talk) 21:17, 12 December 2012 (UTC)[reply]
Hello, the tables are missing double-precision performance for the chips, unlike the AMD comparison. Its addition is more than welcome! 93.129.54.204 (talk) 17:55, 28 December 2012 (UTC)[reply]
We're missing the specs for the low-end version of the original GeForce 256, which was known as the GeForce SE. I had such a beast when they were still sold. The designation seems to have been re-used for later bottom-end types, so it might be hard to find the actual specs. 173.216.111.38 (talk) 02:04, 6 January 2013 (UTC)[reply]
The Titan graphics board is part of the 600 series and not of 700. — Preceding unsigned comment added by 93.38.164.178 (talk) 14:22, 18 February 2013 (UTC)[reply]
How did it come out that Titan has 3.2 TFLOPS? If you multiply cores by frequency by two (MAD), the formula that gets correct results for all other cards, you get 4.5 TFLOPS, not 3.2 TFLOPS. --80.246.242.38 (talk) 13:09, 12 March 2013 (UTC)[reply]
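For what it's worth, plugging the commonly quoted GTX Titan numbers into the usual cores × clock × 2 formula does give roughly 4.5 TFLOPS, as the comment above says. The 2688-core and 837 MHz base-clock figures are my assumption of what the commenter used, not values from this page:

```python
# Single-precision estimate: cores x clock (GHz) x 2 flops (one FMA) per clock.
cores = 2688          # CUDA cores commonly quoted for the GTX Titan (GK110)
base_clock_ghz = 0.837

gflops = cores * base_clock_ghz * 2
print(round(gflops / 1000, 2))   # roughly 4.5 TFLOPS, not 3.2
```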
Regarding the single-precision peak performance of the 8, 9 and 200 series: the 2nd MUL from the SFU is not available in a MAD (multiply-add) instruction, which is what is used for peak performance, counting as 2 FLOPS each. The MAD+MUL figure is just a marketing slogan and is nowhere present in real-life achievable performance. Also, from the 400 series on, the 2nd MUL from the SFU is not available at all. — Preceding unsigned comment added by 93.38.164.178 (talk) 14:49, 18 February 2013 (UTC)[reply]
The Tesla table incorrectly shows a MAD+MUL peak performance. The 2nd MUL was only available from G80 to GT200, from the SFUs, but not in a MAD instruction for the peak-performance scenario. From GF100 on, the MUL from the SFUs was not available anymore. — Preceding unsigned comment added by 93.38.171.227 (talk) 09:12, 19 February 2013 (UTC)[reply]
You are correct, and I just fixed this in rev https://www.search.com.vn/wiki/?lang=en&title=List_of_Nvidia_graphics_processing_units&oldid=694130624Mbevand (talk) 10:13, 7 December 2015 (UTC)[reply]
Apparently Nvidia is readying a cut down Titan http://gamingio.com/2013/03/nvidia-prepping-a-cut-down-version-of-the-geforce-gtx-titan/ — Preceding unsigned comment added by 210.50.30.132 (talk) 11:02, 31 March 2013 (UTC)[reply]
The OpenGL versions for the GeForce 4 through 7 series don't match the corresponding main articles; for example, the 6 series is listed with OpenGL 2.1 here but only 2.0 in its main article. Which pages are correct?
If additional OpenGL support was added through a driver update, that should be listed if possible. — Preceding unsigned comment added by Stewievader2 (talk • contribs) 20:10, 22 February 2014 (UTC)[reply]
After some more searching, it's apparent that information on OpenGL support is very limited, and what is available isn't consistent because of driver updates adding later versions of OpenGL. Stewievader2 (talk) 20:57, 22 February 2014 (UTC)[reply]
— There is a lot of confusion about which versions of OpenGL are supported on which card, since this is largely dependent on driver support, and there is a lot of outdated info on Nvidia's site. This is probably due to Nvidia not bothering to refresh info on cards no longer in production. I will edit all series I tested myself. I use Sascha Willems' unofficial OpenGL hardware database as a reliable source for the versions supported. (http://delphigl.de/glcapsviewer/gl_about.php) Niplas (talk) 01:56, 18 December 2015 (UTC)[reply]
There is a column for SMX count. Not only is it not defined in this article, it is not defined anywhere on Wikipedia.
The pixel fill rate calculation we have been using has been proven inadequate: http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980 shows that our method is wrong. We might need to rearchitect the table to account for the number of active rasterizers, the number of active streaming multiprocessors, and the fragments both of those can process per cycle. Since pixels are only generated by the ROPs from one or more fragments (depending on the antialiasing mode), the rasterizers and streaming multiprocessors can force some ROPs to go idle if there are not enough of either. Jesse Viviano (talk) 19:55, 13 January 2015 (UTC)[reply]
The "Notes" column in the Mobile GPUs section seems to be nothing but original research. Each mobile part has a desktop part listed to which it is "similar", according to no provided source, along with a percentage that seems to indicate how much of the desktop part's performance the mobile part ostensibly delivers. Where did these come from? 125.254.43.66 (talk) 05:26, 18 February 2015 (UTC)[reply]
Can driver support (as either "Yes" or "No") for various versions of Windows be added (or a new chart or table created) ?
I can't find this very basic information available anywhere as a simple chart or table. — Preceding unsigned comment added by 174.94.2.177 (talk) 14:07, 28 March 2015 (UTC)[reply]
I think I found a duplicate of this article. 2A02:8420:508D:CC00:56E6:FCFF:FEDB:2BBA (talk) 22:06, 29 March 2015 (UTC)[reply]
That article compares motherboard chipsets, this article compares/lists GPU's. TheGuruTech (talk) 22:00, 22 November 2016 (UTC)[reply]
Nvidia has updated the specifications of all GPUs in the Fermi range and newer. The DirectX version has been changed to the 12.0 API. — Preceding unsigned comment added by 197.190.165.22 (talk) 16:01, 29 April 2015 (UTC)[reply]
Nvidia's updated list of GPUs that support DX12 is up at the geforce.com site and does indeed cover GPU families back to Fermi.[1] 50.43.34.62 (talk) 02:51, 20 August 2015 (UTC)[reply]
As it stands, it appears that Fermi, Kepler and Maxwell V1 support DX12 FL 11_0 (my Maxwell V1 GTX750 does not do FL12 and this is confirmed by various forums etc.)
Maxwell V2 and Pascal support FL12 Ace of Risk (talk) 00:43, 5 April 2017 (UTC)[reply]
The GeForce 6600 GT-based video cards were produced in AGP 8x and PCI-E versions. Memory frequencies on them were different; for the AGP version they were lower. Why does this article specify a 950 MHz frequency for the AGP version while the Nvidia site and contemporary reviews I have found all specify 900 MHz? This change was made somewhere between Sep and Dec 2012, without specifying any sources. P h n (talk) 15:12, 3 June 2015 (UTC)[reply]
In this edit I changed the text in the 'Notes' field for the fastest 6 models of the 9xxM notebook GPUs, to describe the equivalent desktop GPU. The current text said, basically, "X% performance of <desktop GPU Z>", with one reference to this Anandtech article - but all were (rightfully so) tagged with 'Original research?' since April 2015.
Generally speaking, the notebook GPUs are similar to their desktop brothers, just clocked 5-15% slower, which equates to equivalently lower GP/s and GT/s, and is "graded"/branded differently, with a skew in the naming convention (980M≈970, 965M≈960 and so on).
Being an encyclopedia, what I'd like to know is technical facts, so to speak; "What version of the desktop GPU am I getting". And then let other resources elaborate what that means in practice (references for that would be nice, of course). I think most technical people, who are likely to even read a table like this, would prefer the data to be presented or explained like this. And can translate the change in clock speed between models.
Yes, strictly speaking, my "reading" or "translation" of the tables between families is original research, but I hope/don't think many people will disagree with this attempt at clarification.
The formatting is a bit off; I can't figure out how to force the 'Notes' column wider. Perhaps the text should be shortened; remove the GPU brand name and keep the code name (it identifies the specific desktop GPU, not the other way around). -- Katana (talk) 01:04, 29 December 2015 (UTC)[reply]
Hello fellow Wikipedians,
I have just added archive links to 2 external links on List of Nvidia graphics processing units. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
Y An editor has reviewed this edit and fixed any errors that were found.
Cheers.—cyberbot IITalk to my owner:Online 20:37, 13 January 2016 (UTC)[reply]
For completionists, the GeForce GT 745A is available to be added. Information about the GPU is available at TechPowerUp. GPU name is GK107, supports DDR3 memory and DirectX 11.2, and release date is Aug 26th, 2013. It is used in the HP Sprout.
Likewise, the newish (November 2015) 945M is also not on the list. The Nvidia specifications page is here, and it was added to driver package 352.63 for Linux on Nov. 16, 2015, here. -- Katana (talk) 03:30, 10 February 2016 (UTC)[reply]
The entry for the GTX1080 states the bandwidth is 320GB/s but it should be just over 654GB/sec. — Preceding unsigned comment added by 139.218.76.12 (talk) 05:10, 8 May 2016 (UTC)[reply]
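As a sanity check on that entry: memory bandwidth is the effective per-pin data rate times the bus width in bytes. With the GTX 1080's published 10 Gbps GDDR5X on a 256-bit bus, the table's figure works out exactly (a quick sketch; the 10 Gbps and 256-bit values are Nvidia's published specs, not taken from this page):

```python
# Bandwidth (GB/s) = effective data rate (Gbit/s per pin) x bus width (bits) / 8
data_rate_gbps = 10    # GDDR5X effective rate published for the GTX 1080
bus_width_bits = 256

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(bandwidth_gbs)   # 320.0 -- matching the table, not ~654
```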
Technically, Fermi could support the Vulkan API [1], but Nvidia does not plan to add Vulkan support for it due to the small install base (less than 10%) [2][3]. Nvidia's current vulkan driver page [4] reflects this.
[1] (p. 50-51) http://on-demand.gputechconf.com/siggraph/2015/presentation/SIG1501-Piers-Daniell.pdf [2] https://www.youtube.com/watch?v=nGkpPp2tGSs&t=46m25s [3] (p. 55-56) http://on-demand.gputechconf.com/gtc/2016/events/vulkanday/Vulkan_Overview.pdf [4] https://developer.nvidia.com/vulkan-driver
— 2003:6A:646D:6ED9:38BA:3938:9363:C6D8 (talk) 15:47, 27 May 2016 (UTC)[reply]
Is there any boost if ALL cores are utilized with FMA operations (performance being 2 * core number * boost frequency). I thought the clock is only "boosted" when utilization is less than the ABSOLUTE maximum, thus staying below the power goal.
Can anyone explain or is it just marketing by GPU manufacturers? — Preceding unsigned comment added by 77.187.98.48 (talk) 09:43, 3 June 2016 (UTC)[reply]
The current page states: 2-way SLI HB[59] or traditional 4-way SLI as supported by the 1080/1070. I don't know how a wiki would work around nvidia's latest limits around 3- and 4-way SLI, but with how support is now benchmarking only, shouldn't it be changed?
[...]With the GeForce 10-series we’re investing heavily in 2-way SLI with our new High Bandwidth bridge (which doubles the SLI bandwidth for faster, smoother gaming at ultra-high resolutions and refresh rates) and NVIDIA Game Ready Driver SLI profiles. To ensure the best possible gaming experience on our GeForce 10-series GPUs, we’re focusing our efforts on 2-way SLI only and will continue to include 2-way SLI profiles in our Game Ready Drivers.[...]178.17.146.218 (talk) 16:22, 14 June 2016 (UTC)[reply]
A 3GB version was announced http://hexus.net/tech/news/graphics/95698-nvidia-geforce-gtx-1060-3gb-equipped-fewer-cuda-cores/ but we don't know all the specs. Should it be added to the table now or when we have more information? — Preceding unsigned comment added by Denis.giri (talk • contribs) 07:16, 16 August 2016 (UTC)[reply]
Series: | 600 | 700 | 900 | 10 |
---|---|---|---|---|
 | Low | | | |
 | Mid | Low | | |
 | High | Mid | Low | |
 | | High | Mid | Low |
 | | | High | Mid |
 | | | | High |
Each generation produces about 1.5 times as many single-precision GFLOPS as the previous generation.
Just granpa (talk) 16:37, 16 August 2016 (UTC)[reply]
SYSTEM SPECIFICATIONS
GPUs: 8x Tesla GP100
TFLOPS (GPU FP16 / CPU FP32): 170 / 3
GPU memory: 16 GB per GPU
CPU: dual 20-core Intel Xeon E5-2698 v4, 2.2 GHz
NVIDIA CUDA cores: 28672
System memory: 512 GB 2133 MHz DDR4 LRDIMM
Storage: 4x 1.92 TB SSD, RAID 0
Network: dual 10 GbE, 4 IB EDR
Software: Ubuntu Server Linux OS / DGX-1 recommended GPU driver
System weight: 134 lbs
System dimensions: 866 D x 444 W x 131 H (mm)
Packing dimensions: 1180 D x 730 W x 284 H (mm)
Maximum power requirements: 3200 W
Reference:
I have tried to fill in as much data as I could glean from the Nvidia, HP and Dell OEM websites.
I seem to have 3 completely separate versions of this card. One is clearly an FX3400, and one is clearly an FX4400 but identifies itself as an FX3400/4400. The third card has the RAM of a 4400, the GPU of a 3400, and ZERO markings; literally nothing on the card. I may have to remove the heat sink to fix a small problem with the dirt, but... GPU-Z, Speccy, and the Nvidia control panel all show different speeds for the GPU, and I would have a tendency to believe that Speccy is right, as the Nvidia control panel is ambiguous. The last time this happened was on a 7600GT, where after a year and a half GPU-Z added that it was a /b variant of the GPU.
So we have an FX3400, an FX4400 (both of which identify as FX3400/4400), and some weird frankencard.
Since these are OEM cards, I would be inclined to label them all FX3400/4400, with a second entry for the faster card with more memory. — Preceding unsigned comment added by 2602:306:BC82:B600:65E6:1E42:9BD4:A535 (talk) 03:13, 28 October 2016 (UTC)[reply]
This is mostly an expansion of my explanation for removing the note on the GeForce 900/10 series stating that both series lack DirectX 12 "fundamental features." I took issue with this because the choice of which features, and at what tier level, count as "fundamental" seemed arbitrary at best and outright biased at worst. What makes Tier 3 Resource Binding more fundamental than Conservative Rasterization or Rasterizer Ordered Views, which Maxwell 2 and Pascal support but GCN does not? And if we were to say the "fundamental" features are the required ones, it still makes no sense to have the note here when the AMD page does not, considering that Maxwell 2 and Pascal have higher feature-level support (which I presume requires certification from Microsoft) than GCN.
Also, Asynchronous Compute is not a feature of DirectX 12 or Vulkan; it's a method of handling the multiple command queues both APIs expose to the GPU. As much as I scoured the developer literature (admittedly not very much, mostly Intel's notes and some of Microsoft's MSDN documentation), Asynchronous Compute was never mentioned.
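To loosely illustrate the point above: "async compute" is just an application draining independent command queues that the hardware is free to overlap, not a named API feature. A Python sketch, with two worker threads standing in for a graphics queue and a compute queue (all queue and command names here are illustrative, not from either API):

```python
# Sketch: "async compute" modeled as two independent command queues drained
# concurrently. Neither DirectX 12 nor Vulkan defines an "async compute"
# feature; they expose multiple queues, and the overlap pattern emerges from
# how work is submitted. All names below are illustrative.
from concurrent.futures import ThreadPoolExecutor

def drain(queue_name, commands):
    """Pretend to execute every command submitted to one queue, in order."""
    return [f"{queue_name}:{cmd}" for cmd in commands]

graphics_queue = ["clear", "draw_opaque", "draw_transparent"]
compute_queue = ["light_culling", "particle_sim"]

with ThreadPoolExecutor(max_workers=2) as pool:
    # Both queues are in flight at once; completion order between queues is
    # up to the scheduler, just as overlap on a GPU is up to the hardware.
    gfx = pool.submit(drain, "gfx", graphics_queue)
    cmp_ = pool.submit(drain, "compute", compute_queue)

executed = gfx.result() + cmp_.result()
print(executed)
```

Within each queue, ordering is preserved; across queues, nothing is promised, which is exactly why synchronization primitives (fences, barriers) exist in both APIs.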
Different clocks, different TDPs... these should be listed here. I'm going to start looking into it, but if someone better at finding this info beats me to the punch, hats off to you! — Preceding unsigned comment added by 174.100.206.132 (talk) 23:23, 14 August 2017 (UTC)[reply]
I think it's a mistake to take the 10-series out of temporal sequence and put it at the top. I strongly suggest reverting this change. This page is meant as a technical reference - not a buyer's guide. — Preceding unsigned comment added by 131.239.51.241 (talk) 18:12, 4 October 2017 (UTC)[reply]
Hello fellow Wikipedians,
I have just modified 3 external links on List of Nvidia graphics processing units. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
Cheers.—InternetArchiveBot (Report bug) 11:50, 20 November 2017 (UTC)[reply]
Hello fellow Wikipedians,
I have just modified one external link on List of Nvidia graphics processing units. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
- Graphics cards for gaming
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
Cheers.—InternetArchiveBot (Report bug) 18:18, 26 December 2017 (UTC)[reply]
Could an expert please add the 7500 LE; it's missing. Tempshill 16:41, 19 August 2007 (UTC)[reply]
This AT&T ISP ATI troll keeps vandalising the Nvidia GeForce 400 section. I request a permanent ban on his ISP and protection for this page. It's the same person; the IPs all trace back to Illinois. And this is not the first time: even earlier you can see the same IP ranges vandalising the GeForce 200 and 300 sections. —Preceding unsigned comment added by 124.13.112.81 (talk) 03:07, 12 October 2010 (UTC)[reply]