Vintage 3D - early history of 3d acceleration



Tried more Trident and Vérité 2 drivers; some values were added or corrected. And then my motherboard died after all the abuse. A sign to end this? But I just acquired my first Permedia, the IBM version. So for this test (perhaps the last one) a K7S5A fine-tuned for similar performance was used. Thanks to that, the performance difference between the IBM and Texas Instruments versions can be quantified, assuming my cards are good representatives and my methods adequate; despite all efforts some strange inconsistencies remain. But there are only a few empty fields left...


I played a bit with the Super Socket 7 platform, looking into things like the memory bandwidth dependency of the Intel 740 AGP. I tried to find out more about the Laguna3D, tried every driver I could find to get more out of them, and broke into one of my spare cards to measure the die size. I think I finally at least tightened the range of possible clocks and clarified the distinctions between the 5464 and 5465 in the review. After spending some time elaborating on the (lack of) meaning of "pixel pipeline" today, I decided to change their count for unified shader architectures. The new number is a guesstimate of how many pixels the shader arrays can work on at once, and it leads to realistic pixel:texel ratios. It is a wild move, but the old count has no meaning for current architectures. And other candidates, like the number of rasterized pixels, would most of the time not say anything specific about a SKU either.
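To make the "guesstimate" above concrete, here is a minimal sketch of one way such an effective pixel count could be derived. The heuristic, the function name, and every figure in it are my own assumptions for illustration, not the database's actual method or any real chip's specification.

```python
# Hypothetical sketch: estimating an effective "pixel pipeline" count for a
# unified-shader GPU. The heuristic and all numbers are assumptions, not
# the site's actual method.

def effective_pixel_pipes(shader_alus, alus_per_pixel, tmus):
    """Guess how many pixels the shader array can work on at once,
    then report the resulting pixel:texel ratio."""
    pixels = shader_alus // alus_per_pixel  # pixels in flight per clock
    ratio = tmus / pixels if pixels else float("inf")
    return pixels, ratio

# Example with made-up numbers: 48 ALUs, 4 ALUs per pixel, 16 TMUs
pixels, ratio = effective_pixel_pipes(48, 4, 16)
print(pixels, round(ratio, 2))  # 12 pixels, ~1.33 texels per pixel
```

The point is only that dividing shader capacity by a per-pixel cost yields a pixel figure that can be compared against TMU counts, which is what makes the resulting pixel:texel ratios look realistic.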


Review of the Starfighter PCI is up. Long story short, local texturing does not always have to be faster.


This month I investigated the CPU and PCI bandwidth dependency of the PCX1. A K7S5A with a mobile Athlon and slowly timed SDR memory was used. It is a system too modern for the PCX1, but with downclocking it seemed reasonable. As you can see, PCI bandwidth was hardly a factor there. Maybe with a platform from the proper era it would look different.
And a 1024x768 JPEG Tomb Raider screenshot from Simon was added to the PCX2 review, because even with his guidance I could not get screenshots to work.
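A back-of-envelope calculation suggests why PCI bandwidth was hardly a factor: the theoretical peak of a 32-bit/33 MHz PCI bus is far above the texture traffic a card of this class would generate per frame. The resolution, texel size, and traffic model below are my own illustrative assumptions, not measured figures.

```python
# Back-of-envelope check (my own numbers, not measurements): theoretical
# peak of a 32-bit/33 MHz PCI bus versus rough per-frame texture traffic.

PCI_PEAK_MBPS = 33.33e6 * 4 / 1e6          # ~133 MB/s theoretical peak

def frame_traffic_mb(width, height, bytes_per_texel, overdraw=1.0):
    """Rough texel traffic per frame if every pixel fetched one texel."""
    return width * height * bytes_per_texel * overdraw / 1e6

per_frame = frame_traffic_mb(640, 480, 2)   # 16-bit texels at 640x480
fps_limit = PCI_PEAK_MBPS / per_frame       # bus-imposed frame rate ceiling
print(round(per_frame, 2), "MB/frame, ceiling ~", round(fps_limit), "fps")
```

Even this naive model puts the bus-imposed ceiling well above what the chip itself can render, so the bottleneck sits elsewhere; real traffic with caching would be lower still.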

Also, there is yet another new column in the database, called alternative clock. It will hold the value of an alternative domain clock (mostly shader), or boost or base clocks, depending on the SKU obviously.
I am going to review one more card in April, time to slay another false legend :)

More updates in the database; most of the triangle setup rates should now be correct. Because I don't like empty cells, the geometry processing column will now also contain geometry shader engine versions and unit counts. They started to appear almost exactly with the transition to unified shaders, so hopefully it won't cause confusion.
After spending some more time with 3D Rage II cards, I am convinced they need three clocks to map one texel. The first 3D Rage needs even more.
Fixed my blunder with the Midas 3: the card has only one megabyte of memory for textures and parameters. My bad.
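The arithmetic behind a clocks-per-texel claim is simple: peak texel rate is just core clock divided by clocks per texel. A tiny sketch, where the 60 MHz clock is only an illustrative figure, not a confirmed 3D Rage II specification:

```python
# Quick arithmetic for clocks-per-texel claims: if a chip needs N clocks to
# map one texel, its peak texel rate is clock / N. The 60 MHz figure below
# is illustrative only, not a confirmed 3D Rage II spec.

def texel_rate_mtexels(clock_mhz, clocks_per_texel):
    """Peak texture fill rate in megatexels per second."""
    return clock_mhz / clocks_per_texel

print(texel_rate_mtexels(60, 3))  # 20.0 Mtexels/s at 60 MHz, 3 clocks/texel
```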

This month nothing retro; I was working on my database. Recently triangle setup rates became a bit interesting. I used to keep these in one column together with the vertex transformation rates used for vertex-shader-era cards. Once unified shaders came, the numbers exploded and I stopped filling them in. And now we finally have chips scaling triangle rates above one per clock (not necessarily rasterization). I wanted to put this in, so the values are finally divided into separate columns. As usual, the values are theoretical peaks. In case of more options (like when some GeForce SKU has a variable number of GPCs), the best case is used. For cards without a setup engine, maximal triangle throughput will be used in that column. I certainly made mistakes when refilling them from top to bottom, but that will get fixed later.
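The "best case" rule described above can be sketched as: theoretical peak equals the largest possible number of setup engines times triangles per engine per clock times clock. The function and all figures below are hypothetical, just to show the rule; they do not describe any particular SKU.

```python
# Sketch of the "best case" rule for the database column: theoretical peak
# triangle setup rate = max setup units x triangles/unit/clock x clock.
# Unit counts and clocks here are made up for illustration.

def peak_triangle_rate(possible_unit_counts, tris_per_unit_per_clock,
                       clock_mhz):
    """Peak setup rate in Mtriangles/s, taking the best configuration."""
    best_units = max(possible_unit_counts)  # best case is used
    return best_units * tris_per_unit_per_clock * clock_mhz

# Hypothetical SKU that may ship with 3 or 4 GPCs, 1 triangle/clock each
print(peak_triangle_rate([3, 4], 1, 1000))  # 4000 Mtris/s best case
```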

Happy new year. No promises on new content, time will tell.
older news

The purpose of this website is to remember and find new information about the first generation of gaming 3D cards. What is first gen? Well, there are many definitions; a common one would probably distinguish three early generations of 3D gaming chips: first geometry accelerators like the Millennium or Imagine 128 II, then first texture mappers like the Gaming Glint or Virge, and finally "mature" architectures like the Verite and Voodoo. I want to cover all of this range, from the very first accelerators (if possible) to anything released before the Voodoo2. From there on, 3D chipsets received more and more comprehensive reviews and are therefore well known. I want to show barely tested chips in an extensive collection of real games. The benchmark suite currently consists of around 40 games from 1996 to 1999 and 2-3 artificial benchmarks. Reviews are not done from a user perspective. I am trying to examine and compare performance, which means no proprietary APIs. Cards are tested with the latest or best available drivers and in a system saturating the video accelerators with power unreachable at that time. In the future I might build a low-end rig to examine performance in a budget PC. The main purpose is to learn about chips which were not reviewed extensively in the ways we are used to now. In fact, the gaming performance of most first generation cards is remembered almost only through word of mouth. I am not a professional hardware reviewer, nor a graphics technology expert, but I will try my best to reveal the real capabilities of vintage 3D cards.

My benchmarking practices are definitely not the best. I don't run the tests multiple times unless the results seem off; I simply don't have enough time. Also, most of the old boards lack the option to disable vsync, despite my trying various tweakers. Because of this I run all the tests with vsync, unless the application itself can disable it. This wouldn't be much of a problem with some high-speed CRT, but I have only LCDs now, so 75 Hz is used. Still, the results have value, since vsync is almost always on by default and most users don't change video settings. The performance corresponds to the user experience; however, speed differences between cards can be skewed.
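The skew mentioned above comes from frame rate quantization: with vsync and double buffering, a frame that misses a refresh waits for the next one, so observed rates snap down to refresh/n. A minimal model of that effect, assuming plain double buffering and a fixed per-frame render time (real games vary frame to frame, so the snapping is softer in practice):

```python
import math

# Minimal model of vsync quantization at 75 Hz: with double buffering, a
# frame that misses the vertical blank waits for the next one, so the
# observed rate snaps to refresh / n. Assumes a constant render time per
# frame and no triple buffering.

def vsynced_fps(raw_fps, refresh_hz=75.0):
    """Frame rate actually observed with vsync on."""
    frame_time = 1.0 / raw_fps
    refresh_time = 1.0 / refresh_hz
    # wait for the next vertical blank after rendering finishes
    intervals = math.ceil(frame_time / refresh_time)
    return refresh_hz / intervals

print(vsynced_fps(70))  # 37.5 - a 70 fps card reads the same as a 40 fps one
print(vsynced_fps(40))  # 37.5
```

This is why vsynced numbers still reflect the user experience while compressing real speed differences: two cards rendering at 70 and 40 fps can both report 37.5 fps at a 75 Hz refresh.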