
Potential Navi Benchmark: Better Graphics, Lower Compute Performance than Vega 64

An unknown AMD GPU (66AF:F1) has appeared on CompuBench, and some believe it could be a Navi part. GFXBench scores suggest a card that exceeds the Radeon RX Vega 64 in graphics capability but falls behind in certain compute tests, even against the Vega 56. Notebook Check cautions this could actually be a Vega 20 GPU ("we've seen Linux drivers listing 0x66AF as Vega 20").

A comparison of the GFXBench scores of the AMD 66AF:F1 with the Radeon RX Vega 64 shows that the purported Navi variant leads significantly in the Aztec Ruins Normal Tier (1080p) and High Tier (1440p) tests. This could imply that GCN6 in Navi is tailored more toward raw graphics than compute. We aren't exactly sure about the specs of this particular entry, but expect to see variants with anywhere between 20 and 40 higher-clocked CUs when Navi launches.

Posted by Megalith March 10, 2019 1:25 PM (CDT)

GeForce GTX 1660 Ti Mega Benchmark: 34% Faster than GTX 1060 6G at 1440p

Hardware Unboxed’s latest mega benchmark pits the NVIDIA GeForce GTX 1660 Ti against 33 games at 1440p. Steve found the 1660 Ti performing 34% faster on average compared to the GTX 1060 6G, with significant differences in titles such as Shadow of the Tomb Raider, Rainbow Six Siege, and Apex Legends. While the GPU performed similarly to the GTX 1070 ("same performance"), it was 14% slower on average than the RTX 2060 and 8% slower on average than the Radeon RX Vega 56.
Posted by Megalith February 24, 2019 6:00 PM (CST)

FFXV Benchmark: NVIDIA GeForce GTX 1660 Ti on Par with GTX 1070, Titan X

"It could be the mid-range savior many gamers have been hoping for." According to a new Final Fantasy XV benchmark, NVIDIA’s next Ti effort performs similarly to a GTX 1070 and Titan X. VideoCardz has published photos of the GIGABYTE GeForce GTX 1660 Ti OC and packaging.

A score of 5000 points would make the new GPU 40% faster than the GTX 1060 6GB, and 70% faster than the GTX 1060 3GB in this specific benchmark. Currently those two cards are available from $200-300 and $150-250, respectively, so the GTX 1660 Ti would be better value than all but the cheapest 1060 3GB models on offer.
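Working backward from those claims gives a rough sense of the implied baseline scores (a quick Python sketch; only the 5000-point figure and the 40%/70% deltas come from the article, the GTX 1060 scores are derived, not reported):

```python
# Implied GTX 1060 scores, derived from the article's percentage claims.
gtx_1660_ti = 5000

# "40% faster than the GTX 1060 6GB" implies a 6GB score of 5000 / 1.40.
gtx_1060_6gb = gtx_1660_ti / 1.40
# "70% faster than the GTX 1060 3GB" implies a 3GB score of 5000 / 1.70.
gtx_1060_3gb = gtx_1660_ti / 1.70

print(round(gtx_1060_6gb))  # ~3571 points
print(round(gtx_1060_3gb))  # ~2941 points
```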

Posted by Megalith February 17, 2019 12:25 PM (CST)

AMD Radeon VII 33-Game Benchmark: "It Makes the RTX 2080 Look Pretty Good"

Hardware Unboxed’s Steve Walton has put the Radeon VII through its paces with 33 titles (public driver), but the numbers aren’t really in AMD’s favor. Their latest effort is 7% slower on average at 1440p compared to the GeForce RTX 2080, leading to Walton’s comment that the Radeon VII has "managed to make the RTX 2080 look pretty good in today’s climate." He admits he was hoping it would "obliterate" the 2080 so NVIDIA would reconsider their pricing, but that’s not happening. AMD’s latest effort was also 5% slower than the GTX 1080 Ti at 1440p on average.
Posted by Megalith February 10, 2019 5:50 PM (CST)

AdoredTV Analyzes AMD Engineering Sample Benchmarks

Some AMD engineering samples with strange performance figures have been popping up in the UserBenchmark database recently. In an effort to put those results in perspective, AdoredTV just uploaded a video that starts with a brief history of CPU memory hierarchies. Then, he attempts to analyze just what's going on with the AMD engineering sample's inconsistent latency curves. Check it out in the video below:

A look back at the early days of cache-less computing, and ahead to what's coming next with Zen 2.

Posted by alphaatlas February 05, 2019 8:12 AM (CST)

NVIDIA DLSS Boosts Performance by up to 50% in the Port Royal Benchmark

NVIDIA has released a new video that demonstrates the power of its DLSS technology. In the Port Royal benchmark, NVIDIA DLSS boosts performance by up to 50%. While both images are ray traced in the demo, the DLSS image is clearly brighter, clearer, and sharper. Deep learning makes this possible as it improves transparent surfaces, anti-aliasing, and more.

3DMark's latest benchmark test, called Port Royal, moves users through an environment featuring real-time ray-traced reflections and shadows, to test the performance of GeForce RTX GPUs and their RT Cores, the tech that makes real-time ray tracing a reality. DLSS has now been added to Port Royal, enabling GeForce RTX users to see and measure the performance and image quality benefits of this game-changing technology.

Posted by cageymaru February 04, 2019 1:53 PM (CST)

Check Clockspeeds and Benchmarks Before Buying a Gaming Laptop

Nvidia RTX-equipped gaming laptops are out, and both Nvidia and their retail partners are heavily promoting them. But before you jump on one, remember that Nvidia (and AMD) have a long-running habit of using somewhat deceptive branding for their laptop GPUs. As TechSpot points out, the mobile RTX 2080, 2070, and 2060 are not necessarily equivalent to their desktop counterparts. While they appear to use the same silicon as the desktop variants this time around, which hasn't always been the case, mobile RTX GPUs ship with significantly lower clockspeeds than desktop cards. The desktop RTX 2080, for example, features a base clock of 1,515 MHz, while the standard mobile version runs at 1,380 MHz. Meanwhile, the RTX 2080 "Max-Q" variant only runs at a base clock of 735 MHz and boosts to 1,095 MHz under ideal conditions, which a laptop isn't necessarily going to have. Nvidia's Turing architecture is relatively power efficient, meaning these laptops are likely to perform well, but don't expect desktop performance from a laptop graphics card carrying the same name. Thanks to tordogs for the tip.

With so much leeway in terms of what speed to clock cards at, it’s easy to see how performance could vary across different laptops with Nvidia's RTX series cards. The lesson here, again, is to pay close attention to the actual clock speed of the GPU in the machine you’re considering purchasing.
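To put those clock figures in perspective, a quick Python sketch (base clocks taken from the numbers cited above) of how far each mobile variant sits below the desktop RTX 2080's base clock:

```python
# Base clocks in MHz, as cited in the article above.
desktop_rtx_2080 = 1515
mobile_rtx_2080 = 1380
max_q_rtx_2080 = 735  # Max-Q base clock; boosts to 1,095 MHz under ideal conditions

def base_clock_deficit(mobile_mhz: float, desktop_mhz: float) -> float:
    """Percent below the desktop base clock."""
    return (1 - mobile_mhz / desktop_mhz) * 100

print(round(base_clock_deficit(mobile_rtx_2080, desktop_rtx_2080)))  # ~9 (% below desktop)
print(round(base_clock_deficit(max_q_rtx_2080, desktop_rtx_2080)))   # ~51 (% below desktop)
```

Boost behavior and sustained clocks depend on each laptop's cooling, so the real-world gap can be even wider than the base-clock deficit suggests.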

Posted by alphaatlas January 30, 2019 8:35 AM (CST)

AMD Radeon VII Benchmarks Leak Out

Citing the Twitter dataminers APISAK and Komachi, Videocardz spotted some Radeon VII benchmarks that leaked out ahead of the card's official launch. The card appears to have achieved a graphics score of 6688 in Fire Strike's Ultra preset, which beats out a factory overclocked RTX 2080's score of 6430, and Videocardz says the Radeon VII appears to be faster in the performance preset as well. Meanwhile, the VII appears to struggle with Final Fantasy 15's built-in benchmark. It only slightly edges out an RTX 2070 in the "standard quality" benchmark, while falling between the GTX 1070 Ti and the aging GTX 980 Ti with the "high quality" preset.

AMD Radeon VII board partner models: There will be no custom models at launch. Our sources tell us to expect such designs no sooner than during Computex (late May). So far we have a few reference-based models with their own branded packaging and sometimes even custom stickers.

Posted by alphaatlas January 29, 2019 10:50 AM (CST)

AMD Cinebench Benchmark Demo at CES 2019 Buries the Current Intel Lineup

AdoredTV on YouTube analyzes what was actually shown in the AMD Ryzen Zen 2 vs Intel Core i9-9900K demonstration at CES 2019. He explains what the numbers mean, how AMD derived them, and why the AMD Zen 2 7nm chip that defeated the cream of the crop from Intel was actually just a lower-midrange model. Last of all, AdoredTV discusses how the Ryzen 5 pulled off this feat while using only half the power of the Intel offering.

Intel gets rekt by lower-midrange Ryzen 5.

Posted by cageymaru January 11, 2019 7:41 PM (CST)

Here are AMD's Radeon VII Benchmarks

AMD announced their Radeon VII card yesterday, and made some interesting performance claims to go with it. AMD says their 7nm Vega Radeon is at least 25% faster than Vega 64, and showed a few benchmarks with even higher gains during their CES presentation. But, in footnotes buried in their presentation and their press release, AMD spelled out the testing methods behind those claims. On January 3rd, AMD benched Vega 64 and Radeon VII on an Intel i7 7700K rig with 16GB of 3000 MHz DDR4. They didn't mention the motherboard or any other hardware specifics, but they did say they used "AMD Driver 18.50" and Windows 10. All the games they tested were run at "4K Max Settings," and they took the average framerate of 3 separate runs to get an average FPS measurement for each game. I've tabulated the benchmark data below:
Most games do indeed show a 25% gain or better, but there are clear outliers in the data. Performance in Fallout 76, for example, seems to have increased a whopping 68%. Strange Brigade saw a similarly disproportional 42% increase, while Hitman 2 only ran about 7% faster. The Strange Brigade data is particularly interesting, as that was the game AMD seemingly used to compare performance to Nvidia's RTX 2080. Using what appears to be the same test setup as above, AMD claims Strange Brigade ran at 73 fps on a RTX 2080, compared to 61 FPS on Vega 64 and 87 FPS on Radeon VII. Regardless, benchmarks from manufacturers have to be taken with a grain of salt, and as AMD mentions, performance on pre-release drivers can be sketchy anyway. We'll be sure to verify some of AMD's claims ourselves soon.
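The Strange Brigade figures can be sanity-checked with a few lines of arithmetic (a Python sketch; the frame rates are the ones AMD quoted):

```python
# AMD's quoted Strange Brigade frame rates (4K max settings, per the article).
fps = {"Vega 64": 61, "RTX 2080": 73, "Radeon VII": 87}

def uplift_pct(new_fps: float, old_fps: float) -> float:
    """Percentage gain of new_fps over old_fps."""
    return (new_fps / old_fps - 1) * 100

# Radeon VII vs. Vega 64: ~43%, consistent with the ~42% gain noted above.
print(round(uplift_pct(fps["Radeon VII"], fps["Vega 64"])))
# Radeon VII vs. RTX 2080: ~19% faster in AMD's own numbers.
print(round(uplift_pct(fps["Radeon VII"], fps["RTX 2080"])))
```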
Posted by alphaatlas January 10, 2019 10:29 AM (CST)

Denuvo's Negative Impact on Performance, Loading Times Revealed in Benchmarks

Overlord Gaming has tested the performance and loading times of six more games with and without Denuvo and found that the infamous anti-tamper technology does, in fact, have an adverse effect: titles such as Dishonored 2 and Bulletstorm appear to see increases in framerate by about 5% to 10% with the DRM removed, while loading times improved by as much as 25% in some titles. eTeknix is promoting at least some level of skepticism, however, as the testing methodology and hardware involved isn’t made entirely clear.

This would, therefore, appear to be the most compelling and complex evidence to date. Evidence that clearly suggests that it does, indeed, have a pretty notable impact on performance. Without meaning to sound critical of the results, I would perhaps have liked to have seen a little more detail in the methodology. For example, were the tests carried out on the same system? Two identical systems? In addition, was the RAM fully "cleared" before each test was conducted?

Posted by Megalith December 29, 2018 1:05 PM (CST)

First Gaming Benchmarks Revealed for the NVIDIA Titan RTX

JayzTwoCents benchmarked the Titan RTX earlier this week to see how it compared to the 2080 Ti and found it only provided an average improvement of 10 FPS in 4K, lending credence to those who say the price difference isn’t at all justifiable. This sentiment was echoed by Gamers Nexus last night after Steve Burke published his findings and proclaimed "this isn’t a worthwhile purchase for gaming or enthusiast users." Linux fans may check out Phoronix for Michael Larabel’s take, who was more positive and noted there were "no driver issues to report."

As we can see, the Titan RTX is faster than the RTX 2080 Ti by around 10 FPS in 4K on average, and let me tell you that this performance boost does not justify the enormous price difference between the RTX 2080 Ti and the Titan RTX. I mean, the GeForce RTX 2080 Ti is already considered an overpriced GPU, so I can’t see anyone getting the Titan RTX -- at $2,500 -- just to use it for gaming. Yes, it can play games, but there is no reason at all for PC gamers to invest in it, even when money is no object.

Posted by Megalith December 23, 2018 2:30 PM (CST)