Adventures in switching from NVIDIA to AMD!

An NVIDIA GPU on fire

Recently, my gaming desktop, a Frankenstein of components accumulated since 2014, underwent another upgrade. This time it was the GPU: I went from an NVIDIA GeForce GTX 1080Ti to the AMD Radeon RX 7900 XTX. This is the first AMD GPU in my desktop since 2013, and to say the least, the migration itself wasn't the smoothest. Still, after working out some of the bugs, I'm pretty happy with the upgrade.

My GPU prior to the upgrade was the EVGA GeForce GTX 1080Ti FTW3, featuring the EVGA iCX cooler design. I purchased this card new for around $720, just before the cryptocurrency rush that decimated the GPU market. I ran it in my system for approximately five years, during which the system grew from a single 1080p 144Hz display to a triple monitor setup sporting a 28″ 4K HDR 144Hz display and two 24″ 1080p 144Hz monitors in SDR. Even today, the GPU remains pretty competent at running many popular games at 4K resolution with maxed or nearly maxed settings, with some exceptions. Games such as BattleBit Remastered? No problem. Battlefield 2042 with the eye candy turned up and HDR enabled? Cinematic frame rates, but still playable. CS:GO? Of course; that game runs on a potato, and playing it at 4K is pretty glorious. Halo Infinite and Overwatch 2? Playable as well.

Playable, however, is the key word. It doesn't mean the game ran smoothly. Battlefield 2042, for example, would run at 20-30FPS. Overwatch 2 would run around 40-50FPS. Games at 4K would begin to fill up the 1080Ti's 11GB of video memory to the point where merely adjusting the render resolution from, say, 50% to 60% in Halo Infinite would completely bomb out the frame rate to unplayable levels. And if I was watching video on the other monitors, those videos would dip and stutter down to frame rates similar to whatever game I was playing. Still, five years out of a gaming GPU is a pretty good life when playing AAA games, wouldn't you say?

Choosing the AMD / XFX MERC310 Radeon RX 7900 XTX

I chose the RX 7900XTX for a few reasons.

Price: Anyone who has been GPU shopping over the last year or two knows all about the pricing of high-end chips. Compared to the past, GPU prices have quite literally gone to the moon. Whatever that means: moon as in always up, or moon as in what happened to the meme stocks once the meme launched, left the planet, and died in a horrible crash landing on the far side of the moon. When I first got serious about PC gaming, high-end GPUs like the NVIDIA 8800GT ran $180-220 and were pretty affordable. Today, a flagship GPU will run you somewhere between $850-$1,000 for AMD's offering and $1,300-$2,600 for NVIDIA's. Sure, we can blame market conditions caused by COVID-19, AI and cloud computing, sanctions, wars, and price fixing, all of which are current world issues. We could also blame cryptocurrency booms (no longer an issue since the move to Proof of Stake) and gamer demand during the COVID-19 lockdowns (no longer in effect in 2023). Or we could blame any number of other reasons why GPUs are so expensive today. Corporate greed, maybe?

In my case, two factors were considered: Raster/$ and VRAM. As mentioned previously, I run a triple monitor setup, game at 4K, and modern games already punish the 11GB of VRAM on the 1080Ti. That narrowed my options to a handful of GPUs: the Intel Arc A770, AMD RX 7900 XT, AMD RX 7900 XTX, NVIDIA GeForce RTX 4080, and NVIDIA GeForce RTX 4090, all of which sport 16GB or more of video memory. In my mind, more == better when it comes to VRAM long term, especially as games become more demanding, lean harder on texture streaming (a feature still in its infancy on Windows compared to, say, Xbox and PlayStation), and as I consider higher-end monitors. Not to mention, while games today aim for upscaling (DLSS, FSR) to save VRAM and improve frame rates, I play at native resolution without those technologies, so the extra VRAM comes in handy.
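
To put some rough numbers behind the "more == better" intuition, here's a back-of-the-envelope sketch in Python. The render target count and pixel format are illustrative assumptions on my part, not measurements from any particular game; real titles spend far more VRAM on textures and streaming pools than on these buffers:

```python
# Back-of-the-envelope render target memory at each resolution I run.
# Assumed: a deferred renderer with ~6 full-resolution buffers at
# 8 bytes per pixel (e.g. RGBA16F). Illustrative only.

BYTES_PER_PIXEL = 8
RENDER_TARGETS = 6

def render_target_mib(width: int, height: int) -> float:
    """Approximate MiB consumed by full-res render targets alone."""
    return width * height * BYTES_PER_PIXEL * RENDER_TARGETS / 2**20

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(f"{name}: ~{render_target_mib(w, h):.0f} MiB of render targets")
# 1080p: ~95 MiB, 4K: ~380 MiB. Four times the pixels, four times the
# buffers, before a single texture or shadow map is counted.
```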

I am not doing much work with AI/generative products and, likewise, don't have much interest in them right now due to how much misuse and hype is circulating around them. NVIDIA has excellent support for these workloads; AMD's alternative (ROCm) is supposed to be a more open development framework, but it's less mature there. This was a low consideration for me.

Performance: Raster/$ ended up being the harder proposition, since raster performance varies from game to game based on how the engine is built and on the GPU's drivers. Intel, for example, has been making impressive strides with the first-generation A770, getting it to compete with NVIDIA's and AMD's previous-generation parts for only $300 or less new. However, issues with DX11 and older titles due to missing hardware support for those older games (despite Intel's work integrating DirectX-over-Vulkan fixes from the community), along with poor game support for Intel GPUs, made me discount the A770 for this system. Hopefully Intel's Battlemage line-up ends up being a very solid contender next generation. On the AMD vs. NVIDIA front, it really boiled down to what made sense. The 7900 XTX trades blows with the 4090 and the 4080, often slotting itself somewhere in the middle between those two cards in raster. Ray tracing performance wasn't really a consideration, since many of the games I play, whether new or old, don't utilize ray tracing at all; they care about raster. Given the 7900 XTX is cheaper than both at $960 (what I paid; the price has since dropped further while NVIDIA cards have gone up…), the price difference works out to about $200 versus the 4080 and $1,050 versus a 4090. After much debate, this ended up being a no-brainer.
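
If you want the arithmetic spelled out, here's a tiny sketch. Only the prices come from above; the raster index values are placeholders I made up to stand in for "trades blows", not benchmark results:

```python
# Price-delta math from the paragraph above. The raster_index values
# are illustrative placeholders, not measured benchmark data.
cards = {
    "RX 7900 XTX": {"price": 960,  "raster_index": 100},
    "RTX 4080":    {"price": 1160, "raster_index": 100},  # ~$200 more
    "RTX 4090":    {"price": 2010, "raster_index": 125},  # ~$1,050 more
}

baseline = cards["RX 7900 XTX"]["price"]
for name, card in cards.items():
    value = card["raster_index"] / card["price"] * 100
    print(f"{name}: +${card['price'] - baseline} over the XTX, "
          f"{value:.1f} raster points per $100")
```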

Software: This is more a pet peeve of mine. When it comes to the presentation of the software, AMD is doing better in my mind than NVIDIA. NVIDIA ships GeForce Experience for gaming enhancements like a HUD, ShadowPlay, automatic game tuning, and driver updates. Separate from that is the NVIDIA Control Panel, a piece of software which works, but which hasn't been updated in any significant fashion since at least the early 2000s. All of these programs work and do their job; however, GeForce Experience requires an NVIDIA account just to use it. Without one, the software sits there taking up space unless you choose to remove it from the driver installation bundle. When GeForce Experience launched several years ago, I was against requiring an account, as it seemed pointless to me, but I ultimately caved. To this day, it still seems pointless, serving no purpose other than to unlock an application.

AMD's Adrenalin software pretty much does everything you might expect a GPU control panel to do, and then some. Out of the box, it auto-detected ALL of my games, regardless of which storage device they're installed on (GeForce Experience always required me to manually add the other drives). It supports AMD's version of what NVIDIA calls ShadowPlay. It automatically searches for and updates drivers. It has your typical GPU controls for adjusting display settings and GPU properties like Resizable BAR. And it doesn't require an account to use! You just install the driver. To top it off, it also has features which NVIDIA restricts to their workstation cards like the Quadro RTX chips: built-in temperature monitoring, clock speed monitoring, and performance tuning. Adrenalin builds in overclocking functionality for adjusting power targets and clock speeds, which is quite cool. That's one less program I have to download to maximize performance, or to fine-tune the card based on the game I'm playing.

Below are some screenshots of their control panel.

With that said, the software overall has been fine, though AMD's drivers are not as mature as NVIDIA's. For example, I experienced a fair amount of stuttering while the GPU was under load when interacting with Chromium applications. AMD seems to have fixed this with the introduction of Hardware-Accelerated GPU Scheduling in the latest drivers (23.12.1). That update has also brought some occasional game crashes where the game thinks the graphics driver has hung, even though it hasn't actually crashed or hung. This seems to be a scheduling-related bug that only happens when Alt+Tabbing, specifically if I press Alt (which can bring up context menus) before pressing Tab. As for creative work, VEGAS Pro didn't have good support for AMD-accelerated video encoding (AMD VCE); to solve that, I needed a plug-in called Voukoder. However, compared to my old 1080Ti, I experience fewer crashes with the AMD card both when editing and when encoding video, and that seems to point towards the restrictions NVIDIA places on their NVENC/NVDEC video engines, restrictions AMD doesn't have. I would regularly hit NVDEC limits which would cause VEGAS to lose access to the GPU, or VLC would fail to show video because VEGAS was using the GPU with too many streams open. I'm glad to be rid of that without having to meddle with hacked drivers.

12VHPWR (ATX 3.0): Yes, 12VHPWR is another reason I avoided NVIDIA this round. I constantly see posts on /r/pcmasterrace, /r/nvidia, and various other forums about melted 12VHPWR connectors, blamed on poor connector contact, cable sag, power draw, and whatnot. Some of the failures have taken months, others weeks. Sure, the connector and standard should be good for 600 Watts of power delivery, and yes, my power supply has a 12VHPWR / ATX 3.0 connector and cable, but 600 Watts at 12V is risky with how few conductors exist in a 12VHPWR cable. It seems sensible that distributing 400 Watts of power (the 7900 XTX's power draw with overclocks) across far more conductors (three classic PCIe ATX 2.0 8-pin connectors at 150W each, plus another 75W from the PCIe slot) would avoid the fire risk 12VHPWR has been proving to be. I'm playing it safe. With that said, NVIDIA cards are generally more power efficient, but the connector-melting posts about the 4080/4090 are unsettling.
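
The math behind that reasoning is simple enough to sketch. Pin counts are the standard ones (six 12V conductors in a 12VHPWR connector, three in each 8-pin PCIe connector), and I'm assuming current shares evenly across pins, which a worn or poorly seated connector won't do:

```python
# Current per 12V conductor: 12VHPWR vs. three 8-pin PCIe connectors.

def amps_per_pin(watts: float, live_pins: int, volts: float = 12.0) -> float:
    """Evenly shared current per 12V conductor (ideal case)."""
    return watts / volts / live_pins

# 12VHPWR at its full 600W rating across six 12V pins:
print(f"12VHPWR @ 600W: {amps_per_pin(600, 6):.2f} A per pin")   # 8.33 A

# Three 8-pin connectors (150W each, three 12V pins each) carrying
# 450W, plus 75W from the slot, covers a ~400W 7900 XTX with headroom:
print(f"3x 8-pin @ 450W: {amps_per_pin(450, 9):.2f} A per pin")  # 4.17 A
```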

It hasn’t been all (mostly) fine and dandy though!

When I first installed the AMD Radeon RX 7900 XTX, all was well. I uninstalled the NVIDIA drivers, used DDU to perform a clean-up, shut down, installed the AMD GPU, installed the AMD drivers, rebooted, and all was fine… for about a day. After the first day of use, the GPU began to exhibit a strange lock-up/hang problem which would only occur in specific situations. For example, when running games, Alt+Tabbing and interacting with a Chromium application would hang all of the displays. I started to mess around with Resizable BAR and disabling hardware acceleration in the Chromium applications; you know, the things pitched around online as frequent fixes for AMD GPUs. Disabling Resizable BAR in Windows seemed to help, but the loss of hardware acceleration in Chrome was less than ideal.

On the following day, I started noticing worsening behavior. My displays would take longer than normal to wake from sleep. I then disabled Resizable BAR altogether in the BIOS and turned off a few other options such as IOMMU support, thinking this might be an issue with my motherboard/CPU combination (ASUS PRIME X370-PRO with an AMD Ryzen 7 5800X3D). Nope. Reboot into Windows and… hang at the login screen! I can't get into Windows! At this point I flip over to my card's second BIOS, thinking something is wrong with the factory profile on the primary BIOS, and the same thing happens. Check power connections, re-seat the card, nothing! Then I start unplugging displays and booting up with one monitor at a time. I notice the GPU hangs whenever Windows loads my display calibrations (for color accuracy). Disable those, and all is fine… mostly.

The hanging problem still persisted, except now it occurred whenever hardware-accelerated video played on my primary display (a 4K Gigabyte M28U with FreeSync Premium enabled) and was put into full screen mode. VLC? Hang. Firefox? Hang. Disable hardware acceleration? Still hanging! So I head into Radeon Settings to check the status of my monitors, and I notice that upon mousing over my primary display, the Gigabyte M28U, the graphics driver hangs. But if I mouse over the other two displays (an AOC 24G2W1G4 and an ASUS VG248QE), the driver does not hang, and a red outlined box appears as it should to identify which display I'm selecting.

I then move my 4K display from the center DisplayPort output to the right-most one, and move the VG248QE to the center port. The 4K display comes up without an issue. The VG248QE? It's detected by Windows and in Radeon Settings, but there's no video. However, the graphics driver no longer hangs, whether I play video, go full screen, mouse over the display in Radeon Settings, or run a game and interact with Chromium apps. I switch the cables back, and the problem returns. I move my AOC 24G2W1G4 to the center port: same problem, detected in Windows but no video sync. Only the 4K monitor will sync on that port, which makes no sense, since of my three monitors it demands the most bandwidth from DisplayPort. Could it be something with adaptive refresh rate (FreeSync)? Perhaps, but I never dove into it further. Instead, I came up with a better fix.

The fix? Switch the AOC 24G2W1G4 to HDMI, and put the Gigabyte M28U and the ASUS VG248QE on DisplayPort, avoiding the center DisplayPort output on the GPU. No more hangs, no more crashes, and ALL of the displays operate at their max resolution and max refresh rate (144Hz), with AMD FreeSync Premium, and with the 4K monitor also running in HDR mode. Hardware acceleration is enabled in Chromium apps and everything now works without any crashes. After turning Resizable BAR and IOMMU back on, performance only went up, with none of the crashes or instability returning. Finding this sort of solution prompted some web searches, and it turns out others have experienced the same thing with other AMD 7900 XTXs [Link 1] [Link 2] [There are more…], with one of the DisplayPort ports misbehaving, or, on cards with USB-C ports, the USB-C output causing problems. DDU appears to fix the problem for those affected for a day or two before the issues return. That sounds suspiciously like my issue, and if it truly is a driver problem, hopefully AMD fixes it soon. (Note: I've been using the card for a few months since finding this fix, and I have not tried the center DisplayPort port since…)
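
As a footnote on the bandwidth point, here's a rough calculation of just how lopsided the demands on those ports are. I'm assuming 10-bit color for the HDR display and a flat ~10% blanking overhead; exact timings vary:

```python
# Approximate uncompressed video data rate per display vs. the
# ~25.92 Gbps effective payload of a DisplayPort 1.4 (HBR3) link.

DP14_EFFECTIVE_GBPS = 25.92
BLANKING_OVERHEAD = 1.10  # rough allowance for blanking intervals

def required_gbps(w: int, h: int, hz: int, bits_per_channel: int) -> float:
    return w * h * hz * bits_per_channel * 3 * BLANKING_OVERHEAD / 1e9

displays = {
    "Gigabyte M28U (4K 144Hz, 10-bit HDR)": required_gbps(3840, 2160, 144, 10),
    "AOC 24G2W1G4 (1080p 144Hz, 8-bit)":    required_gbps(1920, 1080, 144, 8),
    "ASUS VG248QE (1080p 144Hz, 8-bit)":    required_gbps(1920, 1080, 144, 8),
}
for name, rate in displays.items():
    note = " -> needs DSC" if rate > DP14_EFFECTIVE_GBPS else " -> fits uncompressed"
    print(f"{name}: {rate:.1f} Gbps{note}")
# The 4K display needs Display Stream Compression just to fit the link,
# yet it was the only one of the three that would sync on that port.
```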

Hopefully this helps anyone experiencing issues like I did!

How is Video Encoding/Decoding with the AMD GPU?

Honestly, not bad. Going into the switch from NVIDIA to AMD, I had some fears about video acceleration on AMD. The last time I used an AMD card in my primary rig was back when the AMD Radeon HD5770 was out. That card was another one of those "built well, but with quirks" GPUs: it ran games well, until you tried to use accelerated video decoding. On that card, the VRAM clock would drop from 1200MHz to 900MHz whenever video-accelerated content was playing, with a noticeable performance impact while gaming. AMD seems to have fixed that problem at least; it's no longer an issue.

AV1 decoding performance is great. YouTube (in Mozilla Firefox) serves AV1, and decoding works great. Discord? They haven't updated their VCE plug-in beyond using Direct3D for decode and encode, so no support for AV1-accelerated screen capture or decoding yet. Handbrake? The latest versions work great with AMD VCE, with HEVC and AV1 encoding producing video I'm very satisfied with (transcoding VC-1 content to HEVC 10-bit has no perceivable quality loss at CQ 14!). H.264 encoding? AMD is a bit weak on encoding performance here, but the quality has improved to be competitive since last year's driver updates added B-frame support.
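
Outside of Handbrake, the same VCE/VCN hardware is reachable through ffmpeg's AMF encoders if you'd rather script your transcodes. Here's a minimal sketch, assuming an ffmpeg build compiled with AMF support; the file names are placeholders, and Handbrake's CQ 14 only loosely maps to the CQP values below:

```python
# Minimal VCE/VCN-accelerated HEVC transcode via ffmpeg's hevc_amf
# encoder. Assumes an AMD GPU and an ffmpeg build with AMF support.
import subprocess

cmd = [
    "ffmpeg", "-i", "input_vc1.mkv",   # placeholder source file
    "-c:v", "hevc_amf",                # AMD hardware HEVC encoder
    "-quality", "quality",             # favor quality over encode speed
    "-rc", "cqp",                      # constant-QP, akin to Handbrake's CQ
    "-qp_i", "14", "-qp_p", "14",
    "-pix_fmt", "p010le",              # 10-bit, where the build supports it
    "-c:a", "copy",                    # pass audio through untouched
    "output_hevc.mkv",
]
subprocess.run(cmd, check=True)
```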

OBS has great support for AMD VCE, so no complaints there. In fact, when I got the card, I performed a few live streams to YouTube at 1440p, 12Mbps, with games running at 4K. Check it out:
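
As an aside on those stream settings, here's a quick bits-per-pixel sanity check. I'm assuming a 60FPS stream; somewhere around 0.05-0.1 bits per pixel is a commonly cited ballpark for H.264:

```python
# Bits-per-pixel for a 1440p stream at 12 Mbps (60FPS assumed).
width, height, fps, bitrate = 2560, 1440, 60, 12_000_000
bpp = bitrate / (width * height * fps)
print(f"{bpp:.3f} bits per pixel")  # ~0.054, on the lean side for H.264
```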

How about some Rig photos?

Of course. For reference, here are my hardware specifications:

  • CPU: AMD Ryzen 7 5800X3D
  • Motherboard: ASUS PRIME X370-PRO
  • Power Supply: SeaSonic Vertex GX-1000
  • Graphics Card: AMD / XFX MERC310 Radeon RX 7900XTX
  • RAM: Corsair Vengeance LPX 32GB (CMK32GX4M4D3600C18)
  • Storage: Intel SSD 530 256GB (Old! I’ve had this since 2014. It’s reaching the write limit with 640TB written…)
  • Storage 2: Samsung 840 Evo 1TB
  • Storage 3: Seagate ST2000DM001 (2TB 7200RPM Disk, CMR)
  • Storage 4: Seagate ST4000DM004 (4TB 5400RPM-class Disk, SMR)
  • CPU Cooler: Noctua NH-U12S
  • Case Fans (x5): Noctua NF-A12 PWM
  • Case: CoolerMaster MasterBox E500L

Conclusion

After a few days of headaches and "break-in", I can say the card has been working well multiple months later. No complaints about purchasing an AMD card this time around. I'll be sure to post more if I come across anything of interest.