- Introduction and Other Observations
- Test Bench and Testing Methodology
- Futuremark Benchmark
- OpenGL Benchmarks
- Middle-earth: Shadow of Mordor
- Rise of the Tomb Raider
- GPU Computation Benchmark
- Folding@home and LuxMark OpenCL Benchmark
- Overclocking Impressions
- Online Purchase Links
The Nvidia GeForce GTX 1070 is the second Pascal-architecture card to use the GP104 core, but as you would expect from a model below a high-end graphics card, it has some sections disabled. Another factor to consider is that the GeForce GTX 1070 uses GDDR5 memory, unlike the GDDR5X on the GTX 1080. Apart from the model number, you cannot visually differentiate between the two cards.
When the Nvidia GeForce GTX 970 came out, it was loved by a lot of people, and it still is now that everyone has had a good look at the pricing of the new-age graphics cards. The Nvidia GTX 1070 has big shoes to fill, especially now that AMD is rolling out newer cards and has also listed its Radeon RX 490 in its rewards promo.
Specifications: GTX 1070, GTX 1080 and GTX 970
| Specification | GTX 1070 | GTX 1080 | GTX 970 |
|---|---|---|---|
| **GPU Engine Specs** | | | |
| NVIDIA CUDA Cores | 1920 | 2560 | 1664 |
| Base Clock (MHz) | 1506 | 1607 | 1050 |
| Boost Clock (MHz) | 1683 | 1733 | 1178 |
| **Memory Specs** | | | |
| Memory Speed | 8 Gbps | 10 Gbps | 7.0 Gbps |
| Standard Memory Config | 8 GB GDDR5 | 8 GB GDDR5X | 4 GB GDDR5 |
| Memory Interface Width | 256-bit | 256-bit | 256-bit |
| Memory Bandwidth (GB/sec) | 256 | 320 | 224 |
| **Technology Support** | | | |
| Multi-Projection | Yes | Yes | Yes |
| VR Ready | Yes | Yes | Yes |
| NVIDIA Ansel | Yes | Yes | — |
| NVIDIA SLI Ready | Yes (SLI HB bridge supported) | Yes (SLI HB bridge supported) | Yes |
| NVIDIA G-Sync-Ready | Yes | Yes | Yes |
| NVIDIA GameStream-Ready | Yes | Yes | Yes |
| NVIDIA GPU Boost | 3.0 | 3.0 | 2.0 |
| Microsoft DirectX 12 API | Feature level 12_1 | Feature level 12_1 | Feature level 12_1 |
| Vulkan API | Yes | Yes | — |
| OpenGL | 4.5 | 4.5 | 4.4 |
| Bus Support | PCIe 3.0 | PCIe 3.0 | PCIe 3.0 |
| OS Certification | Windows 7-10, Linux, FreeBSD x86 | Windows 7-10, Linux, FreeBSD x86 | Windows 8 & 8.1, Windows 7, Windows Vista, Linux, FreeBSD x86 |
| **Display Support** | | | |
| Maximum Digital Resolution | 7680×4320@60Hz | 7680×4320@60Hz | 5120×3200 |
| Standard Display Connectors | DP 1.4, HDMI 2.0b, DL-DVI | DP 1.4, HDMI 2.0b, DL-DVI | Dual-link DVI-I, HDMI 2.0, 3× DisplayPort 1.2 |
| Multi-Monitor | Yes | Yes | Yes |
| HDCP | 2.2 | 2.2 | Yes |
| **Graphics Card Dimensions** | | | |
| Height | 4.376″ | 4.376″ | 4.376″ |
| Length | 10.5″ | 10.5″ | 10.5″ |
| Width | 2-slot | 2-slot | 2-slot |
| **Thermal and Power Specs** | | | |
| Maximum GPU Temperature (°C) | 94 | 94 | 98 |
| Graphics Card Power (W) | 150 | 180 | 145 |
| Recommended System Power (W) | 500 | 500 | 500 |
| Supplementary Power Connectors | 8-pin | 8-pin | 8-pin |
While the GTX 1070 uses the same GDDR5 memory type as the GTX 970, it doubles the memory size, adds 1 Gbps of memory speed and bumps up the memory bandwidth. The GTX 1070 also supports the same 8K maximum resolution as the GTX 1080. Across these three reference editions, the dimensions are identical.
Naturally, the GTX 1070 uses the same 16nm FinFET process with 7.2 billion transistors. The GeForce GTX 1070 has 1920 CUDA cores, 120 texture units and 64 ROPs. The Founders Edition has a base clock of 1506 MHz and a boost clock of 1683 MHz. Its 8GB of GDDR5 memory sits on a 256-bit interface, and its TDP is 150 watts. Just like the GTX 1080, this card draws extra power from a single eight-pin power connector.
Founders Edition design
The Founders Edition design is identical across the two Pascal cards. There's really nothing more to add.
Output and connections
As for video outputs, Nvidia still provides reference support for dual-link DVI. The rest comprises three DisplayPorts and one HDMI port. It provides SLI support and has a backplate, same as the GTX 1080.
Nvidia didn’t make any changes to the internal design. The lateral fan scoops in air and blows it through the channels formed by the heatsink fins, closed off at the top with plexiglass, so the hot air exhausts through the rear I/O slot. When the GTX 1080 was dismantled, it did not have any copper heatpipes.
The graphics cards here are benchmarked at their default base/boost clocks, keeping the fan speed on automatic. The CPU is locked to its base spec, with Turbo, auto-overclocking and power-saving/thermal-throttling features turned off. When overclocking, the fans are manually set between 75% and 100%, while keeping the power limit at 100% and the temperature limit at maximum. I also disable the onboard GPU via the BIOS to avoid any weird issues (not that it has caused any problems).
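Clock, fan and power settings applied through a tool like MSI Afterburner can also be cross-checked from the command line. As a minimal sketch, assuming the NVIDIA driver's `nvidia-smi` utility is installed and on the PATH, a quick query reports the live clocks, fan speed and power draw during a benchmark run:

```shell
#!/bin/sh
# Sketch: sanity-check the GPU's reported state with nvidia-smi.
# The query fields below come from nvidia-smi's documented --query-gpu list.
if command -v nvidia-smi >/dev/null 2>&1; then
  # One CSV line per GPU: current graphics clock, memory clock, fan %, watts.
  nvidia-smi --query-gpu=clocks.gr,clocks.mem,fan.speed,power.draw \
             --format=csv,noheader
  STATUS=ok
else
  # No NVIDIA driver on this machine; nothing to measure.
  echo "nvidia-smi not found; skipping clock check"
  STATUS=skipped
fi
```

Running this alongside a benchmark makes it easy to confirm whether the card is actually holding its boost clock or throttling under load.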
Sure, it’s a 16nm FinFET design with a low TDP. But the surface area of the core is naturally smaller, so there is less area for the GPU cooler to draw heat from than on the older 28nm parts. Add to that the lack of heatpipes, and the fact that the Founders Edition costs more than some non-reference designs from most of its AIC partners. Unfortunately, Nvidia used a different Torx screw, so I cannot look at the FE’s GPU cooler from the inside. Those who purchase an aftermarket waterblock will need to carefully use pliers if they don’t have the right tool for the job. Argh!
Clearly, Nvidia needs to rethink its heat-dissipation design, especially since it charges a premium for the reference edition.[divider]
Are you wondering why this review took so long to publish? It’s because I ran into a problem with the Ashes of the Singularity benchmark, and the GTX 1080 needed to be recalled for a side-by-side comparison.
Review cards are returned so that they can be given to other reviewers or used for promo events. While I had the GTX 1080, I couldn’t compare it with the previous generation, so it was difficult to advise properly. I’ve purchased Ashes of the Singularity, DOOM and Hitman, and updated Rise of the Tomb Raider, which only recently added DX12 support. Windows 10 64-bit was also updated, as was the 3DMark benchmark. These updates are what made me take this route.
Unfortunately, I noticed the problem with AOTS when I compared the results with the earlier GTX 1080 review: the final graph seemed wrong. I requested that Nvidia send the GTX 1080 back so that I could do a side-by-side comparison. By the time it arrived, Ashes of the Singularity had rolled out at least one update which included a new setting. Nvidia’s WHQL 369.25 driver does not work with the GTX 1070, hence both Pascal cards’ graphs were rebuilt to reflect performance using the .39 driver.
I did not include Ashes of the Singularity. PC Perspective highlighted this issue, where it’s assumed that Stardock is capping the FPS below the monitor’s refresh rate. While this is understandable in actual gameplay to prevent screen tearing, it’s strange to see the studio give the same treatment to benchmarks. I am merely condensing this issue to the basics; check out PCPER’s analysis, since it covers more than what I’ve experienced. Switching VSync from ‘Application Control’ to ‘Off’ in Nvidia’s control panel does not help.
It’s frustrating. I wanted to compare the GTX 1000 series with the 900 series (and the Radeon 300 and 400 series cards). I am not sure how others are doing it, because the framerates are locked below the screen’s refresh rate. Even when reviewing on a 144 Hz panel, there’s no way of telling whether something is holding the graphics cards back, or exaggerating the actual framerates. I don’t have a higher-refresh-rate monitor to check whether this is the case for me, so with limited information and resources, ditching AOTS for now seems reasonable.
If you are curious about Ashes of the Singularity’s capped results, here you go. Take them with a pinch of salt:
DX 11 Test
Weird, right? The funny part is that the benchmark crashes if the in-game VSync is turned on, and there is a good amount of screen tearing when ‘Enable 3rd party overlay’ is checked.
My workload includes a decent number of DX11 and DX12 games, with DOOM highlighting OpenGL 4.5, along with computational benchmarks and overclocking results. It’s highly unlikely I will include Ashes of the Singularity in the future unless Stardock rolls back this change. I would like to include Vulkan API games once they start to roll out.
Another point I’ve noticed is that the Pascal cards’ boost clock gets a slight bump over its rated/overclocked value under load. This typically happens when the power target is raised to 100% via MSI Afterburner. I’ve also noticed that the variation between multiple benchmark runs with the same in-game settings is more ‘stable’ once the fan speed is set to manual and increased to at least 85%. This happens even at the default base/boost clocks when the power target is up.
I’ve been asked: why not AMD? Here’s why.
I don’t get AMD Radeon graphics cards, despite trying to get them for six years. I did get a call from them some weeks ago, but that was to review the Radeon Fury at a time when the RX 480 was launching. They don’t pick up calls. They don’t reply to emails. Clearly, it’s an agency with a one-way line that has no Reply button. Implying at the beginning of the call that I only do Nvidia reviews was not appreciated at all. Nothing is stopping them, or AMD, from sending over processors and GPUs. One of their partners barely gets a few cards to sell even though they’re generally regarded as value for money, while another only gets low-end cards, and only once the two-generation-old stock is sold out.
I said that I could include all the Radeon 300 series and Fury data in the graphs so that it would be a balanced analysis, and start with fully written Radeon 4xx series reviews. All they had to do was send the cards together. All attempts to contact them after that went nowhere, because they don’t reply, don’t pick up the call, and don’t call back.