Elon Musk’s OpenAI will teach AI to talk using Reddit
NVIDIA CEO Jen-Hsun Huang has delivered the first DGX-1 supercomputer to Elon Musk’s OpenAI nonprofit, and the researchers already have a project in mind. Believe it or not, they want to teach AI to chat by reading through Reddit forums, according to MIT Technology Review. That seems dicey given the site’s countless bizarre forums, but its sheer size is what attracted the team. “Deep learning is a very special class of models because as you scale [them] up, they always work better,” says OpenAI researcher Andrej Karpathy.
The DGX-1 is a $129,000, desktop-sized box with eight NVIDIA Tesla P100 GPUs, 7TB of SSD storage and two Xeon processors. That nets 170 teraflops of performance, equivalent to around 250 servers. Moreover, the parallel architecture is ideal for OpenAI’s deep learning algorithms. NVIDIA said it cost around $2 billion to develop.
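That 170-teraflop number is consistent with eight P100s’ aggregate half-precision throughput — the P100’s FP16 peak is roughly 21.2 teraflops per GPU (my figure, not from the article). A quick back-of-envelope check:

```python
# Sanity check on the DGX-1's quoted 170 teraflops, assuming the
# figure refers to aggregate FP16 (half-precision) throughput.
P100_FP16_TFLOPS = 21.2   # per-GPU half-precision peak (assumed)
gpus = 8

total = P100_FP16_TFLOPS * gpus
print(f"{total:.0f} TFLOPS")  # ≈ 170
```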
OpenAI, founded to ensure that machines don’t destroy us, will use the DGX-1’s extra power to read the nearly 2 billion Reddit comments in months, rather than years. It’ll also help it learn much faster and speak (or swear) more accurately. “You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact,” Karpathy said.
Best of all, the researchers won’t need to do much to improve areas like language learning and image recognition. “We won’t need to write any new code, we’ll take our existing code and we’ll just increase the size of the model,” says OpenAI scientist Ilya Sutskever. “And we’ll get much better results than we have right now.” All of that sounds interesting, but I’m not sure how I feel about machines generating hot takes in milliseconds.
Source: NVIDIA, MIT Technology Review
Thin gaming laptops will run VR with NVIDIA’s new chip
NVIDIA has just taken the wraps off a trio of laptop GPUs based on its new “Pascal” chip architecture: the GeForce GTX 1060M, the 1070M and the 1080M. While the 1080M is by far the most impressive, it’s the humble 1060M that could make the biggest impact on the market. Why? Because it facilitates using a virtual reality headset like the HTC Vive or the Oculus Rift on a reasonably small laptop.
The 1060M essentially replaces the 970M, inasmuch as it’ll fit into the same sort of products as the older chip. All of the technologies launched for the Pascal architecture, including VRWorks and Ansel, are supported on the 1060M, which has 1,280 CUDA cores, 6GB of 8Gbps memory on a 192-bit bus and a base clock speed of 1,404MHz. The end result of these specs is a “VR-ready” chip that’ll fit in laptops as svelte as 18mm, like the Razer Blade.
What exactly “VR ready” means nowadays is a bit of a mystery. Oculus and HTC released their headsets targeting the desktop GTX 980, but both AMD and NVIDIA have since released cheaper cards (the RX 480 and the GTX 1060, respectively) that both claim to play nice with VR.
At a launch event in the UK, NVIDIA showed off the 1060M, 1070M and 1080M paired with various VR games. But while the more powerful chips were demoed with graphically intense titles, for the 1060M NVIDIA chose The Thrill of the Fight, a fun but relatively undemanding game that requires only a desktop GTX 970. The MSI GS43 (an updated GS40 with a 1060M GPU inside) handled it perfectly. For regular gaming, NVIDIA claims it’ll do just fine. The same MSI GS43 hit 96.4FPS in Doom (1080p, ultra settings), 51.4FPS in The Witcher 3 (1080p, maxed settings, HairWorks disabled) and 71.5FPS in Tomb Raider (1080p, very high). Older games play nice with higher resolutions, with BioShock Infinite hitting 72.4FPS at 1440p and Middle-earth: Shadow of Mordor hitting 62.1FPS, both with “ultra” settings.
| | GTX 1060M | GTX 1060 (desktop) | GTX 980 (desktop) |
| CUDA cores | 1,280 | 1,280 | 2,048 |
| Base clock | 1,404MHz | 1,506MHz | 1,126MHz |
| Boost clock | 1,670MHz | 1,708MHz | 1,216MHz |
| Memory | 6GB GDDR5* | 6GB GDDR5 | 4GB GDDR5 |
| Memory speed | 8Gbps | 8Gbps | 7Gbps |
| Memory bandwidth | 192GB/sec | 192GB/sec | 224GB/sec |
*Up to
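Those bandwidth figures follow directly from bus width and per-pin memory speed: GB/s is bus width in bits times the rate in Gbps, divided by 8 bits per byte. (The 192-bit bus is stated above for the 1060M; the 256-bit figure is the desktop GTX 980’s, which the article doesn’t spell out.)

```python
# Deriving the table's memory-bandwidth column from first principles:
# GB/s = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte.
def bandwidth_gbs(bus_width_bits, rate_gbps):
    return bus_width_bits * rate_gbps / 8

print(bandwidth_gbs(192, 8))  # 1060M and desktop 1060: 192.0 GB/s
print(bandwidth_gbs(256, 7))  # GTX 980: 224.0 GB/s
```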
When pressed, representatives at the event said the 1060M is VR-ready, and agreed that a stable “90 frames-per-second is a must for VR,” but “you might need to play around with the settings, as you would on any PC game, in order to reach that.” That suggests that while, yes, the 1060M has the power to run the current crop of VR games, don’t expect to be playing with ultra graphics settings. NVIDIA also cautions that the “VR-ready” status only applies when you’re plugged into an outlet — running from the battery, the chip won’t reach the necessary clock speeds. Digging into NVIDIA’s official benchmark sheet (which doesn’t compare the two directly), it seems that the 1060M is basically on par with the GTX 980, which makes the decision to restrict demos to such a forgiving game a strange one.
Putting minutiae to one side, NVIDIA’s new laptop GPUs look like winners. The 1080M almost kills the need for laptops with desktop chips in them (although I’m sure that market will continue). It’ll support SLI, and even on its own it can maintain 60FPS in 4K for all but the most demanding titles, as well as 120Hz gaming at 1080p. The 1070M will be the go-to option for gamers without thousands of dollars to spare, sliding into any laptop that currently houses a 980M — think something like the ASUS ROG G752, the Acer Predator 15 or the Origin EON15-X. But it’s the 1060M that offers the most exciting proposition, to me at least. Laptops like the Razer Blade, the MSI GS40 Phantom and the Gigabyte Aorus X3 Plus are already combining portability with legitimate gaming chops. Now newer versions will also be able to support VR.
MSI and Origin PC use NVIDIA’s desktop-grade laptop graphics
PC makers aren’t wasting any time implementing NVIDIA’s GTX 10 series laptop graphics in their lineups. Both MSI and Origin PC have revealed that their higher-end gaming portables will be among the first to pack the much faster, desktop-class graphics. At MSI, the big deal is that it’s not just big, chunky systems that are getting a refresh — even relatively slim laptops like the GS and GE series will have VR-worthy graphics thanks to the GTX 1060M. You’ll have to move up to the GT range to get the 1070M or 1080M (up to two of them in Titan SLI variants like the GT73VR above), but that’s a solid baseline.
The upgraded MSI rigs should be available now, although they won’t come cheap. The experience starts with the $1,599 GS43VR Phantom Pro and its 14-inch 1080p screen, GTX 1060M, 2.6GHz Core i7, 16GB of RAM and 1TB hard drive, and you can spend as much as $5,099 if you want an 18-inch GT83VR Titan SLI with dual GTX 1080Ms, 2.9GHz Core i7, two 512GB SSDs, a 1TB hard drive and 64GB (!) of RAM.
Origin PC, meanwhile, is focused strictly on updating its beefy EON15 and EON17 machines. Pricing will vary depending on your configuration, but you can get up to a GTX 1070M in the 15-inch EON15-X (below), a 1080M in the 17-inch EON17-X, and dual 1080Ms in the EON17-SLX. They can all carry up to a 4K display, 64GB of RAM, dual 1TB SSDs and a desktop-level 4GHz Core i7 processor. None of them are svelte, then, but they might be what you’re looking for if you can’t imagine giving up any significant performance on the road.
And it’s important to stress that these aren’t the only two vendors lining up. Heavyweights like Acer, ASUS, HP, Lenovo and Razer have also committed to NVIDIA’s new laptop video tech, giving you plenty of choices.

Source: MSI, Origin PC
NVIDIA brings desktop-class graphics to laptops
With the GeForce GTX 1080, NVIDIA pushed the boundaries of what a $600 graphics card can do. That flagship was joined by the GTX 1070 and GTX 1060, two lower-power cards based on the same 16nm Pascal architecture at much more affordable prices. Now it’s bringing out mobile versions of those cards that match their desktop counterparts in almost every area — including being VR-ready.
That’s not hyperbole. The top-of-the-line 1080M has 2,560 CUDA cores and 8GB of 10Gbps GDDR5X memory — exactly the same as the desktop chip. The only difference is clock speed: the laptop part’s base clock is 1,556MHz, while the desktop version’s is 1,607MHz. The two share the same boost clock (1,733MHz), though, and both have access to all the new technology introduced with the Pascal architecture. That means simultaneous multi-projection, VRWorks, Ansel and the rest.
If you want an idea of what those specs translate to in real-world performance, how’s this: paired with a Core i7-6700HQ (a quad-core 2.6GHz chip with a 3.5GHz turbo), the 1080M hits 126FPS in Mirror’s Edge Catalyst, 147FPS in Overwatch, 145FPS in Doom, 130FPS in Metro Last Light and 125FPS in Rise of the Tomb Raider — all at 1080p with “ultra” settings. NVIDIA is really pushing 120Hz gaming, and many of the first crop of Pascal laptops will have 120Hz G-Sync displays.
4K gaming, too, is more than possible. At 4K with “high” settings the same setup can push 89FPS on Overwatch, 70FPS with Doom, and 62FPS with Metro Last Light (according to NVIDIA). Only Mirror’s Edge Catalyst and Rise of the Tomb Raider fall short of 60FPS, both clocking in at a very playable 52FPS. At the chip’s UK unveil, NVIDIA showed the new Gears of War playing in 4K in real-time, and there were absolutely no visible frame drops. With figures like that, it goes without saying that VR will be no problem for the 1080M. The desktop GTX 980 is the benchmark for both the HTC Vive and Oculus Rift, and the 1080M blows it away. If you’re looking for more performance, the 1080M supports overclocking of course — NVIDIA suggests as high as 300MHz — and you can expect laptops sporting two in an SLI configuration soon.
The major drawback for the 1080M is power. We don’t know its exact TDP yet, but given that the near-identical desktop version runs at 180W, you’d imagine it’s at least 150W. NVIDIA has tech that counters that heavy power load when you’re not plugged in, of course. Chief among these is BatteryBoost, which lets you set a framerate cap (e.g. 30FPS) and downclocks the GPU appropriately to save power — if your card can push 147FPS when plugged in, that’s a fair amount of power saved. Whatever the battery savings, though, it won’t change the fact that the 1080M is only going to slide into big laptops.
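BatteryBoost itself lives in NVIDIA’s driver and its internals aren’t public, but the basic idea of a frame-rate cap can be sketched as a simple pacing loop — a hypothetical illustration, not NVIDIA’s implementation:

```python
import time

# Hypothetical sketch of a frame-rate cap in the spirit of BatteryBoost:
# render no faster than the target FPS, so the GPU spends the leftover
# time of each frame idle at lower clocks instead of racing ahead.
def run_capped(render_frame, target_fps=30, frames=3):
    frame_budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.monotonic()
        render_frame()                      # stand-in for real GPU work
        elapsed = time.monotonic() - start
        if elapsed < frame_budget:          # finished early: idle out the rest
            time.sleep(frame_budget - elapsed)

run_capped(lambda: None)  # three capped "frames" take about 0.1 seconds
```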
That’s fine for those already used to carrying around behemoths on the go, but plenty of gamers prefer something more portable. Enter the 1070M. NVIDIA says this chip will fit into any chassis that currently handles the 980M, which covers a lot of laptops.
Just like the 1080M, the 1070M matches its desktop sibling in many ways. You’ve actually got slightly more in the way of CUDA cores — 2,048 versus the desktop’s 1,920 — but again they’re clocked slower (1,442MHz vs. 1,506MHz). Memory is the same — 8GB of 8Gbps GDDR5 — and it too benefits from both the Pascal architecture itself and the new software features that come with it.
| | GTX 1080 (desktop) | GTX 1080M | GTX 1070 (desktop) | GTX 1070M |
| CUDA cores | 2,560 | 2,560 | 1,920 | 2,048 |
| Base clock | 1,607MHz | 1,556MHz | 1,506MHz | 1,442MHz |
| Boost clock | 1,733MHz | 1,733MHz | 1,683MHz | 1,645MHz |
| Memory | 8GB GDDR5X | 8GB GDDR5X | 8GB GDDR5 | 8GB GDDR5 |
| Memory speed | 10Gbps | 10Gbps | 8Gbps | 8Gbps |
| Memory bandwidth | 320GB/sec | 320GB/sec | 256GB/sec | 256GB/sec |
When faced off against the desktop 1070, the 1070M holds its own. In nearly every test we saw, it got within a couple of percentage points of the desktop card. We’re talking 77FPS in The Witcher 3 (1080p, maxed settings, no HairWorks) vs. 79.7FPS on the 1070; 76.2FPS in The Division (1080p, ultra) vs. 76.6FPS; and 64.4FPS in Crysis 3 (1080p, very high) vs. 66.4FPS. The one outlier was Grand Theft Auto V, which dropped to 65.3FPS vs. 73.7FPS on the desktop 1070. 4K gaming is a stretch on the desktop 1070, and that carries over here, but this card is more than VR-ready. NVIDIA says it’ll support factory overclocking on the 1070M soon, so you may see laptops offering a little more grunt “in a couple of months.”
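For a concrete sense of those margins, here’s the laptop-versus-desktop gap worked out from the FPS figures quoted above:

```python
# Relative gap between the 1070M and the desktop 1070,
# from the quoted benchmark figures (laptop FPS, desktop FPS).
pairs = {
    "The Witcher 3": (77.0, 79.7),
    "The Division": (76.2, 76.6),
    "Crysis 3": (64.4, 66.4),
    "Grand Theft Auto V": (65.3, 73.7),
}
gaps = {game: round((1 - m / d) * 100, 1) for game, (m, d) in pairs.items()}
for game, gap in gaps.items():
    print(f"{game}: {gap}% slower than the desktop 1070")
```

Three of the four land between 0.5 and 3.4 percent behind the desktop card, with GTA V the 11.4 percent outlier — exactly the pattern described above.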
Rounding off the lineup is the 1060M, the mobile version of NVIDIA’s $249 “budget” VR-ready card. It’s something of an exception to the rule here. Yes, it offers 1,280 CUDA cores and 6GB of 8Gbps GDDR5 memory, equal to the desktop 1060. But at the lower end of the range, the lower clock (1,404MHz vs. 1,506MHz) hurts performance quite a bit more. In side-by-side comparisons, NVIDIA’s benchmarks suggest you’ll get within 10 percent or so of the desktop card. That’s not to say the 1060M is a slouch. For traditional gaming you won’t hit 60FPS at 1080p in every game without thinking about settings, but if a game plays well on a desktop GTX 980, it’s probably a safe bet that the 1060M can handle it. That’s insanely impressive when you consider that the 1060M will fit into the same chassis as the 970M — think “ultra portable” gaming laptops.
| | GTX 1060M | GTX 1060 (desktop) | GTX 980 (desktop) |
| CUDA cores | 1,280 | 1,280 | 2,048 |
| Base clock | 1,404MHz | 1,506MHz | 1,126MHz |
| Boost clock | 1,670MHz | 1,708MHz | 1,216MHz |
| Memory | 6GB GDDR5* | 6GB GDDR5 | 4GB GDDR5 |
| Memory speed | 8Gbps | 8Gbps | 7Gbps |
| Memory bandwidth | 192GB/sec | 192GB/sec | 224GB/sec |
*Up to
In reality, the 10-percent gap between the 1060 and the 1060M probably makes it slightly slower than the GTX 980, but the difference is almost negligible. I wasn’t able to push the 1060M too hard on the “VR ready” promise — you can read about the demo and why the 1060M matters in a separate article — but the demo I had was solid. And really, being able to plug an Oculus into something as slim as a Razer Blade was unthinkable a few months ago, so it’s probably best not to complain.
Acer, Alienware, Asus, Clevo, EVGA, HP, Gigabyte, Lenovo, MSI, Origin, Razer, Sager and XMG are just some of the OEMs signed up to make laptops with the new Pascal chips. Many will announce updated and all-new models today, while some might hold off a while. But expect lots of super-powerful, VR-ready gaming laptops very soon.
Adobe’s virtual oil paint adds texture to digital painting
While there are plenty of apps that can realistically emulate the look of brushstrokes on a 2D digital canvas (and even some in 3D space), none have accurately simulated the way a paintbrush actually behaves in a realistic, 3D environment. Now a new collaboration between Adobe and NVIDIA called Project Wetbrush claims to do just that by simulating the movements and interactions of each virtual bristle and rendering the results in three-dimensional virtual paint.
The simulated paint in Project Wetbrush actually mimics everything from the viscosity, color mixing properties and drying time of real world oil paint. So, even when using a stylus to paint on a touchscreen, the end result is a 3D virtual object with layers of thickness, depth and texture. Since a richly textured oil painting needs good lighting to be fully appreciated, NVIDIA contributed additional processing power to render the scenes.
The Project Wetbrush team claims their simulation is the first of its kind, and the plan is to observe digital painters and apply NVIDIA’s deep learning expertise to add realism to synthesized effects and filters like Prisma in the future. Imagine, if you will, taking a photo of a Parisian cafe at night, running it through an app and coming back with a realistic, 3D-printable approximation of a Van Gogh.
Report: Nintendo NX is a tablet with detachable controllers
Remember those early reports that described Nintendo’s next game console as a TV / portable hybrid device? According to Eurogamer, they were right on the nose. Eurogamer sources claim that the Nintendo NX is a handheld game console with detachable controllers, a TV base station and NVIDIA Tegra graphics. In other words, it sounds like a standalone Wii U gamepad dialed up to 11.
Specifically, Eurogamer describes the NX as a powerful, portable game console with its own display and detachable controllers on either side — sort of like a mix between the Wii U gamepad and Razer’s defunct Edge tablet. The detached controllers can apparently be used for multiplayer gaming (one side for each player) or set aside for a more touch-focused tablet experience.
At home, users will be able to plug the device into a docking station and play games on the big screen, but the outlet’s sources say the console will be marketed with the hook of “being able to take your games with you on the go,” basically unifying Nintendo’s home and portable markets with one device.

If true, however, the report reveals that Nintendo may, once again, be bringing a last-gen console to a current-gen market. Eurogamer’s sources say that Nintendo is sacrificing power for portability, claiming that development kits use the NVIDIA Shield TV’s Tegra X1. NVIDIA’s mobile super-chip certainly isn’t a slouch when it comes to power — but it’s not going to be able to keep pace with the PlayStation 4 Neo, either.
We’re taking the report with a side of sodium — but the console Eurogamer describes does sound familiar. The proposed portable meshes well with previous rumors and Nintendo patents that describe a console capable of using supplementary processors. It also echoes reports that the NX would favor game cartridges over discs, and reconfirms Nintendo’s own claims that the device won’t be running Android, despite its mobile GPU.
Nintendo says it can’t respond to “rumors and speculation,” as usual — but Eurogamer claims we’ll know more in September, when sources say the NX will be officially revealed to the public. We’re looking forward to it.
Source: Eurogamer, Digital Foundry
NVIDIA’s latest pro video cards help you livestream VR video
Did you think NVIDIA’s newest Titan X was a monster of a video card? You haven’t seen anything yet. The GPU maker has unveiled its latest Quadro workstation cards, the Pascal-based P5000 and P6000, and they both pack power that makes your gaming-grade card seem modest. The P6000 (above) is billed as the fastest graphics card to date, and for good reason. It has even more processing cores than the Titan X (3,840 versus 3,584) and twice as much memory — a whopping 24GB of RAM. The P5000 is closer to the GTX 1080 in performance with “just” 2,560 cores, but its 16GB of RAM handily bests the gaming card’s 8GB. If you’re working with massive amounts of 3D data, these are likely the boards you want.
However, their real party trick is more a matter of software. Both the P5000 and P6000 can take advantage of a new VRWorks 360 Video developer kit which, as the name suggests, helps produce virtual reality footage. They can capture, stitch and livestream VR video from up to 32 cameras in real time, which could make them ideal for that VR concert feed.
There’s only one catch: pricing. NVIDIA is shipping both Quadro cards in October, but it hasn’t said how much either of them will cost. Given that the Titan X costs $1,200 and doesn’t pack as much video memory as either of these GPUs, it’s safe to presume this hardware will cost considerably more. These designs are meant for pros who can easily justify the price through the hours they’ll save finishing big projects.
Source: NVIDIA (1), (2)
NVIDIA’s GeForce GTX 1060 gives you gaming power on a budget
After debuting the fastest high-end and mid-range video cards ever seen, the GTX 1080 and 1070, we expected a lot from NVIDIA’s new lower-tier entry, the $249 GeForce GTX 1060. And the stakes were raised even higher after AMD launched the Radeon RX 480, a $200 GPU that’s fast enough to power VR headsets (and manage some decent 1440p gaming). NVIDIA claims the GTX 1060 is even faster than the GTX 980, its premium video card from 2014. That says quite a bit about how far we’ve come in the GPU world: You no longer have to break the bank for a decent amount of gaming muscle.
As with the GTX 1080 and 1070, I tested the slightly more expensive ($299) Founders Edition of the GTX 1060. While the previous two cards looked practically identical — they’re both beefy 10.5-inch-long dual-slot GPUs — the GTX 1060 is a bit shorter at 9.8 inches. They all share the same elaborate metallic case and fan design, though, along with a premium-feeling build quality. On the back, there are three DisplayPort slots, an HDMI port and a DVI connection.
The GeForce GTX 1060 features clock speeds between 1.5GHz and 1.7GHz (in boost mode), just like the GTX 1070, and there’s also 6GB of GDDR5 RAM. Because of its slightly shorter frame, and the fact that it only needs a 6-pin power connector, the GTX 1060 might be a useful upgrade for people with tight cases and less capable power supplies. If you’re really in that spot, though, maybe just hold out until you can revamp your entire system.

| | 3DMark (Fire Strike) | 3DMark 11 |
| NVIDIA GeForce GTX 1060 | Standard 10,890 / Extreme 5,715 / Ultra 2,953 | X5,698 |
| NVIDIA GeForce GTX 1070 | Standard 13,918 / Extreme 7,703 / Ultra 4,110 | X7,778 |
| NVIDIA GeForce GTX 1080 | Standard 15,859 / Extreme 9,316 / Ultra 5,021 | X9,423 |
| AMD R9 Fury X | Standard 13,337 / Extreme 7,249 / Ultra 3,899 | X6,457 |
| AMD Radeon RX 480 | Standard 10,279 / Extreme 5,146 / Ultra 2,688 | X4,588 |
Now on to those benchmarks: The GTX 1060 performed pretty much as I expected on my system (a 4GHz Core i7-4790K CPU, 16GB of 2,400MHz DDR3 RAM and a 512GB Crucial MX100 SSD on an ASUS Z97-A motherboard). It’s noticeably slower than the 1070 and slightly faster than the AMD RX 480 with 8GB of RAM. Unfortunately, I didn’t have a GTX 980 on hand to directly test NVIDIA’s claims about the 1060 being faster, but 3DMark comparisons against similarly specced systems showed that the cards were about as fast.
4K benchmarks
| | Witcher 3 | Hitman | Doom |
| NVIDIA GeForce GTX 1060 | 24 | 23 | 29 |
| NVIDIA GeForce GTX 1070 | 38 | 35 | 48 |
| NVIDIA GeForce GTX 1080 | 43 | 48 | N/A |
| AMD R9 Fury X | 35 | 38 | N/A |
| AMD Radeon RX 480 | 20 | 25 | 35 |
Average frames-per-second performance in 4K with all graphics set to maximum and NVIDIA HairWorks turned off.
Unsurprisingly, the GTX 1060 isn’t much of a 4K contender. That’s a resolution that even the GTX 1070 struggled with, and honestly I wouldn’t even want to run it on the 1080. Still, it’s worth comparing the GTX 1060’s performance (if only to future-proof our benchmarks a bit). Once again, it’s slightly faster than the RX 480, but that’s kind of a moot point, since both cards delivered unplayable performance.

1440p benchmarks
| | Witcher 3 | Hitman | Doom | Overwatch |
| NVIDIA GeForce GTX 1060 | 44 | 44 | 58 | 60 |
| NVIDIA GeForce GTX 1070 | 60 | 60 | 55-65 | 60 |
| NVIDIA GeForce GTX 1080 | N/A | N/A | N/A | 60 |
| AMD R9 Fury X | N/A | 70 | N/A | 60 |
| AMD Radeon RX 480 | 43 | 45 | 58 | 60 |
Average frames-per-second performance in 1440p with all graphics set to maximum and NVIDIA HairWorks turned off.
When it comes to 1440p (2,560 by 1,440 pixels), my preferred gaming resolution, the 1060 was about twice as fast as it was in 4K. In some games, like Doom and Overwatch, it even managed to reach 60 frames per second, which is the gold standard for smooth performance. It was about on par with the RX 480, which came as a surprise given the 1060’s slight 3DMark lead.
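A quick check of the GTX 1060’s entries in the two benchmark tables bears out that rough 2x figure:

```python
# GTX 1060: 1440p vs. 4K frame rates, from the tables above.
fps_4k = {"Witcher 3": 24, "Hitman": 23, "Doom": 29}
fps_1440p = {"Witcher 3": 44, "Hitman": 44, "Doom": 58}

speedups = {game: round(fps_1440p[game] / fps_4k[game], 2) for game in fps_4k}
print(speedups)  # roughly 2x across the board
```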
Naturally, the GTX 1060 had no problems reaching 60 fps and beyond at 1080p in just about every game I threw at it. Given the amount of power it holds, that’s no surprise. It also delivered a smooth VR experience with both the Oculus Rift and HTC Vive. There were no signs of slowdown either as I flew around space in Eve: Valkyrie or had shootouts in Hover Junkers.
While the 1060 generally outpaced AMD’s $240 RX 480 (8GB RAM version), it would likely perform similarly against the $200 RX 480 (4GB RAM) variant. Benchmarks comparing the 4GB and 8GB RAM versions of AMD’s card show very little difference between the two. So if you’re looking for the most bang for your buck, the RX 480 is still your best bet. You can also add in another RX 480 down the line for even more performance, whereas NVIDIA has removed its multi-card technology (SLI) from the GTX 1060 entirely.

And if the RX 480 doesn’t cut it for you, it’s probably worth saving up and getting a GTX 1070 instead of NVIDIA’s budget GPU. GTX 1070 cards retail for around $379, and they’ll offer significantly better performance than the GTX 1060. The 1070 also supports SLI, so you can throw in another card in a year or two as games become more demanding.
Overall, the GTX 1060 is exactly what NVIDIA needed to compete against AMD’s revolutionary RX 480. But its pricing makes it a tough sell, since the 480 is a better deal and NVIDIA’s own GTX 1070 isn’t that much more expensive. Once GTX 1060 cards come down in price, though, they’ll become much more compelling.
NVIDIA’s new top-end graphics card is the $1,200 Titan X
If you recently bought a $599 NVIDIA GTX 1080 in order to have the fastest rig around, I have bad news. NVIDIA has revealed the latest Titan X, a graphics card with 12GB of GDDR5X memory and 3,584 cores running at 1.53GHz, yielding an absurd 11 teraflops of performance. That easily bests the 8.9 teraflops of the GTX 1080, which itself put the last-gen Titan X to shame. You probably won’t feel too bad, however, when we tell you that the new card has a price tag of $1,200 — double that of its now-second-best sibling.
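That 11-teraflop figure follows from the published specs: peak single-precision throughput is roughly CUDA cores times clock speed times two FLOPs per core per cycle (a fused multiply-add counts as two operations):

```python
# Rough check of the Titan X's quoted 11 teraflops:
# peak FP32 ≈ cores x clock (GHz) x 2 FLOPs per core per cycle.
cores = 3584
clock_ghz = 1.53

tflops = cores * clock_ghz * 2 / 1000
print(f"{tflops:.2f} TFLOPS")  # ≈ 10.97, which NVIDIA rounds to 11
```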
The Titan X, based on the company’s new Pascal GP102 GPU, has 12 billion transistors and runs at 250W, meaning it burns around 40 percent more power than the GTX 1080. Style-wise, it hews closely to the triangular, faceted form of the GTX 1080, but sports darker colors. It features DisplayPort 1.4, HDMI 2.0b and DL-DVI ports, though the company hasn’t yet detailed the configuration. NVIDIA has now unveiled four cards (the GTX 1060, 1070, 1080 and Titan X) in just over two months, which is a pretty frenetic launch rate.
To hammer home the point about brute horsepower, NVIDIA CEO Jen-Hsun Huang did a surprise unveil of the Titan X at a meetup of artificial intelligence experts at Stanford University. That’s fitting, because the company is starting to blur the line between its gaming cards and the Tesla GPU accelerators used for deep learning in servers and supercomputers. The card will go on sale August 2nd in North America and Europe for $1,200, but only on NVIDIA’s site and via “select system builders.”
Source: NVIDIA
NVIDIA’s ‘VR Funhouse’ carnival game just launched on Steam
A few months ago, NVIDIA showed off a new virtual reality tech demo designed to showcase what its new graphics technology could do for VR. Today, it’s releasing that demo to the public: VR Funhouse is a free, virtual reality carnival with collision-based haptic feedback, advanced physics simulation and a ton of other NVIDIA graphics technologies designed to make fire, hair, water and particles all look more real. All you need to play is an HTC Vive and a really, really powerful computer.
A high bar of entry is pretty normal for PC virtual reality right now, but VR Funhouse is aimed at machines with only the latest graphics technology. Specifically, NVIDIA recommends its GeForce GTX 1080, though users with a GTX 980 Ti, Titan X, 1060 or 1070 should be able to run the game on low-quality settings. That’s a lot to ask for a carnival game. That said, VR Funhouse is less important as a game than as a platform to show developers what NVIDIA’s GameWorks technologies can add to the VR experience — which is why the title will be open-sourced later this summer. Either way, if your PC has the specs, feel free to check out NVIDIA’s carnival. VR Funhouse launches later today on Steam.
Source: Steam, NVIDIA (1), (2)