That death knell AMD has been ringing for DirectX? Microsoft’s having none of it. The software giant is now teasing the next version of the Windows graphics API, inviting developers to join it at GDC for the official reveal of DirectX 12. The splash page reveals little besides the version number and announcement time, but it does feature partner logos for Intel, Qualcomm, Nvidia and, of course, AMD. AMD’s disdain for the platform helped birth Project Mantle — a competing API that gives developers lower-level access to (and, as a result, more leverage over) PC graphics hardware. One of Microsoft’s GDC sessions suggests that something similar is in the works for its own development platform: “You asked us to bring you even closer to the metal… …so that you can squeeze every last drop of performance out of your PC, tablet, phone and console,” reads the description for one of the firm’s DirectX presentations. “Come learn our plans to deliver.”
It sure sounds similar, and indeed, it meshes well with recent rumors. Sources close to ExtremeTech say that while the two APIs will have different implementations, both should offer the same benefits. They also say that Microsoft’s “close to the metal” lower-level API is a relatively new project in Redmond, meaning it probably won’t muscle in on Mantle’s territory until sometime next year. Between that, and the fact that Microsoft has recently taken to tying DirectX upgrades to Windows upgrades, it’s possible that we might not see DirectX 12 in action until we’re installing Windows 9.
This new, low-end AMD graphics card’s meant for budget-conscious PC gamers, and maybe Steam Machines, too
Not every gamer has the desire or means to get the latest and greatest graphics hardware. Fret not, budget-minded PC aficionados, for AMD’s rolling out a new, more powerful low-end GPU that should suit your financial constraints. Called the Radeon R7 265, it brings twice the memory bandwidth of its predecessor, the R7 260X, which AMD claims translates into a 25 percent performance boost. It’ll cost $149 when it goes on sale in late February, and with its debut, AMD’s also dropping the cost of the aforementioned 260X to a scant $119.
Naturally, those meager price points will appeal to cost-conscious consumers, but AMD’s announcement could have an effect on Steam Machine OEMs, too. We saw AMD’s higher-end R9 graphics in several of the Steam Machines at CES, and we’ve been playing with a working iBuyPower prototype packing an R7 260X for a while now. So, it stands to reason that the 260X and 265 will prove awfully attractive options for manufacturers trying to hit the all-important sub-$500 price point needed to compete with other gaming consoles. And, who knows, maybe these new (relatively) inexpensive options will help drive down the prices of both more powerful cards and the GPUs being offered by AMD’s competition.
If you caught our recent coverage of the huge Star Swarm demo, you’ll know that AMD’s Mantle programming tool has already proven itself capable of radically transforming a real-time strategy game. But the console-inspired API has been claimed to deliver performance benefits in FPS games too, starting with Battlefield 4, and the first independent evidence of this is now starting to trickle out. AnandTech and HotHardware have used almost-final Mantle drivers to achieve frame-rate gains of at least 7-10 percent in BF4, rising to 30 percent with some configurations, by doing away with the need for Microsoft’s relatively inefficient DirectX drivers.
In general, it looks like systems with weaker CPUs stand to benefit the most, because Mantle uses the graphics processor in such a way as to reduce CPU bottlenecks. We’ll get a better idea of the size of the improvement once Mantle is released to the public and tested on a wider variety of systems, including laptops and desktops with low-end or integrated AMD GPUs, but nevertheless, these early results bode well for those who are trying to eke better frame rates out of older, cheaper or smaller gaming rigs.
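The pattern in these early tests can be illustrated with a toy model: a frame isn’t finished until both the CPU (game logic and draw-call submission) and the GPU (rendering) are done, so cutting CPU-side driver overhead mostly helps systems where the CPU is the slower half. The numbers below are hypothetical, chosen only to show the shape of the effect — they aren’t benchmarks from these reviews.

```python
# Toy model: a frame isn't done until both the CPU (game logic + draw-call
# submission) and the GPU (rendering) have finished their work.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical numbers: driver overhead dominates the CPU side.
gpu_ms = 10.0                        # GPU renders the frame in 10 ms
slow_cpu_ms, fast_cpu_ms = 20.0, 11.0

# Suppose a lower-level API cuts CPU submission cost by 40 percent.
def with_lower_level_api(cpu_ms):
    return cpu_ms * 0.6

slow_gain = fps(with_lower_level_api(slow_cpu_ms), gpu_ms) / fps(slow_cpu_ms, gpu_ms)
fast_gain = fps(with_lower_level_api(fast_cpu_ms), gpu_ms) / fps(fast_cpu_ms, gpu_ms)

print(round(slow_gain, 2))  # weak CPU: large speed-up
print(round(fast_gain, 2))  # fast CPU: frame becomes GPU-bound, little change
```

The same CPU-side saving yields a big jump on the slower processor and almost nothing on the faster one, which is exactly the skew the early benchmarks suggest.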
Not interested in buying a Steam Machine this year, but still want a tiny gaming PC? Never fear — CyberPowerPC has just released the Zeus Mini, its latest take on a conventional small computer with full-sized performance. The system is just 4.4 inches thick and 18 inches deep, but it has room for fast video cards like AMD’s R9 290 or NVIDIA’s GeForce GTX 780. You’ll also find a high-end AMD Kaveri or Intel Haswell processor inside, and there’s space for a large liquid cooling system if you insist on a silent rig. Zeus Mini prices start at $599 for a basic variant with a 3.7GHz AMD A10 chip and integrated graphics, but demanding players can shell out $1,479 for a flagship model with a 3.5GHz Core i7 and GTX 780 video.
It’s been a busy week for AMD news, what with the launch of the Kaveri APU and then our first real evidence of how the new Mantle drivers can impact on PC gaming. But now’s the time to kick back and check out some full reviews of Kaveri over at the specialist sites. We’ve rounded up some of the best articles after the break, and if you’re looking for brutally short executive summaries, we’ve got some of those for you too.
AnandTech — Based on a suite of traditional, real-world application benchmarks such as WinRAR, Kaveri usually struggled to match a Core i3 — except in those few applications that made good use of GPU compute via OpenCL. With games, on the other hand, Kaveri was usually better than a Core i7 in the more challenging scenarios, and you really should check out the site’s full frame-rate charts. The A10-7850K is actually able to play F1 2013 at max detail and 1080p resolution with a frame rate of 31fps, for example, versus 14fps from a much more expensive Core i7-4770K. Overall, AnandTech concluded that Kaveri could be an “ideal fit” for many people who aren’t power users but who like to indulge in a bit of gaming, but its reviewers also highlighted the fact that AMD has been tepid about supporting dual graphics for those who want to pair Kaveri with a Radeon R7-series graphics card (Kaveri also uses R7 graphics, so theoretically it should be possible to add the two GPUs together).
HotHardware — This site focused on the A8-7600, which can be configured to burn at 45W or 65W and is therefore aimed at small form factors (like HTPCs and Steambox-like gaming builds). In a number of synthetic graphics-focused benchmarks, such as 3DMark, this scaled-down processor was actually very close to (and sometimes better than) AMD’s flagship 95W chip from the previous generation (Richland), and also often better than any full-powered Haswell chip. Overall, despite it lagging behind Intel in single-threaded tests, HotHardware gave the A8-7600 its “Approved” badge.
ExtremeTech — This site spent a bit more time taking account of AMD’s new HSA technology. In its most practical sense, HSA is a fresh approach to GPU compute, but there is no mainstream software that makes use of it just yet. Instead, ExtremeTech ran a few niche HSA-enabled benchmarks to explore HSA’s potential, and its reviewers were pleasantly surprised: a JPEG decoding test showed that the A10-7850K was almost twice as fast as a Core i5-4670S, and even the A8-7600 was quicker than any of the Intel chips. A second test based on number-crunching within LibreOffice’s Calc spreadsheet application showed that the A10-7850K was about five times faster than the Core i5. Overall, this review concluded that, aside from its obvious gaming prowess, “Kaveri will only be competitive if developers implement the necessary optimizations for HSA,” and that pretty accurately sums up where AMD’s newest APU stands right now.
Grab a wearable, switch on the ol’ curved TV and fire up your favorite 3D printer. We came, we saw, we conquered and now we’re ready to distill it all for you in the form of some high-quality video content. We’re not going to suggest that it’ll replace the seemingly endless stream of posts we’ve churned out over the past week or so, but if you’ve got a cocktail party full of guests you need to impress tonight, it’ll help you drop some serious CES 2014 tech news knowledge on their collective heads.
We’ve pulled together some top editors to offer up an abbreviated view of tech’s biggest show of the year, charting trends from old standby categories like HDTV, mobile, tablets and cars to emerging spaces set to define the changing face of the show for years to come. Oh, and we’ve also tossed in some fun video of the show’s gadgets, because, well, it wouldn’t be much of an Engadget Show without that sort of thing, now would it? Toss in a bit of video of your long-time host getting a bit welled up at the end, and you’ve got yourself a little thing we like to call The Engadget Show 49.
‘Til we meet again, Engadgeteers.
Daily Roundup: Sony Xperia T2 Ultra and E1, court blocks parts of FCC net neutrality rules and more!
You might say the day is never really done in consumer technology news. Your workday, however, hopefully draws to a close at some point. This is the Daily Roundup on Engadget, a quick peek back at the top headlines for the past 24 hours — all handpicked by the editors here at the site. Click on through the break, and enjoy.
A Google Play edition of the Moto G popped up in the Play Store earlier today and is available for $180 (8GB) or $200 (16GB). Click through for details.
A Washington, DC appeals court voided anti-blocking and anti-discrimination requirements in the FCC’s Open Internet Order. Follow the link for more information.
Sony only recently released the Xperia Z1 Compact and Z1S at CES, but it’s adding two more handsets to the Xperia line: the T2 Ultra and E1. Click the link for specs and launch information.
Whether it’s Age of Empires or StarCraft, there comes a point where every gamer struggles with maximum population caps. However, that might not be much of an issue for the new demo game Star Swarm. By utilizing AMD’s Mantle programming tool, the title manages a whopping 5,000 AI objects. Click on through for more details.
Every real-time strategy game has some kind of population cap, limiting the number of units that can be placed simultaneously on a player’s terrain. This limit can stem from the designers’ need to balance competition between armies, but ultimately it’ll also have something to do with the underlying hardware in a PC or console, because a processor will slow down if it’s asked to simulate too many independent, physical 3D objects at once. Some RTS games set the limit at 50-70 units, while others can cope with as many as 500, but a new demo game called Star Swarm takes things to a new level: it uses AMD’s Mantle programming tool to speed up communication between the CPU and GPU, allowing up to 5,000 AI- or physics-driven objects (i.e., not mindless clones or animations) to be displayed onscreen at one time. Coming up, we’ve got a 1080p video of what this looks like, plus an explanation of how Oxide Games, the company behind Star Swarm, made this possible.
As you’ll hear from the video’s narration, Star Swarm is a demo game built to show off Oxide’s new engine, Nitrous, which is being licensed to other developers. At least three Nitrous-based RTS games are currently in production, and Oxide believes that these games will represent a major leap forward for the real-time strategy genre thanks to the “epic scale” permitted by the high population limit.
“It’s a difference of at least an order of magnitude,” says Oxide founder Dan Baker (who was previously Graphics Lead for Civilization V). “Take the most complex scene you’ve ever seen in StarCraft II and multiply it by ten.”
There are a couple of ingredients that are essential for delivering these huge 5,000-unit spectacles. Firstly, you need a robust CPU, since processing this quantity of AI and physics relies on general computing power just as much as on graphics. Unlike many games on the market, Star Swarm is designed to use many CPU cores at the same time. The configuration in the video includes an aging but powerful six-core Intel Core i7 980.
Secondly, to allow for both scale and enhanced visual effects such as motion blur, the graphics side of the system must contain a recent AMD GPU that supports the Mantle programming tool. As we’ve reported before, Mantle brings hardware-specific (read: brand-specific) programming to PC games, because it allows developers to code directly for AMD’s Graphics Core Next architecture rather than going through fluffy, hardware-agnostic middlemen like Microsoft’s DirectX drivers. In this instance, Mantle speeds up the communication between the CPU and GPU, allowing multiple CPU cores to talk to the GPU at the same time without causing a jam. (For deeper technical detail on this, check out Oxide’s presentation at APU13.)
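To picture what “multiple CPU cores talking to the GPU at the same time” buys you, here’s a minimal Python sketch. It’s purely conceptual — Mantle itself is a C-style API, and none of these names are real Mantle calls — but it shows the shape of the idea: each thread records its own list of draw commands independently, and a single submission hands them all over at the end, instead of every core queuing behind one driver thread.

```python
import threading

# Conceptual sketch (not real Mantle code): each worker thread records its
# own command buffer independently, so draw-call preparation scales across
# CPU cores instead of funneling through a single driver thread.
def record_commands(units, out, index):
    # Pretend each on-screen unit needs one draw command.
    out[index] = [f"draw(unit_{u})" for u in units]

units_per_core = [range(0, 100), range(100, 200), range(200, 300), range(300, 400)]
buffers = [None] * len(units_per_core)

threads = [threading.Thread(target=record_commands, args=(chunk, buffers, i))
           for i, chunk in enumerate(units_per_core)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A single submission hands all the pre-built buffers to the GPU queue at once.
submitted = [cmd for buf in buffers for cmd in buf]
print(len(submitted))  # 400 commands recorded in parallel, submitted together
```

The serialization that DirectX imposes is, in this analogy, forcing all four workers to dictate their commands to one scribe; removing the scribe is where the CPU-side savings come from.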
Star Swarm is actually the first hard evidence we’ve seen of what Mantle can do, and the numbers speak for themselves: with everything else being equal, enabling Mantle more than tripled the demo’s frame rate, from an unplayable 13 fps to a buttery 44 fps. AMD promised as much when it launched its Kaveri APU earlier today, adding that Star Swarm will run at playable frame rates even on low-power 65-watt versions of the APU (versus 95 watts for a regular desktop chip).
Separately, AMD claims that a forthcoming Mantle update for Battlefield 4 will boost performance in that title by as much as 45 percent. We’ve also heard some gossip that the PC version of Sniper Elite 3 will support Mantle, likely reflecting the fact that its developer, Rebellion, is making PS4 and Xbox One versions of the shooter and is therefore already accustomed to optimizing its code for AMD’s architecture. All in all, if these games live up to the precedent set by Star Swarm, it could well be worth having some Mantle juice in your gaming rig in 2014.
A decade ago, AMD brought us the first dual-core x86 processor. Then, starting in 2008, the company came out with tri-core and quad-core designs in quick succession, leading up to octa-core chips in 2011’s FX range as well as in the latest AMD-powered game consoles. Today, we’re looking at a fresh leap forward, albeit one that will take a bit of explaining: a desktop and laptop chip called Kaveri, which brings together up to four CPU cores and eight GPU cores and gives them unheard-of levels of computing independence, such that AMD feels justified in describing them collectively as a dozen “compute cores.”
Marketing nonsense? Not necessarily. AMD is at least being transparent in its thinking, and besides, if you’ve been following our coverage of the company’s HSA project, and of GPU compute in general, then you’ll know that there’s some genuine technology underpinning the idea of GPU cores being used for more than just 3D rendering. Nevertheless, even if you don’t go for the whole 12-core thing, AMD still makes some down-to-earth promises about Kaveri’s price and performance — for example, that it matches up to Intel chips that cost a lot more (the top Kaveri desktop variant costs just $173, compared to $242 for a Haswell Core i5), and that it can play the latest games at 30fps without the need for a discrete graphics card. These are claims that can — and will — be put to the test.
Let’s start with the theoretical stuff, even though it’s largely academic until more software comes along that can make use of it. The reason AMD calls the GPU cores inside Kaveri “compute cores” is that they’re said to be fundamentally different to the GPU cores in other PC processors. This difference lies in the fact that they’re able to function as equal citizens: instead of relying on the CPU to orchestrate their workload, they can access system memory directly and take on tasks independently — almost like a CPU core does. The only difference is that they can’t take on the same types of tasks as a CPU, as they’re better suited to simple parallel chores rather than complicated serial processing.
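The split between “simple parallel chores” and “complicated serial processing” is easiest to see in code. In the illustrative sketch below, the first task applies the same independent operation to every element — exactly the kind of work that maps onto many GPU compute cores at once — while the second can’t be spread across cores, because each step depends on the previous one’s result.

```python
# Parallel-friendly: every element can be processed independently, so the
# work could be split across as many compute cores as you have.
pixels = list(range(8))
brightened = [min(p + 2, 255) for p in pixels]   # same simple op, per element

# Serial: each iteration depends on the previous result, so it has to run
# on one core, one step at a time -- CPU territory.
def compound(balance, rate, years):
    for _ in range(years):
        balance *= (1 + rate)
    return balance

print(brightened)
print(round(compound(100, 0.05, 3), 2))
```

Image filters, physics over thousands of independent particles and big spreadsheet recalculations fall into the first bucket; branchy game logic and anything with a running state fall into the second.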
As things stand, software developers are already able to exploit the GPU for general computing using tools like OpenCL, which can be used to accelerate anything from Photoshop to big spreadsheets. But OpenCL requires reams of code and a lot of inefficient to-ing and fro-ing between the GPU and CPU — all of which, AMD says, will be drastically reduced if developers latch onto HSA. That’s a big “if,” of course, but now that AMD has recruited a bunch of partners into its HSA Foundation, and now that it has managed to push its silicon into millions of households via next-gen games consoles, developer interest looks more likely, and Kaveri’s compute cores at least bring it some future-proofing as a result.
Bearing in mind that we’re mostly reliant on AMD’s in-house test results for now, until independent reviewers put their graphs online, let’s look at that basic claim about Kaveri undercutting Intel as a gaming processor. The chart above shows a top-end Kaveri A10-7850K pitted against Intel’s Core i5-4670K for games being played at 1080p with max settings (or at least close to max settings — there’s a bit of ambiguity there, but it doesn’t affect the comparison). In each case, the processor is paired with a discrete graphics card, AMD’s mid-range Radeon R9 270X, presumably because most enthusiasts would still avoid relying solely on integrated graphics. As you can see, Intel is slightly ahead in a number of games, but never by a significant margin, suggesting that spending $70 more on Intel’s chip doesn’t add much to the experience.
Power efficiency and onboard graphics
In addition to Kaveri’s suitability for gaming when paired with a separate graphics card, the slide above suggests the chip also has an advantage over a Haswell Core i5 on certain synthetic benchmarks, likely due to the fact that it has a bigger GPU than you’d find on an Intel processor. Kaveri’s built-in GPU accounts for 47 percent of all transistors in the chip (over a billion in total), and is potentially meaty enough for it to run games without the need for a discrete graphics card, thereby saving energy and money while also allowing for much smaller PCs. In practice, we played through a level of Bioshock: Infinite at 1080p with low settings, with Kaveri running beneath a little third-party cooler, and we experienced a steady frame rate of 30fps. This is something AMD claims is also possible in other big titles like Battlefield 4, which it’s bundling free with high-end boxed Kaveri chips, but again, you have to be prepared to accept low detail settings.
For the sake of balance, it’s important to point out that an Intel chip is likely to be more power-efficient in its own right. Haswell has fewer transistors (1.4 billion instead of Kaveri’s 2.3 billion) and its transistors are also significantly smaller (22nm instead of 28nm), which should equate to reduced power draw — something that’s especially important when you think about notebook or hybrid/tablet versions of these chips, particularly ones that don’t need to focus on 3D graphics (or, equally, which delegate all such tasks to a separate GPU).
Mantle and TrueAudio
Speaking of Battlefield 4, we arrive neatly at Kaveri’s other big claim to fame — and it’s a claim that requires a much smaller leap of faith than HSA does. You see, Battlefield 4 is one of a growing number of games that will take advantage of an AMD-tailored programming tool called Mantle, which promises big boosts in performance even on lower-power (e.g., HTPC and laptop) versions of the chip. Mantle runs on any AMD graphics card that contains the newer Graphics Core Next (GCN) architecture, and since Kaveri’s graphics processor is based on GCN, it can run Mantle-optimized games and applications too, resulting in claimed performance increases of up to 45 percent in BF4 (once it gets its Mantle update later this month) and as much as 300 percent in real-time strategy games running on the new Star Swarm game engine. (For more on Mantle, read this.)
Finally, in addition to Mantle, Kaveri also brings another feature across from AMD’s latest graphics cards: TrueAudio. This is a dedicated, programmable audio processor that sits on the chip and helps to improve the audio in games by decoding data about location (giving sounds a feeling of directionality and distance) and also increasing the total number of voices and effects that can be heard at one time.
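As a rough sketch of the kind of work a positional-audio processor does — the formulas below are illustrative stand-ins, not TrueAudio’s actual algorithms — you can derive a distance falloff and a left/right balance from where a sound source sits relative to the listener:

```python
import math

# Illustrative positional audio: compute per-ear volume from a 2D source
# position. Real engines use far more sophisticated HRTF and reverb models;
# this only shows the "directionality and distance" idea.
def position_sound(source_x, source_y, listener_x=0.0, listener_y=0.0):
    dx, dy = source_x - listener_x, source_y - listener_y
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + distance)          # farther sounds are quieter
    pan = max(-1.0, min(1.0, dx / 10.0))   # -1 = hard left, +1 = hard right
    left = gain * (1.0 - pan) / 2.0
    right = gain * (1.0 + pan) / 2.0
    return round(left, 3), round(right, 3)

print(position_sound(10.0, 0.0))   # far to the right: right channel dominates
print(position_sound(0.0, 1.0))    # directly ahead: balanced channels
```

Running this sort of math for dozens of simultaneous voices is cheap individually but adds up fast, which is why offloading it to a dedicated on-chip processor frees the CPU for game logic.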
Kaveri apparently took four years to develop, due to all the extra gubbins AMD has squeezed onto it, including HSA, Mantle and TrueAudio. This also explains why Kaveri chips are priced significantly higher than their predecessor, Richland: the lower-specced A8-7600 will start at $119, rising to $152 for the A10-7700K and, as we’ve mentioned, $173 for the flagship A10. Will they be worth the money? We’ll wait to round up independent reviews from specialist sites before we make any final judgment, but it certainly looks like AMD has brought some clever additions to this generation that could boost its value. It looks good as a traditional gaming processor right now, especially if you intend to pair it with a Radeon graphics card in order to enable Dual Graphics (with the GCN cores in Kaveri’s GPU and in the discrete GPU effectively being added together), but we’ll need to see more Mantle- and HSA-enabled software before we’re ready to believe it can tackle Intel on general computing.
Let’s take a moment to forget the technical nonsense. Seriously. Besides, we only really know the broad strokes about Mullins, AMD’s next-gen ultra-low-voltage APU. Instead, let’s just gaze upon the tiny wonder that is the Nano PC for a bit and soak it all in. This reference design from the Sunnyvale company packs enough power to run Windows 8.1 pretty seamlessly and even get in a quick game of FIFA 14 at 1080p. Inside, in addition to a Mullins chip, is a 256GB SSD, a camera, Bluetooth, WiFi and a DockPort connector. It’s the last of those specs that’s pretty important, since it allows you to connect to a tiny breakout box with HDMI and USB ports. Obviously you’ll need one of those to connect it to a TV, which the Nano PC is designed to sit atop. And it’s really not much larger or thicker than a Note 3 — it’s pretty much a marvel of engineering. Here’s hoping that a company or two picks up on the design and starts making absurdly thin machines of their own.