
14 Mar

‘Sideways Dictionary’ simplifies tech jargon for the masses


If you ever get confused about tech jargon (or want to clear up said confusion), a new tool from Google’s Jigsaw incubator and the Washington Post may help. The “Sideways Dictionary” uses analogies and metaphors to help regular, non-techy people understand terms like “zero-day,” “metadata,” “net neutrality” and other jargon. Users will be able to access analogies online like a regular dictionary or find them in the Post, where they’ll accompany articles that contain “technobabble.”

Jigsaw marketing head Alfred Malmros says the idea is to create a shared vocabulary around tech. “The innovation emerging from the technology industry is staggering, but too often we fall back on jargon to explain new concepts,” he said in a statement.

To explain the term “DDoS,” for instance, he uses a child’s birthday party. “If you think about it a bit sideways, it’s like hosting a birthday party for your ten-year-old niece where you’ve invited 20 of her closest friends to attend — only 40,000 kids show up,” writes Malmros. Another example, shown in the video below, is comparing encrypted versus plain text communication to sending a sealed letter versus a postcard.

Jigsaw, the Washington Post and the site’s first power-user, Nick Asbury, have already come up with definitions for about 75 terms. However, they also want users to come up with analogies for terms, “the quirkier and more personal, the better.”

All you need to do is log onto the site with Google or Facebook, type in a term and offer your own definition (I submitted one for “crowdsourcing.”) The team also sought definitions from luminaries like Google’s Eric Schmidt and internet co-inventor Vint Cerf, who helped coin some of the jargon in the first place.

Editors will moderate every analogy in an effort to find the most “relevant” definitions and eliminate any bias or inaccuracies. Readers will then vote on the most helpful ones, and those will appear in the Washington Post when you hover over a word. However, you’ll also be able to see other definitions if that one doesn’t work for you. “An analogy cannot be 100 percent correct, nor can it be 100 percent incorrect,” says Malmros.

Anything that helps “normals” understand the tech world is a good thing, but the timing of this is particularly good. It’s never been more important for the voting public to understand things like net neutrality, two-factor authentication, phishing and machine learning. With hundreds of “fake news” stories circulating, it also helps to have a well-informed public, particularly for the kinds of complex issues that the Washington Post, Engadget and other sites are reporting on.

Source: Sideways Dictionary

14 Mar

The history and future of 3DMark, the world’s most popular gaming benchmark


If you have ever cared about how powerful your PC really is, then you’ve almost certainly used a piece of Futuremark software.

But you probably don’t know the story of where that company came from.

PCMark, Sysmark, VRMark and, most famously, 3DMark are just a few of the benchmarks to come out of a company that has sat on the cutting edge of the technology underlying the latest games for the best part of 20 years. Today, just as when gamers were asking each other “Can it run Crysis?”, overclockers and enthusiasts continue to push systems to their limits in the hope of earning the bragging rights that come with the highest 3DMark scores in the world.

But what makes a benchmark? And what are Futuremark’s plans for the future?

Fortunately, there is one man we can turn to for answers to these questions — Futuremark’s commercial director Jani Joki, who has been with the company from its very earliest days.

Early days

Futuremark is an offshoot from Remedy Entertainment, which is best known for the Max Payne games. Back in the late ’90s it had created its first game, Death Rally, and was working on its second, which would ultimately become the pioneer of bullet-time, Max Payne.

“This was the age when 3D acceleration was first coming into being,” said Joki. “Remedy was contacted by a magazine publisher, VNU Publications. Together they had the idea of creating a classical demo scene made for 3D accelerators, that would also measure [performance] at the same time.”

[Images: Remedy’s Death Rally, Max Payne, and Max Payne 2: The Fall of Max Payne]

Remedy’s programmers were capable of building some of the best graphical showcases in the world. Looking to market that skill, it agreed to team up with VNU Publications to create what would become the first benchmark aimed at gamers and 3D acceleration. The magazine would publish the branded benchmark as its own, as a tool for readers and to show “something cool that they did,” as Joki puts it.

“It was produced as a side project alongside the games Remedy was working on and debuted at Assembly, a large demoscene event,” he said. “It caught on, because it had a basic measuring system to it. It was largely supposed to look cool, but the concept that this new 3D acceleration should be measured, really caught on.”

More: Vulkan, VR and DirectX 12 all getting new additions in Futuremark’s benchmarks

The late ’90s was an interesting time for graphical hardware. Today’s gamers have no choice but to debate the merits of Nvidia or AMD’s latest graphics cards, but back then, gamers had many companies to choose from. Yet there was little knowledge among consumers, or even hardware and software developers, about which 3D accelerators were any good, or what techniques worked best.

Many companies claimed to have the best solution. Futuremark’s first benchmark provided a chance to prove it.

Building a benchmark

Futuremark’s founding came at an important time in gaming, as many exciting new technological developments were emerging.

“DirectX 7 was a huge new thing,” said Joki. “It was immensely popular, and T&L was standardized then. Almost everyone was using it.”

Games look better because of the graphical artists that work on them.

T&L, or transform, clipping and lighting, covers three rendering stages: transforming 3D geometry into two-dimensional screen coordinates, clipping away the parts of a scene that won’t appear in the finished picture, and shading surfaces according to how the scene is lit.

Until supporting APIs and hardware were engineered, T&L had been handled exclusively by software, and processed by the CPU. It was hardware support for technologies like this that gave people a real reason to upgrade to powerful, dedicated graphics cards and in turn, gave them a reason to run testing software like 3DMark 2000. But just as those technologies drive consumers, they also gave Futuremark the spark to create a new benchmark.
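
To make that concrete, here’s a minimal Python/NumPy sketch of the per-vertex math T&L describes: transforming a 3D point into 2D screen coordinates, checking whether it survives clipping, and computing simple diffuse lighting. The matrix and values are purely illustrative, and this is not how a real rendering pipeline is written; it’s just the arithmetic that DirectX 7-era hardware began taking off the CPU.

```python
import numpy as np

# One vertex (homogeneous coordinates) and its surface normal; values are illustrative.
vertex = np.array([1.0, 2.0, -5.0, 1.0])
normal = np.array([0.0, 0.0, 1.0])

# Transform: a simple perspective projection with focal length f.
f = 2.0
projection = np.array([
    [f,   0.0, 0.0, 0.0],
    [0.0, f,   0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],   # copies -z into w for the perspective divide
])
clip = projection @ vertex
screen_xy = clip[:2] / clip[3]   # perspective divide: 3D position -> 2D screen position

# Clipping (simplified): geometry behind the camera never reaches the screen.
visible = clip[3] > 0

# Lighting: Lambertian (diffuse) shading from a single directional light.
light_dir = np.array([0.0, 0.0, 1.0])
brightness = max(0.0, float(normal @ light_dir))

print(f"screen position: {screen_xy}, visible: {visible}, brightness: {brightness:.2f}")
```

Multiply that by hundreds of thousands of vertices per frame and it’s clear why moving this work onto dedicated silicon was such a leap for late-’90s graphics cards, and why measuring it suddenly mattered.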

“It can be a new generation of DirectX, a new generation of Windows or hardware, or a combination, like DirectX 12 and Windows 10,” said Joki. “We don’t just create benchmarks for the hell of it; there has to be some kind of demand.”



That’s not to say that Futuremark is entirely reactive. It is aware of what’s coming one or two years down the pipeline, so it can prepare its benchmarks accordingly.

Part of that is the natural progression of hardware and software; Joki says Futuremark can make educated guesses about the long-term future. But close ties with hardware and software developers also mean it knows about specific technologies well before they arrive.

“When we launch a new benchmark, it’s imperative that it’s valid for what’s already been launched in the past six to 12 months, and should be valid for what’s coming in the next one to two years,” he explained.

Falling behind the artistic curve

Futuremark’s focus on underlying technologies is an important part of what makes its software so useful. Because its tests are developed in-house, they’re engine-agnostic: they exercise the underlying tools used by everything from the latest CryEngine to recent Unity releases, on hardware from a range of manufacturers.

“The neutrality we have held is always absolute,” laughed Joki. “We joke within Futuremark that if any one hardware vendor is too happy with a benchmark we’re developing, we need to take another look at it.”

That’s important because, if Futuremark built its tests on any one existing engine, other engines would use the same technologies in different ways that the benchmark couldn’t account for. It would also make it easy for hardware developers to ‘cheat’ the system by optimizing for whichever engine is most common.

“We joke within Futuremark that if any one hardware vendor is too happy with a benchmark […], we need to take another look at it.”

But that’s also why software from 3DMark doesn’t look as good as some of the prettiest games. While Futuremark was born in a time when programmers held the key to divine artistry, today it has much more to do with the artists themselves.

“When 3DMark 2001 came out, we could pretty easily say that we had graphical superiority over most games. The graphics we could create were more interesting and more realistic looking than just about anything that was out at that time,” said Joki. “While we can still do that on a technical level today, games now look better because of the graphical artists that work on them. We still employ five great graphical artists, but we can’t compete with companies that hire hundreds to work on their game.”

That’s something Futuremark has come to terms with over the past decade, and Joki believes it’s a little more obvious in its recent releases. Its latest benchmarks might not be quite as pleasing to the eye as contemporary games, but they still tax systems like never before. And that, ultimately, is what matters most.

Sure, but can it run Crysis?

Even if today’s 3DMark software doesn’t look as aesthetically impressive as some of the AAA titles out there, this isn’t the first time developers have challenged the beauty of Futuremark’s software, nor its ability to test hardware. At the turn of the century, as 3DMark and Futuremark’s popularity grew, game developers caught on that offering their own game as a testing suite provided more content for gamers and, when reviewers regularly used it for testing, free publicity.


Many games over the years have been used for this purpose, but one game still stands out as a paragon of not only beauty, but its ability to crush hardware hopes and dreams. Crysis.

“Can it run Crysis?” is an old meme at this point, but one that persists in comment sections to this day. And to Joki, it’s still an important question. Not because of Crysis itself, but because in his mind, if you are buying a PC to play a specific game, no test can better tell you how that game will run than a test baked into the game itself.

“Using Crysis to predict how well your PC might run Civilization IV would not have been that easy.”

“What we at Futuremark try to do is create something that, should you buy a bunch of games and measure all of them and then aggregate all of them into one number — that’s roughly what 3DMark is designed to do,” Joki said.

But did Futuremark think Crysis was a good way to measure PC performance when it was released?

“In some ways yes,” said Joki, tactfully. “It had a lot of cool stuff in it, so I have nothing against it, though it did use some effects perhaps a little excessively. Because of that, it was a pretty good test for certain aspects of graphics hardware, but the scaling between different graphics components was not necessarily accurate. Using Crysis to predict how well your PC might run Civilization IV would not have been that easy.”

Changing of the guard

With all its programming skill and artistic talent, you might ask why Futuremark hasn’t made games of its own. If you know a little about its history already, though, you’ll know it has. It launched the Futuremark Games Studio in 2008, and released its first game, Shattered Horizon, in 2009.

“This was something a lot of people requested from us,” explained Joki. “We had the resources, the manpower, and the interest, and we decided to try and see what would actually happen.”

While Shattered Horizon saw moderate success, it wasn’t the financial hit that Futuremark’s owners were looking for. After five years, the gaming division was sold off to Rovio, the developer of Angry Birds. That profit-driven thinking is why Futuremark is today owned by Underwriters Laboratories, an American safety consulting and testing company. You might be familiar with its logo, which can be found on a variety of equipment, including most laptop and smartphone power chargers.


Futuremark had always been owned by the venture capitalists who invested in its earliest days, back in 1999. In 2014, with eyes on a different horizon, those investors sold Futuremark to the testing firm, which Joki sees as a perfect fit.

“Put simply, UL is a company which does testing, and we’re a company that makes testing software,” said Joki. “Including benchmarking in UL’s testing system makes sense for everyone, and that gelled from our first meeting together.”

A surprising benefit of this move is that it lends more credibility to Futuremark’s stance as a disinterested third party in the graphics hardware market. UL is legally bound to be impartial, so Futuremark is now also bound by such constraints. Remember that the next time you read someone complaining that its latest benchmark unfairly favors one graphics card manufacturer over another.

Is there an event horizon for graphics?

Futuremark’s extensive work in standardization of 3D benchmarking has an end goal beyond the creation of an accurate benchmark. It’s also part of the push for faster, better hardware. Games are constantly pushing the boundaries of what’s possible in real-time visual rendering. Photorealism has been touted for years as the point of no return, where we stop being able to distinguish between rendered graphics and reality. But does that mean 3DMark will eventually engineer itself out of work?

That’s not something Joki’s too worried about in the near future. It’s not a problem you can solve just by throwing more hardware at it, he says. You need the software development right alongside the hardware to be able to achieve true-to-life visuals. And while we’re closer to photo-realism than ever before, each success makes the next step harder.

More: Wondering if your PC can handle VR? Basemark’s VRScore will let you know

Still, even if we do reach a point where we can make a scene look close to reality, there is still much more to be done, and much more to be tested.

“I just don’t see a point where we can say that there is sufficient performance,” Joki said. “Let’s say you can render a house at a photo-realistic level. Add 100 more houses and suddenly the scene is vastly more complex.”

“Plus, I’m pretty sure that the need for measuring how fast things are in different scenarios will still be needed […] There are situations even today where people could say that it doesn’t matter what hardware you have. If you only use Gmail, then any computer will do, but even the people buying pre-built systems for that still like to see how much faster it is than their old one. People who buy cars still like to know what the 0-60 speed is, even when buying a family sedan.”

Getting down to the mantle

With Futuremark not yet foreseeing its own demise, it has a lot of work still to do, and much of that will involve taking advantage of new APIs. With DirectX 12 gathering support among developers, and similar APIs like Vulkan helping draw more power from existing hardware, that’s going to be an immediate focus for the benchmark developer.

Joki tells us that there are new DirectX 12 tests coming to VRMark, as well as an entirely new version which will be shown off at the Game Developers Conference at the end of this month and will launch shortly after. There’s also “some sort of 3DMark launch” planned before the end of the year, and Futuremark is already working on the next full version of it.

Whatever’s released, it’s certain that Futuremark will remain an essential part of the 3D graphics community. Its long history has rightly given it great influence, and we’ll need the company to use that in building next-generation benchmarks for tomorrow’s gamers.

14 Mar

Oppo turns the dual-lens cam trend around, puts them on the front of its new phone


Why it matters to you

The trend for dual-lens cameras on the back of our phones is slowly shifting to include them on the front, and Oppo is the latest to bring the same tech to the selfie camera on its new F3 Plus.

Oppo, a smartphone company known for its strong camera technology, has announced the F3 Plus, a new phone with a very special selfie cam. While dual-lens cameras are becoming commonplace on the back of our phones, there aren’t very many on the front; but that’s the F3 Plus’s party piece.

The main lens has 16 megapixels, and is joined by a second 8-megapixel sub-camera lens, both set above the F3 Plus’s screen. However, Oppo hasn’t revealed what the two lenses will do. It’s natural to assume they will produce a bokeh effect on portrait shots, but a teaser image suggests one will be a wide-angle lens for group selfies. Whether this will be the only function remains to be seen.

More: Oppo’s clever 5X camera tech uses a prism for clearer pictures

Oppo is also staying quiet about the rest of the F3 Plus’s specification. Rumors spread recently about the phone, indicating it would have a screen larger than the F1 Plus’s 5.5 inches, potentially stretching up to 6 inches. A Snapdragon 653 processor with 4GB of RAM is possible, along with a 4,000mAh battery and 64GB of internal memory. The rear camera is expected to have 16 megapixels and a single lens.

The F3 Plus is likely to be joined by a smaller F3 smartphone, but Oppo hasn’t shared a release date or specifications for the device yet. It’s possible the phone will retain the dual-lens selfie cam from the F3 Plus. Unfortunately, it’s rumored the F3 series phones will run Android 6.0 Marshmallow, rather than the latest 7.0 Nougat version, and have Oppo’s ColorOS user interface over the top.

Oppo will release the F3 Plus on March 23, when it’ll be sold in India, Indonesia, Myanmar, the Philippines, and Vietnam. A wider international launch hasn’t been confirmed, but Oppo has sold its devices in Europe and the U.S. before.

14 Mar

Tag Heuer’s new smartwatch has 500 style combinations, and costs at least $1,600


Why it matters to you

Tag Heuer has completely embraced the smartwatch with its new model, which offers a dizzying total of 500 style combinations.

Swiss watch brand Tag Heuer has returned to the world of smartwatches, after first embracing the technology in 2015 with the Tag Heuer Carrera Connected. The new model, called the Connected Modular 45, makes the Carrera Connected look like a tentative, exploratory first step. For the new watch, Tag Heuer will offer 11 standard models, with another 45 available to special order, and a huge range of interchangeable parts for a total of 500 different style possibilities.

Intel worked with Tag Heuer on the Connected Modular 45, providing its Atom processor and the technical expertise to engineer a full titanium body that still includes Wi-Fi, Bluetooth, GPS, and NFC. The watch will run Android Wear 2.0 and support Android Pay, plus a companion app provided by Tag Heuer will expand the range of features accessible on the device. A microphone built into the watch will enable voice activation. You can even use the watch without a smartphone, at least for tracking exercise and location. The app also boasts an unusual calendar feature that works with certain watch faces, reminding you if you’re running behind by still showing past-due appointments.

More: Our first look at the TAG Heuer Carrera Connected

A 1.39-inch screen covered in a 2.5-inch piece of sapphire glass will dominate the water-resistant body, while the battery inside provides a day’s worth of use before it needs a recharge. The body itself comes in polished or satin finishes, and the customization options include different bracelets, straps, buckles, modules, and even watch dials. It’s a shift away from the handful of options buyers had with the Carrera Connected, and a move in the direction pioneered by the Apple Watch and Fossil’s Android Wear lineup.

This is a Tag Heuer watch, so you’ll pay a premium for owning one: the Connected Modular 45 will start at $1,600. Don’t worry about splashing out on a piece of technology that will one day be out of date, either, as Tag Heuer will swap the Connected Modular 45 for a Heuer 02T Tourbillon Chronograph when you think that day has come, or simply change the internals for a mechanical movement if you’d prefer to keep the body.

Head over to Tag Heuer’s website from March 14 to start customizing your Connected Modular 45. The watch will be on display at the Baselworld 2017 watch show in Switzerland, so look out for further coverage at the end of March.

14 Mar

Tag Heuer doubles down on luxury smartwatches with the Connected Modular



Luxury watch owners have a reason to care about smartwatches now.

Not only is designing a smartwatch difficult from a technical perspective, it’s a unique challenge from a design perspective. The tech world wants lighter, thinner, faster, and more battery. If you go look at an actual watch, especially an expensive one, you’ll find almost none of them fit this description. Luxury watches are often huge, flashy things that stand out on the wrist and demand to be noticed, not to mention considerably more expensive than your average Android Wear watch.

Last year the folks at Tag Heuer bucked the Android Wear trend with a watch that was expensive by techy standards, but greatly exceeded company sales expectations. In response, the company has doubled down on the luxury part of the Connected line with the ability to swap out many different pieces on the watch body to match your needs. It’s called the Connected Modular, and if you were hoping the price was coming down this year you should probably stop reading now.

As spec sheets go, the Tag Heuer Connected Modular isn’t going to wow anyone familiar with smartwatches. Like its predecessor, this is an Intel-based watch with a single crown button on the side that doesn’t rotate. Unlike its predecessor, there’s only 512MB of RAM onboard. The 410mAh battery powers a 287ppi display in a 45mm casing that is 7.5mm thick and can handle water down to 50 meters. There’s no heart rate monitor, no barometer, no LTE radio, and the watch itself charges with a magnetic pin dock. You get Wi-Fi, Bluetooth, NFC for Android Pay, and GPS onboard. Oh, and this watch is the first smartwatch ever to be certified as “Swiss Made” to help indicate quality.


The Tag Heuer Connected Modular comes in several base kits.

Where this watch really gets interesting is in all of the things that aren’t underneath the display. As our initial reporting suggested, the Connected Modular separates at several points. The lugs, straps, and buckles on the watch will all be replaceable, with many different options ranging wildly in price. If you decide you’d rather not have Android Wear 2.0 on your wrist for a day, the whole watch body can be swapped out for a special Calibre 5 movement from Tag. There is also a limited-edition tourbillon movement available with one of the models at launch.

It’s not all hardware with this watch. Tag has seen the benefit of not only including custom watch faces with its branding onboard, but also making it easy for users to create their own. On top of several unique Tag faces with customizable options built into the watch, users will be able to create personalized options in the new Tag Studio app. The app will include many different Tag-inspired options, and will be updated later on with preset options from Tag ambassadors like Tom Brady and Mats Hummels. Most of these faces will support the new Android Wear complications feature, so they’ll be more than just nice looking on your wrist.

Naturally, these features come with an impressive price tag. The base model of this watch with a rubber strap is going to run you $1,650. There will be other kits available at higher price points with different modular options, going all the way up to $18,500. And, to appeal to those eager to get their hands on one right now, Tag Heuer has made the watch available starting today. You can head to the Tag Heuer website, or check out your local store and walk out with one on your wrist.


A close look at the Tag Heuer Connected Modular’s detachable buckles and lugs.

While these watches are unlikely to appeal to the budget-focused and tech-obsessed among us, the Connected Modular will appeal greatly to those who collect nice watches and appreciate the ability to appear as though they’re wearing many different watches just by swapping core pieces. This particular feature is notably missing from even the most expensive Apple Watch variants, and is something Tag Heuer is going to be able to do very well.

For those still eager to use the original Tag Heuer Connected, the Android Wear 2.0 update will be rolling out starting today.

14 Mar

Google could team up with India’s Jio over an affordable 4G phone


A Google-branded budget phone could be in the works.

Google is reportedly working with Jio — which recently crossed 100 million subscribers — to launch an affordable 4G-enabled phone that will work exclusively on Jio’s network. The phone is likely to make its debut before the end of the year, according to The Hindu.


Google is looking to expand its reach in India, and one way to do that would be to collaborate with Jio. Google’s efforts in providing free internet access at railway stations across the country have borne fruit, with over 5 million customers using the service every month.

However, Google has failed to attract any mainstream attention for its Android One program, and in a recent interview, CEO Sundar Pichai said that smartphones need to be priced as low as $30 to succeed in the market. A partnership with Reliance Jio could serve as a catalyst for Google’s device ambitions in the country, while allowing Jio to reach a wider customer base.

Reliance’s own Lyf brand of devices has fared well in the country, mainly because the carrier’s network was initially limited to its own devices. In addition to collaborating on a budget phone, The Hindu noted that Reliance’s upcoming smart TVs will feature software from Google, which suggests they’ll run Android TV.

14 Mar

OnePlus UK is now taking applications for a student marketing campaign



OnePlus wants you to come up with a new marketing campaign.

OnePlus’ marketing efforts have had their ups and downs in recent years, and the company is now looking to raise its brand awareness in the UK with a new marketing campaign. To that end, OnePlus is rolling out a challenge through which students can pitch their ideas to OnePlus’ marketing division, with the winning entry earning a paid summer internship at the company’s London office.

From the website:

Calling all students: think you have what it takes to create and execute an innovative marketing campaign?

Welcome to the first ever OnePlus Marketing Challenge. Pick your teammates, put your heads together and come up with a killer marketing campaign.

The five best teams will be invited to our European HQ in London to pitch their campaign in person.

The winning team will be selected for a summer internship to run their campaign.

OnePlus is now taking submissions for ideas, and will put up the 20 best ideas for a public vote on May 12. The five finalists from the vote will fly to London to pitch their campaign to OnePlus’ marketing team, and the team that wins the challenge will get to work with OnePlus and walk away with a OnePlus 3T each:

  • First place will be awarded to one team who will receive a paid summer internship at the European HQ in London for the team to implement their marketing campaign and a OnePlus 3T for each team member.
  • Second place will be awarded with a OnePlus 3T for each team member and one £500 Amazon voucher for the team.
  • Third place will be awarded one £500 Amazon voucher for the team.
  • Runner-up prizes will be awarded to the teams who come fourth and fifth, receiving one £300 Amazon voucher per team.
  • The top 20 shortlisted teams, outside of the top five, will receive a £50 Amazon voucher per team member.

The contest is limited to those living in the UK. If you’re interested, head to the link below for all the details.

See at OnePlus


14 Mar

How to set up the fingerprint sensor on the LG G6



Set it up so that all you have to do is touch the sensor to unlock your phone.

I’ve been spoiled by rear-facing fingerprint sensors these past few years. They’re easier for my smaller hands to access and the mechanism itself just feels quicker than placing a thumb on the front side of the device. The LG G6 features its own rear-facing fingerprint sensor, too, and once you register a print, you can use it to lock up content in LG’s Gallery and QuickMemo+ apps. Here’s how to set up the fingerprint sensor.

How to set up the fingerprint sensor on the LG G6

1. Swipe down from the top of the screen to reveal the notification shade.
2. Tap the Settings icon in the upper right corner.
3. Tap General.
4. Select Fingerprints & security.


5. Tap Fingerprints.
6. Tap Add fingerprint.
7. Scan in the finger of your choice repeatedly until the interface indicates it’s been registered.


You can now use the fingerprint sensor on the back to quickly unlock your phone with just the touch of that registered finger.

Questions?

We’re standing by to answer any questions you may have. Just leave a comment!


14 Mar

NVIDIA Jetson TX2 is the supercomputer that’s going to build the next great idea



NVIDIA’s Jetson TX2 is more than a worthy successor to the original. It’s a new way to do things.

Artificial intelligence and machines that can learn are how the things we use every day will be improved. Google and Android are all-in on AI through Google Assistant and machine learning, so it’s worth knowing how the back end operates, how it got there and what kind of equipment makes it all possible. And it’s really cool, too!

The people who will build this technology of the future will need the tools to do so. In 2017, NVIDIA is doing its part, and the Jetson TX2 is the embodiment of this idea. Developers need hardware that’s not only capable of doing the computing and thinking (yes, I’ll say it) that our smarter future is going to need, but is also easy to use and deploy.

AI at the Edge.

NVIDIA refers to this as “delivering AI at the Edge,” and it’s an apt description. The TX2 is a complete supercomputer, able to process data on its own at the place and time it’s actually generated instead of thousands of miles away across the internet. We take connectivity for granted because of the way we use it right now, but there are plenty of cases where the round trip from a smart piece of machinery to a server and back is simply too long a wait. And a large part of this blue marble we live on doesn’t have a connection to the internet, and won’t for a very long time.

A small computer that can do just about anything and process all the data it collects itself is how you tackle these problems. NVIDIA seems to have nailed it here.

What is this thing?


This isn’t something you can find at Best Buy to use for things you do with your phone. It doesn’t run Android (but it certainly wouldn’t be difficult to fix that) and it’s something most of us won’t be buying. But it’s still a very important part of the things we love.

The Jetson TX2 is a development tool, but it’s also a field-ready module that can power any AI-based equipment. It’s a computer the size of a credit card with all the inputs and outputs a “regular” computer has. When you plug the TX2 module into its specially designed carrier board (that’s part of the development kit), it essentially turns into a typical small form factor PC, complete with all the ports and plugs your desktop has.

Developers can use this to build equipment around and actually use the Jetson itself to run demos and simulations. It’s a capable little machine that can do all the calculations something much bigger can do while using a minuscule amount of power to do so. The tech specs are impressive.

  • NVIDIA Parker series Tegra X2: 256-core Pascal GPU and two 64-bit Denver CPU cores paired with four Cortex-A57 CPUs in an HMP configuration
  • 8GB of 128-bit LPDDR4 RAM
  • 32GB eMMC 5.1 onboard storage
  • 802.11b/g/n/ac 2×2 MIMO Wi-Fi
  • Bluetooth 4.1
  • USB 3.0 and USB 2.0
  • Gigabit Ethernet
  • SD card slot for external storage
  • SATA 2.0
  • Complete multi-channel PMIC
  • 400-pin high-speed and low-speed industry-standard I/O connector

The best tech spec is that the Jetson TX2 is a pin-for-pin, drop-in replacement for last year’s Jetson TX1. Let that sink in for a bit — developers who are using existing NVIDIA TX1 computers to power the brains behind their equipment will be able to shut things down, pull the old board and put in the new one. The software for the TX1 will be updated to the same software the TX2 is using, so it will literally be a drop-in replacement. If you’ve ever done any type of field or factory work on equipment that costs a lot of money when it has any downtime, you understand how important this is. While next-generation equipment is being developed, it’s using hardware that works 100% with the existing generation.

The secret here is NVIDIA’s Pascal GPU cores. The same reason Pascal cores are used in very high-end video cards designed for VR and 4K 3D gaming is why they’re used in the Jetson TX2: GPU cores are a more efficient way to crunch numbers. They’re faster and use a lot less power.

The holy grail of computing is artificial intelligence (AI): building a machine so intelligent, it can learn on its own without explicit instruction. Deep learning is a critical ingredient to achieving modern AI. Deep learning allows the AI “brain” to perceive the world around it; the machine learns and ultimately makes decisions by itself. It is now widely recognized within academia and industry that GPUs are the state of the art in training deep neural networks (DNN), due to both speed and energy efficiency advantages compared to more traditional CPU-based platforms.

NVIDIA GPU computers already do some amazing things. They drive the deep learning used for self-driving cars, teach robots human-like motor skills such as walking and grasping, analyze video at high speed to provide text captions, and even play Go, beating really good human opponents along the way.

GPU cores can do the same work as traditional CPU computing while using less power.

The real test of AI, and of the brains that can drive it, is on the horizon. Autonomous robots and drones are being developed for jobs like industrial inspection, portable medical devices that can be taken into the field to help those in need are desperately needed, and smart security cameras that can analyze what they see and take appropriate action will soon be realities. These ideas need computing that can drive AI with deep learning algorithms and analyze the data its neural networks collect, all on its own. They can’t be tethered to a cable, and they will be used in places where even Verizon has no coverage.

Besides being powerful, a computer designed to be small and portable has to be power efficient. Testing shows (.pdf file) that NVIDIA GPU-based computing can be equivalent to an Intel Core i7-6700K CPU while using 6 watts of power compared to 60. For equipment that’s not connected to the power grid, that’s important.

We ran some benchmarks using AlexNet and GoogLeNet, two computer vision networks used for object classification and detection testing, and the results were fantastic. In Max-P (high-power) mode, the Jetson TX2 was able to analyze an average of 641 images per second using the AlexNet network while drawing just 13 watts of power. The GoogLeNet testing averaged 278 images per second at 14 watts. Max-Q (low-power) tests scored an average of 481 images per second on AlexNet and 191 images per second on GoogLeNet while using just 7 watts. That’s roughly twice what last year’s Jetson TX1 could deliver, and it was pretty good at this, too.
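
To put those numbers in perspective, a quick bit of arithmetic turns them into performance per watt, the figure that matters most for battery-powered or off-grid equipment. The short Python snippet below simply recomputes images per second per watt from the results quoted above; the figures themselves come from the testing, not the code.

```python
# Benchmark figures quoted above: (images per second, watts) per power mode and network.
results = {
    ("Max-P", "AlexNet"):   (641, 13),
    ("Max-P", "GoogLeNet"): (278, 14),
    ("Max-Q", "AlexNet"):   (481, 7),
    ("Max-Q", "GoogLeNet"): (191, 7),
}

for (mode, network), (images_per_sec, watts) in results.items():
    # Efficiency is simply throughput divided by power draw.
    print(f"{mode}  {network:<9} {images_per_sec:>4} img/s at {watts:>2} W "
          f"= {images_per_sec / watts:5.1f} img/s per watt")
```

Run the numbers and Max-Q comes out roughly 35 to 40 percent more efficient per watt than Max-P on both networks, which is exactly the tradeoff the two modes are designed to offer: Max-P for raw throughput, Max-Q for the most work per watt.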

When you can process information this quickly and this accurately on-site, a connection to the cloud isn’t the limiting factor it used to be.

In the lab


The Jetson TX2 should be very capable in the field. It’s among the first of a new generation of machines that will learn by doing without a connection to the cloud, and a substantial upgrade over existing equipment. But it also has features that developers will love.

The credit-card-sized compute module plugs into a complete carrier board available as part of the Jetson TX2 development kit. The carrier board uses the 400 I/O pins on the Jetson module to provide standard desktop connections. A software developer can use a standard USB keyboard and mouse, a standard monitor and the Jetson TX2 to create a complete development environment.

Running an Ubuntu 16.04-based Linux4Tegra operating system, the Jetson TX2 ships with all the tools you might need to develop and debug deep learning AI applications as part of NVIDIA’s JetPack software. Developers can download the package from NVIDIA’s Developer Zone, follow tutorials and community knowledge to see what the Jetson can do, then begin work on their own ideas. The software included in JetPack comes preconfigured and optimized for the TX2’s processing system:

  • cuDNN – a GPU-accelerated library of primitives for deep neural networks.
  • NVIDIA VisionWorks – a software development package for computer vision (CV) and image processing.
  • CUDA Toolkit – a comprehensive development environment for C and C++ developers building GPU-accelerated applications.
  • TensorRT – a high performance deep learning inference runtime for image classification, segmentation, and object detection neural networks.
  • NVIDIA Nsight Eclipse Edition – a full-featured and customized Eclipse IDE for developing, debugging and profiling CUDA-C applications.
  • Tegra System Profiler and Tegra Graphics Debugger – tools to profile and sample applications using OpenGL.
  • The necessary collateral and assets to develop and design hardware using the NVIDIA Jetson TX2.
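
To give a flavor of the GPU-accelerated code the CUDA Toolkit in that list exists to support, here’s a minimal sketch that compiles and launches a tiny CUDA kernel from Python using PyCUDA. PyCUDA isn’t part of JetPack and the kernel is deliberately trivial; it’s only meant to show the shape of the workflow (write a kernel, compile it, hand it data, launch it across GPU threads) that libraries like cuDNN and TensorRT wrap up at much larger scale.

```python
import numpy as np
import pycuda.autoinit                      # creates a CUDA context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A trivial CUDA kernel: scale every element of an array by a constant.
mod = SourceModule("""
__global__ void scale(float *out, const float *in, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one GPU thread per element
    if (i < n)
        out[i] = in[i] * factor;
}
""")
scale = mod.get_function("scale")

n = 1 << 20                                  # about a million elements
data = np.random.randn(n).astype(np.float32)
result = np.empty_like(data)

# Launch enough 256-thread blocks to cover the whole array.
scale(drv.Out(result), drv.In(data), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

assert np.allclose(result, data * 2.0)
```

The same pattern, compiling work into kernels and spreading it across hundreds of GPU cores, is what lets the TX2 churn through neural network inference at the rates quoted earlier.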

Using the same platform to build and debug an application is a must for anything intricate and complicated. It’s one of the ways developers can simplify the process, and anything that makes things easier makes for happier developers. While the Jetson TX2 may not be designed as the sole development and build computer a group would use, knowing that it’s capable of the job is a boon for installation and field work. Small adjustments and changes can be made at the edge, the same place the processing happens, without sending data back to another bank of computers to process and return.


Equipment can be designed using the available hardware assets and drawings to not only reduce complexity but to allow an easy interface using readily available peripherals and software. Armed with a laptop and a USB cable, an engineer or field tech has everything needed to rebuild from the ground up if necessary.

NVIDIA’s JetPack software means developers can focus on their work, not setting up a build environment.

Even the installation of NVIDIA’s JetPack is streamlined. Reviewers were provided with an updated version to install, and following a few simple instructions through a clever GUI, we had a complete rebuild of all the software finished in just a few steps and about the time it takes to drink a cup of coffee. Again, we see NVIDIA making things easier so developers can focus on their work rather than maintaining the build environment itself.

You can actually build and debug software on the Jetson TX2 while having an assortment of other applications running, including everything needed to write a blog post.

After a few days of setting things up and testing everything, I came away very impressed with what NVIDIA is delivering here. The first Jetson TX1 was a great product that filled a need for fast development using GPU cores to do the heavy lifting for deep learning neural network applications. In a very short time, NVIDIA has raised the bar with a successor that can break the dependency on the cloud using the same familiar development tools and techniques.

The technology of the future will excite and inspire us all. Products like the Jetson TX2 are what will make that future possible. The NVIDIA Jetson TX2 Developer Kit is priced at $599 for retail orders and $299 for students.

See at NVIDIA Embedded Developers portal

14 Mar

Waze now integrates with Spotify to help de-stress your commute


Waze is adding a big new integration — and sorry, it isn’t Android Auto.


Waze and Spotify have inked a deal to deeply integrate into each other’s apps, both giving you access to the best tunes during your commute and keeping your route easily in hand while browsing your music. With the latest version of the Waze app and your Spotify account connected, you’ll be able to access your Spotify playlists from inside Waze or quickly skip between tracks and see upcoming music. You can even have music start automatically when your navigation begins.


From the Spotify side of things, you’ll be able to start navigation from within the Spotify app while you’re browsing for tunes to take on the road. When your car is at a complete stop, you’ll be able to quickly switch between the Spotify and Waze apps with a single tap, and you won’t lose your place in either app as music information will continue to display in Waze and direction information will show up in Spotify.

We sure would appreciate a more general extension network that would let you integrate any popular music service into Waze — or, how about just integrating Waze into Android Auto? — but this is a big deal for the millions who use both services.

Waze says the rollout for these new features will take a couple of weeks, so be patient if you don’t see it show up immediately. Just make sure you have both apps installed and up-to-date, and you’ll be in line to get the latest playlist partnership for your commute.