8 Feb

Google Brain brings ‘zoom and enhance’ method one step closer to reality


Why it matters to you

You could soon be able to enhance your older photos to look good on a 4K display — just like in crime shows.

The concept of enhancing a pixelated image isn’t new — “zoom and enhance” is responsible for dozens of criminals being put behind bars in shows like Criminal Minds, but that kind of technology has so far evaded the real world. Well, the boffins over at Google Brain have come up with what may be the next best thing.

The new technology essentially uses a pair of neural networks that are fed an 8 x 8-pixel image and then create an approximation of what they think the original image looked like. The results? Well, they aren’t perfect, but they are pretty close.

More: Robotic skin: Researchers created a material twice as sensitive as human skin

To be clear, the neural networks don’t magically enhance the original image — rather, they use machine learning to figure out what they think the original could have looked like. So, using the example of a face, the generated image may not look exactly like the real person but may instead resemble a fictional character that represents the computer’s best guess. In other words, law enforcement may not yet be able to use this technology to produce an image of a suspect from a blurry reflection in a photo of a number plate, but it may help the police get a pretty good idea of what a suspect looks like.

As mentioned, two neural networks are involved in the process. The first is called a “conditioning network,” and it basically maps the pixels of the 8 x 8-pixel image onto a similar-looking but higher-resolution image. That image serves as the rough skeleton for the second neural network, or the “prior network,” which adds more detail by drawing on other, already existing images that have similar pixel maps. The two networks then combine their outputs into one final image, which is pretty impressive.
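For the curious, here is a toy sketch, in Python, of how the outputs of two such networks might be combined. To be clear, this is not Google Brain’s code: the two “networks” below are stand-ins that return random per-pixel scores, and only the combination step (adding the two sets of logits, then normalizing each pixel’s scores into a probability distribution over the 256 intensity values) reflects the general approach described in the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two trained networks. In the real system each is
# a deep model; here they simply return random per-pixel logits over the
# 256 possible intensity values for a 32 x 32 single-channel output.
def conditioning_logits(low_res):
    return rng.normal(size=(32, 32, 256))

def prior_logits(low_res):
    return rng.normal(size=(32, 32, 256))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

low_res = rng.integers(0, 256, size=(8, 8))

# Combination step: add the two sets of logits, then turn each pixel's
# scores into a probability distribution over intensities and pick the
# most likely value for that pixel.
combined = conditioning_logits(low_res) + prior_logits(low_res)
probs = softmax(combined)            # shape (32, 32, 256)
high_res = probs.argmax(axis=-1)     # shape (32, 32)
```

In the actual system the per-pixel distributions are sampled autoregressively rather than argmaxed in one shot, which is what lets the prior network invent plausible detail instead of producing a blurry average.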

It is likely we will see more and more tech related to image processing in the future — in fact, artificial intelligence is getting pretty good at generating images, and Google and Twitter have both put a lot of research into image enhancing. At this rate, maybe crime-show tech will one day become reality.

8 Feb

Moment has three new accessories to unlock the power of your iPhone 7 camera


Why it matters to you

Moment is known for its accessories that help turn your iPhone into a professional camera, and now it has a new suite of tools for you.

Moment needs your help again.

The company, known best for its iPhone camera-supplementing accessories, has returned to where it all began to raise funds for its latest products. Now on Kickstarter, Moment is debuting a new battery photo case, a photo case (really just like the battery case but without the battery), and a cinema wide lens for the iPhone 7 and iPhone 7 Plus.

“We believe that the future of photography is in your pocket,” the Moment team notes in its latest campaign. “The best camera is the one you have with you, and that camera is your phone. At Moment, we want to make your phone work more like a camera.”

More: BlackMagic turns focus to live-streaming in latest cameras and production gear

With the battery photo case, Moment is offering a new kind of protection that promises to bring power and photography together. The iPhone 7 Plus case comes complete with a 3,500mAh battery, while the iPhone 7 case carries a 2,500mAh battery. Because each of these is larger than the corresponding iPhone’s own battery, you’ll get more than a 100 percent recharge on your phone, so feel free to shoot all day. Moreover, you can control whether or not the case is charging your phone using the Moment app.
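As a quick sanity check of that “more than 100 percent recharge” claim, the arithmetic works out if you assume the commonly reported stock capacities of roughly 1,960mAh for the iPhone 7 and 2,900mAh for the iPhone 7 Plus. Those phone figures are not stated in the article, and real-world transfer losses will eat into the surplus:

```python
# Case capacities from the article; phone capacities are assumed
# (commonly cited teardown figures, not confirmed by the article).
case_mah = {"iPhone 7": 2500, "iPhone 7 Plus": 3500}
phone_mah = {"iPhone 7": 1960, "iPhone 7 Plus": 2900}

for model, case in case_mah.items():
    pct = case / phone_mah[model] * 100
    print(f"{model}: case holds ~{pct:.0f}% of a full charge")
# iPhone 7: case holds ~128% of a full charge
# iPhone 7 Plus: case holds ~121% of a full charge
```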

The app also allows you to shoot with your phone more as you would with a real camera. Just half-press the shutter button to lock the subject, then press down fully in order to shoot either photo or video. If you hold down for a longer period of time, you’ll find yourself in burst mode. Shooting with a button can be faster than tapping the screen, especially if your subject is moving. And when you’re shooting with one hand, it’s always awkward trying to hold onto the phone and touch the screen at once.

In the Apple camera app, a full press shoots, and in both apps, pressing and holding unlocks burst mode. Moment also promises a shutter button 75 percent faster than the one included in the previous case, which the company claims makes the new product a big upgrade over even your last Moment case.
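The button behavior described above amounts to a tiny state machine. Here is a hypothetical sketch of it; the app’s real event handling and timing thresholds are not public, so the names and the hold threshold below are made up:

```python
# Hypothetical sketch only: half-press locks onto the subject, a quick
# full press captures, and holding the button down triggers burst mode.
def shutter_action(full_press: bool, held_seconds: float) -> str:
    if not full_press:
        return "lock_subject"      # half-press
    if held_seconds >= 0.8:        # hold threshold is a guess
        return "burst_mode"
    return "capture"

print(shutter_action(False, 0.2))  # lock_subject
print(shutter_action(True, 0.1))   # capture
print(shutter_action(True, 1.5))   # burst_mode
```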

Then there’s the new and improved Moment lens, which is said to be stronger and more reliable than before. “Slightly larger, we now have enough room to make an interface that clicks into place,” the Moment team noted. “Not only is it easier to mount a Moment lens, but for the first time you can put our glass over the wide or tele lens on the iPhone 7 Plus.”

At this point, you can get the Photo Case, the thin, lens-mountable case for photo enthusiasts, for just $20, while $69 will get you the Battery Photo Case. You can get either accessory with a Moment lens starting at $99.

8 Feb

Swifter, stronger, smarter: Quickblade’s new paddle improves your SUP skills


Why it matters to you

The Smart Paddle claims to help stand-up paddlers pick up the sport in a quarter of the time it would normally take.

Stand-up paddleboarding — aka SUP — is one of the fastest-growing sports in the world, drawing in millions of people on an annual basis. If a recent report is to be believed, it doesn’t look like that popularity is going to wane anytime soon, with growth projected to continue beyond 2020. As with any relatively new sport, SUP continues to evolve rapidly, with new gear helping to make it easier and more accessible than ever before. That means better boards, improved paddling techniques, and, of course, more efficient paddles too, including a newly announced option that promises to be the industry’s first smart paddle.

At the 2017 Surf Expo — held in Orlando, Florida in January — Quickblade revealed the Smart Paddle, a new oar that comes equipped with a host of impressive technology. Using a variety of sensors and tracking features created by an Israeli startup called Motionize, this paddle will have the ability to collect a wide variety of data and metrics that can be used to provide real-time feedback and virtual coaching.

Roei Yellin, the vice president of marketing at Motionize, tells Digital Trends, “The smart paddle offers features that help paddlers paddle while staying much more balanced between strokes.” That, combined with feedback on their performance, leads to improved confidence and the ability to pick up the sport at a faster rate. How much faster? Yellin says, “The smart paddle will help novice paddlers become more proficient in a quarter of the time it would take them to figure it out on their own.”

More: Osprey’s GearKit duffel bags are your mobile base camp

We first took a look at Motionize’s technology when we covered the company’s innovative device, which could be added to any SUP paddle. But this collaboration with Quickblade integrates the technology directly into the Smart Paddle itself, eliminating the need for an add-on altogether. The sensor, which connects to a smartphone via Bluetooth, is able to track the entry angle of the paddle blade, the rhythm of a paddler’s strokes, stroke efficiency, and more. It can also keep track of calories burned, as well as speed and distance traveled, with the paddler’s course being plotted on a map. That data is then analyzed by a special app for iOS and Android that can provide real-time feedback with tips on how to improve paddling technique.
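To give a flavor of the bookkeeping behind such metrics, here is an illustrative calculation of one of them, stroke rate, from timestamped stroke detections. The actual sensor fusion is Motionize’s own and isn’t public; this only shows the simple arithmetic on top of it:

```python
# Illustrative only: compute strokes per minute from the times (in
# seconds) at which individual strokes were detected.
def strokes_per_minute(stroke_times_s):
    if len(stroke_times_s) < 2:
        return 0.0
    elapsed = stroke_times_s[-1] - stroke_times_s[0]
    # (n - 1) intervals over the elapsed window, scaled to a minute.
    return (len(stroke_times_s) - 1) / elapsed * 60.0

print(strokes_per_minute([0.0, 1.5, 3.0, 4.5, 6.0]))  # 40.0
```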

“It’s basically a talking paddle. It’s like having an onboard coach with you on the water,” Yellin said.

The Smart Paddle is expected to go on sale in March at a price ranging from $299 to $559, depending on the specific model. Yellin said that essentially any of the Quickblade paddles can be turned into a smart paddle for an additional $30. That means no matter what your specific needs are, you can take advantage of the Motionize technology to improve your SUP skills.

8 Feb

Uber Flat: Ridesharing service offers rides in NYC for cheap flat fares


Why it matters to you

With Uber’s new flat-fare offer, rides in New York City become much more cost-conscious.

The next subscription service you buy may help you get around New York City a bit more easily. As part of its UberPlus program, Uber has slowly begun rolling out a new subscription package — Uber Flat — where anyone in New York City can get a ride for a flat fare of under $6.

The new flat-fare packages set UberPool and UberX rides at $3 and $6, respectively. You can choose from three packages that offer 10-, 20-, and 40-ride limits. Purchasing a package requires an upfront, one-time subscription payment ranging from $5 to $20. Each account can only buy one package.

More: Uber’s newest hire aims to help the company make flying vehicles a reality

The flat fare will be applied once you request an UberPool or UberX ride and will be reflected in your purchase. Packages are valid for 30 days from the date of purchase and cover the five boroughs.

Before you start zipping across the Bruckner Expressway into Manhattan every day, note that the flat fares have limits beyond just ride count. Subscribers are charged the flat fare for rides up to $30 in value; anything exceeding that $30 limit is charged in addition to the flat fare. An UberX ride from deep in the North Bronx to Williamsburg, Brooklyn, on a weekday afternoon would cost under $30 with the new flat fare, as opposed to more than $50.
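Our reading of that rule, sketched as code: the $30 cap and the $3/$6 flat fees come from the article, while the function and its names are ours.

```python
# Flat fees and cap as described in the article.
FLAT_FEE = {"UberPool": 3.0, "UberX": 6.0}
CAP = 30.0

def rider_cost(service, metered_fare):
    """Flat fee, plus any portion of the metered fare above the cap."""
    overage = max(0.0, metered_fare - CAP)
    return FLAT_FEE[service] + overage

print(rider_cost("UberX", 27.50))  # 6.0   (fare under the cap)
print(rider_cost("UberX", 50.00))  # 26.0  (6 flat + 20 overage)
```

That second example is the Bronx-to-Williamsburg trip above: a $50 metered fare comes out to $26 under the plan, which is indeed under $30.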

Uber first brought flat-fare pricing to NYC in October when New Yorkers could get unlimited rides in Manhattan for $100. This current pricing plan is more in line with similar tests Uber did in other cities last year. San Francisco, Miami, San Diego, Boston, Seattle, and Washington, D.C., each had flat-fare programs last September. By comparison, the NYC offer is the most affordable as it offers the cheapest upfront price.

You can purchase one of the flat fare packages here.

8 Feb

The Quartz will be ZTE’s first Android Wear watch, and it was just leaked online


Why it matters to you

If you’re looking for an Android Wear watch with cellular connectivity, the ZTE Quartz could be the device for you.

It looks like ZTE is set to enter the Android Wear smartwatch fray. While some manufacturers are prepping their third-generation smartwatches, ZTE is reportedly about to launch its first watch running Android Wear — and the device may show up within a few months.

According to a report from VentureBeat written by well-known leaker Evan Blass, the new ZTE watch will be called the ZTE Quartz, and it was recently spotted receiving its Bluetooth certification with the model number ZW10. The news isn’t all that surprising — ZTE USA CEO Lixin Cheng told us last month that the company would launch a smartwatch this year.

More: ZTE Axon 7 mini review

So what will the new watch look like? VentureBeat has also managed to get its hands on ZTE Quartz promotional materials, showing a device that could be mistaken for a classy-looking non-smartwatch.

The device could also offer a little more functionality than most other smartwatches — according to Bluetooth SIG, the device will offer 3G cellular connectivity to complement its Wi-Fi connectivity, meaning that it won’t necessarily need to be connected to your smartphone all the time to get things like notifications.

We don’t really know all that much else about the new watch, but it’s likely that the device will feature Android Wear 2.0, the updated version of Android Wear that is set to launch February 8. While we don’t know for sure that the device will launch at Mobile World Congress at the end of the month, it certainly wouldn’t be surprising.

This isn’t ZTE’s first attempt at a smartwatch — just its first attempt at an Android Wear smartwatch. At MWC 2016, the company showed off the Venus 1 and Venus 2 smartwatches, but those devices were mainly fitness trackers and didn’t really offer a fully-fledged operating system.

As VentureBeat notes, ZTE actually already has a device called the Quartz — which is a 5.5-inch smartphone with Android 4.3 Jelly Bean.

8 Feb

As Apple pursues perfection, new campus frustrates builders and officials


Why it matters to you

Apple’s extravagant monument of a main campus is further proof of the company’s continued growth — and its eye for flawless design.

Apple has long been lauded for its attention to detail and emphasis on build quality, and it seems that ambition extends to the construction of its facilities. According to Reuters, the company has made numerous demands while preparing Apple Campus 2, more colloquially referred to as the “spaceship campus,” to ensure a level of craftsmanship more befitting a phone or watch than a building meant to house 14,000 workers.

For starters, Apple’s new Cupertino, California, headquarters supposedly boasts the largest panel of curved glass ever made — though that was probably evident from the exterior shots. Inside, Apple has stipulated a lengthy list of requirements.

More: Apple is looking to double the size of its famous flagship retail store in NYC

The ceiling tiles, made of polished concrete, were each examined twice by company representatives to ensure perfection. The door handles, free of imperfections down to the nanometer according to the construction team, were nonetheless rejected by Apple in favor of a redesign. All pipes and beams in the ceiling had to be concealed from view, so they would not appear in reflections off the glass.

Even the signage was a source of frustration. The Santa Clara County Fire Department reportedly clashed with the company over designs that emphasized minimalism over function in the event of an emergency. Retired Deputy Fire Chief Dirk Mattern told Reuters the issue came up in 15 different meetings, and that he had “never spent so much time on signage.”

These considerations might be commonplace for a handheld device, but for a facility of Campus 2’s scale, they are unheard of. In most buildings, materials are allowed to deviate from specified measurements by a tolerance of about 1/8 of an inch. Channeling the iPhone’s seamless panel gaps, Apple mandated far tighter tolerances, even for inconspicuous surfaces.

“You would never design to that level of tolerance on a building,” one architect said. “The doors would jam.”

Speaking of which, Apple was reportedly adamant that the doorways throughout the entire building be perfectly flat, with no thickness around the threshold. If an engineer had to adjust his form to move differently through a doorway, he might become distracted from his work, the company reasoned. One former construction manager said he spent months pushing back on the request, “because that’s time, money, and stuff that’s never been done before.”

The spaceship campus has been said to evoke many design cues from famous Apple products. Lead architect German de la Torre likened the building’s curves to the rounded rectangle motif the company employs in much of its hardware and software. Other employees compared the elevator controls to the iPhone’s home button.

It may come as little surprise that Skanska USA and DPR Construction, the contractors on the project when it broke ground in 2011, eventually bailed. Experts estimate the facility has cost Apple $5 billion, and those close to the project said difficulties with the approval process have nudged completion to the spring.

8 Feb

MacBook Pro line could get Kaby Lake refresh in 2017


Why it matters to you

If you’ve been holding off on buying a MacBook Pro because of last-gen processors, your wait should end sometime in 2017.

Some recent notebook releases, such as Microsoft’s Surface Book with Performance Base and Apple’s 2016 MacBook Pro refresh, skipped Intel’s seventh-generation, or Kaby Lake, processors. That was disappointing to some users who would have benefited from the efficiency and performance enhancements provided by the new CPUs.

Apple is expected to perform a minimal refresh of the MacBook Pro line sometime in 2017, upgrading just to those seventh-generation Intel parts, and people are tearing through MacOS Sierra data looking for any hints of what might be coming. Some industrious folks did just that and found indications of new MacBook Pro machines, as MacRumors reports.

As always, this information is highly suspect and open to interpretation. For what it’s worth, though, Pike’s Universum, a blog dedicated to Apple products, located some information in the MacOS Sierra 10.12.4 beta “plist” data that suggests new MacBook Pros could be released after that version of MacOS Sierra becomes official.

According to the data, the new machines would utilize the latest equivalent seventh-generation parts to replace the older CPUs released with the 2016 MacBook Pro refresh. The updates could look something like this:

13-inch MacBook Pro without Touch Bar

  • Core i5-6360U -> Core i5-7260U
  • Core i7-6660U -> Core i7-7660U

13-inch MacBook Pro with Touch Bar

  • Core i5-6267U -> Core i5-7267U
  • Core i5-6287U -> Core i5-7287U
  • Core i7-6567U -> Core i7-7567U

15-inch MacBook Pro with Touch Bar

  • Core i7-6700HQ -> Core i7-7700HQ
  • Core i7-6820HQ -> Core i7-7820HQ
  • Core i7-6920HQ -> Core i7-7920HQ

According to a previous report, these new MacBook Pros could enter production in July 2017, with a 15-inch MacBook Pro version offering 32GB of RAM. The current machine’s 16GB maximum was another sore spot for some power users, so that would be welcome news as well. However, all of this is just speculation, and we won’t know for certain what Apple will release until the company’s WWDC 2017 event in June.

8 Feb

A new study uses cancer-seeking fluorescents to light the way for surgeons


Why it matters to you

Fluorescent nanoprobes are another way to help doctors seek and destroy cancerous tissue.

Scientists have developed a new method for helping surgeons recognize and remove even the smallest cancerous tumors — by quite literally lighting the way with tiny fluorescent particles.

As described in a new study published in the American Chemical Society journal ACS Omega, the technique involves loading microscopic expansile nanoprobes with the fluorescent particles. When cancerous cells are detected, these nanoprobes release their contents, resulting in the problematic areas lighting up when viewed at a certain wavelength.

“The probe’s relatively rapid visualization of cancerous tissue renders it potentially compatible with operating room workflow for assessing tumor margin status via frozen section analysis,” Robert Strongin, professor of organic chemistry at Portland State University, told Digital Trends, describing the method as “relatively user-friendly.”

More: Graphene’s latest miracle? The ability to detect cancer cells

In the study, the fluorescent probes were used to detect pancreatic cancer, one of the nastier cancers around, for which survival is frequently less than a year. Some experts suggest that pancreatic cancer is set to become the second-leading cause of cancer-related death over the next 15 years.

Pancreatic cancer is particularly relevant to this work because sufferers often require surgery to remove the diseased tissue, since neither chemotherapy nor radiotherapy proves particularly effective against it. Unfortunately, surgery to remove cancerous tissue can be inaccurate, with the result being that only a section of the cancerous material is successfully removed from patients.

“The current probe is quite selective for pancreatic ductal adenocarcinoma (PDAC),” Strongin continued. “However, the probe design derives from traditional principles of drug design. The library of probes we have synthesized is thus affording us knowledge of structure-activity relationships via biodistribution data. This is enhancing our understanding of what is required for rationally addressing other disease targets using simple organic fluorophores.”

So far, the fluorescent probes have only been tested on mice, although human clinical trials could take place in the near future.

“We are initially planning to have the dyes tested in real time on frozen tissue sections analyzed during tumor resection in the operating room,” Strongin said. “A longer term goal is to aid in the understanding of the high recurrence rate of PDAC after surgery.”