Samsung’s auto-reply app fights distracted driving
Let’s be honest: too many of us are using our phones while driving. It’s a problem and it’s dangerous, but we do it anyway. Samsung knows this and has come up with a new app cleverly named In-Traffic Reply to help. The app, currently in beta, aims to keep you safe while allowing you to answer messages you get while you’re behind the wheel.
In-Traffic Reply will use your smartphone’s sensors to detect when you’re in a car (or on a bike). It will then send an automatic response to calls and texts. You can choose the message sent out, too, from three options: the default “I’m driving, so I cannot answer at the moment,” a “fun, animated response,” or a custom reply of your own making.
If you’re looking to block texts in the car now, you can check out T-Mobile’s subscription-based DriveSmart Plus service and AT&T’s free DriveMode. Automaker Mitsubishi and wireless provider Sprint (way back in 2011) have also explored solutions to the problem. Apple has provided custom replies for a while, but they still require you to look at your phone.
The final version of Samsung’s anti-distracted driving app should arrive mid-May in the Google Play Store. Here’s hoping it prevents more car crashes due to texting while driving.
Via: The Verge
Source: Samsung
Alleged iPhone 8 Schematic Depicts Dual-Lens Vertical Rear Camera, Hints at Wireless Charging
Another alleged iPhone 8 schematic is making the rounds today, shared on Twitter by several “leakers” who often share alleged leaked device images sourced from Weibo and unnamed tipsters. KK Leaks, OnLeaks, and Benjamin Geskin have all tweeted the image, which comes from an unknown source.
It is not clear if the schematic is legitimate, and we’re at the point in the rumor cycle where it’s difficult to separate what’s real from what’s fake, so it’s best to view all current leaks with some skepticism.
This is a tipped leak what means I can’t confirm if legit or not but there you have it… #iPhone8 pic.twitter.com/6OgASNUDNb
— OnLeaks (@OnLeaks) April 26, 2017
The schematic appears to depict the interior of the OLED iPhone 8, and it matches many previously-leaked design schematics and rumors. The device pictured features a vertical dual-lens rear camera with an LED flash in the middle, and it includes a large circular area, perhaps for some kind of wireless charging functionality. No rear Touch ID button is included in the schematic.
Over the course of the last few weeks, we’ve seen several alleged design schematics and renderings that are said to represent the iPhone 8, but because Apple is said to be testing multiple prototypes, we appear to be seeing two distinct devices, and it’s not clear which one represents Apple’s final 2017 iPhone.
One device, which seems similar to the one in the schematic above, features an edge-to-edge display with a small 4mm bezel, what appears to be a glass body (perhaps with a stainless steel frame), and a Touch ID Home button that appears to be embedded in the display. The other features an edge-to-edge display with slightly wider top and bottom bezels, an aluminum body, and a Touch ID Home button located on the rear of the device.
Both are said to be representative of different designs Apple has pursued, and Apple is reportedly experimenting with a rear Touch ID button due to difficulties implementing it under the display. It’s not yet known if a Touch ID button under the display will pan out.
While we’re seeing different designs at the current time, there are several rumors that are consistent. The OLED iPhone is said to be similar in size to the 4.7-inch iPhone but with a display closer in size to that of the 5.5-inch iPhone, and almost all current rumors point towards a vertical dual-lens camera for improved images and perhaps some kind of augmented reality or 3D functionality.
Not much has been said about wireless charging recently, but Apple is supposedly implementing some kind of inductive charging solution, and we can count on an improved A11 processor in the device.
Apple plans to sell the OLED iPhone alongside two standard iPhones with LCD displays, which are said to be similar in size to the existing iPhone 7 and iPhone 7 Plus. With many of the rumors focusing on the flagship OLED iPhone, not much is known about the other two iPhones, and their specific designs also remain unclear.
Recent rumors are suggesting the OLED iPhone may be severely constrained when it launches in September, so it could be difficult to get for several months. That’s a common rumor that we hear every year ahead of the debut of a new iPhone, but the rumors, coming from trusted sources, are especially emphatic and dire this year, suggesting there’s some truth there.
CareZone will help keep your medical info on track (review)

If you have multiple medications and/or supplements to deal with on a regular basis, you know that managing doses, refills, and prescription cycles can be a real hassle. And sometimes you can get into a rut where you consistently miss these, with potentially real effects on your body.
Enter CareZone, a free app downloadable on the Google Play Store (link here). CareZone is a utility app designed to help you organize your medical life; everything from medication dosing, to appointments, to emergency contacts, and important documents.
The appeal of CareZone is the all-inclusiveness of the app. This isn’t just a medication reminder tool. CareZone has the ability to tie together all aspects of your medical life. Let’s take a quick tour to see everything it can do!
Setup
Now when I say “has the ability,” understand that there is some front-loaded effort required of you, the user, to get the app to the point where it can be of the most use.
The first step is simple enough: download the free app from the Play Store. Then the small amount of work begins. CareZone will ask you to fill out several pages of information; as stated earlier, the more info you put in the more useful the app becomes.
You are given the opportunity to fill out several areas of your medical life, including current medications and supplements (you can also scan photos of the bottles into the app), physician’s contact info, emergency contact info, and health documents (insurance card, driver’s license). You can also add your appointments into the built-in calendar, vitals metrics (such as blood glucose and blood pressure), and to-do lists.
Once you have all this info input, you can then share your info with family members and close friends, so they can help as much as possible in the event you aren’t able to access it for any reason.
The medication scanning tool is particularly useful: it asks you to take four consecutive photos of each medication vessel (one per side, presumably). This ensures you have all the information a doctor or pharmacist might need. I can see someone going to the pharmacy, only to realize they didn’t have all the information needed to get their question answered. This could help avoid those embarrassing and time-consuming scenarios.
App Use
Using the app itself is very simple. Actually, one of the biggest draws I see is that in addition to tracking your own medical info, you can also input and track the medical information of another person, presumably a family member or friend who may have difficulty maintaining this themselves. This offers several advantages:
All info is in one place; no more post-it notes strewn everywhere as new info is gleaned.
Their schedule can be in one place and maintained, but without cluttering up your personal or base schedule already on your phone/tablet.
The information is completely shareable with your loved one, so they don’t have to feel like they’ve lost control.
There is one area where the app can be a bit intrusive, and that is ads for its medication ordering and delivery service. While it may be a darn good service, if you’re not interested you’ll trip over these inline ads (not pop-ups or splash pages) more often than you’d like.
Conclusion
That said, CareZone is a clean, well-designed application that allows individuals and/or their loved ones to curate an essential medical file for use when dealing with care professionals or emergency staff. It takes a nominal amount of upkeep, but the benefits make CareZone a recommended app for those who can benefit from its mission.
For more information, visit the CareZone website.
Microsoft releases second April update for Surface Pro 4 and Surface Book
Why it matters to you
If your Surface Pro 4 or Surface Book is running Windows 10 Creators Update, there’s a driver update waiting for you.
Microsoft’s next event is scheduled for May 2, 2017, where the company is expected to cover Windows 10 Cloud Edition and focus on some kind of solution for the education market. It’s speculated that the company might introduce the Surface Pro 5, the successor to the highly successful Surface Pro 4 Windows 10 2-in-1.
In the meantime, the company continues to pay attention to its current Surface Pro 4 and Surface Book machines. Microsoft released the second set of driver updates this month for each machine, and the updates are aimed at further improving the stability and performance of devices upgraded to Windows 10 Creators Update, as Neowin reports.
The Surface Pro 4’s latest update focuses primarily on improving audio performance when playing video:
- Intel Corporation driver update for Intel Smart Sound Technology (Intel SST) Audio Controller (09.21.00.2102): Improves video playback on installed apps while online.
- Intel Corporation driver update for Intel Smart Sound Technology (Intel SST) OED (09.21.00.2102): Improves video playback on installed apps while offline.
The Surface Book drivers also improve audio performance while playing video, in addition to helping Cortana understand you when you are speaking to her:
- Realtek Semiconductor Corp. update for Realtek High Definition Audio (SST) (6.0.1.7895): Improves Cortana speech recognition.
- Intel Corporation driver update for Intel Smart Sound Technology (Intel SST) Audio Controller (09.21.00.2102): Improves video playback on installed apps while online.
- Intel Corporation driver update for Intel Smart Sound Technology (Intel SST) OED (09.21.00.2102): Improves video playback on installed apps while offline.
Neowin makes note of the fact that 24.1 percent of Surface Pro 4 users and 29.3 percent of Surface Book users are already running Windows 10 Creators Update. If you are an owner of either of these machines and you want to install the latest update, then you will need to make sure you’re running Creators Update as well.
AT&T lost nearly 200,000 subscribers in the first quarter of the year
Why it matters to you
When service providers compete, you win. And apparently, AT&T isn’t competing quite hard enough to retain its customers’ patronage.
It’s not just Verizon that is losing subscribers — AT&T is in the same boat. On Tuesday, the mobile service provider announced that it missed quarterly revenue estimates as a result of lower hardware sales, customers who kept their existing phones for longer, and rival companies who offered various deals on unlimited data plans. And while AT&T is still the number two American wireless carrier, it looks as though its position may be starting to feel a bit, well, precarious.
AT&T lost a total of 191,000 postpaid subscribers from January to March of 2017. Last Thursday, Verizon also reported a quarterly subscriber loss — its first ever, in fact.
Much of the floundering appears to be linked to customers’ demand for data. Both T-Mobile and Sprint have unveiled new unlimited deals, and while AT&T did reduce the price of its own plan, it apparently wasn’t enough to keep all its customers around.
“Obviously, this has made an already competitive market even more so, and our response to the unlimited data plans was probably a little slow,” Chief Executive Randall Stephenson said on the company’s post-earnings conference call. He also noted that AT&T lost market share in the first quarter of the year.
In total, AT&T’s operating revenue dropped three percent, mostly attributed to low phone sales — the lowest in the company’s history. And apparently, these less-than-ideal numbers may be a trend. The Dallas-based carrier said Tuesday that it would stop providing a full-year revenue forecast because wireless handset sales are so unpredictable.
Of course, all is not lost for the company. AT&T is currently engaged in an $85.4 billion acquisition deal with Time Warner that would allow it to control channels like HBO and CNN. The deal is expected to be signed, sealed, and delivered by the end of 2017. And AT&T also has grand plans to roll out 5G networks, hoping to win back customers with blazing fast speeds.
SFX professionals design 3D-printed head for ‘Nintendo neurosurgery’
Why it matters to you
This 3D model could help train neurosurgeons without the need for expensive, nonreusable cadavers.
A neurosurgeon, a special effects guy, and a computer engineer walk into a lab — out walks a lifelike 3D model of a 14-year-old-boy, designed to teach medical students how to perform minimally invasive brain surgery.
That’s our summary of recent work by a team of researchers from Johns Hopkins University School of Medicine. The model may seem creepy, but it’s already offering trainees a realistic test subject.
“The model is highly accurate,” Alan Cohen, a professor of neurosurgery at Johns Hopkins who led the project, told Digital Trends. “It is even more accurate than cadavers, because the model is based on MR (magnetic resonance) imaging from a real patient with the disorder being simulated, whereas cadavers do not usually have the pathologic disorders being taught.”

AANS
If available, minimally invasive neurosurgery is the procedure of choice for many surgeons because it doesn’t require a whole lot of poking and prodding. The operation is performed through tiny holes using tiny instruments, with minimal brain trauma. However, tiny procedures don’t leave a whole lot of room for error and require some pretty sophisticated hand-eye coordination. As such, Cohen has dubbed the technique “Nintendo neurosurgery.”
“The procedure is done with miniaturized instruments inserted through working channels in an endoscope, a lighted tube similar to what general surgeons use to take out the appendix or gallbladder in laparoscopic surgery,” he said. “The procedure is done with the surgical team guiding the instruments while watching a TV monitor.”
Though cadavers are the more conventional way to train medical students, they can be impractical. Each cadaver can be used only once, and they’re relatively expensive and hard to come by.
“Existing training methods are flawed,” Cohen said. “We sought to use 3D printing and a collaboration of neurosurgeons, neuroradiologists, simulation engineers, and a special effects team from Hollywood to create a novel training model for minimally invasive neurosurgery.”
The product of that collaboration — especially with special effects group FracturedFX — is a full-scale model that not only looks like a patient, but also has flowing cerebrospinal fluid and a realistically pulsating brain. In a study, neurosurgery fellows and medical residents performed surgery on the models and rated them highly for realism. Further tests will be needed to determine whether or not mock operations on the models improve operating room performance.
A paper detailing the study was published this week in the Journal of Neurosurgery.
PhotoScan update lets you ditch glare when digitizing old photos sans scanner
Why it matters to you
Snapping a smartphone photo of a photo is a quick way to digitize old photos without a scanner, and the method’s biggest issue, glare, is finally getting a fix.
Snapping a photo of a photo is a popular way to skip the scanner to digitize old photos, except for one thing: Glare. Last week, Google released an update to the PhotoScan app to fix just that.
The app uses several pictures of the same image and combines them to remove the glare automatically. By holding the smartphone at slightly different angles, the glare moves around the photos, making it possible to eventually capture all the details in the photo and remove the glare.
“The challenge is that the images need to be aligned very accurately in order to combine them properly, and this processing needs to run very quickly on the phone to provide a near instant experience,” Google researchers wrote.
To get the images to align accurately with minimal processing time, the software engineers took the advanced “obstruction-free photography” technique and altered it to work with the smaller processing power of a mobile device. The first picture works as the reference frame, the developers said, or the angle that you want the final image to appear at. Then, the program asks the user to take four more photos. The program then identifies similar points between the images to map or align the subsequent images with the first frame. Pixel mapping helps correct any irregularities from snapping the photo at slightly different angles.
To lighten up the program for use on a smartphone instead of a desktop computer, the photo is divided into a grid and each grid point — instead of each pixel — is used to map out the differences between photos, giving the smartphone a much smaller task to handle in order to churn out quick results.
Once the photos are aligned, the system takes the darkest color value from all the overlapping images, which eliminates the bright glare.
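The darkest-value merge described above can be sketched with NumPy. This is an illustrative approximation, not Google’s actual pipeline: the function name and the simple per-pixel minimum are assumptions, and real PhotoScan also performs the sub-pixel alignment step first.

```python
import numpy as np

def merge_darkest(aligned_frames):
    """Given already-aligned photos (H x W x 3 uint8 arrays), keep the
    darkest value at each pixel. Because glare moves between frames as
    the phone angle changes, every pixel is glare-free in at least one
    frame, so the minimum suppresses the bright spots."""
    stack = np.stack(aligned_frames, axis=0)  # shape (N, H, W, 3)
    return stack.min(axis=0)

# Toy example: a glare spot (255) appears at a different pixel per frame.
frame_a = np.full((2, 2, 3), 100, dtype=np.uint8)
frame_b = frame_a.copy()
frame_a[0, 0] = 255  # glare in frame A
frame_b[1, 1] = 255  # glare in frame B
result = merge_darkest([frame_a, frame_b])
# Every pixel recovers the glare-free value 100.
```

A per-pixel minimum can slightly darken the final image, which is presumably one reason the real feature combines it with careful alignment and further processing.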
Glare from light sources often creates bright spots on glossy photos even when they’re outside a frame, though Google says the feature can also eliminate the glare that comes from leaving a photo inside the glossy sleeve of a photo album.
The new glare-elimination technique is part of the latest version of PhotoScan, available as a free download on both Android and iOS platforms.
Machine learning helps researchers predict cardiovascular disease
Why it matters to you
Machine learning’s capacity to trawl through masses of data could potentially have a major impact on the future of medical care, specifically in terms of prevention.
Researchers at Nottingham University have demonstrated that machine learning algorithms could be better at predicting cardiovascular risk than the medical models that are currently in place. Four algorithms were put through their paces during the study: random forest, logistic regression, gradient boosting, and neural networks.
A team of primary care researchers and computer scientists compared these algorithms with the standard guidelines for cardiovascular disease risk assessment offered by the American College of Cardiology. A data set comprising 378,256 patients from almost 700 medical practices in the United Kingdom was used to facilitate the investigation.
All four algorithms were found to improve overall prediction accuracy compared to established risk prediction methodology based on a metric known as the ‘Area Under the Receiver Operating Characteristic curve,’ according to a report from Phys.org. The level of improvement varied from 1.7 to 3.6 percent.
Neural networks were found to be the highest-achieving algorithm in the study, correctly predicting 7.6 percent more patients who would eventually develop cardiovascular disease.
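The kind of comparison the study describes can be sketched with scikit-learn. This is a minimal illustration only: the synthetic dataset, features, and hyperparameters below are placeholders, not the study’s actual data or models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a patient dataset: rows are patients, columns
# are risk factors, and y marks who developed cardiovascular disease
# (a minority class, as in real screening data).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The four algorithm families named in the study.
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    # Area under the ROC curve: the metric the study used to compare
    # each algorithm against the established risk model.
    print(f"{name}: AUC = {roc_auc_score(y_test, scores):.3f}")
```

Ranking models by held-out AUC in this way mirrors the study’s methodology, where each algorithm’s curve was compared against the established risk-prediction baseline.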
“Cardiovascular disease is the leading cause of illness and death worldwide,” said Dr. Stephen Weng, of Nottingham University’s National Institute for Health Research School for Primary Care Research. “Our study shows that artificial intelligence could significantly help in the fight against it by improving the number of patients accurately identified as being at high risk and allowing for early intervention by doctors to prevent serious events like cardiac arrest and stroke.”
Based on the results of the study, the team is confident that artificial intelligence and machine learning techniques have a key role to play in fine-tuning risk management strategies for individual patients.
The researchers say that there’s more to be learned about the potential predictive accuracy of machine learning techniques, with other large clinical datasets, different population groups, and different diseases all providing avenues to expand upon the study.
Google Assistant can now teach you how to make dinner, other daily tasks
Why it matters to you
Wondering what to cook for dinner? Google Assistant can find a recipe and walk you through the steps.
The Google Assistant, the smart digital helper, can already juggle a bunch of hands-free tasks for you. It will supply movie showtimes, place a restaurant reservation, summon a car, and even place orders from nearby stores on command. And it hasn’t stopped improving. This week, it is gaining integrations that will boost its cooking, booking, nature, and car knowledge.
On Wednesday, Google announced recipe skills for the Google Assistant and Google Home, the artificial intelligence-powered smart home speaker. Before, both could answer basic questions about substitutions, measurements, and conversions, but now they go about it a little more conversationally. You can ask about recipes by yelling commands like, “OK Google, let’s make a croissant,” and if you’re on a smartphone or tablet, you can save a recipe for later by tapping “Send to Google Home.” When you shout “OK Google, start cooking” at your Google Home, it will start walking you through the steps.
If you accidentally miss something, it’s no big deal. You can ask the Google Assistant or Google Home to repeat it by saying, “OK Google, repeat.” If it is a step earlier in the recipe, you can say something like, “OK Google, what is step two?”
Google said it is sourcing more than 5 million recipes from Bon Appetit, The New York Times, and more, and that the new recipe skill will roll out to the Google Assistant and Google Home in the coming weeks.
Recipes are not the only new thing.
Earlier this week, Google Home and the Google Assistant gained 25 new actions — or third-party voice apps — including one that lets you listen to hundreds of bird songs, a voice-activated virtual concierge, and vehicle controls.
Mercedes and Hyundai recently launched actions that tap into their respective car platforms. The Mercedes Me action can unlock the car door, take navigation directions, and start the car engine.
For naturalists and nature lovers, there are the Bird Song skill by Thomptronics and the Earth Day action. Bird Song can play more than 200 bird sounds and test your knowledge with a song quiz. And the Earth Day action can supply resources on issues like deforestation, climate change, and biodiversity.
The Atlanta rail action provides local bus and train arrival times. The Virtual Concierge, a product of The Lodge in Palmer Lake, Washington, tells vacation rental and Airbnb guests about house rules and answers questions about things like Wi-Fi passwords, nearby restaurants, and activities. And the Farmer’s Almanac tells you things like the number of days until summer and the best time to destroy pests and weeds.
Google said that more than 175 actions have been added to Google Home since the launch of Actions on the Google Assistant platform last December. The new skills and actions follow the debut of multi-account support. Last week, the Google Assistant on Home devices gained support for multiple accounts and the ability to differentiate between up to six voices.