Revamped ‘DT Daily’ goes bicoastal with the latest tech talk
Tuesday marked the inaugural stream of our revamped DT Daily, broadcast from our brand-spanking-new studios in New York and Portland, Oregon. In Portland, Greg Nibler was joined by Digital Trends emerging tech editor Drew Prindle and brand manager BJ Frogozo to discuss notable headlines, such as the future of food delivery, and the launch of the first high-speed transit tunnel by the Elon Musk-owned Boring Company. They also responded to some viewer comments on the fly, and welcomed guests Ben Lee and Quinn Slocum to discuss online branding.
Ben Lee, the founder of digital agency Rootstrap, suggests there may be unforeseen challenges to personal branding in 2018. His company works with founders and other influencers, providing services that help them create standout content in the digital age. On the flip side of things, we also talked to influencer Quinn Slocum, who got his start on Instagram while still in middle school and used his editing skills to grow several popular Instagram pages, including Best Celebrations, which has now amassed more than 3.4 million followers.
In New York, our editor-in-chief, Jeremy Kaplan, and mobile editor, Julian Chokkattu, discussed some of the latest phones to hit the market over the last few months, as well as a few secrets surrounding the Red Hydrogen One, a holographic smartphone to which you can attach a DSLR lens.
From the iPhone XR, Apple’s “budget” iPhone, to the Galaxy Note 9 and Pixel 3, among others, we’ve had several noteworthy devices enter the scene. One of the first phones to launch in October, the LG V40 ThinQ, is noteworthy for its five-camera arrangement, sporting three cameras on the back and two on the front, including a wide-angle selfie camera. While camera quality is definitely one of the trends pushing to the forefront, others like the rise of gaming phones are leaving us a little stumped.
DT Daily airs Monday through Friday at 9 a.m. PT, with highlights available on demand after the stream ends. For more information, check out the DT Live homepage, and be sure to watch live for the chance to win a $100 Amazon gift card, among other prizes.
Editors’ Recommendations
- What to expect from Google’s October 9 event in New York City
- Superstar influencer Quinn Slocum talks building brands and living well
- No gimbal required: GoPro’s HyperSmooth stabilizer makes Hero7 a must-have upgrade
- Instagram feature that lets you reshare others’ posts may be on its way
- The best 360 cameras you can buy
Xiaomi Mi Mix 3 unveiled with 10GB of RAM and slideout front cameras
Xiaomi is raising the bar with the Mi Mix 3 in more ways than one.

Xiaomi was one of the first major companies to adopt full-view displays, starting with the Mi Mix. The Chinese manufacturer was able to trim the bezels considerably on three sides, and moved the front camera module to the bottom bar. Last year’s Mi Mix 2 followed the same design ethos, but the company is switching things up with its latest flagship.
The Mi Mix 3 gets rid of the camera module on the bottom bezel, leading to a much more immersive display with a 93.4% screen-to-body ratio. Xiaomi was able to reduce the chin by 4.6mm, which resulted in an increase of 3.82% in the screen-to-body ratio from the Mix 2S. Shrinking the bezels has allowed Xiaomi to include a larger 6.39-inch AMOLED panel without a noticeable increase in overall dimensions.
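Screen-to-body ratio is simple geometry: display area divided by the area of the phone's front face. As a rough sanity check, here's a minimal Python sketch using the Mix 3's 6.39-inch, 19.5:9 panel and its roughly 74.7 x 157.9 mm body (the body figures are approximations on our part). Note that a naive rectangular calculation lands well below Xiaomi's 93.4% figure; manufacturers' marketing ratios use their own measurement conventions, such as how rounded corners are counted.

```python
import math

def panel_dims_mm(diagonal_in, ratio_w, ratio_h):
    """Width and height of a rectangular panel from its diagonal and aspect ratio."""
    diag_mm = diagonal_in * 25.4
    scale = diag_mm / math.hypot(ratio_w, ratio_h)
    return ratio_w * scale, ratio_h * scale

w, h = panel_dims_mm(6.39, 9, 19.5)   # the Mix 3's 6.39-inch 19.5:9 panel
body_w, body_h = 74.7, 157.9          # approximate body dimensions, mm
ratio = (w * h) / (body_w * body_h)   # naive rectangular screen-to-body ratio
print(f"panel ~{w:.1f} x {h:.1f} mm, screen-to-body ~{ratio:.1%}")
```

The gap between this back-of-the-envelope number and the quoted 93.4% is a useful reminder to treat screen-to-body claims as comparative, not absolute.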

The AMOLED 19.5:9 display has a resolution of 2340 x 1080, but it’s what’s hidden underneath the panel that’s more interesting. Xiaomi is now offering a dual camera setup up front — with a primary 24MP sensor joined by a 2MP module — and both are hidden behind the display. Mechanized sliders aren’t new, as we’ve seen on the likes of the OPPO Find X and the Vivo NEX, but with the Mix 3, the entire display slides down to reveal the cameras. Unlike the Find X, there’s a traditional fingerprint sensor at the back.

Xiaomi is saying that the magnetic sliding motor has a life expectancy of 300,000 cycles, but it’ll be interesting to see if that claim holds up in real-world usage, particularly after a few months’ worth of dust has accumulated in the crevices of the device. Xiaomi is touting a few use cases for the slider, such as answering calls, taking selfies, launching an app, accessing the shortcuts drawer, and more. Like the NEX, you’ll be able to choose from several preset sounds for the slider, with five sound effects available.
The front 24MP camera features the same slate of AI features that Xiaomi is offering in the Mix 2S, and the secondary 2MP lens is designed to facilitate depth-of-field effects. At the back, the primary 12MP camera with 1.4-micron pixels is joined by a 12MP telephoto lens, with OIS, portrait mode, and AI scene detection on offer. The camera is also able to shoot 960fps slow-motion video, and with the Mix 3 earning a score of 103 on DxOMark, it is Xiaomi’s most capable shooter thus far.

The Mix 3 retains the ceramic back of its predecessor, and will be available in three color options: black, jade green, and blue. The phone also offers wireless charging, and Xiaomi is throwing in a 10W wireless charger in the box. The Mix 3 also features Qualcomm’s Snapdragon 845 chipset, NFC, dual-frequency GPS, 4×4 MIMO, global LTE bands, and there’s a variant that comes with 10GB of RAM. Xiaomi is also set to launch a model in European markets next year that will offer a 5G NR modem.
The Mix 3 is slated to go on sale in China shortly for ¥3,299 ($475) for the variant with 6GB of RAM and 128GB of storage, ¥3,599 ($520) for 8GB of RAM and 128GB of storage, and ¥3,999 ($575) for the model with 8GB of RAM and 256GB of storage. The Palace Museum edition with 10GB of RAM and 256GB of storage will retail for ¥4,999 ($720). The phone will make its debut in global markets, starting with India, and we should know more on this front in the coming months.
What do you guys like the most about the Mi Mix 3?
Hackers target major airline in data breach affecting nearly 10M customers
Cathay Pacific has revealed details of a massive hack that’s seen the personal data of nearly 10 million of its customers stolen.
The major international airline, which operates out of Hong Kong and flies to seven U.S. cities, said on Wednesday it had discovered unauthorized access “to some of its information systems containing passenger data of up to 9.4 million people.”
The security breach is notable not only for the large number of people affected, but also for the broad range of personal data that was accessed by the hackers, specifically: passenger name, nationality, date of birth, phone number, email, address, passport number, identity card number, frequent flyer program membership number, customer service remarks, and historical travel information.
In addition, 403 expired credit card numbers were accessed, as well as 27 credit card numbers with no CVV (a card’s security code).
The airline, which is now contacting affected customers, added that the hacked I.T. systems “are totally separate from its flight operations systems, and there is no impact on flight safety.”
At this stage, there’s no evidence that the stolen data has been misused in any way, but anyone keen to follow developments or contact the company can visit this Cathay Pacific webpage dedicated to the incident.
We have discovered unauthorised access to some of our passenger data. For Data Security Event support, please DM @cxinfosec for assistance.
— Cathay Pacific (@cathaypacific) October 24, 2018
Cathay Pacific CEO Rupert Hogg said the company is “very sorry for any concern this data security event may cause our passengers.”
Hogg promised that the airline “acted immediately to contain the event, commence a thorough investigation with the assistance of a leading cybersecurity firm, and to further strengthen our I.T. security measures.”
The CEO added: “We are in the process of contacting affected passengers, using multiple communications channels, and providing them with information on steps they can take to protect themselves.”
The airline said that although no one’s travel or loyalty profile was accessed in full and no passwords were compromised, it nevertheless recommends that customers change their passwords regularly, check their various accounts for any suspicious activity, and stay vigilant against phishing and other attempted scams.
The hack comes just a month after British Airways revealed hackers had nabbed personal data belonging to 380,000 of its customers. But the size and scope of this most recent hack raises serious questions about how Cathay Pacific stored its customer data and what kind of security systems the company had in place to protect it.
Editors’ Recommendations
- British Airways data hack hits 380,000 recent customers
- Was your Facebook account hacked in the latest breach? Here’s how to find out
- Hack affects 2 million T-Mobile customers, unclear if passwords included
- Attacker stole user data from Reddit through employee accounts
- Australian student hacks into Apple, steals 90GB of data because he’s a ‘fan’
How to watch the World Series, with or without a cable subscription
Adam Glanzman / Contributor
The 2018 World Series is upon us, with the Boston Red Sox facing off against the Los Angeles Dodgers. The games all air live on Fox, but that’s far from the only way to watch. Whether you’re a hardcore baseball fan or you’re just looking to occasionally check in and see who’s in the lead, there are plenty of ways to watch the series, whether online, with cable, or without it, and we’ve listed them all here for you.
With an HD antenna
Bill Roberson/Digital Trends
If you don’t want to subscribe to either a pay TV service or a streaming TV service, you can use an antenna to watch the game, provided you live in an area where you can get Fox over the air. If you live anywhere near your local Fox affiliate, you should be able to get by with most indoor antennas, and we’ve got a list of some of the best ones you can buy. If you’re further away from your affiliate, you might need to pull out the big guns. The Mohu Striker, for example, has a range of up to 75 miles, but needs to be mounted either outside or in your attic.
Using a live TV streaming service
Not a cable subscriber and don’t want to bother hooking up an antenna just to watch the game? Good news: You don’t have to. There are plenty of live TV streaming services that will let you watch the game, potentially for free, as long as they offer up your local Fox affiliate.
Sling TV might be your best bet if you’re looking to watch the game on the cheap, but be sure to check that it carries your local channels. PlayStation Vue, DirecTV Now, and Hulu with Live TV will all work as well. If you’re a year-round sports nut, particularly one with a love of soccer, FuboTV is another great option that provides plenty of local channels around the country. Some of these services, including PlayStation Vue and DirecTV Now, also act as TV Everywhere providers, meaning you may be able to watch the games using Fox Sports Go, with the streaming service as your TV provider login.
Finally, there’s YouTube TV. This service gets its own special shout-out, as it will be airing Fox’s World Series coverage, no matter whether it carries your local affiliate or not, meaning this might be your best bet if you can’t get your local channels via any of the methods listed above.
One thing to keep in mind is that most of these streaming services have a free trial period, which should let you watch most of the remaining games in the series for free. You might need to jump between two of them if the series goes the full seven games, but that can help you figure out which streaming service is best for you — in addition to getting you your free baseball fix.
With a cable subscription
If you’re a traditional pay-TV subscriber, whether it’s via cable or satellite, you’ve got a lot of options. Sure, you can just watch the games live on Fox or DVR them and watch them at your leisure, but that’s just the beginning of how you can watch with your subscription.
In addition to airing on Fox, the games will also be available to watch using Fox Sports Go, either via its website or its various apps. Apps are available for iOS and Android, and streaming devices like Amazon Fire TV, Apple TV, and Roku are supported, as well as smart TVs on the Android TV platform. You can also watch on your Xbox One game console.
Editors’ Recommendations
- How to watch NFL games online, with or without cable
- How to watch ‘Game of Thrones’ online
- How to watch NBA games online
- Cord-cutting 101: How to quit cable for online streaming video
- I only watched free streaming services for a month, and I didn’t miss much
A.I. can do almost anything now, but here are 6 things machines still suck at
Predicting what A.I. and computers aren’t capable of is a fool’s errand. Throughout A.I.’s 60-year history, skeptics have attempted to single out tasks that they think machines will never be able to achieve. Such tasks have ranged from playing a game of chess to generating pieces of music to driving a car. In almost every instance, they have been proved wrong — sometimes profoundly so.
But as amazing as A.I. is here in 2018, there are still things that it is most assuredly not able to do. While some are more frivolous than others, they all showcase some part of machine intelligence that’s currently lacking. Here are six examples which highlight how much more there is to do.
Writing funny jokes
If you’re an A.I. researcher reading this, consider this one the (possibly) low-hanging fruit, tantalizingly within your reach. After all, writing a decent joke should be easy, right? Tell that to every attempt at creating joke-generating A.I.s so far.
Earlier this year, one intrepid coder trained a neural network on more than 43,000 jokes and asked it to invent new jokes. A representative, laughter-defying sample goes: “What do you get when you cross a cow with a rhino? A bungee with a dog.”
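The neural network in question won't fit in a few lines, but the flavor of its failure mode is easy to reproduce with a far simpler statistical cousin: a word-level Markov chain that stitches together fragments of its training jokes. This is a sketch of the general idea using a toy corpus of our own, not the coder's actual model:

```python
import random
from collections import defaultdict

# Toy training corpus (a stand-in for the ~43,000-joke dataset).
JOKES = [
    "what do you get when you cross a cow with a trampoline",
    "what do you get when you cross a snowman with a vampire",
    "why did the chicken cross the road",
]

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for joke in corpus:
        words = joke.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start="what", max_words=12, seed=1):
    """Walk the chain word by word, picking a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

print(generate(build_chain(JOKES)))
```

Each step only knows the previous word, which is exactly how you end up with output that reads like a joke without ever being one. A neural network has far more context than this, but "fluent fragments, no punchline" remains the shared failure mode.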
Whether a joke is funny or not is hugely subjective, but even the biggest bungee enthusiast is unlikely to find too much to chuckle about there. IBM’s Jeopardy!-playing A.I. showed that machines can be made to understand linguistic complexities such as multiple meanings of the same word. But so far not to purposely humorous effect.
So here’s a challenge: Get an A.I. to write and then deliver a 3-minute set list of comedic material that makes 50 percent of its (non-coder) human audience laugh. And no joke stealing allowed either. That means you should probably throw out your Carlos Mencia training data.
Writing good novels
The rise of companies like Narrative Science and the use of algorithms for sports reporting shows that writing is not out of reach for today’s computers. But we don’t expect to see a machine write a novel yet, regardless of whether that’s chart-topping popular genre fiction or highfalutin literary fiction.
It’s about venturing into problem-solving tasks, and deciding which pieces of information are relevant.
Writing either one requires more than generating text to relay fragmentary scraps of information, such as the score in a local football game. It means composing a narrative (or willfully subverting that idea) that resonates with readers, and then figuring out the best way to tell it.
There are some fascinating demonstrations of A.I. used to write prose. There are some very silly ones as well. But we’re not holding our breath for either a computational Jane Austen or J.K. Rowling any time soon. If ever.
Formulating creative strategies
On one level, this simply isn’t true. As Google DeepMind’s game-playing A.I. demonstrated, when it comes to things like playing Atari video games, intelligent agents versed in reinforcement learning can indeed formulate optimal strategies. I’m also of the belief that creativity isn’t an untouchable area for artificial intelligence.
What I’m talking about here, however, is the ability to formulate the kind of creative strategies that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO to lead his or her company in bold new directions.
This isn’t just about analyzing data; it’s about venturing into unstructured problem-solving tasks, and deciding which pieces of information are relevant and which can be safely ignored.
Excelling at these tasks also frequently requires the ability to…
Being human
Tough goal, right? No, we don’t mean this literally: If a machine needed to literally be a human to be considered intelligent then it would never happen. Instead, this refers to traits like compassion and an ability to tap into the things which drive us as human beings.
Machines are getting very good at identifying individual users’ emotional states through things like facial expressions and vocal patterns. They can then use this insight to modify how they interact with us, such as recommending us certain playlists when we feel sad or happy.
S3studio/Getty Images
But as good as computers might be getting at identifying diseases like cancer, would you choose one, instead of a human doctor, to tell you that you are dying of a terminal illness? On the lighter side, books and movies like Moneyball show us how data analytics can pick out winning sports teams. But could an A.I. be a top-level sports coach? These are important human roles, and they’re ones that are going to remain human for the foreseeable future.
If machines can’t adopt these skills, it’s going to limit the scope of what they can achieve in the workplace. (On the plus side, maybe that throws a lifeline to humans!)
Making a cup of coffee
Bear with us for a second here. Yes, there are plenty of smart coffee machines out there, but that’s not what we’re referring to. The Coffee Test was put forward by Apple co-founder Steve Wozniak as a measure of multiple aspects of machine intelligence and robot dexterity. The test Wozniak describes involves a machine entering a random American home, finding the coffee machine, adding water, finding a mug, and brewing a coffee by pushing the correct buttons.
Will we ever see a team of robots beat a human soccer team?
What I like about this test is how measurable it is. Other attempts to quantify an Artificial General Intelligence focus either on philosophical abstractions (the Turing Test) or have already arguably been reached (Nils John Nilsson’s proposal of an Employment Test). Wozniak’s test requires high performance in areas like image recognition, but it also needs a generalized, multi-purpose intelligence. So far, this hasn’t been achieved.
Beating human sports teams
Consider this a bonus one. That’s because, like making a cup of coffee, it’s not just about A.I., but also its related field of robotics: the hardware yang to software’s yin.
In the same way that A.I. must be generalized if it’s ever going to be considered intelligent, then robots must also be multi-purpose if they’re going to fully live up to their potential. We’re starting to see this with the likes of Boston Dynamics’ Atlas robot, which is as happy performing backflips as it is jogging or carrying out a bit of parkour.
But while A.I. has performed intellectual feats like defeating grandmasters at chess or winning games of Go against the best players in the world, the same hasn’t proven true for robots. Will we ever see a team of robots beat a human soccer team? Considering the speed and myriad skills that game entails, it seems a long way away.
Are we wrong?
It’s not often that I reach the end of an article and think, “Boy, I really hope I’m wrong about this one.” In this case, however, I really mean it. These are not, in my view, universal truths that can never change. The extraordinary progress of A.I. shows us how rapidly things are moving. More to the point, much of this progress has specifically been a response to statements like “a computer will never do X.”
If you think there is evidence for one or more of these statements to be untrue, let me know. Because, as we move forward in A.I. research, these are some of the hurdles that will need to be dealt with. Especially as we begin working with A.I. and robots in the workplace on a regular basis.
Editors’ Recommendations
- Moxi the ‘friendly’ hospital robot wants to help nurses, not replace them
- Robots are going to steal 75 million jobs by 2025 — but there’s no need to panic
- OpenAI bots bring the pain to professional ‘Dota 2’ players
- Teaching machines to see illusions may help computer vision get smarter
- 2019 Honda Pilot first-drive review
A Day With the iPhone XR: Unboxing and First Impressions
The iPhone XR is set to launch on Friday, October 26, and ahead of its release date, we were able to get our hands on a review unit from Apple.
We spent the day with the iPhone XR in New York City, checking out its feature set and doing a quick comparison with the higher-priced iPhone XS for all of our readers who are thinking of picking up Apple’s most affordable flagship smartphone later this week.
The iPhone XR in our video is the black version, which looks fantastic with the glass body and matching aluminum frame, but it’s worth noting the XR comes in several colors that are a bit more fun: white, yellow, coral, blue, and PRODUCT(RED).
Apple’s iPhone XS features a higher-end sturdier stainless steel frame instead of the lower-cost aluminum frame in the XR, but the XR doesn’t feel cheap. It’s a premium device, just like the iPhone 8 and other older aluminum iPhones.

Size wise, at 6.1-inches, the iPhone XR is a little bit bigger than the iPhone XS (5.8 inches) and a little bit smaller than the iPhone XS Max (6.5 inches), which makes it the perfect size for those who want a middle ground between the two XS options.
iPhone XR in the center
Like the iPhone XS, the iPhone XR features an edge-to-edge display with no Home button so there’s plenty of screen real estate, but the bezels at the sides, top, and bottom are noticeably thicker, which is one downside.
The iPhone XR uses the exact same TrueDepth camera system with Face ID that’s in the iPhone XS, which means there’s still a notch at the top. Overall, though, the visible display is much larger than what you get with the iPhone 8 and it’s almost the same as the XS and XS Max.
iPhone XR on left, iPhone XS Max on right
One of the most notable differences between the XR and the XS is the XR’s “Liquid Retina” LCD display compared to the OLED display of the XS. If you have both phones side by side, you’re going to notice the lower 1792 x 828 resolution, but on its own, the XR’s display is perfectly adequate. Basically, it looks totally fine for an LCD display, and, in fact, better than previous Apple LCDs.
There is no 3D Touch in the iPhone XR due to the technical challenges of implementing it alongside an edge-to-edge LCD display, and Apple has replaced it with a Haptic Touch feature. Haptic Touch provides haptic feedback when long pressing on buttons like the camera or the flashlight, but it is in no way as fleshed out as 3D Touch and can’t be used in as many places.
For frequent 3D Touch users, there’s good news — Apple plans to make Haptic Touch work with more gestures in the future.
At the back of the XR, there’s a single-lens camera in lieu of the dual-lens camera system in the iPhone XS. It features a single wide-angle camera lens, but it can still do much of what the dual-lens camera in the XS can do thanks to some software magic.
The iPhone XR’s camera works with Smart HDR, Portrait Mode, and Depth Control, but there are some differences to be aware of. With Portrait Mode, some low light photos can actually look better than on the iPhone XS, because the XR captures the image with its larger f/1.8 aperture wide-angle lens, which lets in more light, instead of the smaller f/2.4 telephoto lens the XS uses.

That means Portrait Mode photos in lower lighting on the XR are potentially going to look better than those on the XS, but there’s one major caveat – you can only take Portrait Mode photos of people on the XR.
Because there’s no second camera to help calculate the depth between background and foreground for blurring purposes, Apple uses a person’s face to determine what to blur and what to keep sharp. That means no rear-facing iPhone XR Portrait Mode shots of pets, flowers, food, and the like. You’ll also get fewer Portrait Lighting options.
Smart HDR is similar between the two cameras, though the feature is a bit more hit or miss when it comes to the handling of highlights and complex lighting situations.

As for the front-facing camera, it’s exactly the same as the one in the iPhone XS, so you can do all of the same things, with full access to Memoji and Animoji.
Inside, the iPhone XR uses the same A12 Bionic chip as the iPhone XS, which means it’s just as fast. And since that chip isn’t driving an OLED display, the XR gets considerably more battery life. In fact, the iPhone XR has the longest battery life of any of the three new flagship iPhones, XS Max included. One thing to note, though: the iPhone XR has 3GB of RAM, while the XS has 4GB.
There are a few other differences to be aware of between the XS and the XR: the XR has a slower LTE Advanced connection instead of a Gigabit LTE connection like the iPhone XS, it has an IP67 water resistance rating instead of IP68, and it maxes out at 256GB of storage.
All in all, the LCD display, the aluminum frame, Haptic Touch, and the single-lens camera are the major differentiators between the two devices, and we don’t feel like the iPhone XR’s shortcomings are going to be a big deal for most consumers.

With the super fast A12 chip, Face ID, an edge-to-edge display, a glass body for wireless charging, fun color options, and a lower price tag, the iPhone XR is a great smartphone that’s going to be an ideal choice for many people.
The question isn’t whether the iPhone XR is as good as the iPhone XS — it’s whether the iPhone XS’s OLED display and camera features are worth the extra $250 over the XR. Pricing on the iPhone XR, by the way, starts at $749, while the XS is priced starting at $999 and the XS Max is priced starting at $1,099.
Do you have an iPhone XR coming on Friday? Why did you choose it over the XS? Let us know in the comments.
We’re going to have in-depth iPhone XR coverage coming next week, including a deep dive into the iPhone XR camera compared to the iPhone XS camera, so make sure to stay tuned to MacRumors for more.
Apple Updates Events App for Apple TV Ahead of October 30th Keynote
Apple today updated its Events app for the fourth- and fifth-generation Apple TV in preparation for the October 30th event that’s expected to see the debut of new iPad Pro models and several new Macs.
The updated Events app can be downloaded from the tvOS App Store, and it features artwork from the media invites that were sent out last week.
Apple’s Events app, along with the Events section of Apple’s website, will be used to live stream the unveiling of the new products. This event is being held in New York City, which means it will start at 10:00 a.m. Eastern Time rather than 10:00 a.m. Pacific Time.
The Events app on the Apple TV will list the start time relevant to your own location, listing 10:00 a.m. for people on the East Coast and 7:00 a.m. for people on the West Coast.
As noted by 9to5Mac, Apple has also listed several Today at Apple sessions that allow customers to sign up to watch the unveiling at a local Apple Store. These sessions are available at several stores in the UK, as well as in Dubai and Toronto.

For those who are unable to watch Apple’s live stream, MacRumors will have live coverage of the event on MacRumors.com and on our MacRumorsLive Twitter account.
The new thin-bezeled Chromebooks from Asus won’t empty your wallet
Computer hardware company Asus has launched three new Chromebooks featuring narrow bezels, Intel quad-core CPUs, and a premium design. The three models come in 11.6-inch, 14-inch, and 15.6-inch sizes but remain affordable and stylish.
Priced at $229, the cheaper 11.6-inch Asus Chromebook C223 is for people who want a compact, lightweight device. Its footprint is smaller than a sheet of A4 paper, and it weighs just 2.2 pounds. It offers an Intel N3350 processor, 4GB of RAM, and 32GB of storage. Connectivity includes two USB 3.1 Type-A ports, one USB-C port, a microSD card slot, and an audio jack.
The other two models — the Chromebook C423 and Chromebook C523 — are priced at $269 and feature an aluminum-finished lid for a premium look and feel. They pack larger 14-inch and 15.6-inch displays, respectively, with 1,366 x 768 HD resolution and an 80 percent screen-to-body ratio. Touchscreen options are available, and connectivity includes two USB-A ports, two USB-C ports, a microSD slot, and an audio jack. The same Intel N3350 processor as in the 11.6-inch model powers both.
Both the Chromebook C423 and Chromebook C523 come with a 180-degree lay-flat hinge for easy sharing of your screen with friends or colleagues. Asus puts each hinge through a 20,000-cycle open-and-close test to ensure long-term reliability.
Similar to competing Chromebooks, all three models should get up to 10 hours of battery life on a single charge, ensuring that consumers can get work done on the move. The choice of an Intel quad-core processor should also make for speedy performance in popular Google and Android apps.
Other notable features on all three models include full-size, ergonomic keyboards with 1.4mm key travel. The Google Play store is also supported on all three models, allowing consumers to enjoy all their favorite Android apps.
Asus is not the first computer maker to go with a 15-inch Chromebook, but the slim bezels are a welcome change. Acer previously introduced a 15-inch Chromebook, and we approved of its great battery life, zippy performance, and awesome speaker placement. Lenovo also unveiled a 15-inch all-aluminum Chromebook at IFA 2018, with an optional 4K display. Considering Google did not reveal a new Pixelbook at its last media event, these devices all stand out as impressive options.
Powered brace proves restoring arm functionality is no longer out of reach
Wearable technologies such as exosuits can enhance people’s baseline abilities by giving them the ability to walk further or lift more than they would ordinarily be able to. In some cases, this means helping non-disabled people to carry out tasks more efficiently. Where it’s more profound, however, is in helping people with disabilities to perform actions that might otherwise be impossible.
This is where a powered brace called the MyoPro myoelectric arm orthosis excels. Manufactured by the wearable medical robotics company Myomo, the device is billed as the only lightweight wearable device on the market that can help restore functionality in the arms and hands of individuals with neuromuscular disease or injury. That includes the effects of conditions such as strokes, brachial plexus injury, or cerebral palsy.
The noninvasive device works by reading faint nerve signals on the surface of the skin, then using this information to activate small motors that move the brace as required. The user remains in full control of their own hand and arm at all times.
“MyoPro can help restore function to an arm paralyzed or weakened by any disease or injury — it is diagnosis agnostic,” Paul Gudonis, chairman and CEO of Myomo, told Digital Trends. “We estimate approximately 1 percent of the population is in this condition. Patients with their MyoPro brace enjoy getting back the ability to do everyday activities such as cutting food, getting dressed, holding a book, playing pool, and helping with chores around the house. MyoPro is suited for this because it is lightweight, wearable, easy to use, and its long-life swappable battery enables all-day use.”
At present, MyoPro is being used by more than 700 adults in the U.S., Canada, and select European countries. According to Gudonis, the device is covered by the majority of insurance plans. It has found particularly widespread adoption among veterans who have been injured in combat or suffered injuries after their service. For instance, one U.S. Air Force veteran who suffered a brachial plexus injury was told that his arm would remain paralyzed until he received his MyoPro. It was also recently approved for adolescents, thereby opening the technology up to a whole new audience.