24 Aug

Lyft wants to drop you off closer to your seat on your next stadium visit


While having a large quadcopter fly you to your seat would surely be the ideal way to arrive for a stadium event, we’re guessing such a service is still a ways off.

The second-best way might be Lyft’s new offering. The ridesharing company announced this week that it has partnered with ticket service SeatGeek to make getting to your seat at a gig or sports game as simple as possible.

Anyone who’s ever been to a huge stadium — and yes, most of them are huge — will know all too well how much time can be wasted trying to hunt down the right ticket gate. If you’re running late, chances are you’ll end up reaching your seat out of breath in a sweaty funk, having missed the start of whatever it is you’ve gone to see.

In an effort to ensure a more dignified arrival, Lyft and SeatGeek are working together to drop you off as close to your seat as possible, “saving up to 30 minutes in walking time at larger stadium venues,” the ridesharing company said in a post announcing the new service.

Lyft said that customers “who request a Lyft ride through the SeatGeek app will automatically have their seat location featured in the Lyft app, ensuring the driver drops them off as close to their seat as possible.”

Sounds good. The only downside is the extremely narrow nature of the launch, limited as it is to fans visiting Providence Park in Oregon, home to Major League Soccer’s Portland Timbers. “Timbers fans attending games via the SeatGeek app [iOS and Android] will automatically be dropped off at the best gate for their seats when taking a Lyft, saving them crucial time getting around the stadium,” Lyft promised.

The company has been working on improving pick-up and drop-off accuracy for a couple of years, though this is the first time it’s partnered with another outfit for stadium rides.

In 2016, it launched a “precise location selection” feature for its app at 200 sites across the U.S. So whereas you’d ordinarily indicate that you want to be driven to a particular airport, the feature — if it’s active for that particular location — will offer up more precise drop-off points, such as a particular entry door for the terminal you’re heading to. Of course, you could explain a precise arrival point to your driver as you go, but Lyft hopes the option improves the journey by letting you focus on other things.

Editors’ Recommendations

  • How to improve your Android privacy
  • How to calibrate your monitor
  • 5 outrageous headphones that will blow your mind, and your savings
  • How to calibrate your TV
  • Motorcycle buying guide: What to know before buying your first bike



24 Aug

IBM’s drone idea knows when you want a coffee, and then flies it to you


Coffee-delivering drones are already a thing — well, in a couple of gimmicky cafe-sponsored setups, at least — but tech firm IBM has had an idea to take the concept one step further.

Designing its system for both cafe and office environments, the company suggests using its technology to understand a person’s state of mind to determine whether a cup of coffee is required. And then using a drone to deliver it.

In other words, if the drone’s on-board sensors detect someone sitting at their desk with their head tilted and their eyes half closed, then it’s a safe bet a strong dose of caffeine is needed. Fast.

IBM’s offbeat idea is explained in a patent that was granted by the United States Patent and Trademark Office earlier this month.

While IBM dominated the early days of computing, in recent years it has turned its attention to fast-expanding sectors such as artificial intelligence (A.I.), unveiling a range of ideas far more impressive than a coffee-delivering drone. Even this system, though, is very much rooted in A.I.

For example, the patent describes how the drone’s technology would use cameras and smart sensors to interpret a person’s pupil dilation and facial expressions, using the information to decide whether that person could do with a cup of the ol’ bean juice. The document also says that with drinking being a habitual or ritualistic process for many people, the system’s A.I. smarts would enable it to “learn times and places at which an individual tends to prefer to consume coffee,” and then use that history to improve the efficiency of its drink delivery system.
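The decision logic the patent describes could be sketched as a simple scoring function. This is purely illustrative: the function name, signals, and thresholds below are all invented, and the real system would derive its inputs from cameras and learned history rather than hand-fed numbers.

```python
# Hypothetical sketch of the decision described in the patent: combine a
# live drowsiness estimate (e.g., inferred from pupil dilation and head
# tilt) with a learned history of when this person usually drinks coffee.
# All names and thresholds here are invented for illustration.

def needs_coffee(drowsiness, hour, habit_hours, threshold=0.7):
    """Return True if a coffee delivery should be dispatched.

    drowsiness  -- 0.0 (alert) to 1.0 (half-asleep)
    hour        -- current hour of day, 0-23
    habit_hours -- hours at which this person historically drinks coffee
    """
    # A habitual coffee hour lowers the bar for sending a drink.
    if hour in habit_hours:
        threshold -= 0.2
    return drowsiness >= threshold

# A moderately drowsy worker at their usual 9 a.m. coffee time triggers
# a delivery; the same drowsiness at an off-hour does not.
print(needs_coffee(0.55, 9, {9, 14}))
print(needs_coffee(0.55, 11, {9, 14}))
```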

On a simpler level, the technology could also be programmed to respond to hand gestures indicating the desire for a drink.

Special delivery

Several delivery methods are suggested in the patent. For example, the drink could be lowered to a recipient on an “unspooling string,” with the piping hot liquid refreshment sealed safely in a bag to prevent an unfortunate scalding incident should anyone knock it on its way down. Alternatively, the beverage could be dispensed directly into a cup from a coffee-carrying package carried by the drone.

Downsides to IBM’s idea? With drones constantly buzzing about as they try to determine who needs a cup of joe, the constant noise is likely to become unbearable for most office workers, especially in an open-plan office. The sensible alternative would be for the caffeine-deprived employee to place one foot in front of the other until they arrive at the kitchen, whereupon they can prepare their favorite drink. The light exercise may even be beneficial.

While it’s true that many patents never make it off the page, it should be noted that IBM’s effort comprises 16 pages of extremely detailed explanation and drawings and would likely have cost several thousand dollars to file. In addition, we’re pretty confident there are plenty of folks out there who rather like the idea of a drink-delivering drone, especially one that knows they need a coffee before they even do.

Editors’ Recommendations

  • Awesome Tech You Can’t Buy Yet: inflatable backpacks and robotic submarines
  • Uber could soon know if you’re drunk before the car arrives to pick you up
  • U.S. Army algorithm tells you how much coffee to drink to remain alert
  • Magic Leap One: Everything you need to know
  • Apple Car: What you need to know about Project Titan



24 Aug

T-Mobile Customers Can Now Enroll in iPhone Upgrade Program Online


T-Mobile customers are now able to enroll in the iPhone Upgrade program online, a process that used to require a visit to an Apple Store.

The change is reflected in updated language in the Apple Store app, which now says that customers can join the iPhone Upgrade Program online with AT&T, Sprint, T-Mobile, or Verizon.

AT&T, Verizon, and Sprint users have been able to enroll in the iPhone Upgrade Program online since the program launched, allowing for online purchases of new iPhones. T-Mobile subscribers, however, could not use the iPhone Upgrade Program online like other customers during previous iPhone launches.

With this policy change, T-Mobile customers who plan to purchase a new iPhone using the iPhone Upgrade Program when the 2018 iPhones launch should be able to do so entirely online without visiting a retail store.


Apple last year offered pre-approvals for the iPhone Upgrade Program, which allowed iPhone Upgrade Program customers to get through the checkout process more quickly when pre-orders kicked off.

Apple is likely to offer the same pre-approval process this year, which customers of all carriers will be able to participate in. Apple also offered Trade-in kits delivered by mail last year, another option previously not available to T-Mobile users.

[via Reddit]

Tags: T-Mobile, iPhone Upgrade Program


24 Aug

Meet Fusion: A helpful robotic ‘parasite’ that lives on your back


You have probably seen (or you may even be) one of those parents who walks around with their baby on their back in a kind of modified backpack. Imagine how useful and extra productive a parent would be if their kid didn’t just sit there babbling, but could actually use their arms to reach out and assist with tasks. Now imagine that the kid was a robot, and you’ll get the gist of Fusion, a crazy new research project coming out of Japan’s Keio University.

Shown off to great acclaim at SIGGRAPH 2018, Fusion offers its wearers a second pair of working arms. What makes this different from the other “extra limbs” projects we have covered at Digital Trends is that the operator of the Fusion robot is another human, who controls the arms remotely using the magic of virtual reality. Essentially, it gives you two bodies (and brains) for the price of one.

“Fusion is a wearable telepresence backpack system that acts as an extension to the wearer body — or surrogate — so a remote user can dive into and operate it,” Yamen Saraiji, one of the researchers on the project, told Digital Trends. “The backpack is equipped with two humanoid arms and a head. Using it, two people can share the body and physical actions. One remote person uses a virtual reality headset to see live visuals from the robot head’s binocular vision, and can control the arms naturally using two handheld controllers. Thus, the user can feel ‘fused’ with the surrogate body, and both can share their actions. This system can enable a wide variety of applications and scenarios that can be explored using it.”

Keio University

Saraiji said that one idea they have for said applications would be teaching someone to perform actions. For example, it could be used by a therapist to assist with a patient’s physical practice. (Or, and we’re projecting here, by an old-school boss to clip you round the ear when you make a mistake!)

“From our research perspective, we have been focusing on body augmentative technologies and their applications to enhance our wellbeing,” Saraiji continued. “For Fusion, we imagined the situation at which our bodies can become surrogates for others, so we can collectively perform tasks and solve problems from one shared body. The most evident problem was the disjointed collaboration between remote people that we actively face in the current telepresence systems. With the proposed concept of body sharing, we not only solve the collaboration problem, but also propose its potentials as a skill transfer and rehabilitation system.”

Editors’ Recommendations

  • This myth-inspired, karate-chopping centaur robot could save your life one day
  • Brain-controlled third arm lets you take your multitasking to the next level
  • Want an extra arm? A third thumb? Check out these awesome robotic appendages
  • It’s not Terminator time yet, but robots with living muscle tissue are on the way
  • Rolls-Royce is creating a fleet of robotic snakes and beetles to repair planes



24 Aug

IBM’s Holodeck-style classroom tech makes language-learning apps look primitive


Whether it’s apps like Duolingo or the ease of travel, there are plenty of ways technology has made it more straightforward to learn a second (or third or fourth …) language. Now, IBM Research and New York’s Rensselaer Polytechnic Institute have come up with an entirely new high-tech approach — and it totally reminds us of the Vulcan school from 2009’s Star Trek movie.

Called the Cognitive Immersive Room (CIR), it pairs an A.I.-powered chatbot smart assistant with a 360-degree panoramic display system to place users into a variety of immersive locations to try out their language skills. Currently, it’s being used for Mandarin, which is widely considered to be among the more difficult languages for Westerners to learn. The CIR setup drops students into scenarios like a restaurant in China and a tai chi class, where they can put their Mandarin to the test.

“The Cognitive Immersive Classroom is a very important use case for us,” Hui Su, Director of the Cognitive and Immersive Systems Lab at IBM Research, told Digital Trends. “We are currently focusing on language learning, and are building the classroom for students who study Mandarin as a second language. In this classroom, students are immersed into a 360-degree [environment], surrounded by real-life scenes such as a restaurant, street market, and garden. They can talk to the avatars, such as the waiters and salesmen on the screens, and perform tasks like ordering drinks and food and buying products. These tasks are developed as games for students to complete. In these games, the students will be able to practice and learn Chinese languages in a culture-rich environment, and talk to the A.I. agents who can understand what they just said.”

To make the experience even more immersive, the room is kitted out with cameras, Kinect devices, and microphones. This makes it possible to, for instance, point at an object and say “what is that?” and have the question answered. The microphones, meanwhile, can pick up on every nuance of a speaker’s words.

“A pitch contour analysis function was developed to visualize the difference between the students’ pronunciations and those by native speakers, so that students could see easily where their tone pronunciations need to be improved,” Su said. “The classroom is a brand-new initiative to integrate A.I. technologies and human-scale immersive technologies together for Mandarin teaching. We are using IBM Watson speech recognition and natural language understanding for English and Chinese.”
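As a rough illustration of what a pitch-contour comparison might involve (this is not IBM’s actual implementation), one can normalize a student’s fundamental-frequency samples against a native speaker’s and score the deviation between the two tone shapes:

```python
# Toy pitch-contour comparison. Real systems extract F0 with speech
# processing tools; here the contours are just lists of Hz values.

def normalize(contour):
    """Scale an F0 contour into [0, 1] so speakers with different
    voice ranges can be compared by tone *shape* alone."""
    lo, hi = min(contour), max(contour)
    return [(f - lo) / (hi - lo) for f in contour]

def contour_distance(student, native):
    """Mean absolute difference between two equal-length normalized
    contours; 0.0 means the tone shapes match exactly."""
    s, n = normalize(student), normalize(native)
    return sum(abs(a - b) for a, b in zip(s, n)) / len(s)

# Mandarin tone 2 (rising) reference vs. a student who stays nearly flat.
native_tone2 = [180, 190, 205, 225, 250]  # Hz, rising
student_try = [200, 202, 201, 204, 203]   # Hz, nearly flat
print(round(contour_distance(student_try, native_tone2), 2))
```

A visualization layer like the one Su describes would plot both normalized contours side by side so the student can see where their tone diverges.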

Coming soon to a classroom near you. Or, at least, we really, really hope it is.

Editors’ Recommendations

  • Google’s smart home mastermind talks security, A.I., the future of control
  • Awesome Tech You Can’t Buy Yet: inflatable backpacks and robotic submarines
  • Chinese search giant Baidu creates an open-source A.I. for detecting cancer
  • Warning: The way your neighbors use Alexa will bore the snot out of you
  • Like a vice principal in the sky, this A.I. spots fights before they happen




24 Aug

‘Scores’ of Employees Have Left Tesla for Apple Over the Last Year


Dozens of Tesla employees have left Tesla for Apple since late 2017, according to research conducted by CNBC.

The employees who left Tesla have joined multiple departments at Apple; the hires are not limited to Project Titan, Apple’s car development effort.

In 2018 so far, LinkedIn data shows Apple has hired at least 46 people who worked at Tesla directly before joining the consumer electronics juggernaut. Eight of these were engineering interns. This year Apple has also hired former Tesla Autopilot, QA, Powertrain, mechanical design and firmware engineers, and several global supply chain managers. Some employees joined directly from Tesla, while others had been dismissed or laid off before joining Apple.

A Tesla engineer who has kept in touch with his Apple colleagues spoke to CNBC and said that based on what he’s been told, Apple appears to be taking steps to “more tightly control manufacturing processes and equipment used to make products.”

A number of Tesla employees who have switched over to Apple have not yet updated their LinkedIn profiles with their new job descriptions, including notable hire Doug Field.

Field, who previously served as Apple’s VP of Mac hardware engineering, rejoined the company after spending five years at Tesla overseeing the production of the Model 3. Field’s hiring, along with rumors from noted Apple analyst Ming-Chi Kuo, have led to speculation that Apple is once again developing a full Apple-branded self-driving vehicle rather than focusing solely on autonomous software.

Tesla employees told CNBC that Field’s departure from the company led to a dip in morale among engineers and technicians at Tesla. Even before Field left, however, more people were leaving Tesla for other companies like Apple.

According to Tesla, voluntary attrition has decreased by one-third over the last 12 months, with the company also claiming that it has added talent from Apple and other companies. From a Tesla spokesperson:

“We wish them well. Tesla is the hard path. We have 100 times less money than Apple, so of course they can afford to pay more. We are in extremely difficult battles against entrenched auto companies that make 100 times more cars than we did last year, so of course this is very hard work. We don’t even have money for advertising or endorsements or discounts, so must survive on the quality of our products alone. Nonetheless, we believe in our mission and that it is worth the sacrifice of time and the never ending barrage of negativity by those who wish us ill. So it goes. The world must move to sustainable energy and it must do so now.”

Apple’s “leadership, competitive pay, and products” are among the driving factors that have encouraged employees to leave Tesla for Apple. Multiple sources told CNBC that Apple pays about one-and-a-half times the salary for technicians, software, and manufacturing engineers compared to Tesla.

Other employees have cited Apple stock and the volatility of Tesla CEO Elon Musk as factors for leaving.


24 Aug

Huawei Mate SE Review




2018 has not been super kind to Huawei as the US government squashed plans to partner with AT&T and Verizon. However, that has not stopped the company from rolling out new options, including the budget-minded Huawei Mate SE.

The Mate SE was announced back in March, just a month after the Honor 7X. For those unaware, Honor is actually a sub-brand of Huawei that focuses on budget-minded buyers.

READ MORE: Five best budget Android smartphones you can buy

With the arrival of the Mate SE, we saw a few improvements when compared to the 7X. Mainly, there was a bump in the RAM and storage, while the design largely remained the same.

Huawei Mate SE Specifications

  • Display – 5.93-inches
  • Processor – HiSilicon Kirin 659
  • RAM – 4GB
  • Storage – 64GB
  • Front Camera – 8MP
  • Rear Cameras – 16MP + 2MP
  • Battery – 3,340mAh
  • Software – Android Oreo w/EMUI 8.0
  • Others – Dual nano-SIM cards, Fingerprint Scanner

Design

To put it simply, the Mate SE’s design is solid. We have a 5.93-inch display with a 2:1 aspect ratio, thanks to thin bezels on the sides and smaller top and bottom bezels.

Instead of opting for an all-glass design, Huawei used metal for the frame and rear, while a sheet of glass covers the LCD panel on the front.

On the front, there is the earpiece, along with the proximity sensor and the front-facing 8MP selfie camera. On the bottom, the Mate SE includes a microUSB port, a bottom-firing speaker, and a 3.5mm headphone jack.

The power and volume buttons sit on the right, with the dual-SIM tray on the left. At the top, there is nothing but the secondary microphone, used to improve call clarity.

Since the front is all glass, Huawei opted to move the fingerprint scanner to the rear. This is found in the middle of the device, with some Huawei branding placed closer to the bottom.

Finally, we come to the rear-mounted camera setup. Huawei decided to use horizontally-mounted cameras, along with an LED flash placed to the left.

Software

When the Mate SE shipped, it ran Android Nougat with Huawei’s EMUI 5.1 overlay. In the months since release, however, Huawei has pushed the Android Oreo update to devices.

The software as a whole is reliable, although you will likely tire of EMUI after a while. While serviceable, certain aspects can be rather annoying, including the placement of various options.

However, not everything is bad in the software department, and a lot of that is thanks to EMUI 8.0. New features have arrived, including facial recognition, along with various other improvements.

You will still get Huawei’s take on Android with EMUI, but this can be rectified with the help of a custom launcher and icon pack. However, you will be stuck with the various Settings panels and such that seem a bit too vibrant.

Cameras

As has been the tradition for the last year or so, the Mate SE comes with a dual-camera system. A primary 16MP sensor does just about all of the heavy lifting, backed by a secondary 2MP sensor.

This secondary lens makes it possible for you to take images in portrait mode. By having the second sensor, you can get a bokeh effect to make your subject pop a little bit more.

In our testing, the Huawei Mate SE provides a solid experience in the camera department. The cameras won’t blow away the competition and don’t match up with flagship handsets.

However, Huawei still puts enough effort to make these images reliable and useful. If you tinker with the different modes, you can get some good results.

Huawei Mate SE Camera Samples

As for the front-facing camera, I was actually a bit surprised. I’m not much of a selfie taker, as I prefer to rely on the rear cameras (since they are better).

However, the selfies from the Mate SE were crisp and clean, even better than those from some of my other devices. That’s great news for those who love taking selfies at events with friends, or just for a new profile picture.

Usage and Performance

In our day-to-day usage, the Mate SE provided the experience we expected. Battery life was adequate, usually lasting throughout the day with moderate usage.

Occasionally, there were some software stutters when quickly switching between apps, but that is to be expected. The Kirin 659 is not really built for power users, but for those who just want a comfortable and reliable experience.

One thing to note is the Mate SE’s LCD display. To keep costs down, Huawei didn’t spring for an AMOLED panel in its budget line.

But that doesn’t mean you are going to have a subpar experience. I would actually venture to say that this panel will suit just about everyone, and you shouldn’t run into any issues.

Conclusion

At just $250, the Huawei Mate SE is definitely a front-runner for those looking for a solid budget device. If you get lucky with a deal on Amazon, you can find the Mate SE for as low as $230.

All told, Huawei really is killing the game when it comes to mid-range and budget smartphones. Between the Mate SE and Honor 7X, you really can’t go wrong.

Just don’t expect flagship-like performance, though the gap is narrowing all the time. If you want to pick up a Huawei Mate SE for yourself, hit that beautiful button below.

In the meantime, let us know what you think about Huawei’s latest budget device and if you are interested in picking one up!

Buy the Huawei Mate SE!


24 Aug

When it comes to Chrome, there may not be a reason to expect privacy


If you’re browsing the web anonymously in Incognito mode on Chrome and need to check your Gmail, you may want to take extra precautions. A new report released by trade organization Digital Content Next suggests that Google could use cookies to associate your Incognito browsing with your Google account if you log into a Google service during your anonymous web session, an assertion that Google disputed.

“This allows Google to connect the user’s Google credentials with a DoubleClick cookie ID,” said Vanderbilt University computer science professor Douglas Schmidt, the study’s author. “While such data is collected with user-anonymous identifiers, Google has the ability to connect this collected information with a user’s personal credentials stored in their Google Account.”

Even during a private browsing session in Chrome, websites can still set cookies that ad networks read for the duration of that session. So if you log into a Google service in Incognito mode, Google could theoretically connect your Incognito web history to your Google account by associating those session cookies with your login.
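To see why such an association is technically possible, consider a toy join of two logs: anonymous ad-cookie events and login events. The log shapes and field names below are invented for illustration; nothing here reflects Google’s actual systems.

```python
# Illustrative only: how an ad server *could* link anonymous cookie
# activity back to an account once the same browser logs in.

def link_sessions(ad_events, login_events):
    """Join anonymous ad-cookie activity to any account that logged in
    with the same cookie ID."""
    cookie_to_account = {e["cookie_id"]: e["account"] for e in login_events}
    return [
        {**ev, "account": cookie_to_account[ev["cookie_id"]]}
        for ev in ad_events
        if ev["cookie_id"] in cookie_to_account
    ]

ad_events = [
    {"cookie_id": "dc_123", "url": "example.com/private-page"},
    {"cookie_id": "dc_999", "url": "example.com/other"},
]
login_events = [{"cookie_id": "dc_123", "account": "user@example.com"}]

# Only the dc_123 visit can be tied back to an account.
print(link_sessions(ad_events, login_events))
```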

A Google spokesperson disputed Schmidt’s claims in a statement to AdAge but did not give a full denial. “We do not associate incognito browsing with accounts you may log into after you’ve exited your Incognito session,” Google said. “And our ads systems have no special knowledge of when Chrome is in incognito mode, or any other browser in a similar mode (ex: Safari Private Browsing, Firefox Private Browsing). We simply set and read cookies as allowed by the browser.”

Schmidt said it wasn’t clear whether Google is actually connecting Incognito histories to Google accounts, but noted that “if you read the fine print on ‘incognito’ mode it brings up a whole lot of disclaimers.” Given that Chrome is the leading web browser with more than 1 billion monthly users, Google would have the capability to collect even more data if it did apply this practice. Google dismissed the report, arguing that Schmidt is biased given that he was a witness for Oracle in that company’s lawsuit against Google.

Regardless, if you’re browsing in Incognito mode, it’s probably a good practice to not log into personal services, and you should refrain from logging into your Google account until after you have exited your incognito session.

This latest report follows a recent discovery that Google still tracks users’ location on Android phones even if they turn off location tracking in the Google Maps app. Google subsequently revised its terms of service to note that location data may still be collected based on your web history. Schmidt confirmed this in his study, noting that Google gets a user’s location every time an Android phone connects to Wi-Fi or a cell tower. Combined with the phone’s sensor data, this helps Google identify whether the user is walking, running, biking, or riding in a car or train.

Editors’ Recommendations

  • How to use YouTube’s Incognito Mode in the Android app
  • How to clear cookies
  • The best web browsers
  • Daydream VR users can browse with Google Chrome in virtual space
  • Google Photos can take up less space on your phone with new mobile support



24 Aug

Older Chromebooks may not run Linux programs due to outdated software


Not all Chromebooks will support Linux software when the feature comes to Chrome OS later this year. So far, 14 devices may be excluded from the list including Google’s own Chromebook Pixel introduced in 2015. The current list, generated on Reddit, consists of four models from Acer, four models from Asus, two from AOpen, and more.

Google revealed support for Linux software on Chrome OS during its developer conference earlier this year. The idea is for developers to test their Android- and web-based apps on Chromebooks. Linux would run inside a virtual machine designed specifically for Chrome OS, essentially an emulated computer running within the host system’s memory.

“It starts in seconds and integrates completely with Chromebook features. Linux apps can start with a click of an icon, windows can be moved around, and files can be opened directly from apps,” Google said in May.

Therein lies the problem. The virtual machine support Chrome OS relies on didn’t appear in the Linux kernel (the operating system’s core) until version 4.8. Google may continue to update the Chrome OS layer on older Chromebooks, but the underlying kernel will remain the same. Chromebooks now appearing on the makeshift blacklist are based on the Linux 3.14 kernel and will never see a kernel upgrade, making them incapable of running virtual machines.
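The gating described above boils down to a version comparison. Here is a minimal sketch that takes the article’s 4.8 cutoff at face value (the exact requirement is the article’s claim, and the helper name is ours):

```python
# Check a kernel release string (as reported by `uname -r`) against
# the 4.8 cutoff the article cites for virtual machine support.

def kernel_supports_vms(release, minimum=(4, 8)):
    """Parse 'major.minor[.patch...]' and compare (major, minor)
    tuples, so e.g. 4.14 correctly ranks above 4.8."""
    major, minor = (int(x) for x in release.split(".")[:2])
    return (major, minor) >= minimum

for rel in ("3.14.0", "4.4.0", "4.14.67"):
    print(rel, kernel_supports_vms(rel))
```

Comparing tuples rather than the raw strings matters: a naive string comparison would rank "4.14" below "4.8".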

The kernel is the pure core of an operating system. With Linux, one kernel is made available for free, enabling developers to create an operating system around this core. Examples include Ubuntu, Debian, Android, Chrome OS and so on. Android devices actually suffer a similar issue: Phones and tablets may receive a few updates throughout the year, but never an upgrade to the actual core. New kernels are typically introduced when Google releases a new version of Android.

Meanwhile, Google’s list of Chromebooks supporting Android isn’t quite so short. A beta version of Android support appeared in 2014, but unlike Google’s plans for Linux software, apps run natively on Chrome OS within a software “container.” These sandboxed bubbles provide apps access to key Chrome OS services and hardware without changes to their code.

That said, Linux-based software will run on a virtual machine, but that emulated PC will still reside within a sandboxed software container. Keep in mind that virtual machines and containers are not the same: Virtual machines emulate high-end PCs while containers create an isolated “bubble” within the kernel to provide specific resources to software. Running a virtual machine means Chromebook owners won’t be forced to reboot the device into Developer Mode.

Google dubs this Linux push on Chrome OS Project Crostini. When support for Linux software will actually arrive on Chrome OS is up in the air for now, although Google says “this year.” Given that a virtual machine is involved, some Chromebooks won’t be able to support Linux software anyway due to hardware limitations, not just the outdated kernel issue.

Check out our list of the best Chromebooks you can buy for 2018 right here.

Editors’ Recommendations

  • Samsung Chromebook Pro review
  • The best Chromebooks of 2018
  • How to get Android apps on a Chromebook
  • How to install Windows on a Chromebook
  • 5 features we want in Google’s Pixelbook 2


