MacOS High Sierra brings HEVC support, VR SDKs, and more to the Mac

Earlier today, Apple’s Craig Federighi took to the stage at the company’s annual WWDC 2017 developer conference to talk about some new additions to MacOS. He noted that this year, Apple will focus on perfecting Sierra, in a revision dubbed MacOS ‘High Sierra.’
Safari improvements

Refinements are being made to Safari, which Federighi billed as the world’s fastest desktop browser. He went on to claim that Safari will offer speed improvements of up to 80 percent over Google Chrome when it comes to presenting modern JavaScript content.
It’s also set to offer users a more serene browsing experience, with new autoplay-blocking functionality. Safari detects sites that play video automatically, and gives the user control over whether they see it or not. It’s also set to receive Intelligent Tracking Prevention, which uses machine learning techniques to stop sites and services from engaging in invasive practices — like stalking users around the web with adverts for products they’ve recently shown an interest in.
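Apple hasn’t said what signals its model actually weighs. Purely as a toy illustration, with every name and threshold invented, the core cross-site signal such a classifier could lean on looks something like this in Swift: a third party that shows up across many unrelated sites is probably a tracker.

```swift
// Toy tracker heuristic, NOT Apple's actual model: remember which
// first-party sites each third-party host shows up on, and flag hosts
// that appear across many unrelated sites.
var sightings: [String: Set<String>] = [:]  // third-party host -> first-party hosts

func record(thirdParty: String, onSite firstParty: String) {
    sightings[thirdParty, default: []].insert(firstParty)
}

func looksLikeCrossSiteTracker(_ host: String, threshold: Int = 3) -> Bool {
    (sightings[host]?.count ?? 0) >= threshold
}

record(thirdParty: "ads.example.net", onSite: "news.example.com")
record(thirdParty: "ads.example.net", onSite: "shop.example.org")
record(thirdParty: "ads.example.net", onSite: "blog.example.io")
print(looksLikeCrossSiteTracker("ads.example.net"))  // true
```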
Compressed Mail and photo refinements
Apple has worked on some updates to its Mail app, using compression to reduce the disk space messages take up by 35 percent. In addition, the Compose window will support Split View, and Spotlight will be able to determine which messages are most important and prioritize them over other correspondence.
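Apple didn’t specify which compression scheme Mail adopts, but the general effect is easy to demonstrate with the system frameworks. Here’s a minimal sketch using an NSData convenience from later macOS releases, so it illustrates the idea rather than Mail’s actual implementation:

```swift
import Foundation

// Compress a block of repetitive, text-heavy data (like email bodies)
// with Apple's LZFSE algorithm and compare sizes.
let message = String(repeating: "Lorem ipsum dolor sit amet. ", count: 200)
let raw = Data(message.utf8)
let squeezed = try! (raw as NSData).compressed(using: .lzfse) as Data
let saving = 100 - squeezed.count * 100 / raw.count
print("\(raw.count) bytes -> \(squeezed.count) bytes (\(saving)% saved)")
```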
Photos is also set to receive some refinements, like improved facial recognition, and categories that are synchronized across all Apple devices. Expanded physical printing choices are also being introduced, including third-party photo printing services.
There are also some major improvements to the editing capabilities of the Photos app, including the ability to fine-tune a color curve, selective color editing, and functionality that will sync edits across various devices.
Under the hood updates to speed things up

Federighi moved on to some more big changes, starting with the news that the 64-bit Apple File System (APFS) would at long last be making its way to MacOS. APFS offers some noteworthy speed improvements, as demonstrated by a video of a lightning-fast copy process duplicating several HD video files. APFS will be the default file system for MacOS, and offers built-in encryption support.
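That lightning-fast copy follows from APFS’s copy-on-write clones: duplicating a file creates a new reference to the same data blocks instead of rewriting them. A quick sketch to try it yourself, with placeholder paths:

```swift
import Foundation

// On an APFS volume, this copy should return almost instantly because
// Foundation clones the file (copy-on-write) rather than duplicating
// its blocks. On HFS+, the same call copies every byte.
let src = URL(fileURLWithPath: "/tmp/big-video.mov")
let dst = URL(fileURLWithPath: "/tmp/big-video-copy.mov")

let start = Date()
try! FileManager.default.copyItem(at: src, to: dst)
print("Copy finished in \(Date().timeIntervalSince(start)) seconds")
```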
Support for the HEVC video compression standard is coming to all Macs, and hardware acceleration will be offered on the newest models: the 27-inch iMac from late 2015, the MacBook from early 2016, and the MacBook Pro from 2016. It’s set to be built into apps like Final Cut, Motion, and Compressor, to help video editing pros get the best possible results.
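For developers, HEVC support surfaces in AVFoundation as new export presets. A minimal sketch, with placeholder file URLs and no real error handling:

```swift
import AVFoundation

// Transcode a movie to HEVC using the preset added in High Sierra / iOS 11.
let asset = AVAsset(url: URL(fileURLWithPath: "/tmp/input.mov"))
guard let export = AVAssetExportSession(asset: asset,
                                        presetName: AVAssetExportPresetHEVCHighestQuality) else {
    fatalError("HEVC preset unavailable on this OS or hardware")
}
export.outputURL = URL(fileURLWithPath: "/tmp/output-hevc.mov")
export.outputFileType = .mov
export.exportAsynchronously {
    print("Export finished with status: \(export.status.rawValue)")
}
```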
The focus then turned to graphics, as Federighi talked up Apple’s high-performance graphics API, Metal. He then announced Metal 2, a “tremendously fast,” highly optimized new iteration of the tool. Beyond graphics, Metal 2 will also power machine learning on the Mac.
Apple also detailed how users who are eager to add some extra muscle to their Mac will be able to use Metal alongside external graphics hardware. Starting today, a developer kit is being made available that offers a Thunderbolt 3 enclosure with an AMD Radeon RX 580 graphics card and a USB-C hub. Support for external graphics will subsequently be rolled out to all users.
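In code, eGPU support shows up as extra entries in Metal’s device list, with a new flag marking removable hardware. A small sketch of how an app might spot an external card like that Radeon RX 580:

```swift
import Metal

// Enumerate every Metal-capable GPU in the system. The isRemovable
// flag (new in macOS 10.13) identifies GPUs in external enclosures.
for device in MTLCopyAllDevices() {
    let kind = device.isRemovable ? "external"
             : device.isLowPower  ? "integrated"
             : "discrete"
    print("\(device.name): \(kind)")
}
```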
Federighi went on to announce plans to introduce a version of the Metal API that’s tailored for virtual reality alongside High Sierra. It’s set to help content creators push the limits of hardware being used to run VR experiences, while improvements to software like Final Cut will make it easier to edit spherical video to be viewed on headsets.
Some VR upgrades and High Sierra’s release date

High Sierra is set to receive several software packages that should help foster VR development on MacOS. Valve is bringing the SteamVR SDK to the platform, and both Unity and Unreal’s VR engines are coming to the Mac, too.
A developer preview of High Sierra is available today, with a public beta starting later in June. The free update will ship this fall, and is compatible with all systems that support Sierra.
Medical emergencies pose a whole host of new challenges in outer space
Why it matters to you
Space exploration is key to humankind’s continued progress, and keeping crews healthy is of vital importance. To that end, physicians are trying to prepare astronauts to handle a whole host of unique medical challenges they will face on missions.
As humanity prepares for manned missions to Mars within the next decade, physicians here on Earth are highlighting some of the challenges these pioneering astronauts will face. One unfortunately inevitable event will be a medical emergency. How will crew members react? How should they be trained? And what happens if the doctor dies?
Experts are tackling the topic at this year’s Euroanaesthesia Congress in Geneva, which is hosted by the European Society of Anaesthesiology.
“Space exploration missions to the moon and Mars are planned in the coming years,” Matthieu Komorowski, a physician from Charing Cross Hospital in London, said in a statement. “During these long-duration flights, the estimated risk of severe medical and surgical events, as well as the risk of loss of crew life, are significant.”
Space is a hostile and unforgiving environment. Simply by being there, astronauts increase their risk of conditions such as bone fractures and cardiovascular issues. Radiation is rampant and, without sufficient exercise, microgravity turns muscles to jelly.
“The exposure to the space environment itself disturbs most physiological systems and can precipitate the onset of space-specific illnesses,” Komorowski said.
If an emergency does occur, communications with Earth will be slow and limited, so Komorowski suggests that crews be diversely trained, with skills duplicated between personnel to increase the likelihood that a qualified person is able to treat an injured patient. “Extending basic medical training to most crew members will be extremely important,” he said.
Many of the measures taken for emergency medical care in outer space will be adapted from those used in remote regions on Earth, such as Arctic base camps. For example, crew members will be selected, in part, based on matching blood types. Medical equipment will also be 3D printed to save cargo space.
Space also poses a number of challenges to medical procedures that don’t arise on Earth. CPR — a pretty straightforward procedure on Earth — becomes a challenge in microgravity, where a rescuer can’t use their own body weight to deliver chest compressions.
A team led by Jochen Hinkelbein, a physician at the University Hospital of Cologne, is on the case. It found that a “handstand” technique is effective in microgravity. Hinkelbein presented his findings at the conference.
Scientists need you to play classic Atari games, teach their AI new tricks
Why it matters to you
Watching skilled humans solve problems helps AI learn faster. And yes, sometimes that involves Ms. Pac-Man.
Learning valuable skills by playing video games sounds suspiciously like the kind of feeble excuse we used as teenagers to explain why we were playing GoldenEye 007 instead of doing our homework. But in the case of a new AI project carried out by computer scientists at RWTH Aachen University in Germany and Microsoft Research, it turns out to be absolutely true.
“What we’ve developed is a way to collect data of humans playing five Atari games, a large dataset of humans playing them, and the insight that — with current algorithms — less data of better players seems to be more useful for learning than more data of worse players,” Lucas Beyer, a researcher on the project, told Digital Trends. “This might sound obvious, but really it’s not: The common theme being ‘the more data the better.’”
There has been interesting work done before involving AI and classic Atari 2600 games. For example, a couple of years ago, an artificial agent created by the Google-owned DeepMind was able to learn to play games like Breakout without a human showing it how to. As Beyer notes, in the case of his and his colleagues’ work, humans are involved — since the bots are watching human players play through the games Q*Bert, Ms. Pac-Man, Space Invaders, Video Pinball, and Montezuma’s Revenge.
This playthrough data was gathered from Redditors, who turned out to be more than happy to revisit some vintage arcade games — all in the interest of improving AI. What is impressive about the work is that the AI was able to learn new skills, such as problem solving, by extracting patterns from the human playthroughs it analyzed.
“This dataset is an open testbed for developing reinforcement learning algorithms that can get a head start by looking at human demonstration, as opposed to learning everything from scratch,” Beyer continued.
The algorithm was even able to learn to sort good players from bad, without ever being told what a “good” or “bad” player might look like.
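That curation insight, that a little expert play beats a mountain of mediocre play, boils down to a simple filtering step before training. Here’s a hypothetical Swift sketch; the types and field names are invented rather than taken from the actual dataset:

```swift
// Keep only demonstrations from the top-scoring players before
// handing them to an imitation learner. Types are illustrative.
struct Episode {
    let player: String
    let finalScore: Int
    let frames: [(screen: [UInt8], action: Int)]  // per-frame observation/action pairs
}

func curate(_ episodes: [Episode], keepFraction: Double = 0.25) -> [Episode] {
    let ranked = episodes.sorted { $0.finalScore > $1.finalScore }
    let keep = max(1, Int(Double(ranked.count) * keepFraction))
    return Array(ranked.prefix(keep))
}
```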
Next up, the team wants to build on their system — by adding data from more expert Atari players. And did we mention the best part? They want you (yes, you!) to help them.
“It would be cool if you can ask people to play games on our website and add more data as a result,” Yobi Byte, another researcher on the project, told us.
Now how’s that for the homework you always dreamed of?
Apple unveils MacBook Pro upgrade, plus a price cut for entry-level option
Why it matters to you
The MacBook Pro lineup is now faster than before, and the entry-level model is just $1,300 — that’s a $200 price cut.

At WWDC on Monday, Apple announced the latest MacBook Pros will be getting yet another update, this time bringing Intel’s latest seventh-generation “Kaby Lake” processors to the flagship notebook lineup. While it wasn’t exactly the star of the show at WWDC, it comes as a welcome reminder that Apple remains committed to the MacBook Pro as a platform.
Starting today, the entire MacBook Pro line will be receiving a much-needed hardware upgrade, with Intel’s latest-generation Core processors replacing the sixth-gen Core chips that shipped with the new MacBook Pro lineup in late 2016. This will mean better performance, improved power efficiency, and higher turbo boost clock speeds.
The MacBook Pro 13 can now hit 3.5GHz with the top-end Intel Core i7, while the MacBook Pro 15 will hit 3.1GHz with its own seventh-generation Intel Core i7 chip.
The 15-inch model will also feature more powerful discrete graphics options, as well as more video memory.
Thankfully, Apple isn’t charging more for the new chips. The opposite, in fact: The entry-level MacBook Pro 13 (without the Touch Bar) will not only receive a faster processor, but also a cheaper price, starting at $1,300 instead of its debut price of $1,500. That’s definitely a step in the right direction for anyone eyeballing a low-price MacBook Pro.
We’ll have to wait and see just how well the new seventh-gen processors perform up and down the MacBook Pro lineup, but given the strong performance of the outgoing sixth-generation chips, we can expect higher clock speeds and better all-around performance. Our tests of Windows 10 systems found that notebooks with seventh-gen Intel Core hardware were 10 to 15 percent quicker than those with sixth-gen chips.
Even the lowly MacBook is getting an upgrade: the new seventh-generation Intel Core i7 hits 1.3GHz, and the machine gains a 50 percent faster SSD and support for twice as much RAM.
Meanwhile, the MacBook Air struggles on. Apple announced the MacBook Air will be receiving a similar hardware upgrade, but didn’t get into the specifics. That likely means it will be similar to its predecessors in most respects, but receive a bump to Intel seventh-gen Core.
The latest MacBook Pros and MacBook Air won’t ship running Apple’s new MacOS High Sierra — a refinement of the last major MacOS update — but the software will be available for download today via Apple’s developer program. For everyone else, MacOS High Sierra will arrive this fall.
Walmart’s new VR training program sounds like a bad ‘Black Mirror’ episode
Why it matters to you
Walmart’s VR tech will enable its employees to practice scenarios to provide better customer service.
Virtual reality can transplant users into any manner of amazing, exotic locations — so why would you want to use it to put yourself in the shoes of a retail employee on the most stressful shopping day of the year? If you’re Walmart, the answer is obvious: training.
Thanks to a handy assist from STRIVR — a VR company that has previously used its tech to help NFL players train — Walmart has announced that its 200 “Walmart Academy” training centers will be using virtual reality training by the end of the year. And, yes, this includes a VR version of Black Friday.
“For the past several months, we have been testing VR at 31 Walmart Academies, which are the regionalized training facilities Walmart uses to train new and existing employees,” Danny Belch, vice president of strategy, sales & marketing at STRIVR, told Digital Trends. “By the end of 2017, VR will be at all 200 of Walmart’s Academies. Over 140,000 Walmart employees will get to experience VR every year as a result of using it in the Academy system, which is one of the largest, if not the largest, VR rollouts in the history of virtual reality. Walmart has displayed incredible innovation in choosing to utilize VR in this way, and it has been really fun working with their team over the last several months.”
But why exactly is a Black Friday simulator necessary? Belch says that the STRIVR team recommends VR when a training scenario might be prohibitively dangerous or expensive to carry out in the real world. But despite how much loot Black Friday brings in, he notes that the virtual reality setups the team has developed go way beyond preparing for just one shopping day.
“That is only one small part of what Walmart is doing with training in VR,” he said. “We have created a library of virtual content that addresses lots of different scenarios a Walmart employee may encounter on a daily basis in his or her job. These range from spotting errors in different parts of a store to engaging in different types of customer service modules. The general idea is to give employees more repetitions at the mental decisions they have to make on a daily basis, which in turn will lead to a better experience for all Walmart customers.”
Apple’s redesigned App Store makes it easier than ever to find new apps
Why it matters to you
Your app-browsing experience is about to get a whole lot cleaner thanks to the redesigned App Store.

For the first time since its launch, Apple has completely redesigned the App Store experience on iOS — meaning that not only will it be easier for you to find and download apps, but it should also make for a cleaner experience in general.
Perhaps the most interesting aspect of the new App Store is the addition of several new tabs. Notably, the App Store will now have a “Today” tab, which will show the latest apps to hit the store and make it easier to discover new content.
Today isn’t the only tab that will be part of the redesigned store. The App Store will also feature a “Games” tab, which will let you see the latest and greatest games to hit the App Store. You can even drill down further to browse a particular kind of game.
The pages for individual apps have also been redesigned, making for a much cleaner, nicer-looking App Store. From these pages, you’ll be able to quickly and easily see the information you need about an app, as well as user reviews, so you can tell whether an app lives up to the hype before downloading it.
The update is a welcome change to the App Store. According to Apple, a massive 180 billion apps have been installed since the launch of the App Store, and Apple has paid out a whopping $70 billion to developers.
The App Store update was announced at Apple’s Worldwide Developer Conference on Monday, but it wasn’t the only big announcement — the company also showed off new versions of iOS, WatchOS, and MacOS. In general, WWDC is one of Apple’s biggest shows of the year — and you can keep up with all of our WWDC coverage here.
It may soon be possible to accurately re-create facial images from memory
Why it matters to you
Research suggests it may one day be possible to accurately generate images of faces based only on a person’s memory of them.
Researchers at the California Institute of Technology have demonstrated that it is possible to re-create images of human faces by monitoring macaque monkey brain cells. The work shines a light on how exactly faces are processed by the brain.
Using functional magnetic resonance imaging (fMRI) technology, six areas of the brain were shown to be involved with the identification process. The team referred to the neurons in these areas as “face cells.”
In an experiment that involved inserting electrodes into the brains of monkeys to record their neural responses while looking at images, the researchers found that 205 neurons encode different characteristics of a face. When these signals are combined using some smart machine learning technology, it’s possible to reconstruct the face the monkey had been looking at with striking fidelity to the original image.
“The face cells we are studying are at the highest level of the visual system,” Steven Le Chang, a researcher on the project, told Digital Trends. “Normally people think the code for neurons at this level should be rather complicated. However, our result shows that once we find the appropriate coordinates for faces, the code of faces could be understandable. Using this code, we are able to reconstruct the face the monkey saw and predict responses of face cells to an arbitrary face.”
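The code Le Chang describes is linear: each cell’s firing rate behaves like a dot product between a face’s position in a feature space and that cell’s preferred axis. Below is a toy Swift sketch of the idea, with invented dimensions and random stand-in axes; the real study fit both the axes and the decoder to recorded data.

```swift
// Toy linear face code: encode a face as 205 firing rates, then
// decode it back. Axes here are random stand-ins for each cell's
// measured preferred direction.
let dims = 50, cellCount = 205
let axes: [[Double]] = (0..<cellCount).map { _ in
    (0..<dims).map { _ in Double.random(in: -1...1) }
}
func dot(_ a: [Double], _ b: [Double]) -> Double {
    zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
}
// Encoding: one rate per cell, a linear projection of the face vector.
func responses(to face: [Double]) -> [Double] { axes.map { dot($0, face) } }
// Decoding: for roughly orthogonal random axes, the transpose recovers
// the face up to a known scale; the study used a fitted linear decoder.
func decode(_ rates: [Double]) -> [Double] {
    var face = [Double](repeating: 0.0, count: dims)
    for (rate, axis) in zip(rates, axes) {
        for d in 0..<dims { face[d] += rate * axis[d] }
    }
    let scale = Double(cellCount) / 3.0  // E[axis_d^2] = 1/3 for U(-1,1)
    return face.map { $0 / scale }
}
```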

(Image: Doris Tsao)
But as interesting as the work is from a biological perspective, is there any possible real-world application for it? Quite possibly, yes, Le Chang explained.
“Potentially, if we could decode faces from neural activity in the human brain [as well as the monkey one], there will be a lot of real-world applications,” he said. “In general, that will help human subjects convey concepts which are otherwise difficult. For example, a witness of a crime scene may have a hard time describing the face of the criminal. If we could directly decode the face based on the witness’ memory, we can extract the criminal’s face in a much more objective and quick way. Of course, whether memory activates the same population of cells as seeing the face is still an open question.”
Next up, the researchers wish to extend their study from neutral faces to expressive faces, as well as to other types of objects. They also want to investigate how imagination or memory of faces affects the representation in face patches.
Apple shows off new photography features coming to iOS 11
Why it matters to you
Apple iOS users now shoot over one trillion photos per year, and with iOS 11’s new photo features, that number will only go up.

For iPhone users, photography has long been a central part of the mobile experience. In fact, iOS users now take over one trillion photos per year, according to Apple. So it comes as no surprise that at the 2017 Worldwide Developer Conference (WWDC), Apple announced several new photography-related features coming to iOS 11.
One of the more unique features that’s been in iOS for a couple of generations now is Live Photos. A Live Photo is essentially a short video, capturing more of the moment than a single frame. With iOS 11, Live Photos will become significantly more flexible, potentially changing how iPhone owners use them. Users will be able to select a new keyframe from anywhere within the Live Photo, which could be incredibly helpful for shooting things like sports, pets, or kids. Additionally, iOS 11 users can trim the length of the Live Photo to change the total duration, or save it as an autoplaying video loop or a Boomerang-style “bounce.” It will also be possible to save long exposure photos similar to shooting with a slow shutter speed on a DSLR camera.
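The keyframe, trim, and loop controls are user-facing Photos features, but developers already have a programmatic hook into Live Photo frames through PhotoKit’s editing context. A compact sketch, with asset fetching, adjustment data, and the final library commit elided:

```swift
import Photos
import CoreImage

// Run a Core Image filter over every frame of a Live Photo using the
// PhotoKit editing context introduced in iOS 10.
func stylize(_ asset: PHAsset) {
    asset.requestContentEditingInput(with: nil) { input, _ in
        guard let input = input,
              let context = PHLivePhotoEditingContext(livePhotoEditingInput: input)
        else { return }
        context.frameProcessor = { frame, _ in
            // frame.type distinguishes the still photo from video frames;
            // frame.time gives each frame's position in the clip.
            frame.image.applyingFilter("CIPhotoEffectNoir", parameters: [:])
        }
        let output = PHContentEditingOutput(contentEditingInput: input)
        context.saveLivePhoto(to: output, options: nil) { success, error in
            // Committing to the library additionally needs adjustmentData
            // and a PHPhotoLibrary change block.
            print("Rendered: \(success)")
        }
    }
}
```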
Apple also promises more creative control over the look of iPhone photos thanks to new professional-quality filters built into the camera app. Users can select a filter for a variety of effects, from making skin tones more natural to applying classic looks to portraits.

When it comes to viewing your photos and videos, iOS 11 also includes a revamped Memories feature. Currently, the auto-generated slideshows that appear in Memories are formatted to be viewed in landscape orientation only, with any portrait-orientation content displaying in a cropped or downscaled format. With iOS 11, playing back Memories slideshows will automatically adjust to fill the screen regardless of how you hold your device, which should make for a more natural and better-looking experience.
Additionally, developers will soon be able to take advantage of the dual-camera Portrait mode on the iPhone 7 Plus with a new Depth API. Third-party apps will be able to implement the same depth-sensing capabilities that Apple uses in the default iOS camera app to simulate a shallow depth of field.
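Here’s a minimal sketch of opting into depth delivery with the iOS 11 capture APIs; session setup and the delegate implementation are elided:

```swift
import AVFoundation

// Request per-pixel depth alongside the photo. Requires hardware with
// a depth-capable camera, such as the iPhone 7 Plus dual camera.
func captureWithDepth(output: AVCapturePhotoOutput,
                      delegate: AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    if output.isDepthDataDeliverySupported {
        output.isDepthDataDeliveryEnabled = true
        settings.isDepthDataDeliveryEnabled = true
    }
    output.capturePhoto(with: settings, delegate: delegate)
    // In the delegate, photo.depthData?.depthDataMap is the depth buffer
    // an app can use for Portrait-style background blur.
}
```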
iOS 11 also introduces some under-the-hood technology updates that will help optimize a device’s storage and data usage. Apple is moving from standard JPEG compression for its still images to a new file format it’s calling HEIF, for High Efficiency Image Format. Apple claims HEIF offers twice the compression effectiveness of JPEG but will still be fully shareable, which saves space on a user’s device and iCloud storage, shortens the time it takes to share an image, and uses less data when doing so.
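Once the OS supports it, apps can also write HEIF directly through ImageIO. A small sketch; the function name is ours, and “public.heic” is the HEIC uniform type identifier:

```swift
import CoreGraphics
import Foundation
import ImageIO

// Save a CGImage as HEIC, the container iOS 11 uses for HEIF photos.
func writeHEIC(_ image: CGImage, to url: URL, quality: Double = 0.8) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(
        url as CFURL, "public.heic" as CFString, 1, nil) else {
        return false  // HEIC not supported on this OS or hardware
    }
    let options = [kCGImageDestinationLossyCompressionQuality: quality] as CFDictionary
    CGImageDestinationAddImage(dest, image, options)
    return CGImageDestinationFinalize(dest)
}
```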
Video files will receive a similar treatment, thanks to the new HEVC codec. This is especially important given the high-resolution 4K videos that iPhones now shoot, and could potentially save a lot of storage space for users who shoot video frequently.
A developer preview of iOS 11 is available today, with the public release planned for sometime in fall 2017.
Updated June 5, 2017 to include additional details from Apple’s iOS 11 preview page.