
Archive for 2 Jun

Apple Leaks Video of macOS 10.14 Showing Xcode 10 With Dark Mode, News App, and More


Steven Troughton-Smith today discovered a brief video on Apple’s servers that appears to show Xcode 10 running on macOS 10.14.

The well-known developer says he found the 30-second clip buried within an API on the backend of the Mac App Store. He shared a direct link to the video, embedded below, with 9to5Mac’s Guilherme Rambo.


Given that the video originates from Apple’s servers and is for Apple’s own Xcode development tool, everything shown is very likely real.

Ladies and gentlemen, I give you Xcode 10 on macOS 10.14. Dark Appearance, Apple News, App Store w/ video previews pic.twitter.com/rJlDy81W4W

— Steve Troughton-Smith (@stroughtonsmith) June 2, 2018

That includes:

  • Xcode 10 has a new dark interface, and the Trash icon in the Dock is also darker, suggesting that macOS 10.14 may feature a systemwide dark mode that extends to apps. On macOS 10.13, there is only a partial dark mode, limited to the Dock and top menu bar.
  • There is an Apple News icon in the dock, suggesting that it will be expanding to the Mac with a desktop app.
  • The desktop background could be a picture of the Mojave Desert in California during the night, hinting at a macOS Mojave name for the next version. MacRumors recently noted that Mojave could be Apple’s top choice based on the company’s recent trademark activity.

The video itself also likely confirms rumors that the Mac App Store will be redesigned on macOS 10.14 to more closely resemble the App Store on iOS 11, including the addition of preview videos like this one for apps.

The leak comes just two days before Apple’s annual Worldwide Developers Conference, where the company is expected to preview macOS 10.14 alongside iOS 12, watchOS 5, and tvOS 12. WWDC opens with a keynote on Monday at 10:00 a.m. Pacific Time at the McEnery Convention Center in San Jose, California.

MacRumors will be in attendance at the keynote, with live coverage of the event beginning shortly after 8:00 a.m. Pacific Time. Stay tuned to MacRumors.com and our @MacRumorsLive account on Twitter.


2 Jun

The latest version of MacOS now supports storing your messages in Apple’s cloud


The latest version of MacOS High Sierra is out, and with it arrives support for Messages in iCloud. Apple’s update comes just days after the release of iOS 11.4 for the iPhone, iPad, and iPod Touch, which introduced the same feature. You can now store essentially all of your messages and related attachments in Apple’s cloud, and they automatically appear on any of your Apple devices signed in to the same iMessage account.

To activate this feature in MacOS, open the Messages app and navigate to Messages > Preferences > Accounts > Enable Messages in iCloud.  

To activate this feature on iOS, open the Settings app and navigate to [your name] > iCloud > Enable Messages in iCloud. 

When turned on, this feature lets you read all of your messages from any Apple device, whether it’s a MacBook or an iPhone X. That also means that if you delete a message, it’s gone for good: there’s no backup stored on your other Apple devices.

According to Apple, this cloud-based feature aims to free up space on your devices. But there’s a drawback: your messages are stored in Apple’s cloud, so if a hacker breaks into your iCloud account, they could theoretically gain access to your private messages and attachments. But don’t worry: Apple says your messages are encrypted from end to end.

Outside the new Messages in iCloud feature, MacOS High Sierra 10.13.5 improves “stability, performance, and security.” Also on the short list are two changes for the enterprise: variables in SCEP payloads now expand properly, and there’s a fix for configuration profiles containing both a Wi-Fi payload and a SCEP payload. Apple provides a second, security-focused list of notes here.

To get the latest version of MacOS, open the App Store, click on the Updates button at the top, and wait a moment for the app to catch up with Apple. If you don’t see High Sierra 10.13.5, you might need to install incremental security updates before Apple pushes the latest High Sierra build to your device. 

MacOS High Sierra 10.13.5 arrives just days before Apple’s developer conference next week. We expect to see iOS 12 along with ARKit 2.0, the company’s platform for augmented reality apps on iOS. Other platforms expected to make an appearance include watchOS 5, tvOS 12, and possibly MacOS 10.14.

Apple is also expected to spend time discussing how it will bring iOS and MacOS closer together without combining the two platforms. If anything, Messages in iCloud does just that: it brings a unified messaging experience to MacBooks and iPhones alike. Apple CEO Tim Cook has even previously said customers don’t want iOS and MacOS merged into one platform.

Meanwhile, what you may not see during the show are new devices, as the conference will supposedly focus strictly on software rather than new software running on new hardware. Apple’s MacBook portfolio isn’t expected to receive an upgrade until late 2018, the same timeframe in which Apple will likely introduce its next iPhone(s) and new MacBook Air laptops.

Editors’ Recommendations

  • Vudu update finally lets Apple TV users stream Marvel, Disney films in 4K
  • Hey, I didn’t order this dollhouse! 6 hilarious Alexa mishaps
  • Firefox 60 is the first browser to support password-free internet logins
  • After the San Bernardino iPhone fiasco, lawmakers introduce the Secure Data Act
  • This innovative chunky padlock promises to be virtually unpickable


2 Jun

Collect and battle mystical creatures in Might and Magic Elemental Guardians! [Game of the Week]



Update June 1, 2018: Become the strongest Wizard in Ashan in Might and Magic Elemental Guardians, and then kick back and relax with the Sudoku-inspired puzzle game Hexologic!

Might and Magic: Elemental Guardians

Set in the same world as the previous games in the Might and Magic franchise, Elemental Guardians is a brand new RPG designed exclusively for mobile that tasks you with collecting and evolving over 400 different creatures and battling them in both a single-player campaign and PvP battles.

This is a total collect-a-thon with a ton of daily missions and achievements to work towards. The animations and graphics are top-notch and there are even augmented reality features that let you bring your favorite creatures into our world and let battles unfold on your coffee table, or wherever you choose.

Might and Magic: Elemental Guardians is entirely free to play and is definitely worth checking out if you love mobile RPGs. If you pre-registered for the game, you’ll be happy to know that you also unlock Andy the Android bot as an exclusive reward, so you won’t want to miss out on that!

Download Might and Magic: Elemental Guardians (Free w/IAPs)

Hexologic

Who doesn’t love a good puzzle game? Hexologic is a brand new game for Android that combines the core concept of the popular pen-and-paper puzzle Sudoku with the colorful and relaxing gameplay of a good Android puzzle game.

The premise of the game is pretty simple — you tap to add dots to each hexagon in the grid and must make sure that the sum of the dots in each row adds up to the number at the end. Like any great puzzle game, Hexologic is easy to grasp but manages to ramp up the difficulty as the puzzles get more and more complex. There are over 60 puzzles to complete, with more advanced modes available to up the challenge.
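That core rule is simple enough to express in a few lines of code. Here’s a minimal sketch of the row check (our own illustration, not the game’s actual code, and assuming each hex holds one to six dots, dice-style):

```python
# Toy check of Hexologic's core rule: the dots in a row of hexes
# must add up to the row's target number. Our own illustration,
# assuming dice-style cells that hold 1-6 dots each.
def row_is_solved(dots: list[int], target: int) -> bool:
    return all(1 <= d <= 6 for d in dots) and sum(dots) == target

print(row_is_solved([2, 5, 3], 10))  # True: 2 + 5 + 3 == 10
print(row_is_solved([6, 6], 13))     # False: 6 + 6 == 12, not 13
```

The depth comes from hexes being shared between intersecting rows, so a change that fixes one sum can break another.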

If you’re constantly on the lookout for a new puzzle game to chill with during your downtime, give Hexologic a try!

Download Hexologic ($0.99)

Android Gaming


  • Best Android games
  • Best free Android games
  • Best games with no in-app purchases
  • Best action games for Android
  • Best RPGs for Android
  • All the Android gaming news!

2 Jun

The Very Real reason LG built Google the sharpest OLED display ever



Everything bad about current generation VR is about to disappear under a flood of pixels.

In late May, LG and Google showed off a tiny piece of glass that’s going to change VR and AR forever. It was a 4.3-inch, 3840 x 4800 (18-megapixel) OLED display with a 120 x 96-degree field of view. A quick check of the math says that works out to 1,443 pixels per inch, which makes it the highest-resolution display ever. Oh — it also has a 120Hz refresh rate.
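For the curious, the pixels-per-inch figure is easy to sanity-check. Here’s a quick back-of-the-envelope sketch (our own math, assuming the 4.3-inch measurement is the panel’s diagonal):

```python
import math

# Sanity check of the quoted pixel density, assuming the 4.3-inch
# figure is the panel's diagonal (our assumption, not LG's spec sheet).
width_px, height_px = 3840, 4800
diagonal_in = 4.3

diagonal_px = math.hypot(width_px, height_px)  # ~6,147 px corner to corner
ppi = diagonal_px / diagonal_in

print(f"Total pixels: {width_px * height_px / 1e6:.1f} MP")  # ~18.4 MP
print(f"Pixel density: {ppi:.0f} ppi")                       # ~1,430 ppi
```

That lands within about one percent of the quoted 1,443 ppi, and the small gap is consistent with the 4.3-inch diagonal itself being a rounded number.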

If you have an Oculus Rift or HTC Vive, you might understand why this is amazing. If you don’t, you need to understand that VR headsets work by projecting images on two small displays positioned very close to your eyeballs. That gives a sense of immersion, which is kind of important if you want to make virtual reality feel like real reality. It also means that everything can look like a screen door because the pixels are so close to your eyes, and that you can feel claustrophobic because the field of view is too narrow. Even a 60 or 90Hz refresh rate can leave you feeling a bit seasick under the right (or wrong) conditions.

VR needs a very good and very stable display. More is better here.

You also need to know that Google Daydream and Samsung’s Gear VR are very cool, but smartphone-powered VR isn’t nearly as “powerful” as a headset driven by a PC’s processor and GPU, and it can amplify every one of these problems. VR is new technology, and the issues that surround every new tech are there to be solved. One way to work them out is to put a kicking display in front of your eyes that aims to reach the limits of human vision.

Some things look better behind the screen door.

We’re told by some very smart science people that the bounds of human vision are 9,600 x 9,000 pixels at 2,180 ppi, with a 160 x 150-degree field of view. That means this display is closer than ever to showing everything our eyes are mechanically capable of seeing. Compare that to something like the HTC Vive Pro and its 1440 x 1600 (625 ppi) display at 90Hz. The Vive Pro is a great VR headset, and using it can feel immersive with the right content — it really can be Virtual Reality. This panel from LG is far better.

So this new screen is built for VR and will be a conduit to improve the tech. That’s expected as smart people work on those problems. But I don’t think anyone expected Google to be the company that wanted it built. Google can do VR pretty well, both in the smaller, affordable space with Daydream and with standalone VR headsets like the one we’ve already seen from Lenovo. But I don’t think this display was designed for any product Google currently “makes”.

This is how VR will sell — no wires, no backpacks. Google just has to make it amazing.

This new display has to go inside a standalone unit that doesn’t make you look like a Ghostbuster when you wear it, or nobody will buy it.

You need plenty of oomph to push this many pixels at 120Hz, and a small self-enclosed headset with a mobile chip isn’t going to be able to do it. There is no way around that simple fact, and we already know that the panel can’t run past 90Hz on any existing mobile chipset. It’s not just a matter of raw horsepower: small mobile chips are heat-constrained. They can run fast and hard, but then they get hot and need to take a rest. This panel isn’t going in something like the Lenovo Mirage Solo, because it needs a CPU and GPU running fast enough to drive it without hitting that thermal wall.
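To put rough numbers on that load, here’s an illustrative calculation (our own, not LG’s or Google’s) of the raw fill rate a headset with one of these panels per eye would demand:

```python
# Rough pixel throughput for a headset with one 3840 x 4800 panel per eye.
# Our own illustrative numbers; real rendering cost depends on much more
# than raw pixel count (lens distortion, reprojection, scene complexity).
width_px, height_px, eyes = 3840, 4800, 2

for refresh_hz in (90, 120):
    pixels_per_sec = width_px * height_px * refresh_hz * eyes
    print(f"{refresh_hz} Hz: {pixels_per_sec / 1e9:.1f} billion pixels/s")
# 90 Hz: 3.3 billion pixels/s
# 120 Hz: 4.4 billion pixels/s

# Compare an HTC Vive Pro: 1440 x 1600 per eye at 90 Hz.
print(f"Vive Pro: {1440 * 1600 * 90 * 2 / 1e9:.1f} billion pixels/s")  # ~0.4
```

That works out to roughly ten times the fill rate of a Vive Pro, sustained rather than in bursts, which is exactly the kind of load that pushes a passively cooled mobile chip into that thermal wall.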

I think Google is going to try and reinvent stand-alone VR again. Google loves to tinker with existing products and Daydream Standalone is a shiny new platform that hasn’t been sullied by dirty hands yet — it has to be tempting to try “stuff”. If there is a way to match portability with GPU rendering power, they will find it.

Whether we buy it or not is the real question.

2 Jun

Amazon’s one-day PNY sale features discounted storage, SSDs, graphics cards, and more


This limited-time sale is one you don’t want to miss out on.

Amazon is offering a variety of PNY products at up to 25% off today only, from microSD cards and graphics cards to flash drives and SSDs. This sale is part of Amazon’s Gold Box deals of the day and brings many of these items down to their lowest prices ever.

If you’re on the hunt for a microSD card, you could grab an Elite-X 128GB version for $40.49 or the Elite-X 256GB version for $79.99. They’re ideal for recording and storing 4K footage.

There are also USB flash drives from $11, and Lightning + USB flash drives from $38 that can plug directly into your iPhone or iPad.

Some of the other options on sale today include:

  • Attache USB 2.0 Flash Drive, 32GB (3-pack) for $17.99 (was $30)
  • Elite Performance 128GB High Speed SDXC Card for $35.99 (was $46)
  • 240GB 2.5″ SATA III Internal Solid State Drive for $55.99 (was $70)
  • 128GB Duo Link Lightning + USB 3.0 Flash Drive for $59.99 (was $98)
  • Elite Performance 512GB High Speed SDXC Card for $149.99 (was $212)
  • GeForce GTX 1060 3GB Graphics Card for $229.99 (was $280)

See at Amazon

2 Jun

5 reasons I still hate voice assistants, even as the world goes nuts for them



I have multiple devices that I can talk to, and outside of telling Siri to turn off the lights when I go to bed (because I’m a lazy, terrible person), I don’t talk to a single one of them. In fact, I’ve disabled the voice-activation function on all the ones that allow it. I don’t like talking to my devices. I find it silly, and I don’t find that it makes my life any easier.

But the numbers show that I’m virtually alone in my disdain for voice-activated electronics. The Alexa-enabled Fire TV Stick is Amazon’s best-selling device, along with the Echo Dot, Fire TV 4K, and Echo Spot — all devices that want you to tell them what to do.

And people are using these devices like crazy. Alexa Skills are multiplying daily, and people are using their Echoes to order dog food and coffee instead of firing up an app or a web browser. Meanwhile, Apple is dead-set on making voice a premier way to interact with its products, hiring hundreds of engineers and designers to make Siri smarter.

Let me back up a bit. I was an early adopter when it came to voice activation. In 2006, I adopted a Nabaztag voice-activated robotic rabbit. I loved her. I’d ask her for the news headlines, she’d read me my emails, and I’d pay attention to her telltale ears when the weather was bad.

And then Siri, Alexa, Google Assistant, PlayStation Eye, Kinect, and others came along. They all wanted me to talk to them. To tell them what to do. I learned the voice commands like a good nerd and, once I had my home dialed in to the extent that I could change the thermostat temperature, turn on the TV and even choose a channel, and dim the lights with my voice, I quit.

I quit it all.

I realized I just don’t like talking to my devices, and I don’t find that doing so is any better than using an app or website to do what I need to do.

Once I had my home dialed in to the extent that I could change the thermostat temperature, I quit.

Before I go any further, I’d like to make a couple of things clear. First and foremost, I love new technology. My entire home is smart, from the thermostats, to the lights, to the TV, to the smoke detectors, to the security systems. I like connected devices. Second, I think voice commands have a time and a place, namely in cars to reduce distraction, and in cases where disabilities or other impairments make them a viable way to interact with your tech.

I also believe that at some point in the future — probably sooner rather than later — when Siri and Alexa can parse more than simple commands, I’ll be on board. This isn’t the rant of a Luddite who just doesn’t get it. This is the rant of a nerd who doesn’t need it.

With that out of the way, let’s dig into why I don’t like to talk to my devices.

It’s goofy

I have a friend who loves talking to his devices. I get it — he’s as much of a nerd as I am, and telling your device to remind you to pick up some milk before heading home is a convenience that’s hard to replicate. But every time he does it, he gives me an apologetic look as if to say, “Yeah, I know what I’m doing is goofy, but I’m doing it anyway.” I can respect that. But it’s still goofy.

It’s slow

Telling your device to do something – from the moment you realize you want to change the temperature to the moment the temperature changes, for instance – is rarely faster than picking up your phone to get it done. Sure, your smartphone might be across the room and you just want to set the thermostat to 72 degrees, but when was the last time your smartphone wasn’t within reach?

It’s frustrating

Let’s assume I’m wrong about the slowness thing (I’m not). Even if it is faster, at least at first, to use your voice to change the temperature or to order a product, it’s often incorrect. This undermines the convenience and speed of talking to your devices.


“Alexa, set temperature to 72 degrees.”

“Okay, I ordered ten packets of Cold Eeze. They will be delivered by UPS on Saturday.”

The old-school alternative — changing the temperature (or ordering Cold Eeze for that matter) — is seeing your choice and an exact, visual confirmation before following through with what you actually want. No muss, no fuss. It just works.

It’s incomplete

Ever notice how excited people get when Alexa or Google Assistant can do something new, like parents seeing their toddler say a real word for the first time? “Alexa can now set your DVR to record The Daily Show!”

You could do that for years just using your DVR or a remote app like Harmony. And it just works.

It’s a gimmick – a party trick.

The fact that we get excited when our digital assistants learn new tricks is evidence that they’re incomplete infants. We get so excited that they can do something as simple as take a note or recite an email that we forget how useless they really are in the big picture, and how often we go back to the “old” way of getting things done anyway. In other words, it’s a gimmick – a party trick. Perhaps in the future, when they’re smarter, we’ll find real use for them, but let’s be real: they don’t do anything we couldn’t do last year with our fingers.

It’s creepy

I spend enough of my life online to know that the concept of privacy is a moving target. All of my devices track what I do, mostly to make my experience with them more tailored to my habits, but also to make other people more money. I’ve accepted that, just as I have accepted advertising, marketing, and being a contributing part of society.

With that said, I still find the idea that devices are always waiting for me to say, “Alexa!” or “Okay Google” a bit, well, creepy. It’s likely I’ll get over it, but I’m not quite there yet. Reports of Alexa randomly sending recordings of private conversations to friends are not helping the situation.

Don’t get me wrong

Eight years ago, beloved animator and director Hayao Miyazaki of Studio Ghibli (Spirited Away, Princess Mononoke) infamously dismissed capacitive gesture interfaces as obscene, saying, “For me, there is no feeling of admiration or no excitement whatsoever. It’s disgusting. On trains, the number of those people doing that strange masturbation-like gesture is multiplying.”

Miyazaki was last seen attempting to learn gesture and touch interfaces as he struggles to get his upcoming film ready for the world. In short, he was wrong, and he admits it.

Voice commands are clearly part of our future. This is not me saying that voice commands — or those who embrace them — are pointless or even obscene. I just don’t think they’re ready yet. But, hey: Thanks for beta testing them and teaching them some new tricks while the rest of us wait for prime time.

Editors’ Recommendations

  • The best Amazon Echo Easter eggs
  • Apple HomePod tips and tricks to get you started
  • Nest vs. Ecobee: Which is the better smart thermostat?
  • ‘Alexa, give me a cookie!’ Are digital assistants ruining my son?
  • How to connect your smart home gadgets to your Amazon Alexa device


2 Jun

How broke college grads made animation software used in Jurassic Park and Iron Man



In an unheated, defunct imperial knife factory in Providence, Rhode Island, a small team of young Brown and Rhode Island School of Design grads used task lamps to keep their fingers warm as they typed. The exposed beams and sandblasted brick made for a nice visual atmosphere, but turning on the furnace would cost far too much to keep the old building warm.

The year was 1992, before the world wide web was, well, wide, and the group of ten or so team members making up the Company of Science and Art (COSA) had already failed at one project: selling digital magazines on CD-ROMs via snail mail. Now, another project was in jeopardy. Apple had just launched QuickTime, a serious competitor to the small company’s own video playback app, PACo Producer.

With maybe six months of cash left to burn, the team grasped for one last possible lifeline in an attempt to pivot and reinvent themselves. What they created was nothing short of a revolutionary piece of software, one that went on to power the special effects in a wide range of blockbusters: a program we know today as Adobe After Effects. Having launched out of beta in 1993, After Effects is now celebrating its 25th anniversary.

An idea born of desperation

After Effects did for animation and motion graphics what Photoshop did for photography and graphic arts by providing a layer-based system for animation design. Creating digital animations before After Effects was possible, but other programs limited the number of layers to just two or three. After Effects had no such limit, giving creators far more flexibility.
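To illustrate what an unlimited layer stack means in practice, here’s a toy sketch of layer-based compositing (our own simplification, not COSA’s or Adobe’s actual code):

```python
# Toy layer-based compositor: an arbitrarily deep stack of layers
# blended bottom-to-top with per-layer opacity. A simplified
# illustration of the concept, not COSA's or Adobe's code.
from dataclasses import dataclass

@dataclass
class Layer:
    color: tuple[float, float, float]  # RGB, each channel in 0.0-1.0
    opacity: float                     # 0.0 transparent, 1.0 opaque

def composite(stack: list[Layer]) -> tuple[float, ...]:
    """Blend each layer over the result so far, starting from black."""
    out: tuple[float, ...] = (0.0, 0.0, 0.0)
    for layer in stack:  # no cap on depth, unlike two- or three-layer tools
        a = layer.opacity
        out = tuple(a * c + (1 - a) * o for c, o in zip(layer.color, out))
    return out

stack = [
    Layer((1.0, 0.0, 0.0), 1.0),   # opaque red base
    Layer((0.0, 0.0, 1.0), 0.5),   # half-transparent blue
    Layer((1.0, 1.0, 1.0), 0.25),  # faint white wash
]
print(composite(stack))  # (0.625, 0.25, 0.625)
```

Run every pixel of every frame through arithmetic like that on early-’90s hardware, and it’s easy to see why renders were overnight jobs.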

“We were just trying to keep the company alive — we were just trying to survive.”

The layer-based essence of After Effects, where every layer can be a video, was there from the very first version, but other trademarks of the software, including the timeline that allows you to navigate to different frames, were missing. Instead, animators navigated using pause, play, fast-forward, and rewind controls. Rendering an animation on that very first version took long enough that the task was often left to run overnight.

Dave Simons, one of the original creators at COSA who now works with Adobe’s young Character Animator program, said that the team knew their little operation wouldn’t be able to compete with Apple’s QuickTime. When they brainstormed ideas, they tossed out everything from a mosquito spray to what became After Effects. Animation software was more in line with their earlier work and something they wished they had at hand when creating digital magazines on CD-ROMs, so going that direction made perfect sense.

“We were just trying to keep the company alive — we were just trying to survive,” Simons said. “I consider myself very lucky that we were at the right time and right place. We had great people working on it, but a lot of things had to come together for it to become the success that it is. It’s still shocking to me, and it’s still growing. More and more people use it.” Simons still works with some of the original After Effects team from COSA at Adobe, including interface “surgeon” David Cotter and Paul Usatello.

The concept for the software was partially inspired by the tedious process of creating keyframe animations in Photoshop, designing each frame individually. The group also found inspiration watching music videos on a stack of seven second-hand TVs all tuned to MTV — back when MTV actually showed music videos.

Adobe initially turned it down

The team spent nine months creating After Effects, running on a bare minimum budget (hence, the unheated building). They approached several large companies for funding, including Adobe, but found no takers. Thankfully, QuickTime actually helped create a higher demand for PACo Producer, giving the struggling company a few more months than they expected and keeping them afloat until the release of After Effects in January 1993.

After Effects has kept the original essence while vastly expanding the tools for creating animations and composites.

The software was not only enough to save the company, it also attracted large buyers. COSA was acquired by Aldus, which was in turn bought by Adobe in 1994, where After Effects found its forever home. That new ownership also meant the program could grow far beyond its original scope, integrating with Photoshop and eventually becoming part of today’s Creative Cloud suite of applications.

Since then, Adobe After Effects has kept the original essence while vastly expanding the available tools for creating animations and composites. A timeline was added in the second version while support for three-dimensional animations came in version 5. More recently, Adobe expanded the collaborative capabilities of After Effects, enabling distributed teams to work on the same project in near real time.

After Effects’ first major Hollywood use was to create an on-screen animation in Jurassic Park, in the scene where an animated Dennis Nedry pops up on a computer and says, “Ah ah ah, you didn’t say the magic word!” Since then, the program has become a common tool in the industry. The software is often used, as in that original on-screen animation, to create fake user interfaces on futuristic computers, including the flashy heads-up display graphics in Iron Man 3, but its capabilities have branched out into many other areas of special effects. The program is also commonly used for creating credits, and is part of Pixar’s toolkit for making its stylish animated credit sequences.

“I’m not an artist, but I love creating tools that let artists be more efficient and expressive, to give them the tools to express,” Simons said. “To be able to aid in the creation of that kind of art, and moving images of all types, is still very satisfying to me and what drives me.”

Editors’ Recommendations

  • The best free photo-editing software
  • How to make GIFs with Photoshop (or these free alternatives)
  • Adobe enables faster workflow with updates to XD, InDesign, and Illustrator
  • Snap and edit pictures like a pro with the best camera apps for Android
  • What’s the difference between Lightroom CC and Lightroom Classic?


2 Jun

Google just removed the ‘Tablets’ section from the official Android site


In case you didn’t already know that Android tablets were dead.

Android tablets have always been interesting beasts. Companies like Motorola and Samsung tried making them popular with the Xoom and Galaxy Tabs early on, and Google soon swooped in with home runs such as the Nexus 7 and Nexus 10. However, due to a lack of developer support and no proper big-screen optimizations for the OS, Android tablets never caught on the way the iPad did.


It’s been apparent for some time that Google’s all but given up on Android tablets, but now the final nail has been driven into the coffin: Google has quietly removed the “Tablets” section from the official Android website.


If you visit android.com, you’ll see a navigation bar with links to Phones, Wear, TV, Auto, and Enterprise. A Tablets button was there prior to today, but now it’s nowhere to be seen.

This isn’t surprising in the slightest, considering that Google’s last tablet was the overpriced Pixel C from 2015, and the quiet removal from the Android site without any big announcement goes to show that Google knows no one really cares about Android tablets, and hasn’t for some time.

So long and farewell, Android tabs. You were never really amazing, but we’ll still miss you (kind of, but not really).

Here’s to Chrome OS tabs 🍻

I, for one, am totally OK with Chromebooks replacing Android tablets

2 Jun

Here are five new features we want to see in Apple’s iOS 12



As a software-focused event, Apple’s Worldwide Developers Conference dedicates a large portion of its keynote to unveiling the latest operating system for the iPhone. Last year’s iOS 11 set out to allow for more customization and accessibility options. On Monday, June 4, we’ll see what Apple has up its sleeve for iOS 12 when it presents the new operating system at WWDC 2018 in San Jose, California.

Ahead of its debut, rumors have surfaced about what could be included in iOS 12 — software to combat iPhone addiction, HomeKit improvements, and more. But while there are plenty of features that we could possibly see in the new OS, there are also ones we’re hoping will actually make the cut. Here are a few of the new features we want to see in iOS 12.

More customizability in the control center


One of the most notable changes in iOS 11 was the control center. Past versions of iOS only allowed you to access the basics in the control center — Wi-Fi, Bluetooth, toggles for the flashlight, camera, calculator, and brightness adjustments. But in iOS 11, you’re able to add quick-access toggles for Notes, Stopwatch, Text Size, and Wallet, among others. We were particularly in favor of the ability to add Low Power Mode — which used to require multiple steps to simply toggle on. With iOS 11, we can simply swipe up on the control center and tap the battery icon instead.

In our iOS 11 review, we hoped future updates would be less restrictive regarding what you can add — but that has yet to happen. That’s why we’re hoping to see additional options in iOS 12. It would be nice to have other useful apps available in the same motion as the camera or calculator — social media or productivity apps you use often, for example. We’d also like the ability to choose between different Wi-Fi networks through the control center, rather than only being able to toggle Wi-Fi on and off.

Customization shouldn’t stop at icons, either: even though you can control the order of the controls you add, there’s no way to rearrange the menu as a whole. Similar to the way you rearrange your app icons, it’d be nice to be able to drag and drop each module in the control center freely.

More Siri capabilities

While Siri didn’t receive a ton of updates in iOS 11, it did give users the ability to type out questions and commands instead of saying them out loud. But with iOS 12, we want to see an upgrade to the voice assistant — with more functionality. Especially in a world of Google Assistant and Amazon Alexa, it seems Siri only sticks to the basics.

For starters, Siri isn’t reliable enough to handle too many commands at once. We’d like to see a feature similar to Google’s Multiple Actions, where you’re able to make several requests in a single command. Another feature we want in iOS 12 is for Siri to be able to launch specific things within third-party apps with a voice command. For example, if we want to play a specific song in Spotify, we’d be able to state the command to Siri, which would prompt our iPhone to automatically play the song. As of now, Siri can only launch the app itself; it doesn’t let us go into specifics.

There are rumors that Siri is supposed to be getting smarter, with more details to be unveiled at WWDC. We hope that, at the very least, the company has spent the last year working to improve the voice assistant’s ability to understand its users. Even though Siri supports a variety of languages, users still find that it often misunderstands exactly what someone is saying or asking it. Even if the more complex features aren’t added, it’d be nice for Apple to improve Siri’s transcription capabilities.

Group FaceTime

It’s rumored that iOS 12 will allow users to FaceTime in groups — which is a feature we would definitely like to see. Friends and family constantly communicate via group chats on iMessage, so it only makes sense to have the same capability in FaceTime. Rather than having to install third-party apps, it’d be extremely convenient to have the option built right into the operating system. Group FaceTime was also rumored for iOS 11 last year, and we hope the feature makes the cut this time around.

More Animoji options


This update is for iPhone X owners specifically, seeing as the device includes Apple’s Face ID — the company’s facial-recognition system. Even though Animoji debuted less than a year ago, we’re still hoping to see additional options at WWDC. Currently, there are about 12 different animals to choose from, but there are far more emoji Apple could add to make the feature more fun.

While Samsung’s AR Emoji — which launched a few months back — didn’t receive the best response in terms of accuracy, it definitely set a precedent for all the possibilities when it comes to digital cartoon avatars. With AR Emoji, you’re not only able to mirror a character or animal but you’re also able to create one of yourself — allowing you to choose a skin tone, outfit, hairstyle, hair color, and glasses. We would definitely like to see the same wide range of customizability with Apple’s Animojis in iOS 12.

Fewer bugs


There’s an exhaustive list of different bugs found in iOS 11, ranging from microphone problems to notifications disappearing and even Touch ID not working. Perhaps one of the strangest glitches was the letter “A” and a symbol appearing when some users typed the letter “i.” Even though Apple released updates for nearly all of them, it’s inconvenient to constantly run into issues on an operating system that’s supposed to improve the user experience with new features and capabilities.

With iOS 12, we hope to see Apple’s new OS run far more smoothly this time around. There have been reports that suggest the company is already focusing on this — rather than making it feature-heavy, iOS 12 will focus on ironing out bugs from its software. Let’s hope this is the operating system that finally breaks the cycle of iPhones crashing and problems piling up.

Editors’ Recommendations

  • From new MacBooks to iOS 12, here’s what we’re expecting at Apple WWDC 2018
  • Here’s what we want from MacOS in 2018
  • At WWDC 2018, Apple to show off its latest software innovations
  • Apple’s iOS 13 could feature a revamped Files app and better multitasking
  • Apple iPhone X review


2 Jun

Watch 958 drones create a 400-foot tall Time cover in lights instead of pixels


The iconic red border and masthead on Time magazine for the issue now nestled on newsstands weren’t created by graphic design software. The Time cover for the June 11 issue was instead created by 958 flying drones — and shot by another drone. For the Time special-edition Drone Report, the magazine created the cover in the sky using Intel’s Shooting Star, a fleet of drones and software used to create UAV light shows. The in-flight cover was shot in Folsom, California, on May 3, with the special-edition issue available beginning today, June 1.

Time’s cover is the first ever shot by a drone. The Shooting Star drones can fly up to 400 feet, but even at that height, they had to fly closer together than drone light show coordinators typically allow. Rather than the 10-foot radius the drones usually keep to prevent gusts of wind from causing an in-air collision, they flew with fewer than five feet between them.

Using Intel’s program to choreograph light shows, the drones created the border of the magazine that’s usually simply drawn onto a photograph with graphics software. The entire “cover” in the air, with all the drones in place, was about 328 feet tall, a buzzing collection of lights in the sky at dusk.

With 958 drones, the cover choreography was one of the largest drone shows ever staged in the U.S. The largest so far, to our knowledge, was in China with 1,374 drones, but Intel’s Shooting Stars broke a record for indoor shows with 100 drones and also created a 1,200-drone show for the Olympics this year.

Astraeus Aerial Cinema Systems used an airborne drone to capture the image, with LA Drones also working on the project. The cover was shot at sunset to create a gradient in the sky behind the drones, a time frame that also presented challenges in the form of additional wind and low-light capture.

“I’ve always been amazed at how different an image looks when you put it inside the red border of Time,” D.W. Pine, the creative director at Time, said in a statement. “What’s interesting about this is that the image is actually the border of Time. I’ve looked at that border and logo every single day on a flatscreen monitor, and to see it up in the sky, at 400 feet in the air, it was very moving for me.”

The cover houses the magazine’s special edition on drone technology, which explores the explosion in UAVs. The report looks at drones from several different angles, including safety, government regulation, and drone use in Hollywood.

Editors’ Recommendations

  • A defense company is building a drone that can fly continuously for one year
  • DJI partners up with Microsoft to give drones A.I. superpowers
  • China nabs world record for biggest drone display, but it’s a bit of a mess
  • The best drones of 2018
  • Halo Drone Pro review

