iPhone SE 2 rumors are running rampant, but there is little consensus about what to expect should the device be real.
The latest word comes from Japanese blog Mac Otakara, citing Chinese accessory makers who claim production has yet to begin for the second-generation iPhone SE. In fact, Apple is apparently still considering a final design for the device among the several different prototypes it is said to have tested.
The report claims that at least one of those prototypes has an iPhone X-esque design, including a nearly full screen display with no home button and a notch, while other prototypes are believed to have a similar design as the current iPhone SE, except with a glass back, presumably to allow wireless charging.
Ben Geskin recently shared photos of what appears to be iPhone display glass with a shallow cutout resembling the TrueDepth sensor housing on iPhone X, but in a follow-up tweet, he said his “main source” says that the new iPhone SE will have the same design and display as the current model, but with a glass back.
Alleged iPhone SE (2018) parts?
via https://t.co/pJsOlU5ZlN pic.twitter.com/zYD8f7ybc0
— Ben Geskin (@VenyaGeskin1) May 10, 2018
I have to say, that my main source is saying that SE2 will have the same design (and display) as the first SE, but with a glass back.
— Ben Geskin (@VenyaGeskin1) May 10, 2018
Meanwhile, MacRumors obtained renders from case maker Olixar last week that depicted a new iPhone SE with an iPhone X-esque display, but still with an aluminum, flat-edged frame like the current model and iPhone 5s. Olixar said its renders are based on information obtained from a “reliable source” in China.
The renders suggest the iPhone SE’s notch would be approximately half as wide as the one on the iPhone X, likely making it too slim to house facial recognition sensors. One possibility is that the iPhone SE does move to an edge-to-edge design, with a notch, but without any of the Face ID technology.
As we said at the time, however, it’s hard to imagine that Apple would include Face ID on the iPhone SE without significantly raising its price, as such a move would likely cannibalize sales of the iPhone X lineup.
And without Touch ID or Face ID, it’s unclear how authentication would work on the new iPhone SE. Chinese company Vivo’s X20 Plus UD became the first smartphone with an in-display fingerprint sensor early this year, but Apple seems to have decided against that approach early on with the iPhone X.
In whatever form it arrives, the accessory makers believe the new iPhone SE is more likely to launch in the third quarter of 2018, suggesting the device could be unveiled alongside a new iPhone X and iPhone X Plus in September. If true, however, Apple has likely already finalized the design of the device.
A handful of earlier reports and regulatory filings had suggested the new iPhone SE could be unveiled in May or at WWDC 2018 in June.
It’s unclear how the Chinese accessory makers obtained their information, but they could be confusing one of the prototypes with the rumored 6.1-inch iPhone. That device is expected to have some iPhone X features like Face ID, but lack an OLED display, stainless steel frame, and 3D Touch to achieve a lower price point.
All in all, it’s a confusing mess right now. Apple often prototypes several iPhone models, and it could even have some fake versions passing through the supply chain to throw off leakers. But where there’s smoke, there’s often fire, so specifics aside, it sounds like a new iPhone SE of some kind is in the works.
Apple hasn’t truly updated the iPhone SE since it launched in March 2016, beyond doubling its available storage capacities a year later. The device currently starts at $349 in the United States with 32GB of storage.
Related Roundup: iPhone SE
Tag: macotakara.jp
Buyer’s Guide: iPhone SE (Don’t Buy)
Discuss this article in our forums
Apple today seeded the fifth beta of an upcoming macOS High Sierra 10.13.5 update to developers, one week after seeding the fourth beta and more than a month after releasing the macOS High Sierra 10.13.4 update.
The new macOS High Sierra 10.13.5 beta can be downloaded through Apple Developer Center or the Software Update mechanism in the Mac App Store with the proper profile installed.
macOS High Sierra 10.13.5 introduces support for Messages on iCloud, a feature that was previously present in macOS High Sierra 10.13.4 betas before being pulled ahead of the release of the update. Messages on iCloud is also available in iOS 11.4.
The Messages on iCloud feature is designed to store your iMessages in iCloud for improved syncing. Right now, incoming iMessages are sent to all devices where you’re signed into your Apple ID, but it’s not true cloud-based syncing: your old messages don’t show up on new devices, nor does deleting a message remove it from all of your devices. Messages on iCloud enables both.
Messages on iCloud also allows your older iMessages and attachments to be stored in iCloud rather than on your iPhone, iPad or Mac, saving valuable storage space.
The update also likely includes bug fixes and performance improvements for issues that weren’t addressed in macOS High Sierra 10.13.4, but as Apple does not provide detailed release notes for macOS High Sierra beta updates, we may not know exactly what’s included until the new software is provided to the public.
No other major outward-facing changes were found in the first four betas of macOS High Sierra 10.13.5, but we’ll update this post should any new features be found in the fifth.
The previous update, macOS High Sierra 10.13.4, brought support for external graphics processors (eGPUs) along with Business Chat in Messages and several other bug fixes and smaller feature improvements.
Related Roundup: macOS High Sierra
Apple today seeded the fifth beta of an upcoming watchOS 4.3.1 update to developers for testing purposes, one week after seeding the fourth watchOS 4.3.1 beta and more than a month after releasing watchOS 4.3.
Once the proper configuration profile has been installed from the Apple Developer Center, the new watchOS 4.3.1 beta can be downloaded through the dedicated Apple Watch app on the iPhone by going to General –> Software update.
To install the update, the Apple Watch needs to have at least 50 percent battery, it must be placed on the charger, and it has to be in range of the iPhone.
Shortly before watchOS 4.3.1 was introduced, Apple stopped allowing developers to update Apple Watch apps built on the original watchOS 1 SDK. In watchOS 4.3.1, there’s an alert when launching a watchOS 1 app that warns that the app may not be compatible with future versions of watchOS. This suggests Apple will drop support for watchOS 1 apps entirely in the future, just as it did with support for 32-bit iOS apps in iOS 11.
Code hidden within watchOS 4.3.1 also suggests Apple is working on introducing support for custom third-party watch faces, but this is likely a feature that would be introduced as part of a major watchOS 5 update rather than through watchOS 4.3.1.
There were no major new features discovered in the first four watchOS 4.3.1 betas, which is unsurprising as this is a minor 4.x.x update. It’s likely this update focuses primarily on bug fixes to address issues discovered since the release of watchOS 4.3.
Apple does not provide detailed release notes for watchOS, so we may not know what’s included in the update until it sees a public release.
Related Roundups: Apple Watch, watchOS 4
Buyer’s Guide: Apple Watch (Neutral)
Apple today seeded the fifth beta of an upcoming tvOS 11.4 update to developers for testing purposes, one week after seeding the fourth beta and more than a month after releasing the tvOS 11.3 update.
Designed for the fourth and fifth-generation Apple TV models, the new tvOS 11.4 developer beta can be downloaded onto the Apple TV via a profile that’s installed using Xcode.
tvOS 11.4, paired with iOS 11.4, reintroduces AirPlay 2 features that were present in early tvOS and iOS 11.3 betas but were removed ahead of release. With AirPlay 2, the same audio content can be played on multiple devices (like the Apple TV) throughout your home, and audio playback can be controlled via the iPhone or through Siri commands. After installing iOS 11.4 and tvOS 11.4, the Apple TV is also once again listed in the Home app.
There were no other new features discovered in the first four tvOS 11.4 betas, and the update likely focuses on bug fixes and other small improvements. Apple’s tvOS updates have historically been minor in scale, and we may not find any other new additions.
We’ll update this post should new features be introduced in the fifth beta of tvOS 11.4.
Related Roundup: Apple TV
Buyer’s Guide: Apple TV (Neutral)
Apple today seeded the fifth beta of an upcoming iOS 11.4 update to developers, one week after seeding the fourth beta and more than a month after releasing iOS 11.3, a major update that introduced several new features.
Registered developers can download the new iOS 11.4 beta from Apple’s Developer Center or over-the-air once the proper configuration profile has been installed from the Developer Center.
The iOS 11.4 update introduces a new ClassKit framework for educational institutions, which supports new features announced at Apple’s March 27 education-focused event.
For regular users, the iOS 11.4 update adds features that were originally present in the iOS 11.3 beta but removed ahead of release.
It includes support for Messages on iCloud, designed to store your iMessages in iCloud rather than on each individual device, allowing for improved syncing capabilities. Currently, incoming iMessages are sent to all devices where you’re signed in to your Apple ID, but there is no true cross-device syncing.
Messages on iCloud will allow you to download all of your iMessages on new devices, and deleting a message on one device will remove it from all devices. Older messages and attachments are also stored in iCloud rather than on-device, saving valuable storage space.
The iOS 11.4 update also includes AirPlay 2 features, with the Apple TV once again available in the Home app. With AirPlay 2, the same audio content can be played in multiple rooms on devices that support AirPlay 2. AirPlay 2 includes a feature that lets you ask Siri on one device to play content on another AirPlay 2-enabled device. So, for example, you can ask Siri on iPhone to play content on your Apple TV in another room if you’re running the iOS 11.4 and tvOS 11.4 betas.
There were initially signs of support for HomePod stereo sound in the first iOS 11.4 beta, a long-promised feature, but stereo sound didn’t work properly and the mention was removed in the second beta. It’s not clear if it will return for the update’s release.
For the iPhone 8 and iPhone 8 Plus, there is a new (PRODUCT)RED wallpaper available, which is not available on iPhone X.
iOS 11.3, the previous update to iOS 11, introduced a new Battery Health feature for monitoring the status of your iPhone’s battery, Business Chat for iMessage, which lets you communicate with companies directly in the Messages app, ARKit 1.5 with augmented reality improvements, new Animoji on iPhone X, Health Records from participating medical providers, and more.
Related Roundup: iOS 11
The latest in the G series tries to bring the relationship between LG and Google even closer, while once again upgrading its main feature, the camera experience. The LG G7 ThinQ brings an AI-enabled camera, a Google Assistant button, and more, but is that enough to bring LG back to the forefront of the flagship game? Let’s find out in our LG G7 hands-on.
First things first: the ThinQ name at the end of this phone’s official title is a connection to LG’s IoT platform. However, during our time with the phone in Korea, we were told we can simply call it the LG G7.
The LG G7 looks a lot like a mix between the V30 and the G6.
Though LG has moved to a slower release cycle for its mobile devices, the influence of past designs is still very strong in this handset. The device looks much like the LG V30, which itself took its shape from the LG G6. The tall screen now sports a notch, and the glossy back comes in four different colors. It might not be the most unique-looking device, but it is still a looker. What is interesting about the G7 is how LG is ever so slightly moving away from certain touches that helped differentiate its devices from the rest of the field.
A big example of this is the fingerprint reader, which is still in the same place as before, but no longer doubles as the power button. Instead, the power button is located more conventionally on the side, which might feel a little odd to LG veterans. Apparently, this was changed to make waking the device easier. Some users found the back power button a little annoying to reach, especially in situations like driving (or when your phone is lying face up on a table). The power button on the side also affords the phone a new way of quickly launching the camera with just a double tap.
A new AI button is now found on the other side of the phone. LG has partnered up with Google very closely again to provide G7-specific functionality. Google Assistant is the G7’s AI assistant of choice, and this button allows for full control over its triggering. Press the button and it will launch Google Assistant in the same way that holding the home button does. Double press the button and it will launch Google Lens. Press and hold the button and Google Assistant will listen for as long as the button is pressed down for the voice search string, making the start and end of your query easier to identify.
The AI key is a unique and potentially invaluable way of interacting with Google Assistant
That last function might seem simple — like talking to Google through a walkie-talkie — but it might be the most significant change to the way Assistant works. Only Google Pixel Buds support that kind of long-press interaction, where the touch-sensitive earbuds can be held down to take a search string for the duration of the hold. Think of all the times you failed to get a proper search done with Google Assistant because it failed to recognize when you’d stopped speaking. For that reason alone, this implementation will make Google Assistant a bit easier to manage and use. It is certainly a different take on what we have seen in the form of squeezing on other devices.
The notch is becoming a very common addition in new phones this year, and every manufacturer is trying to bring their own spin on it, LG included. First off, the earpiece speaker and the improved front-facing camera sit centered at the top, with the notch covering just that portion of the screen.
More: The top 7 LG G7 ThinQ features you need to know
LG tries to make the notch easier on the eyes through customization. It can be “turned off” by making the entire notification area black. Those same areas can also be made different colors and gradients. Admittedly we are a little miffed that what is called the “New Second Screen” doesn’t really add any new functionality, like the old second screens of the LG V10 and V20 once did.
The screen is still a powerful Quad HD+ IPS LCD panel. An OLED screen might have appealed to more people, but LG chose to stick with LCD for one particular reason: brightness. When set to Super Bright Display mode, the screen can get to 1000 nits if the user so chooses and can stay that bright for a maximum of three minutes.
The screen has also been tuned to retain color fidelity and text sharpness. This goes beyond simply changing the screen settings in broad daylight: in direct sunlight, the screen will brighten automatically, prioritizing textual elements in places like the phone or messaging apps. Only when the user triggers Boosted Mode does the whole screen become very bright while still remaining easy on the eyes.
Everything looks crisp and proportionate on this display. LG’s software has been accused in the past of feeling bloated, without a fully coherent design language. Yearly updates have been good to this now Oreo-enabled version of its interface.
There isn’t much wasted space in the new menus. The home screens can be changed to include not only an app drawer button, but also a swiping motion similar to the Pixel Launcher or Samsung UI, and LG’s own companion home screen experience seems simple enough.
LG’s push for AI capabilities shows not only in the included SmartThinQ application, but also in its Smart Bulletin, which tries to give users contextual information based on location and time, displayed as cards on a dedicated page. It isn’t too hard on the eyes, though we will need to spend more time with it to see if LG’s algorithms are up to par with the likes of Google Feed and Bixby.
The specs you’d expect from a 2018 high-end device power this phone. It’s got a Snapdragon 845 with either 4GB of RAM and 64GB of onboard storage, or 6GB of RAM and 128GB of storage. The larger version will be available later, though we are not sure which markets will get it (we were told the U.S. will only get the 4GB/64GB version). A 3,000mAh battery might not sound huge, but we will reserve judgment until our real-world usage and tests. All the other bits and pieces you would want in the phone are here, including the headphone jack.
LG continues to be one of the few companies that pays this much attention to the audio experience.
A single bottom-firing speaker might sound like a pretty conventional setup, but LG has taken it a step further by creating Boombox Sound. This is a unique take on the speaker experience, as it makes the entire back portion of the phone into a sound chamber.
Don’t miss: LG G7 ThinQ specs: Fantastic audio and a super bright screen
In simple terms, the space between the back cover and everything it protects is now a place for sound to literally emanate from. When playing audio at loud volumes, the entire back of the phone will vibrate — just enough so you can feel it, but not so much that it becomes annoying. This vibration makes the sound resonate through anything the phone might lay on. Any box or hollow container will thus create great sound.
The vibrations make materials like thin wood and cardboard boxes literally amplify the sound, making for a richer experience than any other device we’ve tested can deliver, even with the old trick of putting a phone in a glass cup. There’s a striking difference between the LG G7 and any other phone we’ve tested; practical or not, this is an interesting way of thinking (or listening) outside the box by LG.
The headphone jack returns, bringing with it the Quad DAC once again, this time tuned for the DTS:X 3D audio standard. That standard is essentially an add-on that changes the soundstage of whatever you are listening to, either narrowing or widening the audio experience.
Read more: What is DTS:X virtual surround sound in the LG G7 ThinQ?
The Quad DAC is still one of the big trump cards for LG’s phones. Anyone disillusioned by the USB Type-C adapters of the world can rest assured everything out of this phone sounds really great, no matter what headphones you use. Also, the Quad DAC plus active noise cancelling is one hell of a combination.
The camera has been LG’s most unique and most polarizing phone feature the last few years. Let’s start off with some great news: the front-facing camera is finally good — and not just because of the bump up to 8MP from 5MP. Just from a cursory glance, the sharpness has been greatly improved and details are no longer smudged by shoddy processing and over-softening of features.
The front-facing camera also now has a portrait mode, which is software based but still welcome for the artistic bokeh background effects it brings. The improvements to the front-facing camera alone make the G7 a viable upgrade for anyone unhappy with selfies on their previous LG phones.
Finally — an LG flagship with a good front facing camera!
Portrait mode tends to take advantage of a dual lens setup, most commonly through a regular and telephoto lens. LG did not want to mess with a good thing, however, and kept the wide angle lens. It now has a 107-degree field-of-view, which helps correct the distortion on the sides of the frame without sacrificing too much of the wide angle. That wide angle lens is still one of the best parts of this phone — the style and drama it affords still makes for one of the most unique photo-taking experiences around.
Because the rear camera lacks a telephoto lens with tighter zoom capabilities, LG’s portrait mode keeps the field-of-view the same as the main lens. This means users don’t have to step back because the camera is trying to reach farther into the frame. Some might prefer the look of the tighter frame, but others may like not having to move to make the portrait work. Though our devices were pre-production units, the portrait mode still needed a little more work: it messed up the cutouts around elements like hair and my glasses in certain shots.
LG decided on a pixel-binning solution for low-light photography. Essentially, the 16MP sensor behind either rear camera groups its pixels into clusters of four, so each cluster can gather light more effectively in a darker scene (the same approach Huawei took with the P20 and HTC took with its UltraPixels). The result is a 4MP image that should expose better than the alternative. We didn’t get to test this much, but we’re looking forward to trying it further when we get our review unit.
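As a rough illustration (a toy model, not LG’s actual imaging pipeline), 2x2 pixel binning amounts to combining each block of four photosite readings into a single output pixel:

```python
def bin_pixels(sensor, factor=2):
    """Average each factor x factor block of photosite readings into one output pixel."""
    h, w = len(sensor), len(sensor[0])
    return [
        [
            sum(sensor[y + dy][x + dx] for dy in range(factor) for dx in range(factor))
            / factor ** 2
            for x in range(0, w, factor)
        ]
        for y in range(0, h, factor)
    ]

# A toy 4x4 "sensor" binned 2x2: each output pixel combines the light
# captured by four neighboring photosites, at the cost of a quarter of
# the resolution (16MP -> 4MP in the G7's case).
frame = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(bin_pixels(frame))  # [[2.5, 4.5], [10.5, 12.5]]
```

Combining four readings boosts the effective signal per output pixel, which is why trading a 16MP frame for a brighter, cleaner 4MP one can pay off in the dark.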
Video is still a big deal with the LG G7. It retains the manual movie mode and the high-quality audio capture capabilities of the LG V30. The wide-angle lens is still very viable, too, though I noticed I could not change between the two lenses while recording.
The camera AI is mostly consistent at identifying the subject, but the tag cloud might make some users think otherwise.
LG already introduced AI features in the updated LG V30S ThinQ, and that phone’s tag cloud has made its way to the G7. The cloud is a flurry of words showing what object or subject the camera is trying to recognize when AI is on. Once it decides what is in the frame, settings change to enhance the photo. Linger on a plate of Korean BBQ for a second and the camera will switch to its food settings, which mainly bump up the saturation. Take a picture of a tree and the AI mode will make the greens pop.
AI camera modes are still fairly new, and their effectiveness comes down to how well the software is optimized. I saw some odd words in the cloud during a selfie session, but it never ended up deciding I was a cauliflower, and the camera generally found its subject by the end of its search. The camera has been trained to recognize thousands of objects, and it is always searching. We will see how the AI Cam fares when we test it further on a final production review unit.
There you have it: the LG G7 ThinQ. LG proves each year it really prioritizes key aspects others might take for granted (like a high-quality headphone jack, for instance). While some of its decisions this year seem more trend-following (like the notch), the features where LG tries to be different are what people will really appreciate. The wide-angle camera is great but we will have to wait and see if Boombox Sound ends up on the list, as well.
We are excited to get our hands on an LG G7 review unit to see how the handset fares under more rigorous testing. Until then, let us know how you feel about the LG G7 below!
Next: LG G7 ThinQ price, availability, and release date
Rolling out to all users now.
Gmail’s updated desktop interface has continued to be a very welcome change since it went live late last month, and now it’s being updated with another new feature that I’m sure many users will be thrilled to have – a native offline mode.
First spotted by 9to5Google, you can now use certain Gmail features even without a data/internet connection as long as you’re running Chrome 61 or later.
To turn this on, navigate to Settings, click on the new Offline tab, and check the box next to “Enable offline mail.” Once this is done, you can choose to store emails for 7, 30, or 90 days, whether or not you want to download attachments, and choose between one of two security modes.
Those two modes are described by Google as follows:
- Keep offline data on my computer – Data stored on your device will not be deleted when signing out of your Google account or when changing your password. To delete account data from your device disable offline mail and save changes.
- Remove offline data from my computer – Data will need to be resynced to your computer when logging back into Gmail. It may take a few hours to resync the mailbox.
With offline mail turned on, you’ll be able to compose new emails, search for existing ones, and archive/delete others to slowly achieve inbox zero. This functionality is rolling out to all Gmail users now, so be sure to keep an eye out for it.
Security researchers will be publishing what they claim are critical vulnerabilities in PGP/GPG and S/MIME email encryption on May 15. In the meantime, the EFF advises you to disable PGP email clients. GnuPG offers different advice.
A team of European researchers claim to have found critical vulnerabilities in PGP/GPG and S/MIME. PGP, which stands for Pretty Good Privacy, is code used to encrypt communications, commonly email. S/MIME, which stands for Secure/Multipurpose Internet Mail Extension, is a way to sign and encrypt modern email and all the extended character sets, attachments, and content it contains. If you want the same level of security in email as you have in end-to-end encrypted messaging, it’s likely you’re using PGP / S/MIME. And, right now, they may be vulnerable to hacks.
We’ll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. #efail 1/4
— Sebastian Schinzel (@seecurity) May 14, 2018
Danny O’Brien and Gennie Gebhart, writing for the EFF:
A group of European security researchers have released a warning about a set of vulnerabilities affecting users of PGP and S/MIME. EFF has been in communication with the research team, and can confirm that these vulnerabilities pose an immediate risk to those using these tools for email communication, including the potential exposure of the contents of past messages.
Our advice, which mirrors that of the researchers, is to immediately disable and/or uninstall tools that automatically decrypt PGP-encrypted email. Until the flaws described in the paper are more widely understood and fixed, users should arrange for the use of alternative end-to-end secure channels, such as Signal, and temporarily stop sending and especially reading PGP-encrypted email.
Dan Goodin at Ars Technica notes:
Both Schinzel and the EFF blog post referred those affected to EFF instructions for disabling plug-ins in Thunderbird, macOS Mail, and Outlook. The instructions say only to “disable PGP integration in e-mail clients.” Interestingly, there’s no advice to remove PGP apps such as Gpg4win or GNU Privacy Guard. Once the plugin tools are removed from Thunderbird, Mail, or Outlook, the EFF posts said, “your emails will not be automatically decrypted.” On Twitter, EFF officials went on to say: “do not decrypt encrypted PGP messages that you receive using your email client.”
Werner Koch got hold of the report and, on the GNU Privacy Guard Twitter account and the gnupg mailing list, retorts:
The topic of that paper is that HTML is used as a back channel to create an oracle for modified encrypted mails. It is long known that HTML mails, and in particular external links, are evil if the MUA actually honors them (which many meanwhile seem to do again; see all these newsletters). Due to broken MIME parsers, a bunch of MUAs seem to concatenate decrypted HTML MIME parts, which makes it easy to plant such HTML snippets.
There are two ways to mitigate this attack:

- Don’t use HTML mails. Or if you really need to read them, use a proper MIME parser and disallow any access to external links.
- Use authenticated encryption.
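Koch’s second mitigation, authenticated encryption, works because the recipient refuses to process a ciphertext that has been modified in transit, which is exactly the kind of modification this attack depends on. Here’s a minimal sketch of the tamper-detection half of that idea, using only Python’s standard library, with a placeholder byte string standing in for real OpenPGP ciphertext:

```python
import hashlib
import hmac
import secrets

def protect(key: bytes, ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any modification of the ciphertext is detectable."""
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(key: bytes, blob: bytes) -> bytes:
    """Return the ciphertext only if the tag checks out; otherwise reject outright."""
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext was modified; refusing to decrypt")
    return ciphertext

key = secrets.token_bytes(32)
blob = protect(key, b"...opaque encrypted bytes...")
assert verify(key, blob) == b"...opaque encrypted bytes..."

# Flip a single bit of the ciphertext, as an attacker injecting an
# HTML snippet would have to, and verification fails before any
# mail client could ever render the tampered plaintext.
tampered = bytes([blob[0] ^ 1]) + blob[1:]
try:
    verify(key, tampered)
except ValueError:
    print("tampering detected")
```

Modern authenticated modes like AES-GCM fold this check into decryption itself; the point is simply that a flipped bit in the ciphertext is rejected before an HTML engine ever sees the plaintext.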
There’s a lot to sift through here and the researchers aren’t releasing their findings to the public until tomorrow. So, in the meantime, if you use PGP and S/MIME for encrypted email, read the EFF article, read the gnupg mail, and then:
- If you feel the least bit concerned, temporarily disable email encryption in Outlook, macOS Mail, Thunderbird, etc. and switch to something like Signal, WhatsApp, or iMessage for secure communication until the dust settles.
- If you’re not concerned, still keep an eye on the story and see if anything changes over the next couple of days.
There will always be exploits and vulnerabilities, potential and proven. What’s important is that they’re disclosed ethically, reported responsibly, and addressed expeditiously.
We’ll update this story as more becomes known. In the meantime, let me know if you use PGP / S/MIME for encrypted email and, if so, what’s your take?
Favstar, a popular service for seeing which of your tweets were the most loved and discovering other popular tweets, will be shutting down on June 19, 2018. Because, Twitter.
Come June 19, 2018, or thereabouts, Twitter will be stopping its streaming API. That’s the application programming interface developers of third-party Twitter apps, including Twitter clients and web services, use to refresh timelines and send push notifications. Twitter will be providing a new Account Activity API to replace it, but not much is known about it, and time is running out for developers to be granted access so they can implement it quickly enough to avoid interruptions in service. That uncertainty is what’s leading Favstar, the popular service for seeing which of your tweets were the most favorited and discovering other highly favorited tweets, to shut down.
During December 2017 Twitter stated that on June 19th 2018 they will be shutting down the method that Favstar and other third-party Twitter apps use to receive your Tweets, Likes, and Retweets. You can read more about this on Apps of a Feather.
Twitter wrote that they’ll be replacing this with another method of data access, but have not been forthcoming with the details or pricing. Favstar can’t continue to operate in this environment of uncertainty.
Just like it’s never safe to scurry beneath the feet of uncaring (or just plain clumsy) giants, it’s never safe to build your service on Twitter (or Facebook or Google, for that matter). If it’s a valuable service, there’s always the chance the tech giant will want to own it for themselves. In the best case, they buy you out for some serious cash. In the worst case, they simply copy your value and make it native.
If it’s not a valuable service, even if it’s a beloved one, the tech giant might just end your access to its users, social graph, or other data. And then you’re done.
You can say the tech giants are well within their rights to do what they want with their services. And fair enough. But many of these relationships are symbiotic, and the third parties provide experiences that, in turn, elevate the tech giant.
Twitter, infamously, achieved a lot of early success, in part, thanks to the work of third-party developers. Then it got big, decided to focus less on tech and more on the mainstream, advertising, and entertainment, and began starving out third-parties. These days, Twitter simply seems like it has no idea what it is or what it wants to do.
And Favstar is just the latest victim. There will likely be more.
Favstar Pro is no longer for sale. Anyone who has a Favstar Pro Membership beyond June 19th will receive a refund.
So long, and thanks for all the laughs,
Thank you, Tim.
If you used Favstar, let me know — how do you feel about it being shut down?
Step aside, Hal.
By now, you’ve probably heard a thing or two about Google Duplex. During its annual I/O developer conference on May 8, Google debuted its Duplex technology by showing it calling a hair salon and restaurant to make an appointment and reservation on someone’s behalf.
In both instances, the AI was able to have a natural conversation with the human on the other end, understand what they were saying, and get everything booked just like it had been asked.
Most everyone watching this demonstration seemed floored by it at the time, but since then, a number of articles have criticized Google for even creating something like this and aired concerns about what sort of impact it will have on our world.
Although I understand some of the apprehension surrounding this technology, I’m beyond excited for this tech to make its way into people’s hands. Here’s why.
Duplex is already jaw-dropping at such an early stage
It’ll likely be a while before Google Duplex makes its way into a consumer-focused product that’s available for anyone to use, and while that long wait is a bit of a bummer, it makes Duplex that much more impressive.
Just imagine where Duplex will be in a year’s time.
Google's clearly not done working on Duplex and will continue to improve upon it as we inch closer to its release. When that day eventually comes around, Duplex will be even better than what we saw last Tuesday.
If this technology is already as human-sounding and capable as it is in this current form, imagine where it could be after a few more months of hard work.
Duplex is only going to get better as time goes on, and while that’s something we see with a lot of Google’s products, it’s especially exciting here.
It can turn the Assistant into the ultimate virtual helper
Moving away from the technical side of things, let's take a look at how Google chose to show off Duplex for the first time: through the Google Assistant.
The Assistant was first announced a couple years back as a part of Google’s Allo messaging app, and it’s since made its way to smartphones, watches, tablets, TVs, speakers, and more. No matter where you access the Assistant, however, it always has the same intent of helping out where it can to make your life as easy as possible.
In its current form, the Assistant already does a great job at this. Asking it to turn off smart lights is easier than manually flipping switches throughout the house, ordering movie tickets with just your voice takes less time than fumbling with an app or website, and being able to set timers, ask about the weather, or call your friend without having to touch any sort of screen allows for multi-tasking that simply wasn’t possible a few years back.
That theme of making your life easier is seen through just about everything the Assistant does, and with Duplex, it’s going to be elevated to a point that’s never been seen before.
Duplex elevates the Assistant to a level of helpfulness we’ve never seen from Alexa or Siri.
As necessary as they sometimes are, phone calls can be time-consuming, clunky, and a pain in the butt. However, they still need to get done.
With Duplex, we’ll be able to call upon the Assistant we already know and love, ask it to make an appointment on our behalf, and not have to worry about a thing. The Assistant will use Duplex to call that business, get everything squared away, and add that event to your calendar – all while you focus on whatever else needs your attention.
That level of interaction is something only human assistants were previously capable of, and it’s wild to think we’re already at a point where something like this is possible.
The possibilities are endless
As exciting as all of this is, we're still talking about just one specific use case for this technology. Google may have only shown off Duplex calling a hair salon and a restaurant, but it certainly won't limit Duplex to those two scenarios.
It’s difficult to imagine exactly where else Duplex could be integrated, but at least on the subject of Assistant, there are a few things that come to mind. If Duplex allows the Assistant to call places like salons and restaurants, what about personal calls to friends and family? Say you want to make plans for a movie or tell your spouse you’re on the way home from work. Why wouldn’t Duplex be able to handle these things?
Furthermore, what about when Google decides to expand Duplex beyond the Assistant? What sort of use cases could we see for this technology in Maps, YouTube, Android Messages, Gmail, and so on? Why not use the Duplex voice as the default one for the Assistant? The implications for accessibility are also staggering.
There are so many paths Google can take with what it’s created, and these unknown possibilities are thrilling.
Are you ready for a Duplex world?
Google Duplex is unlike anything we’ve seen before, and while that’s admittedly a bit scary, I’m still incredibly optimistic to see how it evolves and affects our world as time goes on.
Google has already said that Duplex will identify itself as non-human when making phone calls on our behalf, and I'm sure we'll see more conversations about the ethics of it all over the coming weeks and months.
Duplex is the first technology of its kind, and I’m ecstatic to be able to experience this first hand and see where we go from here.