Apple’s Siri Turns Six: AI Assistant Announced Alongside iPhone 4s on October 4, 2011
On October 4, 2011, Apple held a media event in which it introduced Find My Friends, refreshed the iPod Nano and iPod touch, and revealed the iPhone 4s with its all-new Siri voice assistant. That means today marks the sixth anniversary of Siri's introduction to the world, although the AI helper wouldn’t be available to the public until the iPhone 4s launched on October 14, 2011.
In the original press releases for Siri, Apple touted using your voice to send text messages, schedule meetings, set timers, ask about the weather, and more. Apple explained Siri’s understanding of context and non-direct questions, like presenting you with a weather forecast if you ask “Will I need an umbrella this weekend?”
The original Siri interface on iOS 5
Siri on iPhone 4S lets you use your voice to send messages, schedule meetings, place phone calls, and more. Ask Siri to do things just by talking the way you talk. Siri understands what you say, knows what you mean, and even talks back. Siri is so easy to use and does so much, you’ll keep finding more and more ways to use it.
Siri understands context allowing you to speak naturally when you ask it questions, for example, if you ask “Will I need an umbrella this weekend?” it understands you are looking for a weather forecast. Siri is also smart about using the personal information you allow it to access, for example, if you tell Siri “Remind me to call Mom when I get home” it can find “Mom” in your address book, or ask Siri “What’s the traffic like around here?” and it can figure out where “here” is based on your current location.
Apple didn’t create Siri itself, however, as the company purchased the technology that’s now prevalent in all iOS devices by acquiring Siri, Inc., a spinoff of SRI International where the technology originated. Prior to the assistant’s presence on iPhone, Siri was a standalone app on the App Store (launched February 2010) that offered automated personal assistant services through integrations with third-party apps like OpenTable, FlightStats, Google Maps, and more. Users could interact with these apps using Siri’s voice-recognition technology, created by Nuance.
Just two months after Siri appeared on the App Store, reports of Apple’s acquisition of Siri surfaced in April 2010, and the purchase was quickly confirmed by representatives and board members from the voice-recognition company. According to Siri board member Shawn Carolan, “The offer from Apple was of a scope and tenor that it was a no-brainer to accept it.” The standalone app was removed from the App Store after Apple’s unveiling of its own Siri in October 2011.
Over the years, Siri has debuted new features and expanded to more devices, including the iPad (June 2012), iPod Touch (September 2012), Apple Watch (April 2015), Apple TV with Siri Remote (September 2015), Mac with macOS Sierra (September 2016), and HomePod (coming December 2017).
Since 2011, Siri has become a large enough part of Apple’s brand that the company just this year launched a series of advertisements focusing solely on the assistant’s helpfulness, aided by actor Dwayne “The Rock” Johnson. The latest version of iOS, iOS 11, brings a few improvements to Siri, including a more natural speaking voice, a Type to Siri option, and a translation feature.
Details about Siri’s origin at Apple have continued to emerge over the years, with voice actress Susan Bennett revealing a few behind-the-scenes tidbits about the early days of the project in an interview posted earlier in 2017. Bennett described having to say “nonsense phrases” like “Say the shrodding again,” which she later realized provided Apple with “all of the sounds of the English language.”

The next place Siri will be found is Apple’s HomePod speaker, which for a long time was simply called the “Siri Speaker” prior to its official unveiling at WWDC in June. HomePod will rely heavily on user interaction with Siri, allowing for music playback, HomeKit control, timers, news reports, and most of the tasks Siri can already do elsewhere. Most importantly, Siri will become a “musicologist” on HomePod, gaining a deeper understanding of music-related trivia to further enhance HomePod’s position as a high-quality audio device.
Despite these advancements, many users frequently point out Siri’s flaws and inconsistencies in certain situations. It has previously been rumored that Apple’s development of Siri has been hindered by the company’s commitment to privacy. But in an interview last month, Apple marketing VP Greg Joswiak argued that user data privacy and a smart AI assistant can coexist: “We’re able to deliver a very personalized experience… without treating you as a product that keeps your information and sells it to the highest bidder. That’s just not the way we operate.”
More Incidents Surface of iPhone 8 Plus Devices Bursting Open Due to Possible Battery Failure
A small but increasing number of iPhone 8 Plus owners have shared pictures of their devices burst open due to possible battery failure.
iPhone 8 Plus with display popped out via MacRumors reader Anthony Wu
MacRumors reader Anthony Wu, from Toronto, Canada, said he bought and unboxed a new iPhone 8 Plus on Sunday, but he was forced to return it by Monday after the display popped out. The damage was presumably caused by a defective battery inside the iPhone that swelled and placed pressure on the assembly.
We also received a similar photo today of an iPhone 8 Plus with the display burst open from iRepair, an iPhone, iPad, and Mac repair shop in Greece. In this case, we’re told the customer unboxed the device last night, plugged it in overnight, and in the morning it looked as it does in the picture below.

In the latter case, the customer was supposedly using only an official Apple power adapter and Lightning to USB cable.
There are now at least five cases of possible iPhone 8 Plus battery failure, following reports in Taiwan, Japan, and Hong Kong last week.
Following the first two reports, an Apple spokeswoman told MacRumors that the company is “aware” and “looking into” the matter. But the company didn’t immediately respond to our request for an update on the status of the investigation. Apple routinely looks into any possible safety concerns with its devices.
With millions of iPhones coming off the production line overseas, and thereby millions of lithium-ion batteries being manufactured, it’s common in the industry for there to be a very low percentage of defective units.
For that reason, five cases of suspected iPhone 8 Plus battery failure out of millions of devices probably isn’t much cause for full-blown concern at this point, but we’ll continue to monitor the situation to see if a larger trend develops.
By comparison, there were reportedly hundreds of Galaxy Note 7 devices with critical battery-related failures before Samsung recalled and discontinued the device. Some of the devices caught fire, as well, which posed greater safety risks that even prompted the FAA to ban the device from in-cabin use during flights.
Following a lengthy investigation, Samsung eventually admitted that the Galaxy Note 7’s battery had a design flaw.
We’ll update this article if Apple responds.
Apple to Remove Dice Subcategory From Games Section of App Store
Apple today informed developers that the Dice subcategory in the Games section of iTunes Connect has been eliminated, with Apple planning to remove the Dice section from the App Store itself in the near future.
Developers who have apps in the Dice subcategory received an email from Apple. The company says developers don’t need to take any action, but can change their app’s subcategory if desired.
Starting today, the Dice subcategory under Games will no longer be available for selection in iTunes Connect and will be removed from the App Store in the future. You are receiving this email because you have one or more apps in this subcategory.
While no action is required since subcategory selection is optional, you can change your app’s subcategory during your next update as described in View and edit app information, or change it now if your app has an editable app status.
Apple did not explain why it has opted to remove the Dice category from the App Store, but it was likely not a highly popular category, and its elimination streamlines the available sections in the App Store.
Lotto Machine has maintained #12 in the Dice category for quite a while ☹️ I guess now I’m a nothing in the games category instead pic.twitter.com/5WkfsWwaUV
— Steve T-S (@stroughtonsmith) October 4, 2017
Dice games can be rolled into a wide range of other categories. Game categories in the revamped App Store in iOS 11 include Action, Adventure, AR, Arcade, Board, Card, Casino, Family, Indie, Kids, Music, Puzzle, Racing, Role Playing, Simulation, Sports, Strategy, Trivia, and Word.
Update: Apple is also removing the Educational Games subcategory and the Catalogs subcategory from the Apps section of the App Store.
Adobe Elements 2018 can now pick your best shots, auto-trim videos
Why it matters to you
Adobe Elements continues to make it easy for inexperienced consumers to edit photos and videos with pro-like tools.
Adobe’s pared-down versions of the popular photo and video editing programs are getting even more advanced — without dropping the simplified user interface. On October 4, Adobe announced Adobe Elements 2018, which includes Adobe Elements Organizer 2018, Photoshop Elements 2018 and Premiere Elements 2018. The list of updates includes an auto curate option to select the best shots, new auto select tools, and several other new features.
With Adobe Elements 2018, the basic software suite moves from a version number to a year number format, joining the same nomenclature as the fully-fledged programs like Photoshop 2017. Moving forward, Adobe says future updates will also use the new naming structure.
A tool to eliminate the tedious task of turning hundreds of images into a dozen of the best shots headlines the list of new features, an option available in Elements Organizer. Adobe says the new feature will automatically pick the best shot, using factors such as quality, faces and the subject.
Adobe Elements Organizer
Automatically choosing the best shots is not a completely new idea, but it is one that has lived largely in photo apps rather than desktop software. EyeEm and Google Photos, for example, include artificial intelligence that automatically rates photos, while Everypixel Aesthetic is a platform dedicated entirely to determining just how good a shot is. Even Adobe’s more advanced photo organizer, Lightroom, does not have an auto curate feature.
A similar idea applies to the second new feature in Elements Organizer: a one-click option to put those best photos into a slideshow.
Photoshop Elements is also gaining a number of new automated edits, including an Automatic Selection tool. Adobe says users can simply click and drag, and the program will automatically refine the edge to select an object.
A second update makes group-photo blinks less of a big deal. The auto fix opens closed eyes, but the tool needs a second image in which the eyes are open. It is designed for editing group photos where everyone looks great except for a blink or two: using data from the second, eyes-open photo, Photoshop Elements automatically blends the open eyes into the edited photo.
The automatic curation and blink fix follow the previous version’s addition of tools that automatically correct a smile using facial recognition, along with intelligent search options.
Adobe’s basic video editor, Premiere Elements, also sees a few new tools, including a “Candid Moments” tool that allows users to grab a still photo from the video footage, choosing the peak moment. Automatic edits also come into video with a Smart Trim tool, an auto option for trimming footage to the best shots. Adobe says the trim mode makes choices based on the “style” of the video, but that users can also customize the parameters to tweak what the software selects automatically.
The updates also come with eight new guided edits, in-app tutorials that walk users through the process of creating a specific look. New options include guides for swapping backgrounds, creating double exposures, adding overlays and creating a watercolor look. For video, Elements will now guide users through those bounce-back GIFs where an action is played back and forth on a loop. Freeze frames with titles, action camera fixes and animated social media posts bring the total number of Elements guides to 67.
Photoshop Elements and Premiere Elements retail for a $99 one-time download, or $150 for both programs. Elements Organizer is included with both Photoshop Elements and Premiere Elements. Users of previous Elements programs can upgrade for $80 each or $120 combined.
Common iOS 11 problems and how to handle them
Apple’s latest mobile operating system, iOS 11, launched on September 19, and early adopters have already reported several significant bugs. The public and developer betas had been out since June, but it’s clear even the final version isn’t glitch-free.
We’ve searched a variety of forums for the biggest problems people are experiencing on their iPhone or iPad since the update. To help make the transition easier for you, here are the most common iOS 11 problems and potential solutions on how to deal with them.
Problem: Battery drain
As with any iOS update, battery issues come with iOS 11. Handling the new OS isn’t a problem for the next-generation iPhone 8, 8 Plus, and X, which all include the A11 Bionic chip, but if you have an older device, you might find your battery dying more quickly than it used to. On Apple’s forums, multiple users have complained that their batteries have been draining extremely fast since downloading iOS 11.
Potential solutions:
- If you haven’t updated to iOS 11 yet, you might want to hold off until the issue gets fixed in the next update. But if you did download the latest operating system, you can still downgrade back to iOS 10.3.3.
- Find out which apps are using the most power by going to Settings > Battery. This will show the exact percentage of battery life specific apps have used over the last 24 hours and the last seven days. You can then either delete the apps that drain the most power or use them minimally. If you don’t want to delete them, remember to properly quit apps after you’re fully done using them. If you know you’re going to return to an app within a short period of time, however, you’re better off leaving it open.
- According to USA Today, you can check whether you have a problem by noting your Usage and Standby times, both in the Battery section. Lock your phone for five minutes, then check again: if your Usage time has increased by more than a minute, your phone isn’t resting the way it should.
- For more suggestions on saving battery life, you can check out our iPhone battery tips roundup.
Issue: Device overheating
Not only have users on Apple’s forum been complaining about their devices getting extremely hot, but some claim to have experienced serious battery swelling that caused their iPhone 8 or iPhone 8 Plus to burst open. The battery appears to swell with gases inside, and the expansion places pressure on the display, causing it to pop open. While bursting open helps the device avoid a fire, you’ll still want to make sure your device doesn’t overheat.
Potential solutions:
- If overheating occurs when you’re using a specific app, then take a break from using it. We did notice the iPhone getting very warm when playing AR games, but it should shut down automatically if it ever gets too hot.
- Turn off Location Services by going to Settings > Privacy > Location Services. Since Location Services uses GPS, Bluetooth, and cell tower locations to determine where you are, it could be too much activity for your phone on top of running demanding apps. You can also turn off Bluetooth by going to Settings > Bluetooth, or by opening Control Center and tapping the Bluetooth icon.
- You can also kill any apps running in the background by double-clicking the Home button. This will pull up a gallery of apps that you can swipe through and swipe up to force close.
- If your iPhone starts to overheat while charging, unplug it and let it sit for a few minutes to cool down. Then, you can plug it back in.
- Taking the case off your device could help, too, since it will keep the case from trapping heat and let the phone dissipate it more easily.
Problem: Apps won’t work in iOS 11
Apple warned users back in iOS 10.3 that it would soon drop support for 32-bit apps. With iOS 11, the change is official, and any 32-bit apps on your phone will refuse to launch. You’ll instead receive a pop-up notification letting you know the developer needs to release a 64-bit update for the app to work.
Workaround:
- You can check which of your apps are 32-bit by going to Settings > General > About > Applications. If you have 32-bit apps, you’ll see an arrow next to the number of applications; tap the section to see a list of apps that don’t have updates available.
- If you get to the Applications tab and tapping on it doesn’t lead to another window, it means you don’t have any 32-bit apps installed.
Glitch: Apps freezing or quitting unexpectedly
A common problem users have been experiencing with iOS 11 is their apps freezing or quitting unexpectedly. The problem could stem from a variety of reasons ranging from the iOS version to the app itself. If you find yourself opening an app only to watch it shut down on you repeatedly, there are ways, other than simply deleting the app, to make sure it doesn’t happen again.
Potential solutions:
- Go to the App Store and tap on the Updates tab. If there’s an update available for the specific app that’s constantly crashing, then you can choose to update only that one or go through with all the updates available.
- Delete the app and redownload it. By holding down on the app and tapping the X button, you can delete it from your phone completely. Then, head over to the App Store and download the same app again. Once it’s complete, you can try and open it again from the Home screen.
- There’s also always the option to contact the developer of the app if the issue continues to happen. While under the app in the App Store, find the App Store Customer Reviews section and tap App Support.
- If you haven’t downloaded iOS 11 yet, make sure you tap “Agree” on the App Store’s Terms and Conditions after downloading an app. Changes in the terms can keep apps from launching, causing them to crash.
Problem: Touchscreen stops working
With iOS 11, users on the Apple Forum have been reporting their touchscreen is either lagging or becomes completely unresponsive. The issue has existed as far back as the beta with developers posting on the Apple Developer forum. If you find this happens to you, there’s always the common answer of making sure your screen is both clean and dry, and that you don’t need a new screen protector. But there are also other ways you might fix this issue within the operating system, rather than focusing on the screen itself.
Potential solutions:
- Check the 3D Touch Sensitivity bar by going to Settings > General > Accessibility > 3D Touch. If it’s on the Medium or Firm setting, you’ll want to set it to Light instead.
- The issue might also happen while using a specific app. In that case, exit out of the app and then force quit it by tapping on the Home Button twice. Once you restart the app, the touchscreen should start working again.
- If the touchscreen is completely unresponsive then a simple reset will often deal with it. The method is different for different iPhone models, so check out our guide on how to reset your iPhone.
Problem: Can’t connect to App Store
Aside from problems downloading apps, some users have also been experiencing problems with getting into the App Store. During the beta testing, Reddit users had the same issue and took to the iOS 11 thread to try to solve it. They’ve been receiving error messages saying their device cannot currently connect to the App Store.
Potential solutions:
- Sync the date and time on your device by going to Settings > General > Date & Time. Toggle off Set Automatically and manually set the date and time; this will also change the date and time shown in the Today tab within the App Store. Then open the App Store and leave it open for three seconds, go back to Date & Time in Settings, and toggle Set Automatically back on. Finally, kill the App Store through the app switcher and reopen it.
- After syncing the date and time, you can also restart your device. If you’re using an iPhone 6S or earlier model, press and hold the Sleep/Wake and Home buttons together until you see the Apple logo on screen. For the iPhone 7, and later, press and hold the Sleep/Wake and Volume down buttons.
- You can also try enabling cellular data for the App Store by going to Settings > Cellular and making sure Cellular Data is on, then scrolling down to App Store and toggling it on.
Bug: Can’t send emails with Outlook.com or Exchange mail account
When the official version of iOS 11 first launched, users were receiving an error message when they tried to send an email. The problem occurred specifically with email accounts hosted by Microsoft, Office 365, or Outlook.com. When they would try to send an email, it would tell them “the message was rejected by the server.” If you’re on iOS 11, you could still be receiving this message when attempting to send an email.
Solution:
- Only a week after iOS 11 was released, Apple rolled out iOS 11.0.1, which should now be available to download on your device. You can check by going to Settings > General > Software Update. Apple says the update fixes the bug, so you should no longer have problems sending email through any of those hosts.
Glitch: Brightness is turned up but your device is still dim
One user on an iOS 11 Reddit thread noticed their iPhone 7 looked dimmer than it did on iOS 10, even though the brightness was turned up. The new operating system may reset your settings from iOS 10, including the Reduce White Point percentage, a feature that reduces the intensity of bright colors.
Solution:
- Go to Settings > General > Accessibility > Display Accommodations. You’ll see Reduce White Point at the bottom of the section; you can either lower its percentage to make the screen brighter or toggle the feature off entirely.
Problem: Trouble connecting to Wi-Fi
Users have expressed via the Apple Forum that they are losing Wi-Fi connection or are unable to connect to networks. Even though the Wi-Fi is working with other devices, people are having trouble finding a stable connection.
Potential solution:
- Before attempting to troubleshoot your iPhone, make sure your router is on and within range. You should also go to Settings > Wi-Fi to make sure you can see your Wi-Fi network and that you’re connected to the correct one. You can also try restarting both your iOS device and your router or cable/DSL modem by unplugging it and then plugging it back in.
- If you still can’t connect, go to Settings > General > Reset > Reset Network Settings. This will reset Wi-Fi networks and passwords, along with cellular, VPN, and APN settings.
If iOS 11 is still giving you trouble, you can always reset your iPhone to its factory settings by going to Settings > General > Reset and tapping Erase All Content and Settings. Just make sure you back up all your precious files first, as this will wipe your iPhone completely. For more details and alternative methods, check out our guide on how to factory reset an iPhone.
HP revamps Spectre line with 8th-gen CPUs, new designs, and more
Why it matters to you
If you were planning to buy a new HP Spectre, then you’re going to get a whole bunch more for your money.
HP’s Spectre line of premium notebooks has offered some excellent options for the last several years. The Spectre 13 is a thin and light clamshell notebook that we found to be typical of its class — very thin and well-built with some compromises in performance and battery life. The Spectre x360 13 is a machine we called one of the best Windows 10 2-in-1s you can buy. Now, HP has revamped both of these Spectre machines to bring even more of what makes them such solid machines.
HP Spectre 13
The 2017 refresh of the Spectre 13 brings an entirely new design that seeks to shrink the notebook’s overall size without compromising on performance or battery life. First, HP maintained the machine’s 10.4mm thickness but added in a touch display — leading the company to characterize the Spectre 13 as the “world’s thinnest touchscreen notebook.” In addition, HP now offers a 4K UHD (3840 x 2160) resolution display in the same thin chassis. Weight went up ever so slightly from 2.43 pounds to 2.45 pounds.
Second, the bezels have been made considerably smaller: 9.7mm (40 percent thinner) on top and 5.3mm (65 percent thinner) on the sides, while the bottom bezel is 0.5mm thicker. Even with the reduction in the size of the top bezel, the webcam and the new infrared camera for Windows Hello support remain above the display. The result is a new Spectre 13 measuring 308.2 mm x 224.2 mm, compared to last year’s model at 325 mm x 229 mm.
The keyboard deck and speakers have also been redesigned, with the dual speakers now positioned directly below the display as opposed to on each side of the keyboard. That has allowed HP to expand the backlit keyboard — which offers a significant 1.3mm travel — to include the page up, page down, home, and other keys along the right side as it has done with some other recent machines. The touchpad has been increased in size 15 percent from the previous model.
In terms of performance, the new Spectre 13 incorporates the latest eighth-generation quad-core Intel Core i5 and i7 CPUs. Up to 16GB of LPDDR3 RAM can now be configured, and the solid-state drive (SSD) storage has been migrated to the faster PCIe standard with up to 1TB of capacity.
Battery capacity has been increased from 42 watt-hours to 43.7 watt-hours, which combined with the more efficient processors promises improved battery life with the Full HD display option. The company’s fast-charging technology has also been added, which can charge the battery from zero to 50 percent in 30 minutes.
Furthermore, HP uses thermal sensors in the Spectre 13 to monitor heat more closely and to engage the fans only when necessary, while the cooling system’s overall design has also been enhanced. All of that promises better control of heat without the need to run the fans as often as with the previous model.
The Spectre 13 also offers new color options, including Ceramic White with Pale Gold accents and the company’s trademark Dark Ash Silver with Copper Luxe accents for the cover and keyboard deck. Advanced Electro Deposition (AED) technology is used to create a stronger and more scratch-resistant finish. Carbon fiber is used on the bottom of the chassis to afford weight and heat advantages without compromising strength. The white version comes with a matching white power adapter.
Finally, the Spectre 13 comes equipped with a total of three USB Type-C ports. Two are fully Thunderbolt 3 certified, while the other is a USB 3.1 Type-C connection. A combo audio jack, 2×2 MU-MIMO and Bluetooth 4.2 round out connectivity.
HP is pricing the new Spectre 13 at $1,300 for the Intel Core i5, 8GB RAM, 256GB SSD, Full HD configuration, with the same configuration and a Core i7 processor being priced at $1,400. Availability has not yet been announced.
Here are the complete specifications for the new Spectre 13:
Screen size: 13.3 inches
Screen resolution: Full HD (1920 x 1080) or 4K UHD (3840 x 2160)
Display type: IPS touch display with Gorilla Glass
Processor: Up to Core i7-8550U
Graphics: Intel UHD Graphics 620
System memory: Up to 16GB LPDDR3
Storage: Up to 1TB PCIe SSD
Audio: Bang & Olufsen with dual speakers
Connectivity: 2×2 802.11ac Wi-Fi and Bluetooth
Ports: 2x USB Type-C with Thunderbolt 3, 1x USB 3.1 Type-C, 3.5mm headset jack
Keyboard: Full-size keyboard with home row keys, 1.3mm travel
Camera: HP TrueVision HD IR webcam with Windows Hello support
Battery: 43.7 watt-hour lithium-ion polymer battery
Dimensions: 12.13 x 8.86 x 0.41 inches
Weight: 2.45 pounds
Materials: Aluminum and carbon fiber
Starting price: $1,300
Availability: TBD
HP Spectre x360 13
The HP Spectre x360 13 convertible 2-in-1 also received a redesign, albeit not as dramatic as the Spectre 13’s. The x360 13’s changes represent more of a refinement of the previous design than a wholesale rework.
In terms of the overall chassis, the dimensions are close to those of the previous model. The display bezels were reduced very slightly (down 1.6 percent), the machine is now 13.6 mm thick (down from 13.8 mm), and the weight now stands at 2.78 pounds (down from 2.85 pounds). The design aesthetic was also revamped a bit with a more angular profile, but the color options remain the same: Natural Silver, or Dark Ash Silver with Copper Luxe accents.
In terms of performance, the new Spectre x360 uses eighth-generation Intel Core i5 and i7 processors, while 16GB of RAM and 1TB PCIe SSD options remain the same. HP incorporated the same IR sensor technology and enhanced cooling system as is used in the new Spectre 13 to keep heat under control and to minimize fan noise. Battery life is rated up to 16.75 hours for the Full HD display, and fast charge technology is on hand that can charge from zero to 50 percent in 30 minutes.
Display options remain the same, with Full HD (1920 x 1080, 166 PPI) and 4K UHD (3840 x 2160, 332 PPI) resolutions for the 13.3-inch panel. There’s also a new SureView privacy screen Full HD option that’s borrowed from the business-class EliteBook x360 G2.
HP also enhanced the Spectre x360 13’s active pen. It now offers the same tilt support Microsoft introduced with the latest Surface Pro, plus a tail-end eraser and a gyro-mouse function that lets you use the pen like a laser pointer during presentations. The pen is also now rechargeable, with a cap that can be removed to reveal a USB Type-C charging connector.
The Spectre x360 13 offers the same infrared camera-based Windows Hello support for password-less login. However, HP also added a new fingerprint scanner along the right-hand side that offers another way to securely log in to the machine. Overall connectivity now includes two USB Type-C ports with Thunderbolt 3, a USB 3.1 Type-A port, and a 3.5mm audio jack. The new model also includes an SD card reader, a feature that was missing from the previous model.
Pricing for the new Spectre x360 13 has been reported as ranging from $1,150 for a configuration with an Intel Core i5 CPU, 8GB RAM, 256GB SSD, and Full HD display, up to $1,600 for a Core i7, 16GB RAM, 512GB SSD, and 4K UHD display. Availability has not yet been announced.
Screen size:
13.3 inches
Screen resolution:
Full HD (1920 x 1080)
Full HD (1920 x 1080) with SureView privacy screen
4K UHD (3840 x 2160)
Display type:
IPS touch display with Gorilla Glass
Processor:
Up to Core i7-8550U
Graphics:
Intel UHD Graphics 620
System memory:
Up to 16GB LPDDR3
Storage:
Up to 1TB PCIe SSD
Audio:
Bang & Olufsen with dual speakers
Connectivity:
2×2 802.11ac Wi-Fi and Bluetooth
Ports:
2x USB Type-C with Thunderbolt 3
1x USB 3.1 Type-A
SD card reader
3.5mm headset jack
Keyboard:
Full-size keyboard with home row keys, 1.3mm travel
Camera:
HP TrueVision HD IR webcam with Windows 10 Hello support
Battery:
60 watt-hour lithium-ion polymer battery
Dimensions (inches):
12.04 x 8.56 x 0.53 inches
Weight:
2.78 pounds
Materials:
Aluminum
Starting price:
$1,150
Availability:
TBD
A beginner’s guide to A.I. superintelligence and ‘the singularity’
Why it matters to you
Our beginner’s guide will fill you in on everything you need to know about the (possibly) forthcoming AI technological singularity.
Have you heard people talking about the technological singularity, either positively or negatively, but didn’t know enough to join in? Want to know if you should pack your bags and flee for the hills to escape the coming robot invasion? Or maybe join a church to welcome our new robot overlords? First, make sure to check out our beginner’s guide to all your singularity queries.
What exactly is the singularity?
The technological singularity, to use its full title, is a hypothesis predicated on the creation of artificial superintelligence. Unlike the “narrow” A.I. we have today — which can be extremely good at carrying out one task, but can’t function across as many domains as a more generalized intelligence such as our own — a superintelligence would possess abilities greater than ours. This would trigger a kind of tipping point in which enormous changes take place in human society.
With A.I., particularly deep learning neural networks, hitting new milestones on a seemingly daily basis, here in 2017 the idea doesn’t seem quite as science fiction as it once did.
Is this a new idea?
No. As is the case with a lot of A.I., these ideas have been circulating for a while — even though it’s only relatively recently that fields like deep learning have started to break through into the mainstream. I.J. Good, a British mathematician who worked with Alan Turing as a cryptologist during World War II, first suggested the concept of an intelligence explosion back in 1965.
His common-sense view was that, as computers become increasingly powerful, so too does their capacity for problem solving and for coming up with inventive solutions. Eventually, superintelligent machines will design even better machines, or simply rewrite themselves to become smarter. The result is recursive self-improvement, which is either very good or very bad for humanity as a whole.
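Good's compounding idea can be sketched with a toy model (purely illustrative — the function name, starting capability, and 10% per-generation gain are arbitrary assumptions, not a serious forecast):

```python
# Toy illustration of an "intelligence explosion": each generation of
# machine designs a successor slightly smarter than itself, so
# capability compounds geometrically rather than growing linearly.
def capability_after(generations: int, start: float = 1.0,
                     gain: float = 0.10) -> float:
    """Capability after repeated rounds of self-improvement,
    each round improving on the last by a fixed fraction."""
    capability = start
    for _ in range(generations):
        capability *= (1.0 + gain)
    return capability

# Even a modest 10% gain per generation compounds dramatically:
print(round(capability_after(50), 1))  # → 117.4 (times the start)
```

The point of the sketch is only that fixed fractional improvements compound: fifty rounds of 10% gains yield over a hundredfold increase, which is why Good framed the outcome as an "explosion."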
This idea was later picked up by Vernor Vinge, a sci-fi writer, mathematics professor and computer scientist. In a famous 1993 essay, “The Coming Technological Singularity” Vinge made the famous prediction that: “Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after that, the human era will be ended.”
So Vinge was the guy who coined the term ‘singularity’ then?
Not really. Vinge may have popularized it, but the term “singularity” was first applied to computers by the mathematician John von Neumann, one of the most important figures in modern computing. Towards the end of his life, in the 1950s, von Neumann was both fascinated and alarmed by “the ever-accelerating progress of technology and changes in the modes of human life, which gives the appearance of some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
When exactly is all this supposed to happen?
Thanks for asking the easy questions. As we’ve already mentioned, Vernor Vinge gave a 30-year timetable back in 1993. That would place the singularity at some time before or during 2023. Futurist and singularity enthusiast Ray Kurzweil, meanwhile, has pegged 2029 as the date when A.I. will pass a valid Turing test and achieve human levels of intelligence. After this, he thinks the singularity will take place in 2045, at which point “we will multiply our effective intelligence a billionfold by merging with the intelligence we have created.”
Others have argued that both of these are ridiculously premature estimates, based on a faulty understanding of what comprises intelligence. And Terminator 2: Judgment Day, one of the greatest sci-fi movies of all time, placed the point at which computers become “self-aware” at 2:14 a.m. Eastern on August 29, 1997. So who really knows?
Skynet, self-aware computers, and merging with our own machines… I’m not really sure how to feel about any of this.
You, me, and everyone else. Imagining what the singularity would be like is a bit like trying to visualize a totally new color the world has never seen before. Remember that Aaron Sorkin-penned Mark Zuckerberg line from The Social Network about how, “If you guys were the inventors of Facebook, you’d have invented Facebook?” Well, the same thing applies for non-superintelligent beings like us trying to imagine how a superintelligence would view the world.
Superintelligence has the opportunity to solve all of our problems almost immediately. Or it could deem us an unnecessary risk and wipe us out in a moment. Or we could become its new pets, kept busy with whatever the human-entertaining equivalent of a cat’s laser toy might be.
Geez, you make it sound like the singularity is going to be some kind of godlike presence, dispensing a choice of wrath or salvation.
You’re not kidding! There’s definitely something religious about the zeal with which some people talk about the singularity. It’s almost like Silicon Valley’s answer to the rapture, in which we’re permanently unburdened of our status as the smartest guys and gals in the room by an all-seeing presence. Heck, there are even clouds (or, well, the cloud) involved in this heavenly scenario.
Case in point: Anthony Levandowski, the engineer who worked on Google’s autonomous car, has created a religious nonprofit that looks a whole lot like a church devoted to the worship of A.I.
Why don’t we just pull the plug right now?
Well, that would certainly be one option, just like we could solve poverty, financial inequality, wars, and Justin Bieber by carrying out a complete, 100 percent extermination of the human race. If all A.I. research were to stop right now, then the prospect of the singularity would certainly be averted. But who wants that? And who would enforce it? Right now, A.I. is helping improve life for billions of people. It’s also making a whole lot of money for the owners of companies like Google, Facebook, Apple, and others.
At present, we don’t yet have artificial superintelligence, and even the most impressive examples of A.I. are comparatively narrow in what they can achieve. Even if it is possible to one day replicate a true intelligence inside a computer, some people hope there will be ways to control it without it taking over. For example, one idea might be to keep a superintelligence in an isolated environment with no access to the internet. (Then again, the researcher Eliezer Yudkowsky thinks that, like attempts to keep Hannibal Lecter locked up, no superintelligent A.I. will be contained for long.)
Other proposals say that we’ll be alright so long as we program A.I. to behave in a way that’s good and moral. (But Nick Bostrom’s “paperclip maximizer” thought experiment pokes a few holes in that one, too.) It’s definitely a concern, though — which is something voiced by everyone from Elon Musk to Stephen Hawking.
Then again, if superintelligence turns out to be the greatest thing that ever happened to humanity, do we really want to stop it?
Is the singularity our only concern with A.I.?
Absolutely not. There are plenty of other concerns involving artificial intelligence that don’t involve superintelligence — with the impact of A.I. on employment and the use of A.I. and robots in warfare being just two. In other words, cheer up: there’s a whole lot more than the singularity to worry about when it comes to A.I.
Samsung Galaxy Note 8 review: A second opinion

In a post-Galaxy S8 world, the Note 8 is safe, predictable and arguably overpriced. But it’s still one of the very best Android phones you can buy right now.
Until recently, the last Galaxy Note I could actually buy in the UK was a model released almost three years ago. 2015’s Note 5 never reached European shores, and the Note 7 was canned just before the official Euro release could begin. It’s been a rough couple of years for the series that brought big phones to the mainstream — and, for better or worse, introduced “phablet” into the smartphone lexicon.
Aside from just being released and not catching on fire, the Galaxy Note 8 needed to remind buyers — particularly in the European market, where the series was met with such tremendous early success — what was important about the Note brand. When everyone has a pretty good, pretty big phone, why choose the biggest and most expensive?
For the Note 8, the answer is part high technology, and part fan service for the Note series’ dedicated core following.

The existence of the Galaxy S8+ required Samsung to go really, really big with the Note 8. As in years past, the Note 8 takes the design fundamentals of the current Galaxy S model, and blows it up into a larger size, with a more angular aesthetic.
I didn’t spend much time with the ill-fated Note 7, but the sheer size difference in this year’s model is striking. (Remember that the Note had stuck at the 5.7-inch mark for three years at that point.) You already know this is a big phone, on account of that 6.3-inch display size, but now it’s a big, tall phone, in a way that dwarfs even the S8 Plus, on account of its slightly chunkier forehead and chin, and the less curvaceous chassis.
The Note 8 is a beautiful phone, almost because it’s such an unapologetically huge chunk of technology.
Using it with one hand is problematic, even if you’re used to traditionally large handsets; you’ll want to become acquainted with the optional one-handed mode, enabled in the Settings app, pretty quickly.
Nevertheless, you can’t deny that the Note 8 is a beautiful phone, almost because it’s such an unapologetically huge chunk of technology. The minimal side bezels and tall 18.5:9 aspect ratio convey a phone that means business, while the relatively chunky 8.6mm depth measurement gives it more heft than thinner, lighter phones like the LG V30.
Aside from the sheer size of this handset, the Note 8 is an aesthetic marvel for the same reasons as the GS8. The latest evolution of Samsung’s metal-and-glass design language brings us a classy, symmetrical chassis that looks phenomenal in lighter colors like gold and blue.
Samsung’s biometrics are still trash, but I hate the fingerprint scanner less than the S8’s.
If you’re after something ergonomic and easy to wrangle without bringing a second hand into play, that is not this phone. But then you’re reading a Galaxy Note review, so you probably already knew that.
The Galaxy S8’s… unfortunate… biometric situation returns, with a rear-mounted fingerprint scanner you’ll have a hard time reaching, an iris scanner that doesn’t work in bright daylight, and a face-scanning system that fails in low light. I’ve been using Smart Lock in conjunction with my Huawei Watch, which has served as a workaround.
That said, I (surprisingly) don’t hate the fingerprint placement quite as much as I did on the S8+. It’s still unreasonably high up. (One competitor tells me Samsung likely made that decision to free up space for the battery.) But the added clearance provided by having the heart rate sensor and other biometric gubbins between the fingerprint sensor and the cameras means I’m less paranoid about gunking up the lens, and I’m free to reach the scanner at a more natural angle.
Don’t get me wrong, it’s still bad. Just not as bad as it might’ve been.
On the inside, it’s a repeat performance of the Galaxy S8 on almost all counts, save for a bump in RAM (to 6GB) and a slight battery hit (3,300mAh, down from 3,500) compared to the S8 Plus. Besides that, you know the score: Snapdragon 835, 64GB storage, microSD expandability.
The full loadout of extra niceties we’ve come to expect from Samsung phones is also included — wireless charging, adaptive fast charging (think Qualcomm Quick Charge 2), IP68 water resistance, and the increasingly endangered 3.5mm headphone jack.
More: Samsung Galaxy Note 8 specs
I’m a little disappointed to see another year pass without a significant bump in wired charging speeds. (Samsung has stuck with the same flavor of 9V/1.67A quick charging for the past three years.) Nevertheless, the added convenience of wireless charging goes some way towards compensating.
Even with a smaller battery than the S8+ — that S Pen takes up valuable internal space, remember — the Note 8 still performs satisfactorily in day-to-day use. Even while roaming, and using my unit’s dual SIM functionality, the phone gives me between three and four hours of screen-on time, with between 14 and 16 hours of time per charge on LTE, with the Always-On Display feature enabled.
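For context, those endurance figures imply a fairly modest average current draw from the 3,300 mAh cell. A back-of-the-envelope check (illustrative arithmetic only, ignoring efficiency losses and reserve capacity):

```python
# Average drain implied by a 3,300 mAh cell lasting 14-16 hours.
capacity_mah = 3300

for hours in (14, 16):
    avg_drain_ma = capacity_mah / hours
    print(f"{hours}h per charge ≈ {avg_drain_ma:.0f} mA average drain")
# 14h → ≈236 mA, 16h → ≈206 mA
```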

That’s a far cry from multi-day longevity, but broadly in line with competitors like the LG V30 and (OG) Google Pixel XL. What’s more, the aforementioned wireless charging makes opportunistic top-ups throughout the day less of a chore.
For a phone with such an enormous, beautiful, bright display, such endurance from a run-of-the-mill battery capacity represents a solid performance.
The unmatched brightness of Samsung’s screen is a real differentiator.
The Note 8 builds on the impressive AMOLED screen of the S8, with even brighter pixels capable of reaching a staggering 1200 nits in daylight mode. In the UK in early autumn, that’s not a feature you’ll rely on with much frequency. But having used the Note 8 in Hong Kong, Shenzhen and Taipei over the past couple of weeks, the guarantee of being able to see the display, even in the brightest daylight conditions, is a plus.
Samsung continues to push curved AMOLED through its Edge Display, which gives the Note 8 a symmetrical profile, while allowing for slim bezels. The curve is less pronounced than in Notes past, but there’s still some slight color shift around the edges. It’s easy to ignore, but undeniably present.
However, the only part of the display I’ve been disappointed in is its susceptibility to smudges and gunk. The oleophobic coating on my unit has started to wear very visibly, particularly around the home key, after just a couple of weeks. (I saw a similar trend on my S8 Plus, only after several months.) All phones will eventually succumb to this, as the smudge-resistant layer wears off. But the fact that I’m seeing it after less than a month is troubling.
I’m also not blown away by the Note 8’s built-in, bottom-firing speaker. Little progress seems to have been made since the S8, and while the speaker is reasonably loud, it’s also fairly tinny. That seems like a missed opportunity in a phone with such an awesome display.

The major differentiator of the Note series, now that many phones can match Samsung on display size and camera quality, and beat it on battery life, is the S Pen. We’ve reached the point where nobody is even trying to compete with Samsung when it comes to stylus input on a smartphone, leaving the company uncontested in this space.
Of course, there are technical improvements to the pen: Pressure sensitivity has doubled compared to the Note 7.
And Samsung has a wealth of S Pen software features arranged around the Air Command menu — the little radial dial that appears when you undock the pen. These range from useful to pure gimmickry. (There’s no reason, for instance, why recording a section of the screen as a GIF should require the use of an S Pen.)
Aside from the added precision that the pen gives me compared to a stubby finger — in effect, allowing me to pinpoint areas of the UI the same way I do with a mouse on a PC — the most useful S Pen features I’ve found focus on multitasking and quick information retrieval.
Smart Select, for instance, lets you pin captured areas of the display to the top of your screen for reference. Meanwhile “Glance” mode gives you a quick shortcut button which you can hover over to refer back to a particular application.
And yes, note-taking is still a thing on the Note, with up to 100 pages of doodles or grocery lists now supported in Samsung’s Screen-Off memo feature. Pull out the pen with the screen off, take a note, then save it to the Samsung Notes app, or pin it to the Always-On Display for quick reference.
The sheer quantity of S Pen features makes it hard to separate out the signal from the noise.
The only problem with all this is that the sheer quantity of features can be overwhelming. Were I not in this job, I probably would’ve sidestepped most of the S Pen features entirely. I have to question how many Note 8 owners will really want to go digging in the S Pen and Air Command menus to find things like Direct Input, which lets you write directly into text fields in any app.
Outside of the S Pen, the Note 8’s software experience is practically identical to that of the GS8. You’ve got some new animated “Infinity Wallpapers,” with visually impressive star fields that scroll into view as you power on and pan through home screens. And we’re now up to Samsung Experience 8.5, which combined with the extra 2GB of RAM allows for a more fluid UI, and fewer app reloads than Samsung’s other flagships.
Samsung’s current UI remains among my favorites, with a slick sci-fi aesthetic that’s fully differentiated from vanilla Android, and highly polished.
Bixby Voice is now available, letting you replace touch input with voice commands in the handful of supported apps. Outside of the fact that you can now disable the Bixby Button for Bixby Home (finally), my position on Bixby hasn’t really changed since using the Note 8. The potential is huge, but the execution is just nowhere near fully baked.
Bixby isn’t an assistant per se, but interacting with the service so often feels like being lumbered with a personal helper with the IQ of a toddler. Bixby Voice struggles to understand many commands, gets far too many things wrong, and that’s in the few apps it actually supports at present.
The Note series has often boasted new camera features ahead of other Samsung phones, and this year’s model has the honor of sporting the first dual camera setup in a Galaxy handset.
The primary camera is essentially identical to the Galaxy S8, which is both a known quantity and a very good smartphone camera. Samsung no longer has a monopoly on the best phone cameras, but it didn’t really need to push the boat out beyond the S8 for regular photos, and I’m just fine with the company recreating the same f/1.7, 12-megapixel shooter around the back of the Note.
For more on the identical camera module as we’ve been using it on the Galaxy S8 series, check out our review of that phone. Six months on, it stands up really well.
The secondary telephoto camera is where things get interesting. It’s an optically stabilized 12-megapixel sensor behind an f/2.4 lens, and the OIS gives this camera an edge over the likes of the OnePlus 5, and most iPhone models, in low-light shots. The result is that the Note 8 can capture zoomed-in shots with more fine detail than either the previous-gen iPhone or the OnePlus 5, despite the latter’s higher-resolution 20-megapixel sensor.

Personally, I still prefer LG’s approach to a dual-camera setup, where the wide-angle lens used in the G6 and V30 let me instantly capture a wider, more dramatic field of view. But there’s no denying the Note’s telephoto lens introduces some unique creative possibilities as well — and the main one is Live Focus.
Live Focus is getting better, but it’s still nowhere near 100% reliable.
Live Focus could be described as Samsung’s take on Apple’s Portrait Mode. And on current firmware it does a serviceable job at keeping your subject in focus, while artistically defocusing the background. (Things seem to have improved in that regard since Andrew first reviewed the Note 8 on Samsung’s initial firmware, which is to be expected.) But you can also use it to introduce added depth into just about any shot, provided there’s enough light. Food shots, architecture and landmarks, with a little creativity, can be captured in a way that doesn’t immediately scream “shot on a phone.”
Does Live Focus trip up from time to time? Absolutely. The standard stumbling blocks like hair, glass and liquids can confuse Samsung’s depth-sensing algorithms, and Live Focus still fails a little too often for my liking. But at least when it does fall flat, you still get regular photos captured with both the standard and telephoto lenses.
Is the Note 8’s camera the best? No. Is any single camera the best, in the age of highly differentiated software features, computational photography, telephoto-versus-wide-angle and other unique things that ultimately boil down to personal taste? I’d argue no.
Regardless, pick up a Note 8 and you’re getting two phenomenal cameras all the same.

The Galaxy Note 8 isn’t the best at everything it does. Big battery life? Look at the Huawei Mate 9, or possibly the soon-to-be-launched Mate 10, depending on how the specs shake out. Quick updates and superior HDR+ cameras? Look to Google’s new Pixels. A big screen in a manageable size? Local rival LG is hoping you’ll consider its V30.
That said, Samsung has stayed true to the essence of what made the Note special when it first appeared six years ago: The biggest, best screen you can reasonably fit in a smartphone-sized device. Solid fundamentals. Great cameras. And unique, unmatched S Pen features.
If any phone is worth $1000 to you, take a look at the Note 8.
The main area where this Note falls behind is battery life, at least compared to rivals putting bigger cells in smaller phones. Samsung’s hesitance to push capacities too far after last year’s battery debacle is understandable, but eventually it’ll have to address this area. That’s probably my biggest criticism of a phone which, while great, plays it relatively safe, and has only a few surprises up its sleeve.
Nevertheless, Samsung has a fantastic contender in the Note 8. Whether any phone is worth almost $1000 is a question for another time. But if that’s the kind of cash you want to splash on a telephone, Samsung has a device worthy of your attention.
Samsung Galaxy Note 8
- Galaxy Note 8 review
- Complete Galaxy Note 8 specs
- Galaxy Note 8 vs. Galaxy Note 5
- Which Note 8 color is best?
- Join our Galaxy Note 8 forums
Verizon
AT&T
T-Mobile
Sprint
Best Buy
How to get the male Google Assistant voice on your phone and Google Home
The Google Assistant can now be changed to sound like a man, and this is how to do it.
It’s been exactly one year since the Google Assistant debuted on the Pixel and Pixel XL last October, and while we’re expecting big changes in both hardware and software from Google at its Pixel event, there’s a new feature with the Assistant that you can mess around with right now.

The Google Assistant has used a female voice ever since its inception, but just before the big launch of the Pixel 2 and Pixel 2 XL, Google is giving you the option to change the Assistant’s voice to a male one. You can change the voice for both the Assistant on your phone and Google Home, and the process for doing so is fairly simple.
Changing Assistant voices on your phone
Hold down on your home button to prompt the Google Assistant
Tap the circular blue icon near the upper-right
Touch the overflow icon in the top-right and go to Settings
Go to Preferences -> Assistant voice
Changing Assistant voices on Google Home
Open the Google Home app
Go to More settings from the hamburger menu
Preferences -> Assistant voice
Once you’re at the Assistant voice section, Voice I is the female voice and Voice II is the male one. Tapping the blue speaker icon next to each one will play a preview for how it sounds, and touching anywhere else on either voice option will select it as your new default. The male voice doesn’t change anything about how the Google Assistant works, but it is nice to have some added customization over how Google’s AI sounds when interacting with it.
How to set up and customize Google Assistant
Google Hardware

- Google Wifi review
- Google Home review
- Everything you need to know about the Chromecast Ultra
- Chromecast vs Chromecast Ultra: Which should you buy?
Google Wifi:
Google
Amazon
Google Home:
Google
Best Buy
Chromecast Ultra:
Google
Best Buy