Apple ignored a major HomeKit security flaw for six weeks
Apple’s HomeKit home automation platform is sold on the basis of security, privacy and trust — users had to buy brand-new accessories with Apple-approved security components just to get it up and running. But back in October a developer uncovered a huge vulnerability which essentially meant a stranger, with some basic tech know-how and an Apple Watch, could waltz right on in to your home. And Apple has only just fixed it.
Under the name “Khaos Tian”, the developer writes on Medium that HomeKit was sharing data on HomeKit accessories and encryption keys over insecure sessions with Apple Watches running watchOS 4.0 or 4.1, which essentially gave control of every HomeKit accessory (locks, cameras, lights) to any unscrupulous Apple Watch wearer. Tian says he reported the issue to Apple Product Security, which somehow made the situation worse by widening the flaw so unauthorized iOS 11.2 devices could also receive the sensitive data. Basically, they went from leaving the keys in the front door to leaving the front door wide open.
Despite repeatedly emailing Apple throughout November, Tian had no success in getting a response from the company, so he resorted to contacting Apple-focused site 9to5Mac, which contacted Apple’s PR team on Tian’s behalf. Worryingly, but perhaps not surprisingly, Tian writes that Apple PR was “much more responsive” than the Apple Product Security team. On December 13 — some six weeks after Tian first flagged the vulnerability — Apple remedied the issue with iOS 11.2.1.
HomeKit is sold on the bold claim that you can entirely trust your home to Apple. More so than any other company, in fact, since the system requires users to purchase “extra-secure” Apple-approved components. But as Tian writes, “be vigilant when someone make[s] the promise that something is secure”, because as Apple demonstrates, it’s not too difficult to cause “a complete security breakdown of the entire system”. Apple has been contacted for comment.
Via: Venture Beat
Source: Medium
Fat Shark’s 101 starter set is a gateway to drone racing
Fat Shark, the de facto name in drone-video goggles, is moving into the drone game proper with the 101 “drone training system.” As the name suggests, the 101 is a quadcopter with new pilots in mind. There are many options at the entry level — Parrot has plenty — but these tend to be aimed at casual users. The 101 targets those who might eventually want to move on to something more serious but aren’t ready to invest in a full kit yet. Or maybe this will appeal to those who are curious about first-person view flying (FPV) but don’t know where to start. Basically, you start here.
The proposition is simple: The 101 comes with everything you need. That means the mini quad itself, the radio controller, the all-important FPV goggles and batteries, cables, aerials, and so on. Typically, you’d buy all of these separately, pair them up and hope for the best. Fat Shark isn’t the first to take such a frictionless approach, but it’s one of the most straightforward and comprehensive implementations I’ve seen. At $250, it’s also pretty affordable compared to buying everything individually. As an added bonus, if you decide to move on to a new quad, you won’t need to repurchase some of the gear (as you would if you started with Parrot).
As someone who’s spent time trawling forums and hobby stores looking for a relatively simple (but not toyish) drone to cut my teeth on and use for FPV flying, I think the standout benefit here is the balance between simplicity (the radio has very few buttons, and there’s no need to bind the controller to the drone) and an authentic drone-piloting experience.

A Parrot Mambo is easy to fly but doesn’t handle like a professional quad, and the WiFi video link can be flaky. The 101 handles more like a racer (there are easier modes to learn with), and the video link from the modest (non-HD) camera to the goggles reaches much further. Our San Francisco office is pretty big, and I managed a few solid laps with nary a glitch (but did incur some nonplussed looks from busy colleagues). For the absolute beginner, the 101 also comes with free access to the DRL’s drone-simulator software (just plug the controller into your PC via USB), so you can learn the basics without fear of breaking your new quad.
Most mini quadcopters have one big problem: battery life. Seven or eight minutes of flight time is pretty standard at this size. The 101 doesn’t last quite that long, with each USB-chargeable cell lasting about five minutes. You’ll get two in the box, but you’ll probably want to snag a couple more: Ten minutes in the air will definitely leave you hungry for more.
While Fat Shark is debuting the 101 bundle today, you unfortunately won’t be able to make this a late addition to your holiday gift list. Though pre-orders are open, it won’t ship until January.
Printed photos can fool Windows 10’s Hello face authentication
Windows 10’s facial authentication system might be able to tell the difference between you and your twin, but it could apparently be fooled with a photo of your face. According to researchers from German security firm SySS, systems running older versions of Windows 10 (prior to the Fall Creators Update) can be unlocked with a printout of a photo of your face taken with a near-infrared (IR) camera. The researchers conducted their experiments on various Windows 10 versions and computers, including a Dell Latitude and a Surface Pro 4.
The spoof isn’t exactly easy to pull off — someone who wants to access your system will have quite a bit of preparation ahead of them. In some cases, the researchers had to take additional measures to spoof the systems, such as placing tape over the camera. Not to mention, they needed high-quality printouts of users’ photos clearly showing a close-up, frontal view of their faces.
Still, the researchers said the technique can successfully unlock computers and even released three videos showing it in action, which you can watch below. Somebody determined enough to break into your system could do so (they could scour your Facebook account for high-res photos they can modify, for instance), and your best bet is downloading and installing the Windows 10 Fall Creators Update. Simply installing the update isn’t enough, though: your system will still be vulnerable. The researchers said you’ll have to set up Windows Hello’s facial authentication from scratch and enable the new enhanced anti-spoofing feature to make sure you’re fully protected.
It’s not just Microsoft’s technology that has vulnerabilities, though. Its fellow tech titans, Apple and Samsung, are also having trouble with their authentication systems. A German hacking group found that the Galaxy S8’s iris scanner can be spoofed using a photo of the user with a contact lens placed on top, while another group of security researchers said they found a way to fool the iPhone X’s face-scanning system with masks.
Via: ZDNet, The Verge
Source: SySS
In 2017, society started taking AI bias seriously
A crime-predicting algorithm in Florida falsely labeled black people re-offenders at nearly twice the rate of white people. Google Translate converted the gender-neutral Turkish terms for certain professions into “he is a doctor” and “she is a nurse” in English. A Nikon camera asked its Asian user if someone blinked in the photo — no one did.
From the ridiculous to the chilling, algorithmic bias — social prejudices embedded in the AIs that play an increasingly large role in society — has been exposed for years. But it seems in 2017 we reached a tipping point in public awareness.
Perhaps it was the way machine learning now decides everything from our playlists to our commutes, culminating in the flawed social media algorithms that influenced the presidential election through fake news. Meanwhile, increasing attention from the media and even art worlds both confirms and recirculates awareness of AI bias outside the realms of technology and academia.
Now, we’re seeing concrete pushback. The New York City Council recently passed what may be the US’ first AI transparency bill, requiring government bodies to make public the algorithms behind their decision-making. Researchers (along with the ACLU) have launched new institutes to study AI prejudice, while Cathy O’Neil, author of Weapons of Math Destruction, launched an algorithmic auditing consultancy called ORCAA. Courts in Wisconsin and Texas have started to limit algorithms, mandating a “warning label” about their accuracy in crime prediction in the former case, and allowing teachers to challenge their calculated performance rankings in the latter.
“2017, perhaps, was a watershed year, and I predict that in the next year or two the issue is only going to continue to increase in importance,” said Arvind Narayanan, an assistant professor of computer science at Princeton and data privacy expert. “What has changed is the realization that these aren’t specific exceptions of racial and gender bias. It’s almost definitional that machine learning is going to pick up and perhaps amplify existing human biases. The issues are inescapable.”
Narayanan co-authored a paper published in April analyzing the meaning of words according to an AI. Beyond their dictionary definitions, words have a host of socially constructed connotations. Studies on humans have shown they more quickly associate male names with words like “executive” and female names with “marriage”, and the study’s AI did the same. The software also perceived European American names (Paul, Ellen) as more pleasant than African American ones (Malik, Shereen).
The AI learned this from studying human texts — the “common crawl” corpus of online writing — as well as Google News. This is the basic problem with AI: Its algorithms are not neutral, and the reason they’re biased is that society is biased. “Bias” is simply cultural meaning, and a machine cannot divorce unacceptable social meaning (men with science; women with arts) from acceptable ones (flowers are pleasant; weapons are unpleasant). A prejudiced AI is an AI replicating the world accurately.
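To see how that kind of association gets measured in practice, here is a minimal sketch using made-up three-dimensional vectors rather than real embeddings (models like word2vec or GloVe learn vectors with hundreds of dimensions from corpora such as Common Crawl). The names, attribute words and numbers below are purely hypothetical; the point is only the mechanics, where a word’s lean is the difference in cosine similarity between it and two attribute words.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy, hand-picked "embeddings" for illustration only; real word vectors
# are learned from large text corpora and have hundreds of dimensions.
vectors = {
    "john":      np.array([0.9, 0.1, 0.3]),
    "amy":       np.array([0.1, 0.9, 0.3]),
    "executive": np.array([0.8, 0.2, 0.5]),
    "marriage":  np.array([0.2, 0.8, 0.5]),
}

def association(word, attr_a, attr_b):
    """How much more strongly `word` leans toward attr_a than attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# Positive score: closer to "executive"; negative: closer to "marriage".
for name in ("john", "amy"):
    print(name, round(association(name, "executive", "marriage"), 3))
```

Applied to embeddings trained on real web text, this same kind of score is what surfaced the male-name/“executive” and female-name/“marriage” gaps described above.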
“Algorithms force us to look into a mirror on society as it is,” said Sandra Wachter, a lawyer and researcher in data ethics at London’s Alan Turing Institute and the University of Oxford.
For an AI to be fair, then, it needs not to reflect the world but to create a utopia, a perfect model of fairness. This requires the kind of value judgments that philosophers and lawmakers have debated for centuries, and rejects the common but flawed Silicon Valley rhetoric that AI is “objective.” Narayanan calls this an “accuracy fetish” — the way big data has allowed everything to be broken down into numbers which seem trustworthy but conceal discrimination.
The datafication of society and the Moore’s Law-driven explosion of AI have essentially lowered the bar for testing any kind of correlation, no matter how spurious. For example, recent AIs have tried to determine, from a headshot alone, whether a face is gay (in one case) or criminal (in another).
Then there was AI that sought to measure beauty. Last year, the company Beauty.AI held an online pageant judged by algorithms. Out of about 6,000 entrants, the AI chose 44 winners, the majority of whom were white, with only one having apparently dark skin. Human beauty is a concept debated since the days of the ancient Greeks. The idea that it could be number-crunched in six algorithms measuring factors like pimples and wrinkles as well as comparing contestants to models and actors is naïve at best. Deeply human questions were at play — what is beauty? Is every race beautiful in the same way? — which the scientists alone were ill-equipped to wrestle with. So instead, perhaps unwittingly, they replicated the Western-centric standards of beauty and colorism that already exist.

The major question for the coming year is how to remove these biases.
First, an AI is only as good as the training data fed into it. Data that is already riddled with bias, like texts that associate women with nurses and men with doctors, will create bias in the software. Availability often dictates what data gets used: the 200,000 Enron emails made public by authorities during the company’s fraud prosecution, for instance, have reportedly since been used in fraud-detection software and studies of workplace behavior.
Second, programmers must be more conscious of biases while composing algorithms. Like lawyers and doctors, coders are increasingly taking on ethical responsibilities, but with little oversight. “They’re diagnosing people, they’re preparing treatment plans, they’re deciding if somebody should go to prison,” said Wachter. “So the people developing those systems should be guided by the same ethical standards that their human counterparts have to be.”
This guidance involves dialogue between technologists and ethicists, says Wachter. For instance, the question of what degree of accuracy is required for a judge to rely on crime prediction is a moral question, not a technological one.
“All algorithms work on correlations — they find patterns and calculate the probability of something happening,” said Wachter. “If the system tells me this person is likely to re-offend with a competence rate of 60 percent, is that enough to keep them in prison, or is 70 percent or 80 percent enough?”
“You should find the social scientists, you should find the humanities people who have been dealing with these complicated questions for centuries.”
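To see why the cutoff is a value judgment rather than a technical detail, here is a toy sketch; the risk scores and outcomes are invented for illustration and don’t come from any real system. The model’s output never changes, only the threshold does, and with it the number of people flagged and the number flagged wrongly.

```python
# Hypothetical predicted re-offence probabilities and what actually happened.
predicted_risk = [0.55, 0.62, 0.71, 0.45, 0.83, 0.66, 0.30, 0.77]
actually_reoffended = [False, True, False, False, True, False, False, True]

for threshold in (0.6, 0.7, 0.8):
    flagged = [risk >= threshold for risk in predicted_risk]
    false_positives = sum(f and not y for f, y in zip(flagged, actually_reoffended))
    print(f"threshold {threshold:.1f}: {sum(flagged)} of {len(flagged)} flagged, "
          f"{false_positives} flagged who did not re-offend")
```

Nothing in the code says whether 60, 70 or 80 percent is “enough”; that call still has to come from judges, lawmakers and ethicists.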
A crucial issue is that many algorithms are a “black box” and the public doesn’t know how they make decisions. Tech companies have pushed back against greater transparency, saying it would reveal trade secrets and leave them susceptible to hacking. When it’s Netflix deciding what you should watch next, the inner workings are not a matter of immense public importance. But in public agencies dealing with criminal justice, healthcare or education, nonprofit AI Now argues that if a body can’t explain its algorithm, it shouldn’t use it — the stakes are too high.
In May 2018, the General Data Protection Regulation will come into effect in the European Union, aiming to give citizens a “right to explanation” for any automated decision and the right to contest those decisions. The fines for noncompliance can add up to 4 percent of annual revenue, meaning billions of dollars for behemoths like Google. Critics including Wachter say the law is vague in places — it’s unclear how much of the algorithm must be explained — and implementation may be defined by local courts, yet it still creates a significant precedent.
Transparency without better algorithmic processes is also insufficient. For one, explanations may be unintelligible to regular consumers. “I’m not a great believer in looking into the code, because it’s very complex, and most people can’t do anything with it,” said Matthias Spielkamp, founder of Berlin-based nonprofit AlgorithmWatch. “Look at terms and services — there’s a lot of transparency in that. They’ll tell you what they do on 100 pages, and then what’s the alternative?” Transparency may not solve the deep prejudices in AI, but in the short run it creates accountability, and allows citizens to know when they’re being discriminated against.
The near future will also provide fresh challenges for any kind of regulation. Simple AI is basically a mathematical formula full of “if this then that” decision trees. Humans set the criteria for what a software “knows.” Increasingly, AI relies on deep neural networks where the software is fed reams of data and creates its own correlations. In these cases, the AI is teaching itself. The hope is that it can transcend human understanding, spotting patterns we can’t see; the fear is that we have no idea how it reaches decisions.
“Right now, in machine learning, you take a lot of data, you see if it works, if it doesn’t work you tweak some parameters, you try again, and eventually, the network works great,” said Loris D’Antoni, an assistant professor at the University of Wisconsin, Madison, who is co-developing a tool for measuring and fixing bias called FairSquare. “Now even if there was a magic way to find that these programs were biased, how do you even fix it?”
An area of research called “explainable AI” aims to teach machines how to articulate what they know. An open question is whether AI’s inscrutability will outpace our ability to keep up with it and hold it accountable.
“It is simply a matter of our research priorities,” said Narayanan. “Are people spending more time advancing the state of the art AI models or are they also spending a significant fraction of that time building technologies to make AI more interpretable?”
Which is why it matters that, in 2017, society at large increasingly grappled with the flaws in machine learning. The more that prejudiced AI is in the public discourse, the more of a priority it becomes. When an institution like the EU adopts uniform laws on algorithmic transparency, the conversation reverberates through all 28 member states and around the world: to universities, nonprofits, artists, journalists, lawmakers and citizens. These are the people — alongside technologists — who are going to teach AI how to be ethical.
Apple’s Upgraded TrueDepth Camera System in Future iPhones Will Necessitate Larger Batteries
iPhone models released in 2019 and later will likely feature an upgraded TrueDepth camera system that will consume more power, resulting in a need for larger-capacity batteries, according to KGI Securities analyst Ming-Chi Kuo.
In a research note obtained by MacRumors, Kuo said Apple has technologies at its disposal to develop larger-capacity batteries.
Apple capable of designing new system for large-capacity batteries: We believe the adoption of TrueDepth camera for 3D sensing in 2017-18 will create demand for larger-capacity batteries. From 2019, we predict iPhone may adopt upgraded 3D-sensing and AR-related functions, and it will consume more power, further increasing demand for large-capacity batteries. We believe Apple’s key technologies, including semiconductor manufacturing processes, system-in-package (SIP), and substrate-like PCB (SLP), will create the required space for larger batteries.
Kuo unsurprisingly expects Apple will use these technologies to continue increasing iPhone battery capacities in 2019 and 2020, as it routinely does, which should result in even longer battery life for future models.
Kuo reiterated that TrueDepth will be expanded to a trio of iPhone models next year, including a new 5.8-inch iPhone X, a larger 6.5-inch model we’re calling iPhone X Plus, and a new 6.1-inch mid-range model with an LCD display, but it sounds like the camera system will remain unchanged in 2018.
As far as next year is concerned, Kuo previously said the second-generation iPhone X could have a one-cell L-shaped battery that would provide up to 10 percent additional capacity compared to the two-cell battery in the current iPhone X, which of course could result in slightly longer battery life.
He added that next year’s so-called “iPhone X Plus” is likely to retain a two-cell battery design, but the larger size of the 6.5-inch device will still allow it to have a higher capacity in the range of 3,300 to 3,400 mAh.
Apple is expected to release the new iPhone X and iPhone X Plus in its usual timeframe of September to October next year.
The Morning After: Thursday, December 21st 2017
Hey, good morning! You look fabulous.
Good morning! Do you remember Magic Leap? You might remember the hype. Well, we now have a headset to stare at — even if it’s just a render. Your old iPhone is also slow for a reason and there’s talk of a YouTube rival from Amazon.
It’s real(?)
Meet the Magic Leap One: Creator Edition

After years of hype, Magic Leap is finally showing off CGI renders of its first mixed reality headset. The Magic Leap One: Creator Edition makes virtual objects appear in real life thanks to light field technology, and room-mapping tech makes sure they stay wherever you put them. It ships in 2018 — just don’t ask about the price.
There’s a reason.
Apple really is making your old iPhone slow down

The software fix that helped out with battery-induced resets on iPhone 6-era phones had an impact you may not have known about. After tests by Geekbench revealed older phones were underclocked, Apple revealed that iOS slows the CPU down when a battery is cold, old or just low on juice. If your phone is running more slowly than it used to, replacing the battery could get it back up to speed right away.
v1.0
As ‘PUBG’ finally exits beta, its creators look to the future

Now that PlayerUnknown’s Battlegrounds has been officially released, what’s next? According to its developers, making the game the best it can be, and getting the Xbox version up to par with the PC release with an eye toward cross-platform play.
Big, bright and beautiful.
LG launches an ultrawide 5K HDR monitor

All three of LG’s new monitors are DisplayHDR 600-rated.
Too bad an inconsistent camera holds back an otherwise-solid phone.
BlackBerry Motion review: It’s all about the battery life

The second smartphone developed by BlackBerry Mobile is an all-touch affair, and it manages to get more right than wrong. The combination of a mid-range chipset, a 5.5-inch 1080p screen and an enormous battery means the new BlackBerry Motion routinely runs for two days on a single charge. We also appreciated the handful of valuable software features layered on top of Android, something we can’t say of every phone maker. Unfortunately, the camera’s performance left a lot to be desired, and there are more powerful devices available in the same price range. Ultimately, it’s a good phone, but not a very good deal.
But wait, there’s more…
- Amazon may be planning a YouTube rival
- California is set to hit its green-energy goals a decade early
- Trump removes climate change from national security strategy
- ‘Donut County’ is a love letter to LA
- Samsung’s new wall-mountable soundbar has a built-in subwoofer
The Morning After is a new daily newsletter from Engadget designed to help you fight off FOMO. Who knows what you’ll miss if you don’t subscribe.
DJI forces UK pilots to sit a ‘knowledge quiz’ before takeoff
If you’re hoping for a DJI drone this Christmas, be prepared for one teeny-tiny roadblock as you rush into the back garden with controller in hand. Today, the company has announced a mandatory “Knowledge Quiz” for all of its customers in the UK. It will live in the DJI GO 4 app — which is basically required to use the company’s snap-on controller — and pose eight questions about safe, common-sense flying. In short, you won’t be able to fly until you’ve answered them all successfully. So if you haven’t already, it’s worth swotting up on our handy guide to UK drone regulations.
DJI launched a similar quiz for US pilots earlier this year. The UK-specific version has the blessing of the Civil Aviation Authority (CAA), which oversees British drone laws, and of ARPAS-UK, a trade association for remotely piloted aircraft. The quiz also precedes new legislation, due in spring 2018, that will force UK drone pilots to sit a safety awareness test. It’s not clear, however, whether DJI’s knowledge quiz is the same as, or supplementary to, this government exam. Regardless, it sounds like a good initiative to keep drone-related accidents to a minimum.
Via: Gizmodo
Source: DJI (Press Release)
Razer gives its phone a major camera update
When the Razer Phone was first announced, we didn’t know what to expect. Sure, the company had Nextbit, but a handset just “for gamers?” The whole thing smelled like a gimmick. Turns out, we needn’t have worried. The Razer Phone is a solid flagship with a 5.7-inch, quad HD display (and a rare 120Hz refresh rate), a huge battery and loud front-facing speakers. The only problem? The camera is pretty average. Thankfully, Razer has heard fan complaints and pushed out a sizeable software update. Exactly what it’s changed is a mystery, but the result should be “improved picture quality” with less noise, punchier colors and clearer shadows.
There’s a 2x zoom button now and “improved shutter speed(s)” while using the camera in low-light conditions and HDR mode. We’ll have to put all of this to the test, of course. In our review, the camera fell “well short of what we expected from a device that costs this much.” It would take a huge improvement for the phone to match the images produced by Google’s Pixel 2 or Samsung’s Galaxy Note 8. Still, it’s good to see Razer supporting its first smartphone — especially when the rest of the device is so well put together. In a Facebook post, the company promised a further update to bring its camera app “on par with other flagship phones.” We can’t wait.
The latest camera update for the Razer Phone is live with faster shutter speed, better colors, quick zoom and more! We’ve addressed your asks and are working every day to improve the app – keep the feedback coming!
Full feature list – https://t.co/Z8lODLyYVD pic.twitter.com/7yd4ZLg6eE
— RΛZΞR (@Razer) December 21, 2017
Source: Razer (Facebook)
LA orders 25 of Proterra’s electric buses
Los Angeles wants to field a completely electric fleet of buses by 2030, and it just took a large step toward making that a reality. The city’s Department of Transportation (which runs the largest municipal transit fleet in the county) has ordered 25 of Proterra’s smaller 35-foot Catalyst buses, all of which should arrive in 2019. That may not sound like much, but it’s a significant chunk of the DOT’s 359-bus fleet. The deal promises real savings, too — it should eliminate 7.8 million lbs. of greenhouse gas emissions per year and save $11.2 million in energy and maintenance over 12 years.
This isn’t the first step toward electrifying mass transit in the LA area, but is one of the larger examples. The DOT unveiled just four buses at the start of 2017, for example, so this is certainly a much larger commitment. And notably, it’s not just from an American company — it’s from a company whose manufacturing is even located in the county. The region’s Metropolitan Transportation Authority bought 95 buses in July, but they were split between Chinese transportation giant BYD and the American division of Canada’s New Flyer.
It’d represent a big order for Proterra as well. At the start of 2017, it had received 375 bus orders. This is a big deal for a firm that’s still hitting its stride, and shows that it’s earning the trust of some major cities. Don’t be surprised if its buses become more of a mainstay going forward.
Source: Proterra
Facebook job ads are being used to filter out older applicants
Facebook’s targeted ad tools have landed it in hot water again. Dozens of companies are placing recruitment ads restricted to select age groups on the social network, according to a joint investigation by ProPublica and The New York Times. They include Verizon, Amazon, Goldman Sachs, Target and Facebook itself, among others. Legal experts are questioning whether the practices are lawful, specifically whether they abide by the federal Age Discrimination in Employment Act of 1967, which forbids bias against people 40 or older in hiring.
To make matters worse, a class-action lawsuit filed by a communications industry labor union argues that Facebook should be held liable for the ads. Aside from being accused of engaging in the ageist practice itself, the suit claims that Facebook essentially acts as an employment agency for the firms that use its ads. The big blue behemoth is expected to shield itself using the Communications Decency Act, as it did when it was sued for allowing racial bias in its targeted ads for housing.
In the past, Facebook has been forced to apologize and backtrack on its ad targeting tools. As recently as September, the company removed anti-Semitic categories from its ad network. And in November, Facebook handed Congress over $100,000 worth of ads purchased by a Russian group to influence the 2016 US presidential election. To make amends, Facebook hired 1,000 workers to manually review ads targeting race or politics.
But, this time round, the company claims it’s doing no wrong. “Used responsibly, age-based targeting for employment purposes is an accepted industry practice and for good reason: it helps employers recruit and people of all ages find work,” said Facebook’s Rob Goldman, VP of ads. We’ve reached out to the company for a comment.
Meanwhile, some of the companies that placed the ads (including Amazon, Northwestern Mutual, and the New York City Department of Education), said they’d changed their recruitment strategies after being contacted by ProPublica and The Times. Others insisted the Facebook ads were part of a broader recruitment plan aimed at people of all ages.
Facebook isn’t the only company offering these types of tools to recruiters. ProPublica managed to buy job ads excluding those 40 and over on Google and LinkedIn, with the latter claiming it had changed its system when it was notified of the buys. Google, on the other hand, said it doesn’t prevent marketers from displaying ads based on a user’s age.
Facebook (along with the big G) is leading the pack when it comes to digital ads. Last month, it declared that the vast majority of its $10.3 billion quarterly revenue came from advertisements. The social network also introduced a job listings feature in 2016 in a bid to challenge LinkedIn’s recruitment dominance.
Source: ProPublica, The New York Times



