
7
May

British watchdog group orders Cambridge Analytica to hand over data


As the fallout from the Cambridge Analytica data scandal continues, a U.K. data privacy watchdog has ordered the company to hand over all of the personal information it collected on an American professor, noting that even non-British citizens have the right to seek and obtain data held by a British firm. In doing so, the Information Commissioner’s Office (ICO) has set a precedent for how Cambridge Analytica and other firms will have to deal with illegally collected information, potentially allowing millions of other U.S. Facebook users to demand to know exactly what data companies hold on them.

In a notice posted on Saturday, the ICO said that it had “served a legal notice on SCL Elections Ltd,” the parent company of Cambridge Analytica, “ordering it to give an academic all the personal information the company holds about him.” The company will have 30 days to comply with the subject access request submitted by Professor David Carroll. If Cambridge Analytica does not do so, it will be in violation of the Data Protection Act 1998 and will be committing a criminal offense “punishable in the courts by an unlimited fine.”

Carroll first submitted a request to Cambridge Analytica for his data in January 2017, at which point SCL Group asked him to submit a £10 fee and proof of identity in order to obtain the information. In March, he received a spreadsheet that the company claimed contained all of the personal data to which he was legally entitled. But Carroll was not convinced that this was truly the full extent of the data the company held on him, nor did he receive an “adequate explanation of where it had been obtained from or how it would be used.” As such, he complained to the ICO last September and now, eight months later, may finally be getting his way.

“[SCL Group] has consistently refused to co-operate with our investigation into this case and has refused to answer our specific enquiries in relation to the complainant’s personal data — what they had, where they got it from and on what legal basis they held it,” said Information Commissioner Elizabeth Denham in a statement. “The right to request personal data that an organization holds about you is a cornerstone right in data protection law and it is important that Professor Carroll, and other members of the public, understand what personal data Cambridge Analytica held and how they analyzed it.”

The Commissioner concluded, “We are aware of recent media reports concerning Cambridge Analytica’s future but whether or not the people behind the company decide to fold their operation, a continued refusal to engage with the ICO will potentially breach an Enforcement Notice and that then becomes a criminal matter.”

Editors’ Recommendations

  • Facebook was always too busy selling ads to care about your personal data
  • 9 things to know about Facebook privacy and Cambridge Analytica
  • Cambridge Analytica used more apps to steal data, former employee claims
  • Cambridge Analytica designed cryptocurrency to sell back your personal data
  • Localblox data breach is the latest nightmare for Facebook, LinkedIn


7
May

Nike wants to put treadmills in shoes to help you get them on


Nike is proving itself to be the master innovator when it comes to footwear once again, but this time it has nothing to do with 3D-printed soles or recycled materials for the upper. Instead, it’s all about the treadmill. You may no longer need to hit up the gym to get on the running machine — you’ll just need to put on your shoes. As per a new patent filed in early May, Nike is looking to place a “rotatable conveyor element” in the sole of your shoe. No, it’s not to help you work out. Rather, it’s to help you put on the shoe a bit more easily.

The concept contained within the patent includes an “insole, an upper configured to form a space between the upper and the insole,” which in turn is “configured to admit and secure a foot of a wearer.” And then, most importantly, there’s the conveyor belt portion, or, for our purposes, a mini treadmill. This little machine is “configured to rotatably engage a body part of the wearer as the foot enters the space and draw the foot into the space.” So you’ll no longer have to wrestle with your shoe to get it on. Instead, your foot will slide right in.

Also included in the patent is a controller mechanism. After all, you wouldn’t want your conveyor belt/mini treadmill running all day long. As such, Nike describes a controller that is “coupled to an activation mechanism, such as a switch or mechanism to detect the presence of a foot.” Such a mechanism could be a sensor that detects a change in magnetic field or capacitance, indicating that a foot is nearby. It could also be a switch, either inside or outside the shoe, that you would need to activate in order for your foot to be conveyor-belted into the shoe.

We’ll give Nike some more time to figure that one out.
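For the curious, here is a rough, purely hypothetical sketch of what that kind of activation loop could look like in software: a sensor check gating a brief run of the conveyor element. None of it comes from Nike’s filing; the function names, thresholds and timings are invented for illustration.

    # Purely illustrative: a controller polls a presence sensor and briefly
    # drives the conveyor element to draw the foot in. Names, thresholds and
    # timings are made up; nothing here comes from Nike's patent.
    import random
    import time

    def foot_detected() -> bool:
        # Stand-in for the patent's presence sensor (capacitance, magnetic
        # field, or a manual switch); here we just simulate a reading.
        return random.random() < 0.05

    def run_conveyor(duration_s: float) -> None:
        # Stand-in for spinning the "rotatable conveyor element" in the sole.
        print(f"Conveyor running for {duration_s:.1f}s to draw the foot in")

    def controller_loop(poll_interval_s: float = 0.05, cycles: int = 100) -> None:
        for _ in range(cycles):
            if foot_detected():
                run_conveyor(1.5)
                time.sleep(0.5)  # simple debounce so the belt isn't running all day
            time.sleep(poll_interval_s)

    controller_loop()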

The conveyor belt mechanism could also be used to help you take your shoe off at the end of the day, saving you precious seconds once you walk through the door. As is often the case, just because Nike has filed a patent for such an invention doesn’t necessarily mean it’ll come to fruition. But like many out-there patents, we wouldn’t mind seeing it happen.

Editors’ Recommendations

  • How to create a route in MapMyRun
  • The best running shoes
  • Adidas AM4NYC runners are templates for shoe design in the future
  • Once you put them on, you may never want to take Suavs shoes off
  • Nike’s 3D-printed uppers take weight off your feet


7
May

Facebook opens A.I. research labs in Seattle and Pittsburgh


Facebook has announced that it is opening two new artificial intelligence research labs in Pittsburgh and Seattle. The labs will include professors hired from the University of Washington and Carnegie Mellon University. This has prompted some fears that Facebook is poaching the instructors needed to train the next generation of A.I. researchers.

“It is worrisome that they are eating the seed corn,” Dan Weld, a computer science professor at the University of Washington, told the New York Times. “If we lose all our faculty, it will be hard to keep preparing the next generation of researchers.”

Experts in the field of A.I. and machine learning can often command extremely high salaries, making it difficult for universities and other non-profit research centers to compete with the likes of Facebook and Google.

In a recent post, Facebook’s director of A.I. research, Yann LeCun, said that the company’s goals have been misinterpreted. Rather than poaching qualified experts from universities, Facebook is hoping to create a model where both the public and private sectors can benefit.

“Professors gain a different type of experience in industry that can have a positive impact on their students and on their research,” LeCun said. “Additionally, their connection with industry helps produce new scientific advances that may be difficult to achieve in an academic environment, and helps turn those advances into practical technology. Universities are familiar with the concept of faculty with part-time appointments in industry. It is common in medicine, law, and business.”

LeCun stressed that the company’s goal with its FAIR program was to create a healthy partnership between Facebook and the universities which contributed to its research labs.

“Unlike others, we work with universities to find suitable arrangements and do not hire away large numbers of faculty into full-time positions bottled up behind a wall of non-disclosure agreements,” he added. “We contribute to local ecosystem.”

Facebook itself has plenty of reasons to be investing in A.I. Many of the company’s latest initiatives, such as photo and video sorting, are reliant on machine learning. The social network is also experimenting with A.I. that can read text in order to help filter out hate speech and extremist organizations.

Editors’ Recommendations

  • Robot chefs are the focus of new Sony and Carnegie Mellon research
  • LED-studded ‘electronic skin’ monitors your health, makes you look like a cyborg
  • Thanks to A.I., there is finally a way to spot ‘deepfake’ face swaps online
  • Poachers don’t stand a chance against these A.I.-powered camera drones
  • Baidu’s pocket translator is a ‘Star Trek’ dream come to life


7
May

No map, no problem: MIT’s self-driving system takes on unpaved roads


If you find yourself on a country road in a self-driving car, chances are you’re both pretty lost. Today’s most advanced autonomous driving systems rely on maps that have been carefully detailed and characterized in advance. That means the millions of miles of unpaved roads in the United States are effectively off-limits for autonomous vehicles.

But a team of computer scientists from the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) has designed a self-driving system aimed at successfully navigating unpaved roads using basic GPS data and sensor technology.

“We were realizing how limited today’s self-driving cars are in terms of where they can actually drive,” Teddy Ort, an MIT CSAIL graduate student who worked on the project, told Digital Trends. “Companies like Google only test in big cities where they’ve labeled the exact positions of things like lanes and stop signs. These same cars wouldn’t have success on roads that are unpaved, unlit, or unreliably marked. This is a problem. While urban areas already have multiple forms of transportation for non-drivers, this isn’t true for rural areas. If you live outside the city and can’t drive, you don’t have many options.”

Ort and his colleagues think their system, which they’ve named MapLite, could change that.

Using simple GPS data that can be found on Google Maps and an array of sensors to scan the surroundings, MapLite navigates along unpaved roads, observing road conditions over 100 feet in advance.

“Existing systems still rely heavily on 3D maps, only using sensors and vision algorithms for specific aspects of navigation, like avoiding moving objects,” Ort said. “In contrast, MapLite uses sensors for all parts of navigation, using GPS data only to obtain a rough estimate of the car’s location in space. The system first sets both a final destination and what we refer to as a ‘local navigation goal,’ which has to be within the view of the car.”
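To make the idea concrete, here is a toy sketch of that loop, assuming nothing more than a coarse GPS fix, a final destination, and a fixed sensor horizon. It is not the CSAIL team’s code; the helper for picking a local goal is our own simplification, and in the real system each step toward a goal would be planned from LiDAR and vision rather than taken as a straight line.

    # Toy illustration of a MapLite-style loop as described above: a coarse GPS
    # fix, a final destination, and a "local navigation goal" kept within the
    # car's sensor view. A simplification, not the CSAIL implementation.
    import math

    SENSOR_HORIZON_M = 30.0  # roughly the 100-foot look-ahead mentioned above

    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def pick_local_goal(position, destination, horizon):
        # Choose a point toward the destination, clipped to the sensor horizon.
        d = distance(position, destination)
        if d <= horizon:
            return destination
        t = horizon / d
        return (position[0] + t * (destination[0] - position[0]),
                position[1] + t * (destination[1] - position[1]))

    def drive_to(destination, position=(0.0, 0.0)):
        while distance(position, destination) > 1.0:
            local_goal = pick_local_goal(position, destination, SENSOR_HORIZON_M)
            # In the real system, the path to the local goal comes from onboard
            # sensor data (road edges, obstacles), not a prior 3D map. Here we
            # simply jump to the goal to show the structure of the loop.
            position = local_goal
            print(f"reached local goal at {position}")
        print("arrived at destination")

    drive_to((100.0, 40.0))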

But there are good reasons why other autonomous systems use detailed maps. For one thing, when they work, they work well. Put an advanced autonomous car on a road that’s been previously mapped, and you’re pretty much guaranteed that it will be able to navigate just fine.

MapLite, on the other hand, doesn’t come equipped with this experience and, thus, lacks the guarantee.

Moving forward, the CSAIL researchers will attempt to make MapLite more versatile and capable of navigating various road types. They have no plans to commercialize the system yet, though Ort said they’re working with Toyota to incorporate the system into future vehicles.

Editors’ Recommendations

  • MIT’s new A.I. could help map the roads Google hasn’t gotten to yet
  • Ambarella launches Silicon Valley autonomous car demo despite Uber crash
  • How Nvidia is helping autonomous cars simulate their way to safety
  • MIT drones navigate more effectively in crowded spaces by embracing uncertainty
  • Autonomous cars with remote operators to hit California streets in April



7
May

MIT’s self-driving car can navigate unmapped country roads


There’s a good reason why companies often test self-driving cars in big cities: they’d be lost most anywhere else. They typically need well-labeled 3D maps to identify curbs, lanes and signs, which isn’t much use on a backwoods road where those features might not even exist. MIT CSAIL may have a solution, though. Its researchers (with some help from Toyota) have developed a new framework, MapLite, that can find its way without any 3D maps.

The system gets a basic sense of the vehicle’s location using GPS, and uses that for both the final destination and a “local” objective within view of the car. The machine then uses its onboard sensors to generate a path to those local points, using LiDAR to estimate the edges of the road (which tends to be much flatter than the surrounding landscape). Generic, parameter-based models give the car a sense of what to do at intersections or specific roads.
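As a rough illustration of that flatness cue (and only that; this is not CSAIL’s code), a system could flag a patch of LiDAR returns as “road-like” when the spread of its height values is small. The 5 cm threshold below is invented.

    # Illustration only: flag a small patch of LiDAR returns as "road-like" when
    # its height values barely vary, following the flatness idea above. The
    # threshold is invented; this is not the CSAIL implementation.
    import statistics

    def road_like(points, flatness_threshold_m=0.05):
        # points: (x, y, z) returns within one small ground patch
        heights = [z for _, _, z in points]
        if len(heights) < 3:
            return False
        return statistics.pstdev(heights) < flatness_threshold_m

    smooth_patch = [(x * 0.1, 0.0, 0.01 * (x % 2)) for x in range(20)]  # near-flat surface
    rough_patch = [(x * 0.1, 0.0, 0.30 * (x % 3)) for x in range(20)]   # uneven verge
    print(road_like(smooth_patch), road_like(rough_patch))  # True False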

MapLite still isn’t ready to handle everything. It doesn’t know how to cope with mountain roads and other sharp changes in elevation, for instance. However, the ultimate goal is clear: CSAIL wants autonomous cars that can safely navigate any road without hand-holding. While 3D maps may still be useful for dealing with the complexity of cities, this could be vital for rural trips, snowy landscapes and other situations where the car needs to improvise.

Source: MIT CSAIL (YouTube)

7
May

Vudu update brings 4K Disney movies to Apple TV


When the Apple TV 4K arrived, there was one glaring omission in its movie catalog: Disney. No matter how much you wanted to, you couldn’t (officially) see the latest Marvel or Pixar flick in its full glory. If you live in the US, though, you now have a viable alternative. An update to Vudu’s Apple TV app has enabled 4K HDR support, opening the door to watching Disney’s movies at maximum quality. You’re not going to get Dolby Atmos audio (the Apple TV just doesn’t support it), but that’s fine if you don’t need to be fully immersed while watching The Last Jedi.

There’s one main gotcha: the price. The Apple TV is missing 4K Disney movies in part because of Apple’s insistence that 4K titles cost as much as their HD counterparts, and that means you’ll be paying as much as $25 to buy a movie instead of Apple’s usual $20. With that said, it might be worth the premium if you have an Apple TV and would rather not use another media hub (or smart TV) to fill out your 4K catalog.

Via: Reddit, 9to5Mac

Source: Vudu Forum

7
May

Apple’s influential, iconic iMac turns 20


There are few individual computer models that have left a lasting mark on the industry, but you can definitely put the iMac on that list. Apple introduced its signature all-in-one desktop at a special event on May 6th, 1998, and it’s safe to say the system has had a lasting impact on technology at large. At the same time, the iMac has also been a symbol of the cultural zeitgeist, including for Apple itself — it shows how the company evolved from an underdog in a Windows world to a behemoth focused more on phones than PCs. The iMac has had a long journey, but it’s worth following to see just how much the industry has changed in the past 20 years.

The original iMac

The very first iMac may seem quaint today with its 15-inch CRT screen, tiny 4GB hard drive and decidedly late ’90s translucent styling, but at the time it was a minor revolution in… well, just about everything. This was a departure from the beige boxes that defined most PC designs, and it was designed from the ground up for internet access at a time when the feature was still far from ubiquitous — that’s primarily where the “i” in iMac comes from. It also helped usher in the USB era. While USB was certainly available before the iMac, Apple’s complete shift away from legacy ports (combined with Microsoft’s improved USB support in Windows 98) prompted many companies to build USB peripherals and jumpstart adoption of the universal connector.

And it’s no secret that the iMac was instrumental to Apple’s comeback. Its commercial success revitalized Apple after years of business blunders, giving it the financial runway to expand as well as a clear focus: Apple would be focused on ease of use, simplicity and design from here on out. It was also instrumental to the career of legendary designer Jonathan Ive — it was the project that cemented his reputation for minimalist yet memorable products.

iMac G4: Apple gets experimental


Apple faced a difficult problem as the original iMac line reached its twilight. How do you follow up on an iconic computer when its defining feature, the tube display, was outdated? For Apple, it was simple: design something even more radical than its predecessor. The 2002-era iMac G4 took its inspiration from a sunflower, putting its then-impressive 15-inch LCD (later models would jump to 17 and 20 inches) on a neck that let the display tilt and swivel while preserving Apple’s all-in-one design.

It wasn’t as big a hit as its predecessor, as high prices and the trailing performance of PowerPC chips did it no favors. However, it was central to Apple’s digital hub strategy (where the Mac was the center of your media creation and playback) and helped spur technological concepts like digital cameras and home movie editing. It also showed that Ive wasn’t afraid to mess with success — one of his defining traits over the past 20 years.

The iMac G5 and Apple’s leap to Intel


The iPod had barely been around for a few months when the iMac G4 arrived, but it had become the star of the show by 2005 — and Apple wanted a computer that was a perfect complement to its popular music player. The result was the iMac G5, a 17- and 20-inch all-in-one that borrowed more than a few pages from the iPod’s design book even as it broke some ground of its own. It brought a much-needed speed boost (Apple finally had a reasonably fast 64-bit CPU in a home machine), but it also introduced the distinctive floating-computer-on-a-stand design that Apple uses for iMacs to this day. Later models also helped popularize desktop webcams and served as Apple’s first experiments with media hubs through its Front Row interface. If you owned an early Apple TV, you could trace its roots back to this machine.

However, many will know this design for another reason: the iMac was Apple’s first production Intel-based machine. The very first x86-based Mac was virtually unchanged on the outside from the iMac G5, but it was both faster and could do things unheard of for any Mac, such as dual-booting Windows. That Apple led its processor switch with the iMac said a lot about both the flexibility of its design (it wasn’t nearly as exotic as the circular iMac G4) and its symbolic importance in the lineup.

The aluminum iMac: Apple turns serious


Up until 2007, iMac designs always had a certain amount of whimsy to them that indicated they were for home users. The aluminum iMac changed all that. Its metal-and-glass design was not only reminiscent of the original iPhone, but decidedly more sober, as if it was destined for your office. Its feature set reflected that, too. Apart from larger 20- and 24-inch screens, the aluminum iMac touted dual-core processors and options for a wireless mouse and keyboard. While the iMac was no longer turning the industry on its ear, it had clearly matured a lot in the space of a decade.

Later aluminum models were refinements of the 2007 version’s basic formula, but they mirrored the evolution of both the wider tech industry and the iMac’s role — it was becoming an entry-level pro machine in a period where laptops and smartphones were taking over. The unibody iMac (from 2009 onward) touted a 27-inch display and quad-core processors, with later models helping to introduce Intel’s high-speed Thunderbolt port. The 2012 iMac, meanwhile, symbolized the death of the optical drive as people turned to cloud services and streaming video instead of burning backup discs and watching DVDs. It was far from the centerpiece of Apple’s lineup by that point, but there was no question that it showed where Apple was going.

The iMac as workstation: Retina Display and iMac Pro


By 2014, the computing landscape was much, much different than in 1998. Smartphones and tablets were putting a serious dent in PC sales, and the iMac was being asked to fulfill roles that would have been unimaginable for an all-in-one in the ’90s, such as pro-level media editing. Apple’s solution? Lean into the iMac’s growing importance as a workhorse. The iMac with Retina Display packed one of the first 5K screens on the market, at an aggressive price to boot. Many stand-alone 5K displays at the time cost more than the entire iMac, making it a bargain for creatives who valued resolution over a modular design.

And if you need evidence of exactly how much the iMac and Apple have changed in 20 years, you need only look at the iMac Pro. It’s arguably a stopgap system for customers waiting for the redesigned Mac Pro, but it represents an almost complete inversion of Apple’s strategy. Where the 1998 system was built to be accessible for computing rookies, the iMac Pro is strictly for demanding work between its Xeon workstation processor, pro-level graphics, SSD storage and its eye-watering $4,999 starting price. It’s far from a “computer for the rest of us,” as Apple used to say, but that’s also because computers themselves are no longer the all-purpose devices they used to be.

7
May

Cambridge Analytica’s Facebook data models survived until 2017


Facebook may have succeeded in getting Cambridge Analytica to delete millions of users’ data in January 2016, but the information based on that data appears to have survived for much longer. The Guardian has obtained leaked emails suggesting that Cambridge Analytica avoided explicitly agreeing to delete the derivatives of that data, such as predictive personality models. Former employees claimed the company kept that data modelling in a “hidden corner” of a server until an audit in March 2017 (prompted by an Observer journalist’s investigation), and it only certified that it had scrubbed the data models in April 2017 — half a year after the US presidential election.

In a response to the Guardian, a Cambridge Analytica spokesperson denied that there was a “secret cache,” and said that it had started looking for and deleting derivatives of that data after the initial wipe, finishing in April 2017. It was a “lengthy process,” the company claimed.

Facebook has already outlined its stance on the subject. In his testimony to the US House of Representatives, company chief Mark Zuckerberg said that Cambridge Analytica “represented to us” that it had deleted models based on the social network’s data. A spokesperson added that Cambridge Analytica claimed all the derivative data was gone in a September 2016 statement from its lawyers. If the scoop is accurate, however, both statements are problematic. Facebook did tell Cambridge Analytica to erase derivatives, but it didn’t double-check that Cambridge Analytica had done exactly that. And if Cambridge Analytica’s lawyers had stated that the data had been erased by September 2016, why did the company just claim the deletion took months longer?

This is partly water under the bridge now that Cambridge Analytica is closing down. At the same time, it underscores just how messy the situation was (and to some degree, still is). Whether or not Facebook was completely diligent in getting on-the-record promises, there was only so much it could do to verify that all aspects of the data were gone. The only certainty is that users and their privacy were caught in the crossfire.

Source: Guardian

7
May

Chinese spies linked to decade-long hacking campaign


China’s long-running hacking efforts may be more extensive than first thought. Security researchers at ProtectWise’s 401TRG team have determined that a long series of previously unconnected attacks are actually part of a concerted campaign by Chinese intelligence officials. Nicknamed the Winnti umbrella, the effort has been going on since “at least” 2009 and has struck game companies (like Nexon and Trion) and other tech-driven businesses to compromise political targets.

There are common methods and goals to the attacks. They usually start with phishing to trick someone into compromising the company network (often using political bait), and then use a mix of custom and off-the-shelf malware to collect info. They’ll often stay undetected by “living off the land” with the victim’s own software, such as system admin tools. The intruders are primarily looking for code signing certificates and “software manipulation,” according to the report.

The perpetrators also make occasional mistakes, and it’s those slip-ups that helped identify the Chinese origins. They normally use command-and-control servers to hide, but they inadvertently accessed some machines using IP addresses from China Unicom’s network in a Beijing district.

Even with these mistakes, the Winnti umbrella is an “advanced and potent threat,” 401TRG said. It’s also a not-so-subtle reminder that China’s state-backed hacking efforts are deeper than they seem at first glance — hacks that appear to be one-off incidents may be linked if you look for subtler similarities.

Via: Ars Technica

Source: 401TRG
