
29 Jan

Dell may sell itself to VMware, a company it already owns


Dell has been a privately run company for more than four years, but it appears ready to return to public life — in a convoluted way. CNBC sources have claimed that Dell is exploring a “reverse merger” with VMware in which the virtualization software maker (80 percent owned by Dell following the EMC deal) would buy its parent, letting the resulting company go public without having to launch a new stock offering. It would also let Dell pay down some of its roughly $50 billion in debt.

This isn’t set in stone. The tipsters also said that a number of alternatives are on the table, including a straightforward public offering, other takeovers (the targets haven’t been named), or buying the remaining 20 percent stake in VMware. Dell is unlikely to sell to an outside company or give up VMware, however.

Dell has declined to comment on the report.

A reverse merger would be one of the more “audacious” options for Dell, but it would reflect how much things have changed for the company in the space of a few years. It went private at a time when it was struggling and wanted the freedom to restructure without the pressure that comes with publicly traded stock. The situation isn’t completely rosy going into 2018 — Dell posted a $941 million net loss in its latest quarter, due in part to paying down $1.7 billion of its debt. It’s in a stronger position than it was in 2013, however, with less dependence on its PC business. A reverse merger could help it cut costs and raise funds that would otherwise be out of reach.

Source: CNBC

29 Jan

Step inside the Unabomber investigation in VR


In 1996, law enforcement officials arrested Ted Kaczynski, aka the Unabomber, after nearly two decades of investigation. But it wasn’t until the Washington Post and the New York Times published Kaczynski’s anonymous 35,000-word manifesto that a tip from his brother David led officials to Kaczynski and his isolated cabin in Montana. The massive nationwide hunt for the Unabomber, whose seemingly random attacks and lack of traceable evidence stumped law enforcement officials for years, is an interesting case, and one the Newseum in Washington, DC has hosted an exhibit on for the past few years — a display that includes Kaczynski’s actual cabin. The exhibit has also featured a VR experience that let visitors explore the cabin from the perspective of an FBI agent, decide whether to publish the manifesto, and even disarm the live bomb found in Kaczynski’s cabin. Now, Variety reports, Unabomber: The Virtual Reality Experience is available for anyone to explore.

“We want to be able to reach people wherever they are,” Newseum CTO Mitch Gelman told Variety. “We have a generation that is growing up on video games. VR is an incredibly experiential form of storytelling.” And in that regard, Unabomber: The Virtual Reality Experience includes voice-over commentary from agents who led the investigation, newspaper clippings, and additional videos for those who want to learn more.

Museums have been turning to VR more and more as a way to augment visitors’ experiences. London’s Tate Modern is using VR as part of an immersive Modigliani exhibit that’s open until April, while the Royal Academy of Arts has hosted an exhibit on how VR and similar technologies are affecting artists and their work. And the Smithsonian has been working on a VR experience that would let people anywhere experience some of the works it has on display.

Unabomber: The Virtual Reality Experience was produced by the Newseum and Immersion VR with support from Vive Studios. It’s on sale now through Viveport for $5 and will be available on Steam in the near future. You can check out a trailer below.

Via: Variety

29 Jan

Facebook adds variety series hosted by NFL star Von Miller


The latest sports star to get a Facebook Watch show is Denver Broncos linebacker Von Miller. Variety reports that Miller will get a live weekly variety series that brings together comedy, celebrity guests, teammates and Miller’s brothers. “Having my own show is a dream come true,” Miller told Variety. “I look forward to bringing the fans into my home and into my world each week. I know we are going to have some fun.”

Miller’s show joins a slew of other sports-related series on Facebook Watch. Dwyane Wade’s BackCourt Wade premiered on the platform in November, while Marshawn Lynch’s No Script reality series debuted last September. Ball in the Family, which features LaVar Ball and his basketball-playing sons, is wrapping up its second season on Facebook Watch. The reality shows work well alongside Facebook’s sports coverage, which includes NFL, college basketball, and wrestling programming. ESPN also just signed on to develop a Facebook Watch version of its popular First Take talk show.

Miller’s show, Von Miller’s Studio 58, will run on Wednesdays at 8PM Eastern starting this week. Facebook has ordered eight episodes and you can check out a trailer below.

Via: Variety

Source: Facebook

29 Jan

Facebook Announces Series of Updates Aimed at Improving User Privacy


Facebook this week has detailed how it plans to give its users “more control” of their privacy on the mobile and desktop versions of the social network. One of the major new additions is described as a “privacy center” that will provide simple tools to manage privacy and combine all core privacy settings into one easy-to-find interface.

To explain these features to its users, the company today is rolling out educational videos in the News Feed covering topics like “how to control what information Facebook uses to show you, how to review and delete old posts, and even what it means to delete your account.” This marks the first time Facebook has shared its privacy principles with its users; the company states that the updates “reflect core principles” it has maintained on privacy over the years.

As pointed out by TechCrunch, Facebook’s planned rollout of beefed up privacy features comes ahead of a May 25 deadline for compliance with the General Data Protection Regulation (GDPR) in the EU. The GDPR’s goal is to give citizens back control over their personal data while “simplifying” the regulatory environment for business, essentially affecting “any entities processing the personal data of EU citizens.”

Under the GDPR, the new game Facebook will need to play is winning trust: which is to say, it will need to make users trust its brand to protect their privacy, and therefore feel comfortable consenting to the company processing their data (rather than asking it to delete that data). So PR and carefully packaged info-messaging to users are going to be increasingly important for Facebook’s business going forward.

While all Facebook users will gain access to the updates, beginning today users in Europe will see reminders to take part in the network’s existing privacy check-up feature. As for the new privacy center, Facebook didn’t offer specifics on when it will launch or whether the controls offered in the United States will match those in Europe. Another part of Facebook’s plan is to run data protection workshops for small and medium businesses — again launching first in Europe — that will focus on the GDPR.

Earlier in January, Facebook CEO Mark Zuckerberg announced a major change coming to the News Feed, which aims to cut down on content from publishers and instead highlight more content from family and friends. The update was described as a way to foster more “meaningful social interactions” on Facebook by reducing the number of posts from businesses, brands, and media.

Tags: Facebook, privacy

29 Jan

Apple’s New ‘Selfies on iPhone X’ Ad Campaign Features Brazilian Carnival and NHL All-Star Steven Stamkos


Apple last week shared a new video that showcases selfies taken with Portrait Lighting effects on the iPhone X, kickstarting the company’s new “Selfies on iPhone X” ad campaign across different forms of media.

Next up in the campaign is a video promoting the annual Carnival of Brazil, a weeklong celebration of music, dance, food, and drink, with particularly large festivals in cities like Rio de Janeiro and São Paulo. The ad, accompanied by a webpage, again highlights Portrait Lighting selfies shot on the iPhone X.


Apple shared a similar Brazilian Carnival video last year amid a reported push into more regional marketing campaigns.

The campaign extends to billboards, which will likely appear in major cities across the world over the coming weeks. NHL All-Star Steven Stamkos recently announced his participation in the campaign on Twitter and shared a photo of himself standing in front of his own Portrait Lighting selfie at Amalie Arena in Tampa, Florida.

So proud to be a part of Apple’s new campaign, selfies on iPhone X. #ShotoniPhone #NHLAllStar pic.twitter.com/D73bzupO8w

— Steven Stamkos (@RealStamkos91) January 27, 2018

The captain of the Tampa Bay Lightning is likely just one of several notable figures who will be featured in the campaign, which is similar to Apple’s larger “Shot on iPhone” series. We’ll be sure to keep an eye out for more ads, and if you spot one yourself, feel free to share it in the comments section.

Tag: Apple ads

29 Jan

Ming-Chi Kuo Casts Doubt on iPhone SE 2, Expects Few Changes Should New Model Launch


KGI Securities analyst Ming-Chi Kuo, who has sources within Apple’s supply chain in Asia, has issued a research note today that casts doubt on rumors about a second-generation iPhone SE launching in the second quarter of 2018.

Kuo believes Apple doesn’t have enough spare development resources to focus on launching another iPhone, with three new models already in the pipeline: a second-generation iPhone X with a “much different” internal design, a larger 6.5-inch version dubbed the iPhone X Plus, and a lower-priced 6.1-inch iPhone with Face ID but with design compromises such as an LCD screen.

An excerpt from the research note, obtained by MacRumors, edited slightly for clarity:

The announcement of three new iPhone models in the same quarter in the second half of 2017 was the first time Apple made such a major endeavor, and we believe the delay of iPhone X, which had the most complicated design yet, shows that Apple doesn’t have enough resources available for development. […]

With three new models in the pipeline for the second half of 2018, we believe Apple may have used up its development resources. Also, we think the firm will do all it can to avoid repeating the mistake of a shipment delay for the three new models. As such, we believe Apple is unlikely to have enough spare resources to develop a new iPhone model for launch in 2Q18.

If there really is a so-called iPhone SE 2 on Apple’s roadmap, Kuo expects it to have few outward-facing changes. He predicts the device would likely gain a faster processor and a lower price rather than iPhone X-style features such as a nearly full-screen design, 3D sensing for Face ID, or wireless charging.

There have been many rumors about Apple launching a new iPhone SE in 2018, with most of the sources based in Asia, including research firm TrendForce and publications like the Economic Daily News. The latest rumor suggested a new iPhone SE with wireless charging could launch in May-June.

The current iPhone SE looks much like the iPhone 5s, including its smaller four-inch display preferred by a subset of customers. The device is powered by Apple’s A9 chip, like the iPhone 6s and iPhone 6s Plus, and it has 2GB of RAM, a 12-megapixel rear camera, a 3.5mm headphone jack, and Touch ID.

Apple hasn’t fully refreshed the iPhone SE since it launched in March 2016, but it did double its available storage capacities to 64GB and 128GB last March. It also dropped the device’s starting price to $349 last September.

Related Roundup: iPhone SE
Tags: KGI Securities, Ming-Chi Kuo
Buyer’s Guide: iPhone SE (Don’t Buy)

29 Jan

iMac Pro Again Available for $3,999 From Micro Center Stores


Micro Center retail stores are once again offering the entry-level iMac Pro for $3,999, an impressive discount of $1,000 off the regular $4,999 price tag for the newly released machine.

The same deal was offered earlier in the month, and the iMac Pro models available at Micro Center stores were snapped up quickly. Based on the company’s online stock-checking tool, most Micro Center locations have at least one iMac Pro in stock, with some, such as the Westmont Micro Center in Illinois, listing 10+ machines available for purchase.

The $1,000 discount is available in Micro Center retail stores only; the $3,999 iMac Pro is not offered through the Micro Center website.

Apple’s base configuration 27-inch 5K iMac Pro, which Micro Center is discounting, comes equipped with a 3.2GHz 8-core Intel Xeon W processor, Thunderbolt 3 support, 32GB of 2,666MHz ECC RAM, a 1TB SSD, and a Radeon Pro Vega 56 graphics card with 8GB of HBM2 memory.


No other retailer is offering the iMac Pro at such a significant discount at this time. Micro Center is limiting purchases to one per household, and available supply could go quickly.

Micro Center stores are located primarily in the Midwest and South, with 25 stores nationwide.

Related Roundups: iMac, Apple Deals
Tag: Micro Center
Buyer’s Guide: iMac (Neutral)

29 Jan

Don’t be fooled by dystopian sci-fi stories: A.I. is becoming a force for good


One of the most famous sayings about technology is the “law” laid out by the late American historian Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.”

It’s a great saying: brief, but packed with instruction, like a beautifully poetic line of code. If I understand it correctly, it means that technology isn’t inherently good or bad, but that it will certainly have an impact on us in some way — which means that its effects are not neutral. A similarly brilliant quote came from the French cultural theorist Paul Virilio: “the invention of the ship was also the invention of the shipwreck.”

“Technology is neither good nor bad; nor is it neutral.”

To adopt that last image, artificial intelligence (A.I.) is the mother of all ships. It promises to be as significant a transformation for the world as the arrival of electricity was in the nineteenth and twentieth centuries. But while many of us will coo excitedly over the latest demonstration of DeepMind’s astonishing neural networks, a lot of the discussion surrounding A.I. is decidedly negative. We fret about robots stealing jobs, autonomous weapons threatening the world’s wellbeing, and the creeping privacy issues of data-munching giants. Heck, if the dream of artificial general intelligence is ever achieved, some pessimists seem to think the only debate is whether we’re obliterated by Terminator-style robots or turned into grey goo by nanobots.

While some of this technophobia is arguably misplaced, it’s not hard to see critics’ point. Tech giants like Google and Facebook have hired some of the greatest minds of our generation, and put them to work not curing disease or rethinking the economy, but coming up with better ways to target us with ads. The Human Genome Project, this ain’t! Shouldn’t a world-changing technology like A.I. be doing a bit more… world changing?

A course in moral A.I.?

2018 may be the year when things start to change. While they’re still small seeds just beginning to sprout, there is growing evidence that the project of making A.I. a true force for good is gaining momentum. For example, starting this semester, the School of Computer Science at Carnegie Mellon University (CMU) will teach a new class titled “Artificial Intelligence for Social Good.” It touches on many of the topics you’d expect from a graduate and undergraduate class — optimization, game theory, machine learning, and sequential decision making — and will look at each through the lens of its impact on society. The course will also challenge students to build their own ethical A.I. projects, giving them real-world experience with creating potentially life-changing A.I.

Image credit: ITU/R. Farrell

“A.I. is the blooming field with tremendous commercial success, and most people benefit from the advances of A.I. in their daily lives,” Professor Fei Fang told Digital Trends. “At the same time, people also have various concerns, ranging from potential job loss to privacy and safety issues to ethical issues and biases. However, not enough awareness has been raised regarding how A.I. can help address societal challenges.”

Fang describes this new course as “one of the pioneering courses focusing on this topic,” but CMU isn’t the only institution to offer one. It joins a similar “A.I. for Social Good” course offered at the University of Southern California, which started last year. At CMU, Fang’s course is listed as a core course for a Societal Computing Ph.D. program.

“Not enough awareness has been raised regarding how A.I. can help address societal challenges.”

During the new CMU course, Fang and a variety of guest lecturers will discuss a number of ways A.I. can help solve big social questions: machine learning and game theory used to help protect wildlife from poaching, A.I. being used to design efficient matching algorithms for kidney exchange, and using A.I. to help prevent HIV among homeless young people by selecting a set of peer leaders to spread health-related information.

“The most important takeaway is that A.I. can be used to address pressing societal challenges, and can benefit society now and in the near future,” Fang said. “And it relies on the students to identify these challenges, to formulate them into clearly defined problems, and to develop A.I. methods to help address them.”

Challenges with modern A.I.

Professor Fang’s class isn’t the first time the ethics of A.I. has been discussed, but it does represent (and certainly coincides with) renewed interest in the field. A.I. ethics is going mainstream.

This month, Microsoft published a book called “The Future Computed: Artificial intelligence and its role in society.” Like Fang’s class, it runs through some of the scenarios in which A.I. can help people today: letting those with limited vision hear the world described to them by a wearable device, and using smart sensors to let farmers increase their yield and be more productive.

Image credit: Ekso Bionics

There are plenty more examples of this kind. Here at Digital Trends, we’ve covered A.I. that can help develop new pharmaceutical drugs, A.I. that can help people avoid shelling out for a high-priced lawyer, A.I. that can diagnose disease, and A.I. and robotics projects that can help reduce backbreaking work — either by teaching humans how to perform it more safely or by taking them out of the loop altogether.

All of these are positive examples of how A.I. can be used for social good. But for it to really become a force for positive change in the world, artificial intelligence needs to go beyond simply good applications. It also needs to be created in a way that is considered positive by society. As Fang says, the possibility of algorithms reflecting bias is a significant problem, and one that’s still not well understood.

The possibility of algorithms reflecting bias is a significant problem, and one that’s still not well understood.

Several years ago, Harvard University researcher Latanya Sweeney, who is African-American, “exposed” Google’s search algorithms as inadvertently racist: they linked names more commonly given to black people with ads relating to arrest records. Sweeney, who had never been arrested, found that she was nonetheless shown ads asking “Have you been arrested?” that her white colleagues were not. Similar case studies have shown that image recognition systems are more likely to associate a picture of a kitchen with women and one of sports coaching with men. In these cases, the bias wasn’t necessarily the fault of any one programmer, but rather of discriminatory patterns hidden in the large datasets the algorithms are trained on.

The same is true of the “black boxing” of algorithms, which can make them inscrutable even to their own creators. In Microsoft’s new book, its authors suggest that A.I. should be built around an ethical framework, a bit like science fiction writer Isaac Asimov’s “Three Laws of Robotics” for the “woke” generation. Their six principles hold that A.I. systems should be fair, reliable, and safe; private and secure; inclusive; transparent; and accountable.

“If designed properly, A.I. can help make decisions that are fairer because computers are purely logical and, in theory, are not subject to the conscious and unconscious biases that inevitably influence human decision-making,” Microsoft’s authors write.

More work to be done

Ultimately, this is going to be easier said than done. By most measures, A.I. research done in the private sector far outstrips work done in the public sector. The problem with this is accountability in a world where algorithms are guarded as closely as missile launch codes. There is also no incentive for companies to solve big societal problems if doing so won’t immediately benefit their bottom line (or score them some brownie points that might help them avoid regulation). It would be naive to think that the motives of profit-driven companies are all going to be altruistic, no matter how much they might suggest otherwise.

For broader discussions about the use of A.I. for public good, something is going to have to change. Is it recognizing the power of artificial intelligence and putting in place more regulations allowing for scrutiny? Does it mean companies forming ethics boards, as Google DeepMind has done, as part of their research into cutting-edge A.I.? Is it awaiting a market-driven change, or backlash, that will demand tech giants offer more information about the systems that govern our lives? Is it, as Bill Gates has suggested, implementing a robot tax that would curtail the use of A.I. or robotics in some situations by taxing companies for replacing their workers? None of these solutions is perfect.

And the biggest question of all remains: who exactly defines “good”? Debates about how A.I. can be a force for good in our society will involve a great many users, policy makers, activists, technologists, and other interested parties working out what kind of world we want to create, and how best to use technology to achieve it.

As DeepMind co-founder Mustafa Suleyman told Wired: “Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical A.I. really means. If we manage to get A.I. to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

Courses like Professor Fang’s aren’t the final destination, by any means. But they are a very good start.



29 Jan

Save time and get healthy with the best meal-planning apps


Planning out meals is one of the best ways to eat healthier, cut calories, lose weight, and feed the whole family fast. It’s also pretty hard to do — but the right app can make your modern meals a whole lot easier to create, shop for, and prepare. Take a look at the best meal-planning apps, all totally free and ready to help out.

Mealime (Free)

Mealime (forgive the play on words) is designed around planning family meals, or meals for guests, the easy way. You can create profiles for everyone you cook for, listing likes, dislikes, general eating habits, allergies, and so on, and group those profiles into couples or whole families to make planning a little easier.

You can then look for recipes that match all your requirements. Pick one, and the app gives you full instructions and can automatically add the necessary ingredients to your grocery list. Most recipes focus on fast prep times of around 30 minutes, so you may be able to save even more time in the kitchen.

Download now from:

iTunes Google

FoodPlanner (Free)

FoodPlanner is based around recipes. It lets you browse the web for healthy recipes and download them into the app, which then gives you the nutritional data for each meal and can automatically generate a shopping list. For the truly serious, an extra inventory-management system keeps track of your current ingredients, and you can also create recipes from scratch if you wish. There are sharing features, but they are Android-focused.

Download now from:

iTunes Google Amazon

Mealplan ($4)

Mealplan presents you with meal tags that you can drag and drop into a weekly schedule to quickly form your meal plans (and even email them to other people). The tags make it easy to search for specific meals, and can automatically generate grocery lists for you. You can also tweak meals to add snacks, put in links to specific recipes, or remove certain meals entirely if you have other plans. You can search for new meals and generate a tag for them, too. There’s a learning curve, but it’s a fun system, particularly if you have an iPad.

Download now from:

iTunes

MealBoard ($4)

Do you love to customize every little detail? Then MealBoard may be the app for you. It acts like many of the other apps on our list, with a search function for meals pulled from the internet, the ability to plan out meals on a calendar, and the option to generate a grocery list. But a couple of features make it unique: The interface is particularly pleasant to use and easy to customize, and there’s a pantry mode that allows you to move ingredients to your pantry when you buy them and remove them when you run out.

Download now from:

iTunes

Eat This Much (Free)

Here’s a different approach: if your primary goal is to lose weight, Eat This Much asks you to put in your food preferences, how much money you want to spend, your schedule, and how many calories each meal should contain. It will then generate meal plans for you and provide grocery lists for the ingredients. If you like cooking (as opposed to meal delivery) but want to develop healthier eating habits, this app could help you do just that.

Download now from:

iTunes Google

Lose It! (Free)

Lose It! is also a weight-loss app, but instead of putting in the number of calories you want per meal, you just set general goals and a body weight target that you want to reach. Then you track what you are eating (remember to be accurate) and what sort of exercise you are getting. The app includes a food database with millions of options to choose from, a scanning function so you can instantly add purchased foods, and even some photo recognition for basic foods. It’s ideal if you like to combine planning with tracking.

Download now from:

iTunes Google

Paprika ($5)

The Paprika Recipe Manager is a very interactive kind of meal planner. In addition to the usual features like finding recipes online, building automatic grocery lists, and planning meals for the week or month, there are also tools that you can use to go deeper. Automatically scale ingredients, cross them off as you add them, add photos to your recipes, and customize your grocery categories based on how you like to shop — there are tons of ways to make sure everything is just the way you like it.

Download now from:

iTunes Google Amazon

Yummly (Free)

Yummly is a general food-sharing and recipe-finding app that features plenty of vivid photos and a rating system to help you find the most popular (or at least the most talked-about) recipes online. If you have your meal routine pretty well down but need some help finding the right recipes, Yummly is a great, albeit more casual, app. We particularly suggest the Yummly Recipes and Shopping List version to help you plan out your meals.

Download now from:

iTunes Google
