I, Alexa: Should we give artificial intelligence human rights?
A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.
In Douglas Adams’ second Hitchhiker’s Guide to the Galaxy book, The Restaurant at the End of the Universe, he relates the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on — thereby “eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.”
The ethics question, Adams explains, comes when the intelligent elevator becomes bored of going up and down all day, and instead decides to experiment with moving from side to side as a “sort of existential protest.”
We don’t yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they’ve not bothered sharing them with us yet. In fact, as we’ve documented time and again at Digital Trends, the field of AI is currently making possible a bunch of things we never thought realistic — such as self-driving cars or Star Trek-style universal translators.
Have we also reached the point where we need to think about rights for AIs?
You’ve gotta fight for your right to AI
It’s pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. On a low resolution level, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches — and they’re getting bigger all the time.
Have we also reached the point where we need to think about rights for AIs?
Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms: ones designed to replicate the workings of the human brain, rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, or efforts like the so-called “OpenWorm” project, which sets out to recreate the connectome — the wiring diagram of the central nervous system — of the tiny hermaphroditic roundworm Caenorhabditis elegans, still the only connectome of a living creature that humanity has fully mapped.
In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.
Before we reach that goal, as AI surpasses animal intelligence, we’ll have to begin to consider how AIs compare to the kind of “rights” that we might afford animals through ethical treatment. Thinking that it’s cruel to force a smart elevator to move up and down may not turn out to be too far-fetched; a few years back English technology writer Bill Thompson wrote that any attempt to develop AI coded to not hurt us, “reflects our belief that an artificial intelligence is and always must be at the service of humanity rather than being an autonomous mind.”
The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of legal personhood?
This is not as ridiculous as it sounds, and nor does it suggest that AIs have “graduated” to a particular status in our society. Instead, it reflects the complex reality of the role that they play — and will continue to play — in our lives.
Smart tools in an age of non-smart laws
At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold the person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a fault, we blame the company that made the gun for the damage caused.
So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge’s proclamation that the suit be brought against Athlone rather than the batting bot because, “robots cannot be sued.”
This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.
There are multiple differences between AI technologies of today (and certainly the future) and yesterday’s tech, however. Smart devices like self-driving cars or robots won’t just be used by humans, but deployed by them — after which they act independently of our instructions. Smart devices, equipped with machine learning algorithms, gather and analyze information by themselves, and then make their decisions. It may be difficult to blame the creators of the technology, too.
“Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”
As David Vladeck, a law professor at Georgetown University in Washington D.C., has pointed out in one of the few in-depth case studies looking at this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI’s components can make it tough to identify the responsible party. That counts for double when you’re talking about “black boxed” AI systems that are inscrutable to outsiders.
Vladeck has written: “Some components may have been designed years before the AI project had even been conceived, and the components’ designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”
It’s the corporations, man!
Awarding an AI the status of a legal entity wouldn’t be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.
Although it hasn’t been tested, Shawn Bayern, a law professor at Florida State University, has pointed out that an AI may technically already have this status, thanks to a loophole that allows it to be put in charge of a limited liability company, thereby making it a legal person. Another route to this outcome might be taxation, should a proposal like Bill Gates’ “robot tax” ever be taken seriously on a legal level.
It’s not without controversy, however. Granting AIs this status would stop creators from being held responsible if an AI carries out an action they never explicitly intended. It could also encourage companies to be less diligent with their AI tools — since they could technically fall back on the excuse that the AI acted outside their wishes.
There is also no way to punish an AI, since punishments like imprisonment or death mean nothing to it
“I’m not convinced that this is a good thing, certainly not right now,” Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. “My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities.”
It is, however, a compelling area of examination — because it doesn’t rely on any benchmarks being achieved in terms of ever-subjective consciousness.
“Today, corporations have legal rights and are considered legal persons, whereas most animals are not,” Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. “Even though corporations clearly have no consciousness, no personality and no capacity to experience happiness and suffering; whereas animals are conscious entities. Irrespective of whether AI develops consciousness, there might be economic, political and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen.”
Not science fiction for long
At present, these topics still smack of science fiction but, as Harari points out, they may not stay that way for long. Based on the way AIs are used in the real world, and the very real attachments people form with them, questions such as who is responsible if an AI causes a person’s death, or whether a human can marry his or her AI assistant, are surely ones we’re going to grapple with during our lifetimes.
“The decision to grant personhood to any entity largely breaks down into two sub-questions,” Danaher said. “Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn’t be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity.”
In other words, we shouldn’t necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it’s vital that someone is held accountable for what they do.
At the same time, it’s a mistake to think of AI as having the same relationship with us that we enjoyed with previous non-smart technologies. There’s a learning curve here and, even if we’re not yet technologically at the point where we need to worry about cruelty to AIs, that doesn’t mean it’s the wrong question to ask.
So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?
OnePlus still wants you to care about the OnePlus 5’s DxOMark score
OnePlus 5’s DxOMark score is about to be revealed, but the phone maker is in a tough spot.
Back in May, before we knew what the OnePlus 5 looked like or whether its camera was a dual setup, the company boasted of a partnership with DxOMark, a popular camera testing platform that companies like OnePlus (and HTC, Samsung, LG, and others) like to use as a way to promote their optical prowess.

Here’s what OnePlus said about the partnership at the time:
We’re happy to announce that we have teamed up with DxO to enhance your photography experience with our upcoming flagship, the OnePlus 5. DxO is perhaps most well-known for creating the defining photography benchmark, the DxOMark. They’ve got years of imaging experience and expertise, both for professional cameras and for smartphones.
Working alongside DxO, we’re confident the OnePlus 5 will be capable of capturing some of the clearest photos around.
Well, the phone’s June 20 announcement and release came and went, and nary a peep was heard from OnePlus or DxOMark about the so-called partnership. Since then, we’ve learned a lot about the dual camera setup and have pitted the OnePlus 5 against incumbents like the Galaxy S8 and current DxO leader, the HTC U11 — and it doesn’t fare so well.
Still, OnePlus has two things to lean on: further improvements to the camera through software updates and a high score from DxOMark, which should be coming soon, according to the company’s Facebook page.
There are two possible scenarios from this impending announcement: either OnePlus will score higher than the HTC U11’s current score of 90 and top the charts, putting into question all of our subjective and objective remarks on the company’s 16-megapixel shooter, or the phone will earn a decent-but-not-great score, likely 86 or 87, which would put it on the same level as the Huawei P10 or iPhone 7. The second result is more likely, but it also wouldn’t look great for OnePlus, since the company went out of its way to optimize its camera setup for DxOMark’s test suite.
Of course, even the most stringent test suites have an element of subjectivity to them, since we all enjoy different visual aspects of camera sensors, lenses, and the software that powers them. But as with devices like the Google Pixel, it’s fairly easy to assert that its low-light performance is objectively better than most, if not all other phones on the market and that the HTC U11 does a fantastic job taking photos in almost any lighting condition.
Unfortunately, at this point in the game, as good as the camera can be, it would be hard to assert the same thing about the OnePlus 5.
Virgin Galactic to conduct first powered spaceship tests in 3 years
Virgin Galactic is determined to put its private space travel plans back on track following its tragic 2014 crash. Richard Branson tells Bloomberg that the company is about to resume powered test flights for the first time in close to 3 years, ending a series of glide-only tests that began in December. The company will fly in the atmosphere every 3 weeks, and plans to return to space (or at least, the edge of space) by November or December.
It’s an ambitious schedule given that Virgin only returned to the sky last September. However, it’ll be necessary if the company expects to fulfill its current vision. Branson reiterates that he hopes to start commercial passenger service by the end of 2018, and he’ll embark on his own flight by the middle of that year. The big question is whether or not Virgin will make that schedule in the first place. It’s clear that the outfit has learned some lessons in recent years, but there’s not much leeway in that timetable — it wouldn’t take much to delay the first paid flight by months.
Source: Bloomberg
Twitter left high-profile revenge porn live for 30 minutes
A verified Twitter user with 7.33 million followers shared nude photos of his ex-girlfriend, seemingly without her permission and as a form of public shaming — and the images remained live on the site for 30 minutes, Business Insider reports. The images were shared by Rob Kardashian, first on Instagram and later on Twitter, where they were viewable for half an hour before disappearing. It’s unclear whether Kardashian removed the images himself or if Twitter stepped in. His Twitter account is active and tweets related to the images are still live.
We asked a Twitter spokesperson how the images were allowed to stay online for 30 minutes — ages in a viral-hungry world — and who actually removed them in the end. The spokesperson said the company doesn’t comment on individual accounts, and pointed us to the following passage of its hateful-conduct policy: “The consequences for violating our rules vary depending on the severity of the violation and the person’s previous record of violations. For example, we may ask someone to remove the offending Tweet before they can Tweet again. For other cases, we may suspend an account.”
Kardashian is based in California, where a man was recently sentenced to 18 years in prison for operating a website trafficking in revenge porn.
The nude and body-shaming images were live on Instagram before making their way to Twitter, but Instagram shut down Kardashian’s account relatively swiftly. “At Instagram we value maintaining a safe and supportive space for our community and we work to remove reported content that violates our guidelines,” a spokesperson told Engadget.
Twitter has publicly struggled with its response to harassment on the platform, often failing to address users’ concerns in a timely or productive manner. But, it isn’t alone: Facebook, for example, is also still figuring out how to deal with violence and harassment in users’ live videos and elsewhere. Facebook recently dealt with a surge in revenge porn and cases of sexual extortion, investigating 54,000 potential violations and disabling more than 14,000 accounts in January alone.
Via: Business Insider
Google’s next DeepMind AI research lab opens in Canada
Google’s DeepMind artificial intelligence team has been based in the UK ever since it was acquired in 2014. However, it’s finally ready to branch out — just not to the US. DeepMind has announced that its first international research lab is coming to the Canadian prairie city of Edmonton, Alberta later in July. A trio of University of Alberta computer science professors (Richard Sutton, Michael Bowling and Patrick Pilarski) will lead the group, which includes seven more AI veterans. But why not an American outpost?
As Recode observes, you can chalk it up to a combination of familiarity and political considerations. There are about a dozen University of Alberta grads already working at DeepMind, and Sutton was the firm’s very first advisor. It only makes sense to set up shop where you already have close allies, especially when the school is at the forefront of AI research.
And simply speaking, the Canadian government is friendlier to AI research than the current American leadership. The country has been courting AI scientists with $125 million in funding (on top of existing provincial efforts) at the same time as the Trump administration is proposing major cuts to scientific research. If you were looking for financial support, where would you go? And it wouldn’t be surprising if Google’s opposition to the US travel ban plays a role. The company believes that the ban limits access to talent, and it won’t face that restriction in Canada.
We’d add that the country is no stranger to big companies establishing AI-related research labs. Uber just recently opened a self-driving car lab in Toronto, while Apple and BlackBerry both have autonomous driving facilities in Ottawa. This is really just the latest in a string of major AI coups for Canada — while there’s still plenty of active American research, it’s almost surprising when a tech giant doesn’t focus its attention north of the border.
Via: Recode
Source: University of Alberta, DeepMind
Apple to Announce Q3 2017 Earnings on August 1
Apple today updated its investor relations page to announce that the company will share its earnings for the third fiscal quarter (second calendar quarter) of 2017 on Tuesday, August 1.
The earnings release will provide a look at ongoing iPhone 7 and 7 Plus sales ahead of the iPhone 8, as well as early sales of the new iPad Pro and Mac models that were introduced at the Worldwide Developers Conference.
Apple’s guidance for the third quarter of fiscal 2017 includes expected revenue of $43.5 to $45.5 billion and gross margin between 37.5 and 38.5 percent. At that range, Apple’s Q3 2017 revenue will exceed Q3 2016 revenue, which was $42.4 billion, but gross margin may fall slightly.
The quarterly earnings statement will be released at 1:30 PM Pacific/4:30 PM Eastern, with a conference call to discuss the report taking place at 2:00 PM Pacific/5:00 PM Eastern. MacRumors will provide coverage of both the earnings release and conference call on August 1.
Gear Up: Scosche MagicMount line keeps phones safe and secure in the car

Scosche is well known in the mount industry, and is highly preferred among its users. Today, we’re going to take a look at a couple of its mounts. If you’re not familiar with Scosche’s range of products, the company specializes in mounts for your tech, and although it does have cradles, most of its mounts just use magnets.
Its website gives you a diagram showing where you can place the magnet:

When I first opened the box, I was reluctant to attach a magnet to the back of my phone, because I’m hoping that my mount will still work long after I replace my phone. So, I went with the alternative option: I put the magnet in-between the phone and the case. I didn’t remove the adhesive sticker, though. I just left the magnet hanging out behind my phone, and yes, it worked like a charm.

Let’s check out a couple of products that Scosche offers with no cradles to mess with, and no permanent stickers to place on the back of your phone.
MagicMount CD
The MagicMount CD conveniently inserts into your car’s CD player, with a “screw” on top to tighten it. It does not damage the CD player, and depending on how your car is set up, it can be the perfect place for hands-free usage. The mount itself swivels so you can angle it until the view is ideal for you, whether that’s portrait or landscape mode.

In my car, I stream all my music and podcasts from my phone, so this mount taking up the CD player is a non-issue. Moreover, I don’t want to mess with the suction cups on the dash, because I sit too far back for it to be convenient. To me, this seems like the perfect place to put a mount. Personally, I love this thing, and it works great. My phone sits in place and doesn’t fall off, and it attaches instantly with the magnet (even if I didn’t attach it directly to my phone or case).
The only potential negative is the placement of your car’s CD slot. Since this mount is inserted into the CD slot, it will inevitably cover up something in your car. For me, it’s my “auto” button for climate control, but I typically leave that on anyway. If I take the phone off the mount, I still have access to the button, but while it is mounted, the button is covered up. Again, to me, no big deal, but give the layout of your dash a quick once-over before you buy.
You can purchase the mount from Scosche’s website or Amazon; pricing is about $20, on average. Scosche | Amazon
MagicMount PRO home/office
The MagicMount Pro home/office is designed to be stationary at home, or in your office. Conveniently, it also comes with a free attachment to charge your Apple Watch at the same time.

I propped this up on the table beside my bed with the suction cup, so I can charge both my phone and watch at night. The suction cup is strong, and the mount doesn’t feel top-heavy even with both devices attached.
Scosche’s website states, “This new mount in Scosche’s MagicMount Pro line-up creates an elegant look to any home or office… a perfect solution for keeping your device safe and your area clutter-free…” and it isn’t lying. This mount looks really nice; if you use it in your office, it easily matches the aesthetic of a 21st-century workspace.

This is definitely a quality accessory, and one that I cannot recommend highly enough. If you do not have an Apple Watch, you do not have to attach the included accessory, so it won’t look awkward. Either way, it looks professional and clean.
Like the CD mount, I highly recommend this. I love how it props up both the watch and the phone, so in the middle of the night, I can just glance over to see incoming messages, and don’t have to physically pick up my phone. You can pick one up for about $40 from Scosche or Amazon. Scosche | Amazon
Lyft reaches one million rides per day but is still well behind Uber
Today, Lyft announced that it’s now providing over one million rides per day. The company announced the milestone in a blog post, which highlighted some of its other achievements as well. Lyft noted that for the last four years, it has shown 100 percent year over year growth and it has launched in 160 new cities so far this year. That brings the company’s reach to 360 communities and 80 percent of the US population.
While the continued growth shows Lyft is holding its own, it still has a long way to go before it catches up with Uber. Founded three years before Lyft, Uber reached the one million rides per day mark in 2014 and as of a year ago was giving an average of 5.5 million rides a day. Uber just recently surpassed its five billionth ride. But Uber has taken a hit recently. Business Insider reported last month that Uber’s market share fell from 84 percent earlier this year to 77 percent by the end of May and Lyft saw a substantial jump in activations in the week after the #DeleteUber campaign.
Lyft has continued to adjust its service in order to make its ride-sharing more convenient for customers. And like its rival, Lyft is also working on self-driving cars. The company said in a post, “Since day one, we’ve worked to embed hospitality in everything we do. As more and more people choose Lyft and we continue to grow, we’ll remain focused on providing the best experience to our passengers and drivers.”
Source: Lyft
Obama Foundation taps social media to fight online echo chambers
Participating in social media these days, especially around contentious political issues, can be an exercise in frustration. Facebook fights its own echo chamber with features like Topics to Follow as much as it can, but it’s going to take much more than smart algorithms to learn how to listen to each other. President Obama, our most tech-savvy president, is stepping into the fray, using his post-presidential foundation to encourage us to become better digital citizens.
As reported by TechCrunch, the Obama Foundation’s chief digital officer, Glenn Brown, took to Medium to “identify the problems and talk about them — openly, together, via the very same channels that, when used without intention and awareness, help create the dysfunction in the first place.” In other words, we use Twitter and Facebook to attack the echo chamber problem in the very spots it came into being. Brown hearkens back to Obama’s words from earlier this year at the University of Chicago, where the former president said, “[We] now have a situation in which everybody’s listening to people who already agree with them.” According to Obama, people “reinforce their own realities, to the neglect of a common reality that allows us to have a healthy debate and then try to find common ground and actually move solutions forward.”
Brown finishes his call for participation with some suggestions. You can respond via the foundation site itself, tag your thoughts on social media with the hashtag #DigitalCitizen, and even answer some questions he poses to get things started, like “Who’s a model of digital citizenship in your world?” and “What people or organizations do you think exemplify digital citizenship when it comes to questions of embracing difference — of thought, identity, or any other variable that you value?” While the wording may seem a little bit like an essay prompt from high school, perhaps the push will help us all start a much-needed conversation.
Via: TechCrunch
Source: Obama Foundation/Medium