6 Jul

Disney’s first 4K Blu-ray will be ‘Guardians of the Galaxy 2’


At last, Disney is getting into Ultra HD movie releases. Fans of the studio’s flicks have been stuck in 1080p for the last couple of years even as other studios have released 4K movies via streaming, downloads and disc. Now Guardians of the Galaxy Vol. 2 director James Gunn has confirmed on Facebook that his movie will be the first one from Disney released in 4K and HDR — especially welcome thanks to the sharp image and vivid colors captured by using Red’s Weapon 8K camera. According to Gunn, “4K UltraHD is almost certainly the best way you can see this movie at home – with more definition and the most vibrant colors possible on your home screen, and with the brightest brights and the blackest blacks. A being composed of light truly appears to be a being composed of light!”

'Guardians of the Galaxy Vol. 2'

This opens the door to Ultra HD releases of other Marvel movies, the Star Wars series (it’s the perfect time for a release of the original trilogy without any updated CGI) and more. An internal document mentions the release date for this movie will be August 22nd, on DVD, Blu-ray and Ultra HD Blu-ray (there’s no mention yet of an early digital release), although there has not yet been an official announcement from Disney.

[Thanks Mark!]

Source: James Gunn (Facebook)

6 Jul

Valve Software’s Steam Link app is now available on Samsung smart TVs


Why it matters to you

PC gamers who own a 2016 or 2017 Samsung smart TV can download and use the Steam Link app for free.

PC gamers with a Samsung smart TV who want to stream their Steam library across the local network without a Steam Link set-top box can now do so with the new Steam Link app. The catch is that it only works with Samsung’s 2016 and 2017 portfolio of smart TVs, and is still considered a beta application, so expect a few glitches for now. The app is currently available through the Samsung Smart Hub.

“During the beta, playback of 1080p video at 60 FPS and support for the Steam Controller are included,” Valve Software said in June. “For worldwide release later this summer, support for streaming 4K resolutions and additional controllers will be added.”

The Steam Link app is based on the same technology that powers Steam’s in-home streaming feature. For the uninitiated, this starts with a gaming-capable PC signed into Steam and connected to the local network. The “client” device, such as a non-gaming laptop, must also be signed on using the same Steam account and local network. A wired connection is best for stable, high-quality streaming.
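The same-account, same-network precondition can be sketched in a few lines. This is a hypothetical helper, not part of Valve’s software; it only illustrates the “same local subnet” idea using Python’s standard `ipaddress` module:

```python
import ipaddress

def same_local_network(host_ip: str, client_ip: str, prefix: int = 24) -> bool:
    """Rough check that the host gaming PC and the client device
    (laptop, smart TV, Steam Link box) sit on the same local subnet,
    which in-home streaming assumes. Assumes a typical /24 home network."""
    subnet = ipaddress.ip_network(f"{host_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(client_ip) in subnet

# A gaming PC and a Samsung smart TV behind the same home router:
print(same_local_network("192.168.1.10", "192.168.1.42"))  # True
# A device on a different network (e.g. a guest VLAN) fails the check:
print(same_local_network("192.168.1.10", "10.0.0.5"))      # False
```

A wired connection matters for throughput and latency, but the check above captures only the topology requirement: both sides must be reachable on the same LAN.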

However, Valve Software also sells a stand-alone set-top box called Steam Link that serves purely as a client device connected directly to an HDTV. Typically sold for $50, the box initiates Big Picture mode on the host gaming PC to provide a streamed, console-like interface. It’s compatible with mouse and keyboard input, Valve’s Steam Controller, and third-party game controllers.

Steam Link first hit the market in early 2015 as part of Valve Software’s Steam Machines initiative. The company wanted to compete directly with consoles in the living room arena by getting computer manufacturers to create powerful, compact desktops capable of high-resolution PC gaming. That initiative also included the creation of a unique controller and a set-top box for extending a Steam Machine’s reach — or any capable gaming PC’s, for that matter — to other TVs in the house.

Talk of a dedicated app for Samsung smart TVs, one that would eliminate the need for Valve’s set-top box, first surfaced in October 2016. Now that the app has arrived in beta, Valve states that a Steam Controller is required, even though the Samsung app seems to work just fine with an Xbox 360 controller. Steam’s in-home streaming service is completely free to use.

This isn’t the first time we have seen a marriage between streaming game services and smart TVs. LG once signed a deal with game streaming subscription service OnLive to include an app for LG-branded Android TV-based HDTVs. Sony includes an app for its PlayStation Now game streaming service on its Bravia-branded HDTVs along with its TV streaming app, PlayStation Vue.

As for Samsung customers who own a smart TV from before 2016, a Steam Link app may not be in the cards, presumably due to the older components within. Valve’s Steam Link set-top box is powered by a Marvell DE3005-A1 processor and a Vivante GC1000 graphics chip, along with memory, storage, and networking components. An older Samsung smart TV would need equal or better components to handle Steam Link game streaming.




6 Jul

Ever wanted to grow your own lamp out of mushrooms? Now you can


Why it matters to you

Objects made out of upcycled fungi not only look unique, they’re good for the planet, too.

Want to inspire envy at your local makerspace, while also showing off your eco-friendly credentials? No problem: New York-based company Ecovative Design is here to help.

Founded by friends Eben Bayer and Gavin McIntyre, the company has “grown” mushroom-based materials for use in everything from packaging to furniture. Ecovative recently launched a new “Grow It Yourself” initiative, designed to use the same mycelium (that’s the vegetative part of a fungus) technology to give customers at home the chance to create their own projects and products.

“The creativity has been amazing,” co-founder and chief scientist Gavin McIntyre told Digital Trends. “College design students [have] created everything from a piggybank to jewelry to a guitar. Makers have created chairs, clocks and even a wedding dress — and a designer, Daniele Trofe, one of the earliest adopters of our Mushroom Material — [has even] created a full scale business.”

If you want an introduction to Ecovative’s GIY project, you can order items including planters, table lamps, and more. All of these come as kits, which you can easily assemble from the comfort of your own home, requiring no more equipment than an oven (or, failing that, a basic house fan), the kit itself, and a bit of patience.

If you’re a step beyond that, you can also order a bag of the mushroom stuff to start creating your own innovative creations from scratch.

While it might sound like a gimmick, Ecovative recently signed a $9.1 million contract with DARPA to develop the next generation of biomaterials. Clearly, this is an area with a whole lot of interest!

“In the twentieth century, we saw the dawn of plastics, derived from fossil fuel resources,” McIntyre continued. “Those materials are found everywhere: in durable goods such as insulation, and in nondurable goods such as protective packaging. We sought to create a material that fit within nature’s recycling system.  That is why we looked to mushrooms. In nature, fungi and mushrooms are nature’s recyclers. With Ecovative’s patented Mushroom Material, we can take any regional waste stream and upcycle it into a higher value product.  At the end of that product’s life cycle, it will passively return to the earth.”

And, hey, even without doing a good thing for the Earth, isn’t it kind of cool to own a lamp or other furniture made out of upcycled fungi?

They’re perfect for the apartment without mushroom. Get it? Much room? We’ll show ourselves out…




6 Jul

Want to 3D print your own marijuana edibles? Check out Potent Rope


Why it matters to you

Edible cannabis filament for 3D printing lets you 3D print marijuana-based snacks customized for you.

Like 3D printing? Have an appetite? Enjoy smoking the occasional doobie? Then you’re the perfect target audience for Potent Rope: a new edible cannabis filament for 3D printing.

Produced in two standard sizes of filament (3mm and 1.75mm) and usable in any standard consumer 3D printer, Potent Rope is made with an FDA-approved water-soluble thermoplastic. Eating thermoplastic might not sound like your idea of a good time, but this particular material is one that the average person reportedly consumes around 44 pounds of every year — in products like beer, wine, and teeth whitening strips. This thermoplastic is combined with pharmaceutical-grade excipients and active cannabis extract, before being extruded into a standard-sized filament.

Impressively, the cannabis extract can be custom tailored with differing levels of cannabinoids, as well as terpenes — expediting the years-long process of selective breeding otherwise needed to create a strain that answers the needs of each consumer.
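As a rough illustration of what “printing your own dosage” could involve, here is a back-of-the-envelope calculation. Every number in it is hypothetical (Potent Rope has not published concentrations, densities, or dosing figures), but it shows how filament diameter and extract concentration would translate into a printed length per dose:

```python
import math

def filament_length_mm(dose_mg, thc_mg_per_g, diameter_mm, density_g_cm3):
    """Length of filament (mm) that carries a target dose (mg),
    given the extract concentration (mg per gram of filament),
    the filament diameter (mm), and its density (g/cm^3)."""
    grams_needed = dose_mg / thc_mg_per_g
    radius_cm = (diameter_mm / 10.0) / 2.0            # mm -> cm
    grams_per_cm = math.pi * radius_cm ** 2 * density_g_cm3
    return (grams_needed / grams_per_cm) * 10.0       # cm -> mm

# Hypothetical example: a 10 mg dose from 1.75 mm filament carrying
# 5 mg of THC per gram, assuming a plastic-like density of 1.2 g/cm^3:
print(round(filament_length_mm(10, 5.0, 1.75, 1.2)))  # ~693 mm of filament
```

The same arithmetic run in reverse is how a printer (or dispensary) could slice a spool into per-dose portions, which is presumably what Potent Rope’s promised dosage-specific CAD designs would encode.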

“With 28 states and the District of Columbia, as well as over 20 other countries, having already introduced some form of legal cannabis laws, cannabis and the people who use it — whether medically or recreationally — are starting to see the negative stigma associated with its use decreased,” Paige Colen, chief operating officer of Potent Rope, told Digital Trends. “Whether the consumer is printing their own dosage, or a dispensary is doing the printing for direct sale to consumers, the consumers will fall into either the medical or recreational category. As far as what consumers can create, we are hoping to launch our dosage specific CAD designs by the end of the year, but there are also a growing number of 3D printers that offer open-source software. The world is [really] their oyster, or poodle, or Eiffel Tower!”

There is, of course, a challenge: because other filaments used in 3D printers — such as PLA or ABS — are not edible, contamination is a real possibility. However, Colen predicts that, as home 3D printing becomes a prominent and widespread solution for many household needs, having more than one printer at home isn’t out of the realm of possibility.

“We are currently in negotiations with a few state-licensed processors to make Potent Rope available in Maryland, Nevada, Massachusetts, and California in the coming months,” she said. “The legal cannabis industry is always changing and the need to produce Potent Rope on a state-by-state basis, due to the federal nature of the plant, is a whole feat in itself.”

Hey, it’s just another example of an innovative and savvy startup racing to meet the newly-legal demand for weed-based products!




6 Jul

I, Alexa: Should we give artificial intelligence human rights?


A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.

In Douglas Adams’ second Hitchhiker’s Guide to the Galaxy book, The Restaurant at the End of the Universe, he relates the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on — thereby “eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.”

The ethics question, Adams explains, comes when the intelligent elevator becomes bored of going up and down all day, and instead decides to experiment with moving from side to side as a “sort of existential protest.”

We don’t yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they’ve not bothered sharing them with us just yet. In fact, as we’ve documented time and again at Digital Trends, the field of AI is currently making a bunch of things possible we never thought realistic in the past — such as self-driving cars or Star Trek-style universal translators.

Have we also reached the point where we need to think about rights for AIs?

You’ve gotta fight for your right to AI

It’s pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. On a low resolution level, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches — and they’re getting bigger all the time.

Have we also reached the point where we need to think about rights for AIs?

Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms, designed to replicate the workings of the human brain, rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, or something like the so-called “OpenWorm” project, which sets out to recreate the connectome — the wiring diagram of the central nervous system — for the tiny hermaphroditic roundworm Caenorhabditis elegans, the only living creature whose connectome humanity has fully mapped.

In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.

Before we reach that goal, as AI surpasses animal intelligence, we’ll have to begin to consider how AIs compare to the kind of “rights” that we might afford animals through ethical treatment. Thinking that it’s cruel to force a smart elevator to move up and down may not turn out to be too far-fetched; a few years back English technology writer Bill Thompson wrote that any attempt to develop AI coded to not hurt us, “reflects our belief that an artificial intelligence is and always must be at the service of humanity rather than being an autonomous mind.”

The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of legal personhood?

This is not as ridiculous as it sounds, and nor does it suggest that AIs have “graduated” to a particular status in our society. Instead, it reflects the complex reality of the role that they play — and will continue to play — in our lives.

Smart tools in an age of non-smart laws

At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold a person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a fault, we blame the company that made the gun for the damage caused.

So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge’s proclamation that the suit be brought against Athlone rather than the batting bot because, “robots cannot be sued.”

This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.

Sean Ryan / Rapid City Journal

There are multiple differences between AI technologies of today (and certainly the future) and yesterday’s tech, however. Smart devices like self-driving cars or robots won’t just be used by humans, but deployed by them — after which they act independently of our instructions. Smart devices, equipped with machine learning algorithms, gather and analyze information by themselves, and then make their decisions. It may be difficult to blame the creators of the technology, too.

“Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”

As David Vladeck, a law professor at Georgetown University in Washington D.C., has pointed out in one of the few in-depth case studies looking at this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI’s components can make it tough to identify who the responsible party is. That counts for double when you’re talking about “black boxed” AI systems that are inscrutable to outsiders.

Vladeck has written: “Some components may have been designed years before the AI project had even been conceived, and the components’ designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”

It’s the corporations, man!

Awarding an AI the status of a legal entity wouldn’t be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.

Although it hasn’t been tested, Shawn Bayern, a law professor from Florida State University, has pointed out that technically AI may have already have this status due to the loophole that it can be put in charge of a limited liability company, thereby making it a legal person. Another for this to occur might be for tax reasons, should a proposal like Bill Gates’ “robot tax” ever be taken seriously on a legal level.

It’s not without controversy, however. Granting AIs this status would stop creators being held responsible if an AI somehow carries out an action its creator was not explicitly responsible for. But this could also encourage companies to be less diligent with their AI tools — since they could technically fall back on the excuse that it acted outside their wishes.

There is also no way to punish an AI, since punishments like imprisonment or death mean nothing.

“I’m not convinced that this is a good thing, certainly not right now,” Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. “My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities.”

It is, however, a compelling area of examination — because it doesn’t rely on any benchmarks being achieved in terms of ever-subjective consciousness.

“Today, corporations have legal rights and are considered legal persons, whereas most animals are not,” Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. “Even though corporations clearly have no consciousness, no personality and no capacity to experience happiness and suffering; whereas animals are conscious entities. Irrespective of whether AI develops consciousness, there might be economic, political and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen.”

Not science fiction for long

At present, these topics still smack of science fiction but, as Harari points out, they may not stay that way for long. Based on AIs’ usage in the real world, and the very real attachments people form with them, questions such as who is responsible if an AI causes a person’s death, or whether a human can marry his or her AI assistant, are surely ones we will grapple with during our lifetimes.

Universal Pictures

“The decision to grant personhood to any entity largely breaks down into two sub-questions,” Danaher said. “Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn’t be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity.”

In other words, we shouldn’t necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it’s vital that someone is held accountable for what they do.

At the same time, it’s a mistake to think of AI as having the same relationship with us that we enjoyed with previous non-smart technologies. There’s a learning curve here and, even if we’re not yet technologically at the point where we need to worry about cruelty to AIs, that doesn’t mean it’s the wrong question to ask.

So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?




6
Jul

I, Alexa: Should we give artificial intelligence human rights?


A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.

In Douglas Adams’ second Hitchhiker’s Guide to the Galaxy book, The Restaurant at the End of the Universe, he relates the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on — thereby “eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.”

The ethics question, Adams explains, comes when the intelligent elevator becomes bored of going up and down all day, and instead decides to experiment with moving from side to side as a “sort of existential protest.”

We don’t yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they’ve not bothered sharing them with us just yet. In fact, as we’ve documented time and again at Digital Trends, the field of AI is currently making a bunch of things possible we never thought realistic in the past — such as self-driving cars or Star Trek-style universal translators.

Have we also reached the point where we need to think about rights for AIs?

You’ve gotta fight for your right to AI

It’s pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. On a low resolution level, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches — and they’re getting bigger all the time.

Have we also reached the point where we need to think about rights for AIs?

Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms, designed to replicate the workings of the human brain, rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, or something like the so-called “OpenWorm” project, which sets out to recreate the connectome — the wiring diagram of the central nervous system — for the tiny hermaphroditic roundworm Caenorhabditis elegans, which remains the only fully-mapped connectome of a living creature humanity has been able to achieve.

In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.

Before we reach that goal, as AI surpasses animal intelligence, we’ll have to begin to consider how AIs compare to the kind of “rights” that we might afford animals through ethical treatment. Thinking that it’s cruel to force a smart elevator to move up and down may not turn out to be too far-fetched; a few years back English technology writer Bill Thompson wrote that any attempt to develop AI coded to not hurt us, “reflects our belief that an artificial intelligence is and always must be at the service of humanity rather than being an autonomous mind.”

The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of legal personhood?

This is not as ridiculous as it sounds, and nor does it suggest that AIs have “graduated” to a particular status in our society. Instead, it reflects the complex reality of the role that they play — and will continue to play — in our lives.

Smart tools in an age of non-smart laws

At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold a person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a faulty, we blame the company which made the gun for the damage caused.

So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge’s proclamation that the suit be brought against Athlone rather than the batting bot because, “robots cannot be sued.”

This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.

Sean Ryan / Rapid City Journal

There are multiple differences between AI technologies of today (and certainly the future) and yesterday’s tech, however. Smart devices like self-driving cars or robots won’t just be used by humans, but deployed by them — after which they act independently of our instructions. Smart devices, equipped with machine learning algorithms, gather and analyze information by themselves, and then make their decisions. It may be difficult to blame the creators of the technology, too.

“Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”

As David Vladeck, a law professor at Georgetown University in Washington D.C., has pointed out in one of the few in-depth case studies looking at this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI’s components can make it tough to identify who the party responsible is. That counts for double when you’re talking about “black boxed” AI systems that are inscrutable to outsiders.

Vladeck has written: “Some components may have been designed years before the AI project had even been conceived, and the components’ designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred.”

It’s the corporations, man!

Awarding an AI the status of a legal entity wouldn’t be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.

Although it hasn’t been tested, Shawn Bayern, a law professor from Florida State University, has pointed out that technically AI may have already have this status due to the loophole that it can be put in charge of a limited liability company, thereby making it a legal person. Another for this to occur might be for tax reasons, should a proposal like Bill Gates’ “robot tax” ever be taken seriously on a legal level.

It’s not without controversy, however. Granting AIs this status would stop creators being held responsible if an AI somehow carries out an action its creator was not explicitly responsible for. But this could also encourage companies to be less diligent with their AI tools — since they could technically fall back on the excuse that it acted outside their wishes.

There is also no way to punish an AI, since punishments like imprisonment or death mean nothing

“I’m not convinced that this is a good thing, certainly not right now,” Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. “My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities.”

It is, however, a compelling area of examination — because it doesn’t rely on any benchmarks being achieved in terms of ever-subjective consciousness.

“Today, corporations have legal rights and are considered legal persons, whereas most animals are not,” Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. “Even though corporations clearly have no consciousness, no personality and no capacity to experience happiness and suffering; whereas animals are conscious entities. Irrespective of whether AI develops consciousness, there might be economic, political and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen.”

Not science fiction for long

At present, these topics still smack of science fiction but, as Harari points out, they may not stay that way for long. Based on their usage in the real world, and the very real attachments that form with them, questions such as who is responsible if an AI causes a person’s death, or whether a human can marry his or her AI assistant, are surely ones that are going to be grappled with during our lifetimes.

Universal Pictures

“The decision to grant personhood to any entity largely breaks down into two sub-questions,” Danaher said. “Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn’t be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity.”

In other words, we shouldn’t necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it’s vital that someone is held accountable for what they do.

At the same time, it’s a mistake to think of AI as having the same relationship with us that we enjoyed with previous non-smart technologies. There’s a learning curve here, and even if we’re not yet at the technological point where we need to worry about cruelty to AIs, that doesn’t mean it’s the wrong question to ask.

So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?




6
Jul

OnePlus still wants you to care about the OnePlus 5’s DxOMark score


The OnePlus 5’s DxOMark score is about to be revealed, but the phone maker is in a tough spot.

Back in May, before we knew what the OnePlus 5 looked like or whether it had a dual camera, the company boasted of a partnership with DxOMark, a popular camera testing platform that companies like OnePlus (and HTC, Samsung, LG, and others) use as a way to promote their optical prowess.


Here’s what OnePlus said about the partnership at the time:

We’re happy to announce that we have teamed up with DxO to enhance your photography experience with our upcoming flagship, the OnePlus 5. DxO is perhaps most well-known for creating the defining photography benchmark, the DxOMark. They’ve got years of imaging experience and expertise, both for professional cameras and for smartphones.

Working alongside DxO, we’re confident the OnePlus 5 will be capable of capturing some of the clearest photos around.

Well, the phone’s June 20 announcement and release came and went, and nary a peep was heard from OnePlus or DxOMark about the so-called partnership. In the meantime, we’ve learned a lot about the dual camera setup and have pitted the OnePlus 5 against incumbents like the Galaxy S8 and the current DxO leader, the HTC U11 — and it doesn’t fare so well.

Still, OnePlus has two things to lean on: further improvements to the camera through software updates and a high score from DxOMark, which should be coming soon, according to the company’s Facebook page.

There are two possible scenarios for this impending announcement: either the OnePlus 5 will score higher than the HTC U11’s current score of 90 and top the charts, putting into question all of our subjective and objective remarks on the company’s 16-megapixel shooter, or the phone will earn a decent-but-not-great score, likely 86 or 87, which would put it on the same level as the Huawei P10 or iPhone 7. The second result is more likely, but it also wouldn’t look great for OnePlus, since the company went out of its way to optimize its camera setup for DxOMark’s test suite.

Of course, even the most stringent test suites have an element of subjectivity to them, since we all enjoy different visual aspects of camera sensors, lenses, and the software that powers them. But some claims are fairly easy to make: the Google Pixel’s low-light performance is objectively better than that of most, if not all, other phones on the market, and the HTC U11 does a fantastic job taking photos in almost any lighting condition.

Unfortunately, at this point in the game, as good as the camera can be, it would be hard to assert the same thing about the OnePlus 5.


6
Jul

Virgin Galactic to conduct first powered spaceship tests in 3 years


Virgin Galactic is determined to put its private space travel plans back on track following its tragic 2014 crash. Richard Branson tells Bloomberg that the company is about to resume powered test flights for the first time in close to 3 years, ending a series of glide-only tests that began in December. The company will fly in the atmosphere every 3 weeks, and plans to return to space (or at least, the edge of space) by November or December.

It’s an ambitious schedule given that Virgin only returned to the sky last September. However, it’ll be necessary if the company expects to fulfill its current vision. Branson reiterates that he hopes to start commercial passenger service by the end of 2018, and he’ll embark on his own flight by the middle of that year. The big question is whether or not Virgin will make that schedule in the first place. It’s clear that the outfit has learned some lessons in recent years, but there’s not much leeway in that timetable — it wouldn’t take much to delay the first paid flight by months.

Source: Bloomberg

6
Jul

Twitter left high-profile revenge porn live for 30 minutes


A verified Twitter user with 7.33 million followers shared nude photos of his ex-girlfriend, seemingly without her permission and as a form of public shaming — and the images remained live on the site for 30 minutes, Business Insider reports. The images were shared by Rob Kardashian, first on Instagram and later on Twitter, where they were viewable for half an hour before disappearing. It’s unclear whether Kardashian removed the images himself or if Twitter stepped in. His Twitter account is active and tweets related to the images are still live.

We asked a Twitter spokesperson how the images were allowed to stay online for 30 minutes — ages in a viral-hungry world — and who actually removed them in the end. The spokesperson said the company doesn’t comment on individual accounts, and pointed us to the following passage of its hateful-conduct policy: “The consequences for violating our rules vary depending on the severity of the violation and the person’s previous record of violations. For example, we may ask someone to remove the offending Tweet before they can Tweet again. For other cases, we may suspend an account.”

Kardashian is based in California, where a man was recently sentenced to 18 years in prison for operating a website trafficking in revenge porn.

The nude and body-shaming images were live on Instagram before making their way to Twitter, but Instagram shut down Kardashian’s account relatively swiftly. “At Instagram we value maintaining a safe and supportive space for our community and we work to remove reported content that violates our guidelines,” a spokesperson told Engadget.

Twitter has publicly struggled with its response to harassment on the platform, often failing to address users’ concerns in a timely or productive manner. But, it isn’t alone: Facebook, for example, is also still figuring out how to deal with violence and harassment in users’ live videos and elsewhere. Facebook recently dealt with a surge in revenge porn and cases of sexual extortion, investigating 54,000 potential violations and disabling more than 14,000 accounts in January alone.

Via: Business Insider

6
Jul

Google’s next DeepMind AI research lab opens in Canada


Google’s DeepMind artificial intelligence team has been based in the UK ever since it was acquired in 2014. However, it’s finally ready to branch out — just not to the US. DeepMind has announced that its first international research lab is coming to the Canadian prairie city of Edmonton, Alberta later in July. A trio of University of Alberta computer science professors (Richard Sutton, Michael Bowling and Patrick Pilarski) will lead the group, which includes seven more AI veterans. But why not an American outpost?

As Recode observes, you can chalk it up to a combination of familiarity and political considerations. There are about a dozen University of Alberta grads already working at DeepMind, and Sutton was the firm’s very first advisor. It only makes sense to set up shop where you already have close allies, especially when the school is at the forefront of AI research.

And, simply put, the Canadian government is friendlier to AI research than the current American leadership. The country has been courting AI scientists with $125 million in funding (on top of existing provincial efforts) at the same time as the Trump administration is proposing major cuts to scientific research. If you were looking for financial support, where would you go? It also wouldn’t be surprising if Google’s opposition to the US travel ban played a role: the company believes the ban limits access to talent, and it won’t face that restriction in Canada.

We’d add that the country is no stranger to big companies establishing AI-related research labs. Uber just recently opened a self-driving car lab in Toronto, while Apple and BlackBerry both have autonomous driving facilities in Ottawa. This is really just the latest in a string of major AI coups for Canada — while there’s still plenty of active American research, it’s almost surprising when a tech giant doesn’t focus its attention north of the border.

Via: Recode

Source: University of Alberta, DeepMind