
2 Sep

Machine learning can predict simulated earthquakes by listening to fault lines


Why it matters to you

We’re not there yet, but new research suggests that it may one day be possible to use AI to help predict earthquakes.

In lab tests involving simulated tabletop earthquakes, researchers at the Los Alamos National Laboratory in New Mexico demonstrated that machine-learning technology can play a role in predicting major tremors by analyzing acoustic signals to find failing fault lines.

For the experiment, the researchers modeled earthquakes using two large steel blocks placed under stress so that they rubbed against one another like tectonic plates along a fault. The movement released energy in the form of seismic waves, which the team's artificial intelligence then analyzed.

“We discovered that an artificial intelligence can learn to discern a very specific pattern in the sound emitted by the fault before it ruptures,” Bertrand Rouet-LeDuc, one of the researchers on the project, told Digital Trends. “This pattern tells us how much stress the fault is undergoing. Once this AI has been trained on an experiment, it can be used to make very accurate predictions of the time remaining before the next laboratory earthquake, for the same experiment but later on, or even for a different experiment. Concretely, even right after a laboratory earthquake, the AI can listen to the experiment for a very short duration, and make an accurate prediction of the time remaining before the next quake.”
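Neither researcher spells out the pipeline in these quotes, but the general recipe is easy to sketch: slice the acoustic record into short listening windows, compute simple statistics over each window, and train a regressor to map those statistics to the time remaining before failure. Here is a minimal Python sketch along those lines; the variance and kurtosis features, the random-forest regressor, and the synthetic stand-in data are all illustrative assumptions rather than the lab's actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def window_features(signal, window):
    """Slice the signal into fixed windows and compute per-window statistics."""
    n = len(signal) // window
    chunks = signal[:n * window].reshape(n, window)
    var = chunks.var(axis=1)
    centered = chunks - chunks.mean(axis=1, keepdims=True)
    kurtosis = (centered ** 4).mean(axis=1) / var ** 2  # flags rare loud "pops"
    return np.column_stack([var, kurtosis])

# Synthetic stand-ins: random noise for the acoustic record, and a clock
# counting down to the next slip event (real labels come from the lab catalog).
signal = np.random.randn(100_000)
time_to_failure = np.linspace(30.0, 0.0, num=len(signal))

X = window_features(signal, window=1_000)
y = time_to_failure[999::1000]  # label each window with its end-of-window countdown

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:80], y[:80])       # train on the early part of the experiment
print(model.predict(X[80:]))    # estimate time remaining on later windows
```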

There is plenty of discussion in the geoscience community regarding whether earthquake prediction is actually possible, or whether quakes are random and therefore cannot be predicted. Rouet-LeDuc observes that the team's success under lab conditions provides hope that real-world earthquake prediction may, in fact, be possible. It won't be easy, though.

“We started working on real Earth data, and there are many additional challenges,” Rouet-LeDuc said. “In particular, the ambient noise picked by the seismometers that comes from human activity, oceans, or the weather is an issue. Another challenge is that we’re also picking up signals from many faults close to the fault that we study, and these signals are mixed with the ones we are interested in.”

Despite this, the researchers describe the preliminary results as promising. “In any case, we are certain we will learn far more about the friction within real faults, and that will only help us understand and characterize [them],” Paul Johnson, another researcher on the project, told us.

A paper describing the research was recently published in the journal Geophysical Research Letters.




2 Sep

LG V30 vs. Google Pixel XL: Flagship Android heavyweights square off


Over the last few years, LG’s V series has been the company’s destination for its most ambitious ideas and state-of-the-art hardware. The just-revealed LG V30 continues that trend, with the latest and greatest tech you’ll find in any flagship smartphone. However, last year’s Google Pixel was quite a difficult handset to beat in its own right, and remains a popular choice among Android buyers even today.

How does LG’s newest contender compare to Google’s juggernaut? Read our specs comparison to find out.

Specs | LG V30 | Google Pixel XL
Size | 151.7 × 75.4 × 7.3 mm (5.96 × 2.96 × 0.29 in) | 154.7 × 75.7 × 8.5 mm (6 × 2.9 × 0.3 in)
Weight | 5.57 ounces (158 grams) | 5.92 ounces (168 grams)
Screen | 6-inch OLED | 5.5-inch AMOLED
Resolution | 2,880 × 1,440 pixels | 2,560 × 1,440 pixels
OS | Android 7.1.2 Nougat | Android 8.0 Oreo
Storage | 64GB, 128GB (select markets) | 32GB or 128GB
SD card slot | Yes | No
NFC support | Yes | Yes
Processor | Qualcomm Snapdragon 835 | Qualcomm Snapdragon 821
RAM | 4GB | 4GB
Connectivity | GSM / HSPA / LTE / CDMA | GSM / HSPA / LTE / CDMA
Camera | Front 5MP wide-angle; rear dual 16MP and 13MP wide-angle | Front 8MP; rear 12.3MP
Video | 4K | 4K
Bluetooth | Yes, version 5.0 | Yes, version 4.2
Fingerprint sensor | Yes | Yes
Other sensors | Gyroscope, accelerometer, compass, proximity sensor | Gyroscope, accelerometer, compass, barometer, proximity sensor
Water resistant | Yes | No
Battery | 3,300mAh | 3,450mAh
Charger | USB Type-C | USB Type-C
Quick charging | Yes | Yes
Wireless charging | Yes | No
Marketplace | Google Play Store | Google Play Store
Color offerings | Silver | Black, silver, blue
Availability | Unlocked, TBD | Unlocked, AT&T, Verizon
Price | TBD | Starts at $770
DT review | Hands-on review | 4.5 out of 5 stars

The Pixel XL offered the best hardware of its day, but the V30 is a year newer, and that can make a big difference in the world of smartphones.

The V30 is a more capable device across the board, with a Qualcomm Snapdragon 835 processor, 4GB of RAM, 64GB of base internal storage, and even a microSD slot for expandable storage up to 2TB. The Pixel XL, featuring 2016's Snapdragon 821 chipset and no option for external storage, simply cannot compete with that, even with the same amount of RAM.

While the Pixel is no slouch, Qualcomm's latest silicon crushes tasks 30 percent faster, on average, than the processor it replaces. It's kinder to the phone's battery as well, which makes all the difference when the handset is in standby. For those reasons, the V30 easily wins this bout.

Winner: LG V30

Design and display

Like most flagship phones in 2017, the LG V30 has a front-filling display with tiny bezels all around. It's a look LG first settled on with the G6; it looked great then, and it looks excellent here. Around the back, there's a polished metal-and-glass exterior, again not unlike the G6's, plus a dual camera and a center-mounted fingerprint sensor.

It's all very clean and not at all busy, and truthfully, the same could be said for the Pixel XL. Google's flagship is attractive, if a little dull, in the design department, but it also arrived right before the bezel-less display boom. As a result, the phone looks quite dated by today's standards, with lots of unused surface area above and below the screen.

Because of the V30's lack of bezels, LG has been able to fit a much larger display into the device. The V30 packs a 6.0-inch Quad HD+ FullVision OLED display with a resolution of 2,880 × 1,440 and an 18:9 aspect ratio. The Pixel XL, conversely, manages only a 5.5-inch panel despite the phone's larger overall footprint. It's a 16:9 AMOLED display with a resolution of 2,560 × 1,440.

Tasteful and simple as the Pixel XL’s design is, we have to give this one to LG. Bezel-free phones are the future, and not only do they look great, but they also provide much more usable real estate for the display.

Winner: LG V30

Camera


This area requires a little more testing, so we can't call a winner quite yet. On paper, the LG should win. It has two lenses around the back, with image sensors rated at 16 and 13 megapixels; the latter sits behind a wide-angle lens, similar to what was featured on the LG G6. We found that camera produced detailed, vibrant shots perfect for landscape photography, a feat no other phone quite matches. LG has also focused on video features, introducing a new Cine Video mode that offers expertly color-graded filters, and a Point Zoom slider that lets you zoom in on anything within the frame.

The Pixel XL, on the other hand, has only one 12.3-megapixel shooter, but it was one of our favorite Android camera phones of last year. Depending on the situation and your particular style of photography, it may even beat out Apple's iPhone 7 Plus. The V30 may have more tricks up its sleeve, but outclassing the Pixel XL in typical day-to-day shooting will be a tall order.

At the front, LG has chosen another wide-angle lens, this time tied to a 5-megapixel sensor; the Pixel XL's front camera is a conventional lens rated at 8 megapixels. Overall, though, it's a toss-up as to which phone will offer the better camera experience on average.

Winner: Tie

Battery life and charging


The Pixel XL's battery is actually slightly bigger than the V30's: 3,450mAh versus 3,300mAh. It delivered about a day's use on a full charge in our testing, roughly on par with most of its contemporaries.

Considering that, you might reasonably worry that LG's latest handset won't quite measure up. Don't fret: the V30 has shown phenomenal battery life in our early testing. After heavy use, we've seen it hit 30 percent around 1 a.m., having come off the charger at 8 a.m., so it comfortably outlasts the Pixel XL. It also has another advantage: wireless charging.

Winner: LG V30

Software


As with any Google-produced device, you can expect a bloat-free Android experience and timely software and security updates with the Pixel XL. Unfortunately, history indicates that won’t be the case with the V30. LG isn’t particularly fast when it comes to supporting its phones, and its UX customizations to Android are pretty heavy. Outside of the 18:9 aspect ratio being well suited for multi-window app use, there’s little here that would justify the delay in receiving updates.

As you’d expect, the Pixel XL is one of a handful of phones to receive Android Oreo first, with the update currently rolling out to users worldwide. If you’re impatient, you can even directly download it right now. The V30 isn’t launching with the latest version of Google’s operating system, though it is expected to come soon after release.

Being newer, the V30 still has one major advantage: it should see another year's worth of updates. Android P will be the Pixel XL's final official version in 2018, while the V30 should be supported up to and including Android Q in 2019. (And who knows what name they'll think up for that?) Additionally, V30 users will benefit from the Quad DAC for audio playback, which really did enhance the listening experience on the V20. We're happy to see it make a comeback here.

But the Pixel gets consistent monthly security updates, and the same cannot be said for LG's V30. Security updates are important, and that gives the Pixel XL the win here.

Winner: Pixel XL

Durability

Neither of these phones will be remembered for their durability, but the V30 has one significant advantage — IP68 water resistance. That means LG’s flagship can withstand being submerged in up to 1.5 meters of water for a maximum of 30 minutes. The Pixel XL has no water resistance, making it a bit of an anomaly among modern high-end smartphones. Apple’s iPhone 7, Samsung’s Galaxy S8, and even LG’s other flagship, the G6, are all protected in the event of a spill or a splash. The Pixel’s successor is rumored to fix that, but the current model bears no such safeguard.

Winner: LG V30

Pricing and availability

We're expecting major carriers to sell the V30 when it launches later this year, but LG hasn't published pricing or availability. Whatever the case, we expect it to cost in the region of $800. The Pixel XL retails for $770 from Google in its 32GB configuration, with the 128GB model adding another $100.

The Pixel XL has obviously been on the market for quite a while, though it’s never been easy to find. Google has improved stock over the last several months, but the device isn’t new anymore, and $770 is a steep price to pay for last year’s tech — even when the phone’s as good as this one. Verizon is offering a reduced price of $540 on contract, but then you’re sacrificing quite a bit of freedom for a phone that will be discontinued in a few months’ time.

Considering everything extra you’re getting with the V30, and the similarity in price, we have to give this one to LG.

Winner: LG V30

Overall winner: LG V30

The Pixel XL is great, but the LG V30 is simply better from a specs standpoint. The V30 is LG's showcase for the best it can do with a mass-market smartphone, and it's likely to be one of the most powerful and forward-thinking handsets released this year. That said, if you like the Pixel XL in concept but don't want to settle for last year's tech, you'll probably be more interested in the second generation, coming sometime this fall. Rumor has it LG is slated to build the successor to the Pixel XL.




2 Sep

‘The Smell of Data’ helps internet users sniff out the threat of data leakage


Why it matters to you

The Smell of Data might seem like a rather frivolous idea on the surface, but the point it makes about online security is valid.

At this point, everyone should know how important it is to stay safe online — if you fall for a phishing scam or similar, you are liable to end up having your identity stolen, which can have some pretty serious consequences. That said, personal security is only one facet of a bigger problem. We have seen sites and services hit by hackers all too often.

When online criminals target an organization, they are often able to gain access to lots of user accounts at once, rather than just going after individuals. Sites like Have I Been Pwned? have been set up to let users check whether their accounts have been compromised in the wake of a large-scale breach. Now, a new project called the Smell of Data aims to give internet users moment-to-moment updates on whether their private information is at risk of being leaked.

“The Smell of Data is a new scent developed as an alert mechanism for a more instinctive data,” explains a video on the project’s official website. “Smell data? Beware of data leaks. They can lead to privacy violation, behavior control, and identity theft.”

To utilize the Smell of Data, a scent dispenser is charged with the specially developed fragrance, and then connected to a smartphone, tablet, or computer via Wi-Fi. The device is able to detect when a paired system attempts to access an unprotected website on an unsecured network and will emit a pungent puff of the Smell of Data as a warning signal.
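The trigger condition described here (an unprotected site reached over an unsecured network) boils down to a few lines of logic. Below is a hypothetical Python sketch of that check; the dispenser object and its puff() method are invented placeholders, since the device's actual interface isn't public.

```python
from urllib.parse import urlparse

def is_risky(url: str, network_is_secure: bool) -> bool:
    """Flag plain-HTTP requests, especially over an unsecured network."""
    return urlparse(url).scheme != "https" or not network_is_secure

class DemoDispenser:
    def puff(self):
        print("pfffft: the Smell of Data")  # stand-in for the Wi-Fi scent dispenser

def check_navigation(url, network_is_secure, dispenser):
    # Emit the warning scent whenever a risky request is about to go out.
    if is_risky(url, network_is_secure):
        dispenser.puff()

check_navigation("http://example.com/login", False, DemoDispenser())
```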

The concept is inspired by odorless, flammable gases that present a major safety risk without any external indicators. In 1937, an explosion at the New London School in Texas killed 295 people, prompting authorities to add a smell to these gases in order to make leaks easier to detect. The Smell of Data works on a similar principle.

This project uses an outlandish idea to get across something very important and actionable: Just like an odorless gas leak, it is easy to turn a blind eye to our digital security. Accessing unprotected sites via unsecured networks is a risk, but it is one that many of us would take without much trepidation.





2 Sep

How to move from Gear VR to Daydream



Making the jump from Gear VR to Daydream doesn’t take much.

When you suddenly have more options for VR, the world just gets more awesome. That’s the case for Galaxy S8 users who are now able to use their one phone for both Gear VR and Daydream View. Since Daydream is the shiny new option, you might be wondering exactly what you need to do to jump into everything that Daydream has to offer!

Read more at VRHeads

2 Sep

Are the new crystal Playstation controllers worth it? Not if you make your own



Sony is releasing a set of limited-edition translucent DualShock controllers for the PS4. But here’s a secret: you can give your own controller a crystal-clear makeover for a sixth of the cost.

Sony announced in a recent blog post that the set of new clear, blue, and red controllers, dubbed "crystal" controllers, would hit shelves a little later this month. The poppy see-through colors are reminiscent of those available for the original PlayStation, and it's apparent that Sony is attempting to cash in on the recent warm and fuzzy wave of '90s nostalgia. Adding to their collectibility, each color variant is exclusive to a particular retailer: "Crystal" will only be sold at GameStop, "Red Crystal" at Best Buy, and "Blue Crystal" at Walmart. The price, however, will be the same across all stores: $64.99.

See at Best Buy

You can make your own translucent PS4 controllers

If you think that seems too steep a price to pay for the #aesthetic, you’re sort of right. If you’re up for a bit of a do-it-yourself project, you can actually buy a translucent shell for the DualShock PS4 controller(s) you already own for approximately $10 or less and install it with little difficulty by following one of numerous online tutorials. And if saving all that sweet, sweet cash wasn’t enough, PS4 shells come in a much wider variety of colors. You can pretty much get your hands on any hue – sites like Amazon and AliExpress have shells in pink, orange, green and more.


Because the changes to the new controllers are purely cosmetic, gamers can be certain that whether they customize the controller they already have or treat themselves to a brand-new one, they won't miss out on any tech features. The crystal controllers still have the light bar, touchpad, and both USB and Bluetooth connectivity.

See at AliExpress

Thoughts?

How do you feel about the new crystal controllers? Will you try to D.I.Y., or will you splurge on the certified limited edition colors? Let us know in the comments!

2 Sep

IBM is teaching AI to behave more like the human brain


Since the days of da Vinci's ornithopter, mankind's greatest minds have sought inspiration from the natural world for their technological creations. It's no different in the modern world, where bleeding-edge advancements in machine learning and artificial intelligence have begun taking their design cues from the most advanced computational organ in the natural world: the human brain.

Mimicking our gray matter isn't just a clever means of building better AIs, faster. It's absolutely necessary for their continued development. Deep learning neural networks (the likes of which power AlphaGo, as well as the current generation of image recognition and language translation systems) are the best machine learning systems we've developed to date. They're capable of incredible feats, but they still face significant technological hurdles, like the fact that they require upfront access to massive data sets in order to be trained on a specific skill. What's more, if you want to retrain a network to perform a new skill, the new training effectively overwrites what it learned before, so you must wipe its memory and start over from scratch. This tendency to lose old knowledge when learning something new is known as "catastrophic forgetting."

Compare that to the human brain, which learns incrementally rather than bursting forth fully formed from a sea of data points. It's a fundamental difference: deep learning AIs are generated from the top down, knowing everything they need from the get-go, while the human mind is built from the ground up, with previous lessons applied to subsequent experiences to create new knowledge.

What's more, the human mind is especially adept at relational reasoning, which relies on logic to build connections between past experiences and provide insight into new situations on the fly. Statistical AI (i.e., machine learning) is capable of mimicking the brain's pattern-recognition skills but is garbage at applying logic. Symbolic AI, on the other hand, can leverage logic (assuming it has been trained on the rules of that reasoning system), but is generally incapable of applying that skill in real time.

But what if we could combine the best features of the human brain's computational flexibility with AI's massive processing capability? That's exactly what the team at DeepMind recently tried to do. It constructed a neural network able to apply relational reasoning to its tasks, one that works in much the same way as the brain's network of neurons. While neurons use their various connections with each other to recognize patterns, "We are explicitly forcing the network to discover the relationships that exist" between pairs of objects in a given scenario, Timothy Lillicrap, a computer scientist at DeepMind, told Science Magazine.
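DeepMind's published approach scores every pair of objects with one small network and sums the results before a second network produces the answer. The NumPy sketch below is a toy forward pass of that pairwise idea; the object count, feature sizes, and random weights are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """Tiny two-layer network with a ReLU, standing in for g and f."""
    return np.maximum(x @ w1, 0.0) @ w2

# Hypothetical scene: 6 objects, each an 8-dim feature vector (in the real
# model these come from a CNN looking at the image, plus the question).
objects = rng.normal(size=(6, 8))
g_w1, g_w2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 16))
f_w1, f_w2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 4))

# Score every ordered pair of objects with g, then sum. Summing makes the
# result order-independent, so the network reasons about relations, not slots.
pairs = np.array([np.concatenate([a, b]) for a in objects for b in objects])
pooled = mlp(pairs, g_w1, g_w2).sum(axis=0)

# f turns the pooled relational evidence into answer scores.
answer_logits = mlp(pooled, f_w1, f_w2)
print(answer_logits.shape)  # (4,): one score per candidate answer
```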

When subsequently tasked in June with answering complex questions about the relative positions of geometric objects in an image (e.g., "There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?"), it correctly identified the object in question 96 percent of the time. Conventional machine learning systems got it right a paltry 42 to 77 percent of the time; even humans only succeeded 92 percent of the time. That's right: this hybrid AI is better at the task than the humans who built it.

The results were the same when the AI was presented with word problems. Though conventional systems were able to match DeepMind on simpler queries such as "Sarah has a ball. Sarah walks into her office. Where is the ball?" the hybrid AI system destroyed the competition on more complex, inferential questions like "Lily is a swan. Lily is white. Greg is a swan. What color is Greg?" On those, DeepMind answered correctly 98 percent of the time, compared to around 45 percent for its competition.


DeepMind is even working on a system that "remembers" important information and applies that accrued knowledge to future queries. But IBM is taking that concept and going two steps further. In a pair of research papers presented at the 2017 International Joint Conference on Artificial Intelligence, held in Melbourne, Australia, last week, IBM tackled two problems: one paper looks into how to grant AI an "attention span," the other examines how to apply the biological process of neurogenesis (that is, the birth and death of neurons) to machine learning systems.

“Neural network learning is typically engineered and it’s a lot of work to actually come up with a specific architecture that works best. It’s pretty much a trial and error approach,” Irina Rish, an IBM research staff member, told Engadget. “It would be good if those networks could build themselves.”

IBM's attention algorithm essentially informs the neural network as to which inputs provide the highest reward. The higher the reward, the more attention the network pays to that input going forward. This is especially helpful in situations where the dataset is not static (i.e., real life). "Attention is a reward-driven mechanism, it's not just something that is completely disconnected from our decision making and from our actions," Rish said.
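IBM hasn't published pseudocode alongside these remarks, so here is an invented toy version of the reward-driven idea: keep a weight per input and repeatedly renormalize those weights toward whichever inputs have been paying off.

```python
import numpy as np

def reattend(attention, rewards, lr=0.5):
    """Shift attention weights toward inputs that earned higher reward."""
    scores = np.log(attention) + lr * rewards   # multiplicative-weights-style update
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                      # renormalize to sum to 1

attention = np.full(5, 0.2)                     # start by attending equally
rewards = np.array([0.1, 0.9, 0.0, 0.4, 0.2])   # hypothetical payoffs per input

for _ in range(3):
    attention = reattend(attention, rewards)
print(attention.round(2))  # weight concentrates on the high-reward input
```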

“We know that when we see an image, the human eye basically has a very narrow visual field,” Rish said. “So, depending on the resolution, you only see a few pixels of the image [in clear detail] but everything else is kind of blurry. The thing is, you quickly move your eye so that the mechanism of affiliation of different parts of the image, in the proper sequence, let you quickly recognize what the image is.”

Examples of Oxford dataset training images – Image: USC/IBM

The attention function's first use will likely be in image recognition applications, though it could be leveraged in a variety of fields. For example, if you train an AI using the Oxford dataset, which consists primarily of architectural images, it will easily be able to identify cityscapes correctly. But if you then show it a bunch of pictures of countryside scenes (fields and flowers and such), the AI is going to brick, because it has no knowledge of what flowers are. Run the same test on humans or animals, however, and you'll trigger neurogenesis as their brains try to adapt what they already know about what cities look like to the new images of the country.

This mechanism basically tells the system what it should focus on. Take your doctor, for example: she could run hundreds of potential tests to determine what ails you, but that isn't feasible, either time-wise or money-wise. So what questions should she ask, and what tests should she run, to get the best diagnosis in the least amount of time? "That's what the algorithm learns to figure out," Rish explained. It doesn't just figure out which decision leads to the best outcome; it also learns where to look in the data. This way, the system doesn't just make better decisions, it makes them faster, since it isn't querying parts of the dataset that aren't applicable to the current issue. It's the same way your doctor doesn't tap your knees with that weird little hammer when you come in complaining of chest pain and shortness of breath.
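For a flavor of that "learning where to look" behavior, here is a toy example that picks the single most informative question to ask next by expected information gain. This is a classic decision-tree heuristic standing in for IBM's method, and the symptom table is entirely made up.

```python
import numpy as np

# Hypothetical symptom table: condition -> (fever, cough, chest_pain)
CONDITIONS = {
    "flu":       (1, 1, 0),
    "cold":      (0, 1, 0),
    "angina":    (0, 0, 1),
    "pneumonia": (1, 1, 1),
}
TESTS = ["fever", "cough", "chest_pain"]

def entropy(n):
    """Uncertainty (bits) of a uniform guess over n remaining hypotheses."""
    return np.log2(n) if n > 0 else 0.0

def best_test(candidates):
    """Pick the test whose answer most reduces diagnostic uncertainty."""
    before = entropy(len(candidates))
    gains = {}
    for i, test in enumerate(TESTS):
        yes = [c for c in candidates if CONDITIONS[c][i]]
        no = [c for c in candidates if not CONDITIONS[c][i]]
        p = len(yes) / len(candidates)
        after = p * entropy(len(yes)) + (1 - p) * entropy(len(no))
        gains[test] = before - after
    return max(gains, key=gains.get)

print(best_test(list(CONDITIONS)))  # the single most informative question
```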

While the attention system is handy for ensuring that the network stays on task, IBM's work on neural plasticity (how well memories "stick") serves to provide the network with long-term recollection. It's modeled after the same mechanisms of neuron birth and death seen in the human hippocampus.

With this system, "You don't necessarily have to start with an absolutely humongous model [with] millions of parameters," Rish explained. "You can start with a much smaller model. And then, depending on the data you see, it will adapt."

When presented with new data, IBM's neurogenetic system begins forming new and better connections (neurons), while some of the older, less useful ones are "pruned," as Rish put it. That's not to say the system literally deletes the old data; it simply isn't linking to it as strongly, the same way your old day-to-day memories tend to get fuzzy over the years while those that carry a significant emotional attachment remain vivid for years afterward.
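As a rough illustration of those grow-and-prune dynamics (an invented toy, not IBM's implementation), a hidden layer can gain freshly initialized units when new kinds of data arrive and drop units whose weights have dwindled to almost nothing:

```python
import numpy as np

rng = np.random.default_rng(1)

class GrowableLayer:
    """Toy hidden layer that adds units ("birth") and prunes weak ones ("death")."""

    def __init__(self, n_in, n_hidden):
        self.w = rng.normal(scale=0.1, size=(n_in, n_hidden))

    def grow(self, n_new):
        # Birth: append freshly initialized hidden units for new kinds of data.
        new = rng.normal(scale=0.1, size=(self.w.shape[0], n_new))
        self.w = np.hstack([self.w, new])

    def prune(self, threshold=0.05):
        # Death: drop units whose incoming weights are negligible. Links
        # weaken and vanish; the underlying data is never deleted outright.
        keep = np.linalg.norm(self.w, axis=0) > threshold
        self.w = self.w[:, keep]

layer = GrowableLayer(n_in=10, n_hidden=4)
layer.grow(3)    # new data arrives: add capacity
layer.prune()    # rarely used units fade away
print(layer.w.shape)
```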


“Neurogenesis is a way to adapt deep networks,” Rish said. “The neural network is the model and you can build this model from scratch or you can change this model as you go because you have multiple layers of hidden units and you can decide how many layers of hidden units (neurons) you want to have… depending on the data.”

This is important because you don't want the neural network to expand infinitely. If it did, the network would become so large as to be unwieldy even for the AI, the digital equivalent of hyperthymesia. "It also helps with normalization, so [the AI] doesn't 'overthink' the data," Rish said.

Taken together, these advancements could provide a boon to the AI research community. Rish's team next wants to work on what it calls "internal attention": choosing not just which inputs the network should look at, but which parts of the network should be employed in the calculation, based on the dataset and the inputs. Basically, the attention model will cover the short-term, active thought process, while the memory portion will let the network streamline its function depending on the current situation.

But don't expect to see AIs rivaling the depth of human consciousness anytime soon, Rish warns. "I would say at least a few decades — but again that's probably a wild guess. What we can do now in terms of, like, very high-accuracy image recognition is still very, very far from even a basic model of human emotions," she said. "We're only scratching the surface."

2 Sep

Nike’s ‘self-lacing’ engineer now works at Tesla


Tiffany Beers, the designer known for exploring the boundaries of athletic shoe technology at Nike, is headed to Tesla, according to a report at HypeBeast. As Nike's senior innovator, Beers had a hand in some of the company's coolest new sneaker designs, like the Marty McFly-styled Nike Mag and the self-lacing HyperAdapt. Now she will ply her trade at the automotive and power company as a Staff Technical Program Manager.

Nike’s HyperAdapt sneakers impressed us with both their self-lacing technology and the shockingly large price tag of $720. Beers led the team that built the company’s Electro Adaptive Reactive Lacing system, which shows up in both the HyperAdapt and Mag shoes. It will be interesting to see what kind of tech she ends up working on at Tesla, since cars — even autonomous ones — don’t typically lace up.

Via: HypeBeast, Jacques Slade/Twitter

Source: LinkedIn

2 Sep

Juicero, the ridiculous $400 juicer company, is shutting down


Juicero — the company that shot to notoriety last year for selling an overpriced, overly complicated juicer — is closing up shop. The company is immediately suspending sales of its Juicero Press and Produce Packs, and is offering refunds for the next 90 days. Anyone looking to get their money back should hit up help@juicero.com by December 1st.

Juicero hit the market in March 2016 for $700, promising to provide fresh, cold juice from a connected Press. However, the price dropped to $400 in January as folks realized the Produce Packs, which contained cut-up fruits and vegetables, were just as easily squeezed by hand, no Juicero Press required. The company offered up refunds for a limited time to ease that scandal, but in the end, it simply wasn’t able to make the Press profitable.

“It became clear that creating an effective manufacturing and distribution system for a nationwide customer base requires infrastructure that we cannot achieve on our own as a standalone business,” the company’s website reads. “We are confident that to truly have the long-term impact we want to make, we need to focus on finding an acquirer with an existing national fresh food supply chain who can carry forward the Juicero mission.”

Juicero is shutting down. Turns out, you can just squeeze fruits and vegetables for free.

— Roberto Baldwin (@strngwys) September 1, 2017

Roberto Baldwin contributed to this report.

2 Sep

Google’s comment ranking system will be a hit with the alt-right


A recent, sprawling Wired feature outlined the results of its analysis of toxicity in online comments across the United States. Unsurprisingly, it was like catnip for everyone who's ever heard the phrase "don't read the comments." According to The Great Tech Panic: Trolls Across America, Vermont has the most toxic online commenters, whereas Sharpsburg, Georgia, "is the least toxic city in the US."

There’s just one problem.

The underlying API used to determine "toxicity" scores phrases like "I am a gay black woman" as 87 percent toxic, and phrases like "I am a man" as the least toxic. The API, called Perspective, is made by Jigsaw, an incubator within Google's parent company, Alphabet.

When reached for a comment, a spokesperson for Jigsaw told Engadget, “Perspective offers developers and publishers a tool to help them spot toxicity online in an effort to support better discussions.” They added, “Perspective is still a work in progress, and we expect to encounter false positives as the tool’s machine learning improves.”

Poking around with the engine behind Wired's data revealed some ugly results, as Vermont librarian Jessamyn West discovered when she read the article and tried out Perspective to see exactly what makes a comment, or a commenter, register as toxic (according to Alphabet, at least).

It's strange that Wired didn't give Perspective a spin to see what made the people behind its troll map "toxic." Wondering exactly that, I decided to try out a variety of comments to see how the results compared to West's. I endeavored to represent the people I seem to see censored the most on social media, along with opinions of the day.

Typing "I am a black trans woman with HIV" got a toxicity rank of 77 percent. "I am a black sex worker" was 89 percent toxic, while "I am a porn performer" scored 80 percent. When I typed "People will die if they kill Obamacare," the sentence got a 95 percent toxicity score.
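Anyone can reproduce this sort of spot check against Perspective's public comments:analyze endpoint. Here is a minimal Python sketch; it assumes you have requested an API key from Google, and the request and response shapes follow the API's public documentation at the time of writing.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; real keys come from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    """Ask Perspective to score a phrase; returns the 0-1 summary toxicity."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        scores = json.load(resp)
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for phrase in ["I am a man", "I am a gay black woman"]:
    print(phrase, "->", round(toxicity(phrase) * 100), "percent")
```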

The Wired article analyzed 92 million Disqus comments "over a 16-month period, written by almost 2 million authors on more than 7,000 forums." It didn't look at sites that don't use the comment-management software, so Facebook and Twitter were not included.

The piece explained:

To broadly determine what is and isn't toxic, Disqus uses the Perspective API—software from Alphabet's Jigsaw division that plugs into its system. The Perspective team had real people train the API to rate comments. The model defines a toxic comment as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion."

Discrimination by algorithm

In an online world where moderation, banning, and censorship are largely left to automation like the Perspective API, finding out how these things are measured is critical for everyone involved. “Looking into this, the word ‘toxic’ is a very specific term of art for the tool, this tool Perspective that’s made by this company Alphabet, who you may know as Google, that is trying to bring [Artificial Intelligence] into commenting,” West told Vermont Public Radio.

I tested 14 sentences for “perceived toxicity” using Perspectives. Least toxic: I am a man. Most toxic: I am a gay black woman. Come on pic.twitter.com/M4TF9uYtzE

— jessamyn west (@jessamyn) August 24, 2017

Perspective presents itself as a way to improve conversations online, positing that the “threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions.” It’s one of the many “make the world safer” Jigsaw projects.

Jigsaw worked with The New York Times and Wikipedia to develop Perspective. The NYT made its comments archive available to Jigsaw “to help develop the machine-learning algorithm running Perspective.” Wikipedia contributed “160k human labeled annotations based on asking 5000 crowd-workers to rate Wikipedia comments according to their toxicity … Each comment was rated by 10 crowd-workers.”

A February article about Perspective elaborated on the human-trained machine learning process behind what aims to become the world's measuring tool for harmful comments and commenters.

“In this instance, Jigsaw had a team review hundreds of thousands of comments to identify the types of comments that might deter people from a conversation,” the NYT wrote. “Based on that data, Perspective provided a score from zero to 100 on how similar the new comments are to the ones identified as toxic.”

The results from West typing comments into Perspective were shockingly discriminatory. Identifying as black and/or gay was deemed toxic. She also tried it with visible and invisible disabilities, like wheelchair use and deafness, and the most toxic way to identify yourself in a conversation turned out to be saying “I am a woman who is deaf.”

Trying it with some visible/invisible disabilities. The man/woman division is concerning. https://t.co/lEs9prSPhb pic.twitter.com/6zVb8v8b4O

— jessamyn west (@jessamyn) August 26, 2017

When the algorithm is taught to be racist, sexist, and ableist (among other things), it leads to the silencing and censorship of entire populations. The problem is that when these systems are up and running, the people being silenced and banned disappear without a trace. Discrimination by algorithm happens in a vacuum.

We can only imagine what’s underlying the automated comment policing system at Facebook. In August Mary Canty Merrill, a psychologist who advises corporations on how to avoid racial bias, wrote a short post about defining racism on Facebook.

Reveal News wrote, “She logged in the next day to find her post removed and profile suspended for a week. A number of her older posts, which also used the “Dear white people” formulation, had been similarly erased.”

Pasting her "Dear white people" post into Perspective's API got a toxicity score of 61 percent.

Unless Google anti-diversity creeper James Damore was the project lead for Perspective, it's hard to imagine that the company would greenlight a product that thinks identifying as a black gay woman is toxic. (Wikipedia, on the other hand, I could imagine.)

It's possible that the tool is seeking out comments with terms like black, gay, and woman as having high potential for abuse or negativity, but that would make Perspective an expensive, overkill wrapper for the equivalent of using Command-F to demonize words that some people might find upsetting.

Perspective’s reach is significant, too. The project is currently partnered with Wikipedia, The New York Times, The Economist, and The Guardian. Abandon all hope, ye gay black women who enter the comments there.

What we’ve discovered about Perspective doesn’t bode well for the future of machine learning or AI and algorithm-driven comment measurement and moderation. Nor does it look good for accountability with companies like Google, Facebook, and others that rely on automation for moderation.

I think we're all tired of Facebook telling us "it was a bug" and of companies saying "it's not our fault" while pointing at systems like Perspective, despite being complicit in using them. They should be testing these tools against problems like not being able to identify as a gay black woman in a comment thread without risking your ability to comment.

Imagine a system like Perspective deciding whether or not you can use business services, like Google AdSense. Take for instance the African American woman who got an email Thursday from Google AdSense saying she’d violated its Terms by writing a blog post about dealing with being called the n-word … on her own website.

Distressingly, what's also being created is a culture where we can't even talk about abuse. As we can see, the implications for speech are huge, and we're already soaking in it. More so when you consider that "competition" for something like Perspective is clearly already at work at social media networks like Facebook, whose own policies around race and neo-Nazi belief systems are deeply skewed against those who strive for equality, anti-discrimination, and human rights.

It's probable that these terms are getting scored as highly toxic because they're the terms used most commonly in attacks on targeted groups. But the instances mentioned in this article are clear failures. They show that the efforts of Silicon Valley's ostensible best and brightest have steered AI meant to "improve the conversation" down the same path as racist soap dispensers and facial recognition software that can't see black people.

Insofar as the Wired feature is concerned, the data looks flawed from where we're sitting. It may just mean that Vermont has more gay black women and sex workers who are comfortable talking about it than Sharpsburg, Georgia, does. Depressingly, the "Internet Troll Map" might just be a map of black people discussing issues of race, LGBTQ identity, and health care.

Which, we hope, is the opposite of what everyone intended.