This time last year we heard the surprising Hollywood announcement that Chinese company LeEco would acquire Vizio in a deal worth $2 billion. Unfortunately, like so many of LeEco’s recent plans, that arrangement never went through, with the two companies formally calling things off in April. Now, the Orange County Business Journal reports that Vizio has filed two separate lawsuits against LeEco, accusing the company of making false claims while arranging the acquisition. One lawsuit, filed in US District Court in Los Angeles, seeks $60 million in damages, while another, filed with the Superior Court of California, County of Orange, in Santa Ana, seeks $50 million, plus other relief.
Complaint 12: Plaintiff is informed and believes, and based thereon alleges, that unbeknownst to it, at the time that $2BB Financial Wherewithal Representations were made, Global Holding and its far-flung corporate empire had begun to collapse due to their severe cash flow and financial problems, and that they desperately needed to either obtain the instant financial stability, credibility and resources that a merger with VIZIO would bring, or at least to create a widespread and dramatic public impression of their own financial health and well-being to grow or continue in business that would come with the announcement of such an intended merger.
Complaint 13: Plaintiff is informed and believes, and based thereon alleges, that Global Holding and/or various of its subsidiary or affiliated corporate entities then concocted a secret plan at or about that time to (a) use a publicly announced intended merger with VIZIO to gain or try to obtain access to VIZIO’s large corporate customers and key decision makers thereat for their own purposes and by means of confidential customer information that had been developed by VIZIO at substantial cost, time and expense, and (b) create a false widespread public impression of their own financial health during the Serious Negotiations Period and beyond, with Defendants making the $2BB Financial Wherewithal Representations to further that secret plan, and intending to induce Plaintiff to enter into such a merger agreement and provide LeEco with access to VIZIO’s confidential customer information, including contact information, account history, purchasing needs or requirements, contract terms, and the like
Variety has posted a copy of the Santa Ana court filing, which claims that LeEco’s representatives knew the company was not financially healthy enough to pull off the $2 billion purchase. As a result, the lawsuit alleges, LeEco’s true aim was to create a false impression of its financial health, as well as to gain access to Vizio’s corporate customers, decision makers and “confidential customer information that had been developed by Vizio.” It’s unclear whether that last bit refers to the Inscape automated content recognition and viewer-tracking scheme that the FTC slapped Vizio’s wrist over earlier this year, but clearly, the two companies have some issues to work out.
Now, Vizio claims that LeEco paid only $40 million of an arranged $100 million termination fee, and used the promise of a joint venture/app deal to try to get out of paying the rest. Lately, the news around LeEco has focused on how much money it’s losing, a Chinese court’s freeze on its assets and its automaker affiliate Faraday Future’s canceled plans for a Las Vegas plant.
Source: Orange County Business Journal
“The eyes are the window to the soul,” the adage goes, but these days our eyes could be better compared to our Ethernet connection to the world. According to a 2006 study conducted by the University of Pennsylvania, the human retina is capable of transmitting 10 million bits of information per second. But as potent as our visual capabilities are, there’s a whole lot that can go wrong with the human eye. Cataracts, glaucoma and age-related macular degeneration (AMD) are three of the leading causes of blindness the world over. Though we may not have robotic ocular prosthetics just yet, a number of recent ophthalmological advancements will help keep the blinds over those windows from being lowered.
Cataracts are the single leading cause of blindness worldwide, responsible for roughly 42 percent of all cases and afflicting more than 22 million Americans. The disease, which causes cloudy patches to form on the eye’s normally clear lens, can require surgery if left untreated. That’s why Google’s DeepMind AI division has teamed with the UK’s National Health Service (NHS) and Moorfields Eye Hospital to train a neural network that will help doctors diagnose early-stage cataracts.
The neural network is being trained on a million anonymized optical coherence tomography (OCT) scans (think of a sonogram, but using light instead of sound waves) in the hopes it will eventually be able to supplement human doctors’ analyses, increasing both the efficiency and accuracy of individual diagnoses.
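DeepMind hasn’t published the details of its model, but the general shape of such a system is straightforward to sketch. Below is a toy forward pass of an image classifier in plain NumPy — the kernel count, scan size and the three diagnostic classes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation) of one channel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(scan, kernels, weights):
    """Toy pipeline: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([conv2d(scan, k).clip(min=0).mean() for k in kernels])
    logits = weights @ feats
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

scan = rng.random((64, 64))            # stand-in for a grayscale OCT scan
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((3, 4))  # 3 made-up classes, e.g. healthy/early/advanced
probs = classify(scan, kernels, weights)
print(probs)  # three class probabilities that sum to 1
```

A real system trains millions of parameters on labeled scans; this sketch only shows the data flow from pixels to class probabilities.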
“OCT has totally revolutionized the field of ophthalmology. It’s an imaging system for translucent structures that utilizes coherent light,” Dr. Julie Schallhorn, an assistant professor of ophthalmology at UC San Francisco, said. “It was first described in 1998 and it gives near-cell resolution of the cornea, retina and optic nerve.
“The optic nerve is only about 200 microns thick, but you can see every cell in it. It’s given us a much-improved understanding of the pathogenesis of diseases and also their response to treatments.” The new iteration of OCT also measures the phase-shift of refracted light, allowing doctors to resolve images down to the capillary level and observe the internal structures in unprecedented detail.
“We’re great at correcting refractive errors in the eyes so we can give you good vision far away pretty reliably, or up close pretty reliably,” Schallhorn continued. “But the act of shifting focus from distance to near requires different optical powers inside the eye. The way the eye handles this when you’re young is through a process called ‘accommodation.’” There’s a muscle that contracts and changes the shape of the lens to help you focus on close objects. When you get older, even before you typically develop cataracts, the lens will stiffen and reduce the eye’s ability to change its shape.
“The lenses that we have been putting in during cataract surgery are not able to mimic that [shapeshifting] ability, so people have to wind up wearing reading glasses,” Schallhorn said. There’s a lot of work in the field to find solutions for this issue and help restore the eye’s accommodation.
There are two front-runners for that: accommodating lenses, which use the eye’s own ciliary muscle to shift focus, and multifocal lenses, which work much like multifocal reading glasses except that they sit inside the eye itself. Multifocals have been on the market for about a decade, though their design and construction have been refined over that time.
To ensure the replacement lenses doctors implant match the optical power of the diseased ones they’re removing, surgeons are beginning to use Optiwave Refractive Analysis (ORA). Traditionally, doctors relied on measurements taken before the surgery, combined with nomograms, to estimate how powerful the new lens should be.
The key word there is “estimate.” “They especially have problems in patients who have already had refractive surgery like LASIK,” Schallhorn explained. The ORA system, however, performs a wavefront measurement of the cornea after the cataract has been removed to help surgeons more accurately pick the right replacement lens for the job.
Corneal inlays are also being used. These devices resemble miniature contact lenses but sit in a pocket on the cornea that’s been etched out with a LASIK laser to mimic the process of accommodation and provide a greater depth of focus. They essentially serve the same function as camera apertures. The Kamra lens from AcuFocus and the Raindrop Near Vision Inlay from Revision Optics are the only inlays approved by the FDA for use in the US.
Glaucoma afflicts more than 70 million people worldwide. The disease causes fluid pressure within the eye to gradually increase, eventually damaging the optic nerve that carries electrical signals from the eye to the brain. Normally, detecting the early stages of glaucoma requires a comprehensive eye exam by a trained medical professional — folks who are often in short supply in rural and underserved communities. However, Cambridge Consultants’ Viewi headset lets anyone test for the disease — so long as they have a smartphone and 10 minutes to spare.
The Viewi works much like the Daydream View, wherein the phone provides the processing power for a VR headset shell — except, of course, that instead of watching 360-degree YouTube videos, the screen displays the flashing light patterns used to test for glaucoma. The results are reportedly good enough to share with your eye doctor, and the test takes only about five minutes per eye. Best of all, the procedure costs only about $25, which makes it ideal for use in developing nations.
And while there is no known cure for glaucoma, a team of researchers from Stanford University may soon have one. Last July, the team managed to partially restore the vision of mice suffering from a glaucoma-like condition.
Normally, when light hits your eye, specialized cells in the retina convert that light into electrical signals. These signals are then transmitted via retinal ganglion cells, whose long appendages run along the optic nerve and spread out to various parts of the brain’s visual-processing bits. But if the optic nerve or the ganglion cells have been damaged through injury or illness, they stay damaged. They won’t just grow back like your olfactory sensory nerve.
However, the Stanford team found that subjecting mice to a few weeks of high-contrast visual stimulation after giving them drugs to reactivate the mTOR pathway, which has been shown to instigate new growth in ganglion cells, resulted in “substantial numbers” of new axons. The results are promising, though the team will need to further boost the rate and scope of axon growth before the technique can be applied to humans.
Researchers from Japan have recently taken this idea of cajoling the retina into healing itself and applied it to age-related macular degeneration. AMD primarily affects people aged 60 and over (hence the name). It slowly kills cells in the macula, the part of the eye that processes sharp detail, causing the central portion of the field of vision to deteriorate and leaving only the peripheral vision intact.
The research team from Kyoto University and the RIKEN Center for Developmental Biology first took a skin sample from a human donor, then converted it into induced pluripotent stem (iPS) cells. These iPS cells are effectively blank slates that can be coaxed into developing into any kind of cell you need. Injected into the back of a patient’s eye, they should regrow into retinal cells.
In March of this year, the team implanted a batch of these cells into a Japanese sexagenarian who suffers from AMD, in the hope that the stem cells would take hold and halt, if not begin to reverse, the damage to his macula. The team has not yet been able to measure the efficacy of this treatment but, should it work out, the researchers will look into creating a stem-cell bank where patients could immediately obtain iPS cells for their treatment rather than wait months for donor samples to be converted.
And while there isn’t a reliable treatment for dry AMD, in which fatty protein deposits damage Bruch’s membrane, a potent solution for wet AMD, which involves blood leaking into the eyeball, has been discovered in a most unlikely place: cancer medication. “Genentech started developing a new drug when an ophthalmologist in Florida just decided to inject the commercially available drug into patients’ eyes,” Schallhorn explained.
“Generally this is not a great idea because sometimes things will go terribly wrong,” she continued, “but this worked super-well. It basically stops and reverses the growth of these blood vessels.” The only problem is that the drugs don’t last, requiring patients to receive injections into their eyeballs every four to eight weeks. Genentech and other pharma companies are working to reformulate the drug — or at least develop a mechanical “reservoir” — so it has to be injected only once or twice a year.
Stem-cell treatments like the one used in the Kyoto University trial have already shown potential against a wide range of genetic diseases, so why shouldn’t they work on the rare genetic condition known as choroideremia? This disease is caused by a single faulty gene and primarily affects young men. Similar to AMD, choroideremia causes light-sensitive cells at the back of the eye to slowly wither and die, resulting in partial to complete blindness.
In April of 2016, a team of researchers from Oxford University performed an experimental surgery on a 24-year-old man suffering from the disease. They first injected a small amount of liquid into the back of the eye to lift a section of the retina away from the interior cellular wall. The team then injected functional copies of the gene into that same cavity, replacing the faulty copies and not only halting the process of cellular death but actually restoring a bit of the patient’s vision.
Gene therapy may be “surely the most efficient way of treating a disease,” lead author of the study, Oxford professor Robert MacLaren, told BBC News, but its widespread use is still a number of years away. Until then, good old-fashioned gadgetry will have to suffice. Take the Argus II, for example.
The Argus II bionic eye from Second Sight has been in circulation since 2013, when the FDA approved its use in treating retinitis pigmentosa; in 2015 it got the go-ahead for use with AMD as well. The system uses a wireless implant that sits on the retina and receives image data from an external camera mounted on a pair of glasses. The implant converts that data into electrical signals that stimulate the remaining retinal cells to generate a visual image.
The Argus isn’t the only implantable eyepiece. French startup Pixium Vision developed a similar system, the IRIS II, back in 2015 and implanted it in a person last November after receiving clearance from the European Union. The company is already in talks with the FDA to bring its IRIS II successor, a miniaturized wireless subretinal photovoltaic implant called PRIMA, to US clinical trials by the end of this year.
Ultimately, the goal is to be able to replace a damaged or diseased eye entirely, if necessary, using a robotic prosthetic. However, there are still a number of technological hurdles that must be overcome before that happens, as Schallhorn explained.
“The big thing that’s holding us back from a fully functional artificial eye is that we need to find a way to interface with the optic nerve and the brain in a way that we transmit signals,” she said. “That’s the same problem we’re facing with prosthetic limbs right now. But there are a lot of smart people in the field working on that, and I’m sure they’ll come up with something soon.”
Microsoft has launched an iPhone app designed to help blind and partially sighted people better navigate the world. The app, Seeing AI, uses computer vision to narrate the user’s surroundings, read text, describe scenes and even identify friends’ facial cues.
The project has been in the works since September 2016; in March this year, Microsoft demonstrated a prototype of the app for the first time. It uses neural networks, similar to the technology found in self-driving cars, to identify its environment and speak its observations out loud.
Point your phone camera at a friend and it’ll tell you who they are. Aim it toward a short piece of text such as a name badge or room number and it’ll speak it instantly — a marked step up from the optical character recognition (OCR) technology of yore. Plus, it guides the user into capturing the object in question correctly, telling them to move the camera left or right to get the target in shot.
The app also recognizes currency, identifies products via their barcodes and, through an experimental feature, can describe entire scenes, such as a man walking a dog or food cooking on a stove. Basic tasks can be carried out directly within the app, without the need for an internet connection. It’s currently available to download for free in the US on iOS, but there’s no indication when it’ll come to other platforms or countries.
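Microsoft hasn’t said how Seeing AI decodes barcodes, but once the digits are read, verifying them is simple arithmetic: the last digit of a 12-digit UPC-A code is a checksum. A minimal validity check (the sample codes below are illustrative, not tied to the app):

```python
def upc_a_is_valid(code: str) -> bool:
    """Check a 12-digit UPC-A code: three times the sum of the digits in
    odd positions, plus the sum of the digits in even positions, must be
    a multiple of 10."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    return total % 10 == 0

print(upc_a_is_valid("036000291452"))  # True  (a commonly cited example code)
print(upc_a_is_valid("036000291453"))  # False (check digit off by one)
```

The checksum catches any single misread digit, which is why a scanner can reject a bad read instantly instead of returning the wrong product.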
In a blog post by Harry Shum, executive vice president of Microsoft’s AI and research group, the company explains that Seeing AI is “just the beginning” for this kind of AI application. Machine learning, perception and natural language processing have evolved over time into separate fields of research, it says, but “we believe AI will be even more helpful when we can create tools that combine those functions.”
Earlier this year, Facebook launched Spaces, a social VR experience that lets you and your friends interact with each other in virtual reality. Now, the company is ready to introduce a new aspect of Spaces by merging it with another Facebook product: Facebook Live. That’s right; starting today, anyone with an Oculus Rift can livestream directly from Facebook Spaces, letting anyone and everyone have a peek inside your VR universe.
To do this, you’ll access a new virtual tool in Spaces that looks like a tablet. Pick it up, hit the “go live” button, and it’ll livestream your VR world to your Facebook feed, where your friends and family can view it. You can then show off various Spaces experiences like traveling through vacation photos, watching a video together or interacting with fellow VR buddies. The tablet, according to Spaces’ head of product management Mike Booth, essentially acts as a “magic mirror” of sorts that gives you a view of what everyone will see when they look at the livestream. Think of it as a virtual camera viewfinder.
As you’re livestreaming, your audience can add reactions like a thumbs-up or a heart, just as on a regular Facebook Live stream — except that in Spaces, those reactions come flying out of the aforementioned “magic mirror” in the VR world. Also, as your audience leaves comments, you’ll see them appear as stacks next to the magic mirror. If you want to address one comment in particular, you can “grab” it from the stack and turn it into a virtual sign. This, Booth says, lets people in the VR world talk back to the audience and respond in a personal way.
The inspiration behind the addition of Live was Messenger Video Calling, a feature in Spaces that lets you call people in the real world and have them see you and your virtual avatar in your virtual world. That, however, was a one-to-one experience. “With Go Live, we can have the same experience with many people at the same time,” says Booth.
“Our big goal behind this is that, VR is for everyone,” continues Booth. “A lot of people don’t know what VR is; they think it’s something that gamers do.” But if they can see and interact with this livestream of a social VR experience, they might think otherwise.
“It’s the ultimate communications device,” says Booth. “People are the killer app for VR.”
The livestream feature will roll out to Spaces starting today and throughout the rest of the week. Facebook Spaces is only available on the Rift at the moment, though there are plans to release it on more platforms in the future. Which is great, unless you’re not sold on the whole thing in the first place.
Frustrated that the mysterious story of What Remains of Edith Finch has been available on PCs and the PS4 for months, but not your Xbox One? Don’t be. Giant Sparrow and Annapurna Interactive have confirmed that Edith Finch is coming to Xbox One on July 19th for $20. The experience remains the same, but that’s no bad thing if you’re a fan of exploration stories like Everybody’s Gone to the Rapture or the upcoming Tacoma — it’s all about revealing an intricate narrative.
To recap: you assume the role of Edith Finch as she combs through the family home to learn why she’s the only remaining member of the Finch clan. While each of the short stories that Edith finds is guaranteed to end in death, it’s not a grim tale. It’s meant to provoke happiness, intrigue, and even a little appreciation for beauty. A bit hokey, we know — but if you’re tired of the usual fantasy and sci-fi narratives, this might be worth a look.
Source: YouTube, Annapurna
Next time you fire up the Google Play Movies & TV app, pay attention to every title — it could sport a tag you’ve been waiting for. The service has finally introduced HDR (high dynamic range) playback, and it’s now available for select movies and TV shows. HDR makes what you’re watching more lifelike by displaying a wider range of colors and greater contrast, so you can see more detail in the darkest and brightest parts of an image. You’ll need an HDR-compatible monitor or TV to fully enjoy the upgrade, though… and the feature is only live in the US and Canada.
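One concrete piece of that wider range is bit depth: HDR10 video uses 10 bits per color channel, versus the 8 bits typical of standard dynamic range, which quadruples the number of steps between black and peak brightness. A quick back-of-the-envelope check:

```python
def levels(bits_per_channel: int) -> int:
    """Number of distinct code values per color channel at a given bit depth."""
    return 2 ** bits_per_channel

sdr = levels(8)     # standard dynamic range: 256 steps per channel
hdr10 = levels(10)  # HDR10: 1,024 steps per channel
print(sdr, hdr10, hdr10 // sdr)  # 256 1024 4
```

Those extra steps are what let HDR brighten highlights without producing visible banding in smooth gradients.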
The good news is that you can still enjoy the feature even if you have a non-smart TV by using Chromecast Ultra, Google’s more expensive streaming puck for 4K displays. If you can fulfill all those conditions, you’re golden — just find HDR tags in the app like in the screenshot below.
In addition to Mad Max: Fury Road, Google also confirmed HDR compatibility for Fantastic Beasts and Where to Find Them. We’ll bet you’ll find a few more options if you look, since the big G has teamed up with its studio partners for the feature’s launch, including Sony and Warner Bros.
As a mark of just how serious Apple is about its smart home initiatives, the company has built HomeKit demos into 46 of its brick-and-mortar stores. That means if you stop into the Union Square location in San Francisco or the World Trade Center and Williamsburg stores in New York, you’ll be able to give the IoT suite a test run, TechCrunch writes. Some 28 other stores throughout the country will have the demos up and running. If you don’t live near one of those fancy stores, you’ll have to settle for non-interactive literature and the like. Ugh.
As far as the actual testing goes, it sounds pretty simple. Customers will be able to futz with Hunter ceiling fans, Hue lightbulbs and even connected window shades via iPhone, iPad or Apple Watch, apparently.
Apple is playing catch-up here, so transforming HomeKit from a feature people never use on their phones into something they can see and touch is pretty important. After all, Tim Cook and Co. need to gain mindshare against offerings from Google and Amazon if they want the HomePod to succeed.
If you’ve had to call Verizon customer service recently, you might want to keep a close eye on your data. ZDNet has learned that an employee at a carrier partner, Nice Systems, exposed 14 million residential customer records from the past six months on an unguarded Amazon S3 server. As long as you could guess the web address (which reportedly wasn’t that hard), you had free rein to download whichever log files you wanted. Each record included a name, cellphone number and account PIN, and only some of the data was masked. Thieves would not only have personal info they could abuse elsewhere (such as social accounts that use a phone number for authentication) — they could also impersonate you if they called Verizon later.
A spokesperson for the telecom says that it’s investigating the exposure and acknowledges that the data included “some personal information.” Verizon had to give the data to Nice to verify customer info, and Nice was allowed to set it up on the Amazon server, but Verizon clearly didn’t intend for that info to be made public. There’s “no indication” that the info has been compromised, the company claims. That’s not going to be very reassuring, though, as it’s not clear who (if anyone) downloaded the data while it was public.
Nice will only say that the data was “part of a demo system.”
This isn’t the first time a big company’s data has been exposed online. However, the Verizon incident underscores an important point: data security is only as strong as the weakest link in the chain. If a partner company doesn’t guarantee airtight privacy, it’s just as dangerous as if the main company had revealed the data itself.
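One basic safeguard for records like these is masking identifiers before they ever reach a partner’s demo or test system. A minimal sketch in Python — the field names and sample values here are invented, not taken from the leaked data:

```python
def mask(value: str, visible: int = 4, fill: str = "*") -> str:
    """Replace all but the last `visible` characters with a fill character."""
    if len(value) <= visible:
        return fill * len(value)
    return fill * (len(value) - visible) + value[-visible:]

# Mask the sensitive fields of a hypothetical customer record.
record = {"name": "Jane Doe", "phone": "2125551234", "pin": "845291"}
safe = {k: (mask(v) if k in ("phone", "pin") else v) for k, v in record.items()}
print(safe)  # {'name': 'Jane Doe', 'phone': '******1234', 'pin': '**5291'}
```

A masked copy is still useful for demos and support workflows, but worthless to anyone who stumbles across an open server.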
Today is the net neutrality Day of Action, and many thousands of organizations, companies and websites are standing up against the FCC’s plan to gut net neutrality rules and let ISPs regulate themselves. The first deadline for comments on the FCC proposal is up in five days, and plenty of sites are making it easy today to voice your opinion.
Some websites have added banners to their pages, including Netflix, Twitch, Spotify and PornHub. Mozilla, Vimeo and Airbnb have made more-substantial additions to the top of their pages, while websites like Twitter and Google have opted for blog posts.
Some of the more creative messages can be found at Greenpeace’s and Reddit’s websites. Greenpeace has a pop-up telling you the site has been blocked by your ISP and then goes on to explain that while it might not be true yet, it could be if the FCC’s plan is approved. Reddit’s page has a pixelated logo with an alert saying, “Monthly bandwidth exceeded, click to upgrade” and a pop-up with a slowly typed message that eventually says, “The internet’s less fun when your favorite sites load slowly, isn’t it? Whether you’re here for news, AMAs, or some good old-fashioned cats in business attire, the internet’s at its best when you — not internet service providers — decide what you see online. Today, u/kn0thing and I are calling on you to be the heroes we need. Please go to battleforthenet.com and tell the FCC that you support the open internet.”
The ACLU also has a pop-up, and Kickstarter’s is full page. Amazon’s statement in favor of net neutrality is lower down on its page and only visible if you’re signed in.
Those are just a few of the many sites that have chosen to participate today. Others include OkCupid, Tumblr, Dropbox, GoDaddy, DuckDuckGo, Expedia and Rosetta Stone. AT&T, which sued the FCC to fight against net neutrality rules just a couple of years ago, decided to support the Day of Action and net neutrality this time around, though there’s nothing on its front page today. Verizon has decided that it’s above the “slogans and rhetoric” of things like the Day of Action and says we need more-concrete action.
Organizations have also released statements today. Lee Tien, an attorney with the Electronic Frontier Foundation, said, “It’s our Internet and we will defend it. We won’t allow cable companies and ISPs, which already garner immense profits from customers, to become Internet gatekeepers.”
Rashad Robinson, the executive director of Color of Change, said, “Chairman Pai’s plan to gut the FCC’s net neutrality rules will devastate Black communities. For the first time in history we can communicate with a global audience — for entertainment, education, or political organizing — without prohibitive costs, or mediation by government or industry. We join today’s Day of Action to fight back against this attack on our 21st century civil rights. The FCC has the power and responsibility to defend our digital oxygen, our right to communicate online, and to protect the public from predatory business practices from giant ISPs determined to invent new ways to charge us even more for even less.”
Reddit co-founder Alexis Ohanian said, “Net neutrality ensures that the free market — not big cable — picks the winners and losers. This is a bipartisan issue, and we at Reddit will continue to fight for it. We’ve been here before, and this time we’re facing even worse odds. But as we all know, you should never tell redditors the odds.”
And Sir Tim Berners-Lee, inventor of the world wide web, said, “If we lose net neutrality, we lose the open internet as we know it. What sort of a web do you want? Do you want a web where cable companies control the winners and losers online? Where they decide which opinions are read, which creative ideas succeed, which innovations manage to take off? That’s not the web I want.”
As of this writing, the FCC proposal has received over 6 million comments. Feel free to add yours here.
[Images: PornHub, Amazon]
As part of its continued push into the AI sector, Google has just revealed that it has purchased a new deep learning startup. India-based Halli Labs is the latest addition to Google’s Next Billion Users team, joining the world-leading tech company less than two months after the startup’s first public appearance. The young company describes its mission at Google as “to help get more technology and information into more people’s hands around the world.” Halli announced the news itself in a brief post on Medium, and Caesar Sengupta, a VP at Google, confirmed the purchase shortly afterwards on Twitter.
Welcome @Pankaj and the team at @halli_labs to Google. Looking forward to building some cool stuff together. https://t.co/wiBP1aQxE9
— Caesar Sengupta (@caesars) July 12, 2017
While Halli Labs is still in its infancy, the company’s founder — Pankaj Gupta — is a data scientist who has worked at Twitter as well as defunct Indian Airbnb competitor Stayzilla. Google buying AI firms is hardly shocking news, but the tech giant’s focus on expanding into developing markets is what separates this from other recent acquisitions. With Google previously introducing region-specific offline versions of apps and even installing Wi-Fi into Indian train stations, the tech behemoth has already shown its interest in making life easier for the country’s growing population of internet users.
With Halli Labs’ announcement post stating that its emphasis is on solving “old problems,” this intriguing acquisition appears to show Google once again using AI to double down on serving the world’s internet users.