You can finally play Spotify on multiple Amazon Echo speakers
Available first in the U.S., UK, Canada, Germany, and Ireland.
Amazon’s Echo smart speakers are truly awesome, and with fun hardware like the Echo Show and Echo Spot, they’re only getting better. However, as many improvements as we’ve seen on both the hardware and software fronts, one feature that’s been noticeably absent since multi-room audio was added in August is the ability to stream Spotify on multiple Echo speakers at the same time.

Thankfully, this changes today.
Multi-room audio on Echo speakers now officially supports Spotify, meaning you can finally listen to all your favorite tunes through the service on more than one Echo at once. Spotify will work with multi-room audio on Echos in the United States, United Kingdom, Canada, Germany, and Ireland at first, but we should see it expanded to more countries in the near future.
In addition to Spotify, Amazon is also adding similar support for SiriusXM. However, SiriusXM is launching first in just the United States.
HDMI 2.1: Everything you need to know

HDMI 2.1 promises to deliver 10K video, 120 Hz refresh rates and much more. Here’s everything you should know about HDMI’s next revision.
Earlier this year HDMI 2.1 was unveiled, the latest revision of the now-commonplace audio/video cable standard across consumer electronics. And while still in its early stages, the final specification provides a new spectrum of high-end features designed to deliver premium home entertainment experiences. With improvements focusing on both video consumption and gaming, HDMI 2.1 lays the foundation for a clearer and smoother future. But what does this mean for you?
What is HDMI 2.1?
HDMI 2.1 is the latest revised specification of the HDMI interface, which is used for transmitting both audio and video across modern devices. Having become the go-to solution across consumer electronics, you’ll have undoubtedly encountered previous versions of the cable or port over the last decade. And while it uses a visually identical connector, HDMI 2.1 delivers hardware refinements that enable improved video and audio quality.
HDMI 2.1 was first unveiled at the Consumer Electronics Show in January 2017, and the HDMI Forum, the body managing the interface’s development, has since fleshed out its specification. As of November 2017, the specification was finalized, allowing manufacturers to begin adopting the technology.
HDMI 2.1’s new features
Improved bandwidth

One of the principal developments with HDMI 2.1 is a boost in available bandwidth, allowing for transfer rates of up to 48 gigabits per second (Gbps) – a significant step up from the 18 Gbps available with HDMI 2.0, and a far bigger jump than HDMI 2.0’s own leap from the 10.2 Gbps of its predecessor, HDMI 1.4. Not only does this allow for improved visual clarity, but a range of other video and audio upgrades take advantage of the extra overhead.
Higher resolutions and framerates

For general consumers, one of the alluring promises is a leap in both supported resolutions and framerates when using an HDMI 2.1 connection. Topping out at 10K resolution, and 120 Hz at lower pixel counts, the new revision should provide more than enough flexibility for any consumer display hitting the market in the years ahead.
While 10K video won’t hit the mainstream anytime soon, HDMI 2.1 also leverages its improved bandwidth at lower resolutions. On supported displays, 8K (7,680 × 4,320) at 60 Hz will be possible, as well as 4K (3,840 × 2,160) at 120 Hz. These resolutions also see the full benefit of High Dynamic Range (HDR) across supported content, with a wider gamut of colors and an improved contrast ratio. Stepping up from the limit of 4K at 60 Hz imposed by HDMI 2.0, resolutions can now be pushed even further without compromising fluidity.
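To put those figures in context, here’s a back-of-envelope Python sketch of the raw pixel bandwidth each mode demands. It assumes 10-bit color with three components per pixel and ignores blanking intervals and HDMI’s link-level encoding overhead, both of which push the real requirements higher still.

```python
# Back-of-envelope uncompressed video bandwidth. Ignores blanking intervals
# and HDMI link-level encoding overhead, so real link requirements are higher.

def raw_bandwidth_gbps(width, height, fps, bits_per_component, components=3):
    """Raw pixel data rate in gigabits per second."""
    bits_per_frame = width * height * bits_per_component * components
    return bits_per_frame * fps / 1e9

modes = {
    "4K @ 60 Hz, 10-bit":  (3840, 2160, 60, 10),
    "4K @ 120 Hz, 10-bit": (3840, 2160, 120, 10),
    "8K @ 60 Hz, 10-bit":  (7680, 4320, 60, 10),
}

for name, (w, h, fps, depth) in modes.items():
    print(f"{name}: ~{raw_bandwidth_gbps(w, h, fps, depth):.1f} Gbps raw")
```

Even on this simplified math, 8K at 60 Hz with 10-bit color lands near 60 Gbps of raw pixel data, which is why chroma subsampling and Display Stream Compression (covered below) remain relevant even with a 48 Gbps link.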
Dynamic HDR

HDMI 2.1 also ushers in “Dynamic HDR,” which extends the potential of existing HDR technology with improved color tuning. When the feature is in use, dynamic metadata is processed on a frame-by-frame basis, allowing color settings and brightness to adapt on the fly. The result is an improvement to how colors are displayed, depending on the current scene. While dynamic metadata is already available over HDMI 2.0 with “Dolby Vision,” HDMI 2.1 aims to deliver this to the open HDR10 standard.
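As a loose illustration of why per-frame metadata matters, the sketch below contrasts a single static brightness value for an entire film with values that change per scene. The HdrMetadata structure and the naive linear tone-mapping function are invented for this example and bear no relation to the actual SMPTE metadata formats used by HDR10, HDR10+ or Dolby Vision.

```python
# Illustrative only: real HDR metadata uses defined SMPTE structures; this
# sketch just contrasts one static setting per stream with per-frame values.

from dataclasses import dataclass

@dataclass
class HdrMetadata:
    max_luminance_nits: float   # peak brightness the content was mastered for

def tone_map(pixel_nits, meta, display_peak_nits=600):
    """Naive tone mapping: scale content brightness to the display's peak."""
    return (min(pixel_nits, meta.max_luminance_nits)
            * display_peak_nits / meta.max_luminance_nits)

# Static metadata: one value governs the whole film, so a dim scene mastered
# at 4,000 nits gets squeezed just as hard as the brightest scene.
static_meta = HdrMetadata(max_luminance_nits=4000)

# Dynamic metadata: each frame (or scene) carries its own value, letting the
# display use its full range for dark scenes as well as bright ones.
per_frame_meta = [HdrMetadata(max_luminance_nits=800),   # dim interior scene
                  HdrMetadata(max_luminance_nits=4000)]  # bright outdoor scene

for frame_nits, meta in zip([500, 3500], per_frame_meta):
    print(f"static: {tone_map(frame_nits, static_meta):.0f} nits, "
          f"dynamic: {tone_map(frame_nits, meta):.0f} nits")
```

Run it and the dim scene comes out far brighter under dynamic metadata, because the display no longer has to reserve headroom for the brightest moment in the whole film.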
More: The difference between HDR10 and Dolby Vision
eARC
In a move to continue simplifying household entertainment centers, HDMI 2.1 adds support for enhanced audio return channel (eARC), which is used to deliver audio over an HDMI connection to soundbars and receivers. Building on the ARC implementation found in earlier HDMI specifications, this allows a single HDMI cable to both send and receive audio, reducing the cabling between external devices.
eARC is an extension of this technology, making your TV the central hub for entertainment rather than a traditional receiver. With support for integrated TV tuners, streaming apps and other devices connected to the TV via its other HDMI ports, the TV can hand off essentially all audio to an external sound system. With support for Dolby Atmos and DTS:X, too, this delivers high-quality sound to audio-only receivers, soundbars, and amplifiers.
Other features of HDMI 2.1

While the most impressive features of HDMI 2.1 deliver improvements to visual clarity and color, several other additions arrive with the specification. Tailored for video playback and gaming, these provide enhancements for more specific scenarios.
For gamers, one of the most promising features of HDMI 2.1 is support for Variable Refresh Rate (VRR) output. In essence, VRR adapts the refresh rate of your display to match the content being output, reducing screen tearing and stuttering without the input lag suffered when using a similar solution known as V-Sync. This makes for a much smoother overall experience on supported displays, although it has few applications outside of gaming.
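To make the difference concrete, here’s a tiny conceptual simulation in Python. It assumes a 60 Hz fixed-refresh display, a handful of hypothetical per-frame render times, and a VRR range topping out at 120 Hz; none of this reflects how drivers or panels actually implement either mode, it’s just the timing math behind the stutter trade-off.

```python
# Conceptual comparison only: real VRR and V-Sync live in display hardware
# and drivers; this just shows why variable refresh reduces stutter when
# frames don't line up with a fixed 60 Hz cadence.

import math

FIXED_REFRESH_MS = 1000 / 60            # fixed 60 Hz display
render_times_ms = [14, 20, 17, 25, 15]  # hypothetical per-frame GPU render times

def vsync_presentation(render_times):
    """Each frame waits for the next fixed refresh tick (classic V-Sync)."""
    t, out = 0.0, []
    for r in render_times:
        t += r
        # the frame can only appear on the next 60 Hz tick after it finishes
        out.append(math.ceil(t / FIXED_REFRESH_MS) * FIXED_REFRESH_MS)
    return out

def vrr_presentation(render_times, min_interval_ms=1000 / 120):
    """Display refreshes as soon as a frame is ready, within its supported range."""
    t, last, out = 0.0, 0.0, []
    for r in render_times:
        t += r
        last = max(t, last + min_interval_ms)
        out.append(last)
    return out

print("V-Sync:", [round(x, 1) for x in vsync_presentation(render_times_ms)])
print("VRR:   ", [round(x, 1) for x in vrr_presentation(render_times_ms)])
```

On this toy example, V-Sync quantizes presentation to uneven 16.7 ms and 33.3 ms steps (the visible stutter), while VRR presents each frame the moment it’s ready.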
HDMI 2.1 also delivers support for a minor feature known as “Quick Media Switching” (QMS), which streamlines the process of changing media types on the fly. On traditional displays, changing framerate, resolution and other settings can result in a short blackout. With a QMS-supported display, HDMI 2.1 smooths the switch between source material without dropping the signal.
Another welcome feature for gamers is support for “Quick Frame Transport” (QFT) – a relatively simple concept, which makes for a snappier experience. When using QFT, frame transport latency is reduced, delivering a much more responsive experience on displays and VR headsets.
Rounding out the gaming-focused additions is “Auto Low Latency Mode” (ALLM), which lets the ideal low-latency setting be engaged automatically, making for smooth, lag-free viewing and play without a trip to the settings menu.
Pushing more advanced visuals, “Display Stream Compression” (DSC) is also a part of the new specification, letting devices carry video formats whose uncompressed bandwidth would exceed HDMI 2.1’s 48 Gbps limit. With this feature enabled, video streams are compressed on the fly, delivering higher resolutions and frame rates to supported displays.
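As rough arithmetic only: assuming DSC reaches roughly 3:1 visually lossless compression (an assumed figure for illustration, not one taken from the HDMI 2.1 spec), a format whose raw pixel rate overshoots 48 Gbps can still travel over the link.

```python
# Rough arithmetic: how compression lets a format whose raw bandwidth exceeds
# the link still fit. The ~3:1 ratio is an assumed illustrative figure, not a
# number from the HDMI 2.1 specification itself.

LINK_GBPS = 48
ASSUMED_DSC_RATIO = 3.0

def raw_gbps(w, h, fps, bits_per_component, components=3):
    return w * h * fps * bits_per_component * components / 1e9

raw = raw_gbps(7680, 4320, 120, 10)      # 8K @ 120 Hz, 10-bit color
compressed = raw / ASSUMED_DSC_RATIO

print(f"8K120 raw: ~{raw:.0f} Gbps -> with DSC: ~{compressed:.0f} Gbps "
      f"(link limit {LINK_GBPS} Gbps)")
```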
Getting started with HDMI 2.1

With the HDMI 2.1 specification only recently having been finalized, development of hardware utilizing the technology is still in the early stages. Despite using the same connector, the revised interface requires dedicated ports and cables designed for the latest specification, meaning there are no consumer setups available that offer its feature set.
To take advantage of all the new features offered by HDMI 2.1, a new “Ultra High-Speed HDMI Cable” will be required. Cables certified for the new standard will be built for 48 Gbps transfer rates and all the other features detailed in the revised specification. As of publication, manufacturers have yet to release cables that have passed the HDMI 2.1 Compliance Test Specification (CTS); they are expected to first hit the market sometime in 2018. But be warned – at first, these cables won’t be cheap.

Some HDMI 2.1 features are accessible with older HDMI cables, but to get the complete range of features, you’ll have to invest in an Ultra High-Speed variant. For example, eARC can be accessed using HDMI High-Speed Cables with Ethernet, while less intensive resolution and frame rate combinations can be output alongside Dynamic HDR using older HDMI cables. Ultra High-Speed cables are also backward compatible with older ports, though you’ll only get the features those ports support.
The same concept applies to devices themselves, meaning both outputting and receiving devices must support HDMI 2.1 to take full advantage of the interface. Using the same connector means that implementing HDMI 2.1 shouldn’t be too challenging for major manufacturers, but due to the infancy of the specification, it will once again be some time before these devices are in the home.
So, should you care about HDMI 2.1?
Not really — at least not yet. While HDMI 2.1 delivers some anticipated premium features, with the specification only recently finalized, the technology is still in its early stages. It will be some time before supported devices are available to consumers – and that’s not even touching the fact that price will be a major barrier to entry.
4K displays are only just finding a place in the mainstream, meaning the technology that showcases the greatest benefits of HDMI 2.1 will come at a cost. Simply put, HDMI 2.1 is so future-proofed that displays haven’t caught up to its capabilities. Once HDMI 2.1 is more accessible, it will bring welcome improvements for both video and gaming experiences, but for now, you’ll have to wait.
What is HDR and why should you care?
Actor Theo Rossi shares his thoughts on VR!
Take a look at the other side of working around VR.

While the current excitement in the world of VR is gaming, especially with titles like Fallout and Doom hitting the shelves, the world of VR video continues to grow at an incredible pace. I was recently invited to attend an event on Samsung’s journey so far with the Gear VR, and sat down with actor and philanthropist Theo Rossi to talk about his recent experiences on the other side of this new kind of camera.
Read more at VRHeads!
Deck your phone with these ho-ho-holiday wallpapers

A holiday poem, and some wallpapers, too!
‘Tis two weeks before Christmas,
So time to deck the halls.
We’ve decked the tree and the car,
And the ceilings and the walls.
But we’ve missed something special,
For your desktop is bare.
Time to get a festive wallpaper,
For your Android phone to wear.
So enjoy these sweet walls,
Some cute and some merry.
And happy holidays from AC,
Time to steal some rum from Jerry!
The Nutcracker

You say Christmas and millions of people immediately start hearing Tchaikovsky, and it’s little wonder. There just might be more renditions of The Nutcracker this time of year than there are tellings of the nativity story. Eleven months of the year, most Americans scoff at ballet tickets, but when Christmas rolls around, everyone turns out to watch the dance of the Sugar Plum Fairy. This illustration features toy soldiers trumpeting their prince’s return and the oft-overlooked Christmas Angels, who help guide Clara and the Nutcracker Prince through the winter snow to the fantastical celebration that awaits them.
Now, please excuse me, I have an old DVD of the Nutcracker I need to go watch for the 50th time…
The nutcracker by KibiQeQ
Christmas Reindeer

Christmas is a time where fairy lights and red velvet seem to decorate everything, and that includes the wildlife. This wallpaper features a reindeer with truly magnificent antlers, which are wrapped in the iconic Christmas lights, and some breathtaking ornamentation around his face. Perched among his expertly decorated antlers is a red robin, a perfect pop of red against the blue of the tableau.
Christmas wallpaper by SofiaGolovanova
Bugdroid Christmas

Android’s adorable little mascot has so much to celebrate each year as millions of Android and Chrome devices are given as gifts, and Motorola’s bokeh wallpaper captures that festive spirit with a traditional bugdroid, a snowman bugdroid, and a somewhat odd pink bugdroid. The tiny Christmas card-worthy scenes make for delightfully nerdy wallpapers, no matter which manufacturer you’re using today.
Motorola’s Bugdroid Christmas
Trey Ratcliff Christmas

The Christmas tree is one of the most iconic holiday symbols in the world. You can decorate them a million and a half ways, heck, you can decorate any kind of tree and turn it into a Christmas tree. Whether you decorate with garlands, lights, ornaments, or popcorn, seeing a Christmas tree just ignites a warm fire of cheer and hope in your heart. This mesmerizing wallpaper by awesome photographer Trey Ratcliff plays with the fairy lights and long exposure to create a Christmas tree that seems to be made entirely of light as the branches are blurred and obscured by the long exposure. It’s pure magic, and we could all use some magic this Christmas.
Merry Christmas from the Ratcliff Family
Disneyland Paris Christmas

This is the holiday wallpaper I’ve used for two years now, and I love it to pieces. I have the ever-charming Chateau de la Belle au Bois Dormant from Disneyland Paris shining in the distance, snow falling across the beautiful wintery scene, and a Santa boot left Cinderella-style on the steps (bet he’s missing that in this weather). This wallpaper is whimsical, yet refined, and the strong blue-white-red color scheme lends itself well to theming, with Glim red-variant icons for my dock and color-variable widgets when I get the whim.
Disneyland Paris Christmas
It’s Christmas Time by Anyzamarah

Fractals are all around in the winter, and they make lovely wallpapers, from snowflakes, to Christmas trees to the most ornate ornaments you’ll ever see. This is a wallpaper that can draw you in with its remarkable detailing and stun you with its beauty. You’ll be hard pressed to put this away at the end of the season.
It’s Christmas Time by Anyzamarah
Arctic Christmas

Once you’ve caught your Santa Pikachus in Pokémon Go, why not grab a holiday wallpaper with our favorite Ice-type Pokémon? From the majestic Lapras to the dark and mysterious Weavile, Pokémon from all over the frozen north come together to celebrate the holiday and light up the winter’s night under the glow of the aurora.
Artic Christmas by arkeis-pokemon
Candy Cane Queen

It’s easy to go overboard with candy canes at Christmas time, but this remarkable Candy Cane Queen by littlepaperforest does it right. The stripes aren’t everywhere, and they’re not garish, they’re lovely accents instead. And her crown of holiday leaves and flowers is a stroke of genius. Her figure is also something to make me envious every time I look at my home screen, so there’s that, too.
Candy Cane Queen by littlepaperforest
MERRY CHAOSMAS EVERYPONY by BlackGryp0n

A lot of shows do Christmas wrong. Really, really wrong in some cases (cough cough Star Wars cough cough). But My Little Pony does it right. Not only is Hearth’s Warming Eve its own holiday with historical and magical importance, but it makes the carols and merrymaking an important part of the holiday. Of course, Discord’s take on Hearth’s Warming Eve is bound to be a bit more… lively. And really, as chaotic as Christmas is, it should be Discord’s holiday.
Now if you’ll excuse me, I need to go sing Pinkie’s Present at the top of my lungs.
MERRY CHAOSMAS EVERYPONY by BlackGryp0n
Update December 2017: We’ve added more holiday wallpapers for more holiday cheer!
Grammarly virtual keyboard is now available on Android
Grammarly’s virtual keyboard for Android is here to help make you a better writer.

If you find yourself doing any extensive amount of writing on your computer, chances are you’ve used Grammarly before. Grammarly is a tool that works with your web browser and word-processing app of choice to help catch any grammatical or spelling errors your built-in spell-checker might miss, and as the News Editor for Android Central, it’s something that I use every single day to keep me from looking like a buffoon when quickly pounding away on my keyboard.

On December 13, Grammarly announced that it’s officially bringing its virtual keyboard over to Android after having first launched it on iOS at the beginning of November. Grammarly’s keyboard looks a lot like Gboard at first glance, but as you can see, there’s something special happening near the top of it.
As you type, Grammarly will continually look at what you’re writing and make suggestions for any spelling or grammatical errors that it finds. If you want to make the correction, just tap on the suggestion and it’ll automatically be added. Once the correction has been added, you can tap on it to get an explanation of where you went wrong.
Grammarly encrypts what you type to ensure maximum protection, and any sensitive data that’s inputted (such as credit card information or passwords) isn’t saved by Grammarly at all – something that AI.type failed to do.



There’s support for American and British English, and Grammarly says that it’s working on adding swipe input in the near future.
Gboard’s been my go-to Android keyboard of choice for a while now, but considering how much I use Grammarly on my laptop and desktop, you can be sure I’ll be giving it a shot.
AI.type virtual keyboard leaks personal data for 31 million Android users
Facebook nabs a trio of shows from Machinima for its Watch tab
The last time Machinima was in the news it was for paying “influencers” to say nice things about the Xbox One. Now, the company has inked a deal with Facebook for three new shows that’ll live on the social network’s Watch platform, according to Deadline. Those include a comic-book talk show (Action Figures), a gaming-themed dating show dubbed Co-op Connection and sketch comedy in the form of Dank/Fire. The deal says as much about Facebook as it does Machinima. For the former, it’s willing to spend that billion dollars on TV programming from just about any source. And for Machinima, this gives the network an even bigger platform to broadcast on than its 12-million subscriber YouTube channel.
Source: Deadline
Target acquisition brings same-day delivery to stores in early 2018
If it wasn’t already obvious that Target is nervous about Amazon, it is now. Target has acquired another same-day delivery startup, Shipt, for a cool $550 million. The deal should speed up Target’s plans to offer same-day delivery in its stores. It’s expecting to have the feature available in half its stores by early 2018, and in the “majority” of stores in time for 2018 holiday shopping. Your options will largely be limited to groceries, electronics, “essentials” and home products, but Target plans to offer products from all its major categories by the end of 2019.
Shipt will continue to run its own business, which relies on a network of personal shoppers to fill orders, independently of Target. That creates an unusual situation where Target partners could compete against each other — the big-box retailer also has an alliance with Instacart, creating a degree of overlap. This still gives you access to Shipt, however, and it’ll be free to pursue partnerships that serve its core business. Ironically, it has a partnership with Whole Foods… you know, the store owned by Amazon.
The deal should close by the end of December. Provided it does, Target will join Walmart in scrambling to offer same-day delivery services as quickly as possible. They’re clearly concerned that Amazon’s widening same-day delivery plans and integration with Whole Foods will cut directly into their core businesses, and they’re betting that you’ll stick with them if you can get comparably hasty service.
Via: TechCrunch
Source: Target
Netflix fires exec who said Masterson accusers weren’t believable
Last week, HuffPost reported that one of the women accusing actor Danny Masterson of rape was told by a Netflix executive that higher-ups at the company didn’t believe the women making accusations against him. Now, that exec has been fired.
The Netflix employee who made the statement was Andy Yeatman, Netflix’s director of global kids content. He coaches his daughter’s youth soccer team and two weekends ago, his team was playing against one on which a daughter of one of Masterson’s accusers plays. During the game, the victim approached Yeatman, asked him if he worked for Netflix and when he said he did, asked him why the company hadn’t done anything in response to the multiple accusations made against Masterson, who stars on the Netflix show The Ranch. Yeatman told the woman, who has chosen to remain anonymous, “we don’t believe them,” referring to the accusers. The victim then told him that she was one of the accusers.
Later, Netflix released a statement saying, “Mr. Yeatman’s comments were careless, uninformed and do not represent the views of the company. Further, he would have no insights into decision making on The Ranch. We are aware of the allegations against Danny Masterson and we are following the current investigation, and will respond if developments occur.”
Masterson was removed from The Ranch a day after the HuffPost story was published and has continued to deny any wrongdoing. As of this week, Yeatman has also been let go, about which Netflix has only said, “Mr. Yeatman is no longer employed at Netflix.”
Netflix’s slow response to accusations made against Masterson stood in contrast to the company’s fairly swift actions taken against Kevin Spacey and Louis C.K. When it finally did cut ties with Masterson, Netflix said, “After discussing with the producers, we’ve decided to write Danny Masterson off of The Ranch. Yesterday was his last day of work, and we’ll make new episodes in 2018 without him.”
Via: USA Today
Apple invests $390 million into Face ID and AirPod tech
Apple has made another investment with its Advanced Manufacturing Fund (AMF), awarding Finisar $390 million. If the name doesn’t sound familiar, its work will. Finisar is the company behind the iPhone X’s Face ID, Animoji and Portrait mode for selfies — all of which rely on vertical-cavity surface-emitting lasers (VCSEL). It’s also responsible for your AirPods’ proximity sensing tech. The investment means Finisar will set up shop in a 700,000 square-foot manufacturing facility in Texas in addition to expanding its research and development team.
For its part, Apple has committed to buying ten times more VCSEL components in the latter part of its fiscal 2017 than “were previously manufactured worldwide over a similar time period,” according to the tech juggernaut. While ambiguous, this gives a few hints that Apple is investing to meet demand for two of its newest products. It also more or less confirms the rumors that Apple had issues meeting demand for iPhone X’s Face ID parts.
The first time Apple opened the AMF checkbook it was for a $200 million investment in Gorilla Glass maker Corning this past May.
Source: Apple (1), (2), BusinessWire
Modern copyright law can’t keep pace with thinking machines
This past April, engineer Alex Reben developed and posted to YouTube “Deeply Artificial Trees,” an art piece powered by machine learning that leveraged old Joy of Painting videos. It generated gibberish audio in the speaking style and tone of Bob Ross, the show’s host. Bob Ross’ estate was not amused, subsequently issuing a DMCA takedown request and having the video knocked offline until very recently. Much like Naruto, the famous selfie-snapping black crested macaque, the Trees debacle raises a number of questions about how copyright law, from the Copyright Act of 1976 to the DMCA, and the fair use doctrine should be applied to a rapidly evolving technological culture, especially as AI and machine learning techniques approach ubiquity.
Questions like, “If a human can learn from a copyrighted book, can a machine learn from [it] as well?,” Reben recently posited to Engadget. Much of Reben’s art, supported by non-profit Stochastic Labs, seeks to raise such conundrums. “Doing something that’s provocative and doing something that’s public, I think, starts the conversation and gets them going in a place where the general public can start thinking about them,” he told Engadget.
To that end, Reben creates projects like Let Us Exaggerate, “an algorithm which creates gobbly-gook art-speak from learning Artforum articles,” Synthetic Penmanship, which accurately mimics a person’s handwriting, Korible Bibloran, an algorithm that generates new scripture based on its understanding of the Bible and Koran, or Algorithmic Collaboration: Fractal Flame, which blurs the line of creatorship between human and machine.
“I start with a program which generates phrases for me to think about, for example ‘obtrusive grass,’” Reben explained. He then thinks about the phrase while an EEG and other sensors record his reactions. That data is then fed into an art generating algorithm to create an image. “The digital version uses IFS fractal generation where the color palette is chosen by the computer from the phrase used in Google image search results,” he said, “then displays different versions for me to choose from by measuring my reactions to the images.”
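For readers curious what “IFS fractal generation” actually looks like, here’s a generic chaos-game sketch in Python using the classic Barnsley fern coefficients. To be clear, this isn’t Reben’s code – fractal flames layer non-linear transforms and color accumulation on top of this – but the core loop of repeatedly picking a random affine transform and applying it is the same idea.

```python
# A generic chaos-game IFS sketch (the classic Barnsley fern), purely to
# illustrate what "IFS fractal generation" means; it is not Reben's code.

import random

# Each transform: (a, b, c, d, e, f, probability) for (x, y) -> (ax+by+e, cx+dy+f)
TRANSFORMS = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def iterate(n=50_000):
    """Repeatedly apply a randomly chosen affine transform to the current point."""
    x, y, points = 0.0, 0.0, []
    weights = [t[-1] for t in TRANSFORMS]
    for _ in range(n):
        a, b, c, d, e, f, _p = random.choices(TRANSFORMS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

def render_ascii(points, cols=60, rows=30):
    """Crude ASCII plot so the sketch needs no graphics libraries."""
    grid = [[" "] * cols for _ in range(rows)]
    for x, y in points:
        col = int((x + 2.75) / 5.5 * (cols - 1))  # fern spans roughly x in [-2.75, 2.75]
        row = int((1 - y / 10.0) * (rows - 1))    # ...and y in [0, 10]
        if 0 <= col < cols and 0 <= row < rows:
            grid[row][col] = "*"
    return "\n".join("".join(r) for r in grid)

print(render_ascii(iterate()))
```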

Algorithmic Collaboration: Fractal Flame – Disobedient Strawberry
New technology running afoul of existing copyright law is nothing new, mind you. “In the 1980s, US Courts of Appeals evaluated who ‘authors’ images of a videogame that are generated by software in response to a player’s input,” Ben Sobel, an Affiliate at the Berkman Klein Center for Internet and Society, Harvard University, told Intellectual Property Watch in August. “IP scholars have been writing about how to treat output generated by an artificial intelligence for at least 30 years.”
One of the big sticking points between AI and copyright law centers on how these systems are trained, specifically the process of machine learning. Most such systems rely on vast quantities of data — images, text, or audio — that enable the computer to discover patterns within them. “Well-designed AI systems can automatically tweak their analyses of patterns in response to new data,” Dr. Amanda Levendowski, a clinical teaching fellow at New York University Law School, argues in her forthcoming Washington Law Review study. “Which is why these systems are particularly useful for tasks that rely on principles that are difficult to explain, such as the organization of adverbs in English, or when coding the program would be impossibly complicated.”
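To make the “discovering patterns” point concrete, here’s a toy example of that kind of statistical learning, shrunk down to a word-bigram model trained on two sentences. The corpus and function names are purely illustrative; real systems train on millions of documents, which is exactly where the copyright questions bite.

```python
# A toy word-bigram model: the simplest form of the pattern-learning the
# article describes. The training text is placeholder filler; in practice the
# corpora at issue are far larger and often copyrighted.

import random
from collections import defaultdict

corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the lazy dog sleeps while the quick fox runs"
).split()

# "Training" = counting which word tends to follow which: the model keeps
# these statistics, not the source text laid out as the author wrote it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=8):
    """Sample a short sequence from the learned word-to-word statistics."""
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate())
```

What the model retains is word-to-word statistics rather than the text as the author arranged it – a distinction that echoes the “non-expressive” uses Sobel describes below.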
Problems arise, however, when the datasets used to train AIs include copyrighted works without the permission of the rightsholder. “This is presumptively copyright infringement unless it’s excused by something like fair use,” Sobel explained. This is precisely the issue that Google ran into when it launched the Google Books initiative in 2005 and was promptly sued for copyright infringement.

Algorithmic Collaboration: Fractal Flame – Unrecoverable Discretionary Trust
In Authors Guild v. Google, the plaintiff argued that by digitizing and annotating some 20 million titles, the search company had violated the Guild’s copyrights. Google countered by arguing its actions were protected under fair use. The case was finally resolved last year when the Supreme Court declined to hear the Guild’s appeal, leaving a lower court’s ruling in favor of Google standing. “This is often because the uses are what some scholars call ‘non-expressive,’” Sobel told IPW. “They analyze facts about works instead of using authors’ copyrightable expression.”
Things get even stickier when AI is trained to create expressive works, like how Google fed its system 11,000 romance novels to improve the AI’s conversational tone. The fear, Sobel explains, is that the subsequent, AI-generated work will supplant the market for the original. “We’re concerned about the ways in which particular works are used, how it would affect demand for that work,” he said.

Synthetic Penmanship
“It’s not inconceivable to imagine that we would see the rise of the technology that could threaten not just the individual work on which it is trained,” Sobel continued. “But also, looking forward, could generate stuff that threatens the authors of those works.” Therefore, he argued to IPW, “If expressive machine learning threatens to displace human authors, it seems unfair to train AI on copyrighted works without compensating the authors of those works.”
It’s part of what Sobel calls the “fair use dilemma.” On one hand, if expressive use of machine learning isn’t protected by the fair use doctrine, any author whose work was used as even a single part of a massive training data set would be able to sue. This would create a major impediment to the further development of AI technology. On the other, Sobel asserted to IPW, “a hyper-literate AI would be more likely to displace humans in creative jobs, and that could exacerbate the income inequalities that many people fear in the AI age.”

Algorithmic Collaboration: Fractal Flame – Magnified Reassignment
These legal questions have implications beyond the defendant’s pocketbook. The rules around copyright influence the AI itself: if you can’t train your literary AI on copyrighted materials, you’ve got to look elsewhere, like the public domain. The problem is that many of those titles, written prior to the 1920s and largely by white Western male authors, are themselves inherently biased.
One such example is the Enron emails, which were released into the public domain by the Federal Energy Regulatory Commission in 2003. The dataset contains 1.6 million emails and carries very low legal risk, as Enron and its ex-employees aren’t in a position to sue anyone. However, the dataset is typically only used to train spam filters – largely because the emails are full of lies.
“If you think there might be significant biases embedded in emails sent among employees of a Texas oil-and-gas company that collapsed under federal investigation for fraud stemming from a systemic, institutionalized unethical culture, you’d be right,” Levendowski wrote. “Researchers have used the Enron emails specifically to analyze gender bias and power.”
Even more recent sources like those shared under Creative Commons licenses are not without at least a small degree of bias. Wikipedia, for example, is a massive trove of information, all of which is reproducible under its Creative Commons licensing, which makes it a desirable dataset for machine learning. However, as Levendowski points out, 91.5 percent of the site’s editors identify as male, which — intentional or not — may influence how information relating to women and women’s issues is presented. This, in turn, could influence the AI’s algorithmic output.

Algorithmic Collaboration: Fractal Flame – Fair and Square
Frustratingly, solutions for these issues are hard to come by. Just as with the “timeshifting” fair use issues presented by DVRs or Naruto the macaque’s selfie, each new platform presents unique technological wrinkles and legal ramifications which must slowly wind their way through the court system for interpretation and guidance.
Sobel points to a number of competing proposals for who should hold the rights, such as the person who operated the computer or the people who developed the program. “Perhaps there will be no copyright whatsoever,” he said, which was the case with Naruto’s selfie. “The more broad-ranging proposals involve giving rights to the computer, which I think would require more dramatic reform in order to recognize an algorithm as a rights-holding entity. I think that that’s a bit further afield. But to be honest, yeah, there’s no great answer.”



