While the original Red Dead Redemption was about a wistful cowboy trying to make things right, its sequel sounds a bit different. In Red Dead Redemption 2, you’re playing as Arthur Morgan, an outlaw who, along with his Van der Linde gang, seems out to rob pretty much anyone who crosses his path. The biggest takeaways from the trailer (embedded below) are just how good Morgan’s face looks and the vastness of the wilderness. And, really, that’s about it for the 90-second clip. Spring 2018 isn’t that far away though, and hopefully the next trailer will either dive further into Morgan’s backstory or show off actual gameplay.
John Howard Griffin was perhaps the best-known race swapper of the 20th century.
In 1959, the white Texan writer went undercover as a black man in the Jim Crow South. Griffin spent days under a tanning lamp, took medication used to treat the skin pigmentation disorder vitiligo and shaved his head but otherwise spoke and acted exactly as he had as a white man. In assignments for the African-American magazine Sepia and later his acclaimed book Black Like Me, Griffin aimed to convey to white Americans what it was like to be the other. This was before Rachel Dolezal, Iggy Azalea or any of the Kardashians; blackness for him was not a cultural adornment but a target on his back.
After his transformation, Griffin was taken aback by how quickly his sense of self adjusted to a new identity. Peering in a New Orleans bathroom mirror for the first time after his temporary metamorphosis, he reflected:
“All traces of the John Griffin I had been were wiped from existence. Even the senses underwent a change so profound it filled me with distress. I looked into the mirror and saw nothing of the white John Griffin’s past. No, the reflections led back to Africa, back to the shanty and the ghetto, back to the fruitless struggles against the mark of blackness. Suddenly, almost with no mental preparation, no advance hint, it became clear and permeated my whole being.”
People around him flipped their behavior just as starkly. Black acquaintances who knew he was white lapsed into discussing “our struggle” with him; white women on the bus shot “hate stares” his way.
Over six weeks of traveling through Louisiana, Mississippi, Alabama and Georgia, Griffin conveyed to white people a truth that African-Americans had long been stating and that still holds today: People of different races in the US inhabit different realities.
Everything from police interactions to job applications can be experienced differently according to your race. As the comedian Dave Chappelle said in an interview earlier this summer, “If you had some glasses that someone could put on just to see the world how you saw the world, it’d be probably fucking terrifying.”
Yet as Chappelle made those comments in a New York radio studio, it turned out Courtney Cogburn was working on something like that just uptown.
An assistant professor of social work at Columbia University, Cogburn and her team have been creating a project named 1000 Cut Journey in collaboration with Stanford’s Virtual Human Interaction Lab, which is led by Jeremy Bailenson. Showcased publicly for the first time today at the Brown Institute for Media Innovation, the experience uses an HTC Vive virtual reality headset to put users in the body of a black man, Michael Sterling, at four different stages of his life. Titled in reference to the gruesome torture method of death by a thousand cuts, each scene in the experience is a composite of real-life stories — garnered from the media and personal accounts — that reveals the myriad ways race infiltrates one’s quotidian experiences.
Essentially, it’s a first-person simulation of the racism faced by a black male, for non-blacks to experience temporarily. Although tamer than Griffin’s grueling social experiment from half a century ago, its aims are similar — except it’s contained entirely within a headset.
When 1000 Cut Journey starts, you’re seven years old and in the first grade. You begin by looking into a mirror and, like Griffin in that Louisiana bathroom, acclimatize to your new body.
Soon, it’s free time in your modern but neutral-looking California elementary school, and three kids are playing with blocks on the floor as a teacher tutors some other students in math across the room. You physically crouch to join the kids making robots and fireballs with their blocks. “Mike is the black fireball,” one of them says, to giggles from the other two. “Yours would be the scariest because yours is black and black is always the scariest.”
One boy starts throwing blocks at the others’ models. As soon as you join in and fling a block — however tamely with your childlike hands — they stop abruptly and stare. The teacher, a white woman, stands up, arms folded.
While your character and environment are 3D models, the other humans are all live-acted, and the teacher looms over you. “Mike, look at me,” she orders. “I shouldn’t have to tell you not to throw things in the classroom. You’re being dangerous, and you’re going to hurt someone.” The next scene cuts to you in the corner, in front of another mirror reminding you of your new body, while your three classmates continue to play in the distance, seemingly oblivious to your embarrassment.
There is little interactivity in the linear experience but this, too, reflects racism: the unequal treatment you receive through little choice of your own.
You empathize with Michael’s childhood innocence and feel the hot shame of being singled out for scolding by the teacher. The early demo I tried lasted only a few minutes, and the simple 3D models were redolent of PlayStation 2-era graphics, but there was still a feeling of presence. There is little interactivity in the linear experience but this, too, reflects racism: the unequal treatment you receive through little choice of your own.
Most significantly, the scene revolved around the kinds of small incidents — being stereotyped as aggressive, treated unfairly — that may sound easy to dismiss in third person but when experienced first-hand in VR have a visceral impact.
“You may be affected even if you don’t notice that it’s happened,” said Cogburn. “Even if you can’t articulate to me what it was about the school scene that made you feel uncomfortable, we know what happened in the scene that may have triggered that response.”
Virtual reality seems uniquely suited to revealing the hidden texture of implicit bias. In this context, the term — also called unconscious bias — refers to the subconscious reactions we have to people of other races. It’s not about blatant racial epithets but subtle, socially conditioned beliefs that, for instance, black people are a greater physical threat or even feel less pain than white people. These biases often emerge in the open through the sorts of microaggressions that — because they’re unintentional and don’t necessarily align with an individual’s professed beliefs — can easily go unnoticed by one person even as they’re painfully obvious to the other.
Traditional three-act storytelling in films, books or the news homes in on specific moments of tension and drama, involving a protagonist, antagonist, conflict and resolution. Yet communicating implicit bias is less about clarifying moments of moral rightness, and more about showing the quiet, coded moments of racial bias that shape one’s worldview and sense of justice over time. Virtual reality doesn’t need to put a spotlight on a racial flashpoint nor list a series of facts about inequity to have an impact. By immersing you in another person’s experience, you understand his hurt intuitively.
After the elementary school scene, 1000 Cut Journey places you in Michael’s shoes at age 15 having his first encounter with the police; age 30, interviewing for an elite corporate job; and finally age 50, looking back at his life. (These scenes weren’t yet ready in the build that I tried.) Taken as a whole, they have the potential to reflect the structure of racism: not isolated incidents but a pattern of little events that start in a child’s formative years and accrete over the course of a lifetime, creating a window on the world that is fundamentally different from those in the racial majority.
“What I really want is for people to come out saying, ‘I thought I understood this but I don’t.’”
Someone who has lived through experiences like a run-in with police could find revisiting them in VR disturbing. But similarly to Black Like Me, this project is made primarily for those who have lived a different life.
“What I really want is for people to come out saying, ‘I thought I understood this but I don’t.’ [For] whites, in particular, I would like for that to be the reaction,” said Cogburn. “And for blacks and perhaps other people of color who go through the experience to come out saying, ‘That’s it exactly.’”
Virtual reality has long been touted as a vehicle for empathy through body swapping, and crossing race barriers has been part of several projects.
Nonny de la Peña’s One Dark Night reconstructs the shooting of Trayvon Martin entirely from public records and 911 calls, showing what every witness saw or didn’t see. Janicza Bravo’s Hard World for Small Things is a short, powerful film that places the viewer amidst a race-related accidental shooting in LA. Earlier this year, Australia’s public broadcaster SBS released a VR experience of racial abuse on a bus — first from a bystander’s perspective, then from the victim’s.
1000 Cut Journey, however, is unique in its efforts to bridge narrative and social science. The project builds on specific research developments over the past five years in what’s termed embodied cognition. One key insight from the field is precisely what the race-swapping Griffin learned the hard way: When you change the body, the mind quickly follows.
In one study, white subjects were set up with a prosthetic dark-skinned hand. A researcher stroked the hand with a paintbrush while simultaneously stroking the participant’s hidden real hand. The procedure took only a few minutes. Yet participants’ scores dropped on the implicit association test, a common measure of unconscious bias. In another experiment, brushing a white person’s face with a cotton bud as he or she watched a video of a black person being brushed the same way led to subjects reporting that the black person’s face looked more like their own.
“You can very easily fool the brain into thinking that a different body belongs to you.”
“You can very easily fool the brain into thinking that a different body belongs to you,” said Manos Tsakiris, a professor of psychology at Royal Holloway, University of London, who co-published these papers. “[Including] a body that has a different appearance than yours.”
Subsequent studies applied these techniques to VR, putting mostly white subjects in dark-skinned bodies and seeing how their implicit bias decreased. Interestingly, the VR experiences themselves did not involve racially charged scenarios and contained no lessons on racism. Participants were simply asked to follow a tai chi instructor’s movements, or complete a photo-description exercise, all the while conscious of their new bodies. The brain accepted its new skin and rearranged its biases and attitudes accordingly.
In contrast, an earlier study by Stanford’s Bailenson explicitly asked white participants to imagine a day in the life of a black character, and then to act out a VR job interview as if they were that person. Their implicit bias ended up worse than subjects who played a white person in their interview. It’s possible that instead of engendering empathy, participants relied on black stereotypes when told to imagine their world. An effective way to remove unconscious bias, it seems, is by showing not telling.
There are still open questions with this technology. How long, for instance, can a VR-induced reduction in racism last? (A 2016 study by the University of Barcelona’s Mel Slater showed the effect could persist at least one week.) What is the precise neural mechanism that takes place when we embody a different race?
The key issue, however, is whether changing attitudes in virtual reality can also change real-world behavior.
“As soon as you take off the head mounted display, you still find yourself in real reality,” said Tsakiris. “This is where behavior matters. And this is where behavior has significant consequences.”
The point could apply to many socially conscious virtual reality experiences, which don’t get far beyond the film festival circuit or the university lab. Yet one key application could be in diversity training. The NFL has had discussions with Stanford’s Virtual Human Interaction Lab about using VR to overcome race and gender bias. Meanwhile, Alexandra Ivanovitch, founder of the nonprofit Equality Lab, is creating a series of first-person VR trainings with the Police Foundation, with hopes of bringing them to law-enforcement departments across the country. The idea is to allow police to virtually switch places with the communities that they are protecting — such as people of color — and therefore train them in racial sensitivity in a less didactic form than mandatory lectures.
“If we can actually have positive results in that type of environment, where the culture is historically not very favorable to racially sensitive programs, I think that we can make a tremendous impact across the board,” said Ivanovitch. “We can make it across a whole variety of social impact areas, like schools, hospitals, court rooms.”
Still, it’s doubtful any virtual embodiment researcher would suggest that VR alone will end racism. The way humans have carved themselves into factions based on our most superficial physical traits is part of a political, economic and cultural structure stretching back generations. The point of these projects in race swapping is for people to comprehend that structure better. It’s for a tech-enhanced degree of cross-cultural understanding that perhaps wasn’t possible before, shortening the distance between one mind and another.
“We blame individuals because of their choices and their behaviors, and we don’t think about history or policies that deliberately disadvantage particular people.”
“As I see it now — and this is not just for race, this is for any number of social issues, especially in the United States — we’re very individualized when we think about problems,” said Cogburn.
“We blame individuals for their plights. We blame individuals because of their choices and their behaviors, and we don’t think about history or policies that deliberately disadvantage particular people. It’s that type of logic that I would like to shift.”
Projects like 1000 Cut Journey are an illustration of how virtual embodiment can aid that shift. While virtual reality has already proved it can immerse us in fantastical narratives and games, experiences such as this show a far more socially compelling use: helping us understand the neighbors we walk past every day but hardly know. It’s not using VR to escape the world, but to see it more clearly.
There’s been a bit of hype surrounding the LG V30 phone; our senior editor for mobile, Chris Velazco, was very impressed with it, saying “it’s been ages since LG built a phone this good.” Now, we have some release information for you. It’s coming to all four US carriers and will be available starting October 5th. We’re still waiting on confirmation of that date from Sprint.
T-Mobile will have preorders starting at 5:00 am PT on October 5th; you can walk into a store and buy one on October 13th. Pricing is $80 down and $30 per month on a plan, or $800 at full retail. AT&T is offering DirecTV customers a buy-one-get-one-free deal if you add a line and use AT&T Next to purchase both phones; the full price there is $810. You can order online on October 5th or buy a phone in stores on the 6th. We still don’t have pricing information from Sprint or Verizon, but Verizon has confirmed it will have phones in store on the 5th, as well as online.
The LG V30 is the only phone that will currently work on T-Mobile’s 600 MHz network, which is available in Cheyenne, Wyoming, and Scarborough, Maine. It’s also compatible with AT&T’s “5G Evolution” tech, which is available in Indianapolis and Austin, Texas.
Source: T-Mobile, AT&T
It’s safe to say that people are eagerly anticipating the iPhone X; it represents a step forward in design and tech for Apple. But now, The Wall Street Journal reports that difficulties in manufacturing components crucial to Face ID could lead to significant shortages of the iPhone X.
The components are called Romeo and Juliet, and as their names suggest, they work together in Apple’s face recognition system. Romeo is the home of the projector that uses a laser beam to create a 3D map of the user’s face, while Juliet’s infrared camera reads that map. According to The Wall Street Journal’s sources, assembly of the Romeo module, with the challenge of incorporating its various parts, was taking longer than that of its Juliet counterpart. This means there are more Juliets than Romeos.
While one source assured The Wall Street Journal that things were back on track, this is a troubling development for the iPhone X. Initially, rumors swirled around possible shortages of the phone’s OLED display. Coupled with the Face ID component issues, this could mean shortages beyond those we traditionally expect surrounding a new iPhone launch. The iPhone X starts at $999 and will be available for preorder starting October 27th.
Source: The Wall Street Journal
When GoPro announced it was working on a drone, pretty much everyone thought that it’d have some sort of “follow” feature. It didn’t. But it had the required technology all along. Finally, today, Karma is being updated to unlock that feature, along with a few other goodies.
There are two new “auto-path” modes in today’s update: Follow and Watch. The former, as you’d imagine, allows Karma to follow you, but there’s a small concession: It only follows the controller, so you’ll need that with you whatever you’re doing. In reality, this is either not a problem, or highly impractical. If you want Karma to follow your car, it’s great. If you’re skiing in the backcountry, it’s a little less convenient. I predict this is a temporary measure, and we might see a GPS wearable accessory in the coming months. (Let’s hope!)
Watch sounds more like some sort of tripod mode. Send Karma up, tell it to “watch” you, and it’ll stay put, but rotate to keep the controller (or, rather, the person with it) in frame. This sort of mode is ideal for something like a skate park, where you’re moving within a closed area, and just want the drone to keep you in the shot. This same mode can also be found on AirDog, GoPro’s main rival in the action sport drone world.
Other goodies in the update include the option to add up to 10 different waypoints in the “cable cam” mode. This is actually incredibly useful, as it allows you to map out complex paths for Karma to follow in advance, so you can plan creative swooping shots. You can still rotate and tilt Karma as it goes, opening up many creative possibilities. One final small update gives Karma a skill that not many other drones have: the ability to look up. Drones that carry the camera directly below the body will always get the propellers or frame in the shot. Karma puts the GoPro at the front, so there’s nothing above it. Great for when flying under trees, or for shots where you want the camera to pan down to the horizon.
Some of these features might feel overdue, but it’s good to see that GoPro is still working on making Karma better. CEO and founder Nick Woodman has confirmed to Engadget that a second drone is in the works, so you can expect all of these updates, and hopefully some more, to be in the new model at launch (whenever that may be). The new Karma features are available today, so fire up your controller and connect it to your Wi-Fi network to snag them.
The European Union (EU) has proposed a raft of new measures to tackle online hate speech, telling social media companies that they can expect legal consequences if they don’t get rid of illegal content on their platforms. Despite companies such as Facebook, Twitter and Google pledging to do more to fight racist and violent posts, the European Commission says they’re not acting fast enough, and that it’s prepared to initiate a rigorous framework to hold them to account.
The proposed guidelines, which address issues such as the detection of hate speech, effective removal and the prevention of its reappearance, will act as a sounding board for the EU when it considers future instances of illegal content. If things go wrong for a company, the EU can use the guidance as proof that it’s not toeing the line. If the company repeatedly messes up, the guidance means the EU has the power to implement binding laws on the matter. Punishments could be severe — the EU is known for taking a tough stance against companies that don’t play by its rules. Earlier this year, for example, it ordered Google to pay out $2.7 billion in an antitrust ruling.
The guidelines also suggest that companies should “invest in automatic detection technologies”, which has raised concerns about censorship. Speaking to Fortune, Marietje Schaake, a Dutch liberal member of the European Parliament, said that automated measures would be “extremely dangerous”, and that “there can be no room for upload filters or ex-ante censorship in the EU”. She added that such measures could inspire worrying laws in technology-restricting countries such as China and Russia. However, EU justice commissioner Vera Jourova said, “We cannot accept a digital Wild West… if the tech companies don’t deliver, we will do it.” The EU, it seems, is running out of patience.
Traditional retail may be failing, but it’s giving way to tech-infused showrooms. Marie Claire, Neiman Marcus and Mastercard teamed up to showcase some of the concepts that will be driving that development in their New York City pop-up shop called The Next Big Thing. The store is open to the public every day until Oct. 12th, at 120 Wooster Street in trendy SoHo, and according to the invite, it’s “a first-to-market, hands-on retail pop-up experience bringing to life the newest innovations in fashion, beauty, entertainment, technology and wellness.”
Having recently covered what the future of retail would look like, I was curious to see some of the advanced technology I had heard about come to life. But after an hour-long visit, I left the store feeling underwhelmed by the lack of groundbreaking technology.
Stepping into the pop-up shop was less transcendental an experience than I had been expecting. There were no holographic displays, no robots to greet me, no screens that knew my name. Instead, it looks pretty much like a regular store. The area is split up into three “zones,” named after the magazine’s sections — Work, Play and Peak (health and fitness). Each section showcases clothes, accessories and gadgets related to its theme. In the Peak zone, for example, you’ll find workout clothes and sunglasses, and there’s even a “pelvic floor muscle exerciser” (a PG-13 way to describe a vibrator).
I loved all the goods on display: The clothes were stunning, and the skincare products were tempting. As a tech reporter, I was far less impressed by the range of gadgets available. There’s the Mighty Purse, which is really just a clutch with a built-in battery and charger, the Click and Grow smart herb garden kit, and the Bang and Olufsen Beoplay A1 Bluetooth speaker. For Marie Claire’s intended audience, though, who may be less concerned than I am about the best specs and the latest in tech, this seems like an appropriately stylish and trendy collection of devices.
Despite the unexciting selection of gadgets, it’s the presentation of these products that left me underwhelmed more than anything else. In addition to the standard racks with hangers for clothes, there are also counters displaying items, each accompanied by an iPad that shows you details like its dimensions, colors and price. It feels like shopping at home, except you’re standing in front of the actual purse in a physical store. Unlike shopping at home, you don’t have the luxury of time and a dozen open tabs to help you decide.
It’s also not that much more convenient to purchase something in the store: You have to hit “buy” on the tablet, which will summon an in-store employee to finish the transaction. That’s not as seamless as the setup at Amazon Go stores, where shoppers can simply pick up what they want and walk out.
For consumers this doesn’t seem helpful, but retailers could gain valuable insight and build a better shopping experience. Overhead cameras above these counters track how many people walk past the display, how many linger and for how long. The iPads also track what shoppers look at and interact with, so companies can understand where they’re losing and winning customers. It’s all pretty bare bones right now, but it hints at a hyper-personalized experience the industry expects will be the future of retail.
A more sophisticated example of blending technology with a highly personal experience is the row of three smart mirrors in the Marie Claire store. Featuring Clarins branding and products, the two-foot-tall mirrors take photos of shoppers, detect potential skincare concerns and recommend creams and serums to target those problems.
The idea isn’t new: We’ve seen many other companies implement smart mirrors to identify individual issues plaguing shoppers, like dry skin, fine lines, wrinkles, acne or blemishes. What’s more interesting is being able to eventually use these mirrors to try on different cosmetics, sunglasses or hairstyles. Of course, you can also do all that at home, but Marie Claire Vice President and Publisher Nancy Berger believes real-world shopping isn’t going away anytime soon.
Beyond the social experience, some retail setups can actually make the whole process fun. She shared a story of how her son, who normally doesn’t enjoy shopping, said he would go back to a store they had visited because they had a mock basketball court on-site to help customers try out different shoes.
One of the most fun parts of shopping in the real world is trying on clothes, but it can also get tiring quickly. To ease some of that frustration, interactive fitting room mirrors can get other colors and sizes to you, saving you from having to scour the store repeatedly. You’ll need to book a room with a store employee to hold it for your session. Inside, the mirror detects the RFID tags on the clothes you’ve brought in and lists them on-screen. Oak Labs, a startup that has already worked with Ralph Lauren to integrate these mirrors in some stores, opted for RFID tech to avoid using cameras. After all, you wouldn’t want anyone hacking into these systems and livestreaming what goes on while you’re trying on clothes.
The mirror will also recommend other pieces of clothing based on your preferences, and you can see on the screen how they match with the items you already have. When you’ve made up your mind, you can check out from the fitting room, elsewhere in the store or even after you leave. That’s because to pay for your clothes, you’ll need to download the Marie Claire app (as opposed to using the previously mentioned iPads). Oak Labs told Engadget that the app is meant for shoppers to specify pickup or delivery options. That’s still not a completely seamless experience, but a representative at the event said it’s possible to build NFC readers into the mirrors to enable contactless payments inside the fitting rooms.
Again, connected mirrors in fitting rooms aren’t new. Even the most intriguing demo at the store — a giant touchscreen on its windows that lets passersby peruse and buy what’s inside the shop — has sort of been done before. But there hasn’t been a store that’s brought together all of these new concepts in one place just yet. And as Berger told Engadget, this experience isn’t about showcasing bleeding-edge retail tech.
The Next Big Thing wants to educate Marie Claire’s intended audience about what they can expect to see in stores over the next year, not what might be here 10 years down the road. The demos aren’t groundbreaking, but the store provides a superficial glimpse of the technology that will change how we shop in the future.
In yet another surprising union of traditional retail and the tech world, IKEA has snapped up the on-demand service TaskRabbit, Recode reports. The deal makes a lot of sense. TaskRabbit has made a name for itself as the go-to service for random tasks — and that often involves moving and building furniture. An acquisition is just the next step for the two companies: IKEA is already relying on TaskRabbit as a partner in the UK, and it’s advertising the service for customers who need installation help in the US and the rest of the world.
While the deal gives IKEA a deeper foothold in the technology world — it just launched an AR-focused app for iOS 11 — it also helps the company solve some annoying problems for consumers. Nobody actually likes building the company’s wares, and its expensive and lengthy delivery options also seem archaic in 2017. In particular, IKEA needs to compete with Amazon, which can easily ship out furniture within a day and offer easy installation options.
IKEA says TaskRabbit will continue to operate as an independent company. Looking ahead, though, it’s unclear just how useful the rest of TaskRabbit’s offerings will be to the furniture company. You can also use the on-demand service to have people stand in line, pick up food from the grocery store or even take your junk to an electronics recycling center (something I’ve needed it for several times).
Source: Recode, IKEA
Facebook has touted its ability to provide infrastructure for disaster-stricken areas before, and now it’s dispatching a team to Puerto Rico to help reestablish internet connectivity. “We’re sending the Facebook connectivity team to deliver emergency telecommunications assistance to get the systems up and running,” Mark Zuckerberg writes in a status update.
That’s in addition to a $1.5 million donation to NetHope, a charity that works to improve internet access for disadvantaged countries and those affected by disaster, and the World Food Programme.
Zuckerberg also says Facebook is donating ads to “get critical information to people in the region on how to get assistance and stay safe.” The only way folks will see those ads is if they have internet access, so the moves go hand in hand.
Puerto Rico has been without power for the last month following Hurricane Maria’s devastation. Reuters reports 90.9 percent of the island’s cell towers are offline.
Source: Mark Zuckerberg (Facebook)
As Apple Music and Spotify continue to battle for subscribers, each service has released new personalized playlists that curate a specific selection of songs for each user. Apple Music’s latest addition was its “Chill Mix” this past June, and today Spotify has added onto its roster of personalized playlists with “Your Time Capsule.”
As explained by Spotify, Your Time Capsule will gather the 30 “most nostalgic tracks” from your teenage years and early twenties, resulting in a soundtrack that lets you revisit classic songs, albums, and artists from when you were younger. Any Spotify user below the age of 16 will not be able to access the new playlist.
Your Time Capsule follows the launch of Spotify’s “Your Summer Rewind” from June, which surfaced all of the songs that you listened to most during prior summers. Spotify said Your Time Capsule is similar, but is meant “to evoke powerful memories from your youth.” The new playlist will be at the top of Home or in the Decades section of the Spotify app’s Browse tab on iOS and Android smartphones.
Visit Spotify’s website to start generating your own version of the new playlist. Your Time Capsule is launching worldwide today for all appropriately aged Spotify users.