Doctor Otto Octavius may have been a power-mad scientist bent on world domination and the utter ruin of his nemesis, Spider-Man, but the guy had some surprisingly cogent thoughts on prosthetics development. And although mind-controlled supernumerary robotic limbs like Doc Ock’s still only exist in the realm of the Marvel Universe, researchers here in reality are getting pretty darn close to creating their own. And in the near future, we’ll be strapping on extra appendages whenever we need a helping hand — or a supplemental third thumb.
Supernumerary Robotic Limbs (SRLs) are not prosthetics. They are designed to supplement a person’s existing full complement of limbs as opposed to replacing the lost functionality of a missing one. That’s not to say that an amputee couldn’t use one of these devices, simply that they’re meant to be used as add-ons to human protuberances instead of stand-ins for them.
Don’t expect to toss cars around like throw pillows while wearing an SRL rig. Well, not initially at least (keep those fingers crossed, though). Rather, they’re built to help people perform tasks that would otherwise be irritating, difficult or outright impossible without them — like twisting a doorknob while carrying armfuls of moving boxes or holding a ceiling panel in place while you nail it in. So, rather than get yourself a helper, you could soon instead get a pair of MIT’s shoulder-mounted SRLs.
This 10-pound assistive device sits atop your shoulders like a horse collar so the load is distributed throughout your core instead of being concentrated on your shoulders. Each arm offers five degrees of freedom and can be equipped with a variety of attachments depending on your intended use.
As for control, the SRL relies on a pair of inertial measurement units strapped to the user’s wrists. It figures out how to best position itself through a demonstrative learning process, so depending on where your hands are, the robot will respond accordingly. It can also be pre-programmed to perform specific actions, like pushing the elevator call button for you if your hands are full.
These aren’t the only extra limbs being developed at MIT’s d’Arbeloff Laboratory. Teams there are working on SRL systems that can supplement your stance as well. Boeing helped develop one such device, which is worn around the waist and designed to help with aircraft assembly — specifically to prevent repetitive stress injuries. Its “arms” can either be used to help the assembly worker brace themselves while they work with a piece of fuselage or convert into “legs” that can support the worker’s body weight. The system can even act as a pair of robotic walking sticks, helping the wearer walk faster and with less effort.
Controlling another pair of appendages can prove to be a challenge. And though we are still at least four years from being able to install “wetware” and control these limbs with our thoughts, the d’Arbeloff Lab has an interesting solution: Command the SRLs by flexing your pecs.
The team, led by MIT PhD candidate Federico Parietti and engineering professor Harry Asada, developed a sensor vest that receives inputs from the user’s pectoral and abdominal muscles. Contracting your left pec moves the left appendage forward, squeezing your left abs moves it back, and the same holds on the other side. This lets users reposition the extra arms without having to stop what they’re doing with their hands.
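The pec-and-abs control scheme boils down to a simple mapping from muscle activations to arm commands. Here is a minimal Python sketch of that idea; the sensor names, the activation threshold and the command format are all assumptions for illustration, not the d’Arbeloff Lab’s actual implementation.

```python
# Hypothetical sketch of the muscle-to-command mapping described above.
# Sensor names, the threshold value and the command tuples are assumptions.

THRESHOLD = 0.6  # normalized activation level (0.0-1.0) that counts as a "flex"

# Each entry mirrors the article's description: pecs move the matching
# arm forward, abs move it back, mirrored on the other side.
MUSCLE_COMMANDS = {
    "left_pec":  ("left_arm",  "forward"),
    "left_abs":  ("left_arm",  "back"),
    "right_pec": ("right_arm", "forward"),
    "right_abs": ("right_arm", "back"),
}

def interpret(activations):
    """Map raw sensor readings (muscle name -> 0.0-1.0) to SRL commands."""
    commands = []
    for muscle, level in activations.items():
        if level >= THRESHOLD and muscle in MUSCLE_COMMANDS:
            commands.append(MUSCLE_COMMANDS[muscle])
    return commands
```

Flexing the left pec and right abs together, for example, would yield commands to move the left arm forward and the right arm back, while leaving the user’s own hands free.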
Then again, if you’re not entirely up to having electrodes affixed to your torso, you could always just control your second pair of hands with your feet, like the research team from Keio University and the University of Tokyo does with its “MetaLimbs” system. Clenching your toes causes the partnered robotic arm to make a fist, while lifting your foot raises the corresponding arm. The system is primarily designed to assist amputees, though it could eventually prove a boon to multitaskers as well.
The same lab is also toying with a simpler SRL design, dubbed the Aucto, that uses a Granular Jamming Gripper instead of a more complex actuated robot claw. A GJG is essentially a balloon filled with coffee grounds; as the name suggests, it holds objects by pressing itself into their crevices and then solidifying when the air is vacuumed out of the balloon. This design reduces the arm’s required degrees of freedom from six to three, since you don’t need a complicated robot wrist to align the gripper with whatever it’s gripping.
Of course, sometimes you don’t need a helping hand, just a helping finger… or two. Luckily MIT’s still got you covered. Researchers there developed a robotic glove with a pair of long robotic fingers on either side. “You do not need to command the robot, but simply move your fingers naturally,” professor of engineering Harry Asada said in a statement. “Then the robotic fingers react and assist your fingers.”
The glove’s actions are guided by an algorithm that has been trained on a set of specific movement patterns, which inform it as to what the wearer is trying to do. That in turn enables the glove to most effectively position itself to support whatever you’re grabbing at. This glove has been in development since 2014 and remains pretty bulky, though the team hopes to eventually miniaturize the mechanism down to the point where it can fit in a wristband and then pop out when needed so you aren’t walking around town with 12 fingers.
And if you don’t mind a few extra fingers, why not strap on another thumb? Developed by Dani Clode, a graduate student at the Royal College of Art in London, the Third Thumb is a 3D-printed digit powered by a wrist-worn actuator and controlled by a Bluetooth-connected pressure sensor located under your foot. Stepping on the sensor, much like pressing a piano pedal, causes the Third Thumb to contract. With it, users can play cards, cook and cover more of a guitar’s fretboard than they could naturally. Unfortunately, this device is still only a prototype, so don’t expect to be shredding any DragonForce tunes with it in the near future.
These devices, though many are still just prototypes, offer exciting possibilities for human augmentation. Whereas gene-editing techniques like CRISPR are likely to remain ethically tenuous for the foreseeable future, SRLs carry no more stigma than that of conventional prosthetics. In the near future, we may well find ourselves strapping on an extra arm or leg when we need a hand, rather than plying friends with beer and pizza in exchange for their help.
We were pretty impressed with BlackBerry’s newest smartphone, the KEYone, especially given how long the company had gone without a serious contender on the market. But it seems the US version carried by Sprint has brought along something else from the past: bloatware. Users discovered that certain apps they’d deleted kept reappearing on their phones. Sprint is reportedly working on a fix, but there’s a drastic workaround in the meantime: Delete the whole software launcher.
To be clear, this problem only affects Sprint versions of the KEYone. That’s because the carrier loaded each BlackBerry with Sprint Mobile ID, which automatically installs sponsored apps — some popular, like Uber and Facebook, others not — when users first buy the device. Except the latest versions of the service are buggy, periodically reinstalling those apps.
A software update is coming to address the issue, Sprint told Phone Arena, but affected users can try an aggressive fix in the meantime. Go to Settings > Apps > BlackBerry Launcher, tap ‘View Details’ to open it in the Play Store, and uninstall it. Rebooting the device should reinstall a factory-fresh version of the launcher, which shouldn’t have the issue.
We’ve reached out to Sprint to confirm the issue and workaround steps and will update when we hear back.
Via: The Verge
Source: Phone Arena
Google is capitalizing on the solar eclipse hype to tease out the latest version of Android. On Monday at 2:40 PM Eastern, the internet juggernaut says we’ll “meet the next release of Android and all of its super (sweet) new powers.” Given that the final beta shipped last month, this could mean a few possibilities: either the name will be revealed (my money is on Oreo), or we’ll learn the update’s release date. After months of previews and non-final builds, at least we won’t have to wait much longer to find out when we’ll actually be able to download and install the new OS.
What does Android O entail? Decreased load times, better battery life and a lot of onboard AI features like Google Lens (a visual search app), copy-less pasting and a picture-in-picture mode, among other things. None of them immediately call Nabisco’s trademark cookie to mind, but hey, a guy can dream, right?
A bird? A plane? No, it’s #AndroidO, touching down to Earth for #Eclipse2017 & bringing super (sweet) new powers: https://t.co/7nslzmxar3 pic.twitter.com/MFxHxUdiia
— Android (@Android) August 18, 2017
If you’re a Google Home user without a paid Spotify account, your use of the service on Home has been limited. Up until now, Spotify has only integrated with Home for paid accounts. But at I/O, Google announced that would change; free Spotify users would be able to stream their library to Home. Now the integration appears to be live for US users: free-tier Spotify users can stream their music to Google Home.
Spotify confirmed to Engadget that the connection is now available for non-paying users and the streaming service’s free tier is currently showing as supported on Home. Deezer, a streaming music service that’s relatively new to the US market, is also now available on Google Home.
It’s worth noting that Spotify’s integration with Google Home differs depending on which tier of Spotify service you have. You can see a full list of commands for each tier at Google’s support page, but while Spotify Premium users can play specific content on demand, free users are limited. You cannot listen to specific songs, artists, albums or personal playlists; instead, you can listen to one of Spotify’s radio stations inspired by the specific song, artist or album you want to hear. You can, however, listen to Spotify’s curated playlists.
It’s not ideal, but Spotify has been enforcing its tiers more strictly as it tries to renew licensing agreements and negotiate favorable royalty terms. While the company had long been resistant to distinguishing between free and paid tiers, it now looks to be chipping away at the services provided to non-paying customers.
Update: This post has been updated with confirmation from Spotify.
Via: Android Police
When we launched the Alternate Realities grant program in May we had no idea what to expect. We saw a need for funding in the arts happening at just the time when new media like AR and VR were starting to go mainstream. So, with support from our parent company, Oath, we set out to fund five immersive art projects that push the limits of storytelling through emerging technologies. The response was overwhelming. Proposals came from as far away as Iran and Australia and ranged in discipline from theater to fashion, documentary to animation. There were multi-million dollar VR productions, animated shorts and escape rooms. (SO. MANY. ESCAPE. ROOMS.)
We received more than 300 applications, which we narrowed to a pool of 80. Those projects were then presented to our selection committee, a group of four technology, art and entertainment tastemakers (more on them here), each of whom recommended five projects to Engadget’s editorial leadership based on their ability to address a short list of predefined criteria.* Engadget’s editorial leadership made its final selections based on those recommendations.
In the end, we chose five projects that represent the true potential of art and technology as a unified force. We’ll see humans and flamingos come together in an interspecies augmented-reality dance-off, relive America’s first reported alien abduction in VR, and give birth to new life forms by way of an interactive Cosmo-style quiz. Yes, things are going to get weird. Our grantees, like their projects, are a diverse group working across disciplines. There’s a TV heartthrob, a rap historian and the founder of the Stupid Hackathon.
Creating art through technology isn’t cheap, but we strongly believe it’s important to our evolution. We couldn’t be happier to be supporting the arts at a time when funding is so critical. Thank you to everyone who submitted, nominated and participated in this program. The projects will debut at the first-ever Engadget Experience, a one-day event exploring the future of creativity at the historic United Artists Theatre at the Ace Hotel in downtown Los Angeles on Tuesday, November 14th, 2017.**
You can find more information or buy tickets to the event here. And now, the grantees.
Dance with flARmingos
Dance with flARmingos is a mixed reality experience that features an interspecies dance between humans and flamingos, and pays homage to the flamingo — a consummate showman and embattled victim of environmental neglect — by staging kinship from an ethical distance. To Kristin Lucas, this is an exercise in going beyond a human-centered worldview into a more fluid ecological discourse, through the use of technological embodiment and sensory play.
Kristin Lucas is an interdisciplinary artist who pairs the intangible with the uncertain in experiential works that are performative and social, circuitous and bittersweet, and that lie somewhere between reality and “reality.” Her work has been presented nationally and internationally, and appears in collections of major institutions, including the Dia Center for the Arts and the Museum of Modern Art. She is represented by Postmasters and Electronic Arts Intermix (EAI) in New York and And/Or Gallery in Los Angeles and has been featured in Art in America. Lucas earned a BFA from the Cooper Union School of Art and an MFA in art practice from Stanford University. She lives between Queens and Austin, where she serves as studio art faculty for the department of art and art history at the University of Texas at Austin.
Regine Basha is the Residency Director at Pioneer Works in Red Hook, Brooklyn. For two decades, Basha has worked internationally as an independent curator of contemporary art, writer and radio producer (bashaprojects.com) and is often working closely with artists to explore innovative models of dissemination and alternative forms of direct public engagement. Her exhibitions have taken place in public spaces, private homes, heritage buildings, and within large abandoned heritage sites. Basha was the recipient of the Robert Rauschenberg Curatorial Residency at Captiva in 2014.
Tommy Martinez is a New York City based artist and technologist. As coordinator of the Virtual Environments Lab at Pioneer Works, Martinez facilitates a residency and research program focused on emerging technologies in media art. He has performed, exhibited, and served as a collaborator and technical consultant on a wide range of projects worldwide.
Thomas Wester works as an independent creative and technical director. Over the last 15 years he has blended digital and physical to create meaningful interactive experiences. His ever curious, ever inquisitive nature results in a wide and deep knowledge of both the creative and technical process with regards to producing for the interactive medium, specifically in the physical space. He has worked with The Royal Shakespeare Company, The National Archives, Library of Congress, Hermès, Coca Cola, Target, MoMA, MFA Boston and exhibited at Tribeca and Sundance Film festivals.
Ben Purdy is a creative technologist focusing on the application of software and hardware for use in all manner of interactive projects. After a decade of corporate software development, he transitioned into the creative industry, eventually working as a Technical Director at Instrument. In 2014 he founded Glowbox, an interactive technology studio in Portland, Oregon. Ben is a lifelong advocate of creative curiosity and technical exploration.
Support has been provided in part through an AR/VR Artist Research Residency co-organized by Oregon Story Board, Upfor and Eyebeam; Harvestworks Digital Media Arts Center Artist-in-Residence Program; Yafo Creative/Print Screen Festival Digital Arts Residency; BAU Institute Arts Residency at Camargo Foundation, Cassis; and a Pioneer Works Technology Residency; and through the University of Texas at Austin Department of Art and Art History and College of Fine Arts.
Dinner Party is a virtual-reality thriller based on the true story of the Betty and Barney Hill UFO-abduction incident, the first nationally known UFO abduction in American history. After an inexplicable nighttime encounter, the Hills, an interracial couple living in 1960s America, sought hypnosis to recover memories of what they experienced. Upon waking from hypnosis, the Hills had no conscious recollection of what they’d said. But their account was captured on tape. Frightened to listen alone, they played the tapes for friends at a dinner party. What the tapes contain will threaten their marriage and raise troubling questions about race and perception that are as relevant today as they were in the 1960s.
Angel Soto is at the forefront of virtual reality, directing and supervising VR content for RYOT News. His VR short Bashir’s Dream premiered at Sundance 2017, was screened at Cannes, and named one of Time Magazine’s “Five Virtual-Reality Films You Should Experience Right Now.” In 2013, he won the prestigious Cannes Lion. He recently completed his first feature film, La Granja, which premiered at Fantastic Fest and competed in festivals around the world. His latest film is the documentary short, El Pugil, which made its world premiere at the Tribeca Film Festival.
Charlotte Stoudt is a writer-producer on House of Cards. Previous television work includes six seasons of Homeland and development for Amazon Studios. Her VR projects have been workshopped at the Venice Biennale VR Lab and Sundance New Frontier Story Lab. As a journalist, she wrote on politics and culture for The Village Voice, LA Times, Variety and NPR.
Saschka Unseld is a German-born director and writer who cofounded the Academy Award–nominated animation studio Soi, where he directed and produced numerous award-winning shorts and commercials before joining Pixar Animation Studios in 2008. During his six years at Pixar, he worked on Toy Story 3, Cars 2 and Brave, and wrote and directed the 2013 short film The Blue Umbrella. In 2014, he co-founded Oculus Story Studio to help explore the future of VR storytelling and won the first Emmy for original VR content for Henry. His latest VR experiences, the Emmy-nominated Dear Angelica, and his independent VR dance project, “Through You,” both premiered at Sundance 2017.
Laura Wexler is a Baltimore-based author, screenwriter and producer who was recently selected for the Sundance Institute’s 2017 New Frontier Lab. She’s the author of a nonfiction book about an unsolved mass lynching, and journalism published in the Washington Post, The New York Times and elsewhere. She has developed for Amazon and is the co-founder and co-producer of The Stoop Storytelling Series.
Mapper’s Delight is a cultural tale representing worlds, experiences and gameplay told through the most-listened-to musical genre on the planet. Part explorer, part cultural critic, part archaeologist, part DJ, the Datanauts of Mapper’s Delight use sight, sound and touch to investigate the global distances traveled by the lyrics contained in each rap artist’s career while exploring the secret flows of hip-hop’s spacetime through a panoptic interface. The exhibit immerses the viewer in an alternate experience of reality by creating a viewpoint above this world, combining two configurations of space and time: the geographic references in the lyrics, and the viewer’s experience of assessing the visualization of that travel, something typically reserved for rappers or for those who perform close, academic readings of rap lyrics.
Tahir Hemphill is a designer, creative technologist and educator based in New York City. Hemphill’s practice investigates the role systems play in the generation of form and the role that collaborative knowledge production plays in the resilience of communities. Hemphill is influenced by scientific work that pushes investigation to artistic limits and artistic work that pushes repetition toward scientific method. Over the past 20 years, this productive tension between art and technology has been fueled by his reverence for scientific methodologies as well as his irreverent tinkering with them. Since 2010, Hemphill has been operating the Rap Research Lab, a creative technology studio that explores rap as a cultural indicator through educational, editorial and creative interrogations.
David A.M. Goldberg is an accomplished Hawaii-based writer, teacher, programmer and media developer who has used a lifelong interest in art, culture and technology to transform the means by which people access, assess and organize knowledge. Goldberg’s cultural lens was cut from a matrix of liberal arts and hard science. Early on, he spotted profound reiterations of America’s best and worst cultural and social practices in the digital context of video games, chat rooms, mailing lists and the early World Wide Web. That lens was polished by a commitment to writing about these changes, teaching others to recognize them, and lecturing at universities such as UC Santa Cruz, USC, CCA, Otis and Columbia.
Nick Fox-Gieg is an animator and creative technologist based in Toronto. His film The Orange won the jury prize for Best Animated Short at SXSW 2010. His films have also screened at the Ottawa, Rotterdam and TIFF film festivals, at the Centre Pompidou and on CBC TV. Fox-Gieg was awarded an Eyebeam Fellowship in 2012, a Fulbright Fellowship in 2006, and has received media-arts grants from Bravo!Fact, the Canada Council for the Arts, and the arts councils of Ontario, Pennsylvania, Toronto and West Virginia. He holds an MFA from the California Institute of the Arts and a BFA from Carnegie Mellon University. Most recently, he’s been working on virtual-reality projects at Framestore and Google Creative Lab.
Untrained Eyes is a conceptual technology project that takes its inspiration from observing the explicit bias that can be found during everyday image searches within Google and other public-image archives. This interactive installation will expose the problems of our current machine-learning trajectories by revealing the hidden challenges of creating artificial-intelligence algorithms. When viewers enter the installation, they will encounter a salon-style hanging arrangement of dozens of framed images. After a few seconds, the images will all change in synchronicity, as if a new image-search batch was loaded. Each one will display a physically similar face to one “lucky” audience member standing in the center of the room. This sets off an unsettling chain reaction, as everyone in the space tries to find the target person and then focuses in on him or her. It is an exaggeration of our selfie-obsessed culture, which raises a question for all to consider when engaging in a dialog about inclusion: Are you really ready for it?
Glenn Kaino is an artist with a career that spans a wide range of media and creative activity. In 2012, he was selected by the State Department to represent the United States in the 13th International Cairo Biennale, and he was included in the 2004 Whitney Biennial at the Whitney Museum of American Art, New York, the 12th Lyon Biennial in Lyon, France, and Prospect 3 in New Orleans. He has had exhibitions at The Modern in Fort Worth, Texas, the Andy Warhol Museum, Pittsburgh, the Museum of Contemporary Art, Los Angeles, the Los Angeles County Museum of Art, the International Film Festival Rotterdam and many others. He has upcoming solo exhibitions at the CAC Cincinnati, the High Museum of Atlanta and MASS MoCA in North Adams, Massachusetts.
Jesse Williams is a native of Chicago and graduate of Temple University. He began his career teaching at low-income Philadelphia public charter schools. After moving to New York City, he later began his professional acting career. Williams stars in ABC’s Grey’s Anatomy and has served as senior producer and correspondent for Epix docuseries America Divided with Norman Lear. He also executive-produced the documentary Stay Woke: The Black Lives Matter Movement. Williams gained international attention while accepting the 2016 BET Humanitarian Award, where he spoke about police brutality and systemic inequities.
Your Hands Are Feet
Your Hands Are Feet is an interactive room-scale VR experience that places you in surreal realities made up of experiential metaphors. You start out in a kitchen with a carton of six eggs, which can be picked up and thrown or cracked on the countertop. Each egg acts as a portal to a new experience; the room is transformed into a surreal landscape, presenting a reality where your head can be in the clouds, the whole world can crumble around you, you can be all thumbs or have two left feet (but really, though). Your Hands Are Feet is being produced in connection with Egg, an independent feature film created by an entirely female and Sundance-alumni team.
Amelia Winger-Bearskin is a 2017 Sundance Institute Time Warner Fellow, an artist at the 2017 Sundance New Frontiers Story Lab and a 2016 Oculus Launchpad fellow. She is the founder of the Stupid Hackathon and is the director of Idea New Rochelle, a nonprofit dedicated to creating an alliance of facilities for the immersive tech community in New York. Amelia began her career as an opera singer and became the writer, director and star of productions that were too weird for opera, theater and museums but are quite at home in the bonkers world of VR.
Sarah Rothberg is an artist who works with emerging technologies. In 2014, Sarah became fascinated with Facebook’s new efforts to capitalize on nostalgia and its coinciding acquisition of virtual reality company Oculus VR. This launched her interest in virtual reality and its implications, leading her to create her first major VR experience, Memory/Place, which Artspace called perhaps “the first true virtual-reality art masterpiece.” Her VR artworks Touching A Cactus and Memory/Place were recently included in the Bunker pop-up show at Sotheby’s S2 Gallery. She teaches VR at NYU and has been an artist-in-residence at NYU, Superbright, Mana Contemporary and Harvestworks.
*Recommendations for the Alternate Realities grants program were made by an independent selection committee, but the final selections were made by Engadget’s editorial leadership. Committee members with financial or contractual ties to a project or artist being considered were asked to recuse themselves from recommending those projects to avoid any perceived or actual conflicts of interest.
**Note: The Engadget Experience was originally scheduled for November 16th. Ticket holders have been notified of the date change.
Marvel’s latest TV series The Defenders, which brings together heroes such as Daredevil, Jessica Jones and Luke Cage, might have just debuted on Netflix, but that doesn’t mean that the comic book company is resting on its laurels. Marvel TV is using the series’ premiere to hype its next show, The Punisher, releasing a new teaser trailer today.
We know a little about the series, which stars Jon Bernthal in the titular role of Frank Castle. The character was first introduced in the second season of Daredevil, so it makes sense that Deborah Ann Woll will costar, reprising her role as Karen Page. Rumors indicate that the series will arrive on Netflix sometime in November.
Marvel and Netflix have found quite a bit of success with their joint endeavors such as Jessica Jones and Luke Cage. However, it’s difficult to say exactly how well they’ve done, given that the streaming service doesn’t release viewing numbers. Critical reception to their latest release, Iron Fist, wasn’t exactly stellar (though Netflix and Marvel did greenlight a second season under a new showrunner) so it will be interesting to see how they course correct with future shows.
In a statement today, President Trump announced that he’s elevating the US Cyber Command to a unified combatant command, bringing it to the level of others like the US European Command and the US Special Operations Command. “This new Unified Combatant Command will strengthen our cyberspace operations and create more opportunities to improve our Nation’s defense,” said Trump. “The elevation of United States Cyber Command demonstrates our increased resolve against cyberspace threats and will help reassure our allies and partners and deter our adversaries.”
The move, which was also considered by the Obama administration, brings with it a renewed discussion of whether the command should be split from the NSA and aligned more closely with the military — a separation that has been in talks for some time. Secretary of Defense James Mattis will review the possibility of separating the two and will announce his recommendations at a “later date,” according to the president’s statement.
“United States Cyber Command’s elevation will also help streamline command and control of time-sensitive cyberspace operations by consolidating them under a single commander with authorities commensurate with the importance of such operations. Elevation will also ensure that critical cyberspace operations are adequately funded,” said Trump.
Via: The Hill
Source: White House
By Erin Lodi
This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.
After 10 hours of new research and testing (on top of three years’ worth of work on previous guides), we think the Fujifilm Instax Mini 90 Neo Classic is the best instant film camera for most people, combining ease of use, great-looking photos, and retro-cool style at a reasonable price.
Who this is for
When it’s time to party, the Fujifilm Instax Mini 90 Neo Classic is perfect for passing around. Photo: Erin Lodi
The big draw of instant cameras is that they’re fun to use. A great conversation starter, an instant camera gives you an easy way to coax even the most camera-shy subjects into posing for a portrait. Add to that the fact that you can’t share these images on Facebook at the touch of a button, and people are only too happy to offer up great, uninhibited poses. You’re also likely to draw a crowd of curious onlookers as you wait for the prints to develop.
Instant cameras are a decidedly retro proposition, with a limited set of features. You don’t have a zoom lens, and the viewfinders are tiny and less than precise at close distances. Instant film isn’t cheap, either—you’re looking at more than 50¢ for each shot you take. And you don’t get an on-screen preview of how the lighting and contrast will affect your photograph, so you can’t predict how the photo will turn out. But those shortcomings are part of the charm of shooting with an instant camera—you never know quite how the image will look until you pull the trigger.
How we picked and tested
We tested Fujifilm’s Instax offerings as well as some new instant-camera options from Lomography and The Impossible Project. Photo: Erin Lodi
Because authoritative editorial reviews of instant cameras are scarce, we talked to a number of photographers who work with the instant format about their preferences. Although everyone associates the Polaroid brand with instant cameras, when it came to recommending a current model, the resounding call from our photography experts was for Fujifilm models. It wasn’t even close.
Since 2013, we’ve compared instant-camera usability, image quality, and features by shooting in a variety of indoor and outdoor conditions. We also put the cameras through the most appropriate real-world examination we could think of: the party test. What happens when a novice shooter picks this thing up at a gathering? Is it fun to pass around and shoot with at a company holiday party or a family dinner? How do those photos look?
Photo: Erin Lodi
The Fujifilm Instax Mini 90 Neo Classic is the best instant film camera for most people looking to experience the fun of analog photos. Intuitive controls mean it’s ready to pass around at a party, and it has a great viewfinder that makes framing your shot easy. It uses a rechargeable lithium-ion battery and offers additional shooting options that let experienced shooters get creative, too. What you get with all of that are predictably pleasing images with more-accurate colors and finer detail than its competitors can produce, all in a compact and durable retro design. Prints measure 2.1 by 3.4 inches, about the size of a credit card and perfect for a souvenir that’s easy to pocket, stack, or hand out.
In daylight or in brightly lit interiors, the camera’s auto mode offers image quality that is consistent and mostly color accurate for instant film (results can be a little bluish in tint). The Mini 90 Neo Classic also has more manual controls than any other instant camera we tested, including an L/D (lighter/darker) button that lets you under- or overexpose the image slightly, and options to shoot in macro, double exposure, or landscape mode.
The Fujifilm Instax Mini 90 Neo Classic’s small, squarish body can make for an awkward grip, mostly because there’s no real estate to the right of the lens housing for your middle, ring, and pinky fingers to hold on to. Also, a small loop harness on the bottom of the camera body means the camera tips over easily when you place it on a flat surface. This is a strange design decision, considering that Fujifilm could have easily placed the loop on the right-hand side panel instead.
Photo: Erin Lodi
If you’re looking only for an instant camera to pass around at your next party and you aren’t interested in going beyond basic snapshots and fiddling with more creative exposures, the Fujifilm Instax Mini 50S makes good-looking prints and costs a bit less than our top pick at the time of this writing. It uses the same film as the Fujifilm Instax Mini 90 Neo Classic—but its images don’t look as good, it doesn’t have as many manual controls, and you’re stuck with CR2 batteries instead of a rechargeable.
The Mini 50S has fewer controls than our top pick, with just three buttons allowing for exposure compensation, fill flash, or landscape mode, plus a timer. But this design keeps operation even simpler: We doubt anyone would take more than a second or two to figure out how to snap a photo with it. In our tests most of its prints looked a little less saturated and detailed than the Mini 90’s images, but the differences weren’t night-and-day.
For wider photos
Photo: Erin Lodi
We recommend the Fujifilm Instax Wide 300 if you like a larger print (3.4 by 4.3 inches) and don’t need extra controls such as exposure compensation. This camera is substantial: it’s twice the size and weight of our top pick. But it has better ergonomics and a beefier grip, as well as a tripod socket, and it runs on four AA batteries (though there’s no reason you can’t use rechargeable AAs).
Fujifilm Instax Wide Instant Film might be slightly harder to find at the corner drugstore, but online a two-pack of 10-exposure cartridges costs about $20, or $1 per print.
Although the Instax Wide 300 makes larger prints, its image quality is not as good as that of the Mini 90, which delivers richer colors and better contrast. Bigger is more impressive, however, when you’re handing out these photos at a party: People love the larger size, and we’ve noticed that most dinner guests are rarely concerned about saturation or sharpness.
For printing smartphone pics
Photo: Erin Lodi
You don’t need an instant camera to get old-school instant prints: The Fujifilm Instax Share SP-2 can print your smartphone pics to Instax film in seconds. Setup is seamless once you’ve downloaded the Instax Share app and connected to the tiny printer’s Wi-Fi. The app is easy enough to use, and you can add a filter or a border to your image if you want.
The hamburger-size SP-2 runs on the same-size rechargeable battery as the Fujifilm Instax Mini 90 Neo Classic and comes with a USB cord for keeping it powered up.
The USA Today Network has announced that in collaboration with Instagram, it will livestream the total solar eclipse on August 21st. The feed will feature real-time video broadcasts by journalists in areas along the eclipse’s path of totality. Reporters in Oregon, Idaho, Wyoming, Nebraska, Missouri, Kentucky, Tennessee and South Carolina will cover and record the event. You can see a schedule of who will be broadcasting from where and when here.
Along with the Instagram livestream, USA Today will also broadcast eclipse coverage through Facebook Live across all 110 of its news sites. And its eclipse-dedicated web page will feature the rotating livestream, a real-time map of the eclipse’s path and eclipse FAQs. Coverage will begin at 9:00AM Pacific time.
We’ve put together a guide on how to watch the eclipse, including tips on seeing it in person and where you can watch online. Along with USA Today, CNN and of course NASA are offering live coverage of the event.
Source: USA Today
Apple appeared in Los Angeles Superior Court on Thursday to argue that it shouldn’t be held liable for iPhone-related distracted driving accidents, in response to a lawsuit filed against the company earlier this year.
California resident Julio Ceja filed a class action complaint against Apple in January, accusing the company of putting profit before consumer safety by choosing not to implement a lock-out mechanism that would disable an iPhone’s functionality while a driver is using it behind the wheel.
Ceja said his vehicle was involved in a collision with another vehicle whose driver was texting on an iPhone at the time.
Apple, however, told the court that it’s a driver’s fault if they choose to misuse an inherently safe iPhone while operating a vehicle. Apple essentially said it cannot be blamed simply because it manufactures the device, according to court documents filed electronically and obtained by MacRumors.
Just yesterday, a U.S. district court in Texas dismissed a similar distracted driving lawsuit brought against Apple last year. In that case, Meador v Apple, Inc., the plaintiffs accused Apple of failing to automatically disable a user’s ability to operate an iPhone while driving, and of improper marketing.
However, Judge Robert W. Schroeder III said the plaintiffs’ injuries stemmed from the driver’s failure to operate her vehicle safely.
When a driver negligently operates her vehicle because she is engaging in compulsive or addictive behaviors such as eating food, drinking alcohol, or smoking tobacco, it is the driver’s negligence in engaging in those activities that causes any resulting injuries, not the cook’s, distiller’s, or tobacconist’s supposed negligence in making their products so enticing.
Similarly, her decision to direct her attention to her iPhone 5 and maintain her attention on her phone instead of the roadway is the producing cause of the injury to Plaintiffs.
Apple has faced similar lawsuits in the past. In response to one filed in Texas in 2015, Apple indicated the responsibility is on the driver to avoid distractions in a statement provided to The New York Times:
“We discourage anyone from allowing their iPhone to distract them by typing, reading or interacting with the display while driving,” Apple said… “For those customers who do not wish to turn off their iPhones or switch into Airplane Mode while driving to avoid distractions, we recommend the easy-to-use Do Not Disturb and Silent Mode features.”
Ceja’s lawsuit mentioned a patent for a motion analyzer that would detect whether a handheld device is moving beyond a certain speed. A scenery analyzer would then determine whether the holder of the handheld device is sitting somewhere other than the driver’s seat; if the holder appears to be the driver, the device could be disabled.
In other embodiments, a vehicle or car key could transmit a signal that disables functionality of the handheld device while it is being operated. To a lesser degree, a vehicle could also transmit a signal that merely sends the device a notification stating that functionality should be disabled.
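Taken together, the patent describes a simple decision procedure: check for a vehicle-transmitted disable signal, check whether the device is moving at driving speed, and check whether its holder appears to be in the driver’s seat. A minimal sketch of that logic, assuming illustrative function names and a hypothetical speed threshold (the patent does not specify one, and this is not Apple’s implementation), might look like this:

```python
# Hypothetical sketch of the lock-out logic described in the patent.
# The threshold and function signature are illustrative assumptions,
# not anything Apple has shipped or published as code.

DRIVING_SPEED_MPH = 15  # assumed cutoff for "in motion beyond a certain speed"

def should_disable(speed_mph, holder_in_driver_seat, vehicle_disable_signal=False):
    """Return True if the handheld device should be locked out."""
    # One embodiment: a car or car key transmits a signal that disables
    # the device outright, regardless of motion analysis.
    if vehicle_disable_signal:
        return True
    # Motion analyzer: below driving speed, no lock-out is needed.
    if speed_mph < DRIVING_SPEED_MPH:
        return False
    # Scenery analyzer: a passenger (seated anywhere other than the
    # driver's seat) keeps full functionality; only the driver is locked out.
    return holder_in_driver_seat
```

The notable design point is the second check: the scenery analyzer exists precisely so that passengers in a moving car aren’t locked out along with the driver.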
Apple hasn’t gone as far as implementing any of those functions, but in iOS 11 it introduced Do Not Disturb While Driving.
Do Not Disturb While Driving is an optional setting that, when enabled, turns on whenever an iPhone connects to a vehicle via Bluetooth or detects rapid acceleration. While active, the feature mutes all incoming phone calls, notifications, and text messages, and the iPhone’s screen stays off completely.
Phone calls are allowed, so long as an iPhone is connected to a car’s Bluetooth or a hands-free accessory, allowing drivers to respond without needing to pick up their phone. If not connected to Bluetooth or a compatible accessory, calls will be blocked like text messages and notifications.
For text messages, there is an option to send your contacts a message that lets them know you’re driving and will get back to them later. In an emergency, a person who is attempting to contact you via text while you’re driving can break through Do Not Disturb by sending a second “urgent” message.
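The behavior described above amounts to a small rule table keyed on the type of incoming event. Here is an illustrative model of those rules in Python; the function and return values are assumptions for clarity, not Apple’s API:

```python
# Illustrative model of the Do Not Disturb While Driving rules described
# above. Names and return values are hypothetical, not an iOS API.

def handle_incoming(kind, bluetooth_connected=False, urgent_followup=False):
    """Return how an incoming event is treated while the feature is active."""
    if kind == "call":
        # Calls ring through only via car Bluetooth or a hands-free
        # accessory; otherwise they're blocked like everything else.
        return "allowed" if bluetooth_connected else "blocked"
    if kind == "text":
        # A second "urgent" message from the sender breaks through;
        # otherwise the sender can receive an automatic "I'm driving" reply.
        return "delivered" if urgent_followup else "auto-reply"
    # All other notifications are muted and the screen stays off.
    return "muted"
```

The asymmetry is deliberate: calls are treated as safe only when they can be answered hands-free, while texts are never delivered on first attempt, only on an explicit urgent follow-up.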
Do Not Disturb While Driving can also be activated manually in Settings > Do Not Disturb or in Control Center.