By Amadou Diallo
This post was done in partnership with The Wirecutter, a buyer’s guide to the best technology. When readers choose to buy The Wirecutter’s independently chosen editorial picks, it may earn affiliate commissions that support its work. Read the full article here.
After researching and testing more than 30 high-end compact cameras over the past three years, we recommend the Panasonic Lumix DMC-LX10 if you’re looking to take the best pictures possible with a camera small enough to slip into your pocket.
How we picked
Our top picks, the Panasonic LX10 (left) and Sony RX100 III (right), combine the high image quality of a 1-inch sensor and a bright lens with the convenience of a pocketable camera body. Photo: Amadou Diallo
Today’s high-end compact cameras are defined by an ability to capture higher-quality images than your smartphone, combined with a design that’s small enough for you to always carry around. They pair large sensors with wide-aperture zoom lenses and fit it all in a package that slips easily into a jacket or pants pocket. We specifically looked at models under the $1,000 mark, because models priced above that tend to be specialist devices, beyond the “best for most people” focus we take.
Our contenders here use 1-inch sensors that have nearly four times the imaging area of those found in an iPhone, a gap that was unheard of just a few years ago. All else being equal, a larger sensor can capture more light, which leads to cleaner, more detailed images and greater background blur when you’re shooting at wide apertures.
Lens speed, or aperture, is a measure of how much light the lens can let in at a given opening. The faster the lens—or the wider the aperture—the more light it allows. With more light available, you can shoot at lower ISO values for less noise or at faster shutter speeds to freeze subject movement and avoid camera shake.
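The stop arithmetic above can be sketched in a few lines of Python. This is our own back-of-the-envelope illustration, not anything from the camera makers: light admitted scales with 1/N², where N is the f-number, so halving the f-number quadruples the light.

```python
import math

def relative_light(f_number: float, reference: float = 2.8) -> float:
    """Light admitted at f_number, relative to the reference aperture."""
    return (reference / f_number) ** 2

def stops_gained(f_number: float, reference: float = 2.8) -> float:
    """Exposure advantage in stops; each stop doubles the light."""
    return math.log2(relative_light(f_number, reference))

# Going from f/2.8 to f/1.4 admits four times the light (two stops),
# which you can trade for a 2-stop lower ISO or a 4x faster shutter.
print(relative_light(1.4))  # -> 4.0
print(stops_gained(1.4))    # -> 2.0
```

This is why a lens that opens to f/1.4 can shoot at ISO 400 in light that would force an f/2.8 lens to ISO 1600 at the same shutter speed.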
Limiting our research to cameras with large sensors, zoom lenses with fast apertures, and a reasonable degree of pocketability, we were left with a handful of models to consider from Canon, Nikon, Panasonic, and Sony.
Though it’s much thicker than a smartphone, the LX10 can still slip easily into a relaxed-fit jeans pocket. Photo: Amadou Diallo
The Panasonic Lumix DMC-LX10 is easy to use and very fast to focus, and it offers touchscreen control and shoots 4K video. The LX10 also has a built-in flash that’s more useful than most found on compact cameras, because you can tilt it back to bounce light off the ceiling, avoiding harsh shadows. Although many of its rivals can match it in one or more of those respects, the LX10 offers the most compelling combination of these features at a lower launch price than the competition. To read about some of the more advanced features of the LX10, please see our full guide.
The LX10 is easy and quick to operate, largely because of its touchscreen interface. The customizable Quick Menu and a virtual pull-out tab allow you to quickly access your most commonly used parameters and functions. Perhaps the biggest advantage of the touchscreen is that with the camera set to its single- or multi-point AF mode, you simply tap the rear screen to set focus. This feature is a boon for video shooting, as you can “pull focus” to a subject located anywhere in the frame with a single tap.
Images shot with the LX10 look great, with plenty of detail and reasonably realistic colors. Raw files from the LX10 can withstand substantial corrections in image editing software. The LX10 isn’t a groundbreaking camera—we can trace its headline features to previous models from Panasonic, Sony, and Canon. The LX10 is our pick because it offers such a strong combination of well-implemented features at a price that gives you more bang for your buck than we’ve seen in this class of camera.
The LX10 captures 4K video at 30 frames per second and Full HD at 60 fps (or 120 fps with stabilization and autofocus disabled). Even if you’re interested only in still photography, the 4K capability is still relevant thanks to Panasonic’s 4K Photo mode. Instead of capturing a single image when you press the shutter button, this mode records a short video burst and then presents the result as individual 8-megapixel still images, 30 of them for every second of the burst.
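The 8-megapixel figure falls straight out of the 4K frame size. As a quick sanity check (our arithmetic, not Panasonic documentation, and the actual 4K Photo frame may differ slightly depending on aspect ratio):

```python
# A UHD 4K frame is 3840 x 2160 pixels; at 30 fps, every second of
# the burst yields 30 frames that can each be saved as a still.
WIDTH, HEIGHT, FPS = 3840, 2160, 30

pixels_per_frame = WIDTH * HEIGHT
megapixels = pixels_per_frame / 1e6

print(megapixels)  # -> 8.2944, i.e. roughly 8 megapixels
print(FPS)         # -> 30 stills per second of burst
```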
As you might expect, the LX10 connects to your phone via built-in Wi-Fi. Once your phone is connected, you can operate the camera remotely, previewing the scene on your phone and adjusting settings like shutter speed, ISO, and white balance, though oddly not aperture or exposure mode. You can also copy images from the SD card to your phone, but only JPEGs, as the app will ignore raw files.
The most obvious shortcoming of the Panasonic Lumix DMC-LX10 is its lack of an electronic viewfinder. An EVF is most useful when glare makes viewing a camera’s rear screen difficult. The screen on the LX10 does tilt upward, which can help minimize glare, but the screens on the Sony and Canon models offer wider ranges of movement to accommodate shooting with the camera held overhead, something not feasible with the LX10.
This limitation won’t be a dealbreaker for most people, and omitting the EVF helps explain how Panasonic can release the LX10 at a lower starting price than we’ve ever seen for a similarly specced 1-inch-sensor camera.
An EVF but no touchscreen
Photo: Tim Barribeau
If you find an EVF more useful than 4K video capabilities and can live without the convenience of a touchscreen, we like the Sony Cyber-shot DSC-RX100 III. It’s a former top pick, and though it may be a few generations behind Sony’s current model in the popular RX100 series, the Mark III is still a solid choice. It captures great-looking still images and HD video that looks better than what you get from some DSLRs, while offering good performance in low light—all at a price far below that of Sony’s latest iteration.
The RX100 III delivers outstanding images, largely indistinguishable in image quality from those of our top pick. That similarity is no surprise, as it’s widely assumed (though not officially acknowledged) that all of the 1-inch sensors found in these compact cameras are actually made by Sony. The lenses and image processors that each manufacturer uses will account for most of the minor differences in how the in-camera JPEG images look.
Sony’s first-of-its-kind pop-up electronic viewfinder enables eye-level shooting and composition, even when glare makes the LCD screen hard to see. You rarely see a built-in EVF of any kind on a camera this size, let alone one that retracts to retain pocketability. In fact, the RX100 III manages to be slightly more compact than our top pick, which lacks an EVF.
Though it doesn’t shoot 4K video, as our top pick does, the RX100 III produces some of the sharpest, most detailed Full HD footage you can get from nearly any consumer camera.
Sony allows you to add more camera functionality through free and paid apps. The extra capability is welcome, but several rival cameras include similar features built in, at no additional cost.
Spotify’s streaming business is legally tricky, and just ahead of an IPO that could value it at $13 billion, it’s facing two more vexing lawsuits, according to THR. In one, Bob Gaudio of Frankie Valli and the Four Seasons alleges that ’60s hits like “Can’t Take My Eyes off of You” are being streamed without proper licensing. The other suit comes from Bluewater Music Services Corporation, which manages the streaming rights of songs like Miranda Lambert’s “White Liar” and the Guns N’ Roses track “Yesterdays.” All told, over 2,500 songs are in dispute.
Bluewater said Spotify “rules the streaming market through a pattern of willful infringement on a staggering scale,” adding that anything less than $150,000 per infringed song will “amount to a slap on the wrist.”
While Spotify has licensing deals with labels to use sound recordings, the latest suits concern composition rights. Every time a song is played, its writer is owed a royalty. To figure out whom to pay, Spotify works with large rights agencies, but those groups don’t represent every artist out there.
Spotify has called the task of tracking those composers down “daunting,” but the artists and publishers suing it say it hasn’t done enough. The company was recently hit by a class-action suit, which it settled this May for $43.5 million. Publishers who want to opt out of that settlement have until September to do so, and Gaudio and Bluewater did just that.
Spotify CEO Daniel Ek (Toru Yamanaka/AFP/Getty Images)
Bluewater’s claim states that the class-action settlement would let Spotify pay about four dollars per infringed song, a sum it argues will do nothing to deter the company’s behavior: “Such a settlement is essentially an empty gesture that encourages infringement and is entirely insufficient to remedy years of illegal activity.”
Spotify may face other claims before the deadline, too. Bluewater and Gaudio contracted Audiam, a company that sniffs out unlicensed streams, and Richard Busch, the lawyer who won a $5.3 million judgment against “Blurred Lines” artists Pharrell Williams and Robin Thicke.
Spotify has recently denied allegations that it uses fake artists to reduce royalty fees. “We do not and have never created ‘fake’ artists and put them on Spotify playlists,” a Spotify spokesperson told Engadget last week.
Source: Pitchfork, THR
Lyft is getting into the autonomous driving game too, and it’s opening a research facility in Palo Alto in case you were wondering just how serious the company is. In a post on Medium, the new division’s vice president Luc Vincent writes that ten percent of the company’s engineers are working on the tech, a number that will only grow as the project goes on. “We aren’t thinking of our self-driving division as a side project,” he told the New York Times. “It’s core to our business.”
“Every day, there are over one million rides completed on our network in over 350 cities,” Vincent says. “This translates into tens of millions of miles on a daily basis.” He adds that the company is already putting that data to work on its self-driving efforts.
The company’s Open Platform Initiative might be the key differentiator versus Uber’s autonomous ambitions. It’s partnering with General Motors, Land Rover, Jaguar, nuTonomy and Waymo to share its driving data. It’s an effort to rapidly accelerate the machine learning needed to keep self-driving cars on the road.
Ultimately, this would make Lyft a licensor of sorts, developing software and hardware rather than building cars itself.
This doesn’t mean that Lyft is abandoning its (current) bread and butter, though. Vincent says that there will “always” be human-driven vehicles, but each ride will be analyzed to see whether it would be a better candidate for an autonomous trip instead.
Rather than outright saying the reasoning is to capitalize on Uber’s recent litany of missteps, Lyft says its ambitions are a fair bit greener, citing research about the impact an electric self-driving fleet could have on carbon dioxide output. “[Americans] could reduce CO2 emissions by a gigaton every single year.”
More than that, Vincent believes that this could dramatically reduce the number of cars on the road, and thus make our current infrastructure a bit archaic. “It’s a future where we can devote less of our space to roads, concrete and parking lots — and more to parks, playgrounds, homes and local businesses.
“It’s a future where we build our communities and our world around people, not cars.”
With Valerian and the City of a Thousand Planets, Luc Besson is once again delivering an elaborate sci-fi epic, his first since The Fifth Element. (Lucy, his last film, with Scarlett Johansson, was decidedly more small-scale.) Based on the French comic series Valerian and Laureline — which also served as a major inspiration for Star Wars — the film centers on a duo of space- and time-traveling agents who are tasked with solving a galactic mystery.
Valerian, which opens in theaters July 21, stars Dane DeHaan, Cara Delevingne, Rihanna (as a shapeshifting alien, no less) and Clive Owen. The film is also the biggest independent-film production ever, with a budget of $180 million. It’s filled with the sumptuous visuals we’ve come to expect from Besson — the only difference now is that filmmaking technology has finally caught up with his imagination.
How did working on this film compare to The Fifth Element? We’ve been waiting for a while to see you handling a big sci-fi world again.
I’m back! Honestly, The Fifth Element was a nightmare to do, because, at the time, special effects were old-fashioned, it was before we went digital. Every time I wanted to use a green screen, it took like six hours. It was really painful. This one … because we prepped a lot, it was easier. Longer, but easier. The Fifth Element has 188 shots with special effects; Valerian has 2,734. It was less painful, but technology has made it easier for a director. You just have to specifically be very precise about what you want, and you need to have a good time to do it.
I was also very lucky because I worked with WETA, who did Avatar, and ILM, who did Star Wars. So I’ve already got the best two companies in the world. Usually, you have one or the other. But for the first time, they accepted working together. The film was actually too big to handle by themselves. And it was just a pure pleasure to work with those guys because they were trying to be the best, each of them. I was on the postproduction for the last year, and I was amazed every day by the level of what they had.
There’s a third company who worked on [Valerian] called Rodeo; they did all the spaceships and Alpha station. They were also very good.
Now that you can do so much more with digital effects, when do you go practical instead? The practical costumes in The Fifth Element still look very realistic today.
I’m going to give you an example: You have something to build, so you open your toolbox and take what is most appropriate to do it. If it’s a hammer, or a screwdriver or a saw, it’s the same. So if I have a shot to do, my first question with my team is to say, “What is the best way of doing this?” It’s not automatically CGI or props. And for me, that’s the best approach.
The Doghan Daguis [above], which are like the Three Stooges in the film, in fact, are all CG. But I have real actors playing them all the time, so there’s lots of rehearsal with them so we can get the rhythm of the lines. Then, for the shoot, we built three Doghan Daguis figures to use as models. They’re like 5 feet tall, they have the hair and the eyes, they’re perfect.
They can’t move, but every time we finished a shot, we pushed the actors away and put the three models in, and suddenly you have exactly the right light on them. You see exactly how the skin is reacting, you see the reflections on the eyes. We were filming a lot of that, even if it wasn’t animated, so the people from WETA know exactly how the light will react on them later. So, in reality, we have actors, CGI and props in the same shot.
Do you find the fact that you can do practically anything now digitally to be limiting?
You know, give a Ferrari to someone, and one guy will be scared of driving it. And one guy will be thrilled and excited to drive it. And I’m the thrilled and excited one.
You’ve also talked about how watching Avatar made you think this film was possible. Could you talk a little more about that?
It wasn’t the film by itself — I’m a huge fan of the film and the storytelling, but that’s one thing. What inspired me a lot is the experience of [director James Cameron] going through it. The technology wasn’t ready, so he worked with people, he invented the technology to make it possible. Jim invited me on the set, so I went for a day to watch how he was working. He makes it very easy. Suddenly, you realize, “OK, in fact, a human can do it!”
I’m not trying to compare myself to him at all. But, suddenly, before I came on the set of Avatar, I thought it was impossible to make the film. And when I saw him working, I thought, “OK, if I take my time and I prep a lot, we can do it too.”
Was there anything about the technology Cameron used to make Avatar that influenced you?
What’s very important is he worked with this team at WETA when they did Avatar. I used most of the same team, because I went to New Zealand to work with them. So the knowledge they had from Avatar, plus the new evolution [in technology] that they’ve had for the last four years… the tools they’ve given me are amazing, and they would never have this quality if Jim didn’t do Avatar.
What specific tools or technologies did you use in the film?
It’s the knowledge that they [the visual-effects teams] have. Suddenly you have people in gray with suits and a camera, and you’re recording that, and it’s no more complicated than if you were on a set with normal actors. They make it totally easy to use. And that’s the big change. When I was with my actors using blue screens, it was easy. I just had to direct my actors and do my framing, and I wasn’t even thinking about the rest, because it was handled by them.
The same thing with ILM — they did the entire big market scene. I dealt with the location, set and actors, and I didn’t have to think about the technology and how they’re going to do it. The memory I have from The Fifth Element is exactly the opposite. When a shot was a special effect, the entire set was locked like a church. Even I needed a pass to get in. They didn’t make it easy for the director; they were taking the set from me.
How did you approach making the complex desert-market sequence in the film? Just the ingenuity of how you visualized normal people walking around the desert, while also interacting with another dimension, seemed astounding.
When we started the scene, I had a meeting with all the technicians, around 200 people, and tried to explain the big market. And at the end, they all smiled and I could tell nobody understood what I was saying. So I said to myself, “OK, I have a problem here, they’re not going to know what they’re doing.” Storyboards and my explanation were not enough because we had to portray the desert vision, the market vision, the point of view of an alien and the point of view of a human.
So I took all the students from my directing and writing school in Paris — there’s 120 students, almost like a big massive class — and we shot all 600 shots from the storyboard, one by one. The entire thing. The students played the actors; they did the props, costumes and everything. I edited that 18-minute scene together, added some temp music, and then I had the full scene to show the team. And once they saw it, they said, “OK, we understand!”
It’s so funny to watch the scene now — we have the final version, as well as the one made by the students, and it’s hilarious. It’s exactly the same length and the same shots, except one is creaky and the other is good. It was the only way for everyone to understand what we were doing.
Will we be able to see the entire student clip on the DVD later?
Probably a little bit, but not the entire 18 minutes.
What is your biggest lesson from this film?
I think the biggest lesson is, when I started seven years ago, I said “Let’s try to do it.” Now, I made it. And I’m kind of surprised myself. It’s exactly the same feeling as wanting to take a boat around the world without stopping. And then now, a couple days before the opening, I see the coast. I made the tour! And I’m just happy for that.
Do you know what you’re doing next?
I’ll probably do something smaller without special effects. (Laughs)
Unless you’re a YouTube power user, you may not have known that the site had the Video Editor and Photo slideshow tools to create finished video projects. Now that you’ve learned that, I’m afraid to say that those tools are about to get the axe. If you’re currently cutting a project, you have until September 20th to finish and publish it, Google notes on its YouTube support pages.
As for why the tools are going away, it’s likely that not many folks were using them. Web video editing is hella slow compared to native applications, because you have to upload your video and download the final copy. That said, it was a good option for Chromebook users or folks with underpowered laptops or tablets.
However, Google points out that “there are many free and paid third-party editing tools available if you’re looking for new editing software.” In other words, the app was useful as a way to get folks on board YouTube when it came along in 2010, but with more apps out there, it’s no longer worth the resources.
If you’re willing to pay and have a decent PC or Mac, standalone options include Adobe Premiere Pro CC, Apple’s Mac-only Final Cut Pro X and Avid’s professionally oriented Media Composer. Avid also has a free app called Media Composer First, Apple has iMovie for Mac and Clips for iOS, and Microsoft has Movie Maker for Windows. If you just want a web editor, options like Magisto and WeVideo work in most browsers, including Chrome on Chromebooks.
Video Editor and Photo slideshow are heading to Google app heaven along with Spaces, Windows and Mac Chrome apps and Google Reader (RIP). However, at least Google is keeping its Enhancements app that can filter, blur and trim videos, to keep the social media crowd happy.
Via: 9to5Google
Some of us here at Engadget can remember a time when cellphones on college campuses were strictly “for emergencies” (read: calls home to Mom and Dad). By now, of course, things have changed: Our handsets come with us everywhere, and most of us don’t have to worry about pissing off our parents by exceeding our minutes allotment. We imagine many of today’s college freshmen already have phones, but for those of you who’ve earned an upgrade, we crammed five into our back-to-school guide, including some budget options. Not in the market yet for a new phone? You might still want an external battery pack, a fast microSDXC card or, heaven forbid, a “selfie case,” which is definitely a thing.
Source: Engadget’s 2017 Back to School Guide
We’re getting another look at the upcoming Galaxy Note 8 thanks to some mockup renders created by a case maker based on leaked details. BGR got ahold of them and said they’re probably our best look at the new model yet.
From these images, we see that the phone has an Infinity Display with almost no bezel and a slew of sensors above it, including a front-facing camera, iris scanner, LED and a light sensor. The design is a little boxier than the Galaxy S8’s. On the back, the Note 8 has its new dual-camera system, an LED flash/heart rate monitor and a fingerprint sensor, which, while still awkwardly close to the camera, appears to be separated from the other components by a lip.
These images add to what we already knew about the Note 8. It will come with 6GB of RAM, have a 6.3-inch Infinity Display and pack a 3,300 mAh battery. The Galaxy Note 8 is expected to launch in September and will be officially announced at Samsung’s Unpacked event on August 23rd.
Snap Inc. has quietly acquired a team that specializes in protection against reverse engineering. Prior to joining Snap, the Strong.Codes team built software that prevents a product from being dismantled to learn how to copy or rebuild it. The threat is nothing new to Snap, which has previously acknowledged the risk of other social media sites mimicking Snapchat’s features, a risk that was realized when Instagram and Facebook unveiled their “story” features in quick succession.
Of course, companies can develop their own features without trawling Snapchat’s code. And the company deals with far more brazen imitation than added features on Facebook or Instagram. South Korea’s Snow, for example, is basically a carbon copy of Snapchat.
During his first earnings conference call in May, Snapchat co-founder Evan Spiegel said: “If you want to be a creative company, you have got to be comfortable with and enjoy the fact that people copy your products if you make great stuff.” Imitation may be a form of flattery, but the acquisition of the Strong.Codes team suggests Snapchat isn’t going to make it easy for its rivals to do that.
Delta is expanding its biometric check-in feature that allows some customers to use their fingerprints instead of a boarding pass. The service was first launched at Ronald Reagan Washington National Airport (DCA) in May and let Delta SkyMiles members enter the Delta Sky Club with their fingerprints rather than a physical ID. Now, those members can use their fingerprints to board their plane.
The airline is partnering with Clear for this service and SkyMiles members just have to enroll with Clear in order to take advantage of the feature at DCA. “It’s a win-win program. Biometric verification has a higher level of accuracy than paper boarding passes and gives agents more time to assist customers with seat changes and other skilled tasks instead of having to scan individual tickets – and customers have less to keep track of as they travel through the airport,” said Delta COO Gil West in a statement.
Earlier this year, Delta began testing a facial recognition system for checking luggage at the Minneapolis-Saint Paul International Airport. And it’s not the only group looking to use biometrics for airport identification. The US is looking at a widespread facial recognition-based security plan and Australia is working on equipping all of its international airports with facial, iris and fingerprint recognition to remove the need for passport checks.
Delta says the next step for its DCA fingerprint rollout is to allow passengers to use their prints for baggage check. “Once we complete testing, customers throughout our domestic network could start seeing this capability in a matter of months – not years. Delta really is delivering the future now,” said West.
Since the announcement of Apple’s new augmented reality developer platform at WWDC in June, developers have been sharing interesting new AR experiences on iOS devices, including practical applications like measuring tape apps and basic character model demos.
Today, we’ve rounded up the newest examples of how ARKit could work in real-world scenarios, starting off with a maps addition that could bolster directions in Apple Maps. As with all ARKit demos, today’s examples are not confirmed to be the final launch products for augmented reality apps coming down the line, but they are intriguing glimpses into what users can expect when the AR features debut on iOS 11 this fall.
Images via @AndrewProjDent
Shared on Twitter by iOS developer Andrew Hart, the first example of the AR-enhanced maps software overlays labels on points of interest when they’re viewed through the camera of your iPhone or iPad, along with an estimate of how far you are from each location.
Digging deeper into directions to a specific location, Hart combined ARKit with Core Location — the Apple developer framework that lets apps access a device’s geographic location and orientation — to create augmented reality turn-by-turn directions.
Acquisitions of mapping companies and patent filings dating back to 2009 have long suggested that Apple is interested in adding AR features into Apple Maps, but the technology prior to ARKit has likely not been promising enough for such an implementation.
ARKit + CoreLocation, part 2 pic.twitter.com/AyQiFyzlj3
— Andrew Hart (@AndrewProjDent) July 21, 2017
Continuing on the measuring AR app trend, a new tool was shared on the Made With ARKit Twitter account recently, allowing users to perform precise square foot measurements of an entire room. The last few measuring apps detailed in our ARKit roundup in June centered upon AR-enabled measuring tapes that could only provide distance estimates in a straight line.
For those interested in gaming AR apps, developer Kobi Snir shared a real-life version of Pac-Man that uses ARKit to place users directly within the game’s maze, filled with dots and ghosts. The players take on the role of Pac-Man, and move around the maze to eat every dot while avoiding the ghosts. Another recent gaming-related ARKit example showcased what Minecraft would look like in the real world.
Games have been a core part of ARKit from the day it was announced, with Apple’s senior vice president of Software Engineering Craig Federighi touting Pokémon Go as one of the first apps that will receive ARKit-related enhancements this fall. “The Pokémon is so real, he’s right there on the ground,” Federighi said at WWDC. “As the ball bounces, it actually bounces right there in the real environment. It’s AR like you’ve never seen it before.”
Of course, these are just a handful of recent examples of ARKit that developers have shared. Others include a graffiti doodling app, a shopping app (similar to IKEA’s planned ARKit app), and an inter-dimensional portal. Apple CEO Tim Cook has said AR makes him so excited that he just wants to “yell out and scream,” telling Bloomberg Businessweek last month that, “When people begin to see what’s possible, it’s going to get them very excited—like we are, like we’ve been.”