Marvel’s ‘The Punisher’ renewed for second season on Netflix
Almost a month after Netflix released the first season of The Punisher, the network has officially renewed it for a second. As usual for a first announcement, there’s no hint at a release date or what’s in store for violent vigilante Frank Castle whenever it drops. But clearly Netflix wants to keep the superhero train running after the highly anticipated Defenders series flopped.
Time to reload. #ThePunisher Season 2 is coming. pic.twitter.com/J76ksLfDqx
— The Punisher (@ThePunisher) December 12, 2017
Other networks have scooped up superhero shows of their own in the last few months, including Marvel’s Runaways on Hulu. Not that Netflix should worry about losing its lead, with a second season of the hit series Jessica Jones out next March and a spin-off of Riverdale based on teen witch Sabrina Spellman coming someday. But fleshing out its shared Marvel universe has been a valuable draw for the streaming provider: Only the CW’s “Arrowverse” and its crossover events have rivaled Netflix’s connected shows.
Source: Netflix’s The Punisher (Twitter)
Watching this guy play piano with a bionic hand restores our faith in tech
Are you one of those folks, stuck in the past, who thinks that having no hands means you’re not eligible for a career as a pianist? Researchers at the Georgia Institute of Technology are here to send your judgmental ass back to the 20th century where you belong! They have developed a smart new prosthesis, driven by ultrasound, which allows amputees to tickle the ivories with a bionic hand equipped with impressively agile prosthetic fingers. It was recently used by a musician who lost part of his right arm to play the piano for the first time since his accident five years ago.
“In a nutshell, our technology allows amputees to gain full control of their prosthetic hand in a dexterous and intuitive manner, including finger-by-finger control,” Gil Weinberg, the Georgia Tech College of Design professor who leads the project, told Digital Trends. “The accuracy and sensitivity of our novel ultrasonic sensor and machine learning algorithms allow amputees to perform sophisticated tasks such as playing the piano. For the first time, amputees can now control fine motor skills in their prosthetic hands in a manner that is not possible with current EMG technology.”
As you can see from the video below, the technology was loosely inspired by the bionic hand Luke Skywalker received after his battle with Darth Vader in The Empire Strikes Back. In this case, however, no movie-style illusions were needed to pull off the effect.
While amputees don’t have their physical hands, their nerves still communicate with their forearm muscles, which are mapped to operate their “phantom fingers” as if they still existed. When individuals attempt to move these phantom fingers, it is possible to sense the muscles that represent said movements using ultrasound. “Unlike myoelectric arms that use EMG to sense electric activity in the muscles, we actually look at the trajectory and speed of the muscle movements, which allows for finger-by-finger control,” Weinberg continued. “Such dexterous control can be used for intuitive fine motor skills gestures for a wide variety of operations — from bathing, grooming and feeding to the highly expressive activity of piano playing.”
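For the technically curious, here’s a heavily simplified sketch of the kind of learning setup the team describes, mapping ultrasound-derived muscle-motion features to per-finger commands. To be clear, this is not Georgia Tech’s actual code: the feature dimensions and labels are stand-ins, and the random data exists only so the example runs.

```python
# A simplified sketch (not Georgia Tech's system) of learning a mapping from
# ultrasound-derived muscle-motion features to per-finger commands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))    # stand-in features: per-frame muscle displacement/velocity
y = rng.integers(0, 5, size=1000)  # stand-in labels: which of five fingers moved

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# At runtime, each incoming ultrasound frame would be reduced to the same
# feature vector and classified to drive the matching prosthetic finger.
```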
Next, he says, the team is working hard to miniaturize the ultrasound sensor and reduce its power consumption so that it can become more easily wearable and, possibly, commercialized. Heck, if they work at lightspeed, they might even be able to work out some kind of tie-in to coincide with the next Star Wars movie.
Adobe narrows the gap between Lightroom CC and Classic with new tools
Adobe split Lightroom in two with its last update, and a number of the more advanced features did not make the cut for the mobile-focused Lightroom CC. Now Adobe is catching the desktop version of Lightroom CC up with new tone curve and split-toning tools in a set of updates launched on Tuesday, December 12. The added features also include new artificially intelligent auto options across the entire Lightroom suite, along with updates to the mobile versions of Lightroom CC, Lightroom Classic, and Adobe XD.
Lightroom CC was initially designed to be identical across all devices, so the same tools available on the desktop were accessible on a smartphone. That meant several of the more advanced tools from the original program, now renamed Lightroom Classic, were left out. Now, Adobe is bringing back some of those features, but the change also means Lightroom CC is no longer identical across all devices.
On the desktop version of Lightroom CC, the photo editor now includes both the tone curve and split toning. Curves is a popular tool for adjusting tone, contrast, and color balance, Adobe said. Like Lightroom Classic, Lightroom CC can now adjust in either parametric-curve or point-curve mode. But inside Lightroom CC, the feature sits in a slightly different location since the entire user interface was redesigned: the tone curve options live inside the light panel.
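For readers wondering what a point curve actually does under the hood, here’s a tiny illustrative sketch, not Adobe’s implementation: a handful of hypothetical control points are interpolated into a smooth lookup table that remaps every pixel’s tone value.

```python
# Illustrative point-curve sketch: control points become a smooth per-pixel
# remapping of tone values. Not Adobe's implementation.
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical control points: lift the shadows slightly, boost highlights.
points_in = np.array([0, 64, 128, 192, 255])
points_out = np.array([0, 80, 140, 210, 255])

curve = PchipInterpolator(points_in, points_out)  # monotone, no overshoot
lut = np.clip(curve(np.arange(256)), 0, 255).astype(np.uint8)

def apply_tone_curve(image_u8):
    """Remap an 8-bit image (H x W x 3 uint8 array) through the curve."""
    return lut[image_u8]
```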
Split toning is another advanced tool that didn’t initially make the cut for Lightroom CC, but it is back with the update. Split toning allows photographers to control the color tones in the highlights and shadows individually to create colorized effects, such as a sepia look or an imitation of a specific classic film.
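Again purely as an illustration of the idea, here’s a rough sketch of split toning’s underlying math, assuming a floating-point RGB image in the 0-1 range; the tint colors and weighting are arbitrary picks, not Lightroom’s actual formula.

```python
# Rough split-toning sketch: luminance decides how strongly a shadow tint
# and a highlight tint are blended into each pixel. Not Lightroom's math.
import numpy as np

def split_tone(img, shadow_tint=(0.0, 0.15, 0.25), highlight_tint=(0.25, 0.12, 0.0),
               strength=0.3):
    """Tint shadows and highlights of an RGB float image independently."""
    shadow_tint = np.asarray(shadow_tint)
    highlight_tint = np.asarray(highlight_tint)
    # Per-pixel luminance (Rec. 709 weights) decides shadow vs. highlight.
    lum = img @ np.array([0.2126, 0.7152, 0.0722])
    w = lum[..., None]  # 0 = deep shadow, 1 = bright highlight
    tint = (1 - w) * shadow_tint + w * highlight_tint
    return np.clip(img + strength * tint, 0.0, 1.0)

# Example: warm highlights and cool shadows on a random "photo".
photo = np.random.default_rng(1).random((4, 6, 3))
toned = split_tone(photo)
```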
With the update, there are fewer differences when comparing Lightroom CC with Lightroom Classic. While Classic is still more advanced with the HSL panel and options for exporting a watermark, the addition of the tone curve could sway some photographers to try the revised, mobile-focused edition of Lightroom CC. Another update allows photographers to change the capture time inside Lightroom CC if the camera clock was set incorrectly.
Across all Lightroom options, including CC, Classic, and both mobile versions, Adobe Sensei is powering new auto-correction tools. Using artificial intelligence and a database of professionally corrected photos, the new auto edit analyzes the photo and automatically applies several adjustments suggested by machine learning, improving on Lightroom’s previous auto options with better results, according to Adobe.
While the updates to Lightroom CC already mean that the program’s feature list is no longer identical across mobile and desktop, a handful of mobile updates push that even further. iOS users can now add a watermark when exporting images from Lightroom CC, an option that for now exists only on mobile devices. The tool allows for text-based watermarks, and the user can change the size, position, opacity, and color of the mark. The iOS update also includes bug fixes, speed improvements, and improved HDR when using the app’s camera mode.
Android users also get a few bug fixes, including for issues on specific devices from Huawei and Samsung, as well as the Pixel 2. The update also gives users the option to change how the app launches: tapping and holding the app icon on the home screen brings up options that let users change the default opening screen. The shortcut is only accessible on devices running Android Nougat or later. The update, Adobe says, also gives more control over how images are stored.
Adobe’s traditional Lightroom Classic also gets a minor update. The color range mask launched in the previous update will now allow photo editors to remove points from the sample for more control over which colors are selected. The option is available by holding the “Alt” key on Windows or the “Option” key on Mac while using the color range eyedropper inside of a mask. The update also brings tethered support for the Nikon D850.
And Lightroom 6, the last stand-alone version of the software before Adobe switched to a subscription program, will be getting an update after all — Adobe will be adding support for recently released cameras on December 19.
The Lightroom split was designed to give photographers an option for editing anywhere, across all devices, with Lightroom CC offering a similar user interface from mobile to desktop along with full-resolution cloud storage for backing up photos. Lightroom Classic, on the other hand, gives photographers access to the traditional version of Lightroom, which Adobe says it plans to continue expanding.
Along with updating its photo programs, Adobe is also releasing updates to Adobe XD, a program for designing user interfaces for apps and websites. Adding images from Adobe Stock is faster, with new options for licensing raster files from Photoshop in one click. Additional updates allow designers to align type on objects and underline text, while the beta version of design specs is available in more languages. Users also have new options for sharing prototypes.
Twitter’s threaded tweets keep tweetstorms from clogging your feed
Twitter is turning another user invention into a permanent feature as the “tweetstorm” becomes a built-in option called threads. While users have been manually creating their own tweet threads for years, Twitter is now building the capability into the app, no manual replies or numbered posts required. The new threaded tweets feature will roll out to both the app and desktop versions over the next few weeks, Twitter announced on Tuesday, December 12.
Users already post hundreds of thousands of threads a day, Twitter says, but the new feature simplifies the task. Now, inside the options for composing a new tweet, users can tap the plus icon to add another tweet to the series. Once the series is composed, hitting “tweet all” posts the whole thing at once.
Twitter will also allow users to add to a tweet thread later — once the feature launches, users can navigate to the tweet and click “add another Tweet” to continue adding to the same series.
Along with making those tweetstorms easier to write, the new tool also simplifies viewing the entire series by grouping all the threaded tweets together. Threaded tweets will carry a “show this thread” label that lets users click through to see the rest of the series, or scroll past like it’s just another short post if the first tweet doesn’t interest them.
Twitter confirmed it was testing the feature in November. Tweetstorms have been around for several years in a number of different formats. Some users simply numbered their posts to indicate reading order and how many tweets were included. Others composed longer posts by replying to their own tweets, a method often preferred because it avoids dropping dozens of tweets on the same subject into followers’ feeds.
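For anyone curious how the manual version worked mechanically, the reply-chain trick is simple enough to script against Twitter’s REST API. Here’s a rough sketch using the tweepy library; the credentials are placeholders, and the parameters are standard v1.1 options, nothing tied to the new threads feature.

```python
# A rough sketch of the manual reply-chain method via tweepy; the credential
# strings are placeholders for your own app's keys.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

thread = [
    "1/ A longer thought, broken into pieces.",
    "2/ Each tweet replies to the one before it...",
    "3/ ...which is how Twitter chains them into a single thread.",
]

previous_id = None
for text in thread:
    status = api.update_status(
        status=text,
        in_reply_to_status_id=previous_id,  # None for the first tweet
        auto_populate_reply_metadata=True,  # reply without a visible @mention
    )
    previous_id = status.id
```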
The new format both makes threaded tweets easier to compose and prevents longer series from clogging up news feeds by grouping everything together, which could make the format a bit more welcome on the platform. The update comes after Twitter doubled the character limit to 280, another way the platform is giving users more space to have their say.
Threads now join standard Twitter features like the retweet and the @reply as conventions users started on their own before Twitter incorporated them into the platform’s tools and options.
What would disaster look like in your city? ‘Deep Empathy’ uses A.I. to show you
It’s a common observation: some terrible tragedy, inflicted by either humans or nature, takes place close to home and suddenly it’s all that anyone talks about. Newspapers are full of stories, 24/7 news channels dissect every last detail, and all your friends switch up their profile picture on Facebook in a unified show of solidarity. If something bad happens on the other side of the world, however, that’s not necessarily the case. The disaster might be every bit as horrible, or even more so, but the amount of coverage it receives is significantly less.
There are plenty of possible explanations for why this might be the case, but a big one could simply be that it’s easier to relate to events which take place closer to home than ones that happen thousands of miles away. If an incident happens on your street, or in your neighborhood, or in your city, country, or a neighboring country, there’s a higher likelihood that you’ll have some personal connection to it. You might know someone who lives there or has lived there, you may have visited in the past, or it could simply look familiar enough to make empathizing easier. Is that a trait we should be proud of? Not really. Is it a part of human nature? Almost certainly.
But in a globalized world, building empathy with people elsewhere is essential. While it’s understandable that we care for the people around us, we should also be able to put ourselves in the shoes of people from all over the world — especially when it comes to helping victims of disasters.
Could A.I. help? That’s a bold ambition, but it’s one that researchers from the MIT Media Lab have embraced with a new project. Called “Deep Empathy,” their system uses a popular deep learning method called neural style transfer to create images showing neighborhoods from around the world as if they’ve been hit by some of the disasters afflicting other countries.
Using A.I. for social good
For example, it’s almost impossible to imagine the scale of the brutal six-year war in Syria, which has affected more than 13.5 million people, displaced hundreds of thousands, and destroyed an unimaginable number of homes. But what if some of those scenes took place in your own city of, say, Boston? That’s where Deep Empathy comes in. Rather than requiring human creators to mock up the scenes, it takes two images, a “source” and a “style” image, as input, and then simulates a third image that reflects the semantic content of the source image and the texture of the style image.
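Neural style transfer itself is a well-documented technique, so a condensed sketch can show the mechanics at work. The PyTorch version below is generic, not Deep Empathy’s code: “source.jpg” and “style.jpg” are placeholder file names, and the VGG-19 layer choices are the conventional ones from the style transfer literature.

```python
# Generic neural style transfer sketch: optimize an image so its VGG-19
# content features match the source and its Gram matrices match the style.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)
for i in range(len(vgg)):                  # out-of-place ReLUs keep the
    if isinstance(vgg[i], torch.nn.ReLU):  # saved activations intact
        vgg[i] = torch.nn.ReLU(inplace=False)

prep = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def load(path):
    return prep(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1
CONTENT_LAYER = 21                 # conv4_2

def features(img):
    style, content = [], None
    out = img
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in STYLE_LAYERS:
            style.append(out)
        if i == CONTENT_LAYER:
            content = out
    return style, content

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

content_img, style_img = load("source.jpg"), load("style.jpg")
with torch.no_grad():
    style_targets = [gram(f) for f in features(style_img)[0]]
    content_target = features(content_img)[1]

result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)
for _ in range(300):
    opt.zero_grad()
    style, content = features(result)
    loss = F.mse_loss(content, content_target) + 1e6 * sum(
        F.mse_loss(gram(f), t) for f, t in zip(style, style_targets))
    loss.backward()
    opt.step()
# "result" now carries the source's layout with the style image's texture;
# denormalize it before saving as an image.
```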
[Images: Deep Empathy input and output pairs for Toronto and San Francisco]
“Researchers and artists used this technique to create interesting art pieces in the past, where they transform arbitrary images to Van Gogh-esque pictures, but this is among the first instances to use this technique towards social good,” Pinar Yanardag, one of the researchers on the project, told Digital Trends. “We also experimented the idea on different types of disasters, such as earthquake and wildfires, and got promising results. By using this technique, we wondered whether we can use A.I. to increase empathy for victims of far away disasters by making our homes appear similar to the homes of victims?”
The idea of using tech to promote empathy is a tricky one. After all, as psychologists like Sherry Turkle have spent careers pointing out, technology can serve to distance us from others, even while it provides the opportunity to connect with an unprecedented number of people. Her 2011 non-fiction book perfectly encapsulated this idea with its title: Alone Together.
Other writers like the noted technology critic Evgeny Morozov have explored similar territory. Morozov’s tetchy, though brilliantly argued, 2013 book To Save Everything, Click Here takes issue with what he calls, “the folly of technological solutionism.” At its most basic, it’s the idea that — whatever the giant social, political, or philosophical problem — there’s an app, a smart device, or an algorithm that will fix it.
[Image: Boston]
In the most cynical reading, Deep Empathy is solutionism. It algorithmically airbrushes out the otherness of foreign places, making disasters relatable only by transplanting them to our home turf, presumably among people who look like us. But that is also a harsh reading of a project that could genuinely do some good.
Zoe Rahwan, a research associate at the London School of Economics, told us: “We are seeking to understand whether artificial intelligence can be used to evoke empathy for disaster victims from far away places. As humans, we have a range of biases which can limit our care for people who are different from us, and numb us to large numbers of injuries and deaths. We hope that Deep Empathy will help to overcome these biases, enabling empathy to be scaled in an unprecedented manner.”
Building in empathy
This isn’t the only use of technology we’ve come across that aims to make us into more empathic human beings. In their book The New Digital Age: Reshaping the Future of People, Nations and Business, authors Eric Schmidt and Jared Cohen imagine how virtual reality could be used to make people better capable of empathy by, for instance, transporting them to the Dharavi slum in Mumbai. Since then, we have seen numerous real world illustrations of virtual reality allowing us to literally see the world through the eyes of people we might otherwise never come into contact with, or hear about only as numbers on an international news report.
A.I. could help break down the polarization of views that exist in society, too, by making sure we get exposed to views other than our own. Researchers in Europe have developed an algorithm which aims to burst the filter bubble of social networks by making sure we see stories reflecting a variety of perspectives about issues on which we may have views that are rarely challenged.
At the University of North Carolina at Charlotte, researchers are developing a chatbot whose goal is not simply to obey orders, but to engage users in arguments and counterarguments with the specific aim of changing a person’s mind. Such tools could help pull us up on our biases and get us to see other points of view. Heck, there are even devices that remind us of important issues like energy usage.
The so-called Forget Me Not reading lamp starts closing like a flower the moment you turn it on, throwing out less and less light. To reactivate it, you touch one of its petals, making the lamp a constant reminder to use energy responsibly.
There’s even an exoskeleton that’s designed not to make you feel younger, fitter and physically stronger, but rather to simulate the effects of old age. The goal? You guessed it: to make you more empathic to the struggles of being a senior citizen.
Ultimately, no one tool is going to make us more empathic. It’s not something that can be easily “augmented,” like adding a sixth sense that buzzes when we face north. Technology doesn’t offer quick fixes to these kinds of challenges, although it can provide creative new solutions to big problems. Deep Empathy is one such approach. If it can succeed in opening a few people’s eyes to global problems they might otherwise never consider, that can only be a good thing. Is it a comprehensive, perfect solution? No. But it’s an attempt at using these tools for genuine social change.
Frankly, we’d love to see more computer science projects like it.
Experimental 3D printer uses laser holograms to crank out objects quickly
We love 3D printers, but they sure can take their sweet time to print something. Large objects can take several hours to make it from our desktops onto the print bed. Now, a team of researchers led by Lawrence Livermore National Laboratory thinks it has found a better, much faster way. And for bonus “awesome tech” points, it involves using laser holograms.
Instead of the classic method of printing an object by putting down one layer at a time, the team’s new “holographic” printing technique utilizes special resins that solidify as soon as they are exposed to light. By shining three laser beams simultaneously at a vat filled with the resin, the researchers have showcased the ability to fabricate a 3D structure in only 10 seconds.
“We are demonstrating a new technology that can produce 3D objects instantly, in just a few seconds,” Nicholas Fang, associate professor in the department of mechanical engineering at the Massachusetts Institute of Technology, told Digital Trends. “This is not yet a replicator machine from Star Trek, but to my knowledge it is the fastest way to turn a design from the digital world to a physical copy. The fabrication process is much like the inverse of engineering drawings. In engineering drawings, we project the model of 3D parts with top views, front views, and right views. In our fabrication process, we send the images of these different views from three sides of a resin vat, allowing them to overlap in the volume of the liquid polymer. The parts that are exposed by all three intersecting beams will solidify, forming the desired shape in real time.”
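Fang’s three-view description is easy to play with in code. Below is a toy sketch under two big simplifying assumptions, a binary voxel model and idealized beams: each 2D exposure image is an orthographic projection of the target, and a voxel “prints” only where the back-projections of all three beams overlap. The real system shapes actual light fields in resin; this only illustrates the intersection logic.

```python
# Toy model of three-beam volumetric printing: a voxel solidifies only
# where the back-projections of all three exposure images intersect.
import numpy as np

n = 64
x, y, z = np.indices((n, n, n))
target = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2  # a sphere

# The three "engineering drawing" views: project the target along each axis.
top = target.any(axis=2)    # x-y image
front = target.any(axis=1)  # x-z image
side = target.any(axis=0)   # y-z image

# Back-project each image through the vat and keep the triple intersection.
printed = top[:, :, None] & front[:, None, :] & side[None, :, :]

# Even for a sphere the intersection overshoots slightly (it is the classic
# three-cylinder Steinmetz solid), hinting at why per-view image shaping
# and exposure thresholds matter in the real process.
print("target voxels:", target.sum(), "printed voxels:", printed.sum())
```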
Dr. Maxim Shusteff of the Lawrence Livermore National Laboratory told us that the technique can improve on existing 3D printing because it won’t create the same layering-based defects, such as zigzag or step-style surfaces, that come with regular additive manufacturing. “Although our parts aren’t particularly smooth yet, we’ve broken the conceptual barrier for how to get there,” Shusteff said. “I don’t think this will make other ways of doing additive manufacturing obsolete, but it adds a powerful new tool to the broad additive manufacturing toolset.”
A paper describing the work was recently published in the journal Science Advances.
Humble Mobile Bundle lets you get 11 Android games for $5
Includes Alto’s Adventure, Pug’s Quest, FRAMED 2, and more.
Ever since 2010, Humble Bundle has been one of the best ways to buy awesome video games at excellent prices while helping to support even more awesome charities at the same time. The latest Humble Mobile Bundle is here, and it’s filled with a heap of independent Android games that are all worth checking out.
As usual with Humble Bundle, there are a few different tiers that you can choose from. If you pay $1 or more, you’ll get Invert – Tile Flipping Puzzles, Superbrothers: Sword & Sworcery EP, Alto’s Adventure, and Pug’s Quest. Pay $4.50 or more, and you’ll get all of the games included with the $1 tier in addition to Vignettes, Shooting Stars!, Tower Dwellers, and Caterzillar.
If you want to go all the way and pay $5 or more, you’ll get FRAMED 2, The Bug Butcher, Snowball, and the other eight games that we just mentioned. That $5 tier comes with a total value of $35 worth of games, so in other words, this is absolutely worth picking up if you’ve been looking for some new adventures to get lost in.
This Humble Mobile Bundle is available until December 25 at 11:00 AM PT.
See at Humble Bundle
Instagram now lets you follow hashtags in your main feed
Stay up to date with all of your interests.
Hashtags have been a part of Instagram since forever, and as fun as it is to look up #cats or #pugs, they’ve always required extra work on the user’s part. Adding hashtags to your own posts is easy enough, but if you actually want to look at other photos/videos using certain hashtags, you need to head to the Discover page and manually search for them.
Today, that ends with the new ability to follow hashtags right in your main feed.
Now, when searching for your favorite hashtag, you’ll see a new button at the top of the page that lets you “follow” it. When you follow a hashtag, posts from anyone using it will show up in your main feed alongside posts from the people you follow.
Instagram’s algorithm will more than likely keep every single post that uses a followed hashtag from showing up in your feed, and considering that some hashtags are used just about every second, that’s not necessarily a bad thing. We’ll probably see adjustments to how often hashtag posts show up as Instagram matures the feature, but for the time being, feel free to experiment with larger and smaller hashtags to see what’s the best fit for you and your timeline.
There are plenty of hashtags on Instagram that have caught my attention, but I honestly can’t tell you the last time I took the time to search for one and look at posts using it. The ability to follow hashtags on Instagram might not sound huge at first, but thinking about my daily use habits with the service, I can see myself using this quite often. What about you?
Deals Reminder: Gazelle Has $25 Off All Apple Products and Free Shipping for Christmas Through December 13
This week, popular used electronics seller Gazelle launched a few promos that will last until tomorrow, December 13, which is also the last day you can order from the company and receive free shipping in time for Christmas. Gazelle is marking down all Apple products by $25, and all other products by $10, allowing you to purchase used iPhones, iPads, and MacBooks at a slight discount.
Note: MacRumors is an affiliate partner with these vendors. When you click a link and make a purchase, we may receive a small payment, which helps us keep the site running.
The offer is applied automatically when you add any Apple product to your cart, and it even applies to Apple devices in Gazelle’s clearance tab. We’ve listed a few of the older Apple products you can purchase on Gazelle below, but many others have already sold out thanks to the popularity of certain carriers, colors, or storage capacities. Be sure to head over to the company’s website to browse its full selection before the discount and Christmas delivery guarantee end tomorrow.
- iPhone SE 64GB AT&T (Gold/Good Condition) – $264.00, down from $289.00
- iPhone 6 64GB Unlocked (Silver/Good Condition) – $284.00, down from $309.00
- iPhone 7 128GB Unlocked (Jet Black/Fair Condition) – $504.00, down from $529.00
- 13-inch MacBook Air, Mid 2011, 1.8GHz, 4GB RAM, 256GB SSD (Excellent Condition) – $634.00, down from $659.00
- 9.7-inch iPad Pro 32GB WiFi (Silver/Good Condition) – $404.00, down from $429.00
There are a few other notable deals going on this week, including up to $100 in savings on various Sonos speakers over at Amazon. Specifically, the Play:1 is at $147.00, down from $199.99; the Play:3 is at $249.00, down from $299.99; and the Playbar is at $629.00, down from $699.00. There are even a few bundles, where you can get 2 Play:1 speakers for $298.00, down from $398.00, so be sure to check out Sonos’ Amazon storefront for all of the deals.
Drone maker DJI has also opened up its holiday sale with up to 30 percent off select drones and other products. These include the Mavic Pro Fly More Combo for $1,149, down from $1,299; as well as the Osmo Mobile for $199, down from $299. If you’re specifically looking for the DJI Spark, B&H Photo has the drone for $349 (compared to DJI’s $399 sale price), and you get a $50 B&H Photo e-gift card as well.
Check out DJI’s Winter Holiday Sale page for the full list of drones, camera gimbals, and other accessories on sale.
If you’re thinking about buying Apple accessories for anyone on your holiday shopping list, MacRumors still has its exclusive deals with Nomad and Pad & Quill ongoing through December 18 and December 21, respectively. Visit our Deals Roundup for detailed information on those promo codes and more sales happening this week.
Apple Releases Firmware Update 7.7.9 and 7.6.9 for AirPort Base Stations [Updated]
Apple today released new firmware updates for its Wi-Fi base stations, including the AirPort Express, AirPort Extreme, and AirPort Time Capsule. The 7.7.9 update is available for 802.11ac base stations, while the 7.6.9 update is available for 802.11n base stations.
Release notes for the update were not provided by Apple, but it is likely that this firmware update fixes the KRACK Wi-Fi vulnerabilities that affected many modern Wi-Fi networks and devices.
The KRACK vulnerability had the potential to allow attackers to exploit weaknesses in the WPA2 protocol, decrypting network traffic to sniff out credit card numbers, usernames, passwords, photos, and other sensitive information. Apple released KRACK security updates for its other devices earlier this year.
The new firmware updates can be installed using the AirPort Utility app for iOS or macOS.
Apple reportedly stopped development of its AirPort wireless routers in 2016, and to our knowledge, the company does not plan to produce another product in the AirPort family in the near future.
Update: Support documents for the security contents of the 7.7.9 and 7.6.9 updates confirm that the KRACK Wi-Fi vulnerability has been addressed alongside a few other security issues.