If your data is found on the dark web, Firefox Monitor will let you know
If you’ve been ‘pwned,’ Firefox will let you know. After beta testing the new Firefox Monitor service this summer, Firefox is finally rolling out its credential-monitoring tool to all users. Firefox Monitor, which is built on security researcher Troy Hunt’s Have I Been Pwned (HIBP) database, will notify you if it spots your email address on the dark web. By alerting users when their credentials are found in a breach, Firefox hopes the Monitor service will motivate vigilant consumers to change their passwords before attackers can do further damage.
The service is free to everyone, and despite the name, Firefox Monitor isn’t restricted to users of the Firefox browser. You can visit monitor.firefox.com in any browser to enroll. Once you’re on the Firefox Monitor page, enter your email address, and the service will check it against the Have I Been Pwned database of known breaches to see if it has turned up on the dark web.
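Under the hood, this kind of lookup can be made against the public Have I Been Pwned API directly. The sketch below uses HIBP’s documented v3 endpoint (which requires an API key and a user-agent header); it is an illustrative outline of the check, not Firefox Monitor’s actual implementation.

```python
# Sketch: checking an email address against the Have I Been Pwned v3 API.
# The endpoint and header names match HIBP's public documentation; the
# function names here are our own, for illustration only.
import json
import urllib.error
import urllib.parse
import urllib.request

def build_breach_request(email: str, api_key: str):
    """Build the URL and headers for a HIBP breached-account lookup."""
    url = ("https://haveibeenpwned.com/api/v3/breachedaccount/"
           + urllib.parse.quote(email))
    headers = {
        "hibp-api-key": api_key,            # required since API v3
        "user-agent": "breach-check-demo",  # HIBP rejects requests without one
    }
    return url, headers

def breaches_for(email: str, api_key: str) -> list:
    """Return the known breaches for an email, or [] if it isn't in any."""
    url, headers = build_breach_request(email, api_key)
    request = urllib.request.Request(url, headers=headers)
    try:
        with urllib.request.urlopen(request) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # HIBP returns 404 when the address is clean
            return []
        raise
```

A 404 from this endpoint is good news: it means the address doesn’t appear in any breach HIBP knows about.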
“We’ll let you know if your email address and/or personal info was involved in a publicly known past data breach,” Firefox said in a blog post. “Once you know where your email address was compromised you should change your password and any other place where you’ve used that password.”
As part of Have I Been Pwned, Hunt has collected nearly 520 million email addresses to date from real-world breaches. And even though the Firefox Monitor service doesn’t offer anything more than Hunt’s service, Firefox hopes it can leverage its more recognizable brand to bring more awareness to the topic.
Even if your email address isn’t found on databases compiled from existing or past breaches, Firefox Monitor will continue to scan your email for future breaches. If you’re a victim in the future, the service will let you know when it learns about it. By being informed of a hack, users can take proactive measures to ensure that their login credentials are safe.
As a general precaution, users can take steps like using unique and complex passwords for each individual site they visit, changing their passwords regularly, and using a password manager to manage all their passwords. Additional security steps include enabling two-factor or multi-factor authentication or using a security key.
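The “unique and complex” advice above is exactly what a password manager automates. As a small illustration of what that means in practice, the sketch below generates a high-entropy password with Python’s standard library; the length and character set are arbitrary choices for the example.

```python
# Sketch: generating a unique, high-entropy password per site, as a
# password manager does. `secrets` draws from the OS's cryptographically
# secure randomness source, unlike the `random` module.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def make_password(length: int = 16) -> str:
    """Return a random password drawn uniformly from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Generating a fresh password like this for every site means one breached account can’t unlock any other.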
“Firefox Monitor is just one of many things we’re rolling out this fall to help people stay safe while online,” the company said. “Recently, we announced our road map to anti-tracking and in the next couple of months, we’ll release more features to arm and protect people’s rights online.”
Chrome OS update could make switching to tablet mode far easier

A new option for dynamically switching between two different user interfaces for Chrome OS has been spotted in the Chromium Gerrit. The rumored update for the browser-based operating system would reconfigure the layout of tabs and other interface objects when in tablet mode to make them easier to tap, switching back to a more streamlined interface when once again docked.
Building a 2-in-1 that is both a good tablet and a good laptop with only minor reconfiguring is no mean feat. Microsoft’s Surface devices have been some of our favorite convertibles in recent years, but an update to Chrome OS could make a new generation of Chromebooks even more competitive. The revamped user interface appears to bring the Material Design style to Chromebooks, but also to adapt the interface to how the device is being used. With a keyboard and touchpad, you’ll get finer icons and buttons; when you’re tapping away on the touchscreen, selecting what you want becomes that much easier.
Although the particular commit in the Chromium Gerrit hasn’t been merged yet, XDA-Developers was still able to play with the changes. It discovered that the refreshed Chrome OS has a new “Dynamic Refresh” option which lets you switch between the standard Material Design overhaul and a new, touchscreen-friendly version.
[Image: The refreshed Chrome layout on a Pixelbook. XDA Developers]

[Image: The refreshed Chrome layout on a Pixelbook in touchscreen mode. XDA Developers]
The standard refresh makes the tab bar and other UI elements much more compact than current iterations of Chrome OS, giving more space to the main browser window. Although the touchscreen-centric refresh looks similar, it’s more padded, with greater room for error when it comes to using chunky fingertips for selecting small buttons and icons.
As it stands, each of these modes must be switched on manually with specific commands, but when this new version of Chrome OS rolls out, the Dynamic Refresh is expected to happen automatically depending on the state of the Chromebook running it. Docked with a keyboard attached, it will likely offer up the ultra-streamlined Chrome OS UI; detach that keyboard to enter tablet mode, and it can thicken up the UI to better suit touchscreen input.
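The behavior described above boils down to selecting denser or more padded UI metrics based on a single tablet-mode signal. The sketch below is a hypothetical illustration of that idea; the names and pixel values are invented for the example, and Chrome OS’s real implementation lives in the Chromium source.

```python
# Hypothetical sketch of mode-dependent UI density. The metric names and
# values are invented; they only illustrate compact-vs-touch switching.
COMPACT = {"tab_height": 28, "icon_padding": 4}   # docked: keyboard/touchpad
TOUCH = {"tab_height": 40, "icon_padding": 12}    # tablet: chunky fingertips

def layout_for(tablet_mode: bool) -> dict:
    """Return the UI metrics for the current device state."""
    return TOUCH if tablet_mode else COMPACT
```

The point of doing this dynamically is that detaching the keyboard can trigger the switch with no user action at all.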
All of this lends a little extra credence to the longstanding rumor that the second-generation Pixelbook will come in a configuration with a detachable keyboard.
Teaching machines to see illusions may help computer vision get smarter
Do you remember the kind of optical illusions you probably first saw as a kid, which use some combination of color, light, and patterns to create images that prove deceptive or misleading to our brains? It turns out that such illusions — where perception doesn’t match up with reality — may, in fact, be a feature of the brain, rather than a bug. And teaching a machine to recognize the same kind of illusions may result in smarter image recognition.
This is what computer vision experts from Brown University have been busy working on. They are teaching computers to see context-dependent optical illusions, and thereby to hopefully create smarter, more brain-like artificial vision algorithms that will prove more robust in the real world.
“Computer vision has become ubiquitous, from self-driving cars parsing a stop sign to medical software looking for tumors in an ultrasound,” David Mely, one of the Cognitive Science researchers who worked on the project, now working at artificial intelligence company Vicarious, told Digital Trends. “However, those systems have weaknesses stemming from the fact that they are modeled after an outdated blueprint of how our brains work. Integrating newly understood mechanisms from neuroscience like those featured in our work may help making those computer vision systems safer. Much of the brain remains poorly understood, and further research at the confluence of brains and machines may help unlock further fundamental advances in computer vision.”
In their work, the team used a computational model to explore and replicate the ways that neurons interact with one another when viewing an illusion. They created a model of feedback connections of neurons, which mirrors that of humans, that responds differently depending on the context. The hope is that this will help with tasks like color differentiation — for example, helping a robot designed to pick red berries to identify those berries even when the scene is bathed in red light, as might happen at sunset.
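The paper’s actual circuit isn’t reproduced in this article, but one textbook form of the contextual integration it describes is divisive normalization, in which a neuron’s response is scaled down by the pooled activity of its neighbors, so the same input evokes different responses in different contexts. The sketch below shows that mechanism only as an illustrative stand-in, not the Brown team’s model.

```python
# Illustrative sketch of divisive normalization, a simple and well-known
# contextual-integration mechanism (not the Brown team's actual circuit):
# each unit's response is divided by the pooled activity of the whole
# neighborhood, so context changes what a given input "means".
def normalize(responses, sigma=1.0):
    """Divisively normalize unit responses by their pooled sum.

    sigma is a small constant that keeps the denominator nonzero and
    controls how strongly weak inputs are suppressed.
    """
    pooled = sigma + sum(responses)
    return [r / pooled for r in responses]
```

Note how the same unit response of 1.0 comes out smaller when its neighbors are active than when they are silent, which is exactly the context dependence the illusions above exploit.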
“A lot of intricate brain circuitry exists to support such forms of contextual integration, and our study proposes a theory of how this circuitry works across receptive field types, and how its presence is revealed in phenomena called optical illusions,” Mely continued. “Studies like ours, that use computer models to explain how the brain sees, are necessary to enhance existing computer vision systems: many of them, like most deep neural networks, still lack the most basic forms of contextual integration.”
While the project is still in its relative infancy, the team has already translated the neural circuit into a modern machine learning module. When it was tested on a task related to contour detection and contour tracing, the circuit vastly outperformed modern computer vision technology.
How many GPU video ports is too many? The Aorus RTX 2080 packs seven
Nvidia’s new RTX-series graphics cards have only been shipping for a few days, but already aftermarket versions are starting to appear with simply ludicrous configurations. The Aorus GeForce RTX 2080 Xtreme 8G comes with all of the Turing cores, RT cores, and general processing power of a standard (albeit likely overclocked) 2080, but it also comes with a massive three-fan cooling solution and as many as seven video output ports on the back.
Standard RTX 2080 cards ship with five video ports, but the new Aorus-branded card takes that to a new level. Two extra HDMI 2.0b ports bring the HDMI total to three, alongside three DisplayPort 1.4 connectors and a USB-C port with VirtualLink. That makes it possible to output to four different displays at once (and a VR headset), using either three HDMI and a DisplayPort, or three DisplayPort and an HDMI, with the USB-C connector driving the headset.
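As a quick sanity check on the port math above, the sketch below enumerates the distinct ways to pick four outputs from three HDMI ports, three DisplayPort connectors, and the single USB-C port. The four-display limit comes from the article; treating USB-C as a possible fourth display output is an assumption for the sake of the count.

```python
# Counting distinct four-output combinations from the Aorus card's ports:
# 3x HDMI, 3x DisplayPort, 1x USB-C. Whether USB-C counts as a display
# output (rather than only a headset link) is an assumption here.
from itertools import combinations

PORTS = ["HDMI"] * 3 + ["DP"] * 3 + ["USB-C"]

def four_display_combos():
    """Distinct multisets of four ports (order doesn't matter)."""
    return {tuple(sorted(combo)) for combo in combinations(PORTS, 4)}
```

The two configurations the article names (three HDMI plus one DisplayPort, or three DisplayPort plus one HDMI) both appear among the results.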
The cooling system on the new Aorus card is also rather impressive. Ditching Nvidia’s championed two-fan configuration, it uses three 100mm fans sitting atop a direct-touch heat-pipe heatsink that also covers the VRAM and MOSFET chips, all capped with a metal backplate for some minor cooling enhancement and a cleaner look. The fans alternate their rotation (clockwise for the center fan, anticlockwise for the outer two) to maximize airflow and avoid turbulence, resulting in quieter operation.
Should your GPU stay cool enough under low load though, they’ll also turn off entirely, making for an utterly silent card at times.
As you might expect from a graphics card with this branding, it also supports customizable RGB lighting with full support for the Aorus Engine software, and it’s all backed up by a four-year warranty.
What we don’t know at this point is what clock speeds to expect from this card, or its price. With all that extra cooling, we’d expect both to come in high, but until Aorus gives us a release date, we’ll be left guessing.



