
Archive for 10 Aug

Meet TIKAD: the gun-toting drone that can aim, fire, and compensate for recoil


Why it matters to you

Sending soldiers on foot into some of the world’s toughest warzones isn’t always the best option. TIKAD’s creators hope this drone could provide an alternative.

Have you ever looked at a drone and thought, “Yeah, that’s kind of neat, but I sure wish it came with some mounted firearms?” If so, you may be interested to hear about the TIKAD: a new drone that’s described by its Florida-based creators Duke Robotics as the “Future Soldier.”

Intended for military deployment, TIKAD is an unmanned aerial vehicle (UAV) designed to replace boots on the ground in some of the toughest warzones on the planet. It weighs 110 pounds (50 kg), can fly at altitudes anywhere from 30 to 1,500 feet, and — oh yes — did we mention that it can sport a plethora of semi-automatic weapons, plus a 40mm grenade launcher for good measure?

“As a former Special Mission Unit commander, I have been in the battlefield for many years, and more than once I hoped that a different solution other than sending in the troops existed,” CEO Raziel Atuar told Digital Trends. “At Duke Robotics, we wanted to create something that would be a game changer in future battles, [and] that could save lives of troops, as well as uninvolved civilians, in the combat zone.”

Atuar says that, as classic army-versus-army confrontations on the battlefield have become increasingly rare, tools intended for guerrilla warfare have become more important than ever. That’s where the requirement for a drone capable of firing small arms from the air comes into play. TIKAD features plenty of smart tech, but its biggest selling points are its high accuracy and a proprietary robotic stabilization system that lets it absorb the recoil of its mounted gun as it’s fired. (This stabilization platform can also be used as a standalone unit for snipers on the ground.)

“TIKAD is ready to be delivered,” Atuar continued. “We are in the process of implementing an initial order from the Israeli Ministry of Defense, and we are in contact with selected governments as potential customers. For obvious reasons, our government customers are highly sensitive regarding this type of information, and it is up to them to decide if and when to share more information [about where these drones are deployed].”

And, just like that, we’re one step closer to the opening montage of a Terminator movie!




10 Aug

Man responsible for strong password requirements regrets his 2003 guidelines


Why it matters to you

Here is a look into how the current password requirement system came to be and why the rules are now changing.

The man responsible for your requirement to use a combination of lower-case letters, upper-case letters, numbers, and symbols in passwords at least eight characters long is now regretting his advice. Former National Institute of Standards and Technology manager Bill Burr recently admitted in an interview with The Wall Street Journal that his 2003 document about crafting strong passwords and changing them every 90 days was somewhat off the mark.

At the time, he reasoned that users would choose an easily remembered, easily guessed password, likely one stemming from a batch of “a few thousand commonly chosen passwords.” In turn, hackers trying to gain access to user accounts, computers, and so on would try the most likely passwords first. And while services could reject specific passwords because of their common use, Burr suggested what he considered a more secure alternative.

On page 52 of the 2003 document, he clearly states that systems should require a password of eight characters or more, selected from an alphabet of 94 printable characters. That password should also include at least one uppercase letter, one lowercase letter, one number, and one special character. Systems should even check candidate passwords against a dictionary to keep users from including familiar words, and should bar them from using their login name as the password.
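For illustration only, here is a minimal sketch of what a checker enforcing those 2003-era composition rules might look like. The function name, the tiny stand-in dictionary, and the exact-match dictionary test are our own simplifications rather than anything taken from the NIST document.

```python
import string

# Hypothetical mini-dictionary standing in for the "common words" list the
# document mentions; a real system would load a much larger wordlist.
COMMON_WORDS = {"password", "letmein", "qwerty", "monkey", "dragon"}

# The 94 printable ASCII characters: 52 letters + 10 digits + 32 punctuation marks.
PRINTABLE_94 = set(string.ascii_letters + string.digits + string.punctuation)

def meets_2003_rules(password: str, login_name: str) -> bool:
    """Rough sketch of the 2003-era composition rules described above."""
    if len(password) < 8:                                    # eight characters or more
        return False
    if not set(password) <= PRINTABLE_94:                    # drawn from the 94-character alphabet
        return False
    if not any(c.isupper() for c in password):               # at least one uppercase letter
        return False
    if not any(c.islower() for c in password):               # at least one lowercase letter
        return False
    if not any(c.isdigit() for c in password):               # at least one number
        return False
    if not any(c in string.punctuation for c in password):   # at least one special character
        return False
    if password.lower() in COMMON_WORDS:                     # simplified dictionary check (exact match only)
        return False
    if password.lower() == login_name.lower():               # no using the login name as the password
        return False
    return True

print(meets_2003_rules("P@ssw0rd1", "jdoe"))  # True, yet still easy to guess
```

Note that a password like P@ssw0rd1 sails through every one of these checks, which is exactly the weakness the next paragraphs describe.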

The problem with this method is that users tend to have patterns when creating a password. For instance, they may take a familiar word, such as “password,” and alter it slightly to meet the requirements. The result could be something like P@zzwurd2017, which isn’t all that original, and something we conjured up in a matter of seconds.

Right now, systems give users a thumbs-up when they follow the current standard, and even provide a visual meter indicating the password’s strength against hacking. But users are then asked, or outright forced, to change their password every 90 days, so many keep the same base word and simply alter a few characters to satisfy the update process (P@ssw0rd2K17, for example).

When the guidelines were created in 2003, they were not based on collected data. System administrators would not cough up any passwords for examination, so Burr turned to a white paper published in the 1980s — long before the general American population purchased a modem and jumped onto the world wide web using Netscape or America Online.

Fast forward to 2017, and the National Institute of Standards and Technology has new guidelines for systems to follow. Authored by technical adviser Paul Grassi, they toss out much of what Burr established years ago. Grassi admits that Burr’s system lasted for 14 years, however, and hopes his revised password ruleset lasts just as long. He suggests that systems drop the 90-day password refresh and the requirement for special characters.

Ultimately, the best practice for everyone is to throw out familiar, easily linked ideas, such as the name of your favorite movie or pet. Instead, create a phrase of words that doesn’t make much sense, and does not include spaces. Password managers like LastPass are helpful too when you are required to remember a multitude of unique passwords across dozens of services.
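As a rough illustration of that advice, here is a minimal passphrase-generator sketch. The wordlist is a tiny placeholder of our own (a real generator would draw from thousands of words), and nothing here comes from the NIST guidelines themselves.

```python
import secrets

# Tiny placeholder wordlist; a real generator would use thousands of words.
WORDLIST = ["cactus", "violin", "harbor", "pickle", "sunset", "walrus",
            "magnet", "copper", "riddle", "lantern", "breeze", "tundra"]

def random_passphrase(num_words: int = 4) -> str:
    """Join several randomly chosen words with no spaces, per the advice above."""
    return "".join(secrets.choice(WORDLIST) for _ in range(num_words))

print(random_passphrase())  # e.g. "walrusharborpicklebreeze"
```

A phrase like that is long, easy to remember, and avoids the predictable substitution patterns described above.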




10 Aug

MIT’s new radio tech can monitor your sleep quality from afar


Why it matters to you

This sleep-tracking device can monitor how much shut-eye you’re getting in the most non-invasive way possible.

From accessories you attach to your mattress to smart, sensor-studded headphones, there is no shortage of devices out there that promise to monitor (and thereby help improve) your sleep in some way. A new project created by researchers at the Massachusetts Institute of Technology and Massachusetts General Hospital aims to help the roughly 50 million people in the United States who suffer from some form of sleep disorder — including those with diseases such as Alzheimer’s and Parkinson’s, which can disrupt our ability to catch some well-earned shut-eye. The difference between this and other approaches? This project involves no physical contact with users.

Instead, it builds on previous work carried out by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which showed that a device capable of emitting and receiving low-power radio frequency (RF) signals can remotely measure a person’s vital signs. This is achieved by analyzing the frequency of the waves, which changes slightly as they reflect off the body and can reveal information such as pulse and breathing rate.

The researchers have now built a smart, Wi-Fi-like box that sits in your room and uses these body-signal insights, processed by deep-learning neural networks, to track your sleep.
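The team’s actual system pairs custom radio hardware with deep neural networks, but the basic idea of pulling a slow, periodic vital sign out of a reflected signal can be sketched in a few lines. Everything below is our own simplified illustration; the 10Hz sample rate and the synthetic “chest motion” signal are assumptions, not the researchers’ code or data.

```python
import numpy as np

# Synthetic stand-in for a reflected-signal trace: chest motion at 0.25Hz
# (15 breaths per minute) plus noise, sampled at 10Hz for two minutes.
fs = 10.0
t = np.arange(0, 120, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)

def breathing_rate_bpm(x, fs):
    """Estimate breaths per minute from the dominant low-frequency spectral peak."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)   # plausible breathing band, roughly 6-42 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

print(round(breathing_rate_bpm(signal, fs)))  # ~15
```

The real system works with far messier signals and uses its neural networks to go well beyond a single spectral peak, classifying sleep stages rather than just counting breaths.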

“This work uses wireless signals and advanced AI algorithms to know when you are dreaming, when your brain is consolidating memory, and more generally, your sleep stages,” Mingmin Zhao, a Ph.D. student who worked on the project, told Digital Trends.

In tests involving 25 healthy volunteers, the researchers found that the technology was 80 percent accurate, a number that is comparable to the accuracy of ratings determined by sleep specialists based on more invasive EEG sensors. Next up, the researchers hope to use the technology to explore how certain neurological diseases affect sleep.

“We are currently working with medical doctors to understand diseases and track response to treatments with this device,” Zhao continued. “There is definitely value in commercialization of the system. We are interested in doing so.”

The work was presented Wednesday at the International Conference on Machine Learning in Sydney, Australia. A paper describing the project is also available online.




10 Aug

T-Mobile reveals Smartpicks and its first phone, the REVVL


T-Mobile has introduced a new program to help buyers find high-quality, low-cost devices.

Much as I love to talk about how awesome the latest high-end phones are, it’s becoming more and more clear that most people don’t push their phones hard enough to justify that extra power and extra cost. Folks just want something they can use to make calls, send messages, browse Facebook, and take photos. While the latest flagship will do all those, a lot of mid-range phones will do the job just fine without costing an arm and a leg.


To that end, T-Mobile has announced its “Smartpicks” program to help customers find their next low-cost device. The Smartpicks lineup includes the Samsung Galaxy J3 Prime, the LG K20 Plus, and the ZTE ZMAX PRO. These may not be household names like the Galaxy S8 or LG G6, but they’ll do the job just fine.

T-Mobile also announced its own phone as one of the Smartpicks. The T-Mobile REVVL features a 5.5-inch HD screen, a 1.5GHz MediaTek MT6738 processor, 2GB of RAM, 32GB of storage that’s expandable via a microSD slot, a 3,000mAh battery, and Android 7.0 Nougat. The rear camera is a 13-megapixel unit, while the front camera is 5 megapixels. There’s also a fingerprint sensor. There was no mention of NFC, so I would not count on it, especially at this price range. The REVVL also supports T-Mobile’s Band 66, allowing for faster download speeds in markets that support that frequency. The phone won’t be blazing fast, but it’ll do just fine for plenty of people. The REVVL will be available in select T-Mobile stores beginning August 10.

T-Mobile is rolling its Smartpicks program into the larger JUMP! on Demand program, and all of its Smartpicks phones will be available for $0 down and between $5 and $8 per month as part of JUMP! on Demand. This program lets users upgrade their phones every 30 days, so someone could try all the available phones for only a few dollars a month. For those who want to skip the JUMP! on Demand plan or just want a nice backup phone, the REVVL will be available to purchase for $125.

Do you plan on purchasing the T-Mobile REVVL or any of the other Smartpicks? Let us know down below!

Learn more about T-Mobile


10 Aug

Dino Frontier review on PlayStation VR



Build a town, attract settlers, and domesticate dinosaurs.

When I stumble onto the location where I’ll build my town, there isn’t much of anything here: a single settler and a dead dinosaur, surrounded by woods and bushes that bear fruit. To call it inhospitable would be a bit of an understatement, I think. But aside from the lack of…well…anything, it’s a good place to start. I send the settler to start chopping down trees for the wood we’ll need for buildings, and I harvest the dinosaur for the meat it can provide us. Soon there is a food depot and a lumber yard.

I am the Mayor here, and soon this will be my town. This is Dino Frontier.

Read more at VRHeads

10 Aug

Moon discovery may expand where we search for alien life


Every discovery we make about the universe has implications for us. Understanding how our solar system formed, and how we got here, helps us figure out how likely it is that we’ll find life on other planets. Even just our corner of the galaxy is vast; anything we can do to help narrow the possibilities of where we can and should search for life beyond our planet is helpful. And that’s why a new study from Rutgers and MIT is so interesting. A team discovered that the moon’s magnetic field survived much longer than previously thought.

The Earth’s magnetic field is what protects us from space radiation. Some of it comes from outside our galaxy, but our sun also emits highly charged particles that would be incredibly harmful if we weren’t protected from them. (That’s one of the serious concerns about sending astronauts to Mars and beyond: How do we shield them from radiation when they’re outside the protection of Earth’s magnetic field?) As far as scientists know, a magnetic field is a key ingredient for life on another planet, but they were under the impression that a body as small as the moon wouldn’t be able to support one for very long. This new study may prove that theory wrong.

To conduct their experiments, the team used moon rocks collected by astronauts on Apollo 15. Objects lose their magnetism when exposed to extreme heat, so Sonia Tikoo, the lead author of the article published in Science Advances, heated the lunar rock to 1,436 degrees Fahrenheit and demagnetized it. She was then able to determine its original magnetization, which was higher than expected.
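As a quick aside of our own (not part of the study), that figure converts to 780 degrees Celsius, which is in the range where common magnetic minerals lose their remanent magnetization (iron’s Curie point, for instance, sits around 770 degrees Celsius).

```python
# Fahrenheit-to-Celsius conversion for the temperature quoted above.
fahrenheit = 1436
celsius = (fahrenheit - 32) * 5 / 9
print(celsius)  # 780.0
```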

Scientists have dated the lunar rock to between 1 billion and 2.5 billion years old. This means that the moon’s magnetic field, which was once as strong as Earth’s, lasted a lot longer than scientists thought; it was still active roughly 2 billion years ago. The moon has no magnetic field today, but scientists are unclear on when it shut down. In comparison, Mars lost the bulk of its magnetic field about 4 billion years ago.

Not only did we just find out more about our own satellite, but this study suggests we should possibly be looking at exomoons as potential hosts for life, in addition to exoplanets. “The question becomes what size planets and moons should we be considering as possibly habitable worlds,” said Tikoo. The answer is clear: We have a lot more work to do.

Source: EurekAlert

10 Aug

Intel will unveil 8th-gen Core processors on August 21st


Intel’s 7th-generation Core processors still feel relatively young, but the company is already poised to talk about their successors. The chip designer has announced that it will premiere its 8th-generation Core CPUs on August 21st, complete with a livestream on Intel’s Facebook page. The company is unsurprisingly shy on technical details, but it promises previews of PCs built on 8th-gen chips as well as a demo from a VR creator. As it stands, there’s already some idea of what to expect.

On the record, Intel has already acknowledged that the 8th-gen chips (aka Coffee Lake) won’t be a radical redesign: they’re still built on a 14-nanometer manufacturing process and have a lot in common with their predecessors. However, both Intel’s early benchmarking tease and various leaks suggest there may be reasons to get excited. More than anything, the focus is on cramming in more cores at similar power levels. Both the Core i5 and Core i7 would move to six cores on the desktop, while you’d see four cores in the Core i3, Pentium, and some low-power laptop chips.

In essence, it should be a continuation of what Intel is doing with the Core i9 and X-series Core i7: it’s adding extra cores both as a foil to AMD’s Ryzen processors (where core count is an advantage) and as an acknowledgment that there are diminishing returns from tweaking familiar architectures and processes. This gives Intel a big boost to performance in multitasking and highly multithreaded apps without having to reinvent the wheel. The real breakthrough should come with Cannonlake, which should move to a far more efficient (and likely faster) 10nm process.

Via: AnandTech

Source: Intel

10 Aug

FaceApp changes your race with its latest selfie-editing filters


FaceApp uses neural networks for realistic-looking changes to your selfie photos. Originally, it had filters to add smiles, change your age, change your gender, or “beautify” your face. Unlike Snapchat’s overlays, FaceApp uses deep learning technologies to change the photo itself. Now, a new update adds race to the mix, enabling users to make themselves look Asian, Black, Caucasian, or Indian.

There has been some outrage over the new filters, with some Twitter users calling them “digital blackface.” This isn’t the first time photographic filters have come under fire, of course. Snapchat’s 420 Bob Marley filter was criticized for a similar lack of racial sensitivity, while its “anime-inspired” lens felt downright racist. FaceApp itself already faced criticism when it released a “hot” filter that basically just made people look more “white.”

The app developer doesn’t feel that the new filters are specifically racist, however. “The ethnicity change filters have been designed to be equal in all aspects,” Yaroslav Goncharov, the app’s CEO and creator, told Engadget in an email. “They don’t have any positive or negative connotations associated with them. They are even represented by the same icon. In addition to that, the list of those filters is shuffled for every photo, so each user sees them in a different order.”

Source: iTunes

10 Aug

Intel plans a test fleet of 100 self-driving cars


Intel isn’t wasting any time now that it officially owns Mobileye. The Mobileye team has unveiled plans to build a fleet of 100 or more self-driving vehicles to conduct tests in its native Israel as well as the US and Europe. They’ll meld Mobileye’s sensor, mapping, and driving technology with Intel’s computing platforms, data center tech, and 5G wireless to make Level 4 autonomous cars (they can do all the driving themselves under defined conditions, but may still ask a human to intervene) that talk to the cloud. They won’t be tied to any one brand — sorry, BMW. As Intel explains, it’s as much about selling the concept as actual experimentation.

The fleet will show would-be customers how self-driving cars behave in real-world circumstances, including mapping and safety features, and will give Intel a better way to talk to regulators. Intel wants to prove that its self-driving tech can work around the world, and that it can tweak its formula to suit what companies want.

It’ll take a while before you see the fruits of this effort. The first vehicles don’t deploy until later in 2017, and the magic 100 mark is coming “eventually.” And of course, any customers sold on the tech will take a while after that to make use of it. Still, it’s an important step toward a widely available platform for self-driving cars.

Source: Intel