Nvidia saw a revenue spike from sales of graphics cards to crypto miners
One of Easy Crypto Hunter’s six-card GTX 1080Ti mining rigs. Jon Martindale / Digital Trends
Despite Nvidia originally downplaying the impact of cryptocurrency miners on the availability and pricing problems facing the graphics card industry over the past year, new financial data suggests the impact may have been substantial. In the last quarter alone, Nvidia estimated that almost 10 percent of its revenue came from cryptocurrency miners, and that doesn’t even factor in cards bought by miners under the guise of being high-end gamers.
We’ve been covering the unprecedented spikes in graphics card pricing for the past year and have documented how most mainstream cards were priced far beyond what gamers could afford to pay. Cryptocurrency miners were blamed for buying up much of the stock, and Nvidia’s new financial report suggests that could well have been the case. With nearly $290 million in sales to miners in the last quarter alone, Nvidia has made a lot of money from miners buying its graphics cards.
Overall, Nvidia’s revenue reached $3.21 billion in the first quarter of the 2019 financial year, a rise of 66 percent over a year ago. That can be broken down into several key areas. Hardware sold for rendering purposes made up $250 million, automotive graphics purchases topped $145 million, and datacenter sales hit a new high of $700 million. “OEM and IP” sales accounted for $387 million, while gaming still made up the largest portion of Nvidia’s revenue stream, reaching $1.723 billion, though Nvidia did say that figure may not be entirely accurate.
Nvidia’s CEO, Jensen Huang, said in an interview that there is no way to account for cryptocurrency miners who purchased gaming cards from retailers, as there is no requirement that buyers disclose what they intend to use the cards for. There’s also no telling how many ‘gamers’ also used their cards for mining in their downtime, as some companies have previously suggested they do.
“There is no way to tell [how many gamer cards were sold to miners] because a lot of gamers when they aren’t playing games, they’re doing a little mining,” Huang told MarketWatch. “The reason why they bought it is for gaming, but while they’re not gaming — while they’re at school, at work, or in bed — they’ll turn it on and do a little mining. There’s nothing wrong with that, I think that’s fine, but the real reason why they bought it is for gaming.”
Gamers aren’t convinced by Windows Mixed Reality headsets
Windows Mixed Reality (WMR) headsets, such as Acer’s blue-painted option, are not setting the world on fire, at least when it comes to gamers. The latest Steam hardware survey shows that WMR accounts for just 5 percent of the virtual reality headsets in use on the platform. That’s a lower market share than it had a few months ago.
Although marketed as something new that blends the real and virtual worlds, Windows Mixed Reality is, for now at least, a virtual reality platform — it’s just Microsoft’s interpretation of it. While we found early hardware passable, we’ve not come across a headset that really impressed us just yet. Apparently, our feelings are mirrored by the gaming community, which still unequivocally prefers existing standouts like the HTC Vive and Oculus Rift.
To the credit of Microsoft and its partners, WMR hardware did steal a portion of the VR market when it launched, climbing above 5 percent within a couple of months of the platform’s debut last year. However, those numbers have remained pretty stagnant, and lately they’ve even fallen, dropping around a third of a percentage point since the start of February.
In comparison, the HTC Vive and Oculus Rift have battled back and forth with one another with around 45 percent share apiece, though the Rift currently has a little more at 47.5 percent. Oculus also enjoys a slightly bolstered position in the market, with just under 2 percent of all VR users still running the old DK2 developer headset from time to time.
All of the major VR headsets have been falling in price over the past six months, with the Rift now sitting at $400, the Vive at $500, and the PSVR at $210. Those prices will have helped them all remain competitive, though WMR hardware has fallen in price right along with them. Some headsets can now be found for close to $200, making it one of the more affordable VR platforms, but that doesn’t appear to have enticed Steam gamers to opt for it over the more established competition.
Perhaps what WMR needs, as MSPowerUser suggests, is a concerted push for the fledgling technology from Microsoft. If the software giant hopes that its new platform will catch on in an increasingly saturated VR space, it may need to do just that.
Google Duplex sounds human when it, um, calls to book appointments
For some people, the deciding factor in choosing one hair salon over another is simply whether they can book an appointment online. At the Google I/O developer conference, Sundar Pichai, the company’s CEO, explained how its Google Duplex technology can help the phone-shy avoid having to actually speak to someone to make an appointment.
Around 60 percent of U.S. businesses don’t have online booking systems, according to Google. It’s been working on a way for users to give a time and date to Google Assistant, which can then make the call and set up an appointment. “It brings together all our investments over the years in natural language understanding, deep learning, text to speech,” said Pichai.
Duplex scheduling a hair salon appointment
It can be an incredibly complex interaction. Using an actual call between Google Assistant and a hair salon, Pichai showed how it would work. The user asked the Assistant to make an appointment on Tuesday morning “anytime between 10 and 12.” During the call, Google Assistant sounded convincingly human, tossing in “ums” and “uhs” in a woman’s voice. “Give me one second,” the salon employee said. “Mm-hmm,” the Assistant responded. After some back and forth, the employee made an appointment for Lisa at 10 a.m. and sent a confirmation notification to the phone’s screen. There was no appointment available at noon — the time the A.I. initially requested.
Duplex calling a restaurant
For the next call, Google Assistant had to overcome a bit of a language barrier. The restaurant employee thought it was calling to make a reservation for seven people, not for the seventh. When she informed the A.I. caller that the restaurant didn’t reserve tables for parties of fewer than seven, the A.I. asked how long the wait time would be. When the employee assured the Assistant it wouldn’t be a long wait on a Wednesday, it replied, “Oh, I gotcha, thanks.”
This technology isn’t quite ready at the moment, but Google is rolling out what Pichai called an “experiment.” During holidays, restaurants and businesses often have different hours. Google plans to use its Assistant to make one phone call to a bunch of businesses, then update their holiday hours for web searches. That way, restaurants can avoid getting dozens of calls about whether or not they’re open on Memorial Day, for example. Pichai said this experiment would start in a few weeks, so if it’s not working for that holiday, check again on the Fourth of July.
As exciting as it may be to have a remarkably human-sounding bot doing all your chores on your behalf, the Duplex announcement catalyzed quite a bit of controversy around how it would identify itself on the phone. After all, wouldn’t you want to know if you were speaking to a bot rather than a live being? Now, Google has weighed in, noting that it will include disclosures in the feature.
“We understand and value the discussion around Google Duplex — as we’ve said from the beginning, transparency in the technology is important,” a Google spokeswoman said in a statement. “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”
Updated on May 11, 2018: Google is designing Duplex with built-in disclosure that people are talking to a bot.
Acer Switch 7 Black Edition vs. Microsoft Surface Book 2 13
As yet another indication of their willingness to innovate, Windows PC makers have been working hard to squeeze increasingly powerful components into modern 2-in-1s. One example is the discrete GPU, which has made its way into a handful of convertible 2-in-1s such as the Microsoft Surface Book 2 13, promising significantly better performance over relatively slow Intel integrated graphics. Now, Acer has taken steps to make the detachable tablet 2-in-1 more powerful with its Switch 7 Black Edition.
Which 2-in-1 most benefits from an infusion of faster graphics, the Acer Switch 7 Black Edition or the Microsoft Surface Book 2 13? We took an in-depth look to find out.
Design
Microsoft’s Surface Book 2 is a standout in terms of innovative design, with its tear-off display that houses the machine’s main PC components and turns into a surprisingly thin and light tablet. In terms of its aesthetics, it fits perfectly within the Surface lineup, with a futuristic yet conservative silver-grey color. It’s all wrapped up in a solid, all-magnesium alloy build that exudes quality. For your average 2-in-1, the complete package is a little thick (0.90 inches along the rear, thanks to the “fulcrum hinge” required to balance the PC components packed into the display) and slightly heavy at 3.38 pounds.
The Switch 7 Black Edition takes a completely different approach to achieving a modern and elegant aesthetic. It sports an all-black design that’s stealthy and attractive without being ostentatious. It’s also pretty chunky for a tablet, with very large bezels flanking its display that add considerably to its width and height. The metal-and-glass design adds to the weight, coming in at 2.6 pounds for the tablet alone and 3.53 pounds with the keyboard attached. The tablet has a (slightly overcomplicated) kickstand that makes it easier to prop up on a flat surface, while the Surface Book 2’s tablet is significantly smaller and lighter, and so easier to carry around on its own.
They’re both well-built machines, but the Surface Book 2 works better as both a traditional notebook and a tablet than does the Switch 7.
Performance
Microsoft built the Surface Book 2 around the latest Intel 8th-gen CPUs, topping out at the very fast Core i7-8650U. These quad-core processors are speedy when running demanding loads, and they’re also efficient when performing lighter tasks. Packed away in the keyboard base is the Surface Book 2’s trump card: an Nvidia GeForce GTX 1050 GPU that’s great for entry-level gaming, handling modern titles at 1080p and moderate graphical detail. The PCIe SSD is also very fast, and overall the Surface Book 2 is a well-performing 2-in-1 indeed.
Acer’s approach was similar in terms of the CPU, using the slightly slower Core i7-8550U that’s still a quick and efficient processor. Then, it packed in the Nvidia GeForce MX150 entry-level discrete GPU that’s best for esports titles and modern games at lower resolutions and graphical detail. That’s impressive for a tablet, but it doesn’t put the Switch 7 Black Edition in the same class as the Surface Book 2. The slower SATA SSD also holds the tablet back a bit.
Both 2-in-1s have 13.5-inch displays in the productivity-friendly 3:2 aspect ratio, with the Surface Book 2’s being sharper at 3,000 x 2,000 (267 PPI) compared to the Switch 7’s 2,256 x 1,504 (201 PPI). Both provide excellent contrast and brightness, although the Surface Book 2 is one of the best we’ve tested in these respects, and their color gamuts are just about equally wide. The Switch 7’s colors are slightly more accurate, though, and its gamma is perfect. You can’t go wrong with either display.
The Surface Book 2 is the faster machine by all accounts, even while noting that the Switch 7 is impressive for a detachable tablet.
Portability
The Surface Book 2 sports a very specific 2-in-1 design where the display is heavier than usual and its fulcrum hinge makes for an extreme wedge shape. That makes it thicker in the rear than the typical notebook (although it also adds a nice curved edge for carrying it around), and the extra battery capacity and electronic components on the keyboard base add to its weight. Even so, the Surface Book 2 is lighter than the very chunky Switch 7 Black Edition, although the latter is thinner even when its detachable keyboard is included. Call it a wash in terms of tossing these 2-in-1s in a backpack and heading out — neither is the thinnest or lightest 2-in-1 you’ll find, but they’re not unreasonably large or heavy, either.
The real difference in portability is how long these two machines can last away from a plug. The Surface Book 2 packs in a whopping 70 watt-hours of capacity between the tablet and the keyboard base, compared to the Switch 7 Black Edition that houses a skimpy 37 watt-hours. Simply put, the Surface Book 2 demolishes the Switch 7 in terms of battery life, whether you’re working hard or just watching movies. For example, Microsoft’s 2-in-1 lasts for almost 17 hours when playing a local video compared to the Acer’s barely more than six hours, putting the Surface Book 2 at the top of the heap and the Switch 7 closer to the bottom.
Conclusion
It might seem strange to compare these two very different machines, but the rationale is simple: They’re both attempts to pack faster graphics into the 2-in-1 format for enhanced gaming and creativity performance, and they take very different approaches that make them an interesting contrast.
In terms of price, the Switch 7 starts at $1,700 and doesn’t offer any other configuration options in the U.S., which limits your choices. Meanwhile, the Surface Book 2 ranges from $1,200 all the way up to $3,300. There’s no one-for-one comparison because of the different components offered in each, but it should be noted that you can’t get a Surface Book 2 with a discrete GPU for less than $2,000.
The Surface Book 2 13 is the better choice for most people, though, providing a faster overall experience and more 2-in-1 flexibility. All the while, it’ll weigh you down slightly less, and its much longer battery life means you’re far less likely to need to carry around a power supply.
World domination, phase two: Facebook ponders its own cryptocurrency
After forming an internal team dedicated to blockchain technologies, Facebook is reportedly jumping into cryptocurrency with both feet. Citing anonymous sources close to Facebook, Cheddar claims the social media giant is considering the release of its own virtual token for electronic payments.
“They are very serious about it,” an anonymous source told Cheddar.
This news comes just days after Facebook’s internal blockchain team went public. Headed up by Messenger chief David Marcus, the new team will be exploring how blockchain technologies could be used to improve Facebook.
“Like many other companies Facebook is exploring ways to leverage the power of blockchain technology,” a Facebook spokesperson told Cheddar. “This new small team will be exploring many different applications. We don’t have anything further to share.”
It’s important to point out that Marcus is a cryptocurrency evangelist, as evidenced by his early investments in Bitcoin. He also recently joined the board of Coinbase, a popular cryptocurrency exchange. When asked if Facebook would be integrating cryptocurrency into its apps anytime soon, Marcus mentioned that he thinks cryptocurrencies have ample room to improve their effectiveness.
“Payments using crypto right now is just very expensive, super slow, so the various communities running the different blockchains and the different assets need to fix all the issues, and then when we get there someday, maybe we’ll do something,” Marcus said in an interview with CNBC.
Let’s put all this in context. This is big news if it turns out that Facebook is serious about developing its own cryptocurrency. Given Facebook’s massive number of daily active users, a “Facebook Coin” would have the potential to become the premier cryptocurrency. Creating and controlling a universal currency could put Facebook in a unique position going forward.
With Facebook already under congressional scrutiny for its privacy missteps, developing its own cryptocurrency to rival Bitcoin, Ethereum, and even the U.S. dollar could attract the watchful eye of federal regulators.
Not to mention that if Facebook does in fact intend to develop its own cryptocurrency, that intent casts the recent expulsion of all cryptocurrency-related advertisements from the platform in a particularly nefarious light.
From drones to bionic arms, here are 8 examples of amazing mind-reading tech
Elon Musk is a firm believer that brain-computer interfaces will be a big part of how we interact with computers in the future. But make no mistake: Mind-reading machines are here already. As science fiction writer William Gibson has noted, “The future is already here — it’s just not evenly distributed.”
Without further ado, then, here are eight examples of amazing mind-reading tech being explored in some of the world’s most exciting research labs.
Mind-reading hearing aids
Hearing aids are amazing inventions, but they run into problems in certain scenarios, such as crowded rooms where multiple people are speaking at the same time. One possible solution? Add in a dose of mind-reading.
That’s the broad idea behind the so-called “cognitive hearing aid” developed by researchers at the Columbia University School of Engineering and Applied Science. The device is designed to read brain activity to determine which voice a hearing aid user is most interested in listening to, and then focus in on it. It’s still in the R&D phase, but this could be a game-changer for millions of deaf or hard-of-hearing people around the world.
“Working at the intersection of brain science and engineering, I saw a unique opportunity to combine the latest advances from both fields, to create a solution for decoding the attention of a listener to a specific speaker in a crowded scene which can be used to amplify that speaker relative to others,” Nima Mesgarani, an associate professor of electrical engineering, told Digital Trends.
Future interrogation techniques
Want an idea of what future interrogation scenarios might look like? Researchers at Japan’s Ochanomizu University have developed artificial intelligence that’s capable of analyzing a person’s fMRI brain scans and providing a written description of what they have been looking at. Accurate descriptions can extend to the complexity of “a dog is sitting on the floor in front of an open door” or “a group of people standing on the beach.”
Ichiro Kobayashi, one of the researchers on the project, said that there are no plans to use it as the basis for a supercharged lie detector… just yet, at least. “So far, there are not any real-world applications for this,” he told Digital Trends. “However, in the future, this technology might be a quantitative basis of a brain-machine interface.”
Another project from neuroscientists at Canada’s University of Toronto Scarborough was able to recreate the faces of people that participants had previously seen.
Next-gen bionic prostheses
Bionic prostheses have made enormous strides in recent years — and the concept of a mind-controlled robot limb is now very much a reality. In one example, engineers at Johns Hopkins built a successful prototype of such a robot arm that allows users to wiggle each prosthetic finger independently, using nothing but the power of the mind.
Perhaps even more impressively, earlier this year a team of researchers from Italy, Switzerland, and Germany developed a robot prosthesis which can actually feed sensory information back to a user’s brain — essentially restoring the person’s sense of touch in the process.
“We ‘translate’ information recorded by the artificial sensors in the [prosthesis’] hand into stimuli delivered to the nerves,” Silvestro Micera, a professor of Translational Neuroengineering at the Ecole Polytechnique Fédérale de Lausanne School of Engineering, told Digital Trends. “The information is then understood by the brain, which makes the patient feeling pressure at different fingers.”
Early warnings for epileptic seizures
For people with epilepsy, seizures can appear to come out of nowhere. Unchecked, they can be extremely dangerous, as well as traumatic for both the sufferer and those around them. But mind-reading tech could help.
Researchers at the University of Melbourne and IBM Research Australia have developed a deep learning algorithm which analyzes the electrical activity of patients’ brains and greatly improves seizure prediction.
“Our hope is that this could inform the development of a wearable seizure warning system that is specific to an individual patient, and could alert them via text message or even a fitbit-style feedback loop,” Stefan Harrer, an IBM Research Australia staff member who worked on the recent study, told Digital Trends. “It could also one day be integrated with other systems to prevent or treat seizures at the point of alert.”
Treating impulsive behavior
In not dissimilar work, researchers from Stanford University School of Medicine have developed mind-reading tech that could be used to moderate dangerously impulsive behavior.
Their system watches for a characteristic electrical activity pattern in the brain which occurs prior to impulsive actions, and then applies a quick jolt of targeted electricity. (No, it’s not as painful as that makes it sound!)
“This is the first example in a translatable setting that we could use a brain machine interface to sense a vulnerable moment in time and intervene with a therapeutic delivery of electrical stimulation,” Dr. Casey Halpern, assistant professor of neurosurgery, told Digital Trends. “This may be transformative for severely disabling impulse control disorders.”
Controlling virtual reality
Imagine if it was possible to navigate through a virtual reality world without having to worry about any handheld controller. That’s the idea behind a project by tech company Neurable and VR graphics company Estudiofuture. They’re busy developing the technology that will make brain-controlled virtual reality a… well, real reality.
Neurable’s custom headset monitors users’ brain activity using head-mounted electrodes to determine their intent. While there are limitations (it’s not ideal for typing or navigating menus), it could nonetheless be invaluable for making fields like VR gaming even more immersive than they already are.
Mind-reading drones
When we control a vehicle, it’s important that our ability to manipulate its controls keeps pace with our ability to perceive potential obstacles. In other words, we see something; we process it; our brain tells our hands to turn the wheel. Wouldn’t it be a whole lot easier if we just cut out the middleman?
That’s the concept behind neural interfaces which make it possible to steer drones (or even swarms of drones) using nothing more than our thoughts. Back in 2016, the University of Florida made headlines when it organized the world’s first ever brain-controlled drone race. Participants donned electroencephalogram headsets powered by brain-computer interface (BCI) technology, and then flew drones around a course using only their brainwaves.
While there’s still a way to go, this could potentially be a useful way of rethinking how future vehicles are piloted. Speaking of which…
The brainy way to drive a car
So you’ve got a new possible means of controlling a vehicle using brainwaves, but it’s not quite ready for prime time just yet. What do you test it on? Driving a car, of course — with the passengers inside. At least, that was the basis for an intriguing (if terrifying) experiment carried out by carmaker Renault late last year.
The company recruited three willing participants and gave them the opportunity to work together to mentally pilot a modified Renault Kadjar SUV. One person controlled the car’s left turns, another controlled its right turns, and the third handled its acceleration.
No, this is unlikely to make it to our roads any time soon, but it’s certainly a memorable tech demo. Even if, quite frankly, we’d rather walk to pick up our groceries!
After the San Bernardino iPhone fiasco, lawmakers introduce the Secure Data Act
Lawmakers introduced the Secure Data Act on Friday, a new bill that would prevent law enforcement and surveillance agencies from forcing companies to build backdoors into their products and services. The bill was presented by U.S. Representatives Zoe Lofgren (D-Calif.) and Thomas Massie (R-Ky.), along with four co-sponsors.
“U.S. intelligence and law enforcement agencies have requested, required, and even sought court orders against individuals and companies to build a ‘backdoor,’ weakening secure encryption in their product or service to assist in electronic surveillance,” Lofgren said in a press release.
Why is this bill needed? A prime example would be the fiasco between the FBI and Apple over an iPhone 5C. The FBI recovered the phone from one of the shooters in the San Bernardino attack at the end of 2015 but couldn’t unlock the device. After turning to the National Security Agency to break into the phone without success, the FBI demanded that Apple create a version of iOS containing a backdoor to install on the device. Agents could then bypass the phone’s 10-attempt PIN limit.
Apple refused.
“In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession,” Apple CEO Tim Cook said. “The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.”
The battle grew ugly, incorporating a court order against Apple under the All Writs Act of 1789 and pressure by the U.S. Department of Justice. Apple offered four methods to access the iPhone 5C data, but the FBI instead chose to request that Apple develop malware for this one specific device, granting access to the phone’s contents.
Eventually, the government dropped its court case against Apple after the FBI hired hackers to create a tool that exploited a zero-day vulnerability in iOS. With the tables turned, Apple wanted to know how the FBI cracked the iPhone. But even lawsuits filed under the Freedom of Information Act couldn’t persuade federal judge Tanya Chutkan to release the details; she cited the risk of the tool being stolen and of the third-party hackers becoming targets.
“FBI officials did not pursue available technical solutions to access Farook’s iPhone because the FBI preferred obtaining a precedent-setting court judgement compelling Apple to weaken their product encryption,” Lofgren said on Friday. “It is well-documented that encryption backdoors put the data security of every person and business using the products or services in question at risk.”
Lofgren also said that backdoors created for law enforcement and intelligence surveillance are “vulnerabilities available for hackers to exploit.” She pointed to the Recording eXpress call recording suite developed by Nice Systems, which included an undocumented backdoor account. This hidden entry granted hackers full access to the system, allowing them to listen to recorded calls without authorization.
Master JavaScript and learn to code like a pro for only $29!
JavaScript is the language that powers the interactive web; every modern browser, like Chrome and Firefox, runs it, and learning this dynamic language is your ticket to a career in front-end development, building frameworks and libraries. Alongside HTML and CSS, JavaScript is one of the three core languages of the internet. Think of it as the Batman in the internet’s Trinity. Learning it is invaluable, especially if you’re looking for a career in development of any kind.
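If you’ve never written a line of JavaScript, here’s a rough sketch of the kind of thing an introductory course like this typically starts with: wiring a button up to a web page. The element IDs and the greet function below are our own hypothetical examples, not anything from the bundle itself.

```javascript
// A hypothetical first exercise: respond to a click and update the page.
// Assumes the page contains <button id="greet-button"> and <p id="greet-output">.
const button = document.querySelector("#greet-button");
const output = document.querySelector("#greet-output");

// Template literals like this are one of the first modern JavaScript features most courses cover.
function greet(name) {
  return `Hello, ${name}! Ready to learn JavaScript?`;
}

button.addEventListener("click", () => {
  output.textContent = greet("future front-end dev");
});
```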
Learning in a classroom setting can be boring, and taking all the extraneous electives and requirements that colleges and universities impose is costly, time-consuming, and annoying (no, I don’t want to take Drama 1001; I want to learn JavaScript. I don’t care if this year’s musical is “Anything Goes”!). So you need courses you can take on your own time, and courses that start from scratch, especially if you’re not 100% sure about your career path.

Lucky for you, Android Central Digital Offers has the Essential JavaScript Coding Bundle. It features 11 courses on JavaScript and beyond, to which you’ll receive lifetime access. So if you can only fit JavaScript into your spare time, that’s fine; take years if you want. And why pay over $900 for these courses like you would elsewhere? Instead, pay only $29 at Android Central Digital Offers, a savings of 96%!
You’ll learn the ins and outs of JavaScript, from web development using popular frameworks to improving the efficiency of your JavaScript code and building mobile apps with Angular. If you have any interest in learning JavaScript, now is the time to do it, and Android Central Digital Offers is the place to get the courses you need, for only $29.
Whether you want to dive headfirst into JavaScript or just want to dip a toe in the water, don’t go paying over $900 for the courses you need to get going. And why settle for limited-time access? Get lifetime access to the Essential JavaScript Coding Bundle for only $29, only at Android Central Digital Offers!
See at Android Central Digital Offers
Google’s making it easier to understand and manage user data it collects
These changes will go into effect May 25.
Over the past couple of weeks, you’ve likely gotten emails from a number of websites and apps you use regarding updates to their privacy policies. Europe’s new General Data Protection Regulation (aka the GDPR) soon goes into effect to give people more rights over their online data, and Google recently shared all of the steps it’s taken to ensure it complies with the new law.

First and foremost, Google is updating the way its privacy policy is presented. Although the policy itself remains the same, it’s now much easier to understand. You can browse the policy by category, the language is clearer, and Google has added videos and images to give you a visual sense of what you’re reading.
Additionally, Google is expanding the controls it gives users for managing the way their data is collected and handled –
As part of our GDPR compliance efforts, we’ve improved both the controls and the clarity of information in My Account so that people are better informed about how and why their data is collected.
Being able to download and transfer copies of your data is another big point of the GDPR, and Google is addressing this head-on, too. The company’s data download tool already allows you to download and save your info from Photos, Drive, Calendar, Play Music, and Gmail, but Google notes it’s expanding this to additional services. Similarly, a tool will be added for scheduling downloads, and the Data Transfer Project on GitHub makes it easier for app developers to allow users to transfer their account info to another service.
Along with all this, Google also announced that it’s bringing its Family Link parental control suite to the EU, is giving publishers more tools for managing ads on their sites, and has –
Now further improved our privacy program, enhancing our product launch review processes, and more comprehensively documenting our processing of data, in line with the accountability requirements of the GDPR.
All of these changes will go into effect on May 25, 2018.
Learn more about what’s changing
Icon Pack Studio means never having gaps in your app drawer again

App drawers have always been a mess, and Icon Pack Studio is the closest anyone’s come to fixing that.
Adaptive icons and icon packs try to make our app drawers and docks look uniform, consistent, and beautiful, but they both have their limits. Where they fail, however, Icon Pack Studio not only succeeds but thrives. Whether you just want your apps to have a consistent look and feel or you want to create an icon pack that perfectly suits your budding themes, Icon Pack Studio is a fantastic tool to keep in your toolkit.
Even the best icon packs leave gaps. There are millions of apps in Google Play, to say nothing of third-party app stores, and it’s impossible for icon makers to make specialized icons for them all in their packs. Icon masks help smooth over the gaps an icon pack can leave in your app drawer, and Icon Pack Studio helps users make an icon mask just the way they want it.
You go through the basic features when initially setting up your icon pack in Icon Pack Studio: picking a shape, a stroke, how big or small you want the logos in your icons to be, giving everything a color, adding some basic effects. It’s once you’ve named your pack that you can get into the full editor and start doing some really cool things with your icon pack:
- Don’t want your icon pack to have a shape behind it? Resize the Background to 0 and you’ll have a sweet monocolor icon pack, like a color-changeable Whicons pack.
- Want see-through icons? The compositing options for Logo allow you to make semi-clear or completely clear logos in your icons.
- Want your icons to match your theme? Icon Pack Studio can pull colors from your wallpapers — just like Action Launcher’s Quicktheme — or even pull colors from your original app icons, making the Facebook app blue, Hangouts green, and so on. Pulling the app colors isn’t always perfect, but the wallpaper colors are usually pretty dead on.
- An update this week also gave Icon Pack Studio a new option for color-picking: hex color input, so that you can match your colors precisely with other elements of your theme.
- Use textures to create unique metallic packs, camo packs, and geometric packs. The scalloped pattern makes for a cool mermaid icon pack. Many of the FX like this are part of the Material Bundle, a $2.49 upgrade that turns Icon Pack Studio up to 11.
Once you’ve made an icon pack, you still need to apply the icon pack. If you’re using Smart Launcher 5 you can do this pretty much instantly — after all, Icon Pack Studio is made by the Smart Launcher team. If you’re using another launcher, Icon Pack Studio will export your icon pack as a separate app that can be installed and applied like a regular icon pack.
The export and install process is fairly painless, though you can only have one exported icon pack installed at a time, as every pack from Icon Pack Studio exports and installs as an app with the same name: Exported Icon Pack Studio. Since most launchers can only use one icon pack at a time, that’s not really a problem, but if you bounce back and forth a lot between themes, be prepared to come back and re-export each time you switch — or consider switching to Smart Launcher.
If you’re a themer who likes to share your setups with others, Icon Pack Studio also got a heck of a lot more alluring with its last update, which allows you to share the settings of one of your packs with someone else who has Icon Pack Studio, so they can import and apply it in a few easy clicks. We’ll cover that next week in our new Deadpool theme!

Icon Pack Studio is a great tool for building icon packs that perfectly pair with your wallpapers, and it’s also a great way for Smart Launcher to improve its functionality and image as a theming launcher. Whether you just want an app drawer that is completely consistent or you want icons that match each and every theme perfectly, Icon Pack Studio is for you.
Download Icon Pack Studio (Free, $2.49)